Choosing the forms and methods of forest use must be based on an assessment and forecast of the current state of forests, taking into account probable trends of their formation (or restoration), grounded in an analysis of the present physical-geographic conditions of a particular territory. An efficient method for this is ecological cartographic monitoring of forest use, with the compilation of a series of ecological-geobotanical maps and rapid schematic maps of the state of forest communities at particular moments between forest inventory (taxation) periods, which are often 15-25 years apart (sometimes more). This work considers the theoretical aspects of the discussion on establishing new methodological approaches to the organization of forestry as a whole. With such approaches, the planning of forest use becomes more objective and allows the natural basis of forest use and forest restoration to be preserved. The next step is applying these approaches in practice.

On the territories of three key sites with different environmental conditions (the coast of the Irkutsk Reservoir, the Goloustnaya River basin, and the surroundings of the Gakhany settlement, 60 km north of the urbanized area of Ust'-Orda, Irkutsk Region), we performed studies aimed at revealing the structural-dynamic organization of forests and the trends of their restoration on burnt areas and on cuttings of different types. In particular, we took into account the character of anthropogenic impact under modern land use in the region: the considerable limitation of cutting during recent decades and the decrease of areas allotted to agriculture and pasture, against the background of climate dynamics and variability in the region in recent years.

Abstract:
We address combinatorial problems that can be formulated as minimization of a partially separable function of discrete variables (energy minimization in graphical models, weighted constraint satisfaction, pseudo-Boolean optimization, 0-1 polynomial programming). For polyhedral relaxations of such problems it is generally not true that variables taking integer values in the relaxed solution retain the same values in the optimal discrete solution. Those which do are called persistent. Such persistent variables define a part of a globally optimal solution. Once identified, they can be excluded from the problem, reducing its size. To any polyhedral relaxation we associate a sufficient condition proving persistency of a subset of variables. We set up a specially constructed linear program which determines the set of persistent variables maximal with respect to the relaxation. The condition improves as the relaxation is tightened and possesses all its invariances. The proposed framework explains a variety of existing methods originating from different areas of research and based on different principles. A theoretical comparison is established that relates these methods to the standard linear relaxation and proves that the proposed technique identifies the same or a larger set of persistent variables.
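The notion of persistency can be stated schematically as follows (the notation below is chosen here for illustration and is not taken from the paper):

```latex
% Discrete problem and its polyhedral relaxation (schematic):
\min_{x \in P \cap \{0,1\}^n} \langle c, x \rangle
\qquad \text{relaxed to} \qquad
\min_{x \in P} \langle c, x \rangle .
% A coordinate i whose relaxed optimum is integral, \hat{x}_i \in \{0,1\},
% is persistent if every optimal discrete solution y satisfies
% y_i = \hat{x}_i; such coordinates can be fixed and eliminated,
% shrinking the remaining discrete problem.
```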

Abstract:
We consider the discrete pairwise energy minimization problem (weighted constraint satisfaction, max-sum labeling) and methods that identify a globally optimal partial assignment of variables. When finding a complete optimal assignment is intractable, determining optimal values for a part of the variables is an interesting possibility. Existing methods are based on different sufficient conditions. We propose a new sufficient condition for partial optimality which is: (1) verifiable in polynomial time; (2) invariant to reparametrization of the problem and permutation of labels; and (3) inclusive of many existing sufficient conditions as special cases. We pose the problem of finding the maximum optimal partial assignment identifiable by the new sufficient condition. A polynomial method is proposed which is guaranteed to assign the same or a larger part of the variables than several existing approaches. The core of the method is a specially constructed linear program that identifies persistent assignments in an arbitrary multi-label setting.

Abstract:
In this work, we prove several relations between three different energy minimization techniques: the recently proposed method of Ivan Kovtun (IK) for determining a provably optimal partial assignment of variables, the linear programming relaxation approach (LP), and the popular expansion-move algorithm of Yuri Boykov. We propose a novel sufficient condition for optimal partial assignment, which is based on the LP relaxation and which we call LP-autarky. We show that the methods of Kovtun, which build auxiliary submodular problems, fulfill this sufficient condition. The following link is thus established: the LP relaxation cannot be tightened by IK. For non-submodular problems this is a non-trivial result. In the case of two labels, the LP relaxation provides an optimal partial assignment, known as persistency, which, as we show, dominates IK. Relating IK to the expansion move, we show that the set of fixed points of the expansion move with any "truncation" rule coincides for the initial problem and for the problem restricted by the one-vs-all method of IK, i.e. the expansion move cannot be improved by this method. In the case of two labels, the expansion move with a particular truncation rule coincides with the one-vs-all method.

Abstract:
We develop a novel distributed algorithm for the minimum cut problem. We primarily aim at solving large sparse problems. Assuming the vertices of the graph are partitioned into several regions, the algorithm performs path augmentations inside the regions and push-relabel style updates between the regions. The interaction between regions is considered expensive (regions are loaded into memory one-by-one or located on separate machines in a network). The algorithm works in sweeps, i.e. passes over all regions. Let $B$ be the set of vertices incident to inter-region edges of the graph. We present sequential and parallel versions of the algorithm which terminate in at most $2|B|^2+1$ sweeps. The competing algorithm by Delong and Boykov uses push-relabel updates inside regions. In the case of a fixed partition we prove that this algorithm has a tight $O(n^2)$ bound on the number of sweeps, where $n$ is the number of vertices. We tested sequential versions of the algorithms on instances of maxflow problems in computer vision. Experimentally, the number of sweeps required by the new algorithm is much lower than for Delong and Boykov's variant. Large problems (up to $10^8$ vertices and $6\cdot 10^8$ edges) are solved using under 1GB of memory in about 10 sweeps.
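For reference, the problem being distributed is the classical s-t minimum cut, whose value equals the maximum flow. Below is a minimal single-machine sketch (plain Edmonds-Karp augmenting paths on an adjacency matrix), not the regional augmentation/push-relabel scheme of the paper; all names are illustrative.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    edges: list of (u, v, capacity). By the max-flow/min-cut theorem,
    the returned value is also the capacity of a minimum s-t cut.
    """
    cap = [[0] * n for _ in range(n)]  # residual capacities
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left: flow is maximal
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Toy graph: the minimum cut separating 0 from 3 has value 5.
print(max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 3, 2), (2, 3, 3), (1, 2, 1)], 0, 3))
```

The distributed algorithm in the abstract restricts such augmentations to stay within one region at a time, which is what makes the sweep bound nontrivial.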

Abstract:
Opisthorchis felineus, or Siberian liver fluke, is a trematode parasite (Opisthorchiidae) that infects the hepato-biliary system of humans and other mammals. Despite its public health significance, this widespread Eurasian species is one of the most poorly studied human liver flukes, and nothing is known about its population genetic structure and demographic history. In this paper, we attempt to fill this gap for the first time and to explore the genetic diversity in O. felineus populations from Eastern Europe (Ukraine, European part of Russia), Northern Asia (Siberia) and Central Asia (Northern Kazakhstan). Analysis of marker DNA fragments from O. felineus mitochondrial cytochrome c oxidase subunits 1 and 3 (cox1, cox3) and nuclear rDNA internal transcribed spacer 1 (ITS1) sequences revealed that genetic diversity is very low across the large geographic range of this species. Microevolutionary processes in populations of trematodes may well be influenced by their peculiar biology. Nevertheless, we suggest that the lack of population genetic structure observed in O. felineus can be primarily explained by Pleistocene glacial events and subsequent sudden population growth from a very limited group of founders. Rapid range expansion of O. felineus through Asian and European territories after a severe bottleneck points to a high dispersal potential of this trematode species.

Abstract:
Most image labeling problems such as segmentation and image reconstruction are fundamentally ill-posed and suffer from ambiguities and noise. Higher order image priors encode high level structural dependencies between pixels and are key to overcoming these problems. However, these priors in general lead to computationally intractable models. This paper addresses the problem of discovering compact representations of higher order priors which allow efficient inference. We propose a framework for solving this problem which uses a recently proposed representation of higher order functions where they are encoded as lower envelopes of linear functions. Maximum a Posteriori inference on our learned models reduces to minimizing a pairwise function of discrete variables, which can be done approximately using standard methods. Although this is a primarily theoretical paper, we also demonstrate the practical effectiveness of our framework on the problem of learning a shape prior for image segmentation and reconstruction. We show that our framework can learn a compact representation that approximates a prior that encourages low curvature shapes. We evaluate the approximation accuracy, discuss properties of the trained model, and show various results for shape inpainting and image segmentation.
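The lower-envelope representation mentioned above can be written schematically as follows (the symbols $w_{ki}$, $c_k$, $K$ are illustrative names for the learned parameters, not notation from the paper):

```latex
% A higher-order clique potential encoded as the lower envelope
% (pointwise minimum) of K linear functions of the clique variables:
E_h(x_1,\dots,x_m) \;=\; \min_{k \in \{1,\dots,K\}}
\Big( c_k + \sum_{i=1}^{m} w_{ki}\, x_i \Big).
% Introducing an auxiliary "switch" variable z \in \{1,\dots,K\} that
% selects the active linear piece replaces E_h by a sum of pairwise
% terms between z and each x_i, so MAP inference reduces to a
% pairwise model as stated in the abstract.
```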

Abstract:
We consider the NP-hard problem of MAP-inference for graphical models. We propose a polynomial time, practically efficient algorithm for finding a part of its optimal solution. Specifically, our algorithm marks each label in each node of the considered graphical model either as (i) optimal, meaning that it belongs to all optimal solutions of the inference problem; (ii) non-optimal, if it provably does not belong to any optimal solution; or (iii) undefined, which means our algorithm cannot make a decision regarding the label. Moreover, we prove optimality of our approach: it delivers, in a certain sense, the largest total number of labels marked as optimal or non-optimal. We demonstrate the superiority of our approach on problems from machine learning and computer vision benchmarks.

Abstract:
The prognostic properties of four low-frequency seismic noise statistics are discussed: the multifractal singularity spectrum support width, a wavelet-based smoothness index of seismic noise waveforms, the minimum normalized entropy of squared orthogonal wavelet coefficients, and an index of linear predictability. The proposed methods are illustrated by analysis of data from the broad-band seismic network F-net in Japan covering more than 15 years of observation, from the beginning of 1997 to 15 May 2012. A previous analysis of the multifractal properties of low-frequency seismic noise allowed us to formulate, in mid-2008, the hypothesis that the Japan Islands were approaching a future seismic catastrophe. The basis for this hypothesis was a statistically significant decrease in the mean value of the multifractal singularity spectrum support width. The behavior of the correlation coefficient, estimated within a 1-year moving window, between the median values of the singularity spectrum support width and the generalized Hurst exponent allowed us to conclude that, starting from July 2010, Japan had entered a state of awaiting a strong earthquake. This prediction of the Tohoku mega-earthquake, initially (in mid-2008) with a lower magnitude estimate of only 8.3, and later (from mid-2010) with an estimate of the onset time of the awaited earthquake, was published in advance in a number of scientific articles and abstracts at international conferences. It is shown that the other three statistics (besides the singularity spectrum support width) can also delineate seismically dangerous regions. The analysis of seismic noise data after the Tohoku mega-earthquake indicates an increased probability of a second strong earthquake within the region where the northern part of the Philippine Sea plate approaches Honshu Island (the Nankai Trough).
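One of the four statistics, the normalized entropy of squared wavelet coefficients, admits a compact sketch. The formula assumed below (energy-normalized Shannon entropy divided by its maximum, log N) is a standard convention; the exact definition used in the cited work may differ in details.

```python
import math

def normalized_entropy(coeffs):
    """Normalized entropy of squared coefficients, in [0, 1].

    p_k = c_k^2 / sum_j c_j^2;  En = -sum_k p_k log p_k / log N.
    Values near 0: energy concentrated in few coefficients;
    values near 1: energy spread uniformly over all of them.
    """
    energies = [c * c for c in coeffs]
    total = sum(energies)
    h = 0.0
    for e in energies:
        if e > 0:
            p = e / total
            h -= p * math.log(p)
    return h / math.log(len(coeffs))

print(normalized_entropy([1, 0, 0, 0]))  # all energy in one coefficient: 0.0
print(normalized_entropy([1, 1, 1, 1]))  # uniform spread: 1.0
```

In the abstract's setting, the minimum of this quantity over the library of orthogonal wavelet bases is tracked as a function of time over the noise records.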