Abstract:
An understanding of the dynamics of soil organic matter (SOM) is important for our ability to develop management practices that preserve soil quality and sequester carbon. Most SOM decomposition models represent the heterogeneity of organic matter by a few discrete compartments with different turnover rates, while other models employ a continuous quality distribution. To make the multi-compartment models more mechanistic in nature, it has been argued that the compartments should be related to soil fractions actually occurring and having a functional role in the soil. In this paper, we make the case that fractionation methods that can measure continuous quality distributions should be developed, and that the temporal development of these distributions should be incorporated into SOM models. The measured continuous SOM quality distributions should hold valuable information not only for model development, but also for direct interpretation. Measuring continuous distributions requires that the measurements along the quality variable are so frequent that the distribution approaches the underlying continuum. Continuous distributions lead to possible simplifications of the model formulations, which considerably reduce the number of parameters needed to describe SOM turnover. A general framework for SOM models representing SOM across measurable quality distributions is presented and simplifications for specific situations are discussed. Finally, methods that have been used or have the potential to be used to measure continuous quality SOM distributions are reviewed. Generally, existing fractionation methods will have to be modified to allow measurement of distributions or new fractionation techniques will have to be developed. Developing the distributional models in concert with the fractionation methods to measure the distributions will be a major task. We hope the current paper will help generate the interest needed to accomplish this.
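The continuous-quality framework described above can be illustrated numerically. The following is a minimal sketch, not the paper's model: a discretized carbon density over a quality variable q decays under an assumed quality-dependent first-order rate k(q); all parameter values and the initial distribution are hypothetical.

```python
import numpy as np

# Sketch of a continuous SOM quality distribution (hypothetical parameters,
# not the paper's model): carbon density rho(q) over quality q in [0, 1].
q = np.linspace(0.0, 1.0, 101)            # quality axis (dimensionless)
dq = q[1] - q[0]
rho = np.exp(-((q - 0.7) ** 2) / 0.02)    # fresh litter: mass concentrated at high quality
rho /= rho.sum() * dq                     # normalize to unit total carbon

k0, alpha = 0.5, 3.0                      # two assumed rate parameters
k = k0 * np.exp(alpha * (q - 1.0))        # decay is fastest at high quality

dt, years = 0.1, 20.0
for _ in range(int(years / dt)):
    rho = rho - dt * k * rho              # first-order decay at every quality

total_C = rho.sum() * dq                  # carbon remaining after 20 years
mean_q = (q * rho).sum() * dq / total_C   # mean quality drifts downward as labile C is lost
print(total_C, mean_q)
```

Note that two rate parameters (k0, alpha) here stand in for the per-compartment rate constants of a discrete multi-compartment model, which is the kind of parameter reduction the abstract refers to.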

Abstract:
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle: measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse-graining paradigm and can be straightforwardly generalised, including to continuous cases and general networks. Applying the measure to a series of objects shows that it can be used consistently both for small-scale structures with exact symmetry breaking and for large-scale patterns, where, unlike similar measures, it consistently discriminates between repetitive patterns, random configurations and self-similar structures.

Abstract:
In this paper we prove the probabilistic continuous complexity conjecture. In continuous complexity theory, this states that the complexity of solving a continuous problem with probability approaching 1 converges (in this limit) to the complexity of solving the same problem in its worst case. We prove that the conjecture holds if and only if the space of problem elements is uniformly convex. The non-uniformly convex case has a striking counterexample in the problem of identifying a Brownian path in Wiener space, where it is shown that probabilistic complexity converges to only half of the worst case complexity in this limit.

Abstract:
The number of qubits used by a quantum algorithm will be a crucial computational resource for the foreseeable future. We show how to obtain the classical query complexity for continuous problems. We then establish a simple formula for a lower bound on the qubit complexity in terms of the classical query complexity.

Abstract:
The concept of “pattern” is introduced, formally defined, and used to analyze various measures of the complexity of finite binary sequences and other objects. The standard Kolmogoroff-Chaitin-Solomonoff complexity measure is considered, along with Bennett's “logical depth”, Koppel's “sophistication”, and Chaitin's analysis of the complexity of geometric objects. The pattern-theoretic point of view illuminates the shortcomings of these measures and leads to specific improvements; it gives rise to two novel mathematical concepts, “orders” of complexity and “levels” of pattern; and it yields a new measure of complexity, the “structural complexity”, which measures the total amount of structure an entity possesses.

Abstract:
In this paper we measure one internal attribute of software products, namely software complexity. We present a method proposed in the literature for determining software complexity and attempt to validate it empirically using 10 small programs (the first five written in Pascal and the last five in C++). The results obtained are intuitively correct: the values of average structural complexity and total complexity are higher for the programs which “look” more complex than the others, not only in terms of program length but also in terms of the structures they contain.
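As a toy illustration of the general idea (the paper's actual metric is not reproduced here), the sketch below computes a crude stand-in for structural complexity by weighting control-flow keywords by their nesting depth; the keyword list and weighting scheme are assumptions of this sketch.

```python
# Toy stand-in for a structural complexity metric (not the paper's method):
# control-flow keywords count more the deeper they are nested. Brace and
# begin/end counting is deliberately crude; it is only an illustration.
KEYWORDS = ("if", "while", "for", "case", "repeat", "switch")

def structural_complexity(source):
    depth, total = 0, 0
    for line in source.lower().splitlines():
        stripped = line.strip()
        if any(stripped.startswith(k) for k in KEYWORDS):
            total += depth + 1                      # deeper structures count more
        depth += line.count("{") + line.count("begin")
        depth -= line.count("}") + line.count("end")
    return total

flat = "x = 1;\nif (x) {\ny = 2;\n}\n"
nested = "for (i = 0; i < n; i++) {\nif (a[i]) {\nwhile (b) {\nb--;\n}\n}\n}\n"
print(structural_complexity(flat), structural_complexity(nested))
```

The program with more nested control structures scores higher, mirroring the abstract's observation that programs which “look” more complex receive higher values.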

Abstract:
We define {\em semantic complexity} using a new concept of {\em meaning automata}. We measure the semantic complexity of understanding prepositional phrases, of an "in depth understanding" system, and of a natural language interface to an on-line calendar. We argue that it is possible to measure some semantic complexities of natural language processing systems before building them, and that systems that exhibit relatively complex behavior can be built from semantically simple components.

Abstract:
We apply formal measures of emergence, self-organization, homeostasis, autopoiesis and complexity to an aquatic ecosystem, in particular to the physiochemical component of an Arctic lake. These measures are based on information theory. Variables with a homogeneous distribution have higher values of emergence, while variables with a more heterogeneous distribution have higher self-organization. Variables with a high complexity reflect a balance between change (emergence) and regularity/order (self-organization). In addition, homeostasis values coincide with the variation of the winter and summer seasons. Autopoiesis values show a higher degree of independence of the biological components from their environment. Our approach shows how ecological dynamics can be described in terms of information.

Abstract:
A formal definition of epsilon-complexity of an individual continuous function defined on a unit cube is proposed. This definition is consistent with Kolmogorov's idea of the complexity of an object. A definition of epsilon-complexity for a class of continuous functions with a given modulus of continuity is also proposed. Additionally, an explicit formula for the epsilon-complexity of a functional class is obtained. As a consequence, the paper finds that the epsilon-complexity for the Hölder class of functions can be characterized by a pair of real numbers. Based on these results the paper formulates a conjecture concerning the epsilon-complexity of an individual function from the Hölder class. We also propose a conjecture about the characterization of the epsilon-complexity of a function from the Hölder class given on a discrete grid.
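To make the idea concrete, here is an illustrative estimate, not the paper's formal definition: take the epsilon-complexity of a function on [0, 1] to be the log of the smallest number of uniform samples from which the function can be recovered within epsilon, using piecewise-linear interpolation as the (assumed) reconstruction method.

```python
import math

def eps_complexity(f, eps, n_max=4096):
    """Illustrative estimate (an assumption of this sketch): log2 of the
    smallest number of uniform samples on [0, 1] from which f is recovered
    within eps by piecewise-linear interpolation, checked on a fine grid."""
    fine = [i / n_max for i in range(n_max + 1)]
    target = [f(x) for x in fine]
    n = 2
    while n <= n_max:
        ys = [f(i / n) for i in range(n + 1)]
        err = 0.0
        for x, t in zip(fine, target):
            k = min(int(x * n), n - 1)            # interval containing x
            y = ys[k] + (ys[k + 1] - ys[k]) * (x * n - k)
            err = max(err, abs(y - t))
        if err <= eps:
            return math.log2(n)   # fewer samples suffice => lower complexity
        n *= 2
    return math.log2(n_max)

smooth = lambda x: x * x                  # smooth, easy to recover
rough = lambda x: abs(math.sin(40 * x))   # oscillatory with kinks, harder
print(eps_complexity(smooth, 1e-3), eps_complexity(rough, 1e-3))
```

The smoother function needs far fewer samples for the same accuracy, matching the intuition that epsilon-complexity grows with the roughness (e.g. a smaller Hölder exponent) of the function.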