Abstract:
Nowadays we are often faced with huge databases resulting from the rapid growth of data storage technologies. This is particularly true of music databases. In this context, it is essential to have techniques and tools able to extract discriminating properties from these massive datasets. In this work, we report a statistical analysis of more than ten thousand songs aimed at establishing a complexity hierarchy. Our approach is based on estimating the permutation entropy combined with an intensive complexity measure, building up the complexity-entropy causality plane. The results indicate that this representation space is very promising for discriminating songs as well as for making relative quantitative comparisons among them. Additionally, we believe the method reported here may be applied in practical situations, since it is simple, robust, and fast to implement numerically.
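As a minimal illustration of the permutation-entropy estimation underlying this approach, here is a sketch of the standard Bandt-Pompe procedure (the embedding order and normalization shown are common defaults, not necessarily the authors' exact settings):

```python
import math
import numpy as np

def permutation_entropy(x, order=3, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D series (sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        # Ordinal pattern: rank order of the values in the window.
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    h = float(-(p * np.log(p)).sum())
    if normalize:
        h /= math.log(math.factorial(order))  # max entropy is log(order!)
    return h
```

A monotonic series yields a single ordinal pattern and hence zero entropy, while white noise approaches the maximum of 1.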

Abstract:
We deal here with the issue of determinism versus randomness in time series: one wishes to identify their relative weights in a given series. Two different tools have been advanced in the literature to this effect, namely, (i) the "causal" entropy-complexity plane [Rosso et al., Phys. Rev. Lett. 99 (2007) 154102] and (ii) the estimation of the decay rate of missing ordinal patterns [Amigó et al., Europhys. Lett. 79 (2007) 50001; Carpi et al., Physica A 389 (2010) 2020-2029]. In this work we extend the use of these techniques to the analysis of deterministic finite time series contaminated with additive noise of varying degrees of correlation. The chaotic series studied here were generated by the logistic map (r = 4), to which we added correlated noise (colored noise with f^{-k} power spectrum, 0 ≤ k ≤ 2) of varying amplitudes. In this fashion, important insights into the deterministic component of the original time series can be gained. We find that in the entropy-complexity plane this goal can be achieved without additional computations.
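A sketch of how a point on the entropy-complexity causality plane can be computed, using the Jensen-Shannon statistical complexity over the Bandt-Pompe ordinal distribution (the normalization constant and embedding order below follow the usual conventions and are assumptions, not taken from this paper):

```python
import math
import numpy as np

def ordinal_distribution(x, order=3):
    """Probability distribution of Bandt-Pompe ordinal patterns,
    zero-padded to all order! possible patterns."""
    x = np.asarray(x, dtype=float)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        key = tuple(np.argsort(x[i:i + order]))
        counts[key] = counts.get(key, 0) + 1
    p = np.zeros(math.factorial(order))
    for j, c in enumerate(counts.values()):
        p[j] = c / n
    return p

def complexity_entropy(x, order=3):
    """Point (H, C) on the complexity-entropy causality plane:
    normalized permutation entropy H and Jensen-Shannon
    statistical complexity C."""
    p = ordinal_distribution(x, order)
    n = len(p)
    def shannon(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()
    h = shannon(p) / np.log(n)
    pe = np.full(n, 1.0 / n)  # uniform (equilibrium) distribution
    js = shannon((p + pe) / 2) - shannon(p) / 2 - shannon(pe) / 2
    # Normalization: max JS divergence (delta vs. uniform distribution).
    js_max = -0.5 * (((n + 1) / n) * np.log(n + 1)
                     + np.log(n) - 2 * np.log(2 * n))
    return h, (js / js_max) * h
```

Fully regular series land near (0, 0) and white noise near (1, 0); chaotic maps sit at intermediate entropy with high complexity.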

Abstract:
Entropy has been a common index for quantifying the complexity of time series in a variety of fields. Here, we introduce increment entropy to measure the complexity of time series, in which each increment is mapped onto a word of two letters, one corresponding to the direction and the other to the magnitude. The Shannon entropy of these words is termed increment entropy (IncrEn). Simulations on synthetic data and tests on epileptic EEG signals demonstrate its ability to detect abrupt changes, whether energetic (e.g., spikes or bursts) or structural. The computation of IncrEn makes no assumptions about the time series, and it is applicable to arbitrary real-world data.
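The two-letter coding can be sketched as follows; the magnitude quantization (scaling by the increments' standard deviation with resolution R) is one plausible choice for illustration, not necessarily the paper's exact scheme:

```python
import numpy as np

def increment_entropy(x, m=2, R=4):
    """Increment entropy (IncrEn) sketch: each increment becomes a
    (sign, quantized magnitude) letter pair; Shannon entropy is taken
    over words of m consecutive coded increments."""
    v = np.diff(np.asarray(x, dtype=float))
    std = v.std()
    if std == 0:
        return 0.0  # constant series carries no increment information
    s = np.sign(v).astype(int)                             # direction letter
    q = np.minimum(np.floor(R * np.abs(v) / std), R).astype(int)  # magnitude letter
    counts = {}
    n = len(v) - m + 1
    for i in range(n):
        word = tuple(zip(s[i:i + m], q[i:i + m]))
        counts[word] = counts.get(word, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log2(p)).sum())
```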

Abstract:
Using a wavelet analysis approach, we can derive a measure of the disorder content of solar activity by following the temporal evolution of the so-called wavelet entropy. The interesting feature of this parameter is its ability to extract dynamical complexity information, in terms of the frequency distribution of the energy content, while avoiding restrictions common in nonlinear dynamics theory, such as stationarity. The analysis is performed on the monthly time series of sunspot numbers. From the time behaviour of the wavelet entropy we find a clear increase in the disorder content of solar activity for the current 23rd solar cycle. This result suggests generally low accuracy for current solar cycle prediction methods. Moreover, we point out a possible connection between the behaviour of the wavelet entropy and the excursion phases of the solar dipole.
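Assuming the common definition of wavelet entropy as the Shannon entropy of the relative wavelet energy per scale, a toy implementation using a plain Haar multiresolution decomposition (the wavelet choice and number of levels here are illustrative assumptions):

```python
import numpy as np

def wavelet_entropy(x, levels=6):
    """Shannon entropy of the relative wavelet energy across scales,
    via an iterative Haar decomposition (sketch)."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        if len(a) < 2:
            break
        if len(a) % 2:
            a = a[:-1]  # drop trailing sample so pairs line up
        d = (a[0::2] - a[1::2]) / np.sqrt(2)  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # approximation coefficients
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))          # residual approximation energy
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

A constant signal concentrates all energy in one scale (zero entropy); broadband noise spreads energy across scales and scores higher.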

Abstract:
Entropy measures have become increasingly popular as an evaluation metric for complexity in the analysis of time series data, especially in physiology and medicine. Entropy measures the rate of information gain, or degree of regularity, in a time series such as a heartbeat. Ideally, entropy should be able to quantify the complexity of any underlying structure in the series, as well as determine whether the variation arises from a random process. Unfortunately, most current entropy measures are unable to perform the latter differentiation. Thus, a high entropy score indicates a random or chaotic series, whereas a low score indicates a high degree of regularity. This leads to the observation that current entropy measures are equivalent to evaluating how random a series is, or conversely its degree of regularity. It also raises the possibility that existing tests for randomness, such as the runs test or permutation test, may have similar utility in diagnosing certain conditions. This paper compares various tests for randomness with existing entropy-based measures such as sample entropy, permutation entropy, and multiscale entropy. Our experimental results indicate that the test statistics of the runs test and permutation test are often highly correlated with entropy scores and may provide further information regarding the complexity of time series.
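For concreteness, the runs test mentioned above reduces to a simple z-statistic; a minimal sketch (Wald-Wolfowitz runs above/below the median, a standard formulation and not necessarily the paper's exact variant):

```python
import numpy as np

def runs_test_z(x):
    """Wald-Wolfowitz runs test z-statistic for randomness,
    dichotomizing the series about its median."""
    x = np.asarray(x, dtype=float)
    b = x > np.median(x)
    n1, n2 = b.sum(), (~b).sum()
    runs = 1 + np.count_nonzero(b[1:] != b[:-1])   # number of sign runs
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1              # expected runs
    var = (mu - 1) * (mu - 2) / (n1 + n2 - 1)       # variance of runs
    return float((runs - mu) / np.sqrt(var))
```

Strongly regular series produce far too few runs (large negative z), whereas a random series keeps |z| small, mirroring the low/high entropy split described above.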

Abstract:
Approximate entropy (ApEn), a model-independent statistic for quantifying serial irregularity, was used to evaluate changes in the temporal dynamics of sap flow in two tropical tree species subjected to water deficit. Water deficit induced a decrease in the sap flow of G. ulmifolia, whereas C. legalis held its sap flow levels stable. Slight increases in time series complexity were observed in both species under drought conditions. This study showed that ApEn could be a helpful tool for assessing slight changes in the temporal dynamics of physiological data and for uncovering patterns in plant physiological responses to environmental stimuli.
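A compact sketch of the Pincus ApEn(m, r) statistic used here (the default tolerance r = 0.2 times the series' standard deviation is a conventional choice, not taken from this study):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # conventional tolerance (assumption)
    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])
        # Fraction of templates within Chebyshev distance r of each row
        # (self-matches included, as in the original definition).
        c = np.array([np.mean(np.max(np.abs(emb - row), axis=1) <= r)
                      for row in emb])
        return np.mean(np.log(c))
    return float(phi(m) - phi(m + 1))
```

Highly regular signals (e.g., a strict alternation) score near zero, while irregular signals score higher, which is the contrast exploited in the sap flow comparison.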

Abstract:
Recent studies determined that the skew-normal Shannon entropy corresponds to the difference between the usual Gaussian Shannon entropy and a term that depends on the skewness parameter. This allows one to identify the departure from normality of a perturbed distribution. In this paper, we provide the Rényi entropy and complexity measure for a novel, flexible class of multivariate skew-normal distributions and their related families, as a characteristic form of the skew-Gaussian Shannon entropy. We give closed expressions considering a more general class of closed skew-normal distributions and the weighted-moments estimation method. In addition, closed expressions of the Rényi entropy are presented for univariate truncated skew-normal and multivariate extended skew-normal distributions. Finally, additional inequalities for the Rényi and Shannon entropies of skew-normal and extended skew-normal distributions are reported.
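For reference, the Rényi entropy whose closed forms are derived here is, for a density $f$ on $\mathbb{R}^d$ and order $\alpha$:

```latex
H_\alpha(f) \;=\; \frac{1}{1-\alpha}\,
  \ln \int_{\mathbb{R}^d} f^{\alpha}(\mathbf{x})\, d\mathbf{x},
  \qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the Shannon entropy $H(f) = -\int f \ln f$ in the limit $\alpha \to 1$.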

Abstract:
In a recent paper [2002 Phys. Rev. Lett. 88 174102], Bandt and Pompe proposed permutation entropy (PE) as a natural complexity measure for arbitrary time series, which may be stationary or nonstationary, deterministic or stochastic. Their method is based on a comparison of neighbouring values. This paper further develops PE and proposes the concept of fine-grained PE (FGPE), defined by the order pattern and the magnitude of the difference between neighbouring values. This measure excludes the case where vectors with a distinct appearance are mistakenly mapped onto the same permutation type; consequently, FGPE is more sensitive to dynamical changes in a time series than PE, according to our simulation and experimental results.
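One plausible reading of the fine-graining idea can be sketched by appending a quantized magnitude symbol to each ordinal pattern, so that windows with the same ordering but very different amplitudes are no longer merged; the quantization rule below is an illustrative assumption, not the paper's exact definition:

```python
import numpy as np

def fine_grained_pe(x, order=3, alpha=1.0):
    """Fine-grained permutation entropy sketch: each window is coded
    by its ordinal pattern plus a magnitude symbol q, here taken as
    the window's mean |increment| measured in units of alpha times
    the global increment std (assumption)."""
    x = np.asarray(x, dtype=float)
    scale = alpha * np.abs(np.diff(x)).std()
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        w = x[i:i + order]
        pattern = tuple(np.argsort(w))
        q = 0 if scale == 0 else int(np.mean(np.abs(np.diff(w))) // scale)
        key = (pattern, q)
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum())
```

Because the coding refines the plain ordinal partition, FGPE is never smaller than the (unnormalized) PE of the same series.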

Abstract:
A novel topological and computational method for 'motion' is described. Motion is constrained by inequalities in terms of Kolmogorov complexity. Causality is obtained as the output of a high-pass filter, passing through only high values of Kolmogorov complexity. Motion under the electromagnetic field is described via an immediate relationship with the G_2 holonomy group and its corresponding dense free 2-subgroup. Similarly to causality, spin emerges as an immediate and inevitable consequence of high values of Kolmogorov complexity. Consequently, the physical laws are nothing but a low-pass filter for small values of Kolmogorov complexity.