Abstract:
The occurrence of atrial tachycardia (AT) is a direct function of the volume of atrial tissue ablated in patients with atrial fibrillation (AF). Thus, the incidence of AT is highest in persistent AF patients undergoing stepwise ablation using the strategic combination of pulmonary vein isolation, electrogram-based ablation, and left atrial linear ablation. Using a deductive mapping strategy, AT can be divided into three clinical categories, viz. macroreentrant, focal, and the newly described localized reentrant AT, all of which are amenable to catheter ablation with a success rate of 95%. Perimitral, roof-dependent, and cavotricuspid isthmus-dependent AT involve large reentrant circuits, which can be successfully ablated at the left mitral isthmus, left atrial roof, and cavotricuspid isthmus, respectively. Complete bidirectional block across the sites of linear ablation is a necessary endpoint. Focal and localized reentrant AT commonly originate from, but are not limited to, the septum, posteroinferior left atrium, venous ostia, base of the left atrial appendage, and left mitral isthmus, and they respond quickly to focal ablation. AT not only represents ablation-induced proarrhythmia but also forms a bridge between AF and sinus rhythm in longstanding AF patients treated successfully with catheter ablation.

1. Introduction
Atrial fibrillation (AF) is no longer a formidable rhythm since ablationists challenged this notorious arrhythmia more than a decade ago in their unprecedented quest for sinus rhythm (SR) [1]. Ablation strategies are based on the clinical type of AF; nevertheless, the volume of tissue ablated to treat AF is the highest for any cardiac arrhythmia described so far. Paroxysmal AF is amenable to catheter ablation with minimal atrial tissue destruction, such that electrical isolation of the pulmonary veins (PVs) suffices to establish cure [2].
Persistent and longer-lasting forms of AF necessitate extensive atrial tissue ablation in addition to PV isolation to restore SR [3–7]. Besides having evolved as a therapeutic option in symptomatic AF, surgical ablation has become a routine adjunct to many valvular surgeries and may be employed with surgical coronary revascularization and also as a "standalone" procedure [8, 9]. Despite improvements in ablation strategies, a relatively high volume of tissue is ablated in AF. Together with remodeling of the atria, this provides a favourable substrate for the development of sustained atrial tachycardias during and after AF ablation (ATp) [4].

2. Magnitude of ATp Burden
Based on our observation and also that of

Abstract:
Parkinson’s disease is the second most common neurodegenerative disorder after Alzheimer’s disease, affecting 1%-2% of people >60 years old and 3%-4% of people >80.

Gravitation is one of the central forces playing an important role in the formation of natural systems such as galaxies and planets. Gravitational forces between the particles of a gaseous cloud transform the cloud into spherical shells and disks of higher density during gravitational contraction; the density can reach that of a solid body. The theoretical model was tested by simulating the formation of a spiral galaxy and of Saturn and its disk, using a novel N-body self-gravitational model. It is demonstrated that the formation of the spirals of the galaxy and of the disk of the planet is the result of the gravitational contraction of a slowly rotating particle cloud shaped as a slightly deformed sphere for Saturn and as an ellipsoid for the spiral galaxy. For Saturn, the sphere was flattened by a coefficient of 0.8 along the axis of rotation. During the gravitational contraction, the major part of the cloud transformed into the planet and a minor part into a disk. The thin structured disk is a result of electromagnetic interaction, in which the magnetic forces acting on charged particles of the cloud originate from the core of the planet.
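A minimal sketch of the kind of N-body self-gravitational simulation described above (the paper's actual model, particle counts, and the electromagnetic disk forces are not reproduced here; all parameters below, including the flattening coefficient applied to a random cloud, are illustrative assumptions):

```python
import numpy as np

def gravity_accel(pos, mass, G=1.0, eps=1e-2):
    """Pairwise softened gravitational acceleration on each particle."""
    d = pos[None, :, :] - pos[:, None, :]          # displacement vectors r_j - r_i
    r2 = (d ** 2).sum(-1) + eps ** 2               # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-interaction
    return G * (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step (symplectic, stable for long runs)."""
    vel = vel + 0.5 * dt * gravity_accel(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * gravity_accel(pos, mass)
    return pos, vel

# A slowly rotating particle cloud, flattened by 0.8 along the rotation axis
rng = np.random.default_rng(0)
n = 500
pos = rng.normal(size=(n, 3))
pos[:, 2] *= 0.8                                   # flatten along the z axis
omega = 0.05                                       # slow rigid-body rotation
vel = omega * np.stack([-pos[:, 1], pos[:, 0], np.zeros(n)], axis=1)
mass = np.full(n, 1.0 / n)                         # equal-mass particles

for _ in range(10):                                # evolve the contraction
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
```

In a longer run one would track the growth of central density and the material left orbiting in the equatorial plane.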

Abstract:
If the wave functions of matter expanded with time dilation for an outside observer in the same way as photons do in gravitational redshift, then, with some modifications, general relativity alone might explain dark matter, galaxy rotation curves, and part of the energy released in supernova explosions. Also, the event horizons of black holes could not form when packing matter ever more densely together. Essentially, if the time dilation increases enough, the particles become less localized to outside observers, and the mass distribution of the same particles would expand into a larger volume of space. Small particles deep inside a black hole might instead appear, by their gravitational influence, like dark matter if the time dilation alters their size enough for outside observers. At the same time, the surface particles of the black hole would be less dispersed, creating the Newtonian gravitational potential we see closer to black holes. The following research does not attempt to reformulate general relativity itself, but only proposes the idea while approximating the Milky Way gravity profile to compare the hypothesis with measurements. Therefore, actually proving the hypothesis is still far off, while the idea is sound at its core.
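For reference, the unmodified gravitational time dilation that the hypothesis would build on can be computed from the Schwarzschild exterior solution (standard general relativity only; the proposed modification is not represented, and the black-hole mass below is an illustrative assumption):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_dilation(mass_kg, r_m):
    """Factor dtau/dt for a static observer at radius r outside mass M
    (Schwarzschild exterior metric; valid only for r > r_s)."""
    r_s = 2 * G * mass_kg / c ** 2          # Schwarzschild radius
    if r_m <= r_s:
        raise ValueError("radius must exceed the Schwarzschild radius")
    return math.sqrt(1 - r_s / r_m)

# A mass of ~4.3e6 solar masses (roughly Sgr A*-like) for illustration
m = 4.3e6 * 1.989e30
r_s = 2 * G * m / c ** 2
for r_mult in (1.5, 3, 10, 100):
    print(r_mult, schwarzschild_dilation(m, r_mult * r_s))
```

The factor approaches zero at the horizon, which is where the abstract's argument about delocalization of deeply buried particles would take over.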

We establish the uniqueness and local existence of weak solutions for a system of partial differential equations which describes nonlinear motions of a viscous stratified fluid in a homogeneous gravity field. Due to the presence of the stratification equation for the density, the model and the problem are new and thus different from the classical Navier-Stokes equations.
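A standard form of such a stratified viscous system (an assumption for illustration; the paper's exact equations may differ) is
\[
\partial_t v + (v \cdot \nabla) v - \nu \Delta v + \nabla p = -\rho g\, e_3, \qquad
\partial_t \rho + v \cdot \nabla \rho = 0, \qquad
\nabla \cdot v = 0,
\]
where $v$ is the velocity field, $p$ the pressure, $\rho$ the density, $\nu$ the viscosity, $g$ the gravitational acceleration, and $e_3$ the vertical unit vector; the transport equation for $\rho$ is the stratification equation responsible for the departure from the classical Navier-Stokes setting.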

Abstract:
In this work, the classical Borsuk conjecture is discussed, which states that any set of diameter 1 in the Euclidean space $ {\mathbb R}^d $ can be divided into $ d+1 $ parts of smaller diameter. During the last two decades, many counterexamples to the conjecture have been proposed in high dimensions. However, all of them are sets of diameter 1 that lie on spheres whose radii are close to the value $ \frac{1}{\sqrt{2}} $. The main result of this paper is as follows: {\it for any $ r > \frac{1}{2} $, there exists a $ d_0 $ such that for all $ d \ge d_0 $, a counterexample to Borsuk's conjecture can be found on a sphere $ S_r^{d-1} \subset {\mathbb R}^d $.}

Abstract:
We use a special space of integrable functions for studying the Cauchy problem for linear functional-differential equations with nonintegrable singularities. We use the ideas developed by Azbelev and his students (1995). We show that by choosing the function ψ generating the space, one can guarantee solvability and certain behavior of the solution near the point of singularity.

Abstract:
Nowadays it is practically forgotten that, for observables with degenerate spectra, the original von Neumann projection postulate differs crucially from the version of the projection postulate which was later formalized by Lüders. The latter (and not that due to von Neumann) plays the crucial role in the basic constructions of quantum information theory. We start this paper with a presentation of the notions related to the projection postulate. Then we recall that the argument of Einstein, Podolsky, and Rosen against the completeness of QM was based on the version of the projection postulate which is nowadays called the Lüders postulate. Then we recall that all basic measurements on composite systems are represented by observables with degenerate spectra. This implies that the difference between the formulations of the projection postulate (due to von Neumann and to Lüders) should be taken seriously in the analysis of the basic constructions of quantum information theory. This paper is a review devoted to such an analysis.

1. Introduction
We recall that for observables with nondegenerate spectra the two versions of the projection postulate, see von Neumann [1] and Lüders [2], coincide. We restrict our considerations to observables with purely discrete spectra. In this case each pure state is projected, as the result of measurement, onto another pure state, the corresponding eigenvector. Lüders postulated that the situation does not change even in the case of degenerate spectra; see [2]: by projecting a pure state we again obtain a pure state, the orthogonal projection onto the corresponding eigen-subspace. However, von Neumann pointed out that in general the postmeasurement state is not pure; it is a mixed state. The difference is crucial! And it is surprising that so little attention has been paid to this important problem until now.
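The distinction can be made concrete in a small numerical example (a hedged sketch; the three-dimensional state and the degenerate observable below are arbitrary toy choices, not taken from the text):

```python
import numpy as np

# Observable with a degenerate eigenvalue whose eigen-subspace is
# spanned by the first two basis vectors of C^3
P = np.diag([1.0, 1.0, 0.0])                    # projector onto the subspace

psi = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)    # initial pure state
rho = np.outer(psi, psi.conj())

p = np.trace(P @ rho @ P).real                  # probability of this outcome

# Lueders: orthogonal projection of the state onto the eigen-subspace,
# renormalized -> again a pure state
rho_lueders = P @ rho @ P / p

# von Neumann: the measurement is refined by an orthonormal basis {e_k}
# of the degenerate subspace -> in general a mixed state
basis = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
rho_vn = sum(np.abs(e @ psi) ** 2 * np.outer(e, e) for e in basis) / p

purity = lambda r: np.trace(r @ r).real
print(purity(rho_lueders))   # ~1.0: a pure postmeasurement state
print(purity(rho_vn))        # ~0.5: a mixed postmeasurement state
```

For a nondegenerate eigenvalue the subspace is one-dimensional and the two prescriptions coincide, as stated above.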
It is especially surprising if one takes into account the fundamental role played by the projection postulate in quantum information (QI) theory. QI is approaching the stage of technological verification, and the absence of a detailed analysis of the mentioned problem is a weak point in its foundations. This paper is devoted to such an analysis. We start with a short recollection of the basic facts on the projection postulates and conditional probabilities in QM. Then we analyze the EPR argument against the completeness of QM [3]. Since Einstein et al. proceeded at the physical level of rigor, it is a difficult task to extract from their considerations which version of the projection postulate was used. We did this in [4, 5]. Now

Abstract:
The main aim of this report is to inform the quantum information community about investigations on the problem of probabilistic compatibility of a family of random variables: the possibility of realizing such a family on the basis of a single probability measure (of constructing a single Kolmogorov probability space). These investigations were started more than a hundred years ago by J. Boole (who invented Boolean algebras). The complete solution of the problem was obtained by the Soviet mathematician Vorobjev in the 1960s. Surprisingly, probabilists and statisticians obtained inequalities for probabilities and correlations among which one can find the famous Bell inequality and its generalizations. Such inequalities appeared simply as constraints for probabilistic compatibility. In this framework one cannot see a priori any link to such problems as nonlocality and the "death of reality" which are typically linked to Bell-type inequalities in the physics literature. We analyze the difference between the positions of mathematicians and quantum physicists. In particular, we find that one of the most reasonable explanations of probabilistic incompatibility is the mixing, in Bell-type inequalities, of statistical data from a number of experiments performed under different experimental contexts.
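The compatibility constraint can be illustrated numerically: whenever four ±1-valued random variables $a, a', b, b'$ are all defined on one Kolmogorov probability space, their correlations automatically satisfy the Bell-CHSH bound (a minimal sketch; the distributions are randomly generated for illustration):

```python
import itertools
import random

random.seed(1)

# All 16 joint outcomes of four +/-1 random variables (a, a', b, b')
outcomes = list(itertools.product([-1, 1], repeat=4))

def chsh(p):
    """CHSH combination |E(a,b) + E(a,b') + E(a',b) - E(a',b')|
    of the correlations under a single joint distribution p."""
    E = lambda i, j: sum(q * o[i] * o[j] for q, o in zip(p, outcomes))
    return abs(E(0, 2) + E(0, 3) + E(1, 2) - E(1, 3))

# Every joint distribution (i.e. every single Kolmogorov space)
# respects the bound <= 2 -- the constraint for probabilistic compatibility
for _ in range(1000):
    w = [random.random() for _ in outcomes]
    total = sum(w)
    p = [x / total for x in w]
    assert chsh(p) <= 2 + 1e-9
```

Quantum experiments can violate the bound (up to $2\sqrt{2}$), which in this framework signals that the data from the differently contextualized experimental runs do not fit on a single probability space.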