Abstract:
We develop the mathematical theory of epistemic updates using the tools of duality theory. We focus on the Logic of Epistemic Actions and Knowledge (EAK), introduced by Baltag-Moss-Solecki, without the common knowledge operator. We dually characterize the product update construction of EAK as a construction that transforms the complex algebra associated with a given model into the complex algebra associated with the updated model. This dual characterization naturally generalizes to much wider classes of algebras, which include, but are not limited to, arbitrary BAOs and arbitrary modal expansions of Heyting algebras (HAOs). As an application of this dual characterization, we axiomatize the intuitionistic analogue of EAK, which we refer to as IEAK, prove the soundness and completeness of IEAK w.r.t. both algebraic and relational models, and illustrate how IEAK encodes the reasoning of agents in a concrete epistemic scenario.
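The product update construction mentioned above is the standard BMS one: the updated model's worlds are the pairs (w, a) of a world and an action such that w satisfies the precondition of a, with relations and valuation inherited componentwise. A minimal sketch (the dictionary encoding of models and the `sat` callback are assumptions of this illustration, not notation from the paper):

```python
from itertools import product

def product_update(model, action_model, sat):
    """BMS product update: worlds of the updated model are pairs (w, a)
    where world w satisfies the precondition of action a."""
    worlds = [(w, a)
              for w, a in product(model["worlds"], action_model["actions"])
              if sat(model, w, action_model["pre"][a])]
    # Agent i relates (w, a) to (v, b) iff w R_i v and a R_i b.
    relations = {
        agent: {((w, a), (v, b))
                for (w, a) in worlds for (v, b) in worlds
                if (w, v) in model["rel"][agent]
                and (a, b) in action_model["rel"][agent]}
        for agent in model["rel"]
    }
    # The valuation is inherited from the static (first) component.
    val = {p: {(w, a) for (w, a) in worlds if w in model["val"][p]}
           for p in model["val"]}
    return {"worlds": worlds, "rel": relations, "val": val}

# Toy example: public announcement of p in a two-world model.
M = {"worlds": ["w0", "w1"],
     "rel": {"i": {("w0", "w0"), ("w0", "w1"), ("w1", "w0"), ("w1", "w1")}},
     "val": {"p": {"w0"}}}
A = {"actions": ["a"], "pre": {"a": "p"}, "rel": {"i": {("a", "a")}}}
sat = lambda M, w, pre: pre == "true" or w in M["val"].get(pre, set())
updated = product_update(M, A, sat)
# Only ("w0", "a") survives, since w1 falsifies the precondition p.
```

The dual characterization in the paper works on the complex algebras of such models rather than on the models themselves; the sketch only fixes the relational construction being dualized.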

Abstract:
It is well known that if G is an \'etale topological groupoid, then its topology can be recovered as the sup-lattice generated by G-sets, i.e. by the images of local bisections. This topology carries a natural structure of a unital involutive quantale. We present the analogous construction for an arbitrary non-\'etale groupoid with sober unit space G_0: we associate a canonical unital involutive quantale with any inverse semigroup of G-sets which is also a sheaf over G_0. We introduce axiomatically the class of quantales so obtained, and we invert the construction just mentioned by proving a representability theorem for this class of quantales, under a natural spatiality condition.

Abstract:
We establish a formal connection between algorithmic correspondence theory and certain dual characterization results for finite lattices, similar to Nation's characterization of a hierarchy of pseudovarieties of finite lattices, progressively generalizing finite distributive lattices. This formal connection is mediated through monotone modal logic. Indeed, we adapt the correspondence algorithm ALBA to the setting of monotone modal logic, and we use a certain duality-induced encoding of finite lattices as monotone neighbourhood frames to translate lattice terms into formulas in monotone modal logic.

Abstract:
We generalize Venema's result on the canonicity of the additivity of positive terms from classical modal logic to a vast class of logics whose algebraic semantics is given by varieties of normal distributive lattice expansions (normal DLEs), also known as `distributive lattices with operators'. We provide two contrasting proofs of this result: the first follows the lines of Venema's pseudo-correspondence argument but uses the insights and tools of unified correspondence theory, in particular the algorithm ALBA; the second is closer to the style of J\'onsson. Using insights gleaned from the second proof, we define a suitable enhancement of the algorithm ALBA, which we use to prove the canonicity of certain syntactically defined classes of DLE-inequalities (called the meta-inductive inequalities), relative to the structures in which the formulas asserting the additivity of some given terms are valid.

Abstract:
Purpose: Identification of critical areas in the presurgical evaluation of patients with temporal lobe epilepsy is the most important step prior to resection. According to the “epileptic focus model”, localization of the seizure onset zone is the main task to be accomplished. Nevertheless, a significant minority of epileptic patients continue to experience seizures after surgery (even when the focus is correctly located), an observation that is difficult to explain under this approach. However, if attention is shifted from a specific cortical location to the network properties themselves, then the epileptic network model does allow us to explain unsuccessful surgical outcomes. Methods: The intraoperative electrocorticography records of 20 patients with temporal lobe epilepsy were analyzed in search of interictal synchronization clusters. Synchronization was analyzed, and the stability of highly synchronized areas was quantified. Surrogate data were constructed and used to statistically validate the results. Results: Our results show the existence of highly localized and stable synchronization areas in both the lateral and the mesial areas of the temporal lobe ipsilateral to the clinical seizures. Synchronization areas seem to play a central role in the capacity of the epileptic network to generate clinical seizures. Resection of stable synchronization areas is associated with elimination of seizures; nonresection of synchronization clusters is associated with the persistence of seizures after surgery. Discussion: We suggest that synchronization clusters and their stability play a central role in the epileptic network, favoring seizure onset and propagation. We further speculate that the stability distribution of these synchronization areas could differentiate normal from pathologic cases.

Abstract:
This review seeks to describe the use and effects of the drug modafinil. Specifically, it presents research on the impact of modafinil for people with a diagnosis and experience of schizophrenia. Recent reviews have shown that modafinil can positively impact cognitive function in people with a diagnosis of schizophrenia. There is emerging evidence for the positive impact of modafinil on negative symptoms, functioning, quality of life, wellbeing, and body mass index (BMI) for people with schizophrenia. Compared to other central nervous system (CNS) stimulant drugs, modafinil has a low risk of dependency and few negative side effects, but there is a risk of triggering positive symptoms in schizophrenia. A well-designed and sufficiently large randomised controlled trial is required to test the potential impact of modafinil on the lives of people with a diagnosis of schizophrenia. Future research should report participants' perspectives on the value of modafinil, connected to what concerns them and what they want to achieve in their lives.

Abstract:
Growth hormone (GH) quantification in serum is essential for confirming or ruling out its excess. The absence of clinical criteria sufficiently sensitive to evaluate treatment success makes GH measurement the key diagnostic procedure; for that reason, its measurement must be performed reliably and must allow uniform interpretation. Several different biochemical criteria for remission have been suggested in the past, including a random GH measurement less than 2.5 μg/L, a mean GH value from a day curve less than 2.5 μg/L, a nadir GH value after an oral glucose tolerance test (OGTT) less than 1.0 μg/L, and a normal age-related IGF-I level. The importance of adequate treatment is highlighted by data indicating that lowering GH levels to less than 2.5 μg/L reverses the premature mortality of acromegaly. With the advent of ultrasensitive assays for GH measurement, stricter criteria to determine remission or cure became necessary. In this review, we describe the changes in assay methodology and their consequences for serum GH results and for the cut-off values used to define activity and remission of acromegaly.
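The remission criteria listed above amount to a set of simple threshold checks. As a purely illustrative sketch (the function name and interface are hypothetical; this is not a clinical tool, and the thresholds are only those quoted in the abstract):

```python
def acromegaly_remission_criteria(random_gh=None, mean_day_curve_gh=None,
                                  ogtt_nadir_gh=None, igf_i_normal=None):
    """Toy checker for the biochemical remission criteria quoted above.
    GH values are in micrograms per litre; returns the criteria that are met.
    Parameters left as None are treated as not measured."""
    met = []
    if random_gh is not None and random_gh < 2.5:
        met.append("random GH < 2.5 ug/L")
    if mean_day_curve_gh is not None and mean_day_curve_gh < 2.5:
        met.append("mean day-curve GH < 2.5 ug/L")
    if ogtt_nadir_gh is not None and ogtt_nadir_gh < 1.0:
        met.append("OGTT nadir GH < 1.0 ug/L")
    if igf_i_normal:
        met.append("age-related IGF-I within normal range")
    return met

# Example: three of the four criteria measured and met.
result = acromegaly_remission_criteria(random_gh=1.8, ogtt_nadir_gh=0.4,
                                       igf_i_normal=True)
```

Note that, as the review discusses, the cut-off values themselves depend on the assay generation, so any such thresholds must be read against the methodology in use.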

Abstract:
We derive a general master equation relating the gravitational-wave observables r and Omega_gw(f). Here r is the tensor-to-scalar ratio, constrained by cosmic-microwave-background (CMB) experiments; and Omega_gw(f) is the energy spectrum of primordial gravitational waves, constrained e.g. by pulsar-timing measurements, laser-interferometer experiments, and Big Bang Nucleosynthesis (BBN). Differentiating the master equation yields a new expression for the tilt d(ln Omega_gw(f))/d(ln f). The relationship between r and Omega_gw(f) depends sensitively on the uncertain physics of the early universe, and we show that this uncertainty may be encapsulated (in a model-independent way) by two quantities: w_hat(f) and nt_hat(f), where nt_hat(f) is a certain logarithmic average over nt(k) (the primordial tensor spectral index); and w_hat(f) is a certain logarithmic average over w_tilde(a) (the effective equation-of-state in the early universe, after horizon re-entry). Here the effective equation-of-state parameter w_tilde(a) is a combination of the ordinary equation-of-state parameter w(a) and the bulk viscosity zeta(a). Thus, by comparing constraints on r and Omega_gw(f), one can obtain (remarkably tight) constraints in the [w_hat(f), nt_hat(f)] plane. In particular, this is the best way to constrain (or detect) the presence of a "stiff" energy component (with w > 1/3) in the early universe, prior to BBN. Finally, although most of our analysis does not assume inflation, we point out that if CMB experiments detect a non-zero value for r, then we will immediately obtain (as a free by-product) a new upper bound w_hat < 0.55 on the logarithmically averaged effective equation-of-state parameter during the "primordial dark age" between the end of inflation and the start of BBN.
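The tilt defined above, d(ln Omega_gw(f))/d(ln f), is just the logarithmic derivative of the spectrum. As a purely illustrative numerical check (the frequency grid, amplitude, and index below are all hypothetical, not values from the paper), a power-law spectrum recovers its index:

```python
import numpy as np

def tilt(f, omega):
    """Numerical logarithmic derivative d ln(omega) / d ln(f)."""
    return np.gradient(np.log(omega), np.log(f))

# Illustrative power-law spectrum Omega_gw(f) = Omega_0 * (f / f_0)**n,
# for which the tilt should be the constant n at every frequency.
f = np.logspace(-18, 2, 500)   # hypothetical frequency grid [Hz]
n = 0.12                       # hypothetical constant tilt
omega = 1e-15 * (f / 1e-16) ** n
recovered = tilt(f, omega)     # numerically equal to n everywhere
```

In the paper the tilt is of course obtained analytically from the master equation, where it varies with f through nt_hat(f) and w_hat(f); the sketch only fixes what the quantity itself means.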

Abstract:
The development of acoustic methods for measuring depths and ranges in the ocean environment began in the second decade of the twentieth century. The two world wars and the “Cold War” produced three eras of rapid technological development in the field of acoustic oceanography. By the mid-1920s, researchers had identified echoes from fish, Gadus morhua, in the traces from their echo sounders. The first tank experiments establishing the basics for detection of fish were performed in 1928. Through the 1930s, the use of SONAR as a means of locating schools of fish was developed. The end of World War II was quickly followed by the advent of using SONAR to track and hunt whales in the Southern Ocean and by the marketing of commercial fish-finding SONARs for use by commercial fishermen. The “deep scattering layer” composed of invertebrates and fish was discovered in the late 1940s on echo sounder records. SONARs employing high frequencies, broadband, split beams, and multiple frequencies were developed as methods for the detection, quantification, and identification of fish and invertebrates. The study of fish behavior has seen some use of passive acoustic techniques. Advancements in computer technology have been important throughout the last four decades of the twentieth century.

1. Introduction

During the twentieth century, the use of acoustics to study life in the oceans developed into a significant tool for research in marine biology. The purpose of this paper is to briefly recount the process by which acoustics became a biological research tool. The general pattern was the development of acoustic technology for nonbiological uses, navigation and military operations to name two, and then the application of that technology to the detection and study of marine life. By the end of the twentieth century, acoustic technology had become a significant factor in marine biological research. Marine biologists were developing acoustic equipment for the specific purpose of studying life in the oceans. The development of modern acoustic technologies for use in the ocean environment began during the second decade of the twentieth century. The First World War provided a significant stimulus for the advancement of ocean acoustics research. Following the war, active acoustic ranging devices in the form of echo sounders began to be employed in measuring ocean depths. Soon, acousticians began to recognize the ability to detect marine organisms, principally fish, using these devices. The use of sound to detect fish as a tool in the fishing
