Abstract:
Aim: To evaluate the influence of identifying air disaster victims in São Paulo on forensic experts' quality of life (QoL). Methods: QoL was evaluated using the abbreviated version of the World Health Organization (WHO) quality of life questionnaire (WHOQOL-bref). We assessed 29 forensic experts who had worked on air disasters in São Paulo and 29 experts who had not. The results were analyzed with Student's t-tests: we compared each expert's QoL scores at the time of the accident with their current scores, and compared the control group's scores with the disaster group's current scores. Results: Statistical analyses revealed a significant decrease in the forensic experts' QoL while they worked at the accident site, and this decrease was evident in all WHOQOL-bref domains. No significant difference was observed between the experts' current QoL scores and those of the control group. Conclusions: The identification of air disaster victims in the city of São Paulo significantly decreased the experts' health-related QoL (HRQoL) in the physical, psychological, social relationships, and environment domains. This disturbance in QoL did not persist over the years.

Abstract:
Objective: To describe a child with round pneumonia. Case description: A 4-year and 11-month-old child presented with body aches, headache, and fever for about two days. Chest radiography showed a round opacity in the upper lobe of the right lung and in the middle lobe of the left lung. The blood count showed leukocytosis with neutrophilia. Antibiotic therapy improved both the clinical status and the radiological picture. Comments: Considering the differential diagnoses, and given the radiological presentation in a child with the clinical picture of an infectious disease, it is reasonable to trial antibiotic therapy before performing more invasive diagnostic procedures.

Abstract:
Cycles of ice pack fragmentation in the Arctic Ocean are caused by irregular drift dynamics. In February 2004, the Russian ice-research camp North Pole 32, established on a floe in the Arctic Ocean, ceased operations and was abandoned after a catastrophic icequake. In this communication, the data collected during the last month of field observations were used to calculate the changes in the kinetic energy of the ice floe. The energy distribution functions corresponding to periods of different drift intensity were analyzed using Tsallis statistics, which allow one to assess the degree of deviation of an open dynamic system, such as drifting ice, from its equilibrium state. The results show that the critical fragmentation occurred during a period of substantially nonequilibrium dynamics of the system of ice floes. Determining the state of the pack (in the sense of its equilibrium or nonequilibrium) could provide useful information on forthcoming icequakes.

1. Introduction

From the viewpoint of conventional mechanics, the Arctic sea ice cover (ASIC) is a consolidated, mobile, deformable system. Predominantly shearing deformations result in a regular pattern of pack fragmentation in accordance with Mohr's mechanism of semibrittle failure of solids. At the same time, the size distribution of sea ice floes does not exhibit random (Poisson-like) statistics but follows a power law typical of scaling (fractal) structures [1, 2]. Conventional mechanics cannot explain this phenomenon. Scaling is a manifestation of long-range correlations between separated events in a statistical system. The correlation radius is determined by the spatial decay of an event's effect: the decay is fast (exponential) in equilibrium systems but slow (power law) in nonequilibrium ones. Therefore, fractal structures form only under nonequilibrium conditions, in statistical systems driven by external forcing. In recent years, a new statistical concept called non-extensive statistical mechanics (NESM) was developed by Tsallis [3, 4] for the thermodynamic description of multiscale systems exhibiting fluctuations around their equilibrium state. NESM has been successfully applied to assess the degree of deviation of natural dynamic systems from their equilibrium state prior to large-scale hazards such as earthquakes [5, 6] and floods [7]. In the ASIC, cycles of pack fragmentation and significant icequakes occur due to irregularity in the ice drift, which causes critical
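The Tsallis analysis mentioned above amounts to fitting a q-exponential to an observed energy distribution; the entropic index q measures the departure from equilibrium (q → 1 recovers the Boltzmann exponential, q > 1 indicates a nonequilibrium, long-range-correlated state). A minimal sketch with synthetic heavy-tailed data, not the paper's drift dataset:

```python
# Hedged sketch (not the authors' code): fitting a Tsallis q-exponential
# p(E) ~ A * [1 - (1-q)*beta*E]^(1/(1-q)) to a kinetic-energy histogram.
import numpy as np
from scipy.optimize import curve_fit

def q_exp(E, A, beta, q):
    """Tsallis q-exponential; reduces to A*exp(-beta*E) as q -> 1."""
    base = 1.0 - (1.0 - q) * beta * E
    return A * np.power(np.clip(base, 1e-12, None), 1.0 / (1.0 - q))

rng = np.random.default_rng(1)
# Synthetic heavy-tailed "kinetic energy" sample (stand-in for floe drift data).
energies = rng.pareto(3.0, 20_000)

counts, edges = np.histogram(energies, bins=60, range=(0, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Constrain q > 1 so the power 1/(1-q) stays well defined during the fit.
popt, _ = curve_fit(q_exp, centers, counts, p0=(1.0, 1.0, 1.3),
                    bounds=([0.0, 1e-3, 1.001], [np.inf, np.inf, 3.0]))
A, beta, q = popt
print(f"fitted entropic index q ~ {q:.2f}")
```

For a Pareto-tailed sample like this one, the fitted q should land noticeably above 1, flagging the distribution as nonequilibrium in the NESM sense.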

Abstract:
In this article we study post-model selection estimators that apply ordinary least squares (OLS) to the model selected by first-step penalized estimators, typically Lasso. It is well known that Lasso can estimate the nonparametric regression function at nearly the oracle rate, and is thus hard to improve upon. We show that the OLS post-Lasso estimator performs at least as well as Lasso in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the Lasso-based model selection "fails" in the sense of missing some components of the "true" regression model. By the "true" model, we mean the best s-dimensional approximation to the nonparametric regression function chosen by the oracle. Furthermore, the OLS post-Lasso estimator can perform strictly better than Lasso, in the sense of a strictly faster rate of convergence, if the Lasso-based model selection correctly includes all components of the "true" model as a subset and also achieves sufficient sparsity. In the extreme case, when Lasso perfectly selects the "true" model, the OLS post-Lasso estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by Lasso, which guarantees that this dimension is at most of the same order as the dimension of the "true" model. Our rate results are nonasymptotic and hold in both parametric and nonparametric models. Moreover, our analysis is not limited to the Lasso estimator acting as a selector in the first step, but also applies to any other estimator, for example, various forms of thresholded Lasso, with good rates and good sparsity properties. Our analysis covers both traditional thresholding and a new practical, data-driven thresholding scheme that induces additional sparsity subject to maintaining a certain goodness of fit.
The latter scheme has theoretical guarantees similar to those of Lasso or OLS post-Lasso, but it dominates those procedures as well as traditional thresholding in a wide variety of experiments.
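The two-step procedure described above is simple to sketch: run Lasso as a model selector, then refit OLS on the selected support, which removes Lasso's shrinkage bias. A minimal illustration on synthetic sparse data using scikit-learn (the penalty level and design are illustrative choices, not the paper's tuning):

```python
# Hedged sketch of the OLS post-Lasso idea: Lasso selects, OLS refits.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                     # n samples, p regressors, s-sparse truth
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 2.0
y = X @ beta_true + rng.standard_normal(n)

# Step 1: Lasso selects a (hopefully sparse) support.
lasso = Lasso(alpha=0.2).fit(X, y)
support = np.flatnonzero(lasso.coef_)

# Step 2: OLS refit using only the selected regressors.
ols = LinearRegression().fit(X[:, support], y)
beta_post = np.zeros(p)
beta_post[support] = ols.coef_

err_lasso = np.linalg.norm(lasso.coef_ - beta_true)
err_post = np.linalg.norm(beta_post - beta_true)
print(f"Lasso error: {err_lasso:.3f}, post-Lasso OLS error: {err_post:.3f}")
```

On designs like this, the post-Lasso estimate typically has smaller estimation error than Lasso itself because the refit undoes the uniform shrinkage toward zero, exactly the smaller-bias advantage discussed above.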

Abstract:
In this paper we examine the implications of statistical large sample theory for the computational complexity of Bayesian and quasi-Bayesian estimation carried out using Metropolis random walks. Our analysis is motivated by the Laplace-Bernstein-Von Mises central limit theorem, which states that in large samples the posterior or quasi-posterior approaches a normal density. Using the conditions required for the central limit theorem to hold, we establish polynomial bounds on the computational complexity of general Metropolis random walk methods in large samples. Our analysis covers cases where the underlying log-likelihood or extremum criterion function is possibly non-concave, discontinuous, and of increasing parameter dimension. However, the central limit theorem restricts the deviations from continuity and log-concavity of the log-likelihood or extremum criterion function in a very specific manner. Under the minimal assumptions required for the central limit theorem to hold under increasing parameter dimension, we show that the Metropolis algorithm is theoretically efficient even for the canonical Gaussian walk, which we study in detail. Specifically, we show that the running time of the algorithm in large samples is bounded in probability by a polynomial in the parameter dimension $d$ and, in particular, is of stochastic order $d^2$ in the leading cases after the burn-in period. We then give applications to exponential families, curved exponential families, and Z-estimation of increasing dimension.
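The canonical Gaussian-walk Metropolis algorithm analyzed above can be sketched in a few lines. Here the target is a standard normal log-density in $d = 10$ dimensions and the step size $2.4/\sqrt{d}$ is a common rule of thumb; both are illustrative choices, not the paper's setup:

```python
# Hedged sketch: random-walk Metropolis with Gaussian N(0, step^2 I) proposals.
import numpy as np

def metropolis_gaussian_walk(log_target, d, n_steps, step, rng):
    """Run a Gaussian random-walk Metropolis chain; return draws and accept rate."""
    x = np.zeros(d)
    lp = log_target(x)
    chain = np.empty((n_steps, d))
    accepted = 0
    for t in range(n_steps):
        prop = x + step * rng.standard_normal(d)
        lp_prop = log_target(prop)
        # Metropolis accept/reject: accept with probability min(1, ratio).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        chain[t] = x
    return chain, accepted / n_steps

rng = np.random.default_rng(0)
d = 10
log_target = lambda x: -0.5 * x @ x   # standard normal log-density (up to a constant)
chain, acc_rate = metropolis_gaussian_walk(log_target, d, 20_000,
                                           2.4 / np.sqrt(d), rng)
print(f"acceptance rate ~ {acc_rate:.2f}")
```

The $d^{-1/2}$ step-size scaling is what keeps the acceptance rate bounded away from 0 and 1 as the dimension grows, which is the regime in which polynomial running-time bounds of the kind described above are meaningful.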

Abstract:
We consider median regression and, more generally, a possibly infinite collection of quantile regressions in high-dimensional sparse models. In these models the overall number of regressors $p$ is very large, possibly larger than the sample size $n$, but only $s$ of these regressors have non-zero impact on the conditional quantile of the response variable, where $s$ grows more slowly than $n$. We consider quantile regression penalized by the $\ell_1$-norm of coefficients ($\ell_1$-QR). First, we show that $\ell_1$-QR is consistent at the rate $\sqrt{s/n} \sqrt{\log p}$. The overall number of regressors $p$ affects the rate only through the $\log p$ factor, thus allowing nearly exponential growth in the number of zero-impact regressors. The rate result holds under relatively weak conditions, requiring that $s/n$ converges to zero at super-logarithmic speed and that the regularization parameter satisfies certain theoretical constraints. Second, we propose a pivotal, data-driven choice of the regularization parameter and show that it satisfies these theoretical constraints. Third, we show that $\ell_1$-QR correctly selects the true minimal model as a valid submodel when the non-zero coefficients of the true model are well separated from zero. We also show that the number of non-zero coefficients in $\ell_1$-QR is of the same stochastic order as $s$. Fourth, we analyze the rate of convergence of a two-step estimator that applies ordinary quantile regression to the selected model. Fifth, we evaluate the performance of $\ell_1$-QR in a Monte Carlo experiment and illustrate its use in an international economic growth application.

Abstract:
This work studies the large sample properties of posterior-based inference in the curved exponential family under increasing dimension. The curved structure arises from the imposition of various restrictions on the model, such as moment restrictions, and plays a fundamental role in econometrics and other branches of data analysis. We establish conditions under which the posterior distribution is approximately normal, which in turn implies various good properties of estimation and inference procedures based on the posterior. In the process we also revisit and improve upon previous results for the exponential family under increasing dimension by making use of concentration of measure. We also discuss a variety of applications to high-dimensional versions of classical econometric models, including the multinomial model with moment restrictions, seemingly unrelated regression equations, and single structural equation models. In our analysis, both the parameter dimension and the number of moments increase with the sample size.

Abstract:
In this chapter we discuss conceptually high dimensional sparse econometric models as well as estimation of these models using L1-penalization and post-L1-penalization methods. Focusing on linear and nonparametric regression frameworks, we discuss various econometric examples, present basic theoretical results, and illustrate the concepts and methods with Monte Carlo simulations and an empirical application. In the application, we examine and confirm the empirical validity of the Solow-Swan model for international economic growth.

Abstract:
We show that in {\em ballistic} mesoscopic SNS junctions the period of the critical current vs. magnetic flux dependence (magnetic interference pattern), $I_c(\Phi)$, changes {\em continuously and non-monotonically} from $\Phi_0$ to $2\Phi_0$ as the length-to-width ratio of the junction grows or the temperature drops. In {\em diffusive} mesoscopic junctions the change is even more drastic, with the first zero of $I_c(\Phi)$ appearing at $3\Phi_0$. The effect is a manifestation of the nonlocal relation between the supercurrent density and the superfluid velocity in the normal part of the system, with the characteristic scale set by the normal-metal coherence length $\xi_T = \hbar v_F/2\pi k_BT$ (ballistic limit) or $\tilde{\xi}_T = \sqrt{\hbar D/2\pi k_BT}$ (diffusive limit), and arises from the restriction of the quasiparticle phase space near the lateral boundaries of the junction. It explains the $2\Phi_0$-periodicity recently observed by Heida et al. (Phys. Rev. B {\bf 57}, R5618 (1998)). We obtain explicit analytical expressions for the magnetic interference pattern for a junction with an arbitrary length-to-width ratio, and propose experiments to directly observe the $\Phi_0 \to 2\Phi_0$ and $\Phi_0 \to 3\Phi_0$ transitions.
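For reference, the conventional local-limit (Fraunhofer) pattern $I_c(\Phi) = I_c(0)\,|\sin(\pi\Phi/\Phi_0)/(\pi\Phi/\Phi_0)|$ has its first zero at $\Phi_0$; the nonlocal ballistic and diffusive patterns derived in the paper shift this zero to $2\Phi_0$ and $3\Phi_0$ respectively. A short numerical sketch of the local-limit baseline only (the nonlocal expressions are not reproduced here):

```python
# Hedged sketch: the conventional local-limit Fraunhofer interference
# pattern, whose first zero sits at Phi = Phi_0.
import numpy as np

def fraunhofer(phi_over_phi0):
    """|sin(pi x)/(pi x)| in units of I_c(0); np.sinc handles the x -> 0 limit."""
    x = np.asarray(phi_over_phi0, dtype=float)
    return np.abs(np.sinc(x))   # np.sinc(x) = sin(pi x)/(pi x)

phi = np.linspace(0, 3, 301)    # flux in units of Phi_0
ic = fraunhofer(phi)
print(f"I_c at Phi = Phi_0: {ic[100]:.3e} (first zero of the local pattern)")
```

Plotting `ic` against `phi` and comparing the position of the first zero with measured data is the basic diagnostic that distinguishes the local pattern from the nonlocal $2\Phi_0$/$3\Phi_0$ behavior described above.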

Abstract:
This study aimed to evaluate maternal and fetal factors related to vertical transmission of HIV-1. Participants included 47 mother-child pairs. Behavioral, demographic, and obstetric data were obtained through interviews. Data on delivery and the newborns were collected from maternity hospital registries. During the third trimester of pregnancy, CD4+ T lymphocyte counts and maternal viral load were measured. The mothers' mean age was 25 years, and 23.4% of the pregnant women were primigravidae. The most prevalent behavioral factor was lack of condom use. A CD4+ count greater than 500 cells/mm3 was found in 48.9% of the women, and 93.6% belonged to clinical category A. Zidovudine prophylaxis was received by 95.7% of the women during pregnancy or childbirth, and the medication was also administered to all the neonates. Elective cesarean section was performed in 50.0% of the patients. Despite the presence of several risk and protective factors, none of the children was infected. Vertical transmission is the outcome of an imbalance among factors, with a predominance of risk over protective factors.