Abstract:
Patient preferences for inhaler devices in chronic obstructive pulmonary disease: experience with Respimat Soft Mist Inhaler (Review)

Richard Hodder,1 David Price2
1Divisions of Pulmonary and Critical Care, University of Ottawa, Ottawa, Ontario, Canada; 2Department of General Practice and Primary Care, University of Aberdeen, Aberdeen, Scotland
Published Date: October 2009. Volume 2009:4, Pages 381-390. DOI: http://dx.doi.org/10.2147/COPD.S3391

Abstract: Current guidelines for the management of chronic obstructive pulmonary disease (COPD) recommend the regular use of inhaled bronchodilator therapy to relieve symptoms and prevent exacerbations. A variety of inhaler devices are currently available to COPD patients, and the choice of device is an important consideration because it can influence patients’ adherence to treatment, and thus potentially affect long-term outcomes. The Respimat Soft Mist Inhaler (SMI) generates a slow-moving aerosol with a high fine particle fraction, resulting in deposition of a higher proportion of the dose in the lungs than pressurized metered-dose inhalers (pMDIs) or some dry powder inhalers (DPIs). We review clinical studies of inhaler satisfaction and preference comparing Respimat SMI against other inhalers in COPD patients. Using objective and validated patient satisfaction instruments, Respimat SMI was consistently shown to be well accepted by COPD patients, largely due to its inhalation and handling characteristics. In comparative studies with pMDIs, the patient total satisfaction score with Respimat SMI was statistically and clinically significantly higher than with the pMDI. In comparative studies with DPIs, the total satisfaction score was statistically significantly higher than for the Turbuhaler DPI, but only the performance domain of satisfaction was clinically significantly higher for Respimat SMI.
It remains to be proven whether the higher levels of patient satisfaction reported with Respimat SMI will translate into improved adherence to therapy, and thus into benefits consistent with those recently shown to be associated with sustained bronchodilator treatment in patients with COPD.

Keywords: Respimat Soft Mist Inhaler, pressurized metered-dose inhalers, inhaler devices

Abstract:
The problem central to sparse recovery and compressive sensing is that of stable sparse recovery: we want a distribution of matrices A in R^{m\times n} such that, for any x \in R^n and with probability at least 2/3 over A, there is an algorithm to recover x* from Ax with ||x* - x||_p <= C min_{k-sparse x'} ||x - x'||_p for some constant C > 1 and norm p. The measurement complexity of this problem is well understood for constant C > 1. However, in a variety of applications it is important to obtain C = 1 + eps for a small eps > 0, and this complexity is not well understood. We resolve the dependence on eps in the number of measurements required of a k-sparse recovery algorithm, up to polylogarithmic factors, for the central cases of p = 1 and p = 2. Namely, we give new algorithms and lower bounds that show the number of measurements required is (1/eps^{p/2})k polylog(n). For p = 2, our bound of (1/eps) k log(n/k) is tight up to constant factors. We also give matching bounds when the output is required to be k-sparse, in which case we achieve (1/eps^p) k polylog(n). This shows that the distinction between the complexity of sparse and non-sparse outputs is fundamental.
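The benchmark in the guarantee above can be made concrete with a short sketch (illustrative only, not code from the paper): min_{k-sparse x'} ||x - x'||_p is attained by keeping the k largest-magnitude entries of x, so it is simply the l_p norm of the "tail" of x, i.e. of its n - k smallest-magnitude entries.

```python
# Illustrative sketch (not from the paper): the stable sparse recovery
# guarantee compares the output's error against the best k-sparse
# approximation of x, whose l_p error is the p-norm of the "tail"
# formed by the n - k smallest-magnitude entries of x.

def best_k_sparse_error(x, k, p):
    """l_p norm of x minus its best k-sparse approximation (the tail)."""
    tail = sorted(abs(v) for v in x)[:len(x) - k]  # drop the k largest magnitudes
    return sum(v ** p for v in tail) ** (1.0 / p)

def satisfies_guarantee(x_star, x, k, p, C):
    """Check ||x* - x||_p <= C * min_{k-sparse x'} ||x - x'||_p."""
    err = sum(abs(a - b) ** p for a, b in zip(x_star, x)) ** (1.0 / p)
    return err <= C * best_k_sparse_error(x, k, p)

x = [5.0, -3.0, 0.1, 0.2, -0.1, 0.05]          # nearly 2-sparse signal
x_keep_top2 = [5.0, -3.0, 0.0, 0.0, 0.0, 0.0]  # ideal 2-sparse output
print(round(best_k_sparse_error(x, 2, 1), 2))        # 0.45 (l_1 tail mass)
print(satisfies_guarantee(x_keep_top2, x, 2, 1, 1.5))  # True
```

Note that an output is allowed to be non-sparse; the abstract's final claim is precisely that demanding a k-sparse x* changes the achievable dependence on eps.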

Abstract:
We give lower bounds for the problem of stable sparse recovery from /adaptive/ linear measurements. In this problem, one would like to estimate a vector $x \in \R^n$ from $m$ linear measurements $A_1x, \ldots, A_mx$. One may choose each vector $A_i$ based on $A_1x, \ldots, A_{i-1}x$, and must output $x^*$ satisfying $\|x^* - x\|_p \leq (1 + \epsilon) \min_{k\text{-sparse } x'} \|x - x'\|_p$ with probability at least $1-\delta>2/3$, for some $p \in \{1,2\}$. For $p=2$, it was recently shown that this is possible with $m = O(\frac{1}{\epsilon}k \log \log (n/k))$, while nonadaptively it requires $\Theta(\frac{1}{\epsilon}k \log (n/k))$. It is also known that, even adaptively, $m = \Omega(k/\epsilon)$ is required for $p = 2$. For $p = 1$, there is a non-adaptive upper bound of $\tilde{O}(\frac{1}{\sqrt{\epsilon}} k\log n)$. We show:
* For $p=2$, $m = \Omega(\log \log n)$. This is tight for $k = O(1)$ and constant $\epsilon$, and shows that the $\log \log n$ dependence is correct.
* If the measurement vectors are chosen in $R$ "rounds", then $m = \Omega(R \log^{1/R} n)$. For constant $\epsilon$, this matches the previously known upper bound up to an $O(1)$ factor in $R$.
* For $p=1$, $m = \Omega(k/(\sqrt{\epsilon} \cdot \log (k/\epsilon)))$. This shows that adaptivity cannot improve by more than logarithmic factors, providing the analog of the $m = \Omega(k/\epsilon)$ bound for $p = 2$.
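To see why adaptivity can help at all, consider a toy illustration (not the scheme from the paper, and for an exactly 1-sparse $x$ rather than the approximately sparse setting studied there): because each measurement vector $A_i$ may depend on the outcomes $A_1x, \ldots, A_{i-1}x$, a bisection on the support locates the single nonzero entry with about $\log_2 n$ inner products, whereas a fixed basis of indicator vectors would use $n$.

```python
# Toy sketch (not the paper's algorithm): adaptively recover an exactly
# 1-sparse vector x by bisecting on its support. Each measurement is an
# inner product <a, x> whose vector a is chosen based on earlier outcomes,
# using about log2(n) + 1 measurements instead of n naive ones.

def measure(x, a):
    """One linear measurement <a, x>."""
    return sum(ai * xi for ai, xi in zip(a, x))

def recover_1_sparse(x):
    n = len(x)
    lo, hi = 0, n                 # candidate support interval [lo, hi)
    measurements = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # indicator vector of the left half -- chosen adaptively
        a = [1.0 if lo <= i < mid else 0.0 for i in range(n)]
        measurements += 1
        if measure(x, a) != 0.0:  # the nonzero entry lies in the left half
            hi = mid
        else:
            lo = mid
    value = measure(x, [1.0 if i == lo else 0.0 for i in range(n)])
    measurements += 1
    x_star = [0.0] * n
    x_star[lo] = value
    return x_star, measurements

x = [0.0] * 16
x[11] = 7.0
x_star, m = recover_1_sparse(x)
print(x_star == x, m)  # True 5  (4 bisection steps + 1 value read)
```

The lower bounds above show that for noisy, approximately sparse signals this kind of saving is far more limited: adaptivity cannot beat $\Omega(\log \log n)$ for $p=2$, nor improve $p=1$ by more than logarithmic factors.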

Abstract:
HIV-1 usurps the RNA polymerase II elongation control machinery to regulate the expression of its genome during lytic and latent viral stages. After integration into the host genome, the HIV promoter within the long terminal repeat (LTR) is subject to potent downregulation in a postinitiation step of transcription. Once produced, the viral protein Tat commandeers the positive transcription elongation factor, P-TEFb, and brings it to the engaged RNA polymerase II (Pol II), leading to the production of viral proteins and genomic RNA. HIV can also enter a latent phase during which factors that regulate Pol II elongation may play a role in keeping the virus silent. HIV, the causative agent of AIDS, is a worldwide health concern. It is hoped that knowledge of the mechanisms regulating the expression of the HIV genome will lead to treatments and ultimately a cure.

1. Introduction

According to the 2010 UNAIDS AIDS Epidemic Update, over 33 million people live with human immunodeficiency virus (HIV) type 1, a number that is increasing due to a combination of improved treatment and continued transmission. Upon crossing the mucosa, HIV docks with CD4+ cells such as T-lymphocytes and macrophages, fuses with the host cell, and releases viral single-stranded RNA, reverse transcriptase, and integrase into the cytoplasm. Reverse transcriptase converts the HIV RNA into double-stranded DNA, at which point integrase chaperones the viral DNA into the nucleus for integration into the host genome. An initial round of host-induced gene expression by Pol II results in expression of Tat, the primary transactivator of HIV, which then recruits the positive transcription elongation factor P-TEFb containing Cdk9 and Cyclin T1 to the HIV LTR [1, 2]. This leads to increased viral gene expression and, eventually, replication of the HIV genome, assembly into new viral particles, and budding.
HIV is capable of establishing life-long latent infection by suppressing its transcription, thus evading current antiretroviral therapies [3]. How HIV subverts Pol II elongation control during both active and latent infections has received a significant amount of attention, and it is hoped that these inquiries will lead to the development of more effective treatments and an eventual cure. Regulation of transcription of many human genes is accomplished by a process termed RNA polymerase II elongation control, and, after integration, the HIV LTR falls under this control. In fact, the HIV LTR has been used as a model to study the regulation of transcription at the level of elongation. In general, most

Abstract:
Current methods for inferring population structure from genetic data do not provide formal significance tests for population differentiation. We discuss an approach to studying population structure (principal components analysis) that was first applied to genetic data by Cavalli-Sforza and colleagues. We place the method on a solid statistical footing, using results from modern statistics to develop formal significance tests. We also uncover a general “phase change” phenomenon in the detectability of population structure, which emerges from the statistical theory we use and has an important practical implication: for a fixed but large dataset size, divergence between two populations (as measured, for example, by a statistic like FST) below a threshold is essentially undetectable, but a little above the threshold, detection becomes easy. This means that we can predict the dataset size needed to detect structure.
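The last claim can be sketched numerically. Assuming (for illustration only; the paper gives the precise constants and conditions) that in the large-data limit the phase-change threshold behaves like FST ~ 1/sqrt(m * n) for m individuals genotyped at n markers, one can invert the relation to predict the dataset size needed for a given divergence:

```python
import math

# Hedged sketch of the style of prediction described in the abstract.
# Assumption (illustrative, not a quote of the paper's result): structure
# between two populations becomes detectable roughly when
#     F_ST > 1 / sqrt(individuals * markers).

def detectable(fst, individuals, markers):
    """Rough phase-change criterion for detecting population structure."""
    return fst > 1.0 / math.sqrt(individuals * markers)

def markers_needed(fst, individuals):
    """Smallest marker count that brings the dataset to the threshold."""
    return math.ceil(1.0 / (fst ** 2 * individuals))

print(detectable(0.01, 1000, 5000))    # well above threshold -> True
print(detectable(0.0001, 1000, 5000))  # below threshold -> False
print(markers_needed(0.001, 1000))     # 1000
```

The sharpness of the phase change is what makes such back-of-the-envelope planning useful: just below the threshold no amount of cleverness detects the structure, and just above it standard principal components analysis succeeds.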