BMC Bioinformatics, 2008, DOI: 10.1186/1471-2105-9-304 Abstract: We run a series of simulation studies to gauge how well we do in selection estimation, especially in comparison to the use of a fixed alignment. We show that the standard practice of using a ClustalW alignment can lead to considerable biases and that estimation accuracy increases substantially when explicitly integrating over the uncertainty in inferred alignments. We even manage to compete favourably for general evolutionary distances with an alignment produced by GenAl. We subsequently run our method on HIV2 and Hepatitis B sequences. We propose that marginalizing over all alignments, as opposed to using a fixed one, should be considered in any parametric inference from divergent sequence data for which the alignments are not known with certainty. Moreover, we discover in HIV2 that double coding regions appear to be under less stringent selection than single coding ones. Additionally, there appears to be evidence for differential selection, where one overlapping reading frame is under positive and the other under negative selection. In the past few years we have witnessed an explosion in the available viral genomic data. GenBank alone holds over 80,000 close-to-complete viral genomes, and the number is rising fast. For example, since the submission of the first SARS genome in May 2003, over 140 more have been published. With this genomic data at hand, we hope finally to be able to deepen our understanding of viruses. The mechanisms of selection, that is, the rate at which a mutation resulting in an amino acid change is accepted, and of viral evolution are still strongly debated, and a methodology tailored towards answering these questions is required.
A step towards this is our attempt to develop a method that can deal with the vast amount of viral data, as well as with the complexity of viral genomes, their high divergence, and the resulting unreliability of alignments. Several papers [1,4,5,7,13-16,18] have been dedicated to the study of selection on viruses.
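The core idea of marginalizing over alignments rather than conditioning on a single one can be sketched with a toy calculation (the candidate alignments, their posterior weights, and the estimated quantity below are invented for illustration; the actual method integrates over alignments within a full Bayesian model):

```python
# Toy sketch: a single fixed alignment yields one point estimate of a
# selection parameter, whereas marginalizing averages the estimate over
# candidate alignments weighted by their posterior probabilities.
# All numbers here are made up for illustration.
candidates = [
    {"estimate": 0.42, "posterior": 0.6},   # e.g. the ClustalW-like alignment
    {"estimate": 0.55, "posterior": 0.3},
    {"estimate": 0.70, "posterior": 0.1},
]

fixed = candidates[0]["estimate"]  # estimate conditioned on one alignment
marginal = sum(c["estimate"] * c["posterior"] for c in candidates)
print(fixed, marginal)
```

When the fixed alignment is wrong in divergent regions, the conditional estimate inherits its bias; the weighted average spreads that risk over all plausible alignments.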
Physics, 2010, DOI: 10.1088/0004-637X/723/1/737 Abstract: Variability is a property shared by practically all AGN. This makes variability selection a possible technique for identifying AGN. Given that variability selection makes no prior assumption about spectral properties, it is a powerful technique for detecting both low-luminosity AGN in which the host galaxy emission is dominating and AGN with unusual spectral properties. In this paper, we will discuss and test different statistical methods for the detection of variability in sparsely sampled data that allow full control over the false positive rates. We will apply these methods to the GOODS North and South fields and present a catalog of variable sources in the z band in both GOODS fields. Out of 11931 objects checked, we find 155 variable sources at a significance level of 99.9%, corresponding to about 1.3% of all objects. After rejection of stars and supernovae, 139 variability selected AGN remain. Their magnitudes reach down as faint as 25.5 mag in z. Spectroscopic redshifts are available for 22 of the variability selected AGN, ranging from 0.046 to 3.7. The absolute magnitudes in the rest-frame z-band range from ~ -18 to -24, reaching substantially fainter than the typical luminosities probed by traditional X-ray and spectroscopic AGN selection in these fields. Therefore, this is a powerful technique for future exploration of the evolution of the faint end of the AGN luminosity function up to high redshifts.
Statistics, 2014, Abstract: Set classification problems arise when classification tasks are based on sets of observations as opposed to individual observations. In set classification, a classification rule is trained with $N$ sets of observations, where each set is labeled with class information, and the prediction of a class label is performed also with a set of observations. Data sets for set classification appear, for example, in diagnostics of disease based on multiple cell nucleus images from a single tissue. Relevant statistical models for set classification are introduced, which motivate a set classification framework based on context-free feature extraction. By understanding a set of observations as an empirical distribution, we employ a data-driven method to choose those features which contain information on location and major variation. In particular, the method of principal component analysis is used to extract the features of major variation. Multidimensional scaling is used to represent features as vector-valued points on which conventional classifiers can be applied. The proposed set classification approaches achieve better classification results than competing methods in a number of simulated data examples. The benefits of our method are demonstrated in an analysis of histopathology images of cell nuclei related to liver cancer.
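The context-free feature-extraction idea described above, summarising each set by its location and major variation before applying a conventional classifier, can be sketched as follows (the two-class model, feature choice, and nearest-centroid classifier are illustrative assumptions, not the paper's exact pipeline, which also uses multidimensional scaling):

```python
import numpy as np

rng = np.random.default_rng(0)

def set_features(X):
    """Summarise a set of observations (n_obs x dim) by its location
    (mean vector) and major variation (leading PCA eigenvalue)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)          # PCA via the covariance matrix
    evals = np.linalg.eigvalsh(cov)        # ascending eigenvalues
    return np.concatenate([mu, [evals[-1]]])

def make_set(label, n_obs=50):
    # hypothetical two-class model: the classes differ in spread, not location
    scale = 1.0 if label == 0 else 2.0
    return scale * rng.standard_normal((n_obs, 2))

# train: one feature vector per labelled set, then a nearest-centroid rule
train = [(set_features(make_set(y)), y) for y in [0, 1] * 20]
centroids = {y: np.mean([f for f, yy in train if yy == y], axis=0)
             for y in (0, 1)}

def classify(X):
    f = set_features(X)
    return min(centroids, key=lambda y: np.linalg.norm(f - centroids[y]))

correct = sum(classify(make_set(y)) == y for y in [0, 1] * 25)
print(correct / 50)
```

Because the classes share a location but differ in variance, the mean feature alone would fail here; the leading-eigenvalue feature carries the discriminating information.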
Journal of Communications, 2010, DOI: 10.4304/jcm.5.6.467-474 Abstract: Cognitive radar is a new radar framework recently proposed by Simon Haykin. Adaptive waveform selection is an important problem for the intelligent transmitter in cognitive radar. In this paper, the problem of adaptive waveform selection is modeled as a stochastic dynamic programming problem, and backward dynamic programming, temporal difference learning, and Q-learning are used to solve it. An optimal waveform selection algorithm and approximate solutions are proposed. The simulation results demonstrate that the two approximate methods approach the optimal waveform selection scheme and yield lower state-estimation uncertainty than a fixed waveform. The performance of temporal difference learning is better than that of Q-learning, but Q-learning is more suitable for use in realistic radar scenes.
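A tabular Q-learning sketch of adaptive waveform selection might look like the following (the toy state space, reward model, and parameters are invented; the paper's actual model is a stochastic dynamic program over radar state estimates):

```python
import random

random.seed(0)

# Hypothetical toy model: 3 coarse target states, 2 candidate waveforms.
# Each waveform resolves some target states well; the reward stands in
# for the negative state-estimation uncertainty (higher = better).
N_STATES, N_WAVEFORMS = 3, 2

def reward(state, waveform):
    good = (waveform == 0 and state < 2) or (waveform == 1 and state == 2)
    return 1.0 if good else 0.2

Q = [[0.0] * N_WAVEFORMS for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration

state = 0
for _ in range(5000):
    # epsilon-greedy waveform choice
    if random.random() < eps:
        a = random.randrange(N_WAVEFORMS)
    else:
        a = max(range(N_WAVEFORMS), key=lambda w: Q[state][w])
    r = reward(state, a)
    nxt = random.randrange(N_STATES)    # target state evolves (here: randomly)
    # standard Q-learning update
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

best = [max(range(N_WAVEFORMS), key=lambda w: Q[s][w]) for s in range(N_STATES)]
print(best)
```

After training, the greedy policy picks waveform 0 in states 0 and 1 and waveform 1 in state 2, matching the reward model by construction.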
Anthony Almudevar EURASIP Journal on Bioinformatics and Systems Biology, 2010, DOI: 10.1155/2009/878013 Abstract: The reconstruction of gene regulatory networks using gene expression data has become an important computational tool in systems biology. A relationship among a set of genes can be established either by measuring the effect of the experimental perturbation of one or more selected genes on the remaining genes, or from the use of measures of coexpression from observational data. The data are then incorporated into a suitable mathematical model of gene regulation. Such models vary in level of detail, but most are based on a gene graph, in which nodes represent individual genes, while edges between nodes indicate a regulatory relationship. One important issue that arises is the variability of the data due to biological and technological sources. This leads to imperfect resolution of gene relationships and the need for principled statistical methodology with which to assign statistical significance to any inferred feature. In many models, the existence or absence of an edge in the gene graph is resolved by a statistical hypothesis test. A natural first step is the ranking of potential edges based on the strength of the statistical evidence for the existence of the implied regulatory relationship. The intuitive approach is to construct a graph consisting of the highest-ranking edges, defined by a p-value threshold. The choice of threshold may be ad hoc, typically a conservative significance level such as 0.01. A more rigorous approach is to select the threshold using principles of multiple hypothesis testing (see, e.g., [1]), which may yield an estimate of the error rates of edge classification. There is a fundamental drawback to this approach, in that the lack of statistical evidence of a regulatory relationship may be as much a consequence of small sample size as of biological fact.
Under this scenario, we note that selection of a p-value threshold generates a graph with a certain number of edges, and that number increases with the threshold. Under a null hypothesis of no regulatory structure, p-values are randomly ranked.
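The multiple-testing alternative to an ad hoc p-value cutoff mentioned above can be illustrated with the Benjamini-Hochberg procedure (the edge p-values here are made up, and [1] may describe a different procedure):

```python
# Sketch of threshold selection by multiple-testing control: choose the
# p-value cutoff with the Benjamini-Hochberg step-up procedure instead
# of a fixed significance level such as 0.01.
def bh_threshold(pvals, q=0.05):
    """Largest p-value cutoff controlling the false discovery rate at q."""
    m = len(pvals)
    cutoff = 0.0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= q * i / m:     # step-up condition p_(i) <= q * i / m
            cutoff = p
    return cutoff

# one (illustrative) p-value per candidate regulatory edge
pvals = [0.001, 0.004, 0.019, 0.03, 0.2, 0.5, 0.7, 0.9]
t = bh_threshold(pvals, q=0.05)
edges = [i for i, p in enumerate(pvals) if p <= t]
print(t, edges)
```

Here the procedure retains the two strongest edges; a fixed 0.01 cutoff would have retained the same two, but the BH cutoff adapts to the number of tests and the shape of the p-value distribution.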
GMS Medizinische Informatik, Biometrie und Epidemiologie, 2008, Abstract: Based on the major revision of the German regulation for the licence to practise medicine (ÄAppO), we adapted our teaching in medical biometry. The so-called “teaching project Biometry” is intended to convey the basics of biometry to students using computer-based methods. For this purpose, an e-learning system was established and a statistical software package was introduced. Statistical methods are taught using a real medical patient data set. The new project is intended, first, to increase students’ motivation for the subsidiary subject of medical biometry and, second, to improve the sustainability of their training for future medical research and dissertation writing. This field report mainly describes the selection process and the applicability of the statistical software; additionally, the implementation of the course is presented.
International Journal of Computer Science Issues, 2012, Abstract: In this paper, we address the well-studied vision problem of tracking objects in realistic scenarios containing complex situations. Our framework comprises four phases: object detection and feature extraction, tracking event detection, integrated statistical and cognitive modules, and the object tracker. Objects are detected using a fused background subtraction approach along with feature computation. Next, tracking events are inferred by finding the spatial occupancy of moving objects. The third module is the key to the proposed approach, and the motivation is to tackle the tracking problem by axiomatizing and reasoning about human tracking abilities with associated weights. Each object carries a unique identity and a data structure of cognitive and statistical attributes while satisfying global constraints of continuity during motion. Finally, the results are linked with a Kalman filter based tracker to estimate the trajectories of moving objects. We show that combining cognitive and statistical information gives a straightforward way to interpret and disambiguate the uncertainties that arise from conflicting situations in tracking. The performance of the proposed approach is demonstrated on a set of videos representing various challenges, and a quantitative evaluation against annotated ground truth is also presented.
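The Kalman filter based tracker in the final module can be sketched in its simplest, constant-velocity form (the motion model, noise levels, and measurements below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter for a single object track.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([[0.0], [0.0]])             # initial state estimate
P = np.eye(2)                            # initial estimate covariance

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measurement z
    y = np.array([[z]]) - H @ x          # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# noisy position measurements of an object moving at roughly 1 unit/frame
for z in [1.1, 1.9, 3.2, 3.9, 5.1, 6.0]:
    x, P = kalman_step(x, P, z)

print(float(x[0, 0]), float(x[1, 0]))    # estimated position and velocity
```

In the full framework, the cognitive/statistical module would decide which detections to associate with which track before each such update.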
Computer Science, 2014, Abstract: In this paper, we investigate dynamic channel and rate selection in cognitive radio systems which exploit a large number of channels free from primary users. In such systems, transmitters may rapidly change the selected (channel, rate) pair to opportunistically learn and track the pair offering the highest throughput. We formulate the problem of sequential channel and rate selection as an online optimization problem, and show its equivalence to a structured Multi-Armed Bandit problem. The structure stems from inherent properties of the achieved throughput as a function of the selected channel and rate. We derive fundamental performance limits satisfied by any channel and rate adaptation algorithm, and propose algorithms that achieve (or approach) these limits. In turn, the proposed algorithms optimally exploit the inherent structure of the throughput. We illustrate the efficiency of our algorithms using both test-bed and simulation experiments, in both stationary and non-stationary radio environments. In stationary environments, the packet transmission success probabilities at the various channel and rate pairs do not evolve over time, whereas in non-stationary environments they may evolve. In practical scenarios, the proposed algorithms are able to track the best channel and rate quite accurately without needing any explicit measurement and feedback of the quality of the various channels.
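Treating each (channel, rate) pair as a bandit arm, an unstructured UCB1-style baseline might look like the following (the success probabilities, rates, and reward normalization are invented; the paper's algorithms additionally exploit the structure of the throughput function, which this sketch does not):

```python
import math
import random

random.seed(1)

# Toy model: 2 channels x 3 rates; throughput = rate on a successful
# transmission, 0 otherwise. Success probabilities are made up.
rates = [6.0, 12.0, 24.0]
succ = {(0, 6.0): 0.95, (0, 12.0): 0.8, (0, 24.0): 0.2,
        (1, 6.0): 0.9,  (1, 12.0): 0.5, (1, 24.0): 0.1}
arms = list(succ)

counts = {a: 0 for a in arms}
totals = {a: 0.0 for a in arms}

def ucb_pick(t):
    for a in arms:                       # play each arm once first
        if counts[a] == 0:
            return a
    # UCB1 index; rewards lie in [0, max(rates)], hence the scaling
    return max(arms, key=lambda a: totals[a] / counts[a]
               + max(rates) * math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 5001):
    a = ucb_pick(t)
    _, rate = a
    tp = rate if random.random() < succ[a] else 0.0   # observed throughput
    counts[a] += 1
    totals[a] += tp

best = max(arms, key=lambda a: counts[a])
print(best, counts[best])
```

The best arm here is channel 0 at 12 Mb/s (expected throughput 9.6); exploiting the known unimodal throughput-vs-rate structure, as the paper's algorithms do, would cut the exploration spent on clearly dominated pairs.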
Computer Science, 2014, Abstract: In cognitive radio (CR) technology, the role of sensing is no longer only to detect the presence of active primary users. A large number of applications demand more comprehensive knowledge of primary user behavior in the spatial, temporal, and frequency domains. To satisfy such requirements, we study the statistical relationships among primary users by introducing a Bayesian network (BN) based framework. How to learn such a BN structure is a long-standing issue, not fully understood even in the statistical learning community. Another key problem in this learning scenario is that the CR has to identify how many variables are in the BN, which is usually treated as prior knowledge in statistical learning applications. To solve these two issues simultaneously, this paper proposes a BN structure learning scheme consisting of an efficient structure learning algorithm and a blind variable identification scheme. The proposed approach incurs significantly lower computational complexity than previous ones, and is capable of determining the structure without assuming much prior knowledge about the variables. With this result, cognitive users can efficiently understand the statistical pattern of primary networks, so that more efficient cognitive protocols can be designed across different network layers.
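One simple way to recover pairwise dependence structure among primary users from activity samples is via empirical mutual information (the activity model and threshold below are invented, and this is far simpler than full BN structure learning):

```python
import math
import random

random.seed(2)

# Invented activity model: primary user 1 tends to copy user 0's on/off
# state; user 2 is independent of both.
def sample():
    u0 = random.random() < 0.5
    u1 = u0 if random.random() < 0.9 else not u0
    u2 = random.random() < 0.5
    return [int(u0), int(u1), int(u2)]

data = [sample() for _ in range(20000)]

def mutual_info(i, j):
    """Empirical mutual information (in nats) between binary columns i, j."""
    n = len(data)
    pij = {}
    for row in data:
        key = (row[i], row[j])
        pij[key] = pij.get(key, 0) + 1 / n
    pi = {v: sum(p for (a, b), p in pij.items() if a == v) for v in (0, 1)}
    pj = {v: sum(p for (a, b), p in pij.items() if b == v) for v in (0, 1)}
    return sum(p * math.log(p / (pi[a] * pj[b]))
               for (a, b), p in pij.items() if p > 0)

# keep only pairs whose dependence exceeds an (arbitrary) threshold
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if mutual_info(i, j) > 0.05]
print(edges)
```

This recovers the single dependent pair (users 0 and 1); a BN structure learner additionally has to orient edges and handle more than pairwise interactions, which is where the complexity discussed above arises.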
Journal of Networks, 2010, DOI: 10.4304/jnw.5.9.1041-1046 Abstract: Cognitive radar can be aware of its environment, utilize intelligent signal processing, provide feedback from the receiver to the transmitter for adaptive illumination, and preserve the information content of radar returns. In this paper, based on an analysis of the parameters of radar measurements, a range-Doppler resolution cell is built up, and a stochastic dynamic programming model of waveform selection in cognitive radar is proposed, which is viewed as an important part of cognitive radar. An optimal waveform selection algorithm is then derived. The simulation results show the importance of adaptive waveform selection in cognitive radar: the state-estimation uncertainty when using the optimally selected waveform is lower than when using a fixed waveform.