OALib Journal

ISSN: 2333-9721

Fee: 99 USD


Search term: "Tech"; about 60 matching results found.
All articles in this list are freely available.
Page 1 / 60 results
An unsupervised classification scheme for improving predictions of prokaryotic TIS
Maike Tech, Peter Meinicke
BMC Bioinformatics , 2006, DOI: 10.1186/1471-2105-7-121
Abstract: We introduce a clustering algorithm for completely unsupervised scoring of potential TIS, based on positionally smoothed probability matrices. The algorithm requires an initial gene prediction and the genomic sequence of the organism to perform the reannotation. As compared with other methods for improving predictions of gene starts in bacterial genomes, our approach is not based on any specific assumptions about prokaryotic TIS. Despite the generality of the underlying algorithm, the prediction rate of our method is competitive on experimentally verified test data from E. coli and B. subtilis. Regarding genomes with high G+C content, in contrast to some previously proposed methods, our algorithm also provides good performance on P. aeruginosa, B. pseudomallei and R. solanacearum. On reliable test data we showed that our method provides good results in post-processing the predictions of the widely-used program GLIMMER. The underlying clustering algorithm is robust with respect to variations in the initial TIS annotation and does not require specific assumptions about prokaryotic gene starts. These features are particularly useful on genomes with high G+C content. The algorithm has been implemented in the tool TICO (TIs COrrector), which is publicly available from our web site. Recent publications have shown that gene prediction in prokaryotes is still a challenging problem in bioinformatics. While existing gene finders [1-3] are likely to identify coding regions associated with open reading frames (ORF) of statistically significant length, the prediction of the true translation initiation sites (TIS) is insufficient in many cases [4-6]. In particular, for genomes with a high G+C content the prediction of TIS has been shown to be of low quality [6]. In order to cope with this insufficiency of conventional gene finders, several methods have been proposed for improving the predictions of prokaryotic TIS.
Common approaches require prior knowledge about the characteristics
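The positionally smoothed probability matrices mentioned in the abstract above can be illustrated with a small sketch. This is a hypothetical toy, not TICO's actual implementation: the function names, the Gaussian smoothing window, and the pseudocount choice are all assumptions made for illustration.

```python
import numpy as np

ALPHABET = "ACGT"
IDX = {c: i for i, c in enumerate(ALPHABET)}

def smoothed_pwm(sequences, sigma=1.0):
    """Estimate per-position base probabilities from aligned TIS windows,
    then smooth each base's profile across neighboring positions with a
    Gaussian kernel (illustrative only; not TICO's exact scheme)."""
    L = len(sequences[0])
    counts = np.ones((4, L))                      # pseudocounts
    for seq in sequences:
        for pos, base in enumerate(seq):
            counts[IDX[base], pos] += 1
    pwm = counts / counts.sum(axis=0)             # columns sum to 1
    pos = np.arange(L)
    W = np.exp(-(pos[:, None] - pos[None, :])**2 / (2 * sigma**2))
    W /= W.sum(axis=1, keepdims=True)             # rows sum to 1
    return pwm @ W.T                              # positional smoothing

def score(window, pwm):
    """Log-likelihood of a candidate TIS window under the smoothed matrix."""
    return sum(np.log(pwm[IDX[b], i]) for i, b in enumerate(window))
```

Candidate windows scoring above a threshold would then be kept as putative TIS; the smoothing lets evidence at a position borrow strength from its neighbors.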
Estimation of Physiological Tremor from Accelerometers for Real-Time Applications
Kalyana C. Veluvolu, Wei Tech Ang
Sensors , 2011, DOI: 10.3390/s110303020
Abstract: Accurate filtering of physiological tremor is extremely important in robotics-assisted surgical instruments and procedures. This paper focuses on developing single-stage robust algorithms for accurate tremor filtering with accelerometers for real-time applications. Existing methods rely on estimating the tremor under the assumption that it has a single dominant frequency. Our time-frequency analysis of physiological tremor data revealed that tremor contains multiple dominant frequencies over the entire duration rather than a single dominant frequency. In this paper, the existing methods for tremor filtering are reviewed and two improved algorithms are presented. A comparative study is conducted on all the estimation methods with tremor data from microsurgeons and novice subjects under different conditions. Our results showed that the new improved algorithms performed better than the existing algorithms for tremor estimation. A procedure to separate the intended motion/drift from the tremor component is formulated.
Knowledge of sexually transmitted diseases among low-income adolescents in Ribeirão Preto, São Paulo, Brazil
Doreto, Daniella Tech; Vieira, Elisabeth Meloni
Cadernos de Saúde Pública , 2007, DOI: 10.1590/S0102-311X2007001000026
Abstract: This study examined female adolescents' knowledge concerning STDs and transmission, condom use, and health care. It was a cross-sectional study of 90 adolescents living in an area covered by the Family Health Program in Ribeirão Preto, São Paulo, Brazil. Data were collected through household interviews using a structured questionnaire, followed by preliminary analysis of simple frequency of variables. Most adolescents were single, sexually active, and with limited knowledge concerning STDs. Condoms were known as the main means of prevention, but only 35.2% of the sample reported always using them. There was a large drop in condom use (from 71.1% to 37.1%) when comparing the first versus the most recent sexual intercourse. Teenagers did not consider themselves at risk of STDs (65.5%), although 57.8% reported related symptoms and 36.7% had never undergone gynecological examination. The results point to the need for special attention to adolescent health care. The lack of effective protection makes them vulnerable to STDs, including HIV/AIDS, even though they do not consider themselves at risk.
On the sexual experience of young people
Villela, Wilza Vieira; Doreto, Daniella Tech
Cadernos de Saúde Pública , 2006, DOI: 10.1590/S0102-311X2006001100021
Abstract: The rise in teenage pregnancy and young people's vulnerability to HIV have been a serious problem. This paper is intended to confront this statement based on its structural concepts (adolescence, youth, teenage pregnancy, and vulnerability) and by a non-exhaustive review of the relevant literature. The current paper discusses the ideas of youth and adolescence to approach the sexuality of young people and adolescents from the perspective of inequalities between different social groups and their access to health and resources for the prevention of diseases like HIV/AIDS as well as contraception. There are multiple paths leading young people to having unprotected sexual relations, and the numbers that surface on pregnancy, STDs, and HIV infection are doubtless lower than the real figures. The data presented herein indicate that the safe-sex approach is still necessary among youth, requiring efforts to produce creative strategies that make sense in different socio-cultural contexts in which young people experience sex.
Organizational Practices that Affect Software Quality in the Software Engineering Process
Vangara Subramanyam, M.Tech (CSE)
International Journal of Engineering Trends and Technology , 2011,
Abstract: Software quality metrics are a subset of software metrics that focus on the quality aspects of the product, process, and project. In general, software quality metrics are more closely associated with process and product metrics than with project metrics. Nonetheless, the project parameters such as the number of developers and their skill levels, the schedule, the size, and the organization structure certainly affect the quality of the product. Software quality metrics can be divided further into end-product quality metrics and in-process quality metrics. The essence of software quality engineering is to investigate the relationships among in-process metrics, project characteristics, and end-product quality, and, based on the findings, to engineer improvements in both process and product quality. Moreover, we should view quality from the entire software life-cycle perspective and, in this regard, we should include metrics that measure the quality level of the maintenance process as another category of software quality metrics. In general, the organizational process related to software development is always a complex task to fit into any of the quality metrics. Analytically, a complete software development organizational process can be achieved as a combination of one or more software quality metrics. But some of the organizational processes which do not come under software quality metrics degrade software quality. Organizational practices are performed as regular operations at various levels in the organization. These practices should be addressed continuously to run the process smoothly as predetermined. A software development organization maintains metrics, time and work controls, finance control tools, and some other techniques to improve and finely tune its processes. But some of the software organization processes which cannot come under the vision of such tools make the development process unknowingly inefficient. Some of the organizational processes which cannot come under these control tools and metrics affect the software development and maintenance life cycle at various levels.
Machine Understanding of Human Actions
T. Srikanth, M.Tech
International Journal of Computer Trends and Technology , 2011,
Abstract: A widely accepted prediction is that computing is moving into the background, projecting the human user into the foreground. Today computing has become key to every technology. If this prediction is to come true, then next-generation computing, which we will call human computing, is about anticipatory user interfaces that should be human-centered, built for humans and based on human models. In the present paper we discuss how human computing leads to understanding of human behavior, the components of human behavior, how they might be integrated into computers, and how far we are from enabling computers to understand human behavior.
Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors
Win Tun Latt, Kalyana Chakravarthy Veluvolu, Wei Tech Ang
Sensors , 2011, DOI: 10.3390/s110605931
Abstract: Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain an accurate position estimate of periodic or quasi-periodic motion from inertial sensors, using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method.
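The band-limited multiple Fourier linear combiner (BMFLC) named in the abstract above can be sketched compactly: the motion is modeled as a sum of sinusoids on a fixed frequency grid, and the amplitude weights are adapted online with LMS. This is a minimal illustration of the standard BMFLC idea; the function name, frequency band, and step size `mu` are assumptions, and the paper's combined linear/adaptive stages are not reproduced here.

```python
import numpy as np

def bmflc_estimate(signal, fs, f_lo=6.0, f_hi=14.0, df=0.5, mu=0.01):
    """BMFLC sketch: track a band-limited periodic signal by adapting
    sin/cos weights on a fixed frequency grid with the LMS rule."""
    freqs = np.arange(f_lo, f_hi + df, df)
    w = np.zeros(2 * len(freqs))              # sin and cos weights
    est = np.zeros_like(signal)
    for k, s in enumerate(signal):
        t = k / fs
        x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                            np.cos(2 * np.pi * freqs * t)])
        y = w @ x                             # current estimate
        e = s - y                             # estimation error
        w += 2 * mu * e * x                   # LMS weight update
        est[k] = y
    return est
```

Because the frequency grid is fixed, the update is linear in the weights and converges quickly for signals whose energy lies inside the chosen band.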
Oligo kernels for datamining on biological sequences: a case study on prokaryotic translation initiation sites
Peter Meinicke, Maike Tech, Burkhard Morgenstern, Rainer Merkl
BMC Bioinformatics , 2004, DOI: 10.1186/1471-2105-5-169
Abstract: We propose a kernel-based approach to data mining on biological sequences. With our method it is possible to model and analyze positional variability of oligomers of any length in a natural way. On one hand this is achieved by mapping the sequences to an intuitive but high-dimensional feature space, well-suited for interpretation of the learnt models. On the other hand, by means of the kernel trick we can provide a general learning algorithm for that high-dimensional representation because all required statistics can be computed without performing an explicit feature space mapping of the sequences. By introducing a kernel parameter that controls the degree of position-dependency, our feature space representation can be tailored to the characteristics of the biological problem at hand. A regularized learning scheme enables application even to biological problems for which only small sets of example sequences are available. Our approach includes a visualization method for transparent representation of characteristic sequence features. Thereby the importance of features can be measured in terms of discriminative strength with respect to classification of the underlying sequences. To demonstrate and validate our concept on a biochemically well-defined case, we analyze E. coli translation initiation sites in order to show that we can find biologically relevant signals. For that case, our results clearly show that the Shine-Dalgarno sequence is the most important signal upstream of a start codon. The variability in position and composition we found for that signal is in accordance with previous biological knowledge. We also find evidence for signals downstream of the start codon, previously introduced as transcriptional enhancers. These signals are mainly characterized by occurrences of adenine in a region of about 4 nucleotides next to the start codon. We showed that the oligo kernel can provide a valuable tool for the analysis of relevant signals in biological sequences. In the
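The oligo kernel described above admits a compact direct implementation: shared k-mers in two sequences contribute a Gaussian of their positional distance, so the smoothing width interpolates between position-specific comparison (small sigma) and composition-based comparison (large sigma). A minimal sketch follows; the function names are ours, and the paper works with the equivalent feature-space formulation rather than this double loop.

```python
import math
from collections import defaultdict

def oligo_positions(seq, k):
    """Map each k-mer to the list of positions where it occurs."""
    occ = defaultdict(list)
    for p in range(len(seq) - k + 1):
        occ[seq[p:p + k]].append(p)
    return occ

def oligo_kernel(s, t, k=3, sigma=1.0):
    """Oligo kernel value for two sequences: every pair of occurrences
    of the same k-mer contributes exp(-(p-q)^2 / (4 sigma^2))."""
    occ_s, occ_t = oligo_positions(s, k), oligo_positions(t, k)
    val = 0.0
    for kmer, ps in occ_s.items():
        for q in occ_t.get(kmer, []):
            for p in ps:
                val += math.exp(-(p - q) ** 2 / (4 * sigma ** 2))
    return val
```

The kernel is symmetric by construction and zero for sequences that share no k-mer, which matches the intuition of comparing positional oligomer content.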
A Steganography method for JPEG2000 Baseline System
P. Ramakrishna Rao, M.Tech (CSE)
Indian Journal of Computer Science and Engineering , 2010,
Abstract: Hiding capacity is very important for efficient covert communications. For JPEG2000 compressed images, it is necessary to enlarge the hiding capacity because the available redundancy is very limited. In addition, the bitstream truncation makes it difficult to hide information. In this paper, a high-capacity steganography scheme is proposed for the JPEG2000 baseline system, which uses the bit-plane encoding procedure twice to solve the problem due to bitstream truncation. Moreover, embedding points and their intensity are determined in a well-defined quantitative manner via redundancy evaluation to increase hiding capacity. The redundancy is measured by bit, which is different from conventional methods which adjust the embedding intensity by multiplying a visual masking factor. High-volume data is embedded into bit-planes as low as possible to keep message integrality, but at the cost of an extra bit-plane encoding procedure and slightly changed compression ratio. The proposed method can be easily integrated into the JPEG2000 image coder, and the produced stego-b
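Bit-plane embedding, the mechanism the scheme above builds on, can be illustrated with a generic least-significant-bit-plane sketch. This is not the paper's JPEG2000 redundancy-evaluation scheme: `embed_bits` and `extract_bits` are hypothetical names for a plain LSB embed on integer coefficients, shown only to make the bit-plane idea concrete.

```python
import numpy as np

def embed_bits(coeffs, bits):
    """Hide a bit list in the least significant bit-plane of integer
    coefficients; each coefficient changes by at most 1."""
    out = coeffs.copy()
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear LSB, then set payload bit
    return out

def extract_bits(coeffs, n):
    """Read back the first n payload bits from the LSB plane."""
    return [int(c) & 1 for c in coeffs[:n]]
```

A JPEG2000-aware scheme would instead choose which wavelet-coefficient bit-planes survive bitstream truncation and embed only there, which is what the twice-run bit-plane encoding in the abstract addresses.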
Neural Network Based Classification and Diagnosis of Brain Hemorrhages
K. V. Ramana, M.Tech; Raghu Korrpati
International Journal of Artificial Intelligence and Expert Systems , 2010,
Abstract: The classification and diagnosis of brain hemorrhages has grown into a matter of great importance, since early detection of hemorrhages reduces death rates. The purpose of this research was to detect brain hemorrhages, classify them, and provide the patient with a correct diagnosis. A possible solution to this social problem is to utilize predictive techniques such as sparse component analysis and artificial neural networks to develop a method for detection and classification. In this study we considered a perceptron-based feed-forward neural network for early detection of hemorrhages. This paper also considers and discusses Computer-Aided Diagnosis (CAD), which is chiefly needed for clinical diagnosis without human intervention. This paper introduces a Region Severance Algorithm (RSA) for detection and location of hemorrhages and an algorithm for finding the threshold band. Different data sets (CT images) taken from various machines were processed with our algorithm, and the results were compared with those of a domain expert. Further research is challenged to originate different models in the study of hemorrhages caused by hypertension or by an existing tumor in the brain.
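A perceptron-based feed-forward network of the kind mentioned above can be sketched as a one-hidden-layer binary classifier trained with plain backpropagation. This is illustrative only: the toy inputs stand in for whatever features the RSA stage would extract from CT images, and all names and hyperparameters are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=1.0, epochs=3000):
    """One-hidden-layer feed-forward network trained with
    backpropagation on a cross-entropy loss."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # hidden activations
        p = sigmoid(h @ W2 + b2)              # predicted probability
        grad_out = p - y                      # dLoss/dlogit for cross-entropy
        grad_h = np.outer(grad_out, W2) * h * (1 - h)
        W2 -= lr * h.T @ grad_out / n
        b2 -= lr * grad_out.mean()
        W1 -= lr * X.T @ grad_h / n
        b1 -= lr * grad_h.mean(axis=0)
    return W1, b1, W2, b2

def predict(X, params):
    """Threshold the network output at 0.5 to get class labels."""
    W1, b1, W2, b2 = params
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

In a diagnosis pipeline the two classes would correspond to hemorrhage versus no hemorrhage, with the feature extraction step doing most of the real work.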


Copyright © 2008-2017 Open Access Library. All rights reserved.