Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Evaluation of QUADAS, a tool for the quality assessment of diagnostic accuracy studies
Penny F Whiting, Marie E Weswood, Anne WS Rutjes, Johannes B Reitsma, Patrick NM Bossuyt, Jos Kleijnen
BMC Medical Research Methodology, 2006, DOI: 10.1186/1471-2288-6-9
Abstract: Three reviewers independently rated the quality of 30 studies using QUADAS. We assessed the proportion of agreements between each reviewer and the final consensus rating. This was done for all QUADAS items combined and for each individual item. Twenty reviewers who had used QUADAS in their reviews completed a short structured questionnaire on their experience of QUADAS. Over all items, the agreements between each reviewer and the final consensus rating were 91%, 90% and 85%. The results for individual QUADAS items varied between 50% and 100% with a median value of 90%. Items related to uninterpretable test results and withdrawals led to the most disagreements. The feedback on the content of the tool was generally positive, with only small numbers of reviewers reporting problems with coverage, ease of use, clarity of instructions and validity. Major modifications to the content of QUADAS itself are not necessary. The evaluation highlighted particular difficulties in scoring the items on uninterpretable results and withdrawals. Revised guidelines for scoring these items are proposed. It is essential that reviewers tailor guidelines for scoring items to their review, and ensure that all reviewers are clear on how to score studies. Reviewers should consider whether all QUADAS items are relevant to their review, and whether additional quality items should be assessed as part of their review. QUADAS is a tool to assess the quality of diagnostic accuracy studies included in systematic reviews. We defined quality as being concerned with both the internal and external validity of a study. QUADAS was developed in a systematic manner, based upon three reviews of existing evidence and a Delphi procedure involving a panel of experts in diagnostic research [1].
Like all quality assessment tools, QUADAS is a measurement instrument, implying that its characteristics have to be evaluated: does it measure what it aims to measure, how well does it do this, and are its results reproducible between different reviewers?
A mixed effect model for bivariate meta-analysis of diagnostic test accuracy studies using a copula representation of the random effects distribution
Aristidis K. Nikoloulopoulos
Statistics, 2015, DOI: 10.1002/sim.6595
Abstract: Diagnostic test accuracy studies typically report the number of true positives, false positives, true negatives and false negatives. There usually exists a negative association between the number of true positives and true negatives, because studies that adopt a less stringent criterion for declaring a test positive yield higher sensitivities and lower specificities. A generalized linear mixed model (GLMM) is currently recommended to synthesize diagnostic test accuracy studies. We propose a copula mixed model for bivariate meta-analysis of diagnostic test accuracy studies. Our general model includes the GLMM as a special case and can also operate on the original scale of sensitivity and specificity. Summary receiver operating characteristic curves are deduced for the proposed model through quantile regression techniques and different characterizations of the bivariate random effects distribution. Our general methodology is demonstrated with an extensive simulation study and illustrated by re-analysing the data of two published meta-analyses. Our study suggests that the copula mixed model can improve on the GLMM in fit to the data, and makes the case for moving to copula random effects models. Our modelling framework is implemented in the package CopulaREMADA within the open-source statistical environment R.
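To make the bivariate setup concrete, here is a deliberately crude Python sketch that pools logit-transformed sensitivities and specificities with fixed-effect inverse-variance weights. This is a teaching stand-in only, not the GLMM or copula mixed model described above (those also model between-study heterogeneity and the sensitivity/specificity correlation), and the study counts are invented.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sens_spec(studies):
    """Fixed-effect pooling of sensitivity and specificity on the logit
    scale. Each study is a (tp, fp, fn, tn) tuple; a 0.5 continuity
    correction guards against zero cells."""
    def pool(pairs):
        num = den = 0.0
        for events, total in pairs:
            p = (events + 0.5) / (total + 1.0)
            var = 1.0 / (total * p * (1 - p))   # approx. variance of logit(p)
            num += logit(p) / var               # inverse-variance weighting
            den += 1.0 / var
        return inv_logit(num / den)

    sens = pool([(tp, tp + fn) for tp, fp, fn, tn in studies])
    spec = pool([(tn, tn + fp) for tp, fp, fn, tn in studies])
    return sens, spec

# Two invented studies, each given as (tp, fp, fn, tn):
sens, spec = pooled_sens_spec([(90, 10, 10, 90), (80, 20, 20, 80)])
```

A real synthesis would replace this with a bivariate random-effects model; the abstract's argument is that copula models can fit such data better than the standard logit-normal GLMM.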
A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence
Aristidis K. Nikoloulopoulos
Statistics, 2015, DOI: 10.1177/0962280215596769
Abstract: A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model (GLMM) in this context. Here we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate GLMM as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analysing the data of two published meta-analyses. Our study suggests that the vine copula model can improve on the trivariate GLMM in fit to the data, and makes the case for moving to vine copula random effects models, especially given their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite being three-dimensional.
Quality and Reporting of Diagnostic Accuracy Studies in TB, HIV and Malaria: Evaluation Using QUADAS and STARD Standards
Patricia Scolari Fontela, Nitika Pant Pai, Ian Schiller, Nandini Dendukuri, Andrew Ramsay, Madhukar Pai
PLOS ONE, 2009, DOI: 10.1371/journal.pone.0007753
Abstract: Poor methodological quality and reporting are known concerns with diagnostic accuracy studies. In 2003, the QUADAS tool and the STARD standards were published for evaluating the quality and improving the reporting of diagnostic studies, respectively. However, it is unclear whether these tools have been applied to diagnostic studies of infectious diseases. We performed a systematic review on the methodological and reporting quality of diagnostic studies in TB, malaria and HIV.
Should methodological filters for diagnostic test accuracy studies be used in systematic reviews of psychometric instruments? a case study involving screening for postnatal depression
Rachel Mann, Simon M Gilbody
Systematic Reviews, 2012, DOI: 10.1186/2046-4053-1-9
Abstract: A reference set of six relevant studies was derived from a forward citation search via Web of Knowledge. The performance of the 'target condition and index test' method recommended by the Cochrane DTA Group was compared to two alternative strategies which included methodological filters. Outcome measures were total citations retrieved, sensitivity, precision and associated 95% confidence intervals (95% CI). The Cochrane-recommended strategy and one of the filtered search strategies were equivalent in performance: both retrieved a total of 105 citations, sensitivity was 100% (95% CI 61%, 100%) and precision was 5.2% (2.6%, 11.9%). The second filtered search retrieved a total of 31 citations; sensitivity was 66.6% (30%, 90%) and precision was 12.9% (5.1%, 28.6%). This search missed the DTA study of most relevance to the DTA review. The Cochrane-recommended 'target condition and index test' strategy was pragmatic and sensitive, and was considered the optimum method for retrieval of relevant studies for a psychometric DTA review (in this case, for postnatal depression). The potential limitations of using filtered searches in a psychometric mental health DTA review should be considered. The advent of systematic reviews has generated challenges in developing optimum methods with which to identify studies from electronic bibliographic databases [1]. There is a great deal of expertise in this matter for systematic reviews of randomised trials [2]. However, the design of optimum information retrieval strategies for more recent developments such as Diagnostic Test Accuracy (DTA) reviews is not yet resolved; the challenges of searching for DTA studies have been acknowledged and include the design of DTA search strategies and the selection of appropriate filters [3-5].
DTA studies are important for the assessment of new or existing screening tests; the accuracy of a screening test is assessed by comparing the test to a 'gold standard' to examine whether the screening test correctly identifies the target condition.
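The performance measures used in this comparison are simple proportions, and can be sketched in a few lines of Python. The Wilson score interval is used here as one plausible choice of 95% CI (the abstract does not state which interval method was used); note that for a reference set of six studies all retrieved, the Wilson interval spans roughly 61% to 100%, consistent with the figures quoted above.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def search_performance(relevant_retrieved, total_retrieved, total_relevant):
    """Sensitivity (recall) and precision of a bibliographic search strategy."""
    sensitivity = relevant_retrieved / total_relevant
    precision = relevant_retrieved / total_retrieved
    return sensitivity, precision

# A search that retrieves all 6 reference-set studies among 105 hits:
sens, prec = search_performance(6, 105, 6)
lo, hi = wilson_ci(6, 6)   # CI for sensitivity when 6 of 6 are retrieved
```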
Diagnostic accuracy of basal TSH determinations based on the intravenous TRH stimulation test: An evaluation of 2570 tests and comparison with the literature
Helga Moncayo, Otto Dapunt, Roy Moncayo
BMC Endocrine Disorders, 2007, DOI: 10.1186/1472-6823-7-5
Abstract: A series of 2570 women attending a specialized endocrine unit were evaluated. A standardized i.v. TRH stimulation test was carried out by applying 200 μg of TRH. TSH levels were measured in both the basal and the 30-minute blood sample. The normal response to TRH stimulation had previously been determined to be an absolute value lying between 2.5 and 20 mIU/l. Both TSH values were analyzed by cross tabulation. In addition, the results were compared to reference values taken from the literature. Basal TSH values were within the normal range (0.3 to 3.5 mIU/l) in 91.5% of cases, diminished in 3.8% and elevated in 4.7%. Based on the response to TRH, 82.4% were considered euthyroid, 3.3% were latent hyperthyroid, and 14.3% were latent hypothyroid. Combining the data on basal and stimulated TSH levels, latent hypothyroidism was found in the following proportions for different TSH levels: 5.4% for TSH < 2.0 mIU/l, 30.2% for TSH between 2.0 and 3.0 mIU/l, 65.5% for TSH between 3.0 and 3.5 mIU/l, 87.5% for TSH between 3.5 and 4.0 mIU/l, and 88.2% for TSH between 4 and 5 mIU/l. The use of an upper normal range for TSH of 2.5 mIU/l, as recommended in the literature, misclassified 7.7% of euthyroid cases. Our analysis strategy allows us to delineate the predictive value of basal TSH levels in relation to latent hypothyroidism. A grey area can be identified for values between 3.0 and 3.5 mIU/l. Elevated levels of TSH are the hallmark of decreased thyroid function. In order to identify these patients correctly, it is imperative to have a clear definition of the upper reference range for basal TSH. Patients whose TSH lies in the upper reference range might appear to have minimal thyroid deficiency. Although this might appear to be an easy task, the definition of the upper reference range for TSH has been a matter of controversy [1-5]. Reported reference values for the upper range of basal TSH vary between 2.12 and 5.95 mIU/l [6-21] (Table 1).
In the majority of studies, the re…
Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
Nynke Smidt, Anne WS Rutjes, Daniëlle AWM van der Windt, Raymond WJG Ostelo, Patrick M Bossuyt, Johannes B Reitsma, Lex M Bouter, Henrica CW de Vet
BMC Medical Research Methodology, 2006, DOI: 10.1186/1471-2288-6-12
Abstract: Thirty-two diagnostic accuracy studies published in 2000 in medical journals with an impact factor of at least 4 were included. Two reviewers independently evaluated the quality of reporting of these studies using the 25 items of the STARD statement. A consensus evaluation was obtained by discussing and resolving disagreements between reviewers. Almost two years later, the same studies were evaluated by the same reviewers. For each item, percentage agreement and Cohen's kappa between the first and second consensus assessments (inter-assessment) were calculated. Intraclass correlation coefficients (ICC) were calculated to evaluate the reliability of the checklist. The overall inter-assessment agreement for all items of the STARD statement was 85% (Cohen's kappa 0.70) and varied from 63% to 100% for individual items. The largest differences between the two assessments were found for the reporting of the rationale for the reference standard (kappa 0.37), the number of included participants that underwent tests (kappa 0.28), the distribution of the severity of disease (kappa 0.23), a cross tabulation of the results of the index test by the results of the reference standard (kappa 0.33), and how indeterminate results, missing data and outliers were handled (kappa 0.25). Large differences for these items were also observed within and between reviewers. The inter-assessment reliability of the STARD checklist was satisfactory (ICC = 0.79 [95% CI: 0.62 to 0.89]). Although the overall reproducibility of the quality of reporting on diagnostic accuracy studies using the STARD statement was found to be good, substantial disagreements were found for specific items. These disagreements were caused not so much by differences in interpretation of the items by the reviewers as by difficulties in assessing the reporting of these items due to a lack of clarity within the articles.
Including a flow diagram in all reports on diagnostic accuracy studies would be very helpful in reducing confusion between readers.
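The agreement statistics reported in this evaluation (percentage agreement and Cohen's kappa) can both be computed from a cross-tabulation of two assessments. A minimal Python sketch, using an invented 2x2 table rather than the study's data:

```python
def agreement_and_kappa(table):
    """Observed agreement and Cohen's kappa from a k x k agreement
    table (rows: first assessment's categories, columns: second's)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal.
    p_obs = sum(table[i][i] for i in range(k)) / n
    # Expected agreement under independence of the two assessments.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa

# Invented table for one yes/no STARD item rated at two time points:
p_obs, kappa = agreement_and_kappa([[20, 5], [5, 20]])
```

Kappa discounts the agreement expected by chance, which is why an item can show high percentage agreement yet a modest kappa, as seen for several STARD items above.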
Using patient management as a surrogate for patient health outcomes in diagnostic test evaluation
Lukas P Staub, Sarah J Lord, R Simes, Suzanne Dyer, Nehmat Houssami, Robert YM Chen, Les Irwig
BMC Medical Research Methodology, 2012, DOI: 10.1186/1471-2288-12-12
Abstract: We discuss the rationale for measuring patient management, describe the common study designs and provide guidance about how this evidence should be reported. Interpretation of patient management studies relies on the condition that patient management is a valid surrogate for downstream patient benefits. This condition presupposes two critical assumptions: the test improves diagnostic accuracy; and the measured changes in patient management improve patient health outcomes. The validity of this evidence depends on the certainty around these critical assumptions and the ability of the study design to minimise bias. Three common designs are test RCTs that measure patient management as a primary endpoint, diagnostic before-after studies that compare planned patient management before and after testing, and accuracy studies that are extended to report on the actual treatment or further tests received following a positive and negative test result. Patient management can be measured as a surrogate outcome for test evaluation if its limitations are recognised. The potential consequences of a positive and negative test result on patient management should be pre-specified, and the potential patient benefits of these management changes clearly stated. Randomised comparisons will provide higher-quality evidence about differences in patient management using the new test than observational studies. Regardless of the study design used, the critical assumption that patient management is a valid surrogate for downstream patient benefits or harms must be discussed in these studies. Before a new test is introduced in clinical practice, evidence is needed to demonstrate that its use will lead to improvements in patient health outcomes [1]. Studies reporting test accuracy may not be sufficient, and clinical trials of tests that follow patients over the whole pathway from testing to treatment outcomes, although ideal, are rarely feasible [2].
Therefore, studies investigating the consequences of…
Field evaluation of a malaria rapid diagnostic test (ICT Pf)
D Moonasar, AE Goga, PS Kruger, C La Cock
South African Medical Journal, 2009
Abstract: Background. Malaria rapid diagnostic tests (MRDTs) are quick and easy to perform and useful for diagnosing malaria in primary health care settings. In South Africa most malaria infections are due to Plasmodium falciparum, and HRP-II-based MRDTs have been used since 2001. Previous studies in Africa showed variability in the sensitivity and specificity of HRP-II-based MRDTs; hence, we conducted a field evaluation in Limpopo province to determine the accuracy of the MRDT currently used in public sector clinics and hospitals. Methods. A cross-sectional observational study was conducted to determine the sensitivity and specificity of an ICT Pf MRDT. We tested 405 patients with fever with the ICT Pf MRDT and compared the results with blood film microscopy (the gold standard). Results. The overall sensitivity of the ICT Pf MRDT was 99.48% (95% confidence interval (CI) 96.17 - 100%), while specificity was 96.26% (95% CI 94.7 - 100%). The positive predictive value of the test was 98.48% (95% CI 98.41 - 100%), and the negative predictive value was 99.52% (95% CI 96.47 - 100%). Conclusions. The ICT Pf MRDT is an appropriate test to use in the field in South Africa where laboratory facilities are not available. It has a high degree of sensitivity and an acceptable level of specificity in accordance with World Health Organization criteria. However, the sensitivity of the MRDT at low levels of parasitaemia (<100 parasites/µl of blood) in field conditions must still be established.
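All four accuracy measures quoted in this abstract derive from the 2x2 table of MRDT result against microscopy. A small Python sketch; the counts below are invented for illustration and are not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    index-test result against the reference standard."""
    sensitivity = tp / (tp + fn)   # proportion of true cases detected
    specificity = tn / (tn + fp)   # proportion of non-cases ruled out
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Invented counts: 90 true positives, 10 false positives, etc.
sens, spec, ppv, npv = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
```

Unlike sensitivity and specificity, the predictive values depend on the prevalence of malaria in the tested population, so PPV and NPV from one field setting do not transfer directly to another.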
No role for quality scores in systematic reviews of diagnostic accuracy studies
Penny Whiting, Roger Harbord, Jos Kleijnen
BMC Medical Research Methodology, 2005, DOI: 10.1186/1471-2288-5-19
Abstract: We developed five schemes for weighting QUADAS to produce quality scores. We used three methods to investigate the effects of quality scores on test performance. We used a set of 28 studies that assessed the accuracy of ultrasound for the diagnosis of vesico-ureteral reflux in children. The different methods of weighting individual items from the same quality assessment tool produced different quality scores. The different scoring schemes ranked different studies in different orders; this was especially evident for the intermediate-quality studies. Comparing the results of studies stratified as "high" and "low" quality based on quality scores resulted in different conclusions regarding the effects of quality on estimates of diagnostic accuracy, depending on the method used to produce the quality score. A similar effect was observed when quality scores were included in meta-regression analysis as continuous variables, although the differences were less apparent. Quality scores should not be incorporated into diagnostic systematic reviews. Incorporation of the results of the quality assessment into the systematic review should involve investigation of the association of individual quality items with estimates of diagnostic accuracy, rather than using a combined quality score. Quality assessment is as important in systematic reviews of diagnostic accuracy studies as it is for any other systematic review. One method of incorporating quality into a review is to use a quality score. Quality scores combine the individual items from a quality assessment tool to provide an overall single score. One of the main problems with quality scores is determining how to weight each item to provide an overall quality score.
There is no objective way of doing this, and different methods are likely to produce different scores that may lead to different results if these scores are used in the analysis. There has been much discussion regarding the use of quality scores in the area of clinical trials.
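The central objection is easy to demonstrate in a few lines: re-weighting the same item-level ratings changes which study scores higher. The studies, items, and weights below are all invented, not taken from the paper's QUADAS schemes:

```python
def quality_scores(item_ratings, weights):
    """Composite quality score for each study: a weighted sum of its
    per-item ratings (1 = criterion met, 0 = not met)."""
    return [sum(w * r for w, r in zip(weights, study)) for study in item_ratings]

# Two invented studies rated on three quality items:
studies = [
    [1, 1, 0],   # study A fails item 3
    [0, 1, 1],   # study B fails item 1
]
equal_weights = quality_scores(studies, [1, 1, 1])   # A and B tie
item1_heavy   = quality_scores(studies, [3, 1, 1])   # A now outranks B
```

Because the ranking depends entirely on an arbitrary weighting choice, the authors recommend analysing the association of individual quality items with accuracy estimates instead of collapsing them into one score.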

Copyright © 2008-2017 Open Access Library. All rights reserved.