oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Theories of truth as assessment criteria in judgment and decision making  [PDF]
Philip T. Dunwoody
Judgment and Decision Making , 2009,
Abstract: Hammond (1996) argued that much of the research in the field of judgment and decision making (JDM) can be categorized as focused on either coherence or correspondence (C&C) and that, in order to understand the findings of the field, one needs to understand the differences between these two criteria. Hammond's claim is that conclusions about the competence of judgments and decisions will depend upon the selection of coherence or correspondence as the criterion (Hammond, 2008). First, I provide an overview of the terms coherence and correspondence as philosophical theories of truth and relate them to the field of JDM. Second, I provide an example of Hammond's claim by examining the literature on base rate neglect. Third, I examine Hammond's claim as it applies to the broader field of JDM. Fourth, I critique Hammond's claim and suggest that refinements to the C&C distinction are needed. Specifically, the C&C distinction 1) is more accurately applied to criteria than to researchers, 2) should be refined to include two important types of coherence (inter- and intrapersonal coherence), and 3) neglects the third philosophical theory of truth, pragmatism. Pragmatism, as a class of criteria in JDM, is defined as goal attainment. In order to provide the most complete assessment of human judgment possible, and to understand different findings in the field of JDM, all three criteria should be considered.
Editorial: Methodology in judgment and decision making research  [PDF]
Andreas Glöckner, Benjamin E. Hilbig
Judgment and Decision Making , 2011,
Abstract: In this introduction to the special issue on methodology, we provide background on its original motivation and a systematic overview of the contributions. The latter are discussed according to the phase of the scientific process to which they (most strongly) refer: theory construction, design, data analysis, and the cumulative development of scientific knowledge. Several contributions propose novel measurement techniques and paradigms that will allow for new insights and can thus benefit researchers in JDM and beyond. Another set of contributions centers on how models can best be tested and/or compared. Especially when viewed in combination, the papers on this topic spell out vital requirements for model comparisons and provide approaches that solve noteworthy problems faced by prior work.
The role of process data in the development and testing of process models of judgment and decision making
Michael Schulte-Mecklenbeck, Anton Kühberger, Rob Ranyard
Judgment and Decision Making , 2011,
Abstract: The aim of this article is to evaluate the contribution of process tracing data to the development and testing of models of judgment and decision making (JDM). We draw on our experience of editing the "Handbook of Process Tracing Methods for Decision Research", recently published in the SJDM series. After a brief introduction, we first describe classic process tracing methods (thinking aloud, Mouselab, eye-tracking). We then present a series of examples of how each of these techniques has made important contributions to the development and testing of process models of JDM. We discuss the issue of the large data volumes that result from process tracing and remedies for handling them. Finally, we argue for the importance of formulating process hypotheses and advocate a multi-method approach that focuses on the cross-validation of findings.
The Decision Making Individual Differences Inventory and guidelines for the study of individual differences in judgment and decision-making research  [PDF]
Kirstin C. Appelt, Kerry F. Milch, Michel J. J. Handgraaf, Elke U. Weber
Judgment and Decision Making , 2011,
Abstract: Individual differences in decision making are a topic of longstanding interest, but studies of them often yield inconsistent and contradictory results. After providing an overview of individual difference measures that have commonly been used in judgment and decision making (JDM) research, we suggest that our understanding of individual difference effects in JDM may be improved by amending our approach to studying them. We propose four recommendations for improving the pursuit of individual differences in JDM research: a more systematic approach; more theory-driven selection of measures; a reduced emphasis on main effects in favor of interactions between individual differences and decision features, situational factors, and other individual differences; and more extensive communication of results (whether significant or null, published or unpublished). As a first step, we offer our database, the Decision Making Individual Differences Inventory (DMIDI; http://www.dmidi.net), a free, public resource that categorizes and describes the most common individual difference measures used in JDM research.
The empirical content of theories in judgment and decision making: Shortcomings and remedies
Andreas Glöckner, Tilmann Betsch
Judgment and Decision Making , 2011,
Abstract: According to Karl Popper, we can tell good theories from poor ones by assessing their empirical content (empirischer Gehalt), which basically reflects how much information they convey concerning the world. "The empirical content of a statement increases with its degree of falsifiability: the more a statement forbids, the more it says about the world of experience." Two criteria for evaluating the empirical content of a theory are its level of universality (Allgemeinheit) and its degree of precision (Bestimmtheit). The former specifies how many situations the theory can be applied to; the latter refers to its specificity in prediction, that is, how many subclasses of realizations it allows. We analyze the empirical content of theories in Judgment and Decision Making (JDM) and identify the challenges in theory formulation for different classes of models. Elaborating on classic Popperian ideas, we suggest some guidelines for the publication of theoretical work.
Diagnostic task selection for strategy classification in judgment and decision making  [PDF]
Marc Jekel, Susann Fiedler, Andreas Glöckner
Judgment and Decision Making , 2011,
Abstract: One major statistical and methodological challenge in Judgment and Decision Making research is the reliable identification of individual decision strategies by the selection of diagnostic tasks, that is, tasks for which the predictions of the strategies differ sufficiently. The more strategies are considered, and the larger the number of dependent measures simultaneously taken into account in strategy classification (e.g., choices, decision time, confidence ratings; Glöckner, 2009), the more complex the selection of the most diagnostic tasks becomes. We suggest the Euclidian Diagnostic Task Selection (EDTS) method as a standardized solution to this problem. According to EDTS, experimental tasks are selected that maximize the average difference between strategy predictions in any multidimensional prediction space. In a comprehensive model recovery simulation, we evaluate and quantify the influence of diagnostic task selection on identification rates in strategy classification. Strategy classification with EDTS shows superior performance in comparison to less diagnostic task selection algorithms such as representative sampling. The advantage of EDTS is particularly large if only a few dependent measures are considered. We also provide an easy-to-use function in the free software package R that generates predictions for the most commonly considered strategies for a specified set of tasks and evaluates the diagnosticity of those tasks via EDTS; thus, no prior programming knowledge is necessary to apply EDTS.
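The selection rule the abstract describes, choosing tasks that maximize the average difference between strategy predictions in a multidimensional prediction space, can be sketched in a few lines. The paper itself supplies an R function; the Python version below is a hypothetical re-implementation for illustration only (all function and variable names are mine, not the paper's):

```python
import numpy as np

def edts_diagnosticity(predictions):
    """Average pairwise Euclidean distance between strategy predictions.

    predictions: shape (n_strategies, n_measures) -- each strategy's
    predicted values (e.g., choice, decision time, confidence) for ONE task.
    """
    preds = np.asarray(predictions, dtype=float)
    n = preds.shape[0]
    dists = [np.linalg.norm(preds[i] - preds[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def select_diagnostic_tasks(task_predictions, k):
    """Return the indices of the k most diagnostic tasks and all scores.

    task_predictions: shape (n_tasks, n_strategies, n_measures).
    """
    scores = [edts_diagnosticity(t) for t in task_predictions]
    order = np.argsort(scores)[::-1]  # most diagnostic first
    return order[:k].tolist(), scores
```

A task on which all strategies predict identical responses scores 0 and is never selected; tasks that separate the strategies most in the prediction space are picked first.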
Effect of time pressure and human judgment on decision making in three public sector organizations of Pakistan  [cached]
Rizwan Saleem, Anwar ul Haq Shah, Muhammad Waqas
International Journal of Human Sciences , 2011,
Abstract: This study examines the effect of time pressure and human judgment on decision making. A census of three organizations, the Project Management Organization (PMO), the Accountant General Pakistan Revenues (AGPR), and the Controller General of Accounts (CGA), was taken. To conduct this study, a questionnaire titled Decision Making, Time Pressure and Human Judgment was used to collect the data. The questionnaire was specifically designed to meet the objectives of the study. The total number of observations was eighty-two, and the arithmetic mean scores of decision making, time pressure, and human judgment were 2.532, 2.527, and 2.395, respectively. The significance level of the model was 0.000, which indicates that it is highly significant. As the p-value is less than .05, the variables selected for the study can be considered highly significant.
What would judgment and decision making research be like if we took a Bayesian approach to hypothesis testing?
William J. Matthews
Judgment and Decision Making , 2011,
Abstract: Judgment and decision making research overwhelmingly uses null hypothesis significance testing as the basis for statistical inference. This article examines an alternative, Bayesian approach, which emphasizes the choice between two competing hypotheses and quantifies the balance of evidence provided by the data; one consequence is that experimental results may be taken to strongly favour the null hypothesis. We apply a recently-developed "Bayesian t-test" to existing studies of the anchoring effect in judgment, and examine how the change in approach affects both the tone of hypothesis testing and the substantive conclusions that one draws. We compare the Bayesian approach with Fisherian and Neyman-Pearson testing, examining its relationship to conventional p-values, the influence of effect size, and the importance of prior beliefs about the likely state of nature. The results give a sense of how Bayesian hypothesis testing might be applied to judgment and decision making research, and of both the advantages and challenges that a shift to this approach would entail.
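The "Bayesian t-test" in question is presumably the default JZS test of Rouder et al. (2009), in which the p-value is replaced by a Bayes factor obtained by one-dimensional numerical integration over a prior on effect size. A minimal sketch, assuming a one-sample test with prior scale r = 1 (the function name is mine):

```python
import math
from scipy.integrate import quad

def jzs_bayes_factor(t, n):
    """JZS Bayes factor BF10 for a one-sample t-test (Cauchy prior, r = 1).

    Values > 1 favour the alternative; values < 1 favour the null.
    """
    v = n - 1  # degrees of freedom

    # Marginal likelihood under H1: integrate over g, which has an
    # inverse-chi-square(1) prior (Rouder et al., 2009, Eq. 1).
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t ** 2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * (2 * math.pi) ** -0.5 * g ** -1.5 * math.exp(-1 / (2 * g)))

    m1, _ = quad(integrand, 0, math.inf)
    m0 = (1 + t ** 2 / v) ** (-(v + 1) / 2)  # likelihood under H0
    return m1 / m0
```

Unlike a p-value, a t of 0 here yields BF10 < 1, i.e., positive evidence *for* the null, which is exactly the feature the abstract highlights; the factor grows as |t| increases.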
Practice Rationale Care Model: The Art and Science of Clinical Reasoning, Decision Making and Judgment in the Nursing Process  [PDF]
Jefferson Garcia Guerrero
Open Journal of Nursing (OJN) , 2019, DOI: 10.4236/ojn.2019.92008
Abstract: Nurses must understand that clinical reasoning, clinical decision making, and clinical judgment are the key elements in providing safe patient care, and these must be incorporated and applied throughout the nursing process. Positive patient outcomes depend on how effective nurses are in clinical reasoning and in putting it into action once clinical decision making occurs. Nurses with poor clinical reasoning skills frequently fail to notice a patient's worsening condition, and misguided decision making arises that leads to ineffective patient care and adds to the patient's suffering. Clinical judgment, on the other hand, denotes the outcome of the cycle of clinical reasoning; within this context, nurses reflect on the actions that followed from the clinical decisions they made. Applying knowledge, skills, and expertise in the clinical field through clinical reasoning is the art of the nursing profession, promoting patient safety in the course of delivering routine nursing interventions. Nurses must be guided by sound clinical reasoning to achieve positive outcomes and prevent iatrogenic harm to patients. Nurses must be equipped with knowledge, skills, attitudes, and values, but most importantly they must be prepared to face the larger responsibility of caring for every patient in the clinical field.
A Critical Meta-Analysis of Lens Model Studies in Human Judgment and Decision-Making  [PDF]
Esther Kaufmann, Ulf-Dietrich Reips, Werner W. Wittmann
PLOS ONE , 2013, DOI: 10.1371/journal.pone.0083528
Abstract: Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling the components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges with linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias.
Comparison of the results of the psychometric meta-analysis with those of a traditional meta-analysis (which corrected only for sampling error) indicated that artifact correction leads to a) an increase in the values of the lens model components, b) reduced heterogeneity between studies, and c) an increase in the success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and show the success of bootstrapping.
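The lens model equation referred to above (Tucker's classic decomposition) expresses judgmental achievement r_a in terms of exactly the components the meta-analysis corrects: the matching index G between the judge's and the environment's linear models, the judge's consistency R_s, the environment's predictability R_e, and the correlation C of the two models' residuals. A minimal sketch (the function name is mine, not from the paper):

```python
import math

def lens_model_achievement(G, R_s, R_e, C=0.0):
    """Judgmental achievement via the lens model equation:

        r_a = G * R_s * R_e + C * sqrt(1 - R_s**2) * sqrt(1 - R_e**2)

    G:   correlation between the predictions of the judge's and the
         environment's linear models (knowledge / matching index)
    R_s: consistency of the judge's responses (cognitive control)
    R_e: predictability of the environment
    C:   correlation of the two models' residuals (unmodeled knowledge)
    """
    return (G * R_s * R_e
            + C * math.sqrt(1 - R_s ** 2) * math.sqrt(1 - R_e ** 2))
```

With a perfectly consistent judge and a perfectly predictable environment (R_s = R_e = 1), achievement reduces to the matching index G; measurement error attenuates each of these correlations, which is why uncorrected meta-analyses understate the lens model components.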
Copyright © 2008-2017 Open Access Library. All rights reserved.