oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Context-Dependent Data Envelopment Analysis - Measuring Attractiveness and Progress with Interval Data
F. Hosseinzadeh Lotfi,N. Ebrahimkhany Ghazy,S. Ebrahimkhany Ghazy,M. Ahadzadeh Namin
International Journal of Applied Operational Research , 2011,
Abstract: Data envelopment analysis (DEA) is a method for identifying the efficient frontier of decision making units (DMUs). This paper presents a context-dependent DEA that uses interval inputs and outputs. The context-dependent approach with interval data evaluates a set of DMUs against a specific context, where each context is an efficient frontier formed by DMUs at a particular performance level. The context-dependent DEA with interval inputs and outputs can measure (i) the attractiveness when DMUs showing weaker performance are selected as the appraisal context, and (ii) the interval progress when DMUs showing better performance are selected as the appraisal context. Keywords: Interval Inputs and Outputs, Context-Dependent Data Envelopment Analysis, Attractiveness, Progress, Value Judgment.
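To make the context idea concrete, the minimal sketch below (not from the paper, and using precise rather than interval data for simplicity) computes an input-oriented CCR score for a DMU against a restricted reference set; evaluating against a context of weaker DMUs is the mechanism behind attractiveness, and against stronger DMUs behind progress. The data and the context argument are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def ccr_score(X, Y, o, context=None):
    # Input-oriented CCR efficiency of DMU `o` measured against the DMUs listed
    # in `context` (all DMUs if None). X: (n, m) inputs, Y: (n, s) outputs.
    n, m = X.shape
    s = Y.shape[1]
    ref = np.arange(n) if context is None else np.asarray(context)
    k = len(ref)
    c = np.r_[1.0, np.zeros(k)]                    # variables: [theta, lambda_1..lambda_k]
    A_ub, b_ub = [], []
    for i in range(m):                             # sum_j lam_j x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[ref, i]]); b_ub.append(0.0)
    for r in range(s):                             # sum_j lam_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[ref, r]]); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * k, method="highs")
    return res.fun

# Illustrative data: 5 DMUs, 1 input, 1 output.
X = np.array([[2.0], [3.0], [4.0], [5.0], [6.0]])
Y = np.array([[2.0], [2.5], [2.0], [1.5], [1.0]])
print(ccr_score(X, Y, 0))                  # score against the full set of DMUs
print(ccr_score(X, Y, 0, context=[3, 4]))  # score against a weaker context; values above 1
                                           # reflect how strongly DMU 0 dominates that context
                                           # (the intuition behind attractiveness)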
Improving of Efficiency in Data Envelopment Analysis with Interval Data
M. Mahallati Rayeni,F.H. Saljooghi
The International Journal of Applied Economics and Finance , 2010,
Abstract: The aim of this research is to study the efficiency of Decision Making Units (DMUs) with interval data using Data Envelopment Analysis (DEA) models. DEA is a widely applied approach for measuring the relative efficiencies of a set of DMUs that use multiple inputs to produce multiple outputs. An assumption underlying DEA is that all data are known exactly; in reality, however, many factors cannot be measured precisely. In recent years, various applications of DEA have involved inputs and outputs whose values are indefinite. Such data are called imprecise, and imprecise data can be probabilistic, interval, ordinal, qualitative or fuzzy. In this study, we investigate an interval DEA model in which the inputs and outputs lie within bounded intervals. The resulting model is non-linear, and we convert it to a linear one. The minimal variations of the input and output intervals needed to reach full efficiency are also computed. In short, we propose a new method for improving the efficiency classifications of DMUs with interval data in data envelopment analysis.
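As a rough illustration only (not necessarily this paper's formulation), a common way to bound efficiency with interval data, in the spirit of Despotis and Smirlis-style interval DEA, is to solve two ordinary CCR problems per DMU: one pairing the DMU's most favourable data with its peers' least favourable data, and one with the roles reversed. The interval arrays XL, XU, YL, YU below are invented for the example.

import numpy as np
from scipy.optimize import linprog

def ccr(x_o, y_o, X_ref, Y_ref):
    # Input-oriented CCR score of a DMU with data (x_o, y_o) against a reference set.
    k, m = X_ref.shape
    s = Y_ref.shape[1]
    c = np.r_[1.0, np.zeros(k)]                              # variables: [theta, lambdas]
    A, b = [], []
    for i in range(m):                                       # sum_j lam_j x_ij <= theta x_io
        A.append(np.r_[-x_o[i], X_ref[:, i]]); b.append(0.0)
    for r in range(s):                                       # sum_j lam_j y_rj >= y_ro
        A.append(np.r_[0.0, -Y_ref[:, r]]); b.append(-y_o[r])
    return linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                   bounds=[(None, None)] + [(0, None)] * k, method="highs").fun

def interval_efficiency(XL, XU, YL, YU, o):
    # Efficiency interval of DMU `o` when inputs lie in [XL, XU] and outputs in [YL, YU].
    others = [j for j in range(XL.shape[0]) if j != o]
    # Upper bound: DMU o at its most favourable data, peers at their least favourable.
    e_up = ccr(XL[o], YU[o], np.vstack([XL[[o]], XU[others]]),
               np.vstack([YU[[o]], YL[others]]))
    # Lower bound: the reverse pairing.
    e_lo = ccr(XU[o], YL[o], np.vstack([XU[[o]], XL[others]]),
               np.vstack([YL[[o]], YU[others]]))
    return e_lo, e_up

XL = np.array([[1.8], [2.5]]); XU = np.array([[2.2], [3.0]])   # made-up interval inputs
YL = np.array([[1.9], [2.0]]); YU = np.array([[2.1], [2.4]])   # made-up interval outputs
print(interval_efficiency(XL, XU, YL, YU, 0))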
An assurance interval for non-Archimedean $\epsilon$ in imprecise data envelopment analysis (IDEA)
Mohammad Khodabakhshi,Kh. Rashnoo
Data Envelopment Analysis and Decision Science , 2013, DOI: 10.5899/2013/dea-00018
Abstract: Park (2010) [8] presented a method to obtain the upper bound on efficiency in imprecise data envelopment analysis (IDEA), in which the envelopment model with imprecise data is used. In this paper, we consider the dual model, the multiplier model, which involves the non-Archimedean element $\epsilon$. We then define a model to determine the upper bound of $\epsilon$. An assurance interval for the non-Archimedean element $\epsilon$ in IDEA is obtained, which is important when solving the model directly.
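To show where the non-Archimedean $\epsilon$ enters, the sketch below sets up the standard multiplier-form CCR model with every weight bounded below by epsilon and probes how the score behaves as epsilon grows. The data are invented, and this is only the surrounding model, not the paper's assurance-interval procedure.

import numpy as np
from scipy.optimize import linprog

def ccr_multiplier(X, Y, o, eps=1e-6):
    # Multiplier-form CCR score of DMU `o`: maximise u.y_o subject to v.x_o = 1,
    # u.y_j - v.x_j <= 0 for every DMU j, and all weights bounded below by eps.
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[np.zeros(m), -Y[o]]                     # variables: [v_1..v_m, u_1..u_s]
    A_eq = np.r_[X[o], np.zeros(s)].reshape(1, -1)    # v.x_o = 1
    A_ub = np.hstack([-X, Y])                         # row j: -v.x_j + u.y_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (m + s), method="highs")
    return -res.fun if res.success else None          # None: eps too large, model infeasible

# Made-up data: 3 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 1.0], [3.0, 2.0], [4.0, 3.0]])
Y = np.array([[2.0], [3.0], [3.5]])
for eps in (1e-6, 0.1, 0.3):
    # The score falls as eps grows; past some threshold the LP becomes infeasible,
    # which is why an admissible range (assurance interval) for eps matters.
    print(eps, ccr_multiplier(X, Y, 1, eps))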
Curation of complex, context-dependent immunological data
Randi Vita, Kerrie Vaughan, Laura Zarebski, Nima Salimi, Ward Fleri, Howard Grey, Muthu Sathiamurthy, John Mokili, Huynh-Hoa Bui, Philip E Bourne, Julia Ponomarenko, Romulo de Castro, Russell K Chan, John Sidney, Stephen S Wilson, Scott Stewart, Scott Way, Bjoern Peters, Alessandro Sette
BMC Bioinformatics , 2006, DOI: 10.1186/1471-2105-7-341
Abstract: To identify and extract relevant data from the scientific literature in an efficient and accurate manner, novel processes were developed for manual and semi-automated annotation. Formalized curation strategies enable the processing of a large volume of context-dependent data, which are now available to the scientific community in an accessible and transparent format. The experiences described herein are applicable to other databases housing complex biological data and requiring a high level of curation expertise. Many aspects of biological sciences have profited from the recent advances in the field of bioinformatics. New computational methodologies and tools allow researchers to capture, store, analyze and model large volumes of data, thereby dramatically affecting the pace, depth and scope of scientific investigation. A prerequisite for computational analysis is the availability of experimental data in an annotated, machine-accessible format. In research areas such as genomics and proteomics, such databases are a necessity, simply due to the vast amount of data generated. In the field of immunology, the majority of data are only reported in the literature, due to the typically smaller amounts of data distributed over many publications and to the dynamic and complex nature of immunological interactions. Thus, accurate representation of these data in a formalized fashion presents unique challenges. Databases such as the International ImMunoGeneTics information system (IMGT) [1], AntiJen [2], Functional Immunology (FIMM) [3], HLA Ligand [4], SYFPEITHI [5] and the HIV database [6] house immunologically relevant information. They contain immunoglobulin-specific (Ig) resources, T and B cell epitope sequence data, and/or MHC binding data from peer-reviewed publications. Similarly, the Protein Data Bank (PDB) [7] functions as a service and repository for structural data and associated metadata of immunological relevance. While these databases are comprehensive in their respe...
Data analysis with ordinal and interval dependent variables: examples from a study of real estate salespeople
G. Martin Izzo,Barry E. Langford
Review of Economic and Business Studies (REBS) , 2008,
Abstract: This paper re-examines the problems of estimating the parameters of an underlying linear model from survey response data in which the dependent variables fall either into discrete categories of ascending order (ordinal, as distinct from numerical) or into known groups on a continuous scale (interval), with the actual values remaining unobserved. An ordered probit model is discussed as an appropriate framework for the statistical analysis of ordinal dependent variables. Next, a maximum likelihood estimator (MLE) derived from grouped-data regression for interval dependent variables is discussed. Using LIMDEP, a packaged statistical program, survey data from an earlier manuscript are analyzed and the findings presented.
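The study itself uses LIMDEP; as a rough, self-contained illustration of the same ordered probit idea, the sketch below fits statsmodels' OrderedModel (available in recent statsmodels versions) to synthetic data. The variable names and coefficients are invented for the example.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic survey-style data: an ordinal satisfaction rating driven by two
# illustrative covariates; the true latent relationship is made up for the sketch.
rng = np.random.default_rng(0)
n = 300
exog = pd.DataFrame({"experience": rng.uniform(0, 20, n),
                     "hours": rng.uniform(20, 60, n)})
latent = 0.10 * exog["experience"] - 0.02 * exog["hours"] + rng.normal(size=n)
rating = pd.Series(pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf],
                          labels=["low", "medium", "high"]))   # ordered categories

model = OrderedModel(rating, exog, distr="probit")   # ordered probit specification
result = model.fit(method="bfgs", disp=False)
print(result.params)                                 # slope estimates plus threshold (cut-point) parameters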
Data envelopment analysis
Quanling Wei
Chinese Science Bulletin , 2001, DOI: 10.1007/BF03183382
Abstract: This review introduces the history and present status of data envelopment analysis (DEA) research, particularly the evaluation process, and describes extensions of some DEA models. It is pointed out that mathematics, economics and management science are the main forces in the development of DEA, that optimization provides the fundamental method for DEA research, and that the wide range of applications drives its rapid development.
Scalable Testing of Context-Dependent Policies over Stateful Data Planes with Armstrong
Seyed K. Fayaz,Yoshiaki Tobioka,Sagar Chaki,Vyas Sekar
Computer Science , 2015,
Abstract: Network operators today spend significant manual effort in ensuring and checking that the network meets their intended policies. While recent work in network verification has made giant strides to reduce this effort, it focuses on simple reachability properties and cannot handle context-dependent policies (e.g., how many connections a host has spawned) that operators realize using stateful network functions (NFs). Together, these introduce new expressiveness and scalability challenges that fall outside the scope of existing network verification mechanisms. To address these challenges, we present Armstrong, a system that enables operators to test whether a network with stateful data plane elements correctly implements a given context-dependent policy. Our design makes three key contributions to address expressiveness and scalability: (1) an abstract I/O unit for modeling network I/O that encodes policy-relevant context information; (2) a practical representation of complex NFs via an ensemble-of-finite-state-machines abstraction; and (3) a scalable application of symbolic execution to tackle state-space explosion. We demonstrate that Armstrong is several orders of magnitude faster than existing mechanisms.
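As a toy illustration only (none of this is Armstrong's code, model format, or API), a context-dependent policy such as "drop a host once it has spawned too many connections" can be captured by a small stateful machine over abstract I/O units and checked against a test trace:

# Toy model of a stateful network function: a per-host connection counter.
# Purely illustrative; Armstrong's NF models and test generation are far richer.
class ConnLimiter:
    def __init__(self, limit):
        self.limit = limit
        self.counts = {}              # per-host state: this is what makes the NF stateful

    def process(self, unit):
        # `unit` stands in for an abstract I/O unit: a dict carrying the source
        # host and a flag marking a new connection attempt.
        src = unit["src"]
        if unit.get("new_conn"):
            self.counts[src] = self.counts.get(src, 0) + 1
        return "drop" if self.counts.get(src, 0) > self.limit else "forward"

# Context-dependent policy: a host that has spawned more than 3 connections is dropped.
nf = ConnLimiter(limit=3)
trace = [{"src": "h1", "new_conn": True} for _ in range(5)]
decisions = [nf.process(u) for u in trace]
assert decisions[:3] == ["forward"] * 3 and decisions[-1] == "drop"
print(decisions)   # ['forward', 'forward', 'forward', 'drop', 'drop']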
DATA ENVELOPMENT ANALYSIS OF BANKING SECTOR IN BANGLADESH
Md. Rashedul Hoque,Dr. Md. Israt Rayhan
Russian Journal of Agricultural and Socio-Economic Sciences , 2012,
Abstract: The banking sector of Bangladesh is flourishing and contributing to its economy, so measuring efficiency in this sector is important; the Data Envelopment Analysis technique is used for this purpose. The data are collected from the annual reports of twenty-four different banks in Bangladesh. Data Envelopment Analysis comes in two main forms: constant returns to scale and variable returns to scale. Since this study attempts to maximize output, the output-oriented Data Envelopment Analysis is used. The most efficient bank is the one that obtains the highest efficiency score.
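To make "output oriented" concrete, the generic sketch below (hypothetical data, not the paper's bank dataset) computes the factor phi by which a DMU could expand its outputs while staying within the observed input levels; phi = 1 means the unit is on the frontier, and larger values indicate inefficiency.

import numpy as np
from scipy.optimize import linprog

def output_oriented_score(X, Y, o, vrs=True):
    # Output-oriented DEA score phi of DMU `o` (phi >= 1; phi = 1 means efficient).
    # X: (n, m) inputs, Y: (n, s) outputs; vrs=True adds the convexity constraint
    # (variable returns to scale), vrs=False gives constant returns to scale.
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[-1.0, np.zeros(n)]                        # maximise phi; variables: [phi, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                                  # sum_j lam_j x_ij <= x_io
        A_ub.append(np.r_[0.0, X[:, i]]); b_ub.append(X[o, i])
    for r in range(s):                                  # sum_j lam_j y_rj >= phi * y_ro
        A_ub.append(np.r_[Y[o, r], -Y[:, r]]); b_ub.append(0.0)
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0] if vrs else None,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return -res.fun

# Illustrative data for four hypothetical banks: one input (cost), two outputs (loans, deposits).
X = np.array([[3.0], [4.0], [5.0], [6.0]])
Y = np.array([[3.0, 5.0], [5.0, 6.0], [5.5, 6.0], [4.0, 4.0]])
print([round(output_oriented_score(X, Y, o), 3) for o in range(4)])   # phi > 1 means room to expand outputs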
Directional Congestion in Data Envelopment Analysis
Guo-liang Yang
Mathematics , 2015,
Abstract: First, this paper proposes a definition of directional congestion in given input and output directions within the framework of data envelopment analysis (DEA). Second, two methods from different viewpoints are proposed to estimate directional congestion in a DEA framework. Third, we examine the relations among directional congestion, classical congestion, and strong (weak) congestion. Finally, we present a case study of research institutes of the Chinese Academy of Sciences (CAS) to demonstrate the applicability and usefulness of the methods developed in this paper.
Efficiency and Ranking Measurement of Vendors by Data Envelopment Analysis
Hadi Shorouyehzad,Farhad Hoseinzadeh Lotfi,Mirbahador Aryanezhad,Reza Dabestani
International Business Research , 2011, DOI: 10.5539/ibr.v4n2p137
Abstract: One of the key issues in logistics management is measuring vendors' efficiency, which helps companies obtain the most appropriate services. In today's competitive conditions, most firms have shifted from a single-vendor to a multi-vendor point of view, and a number of conceptual and analytical models have been developed for the vendor selection problem. Many factors may influence vendors' efficiency, so a suitable approach is required to take the major factors into account and select the most efficient vendors. This paper presents a practical approach for evaluating vendors that provide the required services in a procurement situation. The approach uses data envelopment analysis to evaluate the vendors' efficiency, and the Andersen and Petersen model is applied to rank the efficient vendors. The criteria considered in this model are service quality, price, average of late deliveries, and rate of rejected parts. A case study in a pipe manufacturing company demonstrates the methods. The findings show that the vendors presenting the best services are not necessarily the most efficient ones. This research also provides an appropriate framework for organizations to examine vendors' efficiency and to choose effective ways to improve vendor performance.
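For readers unfamiliar with the Andersen and Petersen super-efficiency model, the sketch below (generic, with hypothetical vendor data rather than the paper's criteria) shows the core idea: re-solve the input-oriented DEA model with the evaluated vendor removed from the reference set, so efficient vendors receive scores above 1 and can be ranked.

import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    # Andersen-Petersen super-efficiency: input-oriented CCR model solved with
    # DMU `o` excluded from the reference set; efficient DMUs can score above 1.
    n, m = X.shape
    s = Y.shape[1]
    ref = [j for j in range(n) if j != o]
    k = len(ref)
    c = np.r_[1.0, np.zeros(k)]                      # variables: [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                               # sum_j lam_j x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[ref, i]]); b_ub.append(0.0)
    for r in range(s):                               # sum_j lam_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[ref, r]]); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * k, method="highs")
    return res.fun

# Hypothetical vendors: inputs could be price and late-delivery rate, the output a
# service-quality score; the numbers are invented purely for illustration.
X = np.array([[10.0, 2.0], [12.0, 1.0], [14.0, 3.0], [11.0, 2.5]])
Y = np.array([[8.0], [9.0], [7.0], [8.5]])
scores = [super_efficiency(X, Y, o) for o in range(4)]
ranking = np.argsort(scores)[::-1]                   # higher super-efficiency ranks first
print(scores, ranking)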