Search Results: 1 - 10 of 5007 matches for "Vicente Traver"
All listed articles are free for downloading (OA Articles)
When the Social Meets the Semantic: Social Semantic Web or Web 2.5
Salvatore F. Pileggi, Carlos Fernandez-Llatas, Vicente Traver
Future Internet, 2012, DOI: 10.3390/fi4030852
Abstract: The social trend is progressively becoming the key feature of the current understanding of the Web (Web 2.0). This trend appears irrepressible as millions of users, directly or indirectly connected through social networks, are able to share and exchange any kind of content, information, feeling or experience. Social interactions have radically changed the user's approach to the Web. Furthermore, the socialization of content around social objects opens new, unexplored commercial marketplaces and business opportunities. On the other hand, the progressive evolution of the Web towards the Semantic Web (or Web 3.0) provides a formal representation of knowledge based on the meaning of data. When the social meets the semantic, social intelligence can be formed in the context of a semantic environment in which user and community profiles, as well as any kind of interaction, are semantically represented (Semantic Social Web). This paper first provides a conceptual analysis of the second and third versions of the Web model. That discussion is aimed at the definition of a middle concept (Web 2.5) resulting from the convergence and integration of key features of the current and next-generation Web. The Semantic Social Web (Web 2.5) has a clear theoretical meaning, understood as the bridge between the overused Web 2.0 and the not yet mature Semantic Web (Web 3.0).
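The key notion here is that profiles and interactions become machine-processable statements. As a loose, minimal sketch of that idea in Python (the triple vocabulary, identifiers and query below are invented for illustration and are not taken from the paper):

    # Minimal sketch of a triple-based social-semantic store.
    # The vocabulary (ex:memberOf, ex:comments, ...) is hypothetical.
    triples = set()

    def add(subject, predicate, obj):
        triples.add((subject, predicate, obj))

    def query(subject=None, predicate=None, obj=None):
        """Return all triples matching the pattern (None = wildcard)."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    # User and community profiles, semantically represented.
    add("user:alice", "ex:memberOf", "community:telehealth")
    add("user:bob",   "ex:memberOf", "community:telehealth")
    # A social interaction, represented in the same way.
    add("user:alice", "ex:comments", "post:42")
    add("post:42",    "ex:author",   "user:bob")

    # "Social intelligence" as a query over the semantic layer:
    # who interacted with whose content?
    for s, _, post in query(predicate="ex:comments"):
        for _, _, author in query(subject=post, predicate="ex:author"):
            print(s, "interacted with content by", author)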
A Semantic Layer for Embedded Sensor Networks
Salvatore F. Pileggi, Carlos Fernandez-Llatas, Vicente Traver
ARPN Journal of Systems and Software, 2011.
Abstract: Sensor networks have progressively assumed the critical role of a bridge between the real world and information systems, as increasingly consolidated and efficient sensor technologies enable advanced heterogeneous sensor grids. Sensor data is commonly used by advanced systems and intelligent applications in order to achieve complex goals. Processes that build high-level knowledge from sensor data are commonly considered the key core concept. This paper proposes a semantic layer that optimally supports knowledge building in sensor systems and enables a semantic interaction model at different levels (module, subsystem, system). The semantic layer proposed in the paper is currently used by several architectures and applications in different domains.
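As a rough illustration of what a semantic layer over raw sensor readings could look like, the Python sketch below annotates readings with meaning so that higher layers can query by semantics rather than device specifics. The class names, predicates and level mapping are assumptions for illustration, not the paper's design:

    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_id: str      # module level: which physical sensor
        quantity: str       # semantic type, e.g. "Temperature"
        unit: str           # e.g. "Celsius"
        value: float
        location: str       # subsystem/system-level context

    def to_triples(r: Reading):
        """Expose a reading as semantic triples for the layers above."""
        s = f"sensor:{r.sensor_id}"
        return [(s, "observes", r.quantity),
                (s, "locatedIn", r.location),
                (s, "hasValue", f"{r.value} {r.unit}")]

    readings = [Reading("t1", "Temperature", "Celsius", 21.5, "room:12"),
                Reading("t2", "Temperature", "Celsius", 36.1, "room:13")]

    print(to_triples(readings[0]))

    # System level: a query phrased in terms of meaning, not devices.
    hot_rooms = [r.location for r in readings
                 if r.quantity == "Temperature" and r.value > 30]
    print(hot_rooms)   # ['room:13']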
Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation
Carlos Fernández-Llatas, Teresa Meneu, Vicente Traver, José-Miguel Benedi
International Journal of Environmental Research and Public Health, 2013, DOI: 10.3390/ijerph10115671
Abstract: Born in the early nineteen-nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge for closely supporting physicians in their diagnoses and treatments. Within this paradigm, health experts typically create and publish clinical guidelines, which provide holistic guidance for the care of a given disease. Creating these clinical guidelines requires hard iterative processes in which each iteration represents scientific progress in the knowledge of the disease. To deliver this guidance through telehealth, formal clinical guidelines allow the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as mathematical models for optimizing the iterative cycle of continuous guideline improvement. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to supporting professionals in evidence-based medicine is formalized.
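The loop the abstract describes can be pictured with a toy interactive estimator: the system proposes, the expert confirms or corrects, and the correction feeds back into the model. A minimal Python sketch under those assumptions, with a frequency-based predictor standing in for the paper's probabilistic model:

    from collections import Counter

    # Toy model: predict the most frequent treatment seen for a symptom.
    model = {}   # symptom -> Counter of treatments

    def predict(symptom):
        counts = model.get(symptom)
        return counts.most_common(1)[0][0] if counts else None

    def interactive_step(symptom, expert_decision):
        """One iteration: propose, let the expert confirm or correct,
        then fold the (possibly corrected) outcome back into the model."""
        proposal = predict(symptom)
        final = expert_decision(proposal)      # expert validates/overrides
        model.setdefault(symptom, Counter())[final] += 1
        return proposal, final

    # Simulated sessions: expert corrections accumulate as evidence.
    cases = [("fever", "drug_a"), ("fever", "drug_a"), ("fever", "drug_b")]
    for symptom, truth in cases:
        interactive_step(symptom, lambda proposal, t=truth: t)

    print(predict("fever"))   # 'drug_a' -- the majority decision so far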
Process Mining for Individualized Behavior Modeling Using Wireless Tracking in Nursing Homes
Carlos Fernández-Llatas, José-Miguel Benedi, Juan M. García-Gómez, Vicente Traver
Sensors, 2013, DOI: 10.3390/s131115434
Abstract: The analysis of human behavior patterns is increasingly used in several research fields. Individualized modeling of behavior with classical techniques requires too much time and too many resources to be effective. A possible solution is to use pattern recognition techniques to automatically infer models that allow experts to understand individual behavior. However, traditional pattern recognition algorithms infer models that are not readily understood by human experts, which limits the capacity to benefit from the inferred models. Process mining technologies can infer models as workflows, specifically designed to be understood by experts, enabling them to detect specific behavior patterns in users. In this paper, the eMotiva process mining algorithms are presented. These algorithms filter, infer and visualize workflows. The workflows are inferred from samples produced by an indoor location system that records the location of each resident in a nursing home. The visualization tool can compare and highlight behavior patterns in order to facilitate expert understanding of human behavior. The tool was tested with nine real users monitored over a 25-week period. The results suggest that user behavior is continuously evolving and that this change can be measured, allowing for behavioral change detection.
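The eMotiva algorithms themselves are not spelled out in the abstract; the sketch below illustrates the general flavor of inferring a workflow from indoor-location traces using a directly-follows graph, a standard process mining building block (the traces and the graph construction are illustrative assumptions, not the eMotiva algorithm):

    from collections import Counter

    # Location traces from an indoor positioning system, one per resident.
    traces = {
        "resident_1": ["bedroom", "bathroom", "dining", "lounge", "bedroom"],
        "resident_2": ["bedroom", "dining", "lounge", "dining", "bedroom"],
    }

    def directly_follows(trace):
        """Count how often location B directly follows location A."""
        return Counter(zip(trace, trace[1:]))

    # Per-resident behavior model as weighted edges of a workflow graph.
    for resident, trace in traces.items():
        model = directly_follows(trace)
        print(resident)
        for (a, b), n in model.items():
            print(f"  {a} -> {b}  (x{n})")

    # Comparing such models week over week would expose behavioral drift,
    # e.g. a rising bedroom -> bathroom frequency at night.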
On Compiler Error Messages: What They Say and What They Mean
V. Javier Traver
Advances in Human-Computer Interaction, 2010, DOI: 10.1155/2010/602570
Abstract: Programmers often encounter cryptic compiler error messages that are difficult to understand and thus difficult to resolve. Unfortunately, most related disciplines, including compiler technology, have not paid much attention to this important aspect, which affects programmers significantly, apparently because it is felt that programmers should adapt to compilers. In this article, however, the problem is studied from the perspective of human-computer interaction, to gain insight into why compiler error messages make the work of programmers more difficult and how this situation can be alleviated. Additionally, because poorly designed error messages affect novice programmers most adversely, the problems faced by computer science students while learning to program are analyzed, and the obstacles originating from compilers are identified. Examples of actual compiler error messages are provided and carefully commented. Finally, some possible measures that can be taken are outlined, and some principles for compiler error message design are included.

1. Introduction. One reason why high-quality software development is difficult lies in the nature of software itself [1]. To tackle the challenges of the demanding intellectual activity of software design and construction, a whole discipline, software engineering [2, 3], exists. Software engineering is devoted to principles, techniques, methods, strategies, and technologies for modeling, conceiving, managing, developing, and maintaining software systems. Object orientation [4, 5], the team software process (TSP) and the personal software process (PSP) [6], extreme programming [7], and so forth are only a few of the proposals in this line. Focusing on the coding task, high-level programming languages have been promoted as a means of closing the huge gap in abstraction level between machine-language idiosyncrasies and human thinking and language. In addition, integrated environments have been conceived to ease the editing, compilation, running, and debugging of computer programs. Visual programming techniques have also proven beneficial because they offer the programmer an easy and intuitive way of building attractive, user-friendly graphical interfaces. However, in spite of all this effort, not much has been done with compiler messages to make the life of programmers easier. Error messages shown by compilers are, more often than not, difficult to interpret, resolve, and prevent in the future. The lack of computer support in this sense is somewhat paradoxical. For instance, tools …
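To make the design problem concrete, here is a small illustrative Python sketch, not taken from the article, that rewrites cryptic diagnostics into the kind of actionable messages the article argues for. Both the message patterns and the advice texts are invented:

    import re

    # Hypothetical mapping from cryptic diagnostics to actionable advice.
    REWRITES = [
        (re.compile(r"expected ';' before '}'"),
         "It looks like the statement before this '}' is missing a ';'."),
        (re.compile(r"undeclared identifier '(\w+)'"),
         "'{0}' is used here but never declared. Check its spelling, or "
         "declare it before this point."),
    ]

    def explain(raw_message):
        """Return a friendlier version of a diagnostic, if one is known."""
        for pattern, advice in REWRITES:
            m = pattern.search(raw_message)
            if m:
                return advice.format(*m.groups())
        return raw_message   # no rewrite known: pass through unchanged

    print(explain("foo.c:12: error: undeclared identifier 'countt'"))
    # 'countt' is used here but never declared. Check its spelling, or
    # declare it before this point.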
La epistemología contemporánea: entre filosofía y psicología (Contemporary Epistemology: Between Philosophy and Psychology)
Sergi Rosell Traver
Límite, 2008.
Abstract: This article reviews the problems that have been central to the contemporary debate in epistemology, focusing mainly on the points of closest contact between philosophy and psychology. Specifically, after briefly addressing the foundationalism/coherentism and internalism/externalism disputes and the skeptical challenge, I examine in detail the two main sources of knowledge, perception and induction, elements of special interest with respect to the link between philosophy and psychology. The final part discusses the Quinean proposal for the naturalization of epistemology, rejecting its strong version.
Ecological Relations of the Lepidopterous Genus Depressaria (Oecophoridae)
Jay R. Traver
Psyche, 1919, DOI: 10.1155/1919/27298
Abstract:
Knowledge-Based Automatic Generation of Linear Algebra Algorithms and Code
Diego Fabregat-Traver
Computer Science, 2014.
Abstract: This dissertation focuses on the design and implementation of domain-specific compilers for linear algebra matrix equations. The development of efficient libraries for such equations, which lie at the heart of most software for scientific computing, is a complex process that requires expertise in a variety of areas, including the application domain, algorithms, numerical analysis and high-performance computing. Moreover, the process involves the collaboration of several people for a considerable amount of time. With our compilers, we aim to relieve developers of both designing algorithms and writing code, and to generate routines that match or even surpass the performance of those written by human experts.
A Domain-Specific Compiler for Linear Algebra Operations
Diego Fabregat-Traver, Paolo Bientinesi
Computer Science, 2012.
Abstract: We present a prototypical linear algebra compiler that automatically exploits domain-specific knowledge to generate high-performance algorithms. The input to the compiler is a target equation together with knowledge of both the structure of the problem and the properties of the operands. The output is a variety of high-performance algorithms, and the corresponding source code, for solving the target equation. Our approach consists of decomposing the input equation into a sequence of library-supported kernels. Since such a decomposition is in general not unique, our compiler returns not one but a number of algorithms. The potential of the compiler is shown by applying it to a challenging equation arising in genome-wide association studies. As a result, the compiler produces multiple "best" algorithms that outperform the best existing libraries.
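To illustrate what decomposing an equation into library-supported kernels means in practice, consider a generalized least-squares estimator b := (X' M^-1 X)^-1 X' M^-1 y, of the kind arising in GWAS, mapped onto a sequence of standard kernels. The particular decomposition below is one hand-picked example of what such a compiler would enumerate automatically; NumPy stands in for the underlying BLAS/LAPACK kernels:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 4
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)
    A = rng.standard_normal((n, n))
    M = A @ A.T + n * np.eye(n)          # SPD covariance matrix

    # One possible kernel decomposition of b := (X' inv(M) X)^-1 X' inv(M) y:
    L = np.linalg.cholesky(M)            # kernel 1: POTRF  (M = L L')
    W = np.linalg.solve(L, X)            # kernel 2: TRSM   (W = inv(L) X)
    z = np.linalg.solve(L, y)            # kernel 3: TRSV   (z = inv(L) y)
    S = W.T @ W                          # kernel 4: SYRK   (S = X' inv(M) X)
    b = np.linalg.solve(S, W.T @ z)      # kernel 5: GEMV + POSV

    # Sanity check against the naive (and numerically worse) formulation.
    Minv = np.linalg.inv(M)
    b_naive = np.linalg.solve(X.T @ Minv @ X, X.T @ Minv @ y)
    assert np.allclose(b, b_naive)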
Computing Petaflops over Terabytes of Data: The Case of Genome-Wide Association Studies
Diego Fabregat-Traver, Paolo Bientinesi
Computer Science, 2012.
Abstract: In many scientific and engineering applications, one has to solve not one but a sequence of instances of the same problem. Oftentimes, the problems in the sequence are linked in a way that allows intermediate results to be reused. A characteristic example of this class of applications is given by genome-wide association studies (GWAS), a widespread tool in computational biology. GWAS entails the solution of up to trillions ($10^{12}$) of correlated generalized least-squares problems, posing a daunting challenge: the performance of petaflops ($10^{15}$ floating-point operations) over terabytes of data. In this paper, we design an algorithm for performing GWAS on multi-core architectures. This is accomplished in three steps. First, we show how to exploit the relation among successive problems, thus reducing the overall computational complexity. Then, through an analysis of the required data transfers, we identify how to eliminate any overhead due to input/output operations. Finally, we study how to decompose the computation into tasks to be distributed among the available cores, to attain high performance and scalability. With our algorithm, a GWAS that currently requires the use of a supercomputer may now be performed in a matter of hours on a single multi-core node. The discussion centers on the methodology for developing the algorithm rather than on the specific application. We believe the paper contributes valuable guidelines of general applicability for computational scientists on how to develop and optimize numerical algorithms.
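A drastically simplified sketch of the first step, reusing work shared by successive problems: assuming, as an illustration rather than the paper's exact setting, a fixed covariance matrix M and fixed covariates X0, with only the per-SNP column of the design matrix changing, the expensive factorization is hoisted out of the loop:

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, num_snps = 500, 3, 1000
    X0 = rng.standard_normal((n, p))     # fixed covariates, shared by all SNPs
    y = rng.standard_normal(n)
    A = rng.standard_normal((n, n))
    M = A @ A.T + n * np.eye(n)          # SPD covariance, shared by all SNPs

    # Work shared by the whole sequence: factor M and transform the
    # fixed operands once, instead of once per problem (O(n^3) hoisted out).
    L = np.linalg.cholesky(M)
    W0 = np.linalg.solve(L, X0)          # inv(L) X0, reused num_snps times
    z = np.linalg.solve(L, y)            # inv(L) y,  reused num_snps times

    betas = np.empty((num_snps, p + 1))
    for j in range(num_snps):
        s = rng.standard_normal(n)       # stand-in for the j-th SNP column
        w = np.linalg.solve(L, s)        # only the new column is transformed
        W = np.hstack([W0, w[:, None]])
        betas[j] = np.linalg.solve(W.T @ W, W.T @ z)   # small p+1 system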