Search Results: 1 - 10 of 18973 matches for "Yousef Al-Jarrah"
All listed articles are free for downloading (OA Articles)
Page 1 /18973
A Wavelet Based Method for the Solution of Fredholm Integral Equations  [PDF]
En-Bing Lin, Yousef Al-Jarrah
American Journal of Computational Mathematics (AJCM) , 2012, DOI: 10.4236/ajcm.2012.22015
Abstract: In this article, we use a scaling function interpolation method to solve linear Fredholm integral equations, and we prove a convergence theorem for the solution of Fredholm integral equations. We present two examples in which the method gives more accurate results than other methods.
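The abstract does not reproduce the wavelet construction, but the class of problems it targets can be illustrated with a classical Nyström (quadrature) solver for a linear Fredholm equation of the second kind, u(x) = f(x) + ∫₀¹ K(x,t) u(t) dt. This is a generic baseline sketch, not the authors' scaling-function interpolation method; the kernel and right-hand side below are chosen purely so that the exact solution is known.

```python
import numpy as np

# Nystrom method with the trapezoidal rule for
#   u(x) = f(x) + \int_0^1 K(x, t) u(t) dt
# Illustrative choice: K(x, t) = x*t and f(x) = 2x/3, so that u(x) = x exactly.
n = 41
x, h = np.linspace(0.0, 1.0, n, retstep=True)
w = np.full(n, h)
w[0] = w[-1] = h / 2.0               # trapezoidal quadrature weights

K = np.outer(x, x)                   # kernel K(x_i, t_j) = x_i * t_j
f = 2.0 * x / 3.0                    # chosen so the exact solution is u(x) = x

# Collocating at the quadrature nodes gives the linear system (I - K diag(w)) u = f.
A = np.eye(n) - K * w                # K * w scales column j by weight w_j
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - x))          # compare against the exact solution u(x) = x
print(f"max error: {err:.2e}")       # O(h^2) for the trapezoidal rule
```

With 41 nodes the error is around 10⁻⁴, consistent with the second-order accuracy of the trapezoidal rule; the wavelet-based methods in the papers above aim at higher accuracy for the same equation class.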
Numerical Solution of Fredholm-Volterra Integral Equations by Using Scaling Function Interpolation Method  [PDF]
Yousef Al-Jarrah, En-Bing Lin
Applied Mathematics (AM) , 2013, DOI: 10.4236/am.2013.41A031

Wavelet methods are a very useful tool in solving integral equations. Both scaling functions and wavelet functions are key elements of wavelet methods. In this article, we use a scaling function interpolation method to solve Volterra integral equations of the first kind and Fredholm-Volterra integral equations. Moreover, we prove a convergence theorem for the numerical solution of Volterra and Fredholm-Volterra integral equations. We also present three examples of solving Volterra integral equations and one example of solving a Fredholm-Volterra integral equation. Comparisons of the results with other methods are included in the examples.

Wavelet Interpolation Method for Solving Singular Integral Equations  [PDF]
Yousef Al-Jarrah, En-Bing Lin
Applied Mathematics (AM) , 2013, DOI: 10.4236/am.2013.411A3001

Singular Fredholm integral equations of the second kind are solved numerically using a Coiflet interpolation method. An error analysis of the method is given and examples are presented. It turns out that our method provides better accuracy than other methods.

Extrametricality Revisited
Rasheed S. Al-Jarrah
LiBRI : Linguistic and Literary Broad Research and Innovation , 2011,
Abstract: This research paper advances the claim that extrametricality (Liberman & Prince 1977, McCarthy 1979b, Hayes 1982, 1995, Hammond 1999, Kiparsky 2003, Watson 2007, among others) can be constrained to syllable extrametricality, eliminating consonant, mora, and (presumably) foot extrametricality. This paper presents a basic analysis of parse (LICENSE-SEG) and antiparse (NONFINAL-SEG) constraints for dealing with stress placement (or lack thereof) on final syllables. The main thrust of the argument is twofold: (1) both parse and antiparse constraints are parameterized relative to the weight of the constituent to which they apply, and (2) the constraints that require syllables to be incorporated into higher level prosodic structure (LICENSE-SEG) conflict with constraints that require final syllables to remain stray (NONFINAL-SEG). The antiparse constraint NONFINAL-SEG is factored out into NONFINAL(C), NONFINAL(V), NONFINAL(s), NONFINAL(F), and NONFINAL(PR). And, in order for extrametricality to be constrained just to syllable extrametricality, we advance the claim that NONFINAL(s), in particular, is mora-sensitive, and can be further parameterized into a family of subconstraints (NONFINAL-μ, NONFINAL-μμ, NONFINAL-μμμ) differing in the weight of the syllable to which they apply. Similarly, by adopting the Strict Layering requirement (for details see Nespor and Vogel 1986: 7), the parse constraint LICENSE-SEG is decomposed into LICENSE(C), LICENSE(V), LICENSE(s), LICENSE(F), and LICENSE(PR); in the meantime, LICENSE(s) is decomposed into LICENSE-μ, LICENSE-μμ, and LICENSE-μμμ. In principle, the interaction of the parameterized set of the parse constraint LICENSE-SEG with the parameterized set of the antiparse constraint NONFINAL(s), we argue, yields the correct stress patterns for all final syllables.
A typological prediction of breaking NONFINALITY into a family of mora sensitive constraints avoids the need for parameterized extrametricality below the level of the foot. An explicit prediction is that mora extrametricality should not occur, i.e. no language should treat, for example, CVCC and CVV as heavy but treat CVC and CV as light, as we believe there are no compelling cases of mora extrametricality (for illuminating discussions, see Hayes 1995, Rosenthall & van der Hulst 1999).
An Anomaly Detector for Keystroke Dynamics Based on Medians Vector Proximity
Mudhafar M. Al-Jarrah
Journal of Emerging Trends in Computing and Information Sciences , 2012,
Abstract: This paper presents an anomaly detector for keystroke dynamics authentication, based on a statistical measure of proximity, evaluated through an empirical study of an independent benchmark of keystroke data. A password typing-rhythm classifier is presented, to be used as an anomaly detector in the authentication of genuine users and impostors. The proposed user authentication method involves two phases. First is a training phase in which a user typing profile is created through repeated entry of the password. In the testing phase, the password typing rhythm of the user is compared with the stored typing profile to determine whether it belongs to a genuine user or an impostor. The typing rhythm is obtained from the key-down/key-up keystroke timings of individual keys and the latencies between keys. The training data are stored as a typing profile consisting of a vector of median values of the feature-set elements and a vector of standard deviations for the same elements. The proposed classifier algorithm computes a score for the typing of a password to determine authenticity. A measure of proximity is used in the comparison between the feature-set medians vector and the feature-set testing vector. Each feature in the testing vector is given a binary score of 1 if it is within a proximity distance threshold of the stored median of that feature; otherwise the score is 0. The proximity distance threshold for a feature is chosen to be the standard deviation of that feature in the training data. The typing of a password is classified as genuine if the accumulated score over all features meets a minimum acceptance threshold. Analysis of the benchmark dataset using the proposed classifier gave improved anomaly detection performance in comparison with the results of 14 algorithms previously tested on the same benchmark.
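The classifier described above is concrete enough to sketch: medians and standard deviations are computed per timing feature during training, and each test feature scores 1 when it falls within one standard deviation of the stored median. The function names and synthetic timing data below are our own assumptions (the paper publishes no code), and the acceptance threshold is left as a parameter.

```python
import numpy as np

def train_profile(samples):
    """Build a typing profile from repeated password entries.

    samples: 2-D array, one row per training entry, one column per timing
    feature (key hold times and inter-key latencies, in seconds).
    Returns the per-feature medians and standard deviations.
    """
    samples = np.asarray(samples, dtype=float)
    return np.median(samples, axis=0), np.std(samples, axis=0)

def score(attempt, medians, stds):
    """Binary-score each feature: 1 if within one std of its median."""
    attempt = np.asarray(attempt, dtype=float)
    return int(np.sum(np.abs(attempt - medians) <= stds))

def is_genuine(attempt, medians, stds, min_accept):
    """Accept if the accumulated score meets the acceptance threshold."""
    return score(attempt, medians, stds) >= min_accept
```

For example, with a 10-feature profile trained on 50 entries, an attempt matching the medians scores 10, while one several standard deviations away on every feature scores 0; tuning `min_accept` trades false accepts against false rejects, which is what the benchmark comparison in the paper evaluates.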
A Novel Algorithm for Defending Path-Based Denial of Service Attacks in Sensor Networks
Ramzi Saifan,Omar Al-Jarrah
International Journal of Distributed Sensor Networks , 2010, DOI: 10.1155/2010/793981
Abstract: Existing end-to-end security mechanisms are vulnerable to path-based denial of service (PDoS) attacks. If checking the integrity and authenticity of a message is done only at the final destination, intermediate nodes will forward bogus packets injected by an adversary many hops before they are detected. The adversary can therefore easily overwhelm intermediate nodes with bogus or replayed packets. This attack exhausts the nodes along the path. In addition, other downstream nodes that depend on the exhausted nodes as intermediaries will be isolated and have to find alternative paths. For broadcast traffic originating from the base station, packets injected by an adversary can exhaust every node in the network. There is therefore a need to enable intermediate nodes to filter out bogus packets. We adopted a link-layer security scheme that enables en-route intermediate nodes to filter out any bogus or replayed packet as soon as it is injected into the network. Our scheme can handle different types of traffic. Simulation results show that our algorithm outperforms the one-way hash chain (OHC) algorithm and that it is more scalable.
1. Introduction
With the rapid development and wide application of wireless sensor networks (WSN), more and more security problems are emerging. Due to the unique characteristics and challenges of WSN, traditional security techniques used in conventional networks cannot be directly applied. First, sensor devices are limited in their energy, computation, and communication capabilities. Second, sensor nodes are often deployed in accessible areas, which makes them vulnerable to physical attacks. Third, since the communication medium in WSN is a broadcast wireless medium, adversaries can easily eavesdrop on, intercept, inject, and alter transmitted data. Moreover, adversaries can overwhelm intermediate nodes with bogus or replayed packets to drain their batteries and waste network bandwidth.
In addition to that, the adversary can make the victim node store invalid information to exhaust its memory and, therefore, leave no room for storing useful information. In the case of wireless medium and due to physical constraints of sensors, a security threat can be classified as a path-based denial of service attack (PDoS). In PDoS, an adversary overwhelms sensor nodes by flooding a multihop communication path with either replayed packets or injected spurious packets. Consequently, the energy of the victim nodes will be exhausted. The bogus packets may be dropped out by the end destination if there
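For context on the baseline the paper compares against, the one-way hash chain (OHC) idea can be sketched briefly: chain values are disclosed in reverse order of generation, so an intermediate node can cheaply hash each newly disclosed value and check it against the last accepted one, dropping bogus or replayed packets. This is a generic illustration of the OHC concept, not the authors' proposed link-layer scheme.

```python
import hashlib

def H(b: bytes) -> bytes:
    """One-way function (SHA-256 here, for illustration)."""
    return hashlib.sha256(b).digest()

def make_chain(seed: bytes, length: int):
    """Generate a hash chain h_0, H(h_0), H(H(h_0)), ...

    The final element is the public anchor; earlier elements are
    disclosed one per packet, in reverse order of generation.
    """
    chain = [seed]
    for _ in range(length - 1):
        chain.append(H(chain[-1]))
    return chain

class Verifier:
    """An intermediate node that filters packets whose disclosed chain
    value does not hash back to the last accepted value."""
    def __init__(self, anchor: bytes):
        self.last = anchor
    def accept(self, disclosed: bytes) -> bool:
        if H(disclosed) == self.last:
            self.last = disclosed   # advance the chain
            return True
        return False                # bogus or replayed: drop the packet
```

An adversary who sees disclosed values cannot forge the next one (that would require inverting H), and a replayed value fails verification because the chain has already advanced, which is exactly the filtering property the PDoS defense needs at intermediate nodes.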
Performance Evaluation of Edge Detection Using Sobel, Homogeneity and Prewitt Algorithms  [PDF]
Abdel Karim M. Baareh, Ahmad Al-Jarrah, Ahmad M. Smadi, Ghazi H. Shakah
Journal of Software Engineering and Applications (JSEA) , 2018, DOI: 10.4236/jsea.2018.1111032
Abstract: Edge detection is considered a very important and fundamental tool in image processing. Image edges are highly sensitive regions where most of the image information and detail is concentrated. Different filters are used to detect and enhance these edges in order to improve sharpness and raise image clarity. The significance of this paper comes from studying, comparing, and evaluating the effects of three well-known edge detection techniques in the spatial domain, where the evaluation was performed in both a subjective and an objective manner to find the best edge detection algorithm. The Sobel, Homogeneity, and Prewitt algorithms were applied to 2D gray-scale synthetic and real images from Jordan using the C# programming language. According to the comparative results obtained with the three techniques, the Prewitt and Homogeneity algorithms clearly performed better than the Sobel algorithm. Therefore, the Prewitt and Homogeneity algorithms can be recommended as useful tools for edge detection.
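Two of the three operators compared above differ only in their kernel weights. A generic NumPy sketch of Sobel versus Prewitt gradient-magnitude edge detection (the paper's C# implementation is not available, so this is an illustration of the standard operators, not the authors' code):

```python
import numpy as np

# 3x3 horizontal-derivative kernels; the vertical kernel is the transpose.
# Sobel weights the centre row more heavily, Prewitt weights rows equally.
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def edge_magnitude(img, kx):
    """Gradient magnitude of a 2-D grayscale image under kernel kx.

    Plain sliding-window convolution (no padding), so the output is
    smaller than the input by 2 in each dimension.
    """
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)   # horizontal gradient
            gy = np.sum(patch * ky)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```

On a vertical step image both operators respond only along the step, with Sobel giving a proportionally stronger response. The Homogeneity operator (based on differences between a pixel and its neighbours) is not sketched here.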
Modeling and Simulation of a Robust e-Voting System
Mohammad Malkawi,Mohammed Khasawneh,Omar Al-Jarrah,Laith Barakat
Communications of the IBIMA , 2009,
Abstract: In this paper we present a simulation model for a multifaceted online e-Voting system. The proposed model is capable of handling electronic ballots with multiple scopes at the same time, e.g., presidential, municipal, and parliamentary, amongst others. The model caters for integrity of an election process in terms of the functional and non-functional requirements. The functional requirements embedded in the design of the proposed system warrant well-secured identification and authentication processes for the voter through the use of combined simple biometrics. Of utmost importance are the requirements for correctness, robustness, coherence, consistency, and security. To verify the robustness and reliability of the proposed system, intensive computer simulations were run under varying voting environments, viz. voter density, voter inter-arrival times, introduced acts of malice, etc. Results of the simulations show the impact of several parameters on the performance of the system. These results provide the proper grounds that would guide the decision maker in customizing an e-voting system.
Efficient Machine Learning for Big Data: A Review
O. Y. Al-Jarrah,P. D. Yoo,S Muhaidat,G. K. Karagiannidis,K. Taha
Computer Science , 2015,
Abstract: With emerging technologies and all their associated devices, it is predicted that a massive amount of data will be created in the next few years; in fact, as much as 90% of current data was created in the last couple of years, a trend that will continue for the foreseeable future. Sustainable computing studies the process by which computer engineers/scientists design computers and associated subsystems efficiently and effectively with minimal impact on the environment. However, current intelligent machine-learning systems are performance driven: the focus is on predictive/classification accuracy, based on known properties learned from the training samples. For instance, most machine-learning-based nonparametric models are known to require high computational cost in order to find the global optima. With the learning task in a large dataset, the number of hidden nodes within the network will therefore increase significantly, which eventually leads to an exponential rise in computational complexity. This paper thus reviews the theoretical and experimental data-modeling literature in large-scale data-intensive fields, relating to (1) model efficiency, including computational requirements in learning and the structure and design of data-intensive areas, and introduces (2) new algorithmic approaches with minimal memory and processing requirements, to reduce computational cost while maintaining or improving predictive/classification accuracy and stability.
Test-retest strength reliability of the Electronic Push/Pull Dynamometer (EPPD) in the measurement of the quadriceps and hamstring muscles on a new chair  [PDF]
Mikhled F. Maayah, Mohammad D. Al-Jarrah, Saad S. El Zahrani, Ali H. Alzahrani, Emad T. Ahmedv, Amr A. Abdel-Aziem, Gopichandran Lakshmanan, Nabeel A. Almawajdeh, Muhsen B. Alsufiany, Yaser O. M. Abu Asi
Open Journal of Internal Medicine (OJIM) , 2012, DOI: 10.4236/ojim.2012.22022
Abstract: Background: The test-retest strength reliability of the Electronic Push/Pull Dynamometer (EPPD) was studied in the measurement of the knee extensor and flexor muscles on a newly constructed chair. The objective of the study was to assess the reliability of the EPPD in the measurement of knee flexion and extension at 90° and 60° on the new chair. Design: A test-retest reliability study. Subjects: One hundred healthy male and female students (mean age, 21 y). Methods: Maximum isometric strength of the quadriceps and hamstring muscle groups was measured using the EPPD; measurements were recorded at 60° and 90° for 3 trials on 2 occasions. Reliability was assessed with the intraclass correlation coefficient (ICC); the mean and standard deviation (SD) of measurements and the smallest real differences were calculated for the maximum and for the mean and work of the 3 repetitions. Results: Mean strength ranged from 50.44 kg for knee flexion to 55.76 kg for knee extension, and from 50.44 kg to 61.98 kg at 90° hip flexion. Test-retest intraclass correlation coefficients (ICCs) ranged from 0.85 to 0.99; across all measures, ICCs ranged from 0.780 to 0.998. Conclusions: The results of the reliability study indicate that the EPPD is a reliable dynamometer for determining lower limb muscle force production. It can be used to measure disease progression and to evaluate changes in knee extension and flexion strength at the individual patient level.
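The abstract does not state which ICC form was used. As an illustration only, the widely used two-way, single-measure consistency form ICC(3,1) can be computed from a subjects-by-sessions table as follows; this is a sketch under that assumption, not the paper's exact analysis.

```python
import numpy as np

def icc3_1(data):
    """Two-way mixed-effects, single-measure consistency ICC(3,1).

    data: 2-D array, one row per subject, one column per test session.
    Computed from the standard two-way ANOVA decomposition:
    ICC(3,1) = (MSB - MSE) / (MSB + (k - 1) * MSE).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    sess_means = data.mean(axis=0)

    ss_subj = k * np.sum((subj_means - grand) ** 2)   # between subjects
    ss_sess = n * np.sum((sess_means - grand) ** 2)   # between sessions
    ss_total = np.sum((data - grand) ** 2)
    ss_err = ss_total - ss_subj - ss_sess             # residual

    msb = ss_subj / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msb - mse) / (msb + (k - 1) * mse)
```

A retest that differs from the first session only by a constant offset yields an ICC of exactly 1 under this consistency form, while independent retest scores drive it toward 0, which is why values near 0.99 indicate very stable strength measurements.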

Copyright © 2008-2017 Open Access Library. All rights reserved.