
Search Results: 1-10 of 1771 matches for "Hadi Sadoghi Yazdi"
All listed articles are free for downloading (OA Articles)
Going Concern Prediction of Iranian Companies by Using Fuzzy C-Means  [PDF]
Mahdi Moradi, Mahdi Salehi, Hadi Sadoghi Yazdi, Mohammad Ebrahim Gorgani
Open Journal of Accounting (OJAcct) , 2012, DOI: 10.4236/ojacct.2012.12005
Abstract: Decision-making problems in financial status evaluation are considered very important, since incorrect decisions in firms are likely to cause financial crisis and distress. Predicting the going concern of factories and manufacturing companies is a goal of managers, investors, auditors, financial analysts, governmental officials, and employees. This research introduces a new approach for modeling company behavior based on fuzzy c-means (FCM) clustering. Fuzzy clustering is one of the best-known unsupervised clustering techniques, and it allows a data point to belong to two or more clusters. The data used in this research were obtained from the Iran Stock Market and Accounting Research Database. From the data between 2000 and 2009, 70 pairs of companies listed on the Tehran Stock Exchange were selected as the initial data set. Our experimental results showed that the FCM approach obtains good prediction accuracy in developing a financial distress prediction model. In the effective-feature determination test, the results also show that features based on cash flows play the most important role in clustering the two classes.
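As an illustration of the fuzzy c-means clustering the abstract relies on, here is a minimal sketch of the standard FCM iteration (alternating center and membership updates). This is not the paper's implementation; the number of clusters, the fuzzifier m, and the two-blob toy data standing in for "healthy" vs. "distressed" firms are all illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships of each sample sum to 1
    for _ in range(n_iter):
        Um = U ** m                              # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))           # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated synthetic blobs stand in for the two classes of firms.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)                        # hard assignment from soft memberships
```

Unlike k-means, each row of U gives graded memberships, so a borderline firm can belong partly to both clusters.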
Modified Clipped LMS Algorithm
Mojtaba Lotfizad,Hadi Sadoghi Yazdi
EURASIP Journal on Advances in Signal Processing , 2005, DOI: 10.1155/asp.2005.1229
Abstract: A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization (+1, 0, −1) scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, the algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm over the LMS algorithm, at the cost of slower convergence. Computer simulations confirm the mathematical analysis presented.
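The core idea above, replacing the input vector in the LMS weight update with its three-level quantized version, can be sketched as follows. This is a hedged sketch, not the paper's exact algorithm: the step size, threshold, and the FIR system-identification toy problem are illustrative assumptions.

```python
import numpy as np

def three_level(u, t):
    """Quantize to {+1, 0, -1}; values with |u| <= t fall in the dead zone."""
    return np.sign(u) * (np.abs(u) > t)

def mclms_identify(x, d, n_taps=4, mu=0.05, t=0.1):
    """Identify an FIR system: LMS-style update driven by the quantized input."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap vector, newest sample first
        e = d[n] - w @ u                    # a-priori estimation error
        w += mu * e * three_level(u, t)     # clipped-input weight update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])         # "unknown" FIR system to identify
d = np.convolve(x, h)[:len(x)]              # noiseless desired signal
w = mclms_identify(x, d)                    # w converges toward h
```

The quantized update replaces multiplications by the input with sign flips and zeroing, which is where the reduced computational complexity comes from.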
A Novel Neuron in Kernel Domain
Zahra Khandan,Hadi Sadoghi Yazdi
ISRN Signal Processing , 2013, DOI: 10.1155/2013/748914
Abstract: A kernel-based neural network (KNN) is proposed as a neuron applicable to online learning with adaptive parameters. This neuron, with an adaptive kernel parameter, can classify data accurately in place of a multilayer error-backpropagation neural network. The proposed method, whose heart is the kernel least-mean-square algorithm, can reduce the memory requirement with a sparsification technique, and the kernel can spread adaptively. Our experiments reveal that this method is much faster and more accurate than previous online learning algorithms. 1. Introduction The adaptive filter is at the heart of most neural networks [1]. The LMS method and its kernel-based variants are online methods with iterative learning that reduce the mean squared error toward the optimum Wiener weights. Owing to its simple implementation [1], LMS became a natural candidate for online kernel-based learning. Kernel-based learning [2] uses Mercer kernels to produce nonlinear versions of conventional linear methods. Following the introduction of the kernel, the kernel least-mean-square (KLMS) algorithm [3, 4] was proposed. KLMS solves LMS problems in reproducing kernel Hilbert spaces (RKHS) [3] using a stochastic gradient methodology. KNN combines kernel abilities and LMS features, easy learning over variants of patterns, and the capabilities of traditional neurons. The experimental results show that, with suitable parameters, this classifier performs better than other online kernel methods. Two main drawbacks of kernel-based methods are selecting a proper value for the kernel parameters and series expansions whose size equals the number of training data, which make them unsuitable for online applications. This paper concentrates only on the Gaussian kernel (for reasons similar to those discussed in [5]), although KNN can use other kernels too. In [6], the role of the kernel width in the smoothness of the performance surface is examined.
Determining the width of the Gaussian kernel in kernel-based methods is very important: controlling the kernel width helps control the learning rate and the tradeoff between overfitting and underfitting. Cross-validation is one of the simplest ways to tune this parameter, but it is costly and cannot be used for datasets with too many classes; for this reason, the parameters in [7] are chosen using a subset of the data with a low number of classes. In some methods, a genetic algorithm [8] or a grid search [5] is used to determine the proper value of such parameters. However, in all the mentioned methods the kernel width is chosen as a fixed value.
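The KLMS building block that the entry above is based on can be sketched in a few lines: each new sample becomes a kernel center whose coefficient is the step size times the prediction error. This is a generic KLMS sketch, not the paper's KNN (no adaptive width or sparsification); the step size, kernel width, and the sin(x) regression toy task are illustrative assumptions.

```python
import numpy as np

def gauss(a, b, width):
    """Gaussian (RBF) kernel."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * width ** 2))

class KLMS:
    """Online kernel LMS: f(x) = sum_i alpha_i * k(x_i, x)."""
    def __init__(self, eta=0.5, width=0.5):
        self.eta, self.width = eta, width
        self.centers, self.alphas = [], []
    def predict(self, x):
        return sum(a * gauss(c, x, self.width)
                   for c, a in zip(self.centers, self.alphas))
    def update(self, x, y):
        e = y - self.predict(x)             # prediction error on the new sample
        self.centers.append(x)              # every sample becomes a center
        self.alphas.append(self.eta * e)    # (a real system would sparsify here)
        return e

# Learn the nonlinear map y = sin(x) from a stream of random samples.
rng = np.random.default_rng(0)
model = KLMS()
for _ in range(400):
    x = rng.uniform(-3.0, 3.0, size=1)
    model.update(x, np.sin(x[0]))
```

The growing center list is exactly the "series expansion whose size equals the number of training data" drawback the abstract mentions, which is what sparsification is meant to curb.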
Comment on "robustness and regularization of support vector machines" by H. Xu, et al., (Journal of Machine Learning Research, vol. 10, pp. 1485-1510, 2009, arXiv:0803.3490)
Yahya Forghani,Hadi Sadoghi Yazdi
Computer Science , 2013,
Abstract: This paper comments on the published work on robustness and regularization of support vector machines (Journal of Machine Learning Research, vol. 10, pp. 1485-1510, 2009) [arXiv:0803.3490] by H. Xu et al. They proposed a theorem showing that it is possible to relate robustness in the feature space and robustness in the sample space directly. In this paper, we propose a counterexample that refutes their theorem.
Modified Adaptive Center Weighted Median Filter for Suppressing Impulsive Noise in Images
Behrooz Ghandeharian, Hadi Sadoghi Yazdi and Faranak Homayouni
International Journal of Research and Reviews in Applied Sciences , 2009,
Abstract:
Duct Modeling Using the Generalized RBF Neural Network for Active Cancellation of Variable Frequency Narrow Band Noise
Hadi Sadoghi Yazdi,Javad Haddadnia,Mojtaba Lotfizad
EURASIP Journal on Advances in Signal Processing , 2007, DOI: 10.1155/2007/41679
Abstract: We show that duct modeling using the generalized RBF neural network (DM_RBF), which can model nonlinear behavior, suppresses variable-frequency narrow-band noise in a duct more efficiently than the FX-LMS algorithm. In our method (DM_RBF), the duct is first identified using a generalized RBF network; then N delayed versions of the input signal are applied to N generalized RBF networks, and a linear combiner at their outputs performs an online identification of the nonlinear system. The weights of the linear combiner are updated by the normalized LMS algorithm. The proposed method is more than three times faster than the FX-LMS algorithm, with 30% lower error. The DM_RBF method also remains convergent when the input frequency changes, whereas such changes cause the FX-LMS algorithm to diverge.
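The final stage described above updates the linear combiner with normalized LMS. A minimal sketch of one NLMS step follows; the regressor here is a random vector standing in for the RBF-stage outputs, and the step size and toy target weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def nlms_step(w, u, d, mu=0.5, eps=1e-6):
    """One normalized-LMS update of the linear-combiner weights."""
    e = d - w @ u                           # combiner output error
    w = w + mu * e * u / (eps + u @ u)      # step size normalized by input power
    return w, e

rng = np.random.default_rng(0)
target = np.array([1.0, -0.5, 0.25])        # "true" combiner weights (toy setup)
w = np.zeros(3)
for _ in range(2000):
    u = rng.standard_normal(3)              # stand-in for the RBF-stage outputs
    w, _ = nlms_step(w, u, target @ u)
```

Normalizing by the instantaneous input power makes the step size insensitive to the scale of the RBF outputs, which is why NLMS is a common choice for such combiners.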
Gait Recognition Based on Invariant Leg Classification Using a Neuro-Fuzzy Algorithm as the Fusion Method
Hadi Sadoghi Yazdi,Hessam Jahani Fariman,Jaber Roohi
ISRN Artificial Intelligence , 2012, DOI: 10.5402/2012/289721
Abstract:
Clipped Input RLS Applied to Vehicle Tracking
Hadi Sadoghi Yazdi,Mojtaba Lotfizad,Ehsanollah Kabir,Mahmood Fathy
EURASIP Journal on Advances in Signal Processing , 2005, DOI: 10.1155/asp.2005.1221
Abstract: A new variation of the RLS algorithm is presented. In the proposed clipped RLS (CRLS) algorithm, the input signal is quantized into three levels when updating the filter weights and computing the inverse correlation matrix. The convergence of the CRLS algorithm to the optimum Wiener weights is proved. Its computational complexity and signal estimation error are lower than those of the RLS algorithm. The CRLS algorithm is applied to the estimation of a noisy chirp signal and to vehicle tracking. Simulation results in chirp signal detection show that this algorithm yields considerable error reduction and less computation time compared with the conventional RLS algorithm. Applying the proposed algorithm to the tracking of 59 vehicles in the presence of strong noise also shows an average 3.06% reduction in prediction error variance relative to the conventional RLS algorithm.
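The clipping idea carries over from LMS to RLS as described above: the quantized input replaces the raw input in the gain and inverse-correlation recursions, while the a-priori error still uses the raw input. The sketch below follows the standard RLS recursions with that substitution; it is a hedged illustration of the idea, not the paper's exact CRLS derivation, and the threshold, forgetting factor, and FIR toy problem are assumptions.

```python
import numpy as np

def quant3(u, t=0.1):
    """Three-level quantizer {+1, 0, -1} with dead zone |u| <= t."""
    return np.sign(u) * (np.abs(u) > t)

def crls_identify(x, d, n_taps=4, lam=1.0, delta=100.0, t=0.1):
    """RLS-style FIR identification with the input quantized in the
    gain and inverse-correlation updates (sketch of the CRLS idea)."""
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)              # inverse correlation estimate
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap vector, newest sample first
        q = quant3(u, t)                    # three-level quantized input
        k = P @ q / (lam + q @ P @ q)       # gain built from the quantized input
        e = d[n] - w @ u                    # a-priori error uses the raw input
        w += k * e
        P = (P - np.outer(k, q @ P)) / lam  # inverse-correlation recursion
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.5, -0.3, 0.2, 0.1])         # "unknown" FIR system
d = np.convolve(x, h)[:len(x)]              # noiseless desired signal
w = crls_identify(x, d)
```

Because the rank-one terms in the recursion involve the three-level q rather than the raw input, much of the per-sample arithmetic reduces to additions and sign flips, which is the source of the complexity savings claimed in the abstract.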
Designing Kernel Scheme for Classifiers Fusion
Mehdi Salkhordeh Haghighi,Hadi Sadoghi Yazdi,Abedin Vahedian,Hamed Modaghegh
Computer Science , 2009,
Abstract: In this paper, we propose a special fusion method for combining ensembles of base classifiers, using a new neural network to improve the overall efficiency of classification. While ensembles are usually designed so that each classifier is trained independently and decision fusion is performed as a final procedure, in this method we are interested in making the fusion process more adaptive and efficient. The new combiner, called Neural Network Kernel Least Mean Square, attempts to fuse the outputs of the ensemble of classifiers. The proposed neural network has special properties such as kernel abilities, least-mean-square features, easy learning over variants of patterns, and traditional neuron capabilities. Neural Network Kernel Least Mean Square is a special neuron trained with kernel least-mean-square properties. This neuron is used as a combiner to fuse the outputs of base neural network classifiers. The performance of this method is analyzed and compared with other fusion methods; the analysis shows higher performance for the new method.


Copyright © 2008-2017 Open Access Library. All rights reserved.