oalib


Search Results: 1 - 10 of 7997 matches for "Muhammad Shahid Shaikh"
All listed articles are free for downloading (OA Articles)
Page 1 /7997
An improved semantic similarity measure for document clustering based on topic maps
Muhammad Rafi, Mohammad Shahid Shaikh
Computer Science, 2013
Abstract: A major computational burden, while performing document clustering, is the calculation of the similarity measure between a pair of documents. A similarity measure is a function that assigns a real number between 0 and 1 to a pair of documents, depending upon the degree of similarity between them. A value of zero means that the documents are completely dissimilar, whereas a value of one indicates that the documents are practically identical. Traditionally, vector-based models have been used for computing document similarity. Vector-based models represent the features present in documents, but these approaches to similarity measures, in general, cannot account for the semantics of the document. Documents written in human languages contain contexts, and the words used to describe these contexts are generally semantically related. Motivated by this fact, many researchers have proposed semantic-based similarity measures that utilize text annotation through external thesauruses like WordNet (a lexical database). In this paper, we define a semantic similarity measure based on documents represented as topic maps. Topic maps are rapidly becoming an industrial standard for knowledge representation with a focus on later search and extraction. The documents are transformed into topic-map-based coded knowledge, and the similarity between a pair of documents is computed as a correlation between their common patterns (sub-trees). Experimental studies on text-mining datasets reveal that this new similarity measure is more effective than the similarity measures commonly used in text clustering.
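The abstract contrasts vector-based similarity with the proposed topic-map measure. As an illustration of the kind of function it describes, one that maps a document pair to a real number in [0, 1], here is a minimal cosine similarity over raw term counts (a generic vector-based sketch, not the paper's topic-map measure):

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Vector-based similarity in [0, 1]: 0 means disjoint vocabularies,
    values near 1 mean nearly identical term distributions."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) | set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

As the abstract notes, such a measure sees only shared surface terms: two documents about the same topic that use different vocabulary score 0.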
A comparison of SVM and RVM for Document Classification
Muhammad Rafi, Mohammad Shahid Shaikh
Computer Science, 2013
Abstract: Document classification is the task of assigning a new, unclassified document to one of a predefined set of classes. Content-based document classification uses the content of the document, with some weighting criteria, to assign it to one of the predefined classes. It is a major task in library science, electronic document management systems, and information science. This paper investigates document classification using two different classification techniques: (1) the Support Vector Machine (SVM) and (2) the Relevance Vector Machine (RVM). SVM is a supervised machine learning technique that can be used for classification tasks. In its basic form, SVM represents the instances of the data as points in space and tries to separate the distinct classes by the widest possible gap (hyperplane) between them. RVM, on the other hand, uses a probabilistic measure to define this separation space; it applies Bayesian inference to obtain a succinct solution and therefore uses significantly fewer basis functions. Experimental studies on three standard text classification datasets reveal that although RVM takes more training time, its classification is much better than SVM's.
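The maximum-margin idea behind the SVM side of the comparison can be sketched without any library. The following is a minimal Pegasos-style subgradient-descent trainer for a linear SVM on the hinge loss; it is an illustrative sketch, not the paper's experimental setup, and the RVM side needs Bayesian machinery beyond a short example:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the regularized hinge loss.
    X: list of feature vectors, y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)                      # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]     # regularization shrink
            if margin < 1:                             # inside margin: push out
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

The `margin < 1` test is what encodes the wide-gap objective: only points on the wrong side of the margin contribute a gradient.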
Content-based Text Categorization using Wikitology
Muhammad Rafi, Sundus Hassan, Muhammad Shahid Shaikh
International Journal of Computer Science Issues, 2012
Abstract: The process of text categorization assigns labels or categories to each text document according to its semantic content. Traditional approaches to text categorization used features from the text such as words, phrases, and concept hierarchies to represent documents and reduce their dimensionality. Recently, researchers have addressed the brittleness of these features by incorporating background knowledge into the document representation using an external knowledge base, for example WordNet, the Open Project Directory (OPD), or Wikipedia. In this paper we enhance text categorization by integrating knowledge from Wikitology, a knowledge repository that extracts knowledge from Wikipedia in structured and unstructured forms wrapped in an ontological structure. We augment each text document by exploring Wikitology fields such as {Bag of Words, titles, redirects, entity types, categories, and linked entities}. We also propose and evaluate different text representations and text enrichment techniques. Classification is performed using a Support Vector Machine (SVM), and we validated the experiment with 4-fold cross-validation.
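The augmentation the abstract describes, enriching a document's bag of words with tokens from knowledge-base fields, might be sketched as follows. The field names here come from the abstract's list, but the uniform `field_weight` boost is an illustrative assumption, not the paper's actual weighting:

```python
from collections import Counter

def augment_document(text, kb_fields, field_weight=2):
    """Merge plain bag-of-words counts with tokens drawn from external
    knowledge-base fields (titles, categories, linked entities, ...).
    field_weight boosts background-knowledge terms."""
    counts = Counter(text.lower().split())
    for field_tokens in kb_fields.values():
        for tok in field_tokens:
            counts[tok.lower()] += field_weight
    return counts
```

The enriched counts can then be fed to any bag-of-words classifier in place of the raw term counts.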
Comparing SVM and Naive Bayes classifiers for text categorization with Wikitology as knowledge enrichment
Sundus Hassan, Muhammad Rafi, Muhammad Shahid Shaikh
Computer Science, 2012, DOI: 10.1109/INMIC.2011.6151495
Abstract: The activity of labeling documents according to their content is known as text categorization. Many experiments have been carried out to enhance text categorization by adding background knowledge to the document from knowledge repositories like WordNet, the Open Project Directory (OPD), Wikipedia, and Wikitology. In our previous work, we carried out intensive experiments extracting knowledge from Wikitology and evaluating the results with a Support Vector Machine under 10-fold cross-validation. The results clearly indicated that Wikitology is far better than the other knowledge bases. In this paper we compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers under text enrichment through Wikitology. We validated the results with 10-fold cross-validation and showed that NB gives an improvement of +28.78% over the baseline results, whereas SVM gives an improvement of +6.36%. The Naïve Bayes classifier is therefore the better choice when enrichment through an external knowledge base is used.
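The NB side of the comparison is a multinomial Naïve Bayes model, which is short enough to write out in full. This is a generic sketch with add-one (Laplace) smoothing; the paper's features come from Wikitology enrichment, which is omitted here:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    class_tokens = defaultdict(list)
    for d, c in zip(docs, labels):
        class_tokens[c].extend(d.lower().split())
    vocab = {t for toks in class_tokens.values() for t in toks}
    n = len(docs)
    priors = {c: math.log(labels.count(c) / n) for c in class_tokens}
    cond = {}
    for c, toks in class_tokens.items():
        counts = Counter(toks)
        total = len(toks) + len(vocab)           # denominator with smoothing
        cond[c] = {t: math.log((counts[t] + 1) / total) for t in vocab}
    return priors, cond, vocab

def classify_nb(model, doc):
    priors, cond, vocab = model
    scores = {c: priors[c] + sum(cond[c][t] for t in doc.lower().split()
                                 if t in vocab)
              for c in priors}
    return max(scores, key=scores.get)
```

Working in log space keeps the products of many small probabilities numerically stable.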
Document Clustering based on Topic Maps
Muhammad Rafi, M. Shahid Shaikh, Amir Farooq
Computer Science, 2011, DOI: 10.5120/1640-2204
Abstract: The importance of document clustering is now widely acknowledged by researchers for better management, smart navigation, efficient filtering, and concise summarization of large collections of documents like the World Wide Web (WWW). The next challenge lies in performing clustering based on the semantic contents of the documents. The problem of document clustering has two main components: (1) representing the document in a form that inherently captures the semantics of the text (this may also help reduce the dimensionality of the document), and (2) defining a similarity measure based on this semantic representation that assigns higher numerical values to document pairs with a stronger semantic relationship. The feature space of the documents can be very challenging for document clustering: a document may contain multiple topics, a large set of class-independent general words, and only a handful of class-specific core words. With these features in mind, traditional agglomerative clustering algorithms, which are based on either the Document Vector model (DVM) or Suffix Tree Clustering (STC), are less efficient in producing results with high cluster quality. This paper introduces a new approach to document clustering based on a topic map representation of the documents. Each document is transformed into a compact form, and a similarity measure is proposed based upon the information inferred from the topic map data and structures. The suggested method is implemented using agglomerative hierarchical clustering and tested on standard information retrieval (IR) datasets. Comparative experiments reveal that the proposed approach is effective in improving cluster quality.
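The agglomerative hierarchical clustering the abstract mentions can be sketched as average-link merging over a precomputed similarity matrix. This is a generic sketch: in the paper the similarity values would come from the topic-map measure, which is not reproduced here:

```python
def agglomerative_cluster(sim, k):
    """Average-link agglomerative clustering over a precomputed similarity
    matrix sim (higher = more similar); merges until k clusters remain."""
    clusters = [[i] for i in range(len(sim))]
    while len(clusters) > k:
        best, pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise similarity between the two clusters
                s = (sum(sim[i][j] for i in clusters[a] for j in clusters[b])
                     / (len(clusters[a]) * len(clusters[b])))
                if s > best:
                    best, pair = s, (a, b)
        a, b = pair
        clusters[a].extend(clusters.pop(b))   # merge the most similar pair
    return clusters
```

Because only the similarity matrix is consumed, any document representation (vectors, suffix trees, topic maps) can be plugged in upstream.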
Document clustering using graph based document representation with constraints
Muhammad Rafi, Farnaz Amin, Mohammad Shahid Shaikh
Computer Science, 2014
Abstract: Document clustering is an unsupervised approach in which a large collection of documents (a corpus) is subdivided into smaller, meaningful, identifiable, and verifiable sub-groups (clusters). Meaningfully representing documents, and implicitly identifying the patterns on which this separation is performed, is the challenging part of document clustering. We propose a document clustering technique using a graph-based document representation with constraints. A graph data structure can easily capture non-linear relationships between nodes; a document contains various feature terms that can be non-linearly connected, so a graph can easily represent this information. Constraints are explicit conditions for document clustering in which background knowledge is used to set the direction for linking or not linking a set of documents to target clusters, thus guiding the clustering process. We deem clustering an ill-defined problem: there can be many valid clustering results, and background knowledge can be used to drive the clustering algorithm in the right direction. We propose three different types of constraints: instance-level, corpus-level, and cluster-level constraints. A new algorithm, Constrained HAC, is also proposed, which incorporates instance-level constraints as prior knowledge to guide the clustering process toward better results. An extensive set of experiments has been performed on both synthetic and standard document clustering datasets, with results compared on standard clustering measures like purity, entropy, and F-measure. The results clearly establish that our proposed approach improves cluster quality.
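Instance-level constraints might be folded into agglomerative merging roughly as follows. This is an illustrative sketch of must-link/cannot-link handling under average-link merging, not the paper's Constrained HAC algorithm:

```python
def constrained_hac(sim, k, must_link=(), cannot_link=()):
    """Agglomerative clustering honoring instance-level constraints:
    must-link pairs are merged up front, cannot-link pairs block a merge."""
    clusters = [[i] for i in range(len(sim))]

    def find(i):
        return next(c for c in clusters if i in c)

    for i, j in must_link:                     # seed with must-link merges
        a, b = find(i), find(j)
        if a is not b:
            a.extend(b)
            clusters.remove(b)

    def violates(ca, cb):
        return any((i in ca and j in cb) or (i in cb and j in ca)
                   for i, j in cannot_link)

    while len(clusters) > k:
        best, pair = -1.0, None
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                if violates(clusters[x], clusters[y]):
                    continue                   # constraint forbids this merge
                s = (sum(sim[i][j] for i in clusters[x] for j in clusters[y])
                     / (len(clusters[x]) * len(clusters[y])))
                if s > best:
                    best, pair = s, (x, y)
        if pair is None:                       # no admissible merge remains
            break
        x, y = pair
        clusters[x].extend(clusters.pop(y))
    return clusters
```

Even a small number of cannot-link pairs can redirect merges that plain similarity would have made, which is the guidance effect the abstract describes.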
The m-Point Quaternary Approximating Subdivision Schemes
Shahid S. Siddiqi, Muhammad Younis
American Journal of Computational Mathematics (AJCM), 2013, DOI: 10.4236/ajcm.2013.31A002
Abstract: In this article, the objective is to introduce an algorithm that produces quaternary m-point (for any integer m>1) approximating subdivision schemes, which have smaller support and higher smoothness compared to binary and ternary schemes. The proposed algorithm has been derived from the uniform B-spline basis function using the Cox-de Boor recursion formula. The convergence and smoothness of the proposed schemes are determined using the Laurent polynomial method.
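The Cox-de Boor recursion the abstract mentions evaluates B-spline basis functions, and can be written directly from its standard definition; the derivation of the quaternary schemes themselves is not reproduced here:

```python
def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion: the i-th B-spline basis function of degree p
    over the knot vector `knots`, evaluated at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, t))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, t))
    return left + right
```

On a uniform knot vector the basis functions form a partition of unity over the valid parameter span, which is a convenient correctness check.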

Gender Equality in Primary Education in Bangladesh
Shaikh Muhammed Shahid Uddin Eskander
Pakistan Journal of Social Sciences, 2012
Abstract: Women make up almost half of the total world population, and sustainable development can no longer be attained while they are ignored. Various attempts have been made to create an effective partnership between men and women in development activities. But before women can be empowered, it is necessary to enable them to empower themselves. To start this process, all forms of gender disparity must be eliminated from education, beginning with primary education.
Two-Dimensional Nonlinear Reaction Diffusion Equation with Time Efficient Scheme
Shahid Hasnain, Muhammad Saqib, Daoud Suleiman Mashat
American Journal of Computational Mathematics (AJCM), 2017, DOI: 10.4236/ajcm.2017.72017
Abstract: This research paper presents a numerical approximation to a non-linear two-dimensional reaction-diffusion equation from population genetics. Various initial and boundary value problems exist for two-dimensional reaction-diffusion phenomena and are studied numerically by different methods; here we use finite difference schemes to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along the time levels, for comparison with exact results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. It is shown that the numerical schemes give better solutions. Moreover, the schemes can easily be applied to a wide class of higher-dimensional nonlinear reaction-diffusion equations with little modification.
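As an illustration of the kind of finite difference scheme described, here is a single forward-Euler step for the Fisher-KPP equation u_t = D(u_xx + u_yy) + u(1 - u), a standard reaction-diffusion model from population genetics. This is a generic explicit sketch, not the paper's time-efficient schemes, and stability requires D*dt/dx^2 <= 1/4:

```python
def fisher_step(u, D, dt, dx):
    """One forward-Euler step for u_t = D (u_xx + u_yy) + u(1 - u) on a
    square grid; boundary values are held fixed (Dirichlet)."""
    n, m = len(u), len(u[0])
    new = [row[:] for row in u]
    r = D * dt / dx ** 2                       # diffusion number
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            # 5-point discrete Laplacian
            lap = (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                   - 4 * u[i][j])
            new[i][j] = u[i][j] + r * lap + dt * u[i][j] * (1 - u[i][j])
    return new
```

Starting from a spatially uniform state, diffusion contributes nothing and each interior point simply follows the logistic reaction term, which makes the step easy to sanity-check by hand.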


Copyright © 2008-2017 Open Access Library. All rights reserved.