Search Results: 1 - 10 of 130543 matches for "Umit V. Catalyurek"
All listed articles are free for downloading (OA Articles)
Performance Evaluation of Sparse Matrix Multiplication Kernels on Intel Xeon Phi
Erik Saule, Kamer Kaya, Umit V. Catalyurek
Computer Science, 2013
Abstract: Intel Xeon Phi is a recently released high-performance coprocessor featuring 61 cores, each supporting 4 hardware threads, with 512-bit-wide SIMD registers, for a theoretical peak of 1 Tflop/s in double precision. Many scientific applications involve operations on large sparse matrices, such as linear solvers, eigensolvers, and graph mining algorithms. The core of most of these applications involves the multiplication of a large, sparse matrix with a dense vector (SpMV). In this paper, we investigate the performance of the Xeon Phi coprocessor for SpMV. We first provide a comprehensive introduction to this new architecture and analyze its peak performance with a number of microbenchmarks. Although the design of a Xeon Phi core is not much different from that of the cores in modern processors, its large number of cores and hyperthreading capability allow many applications to saturate the available memory bandwidth, which is not the case for many cutting-edge processors. Yet, our performance studies show that it is the memory latency, not the bandwidth, that creates a bottleneck for SpMV on this architecture. Finally, our experiments show that Xeon Phi's sparse kernel performance is very promising, even better than that of cutting-edge general-purpose processors and GPUs.
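The SpMV kernel at the heart of this study can be sketched in a few lines. Below is a minimal, pure-Python CSR (compressed sparse row) multiply; it illustrates only the loop structure, not the paper's vectorized, multithreaded Xeon Phi kernels:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x, with A stored in CSR form (values, col_idx, row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]]
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 2x2 example: A = [[1, 2], [0, 3]], x = [1, 1]
print(spmv_csr([1.0, 2.0, 3.0], [0, 1, 1], [0, 2, 3], [1.0, 1.0]))  # [3.0, 3.0]
```

The inner loop's indirect access `x[col_idx[k]]` is exactly where the latency bottleneck discussed in the abstract arises.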
Hypergraph Partitioning through Vertex Separators on Graphs
Enver Kayaaslan, Ali Pinar, Umit V. Catalyurek, Cevdet Aykanat
Computer Science, 2011
Abstract: The modeling flexibility provided by hypergraphs has drawn considerable interest from the combinatorial scientific computing community, leading to novel models and algorithms, their applications, and the development of associated tools. Hypergraphs are now a standard tool in combinatorial scientific computing. The modeling flexibility of hypergraphs, however, comes at a cost: algorithms on hypergraphs are inherently more complicated than those on graphs, which sometimes translates to nontrivial increases in processing time. Neither the modeling flexibility of hypergraphs nor the runtime efficiency of graph algorithms can be overlooked; the new research thrust should therefore be how to cleverly trade off between the two. This work addresses one method for this trade-off: solving the hypergraph partitioning problem by finding vertex separators on graphs. Specifically, we investigate how to solve the hypergraph partitioning problem by seeking a vertex separator on its net intersection graph (NIG), in which each net of the hypergraph is represented by a vertex, and two vertices share an edge if their nets have a common vertex. We propose a vertex-weighting scheme to attain balanced partitions, since the NIG model cannot preserve node-balancing information. Vertex-removal and vertex-splitting techniques are described to optimize the cutnet and connectivity metrics, respectively, under the recursive bipartitioning paradigm. We also developed an implementation of our GPVS-based HP formulations by adopting and modifying a state-of-the-art GPVS tool, onmetis. Experiments conducted on a large collection of sparse matrices confirm the validity of the proposed techniques.
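The net intersection graph described above is straightforward to construct. A minimal sketch (names are illustrative, not taken from the authors' tool):

```python
from collections import defaultdict
from itertools import combinations

def net_intersection_graph(nets):
    """nets: dict mapping net_id -> set of hypergraph vertices.
    Returns the NIG edge set: two nets are adjacent iff they share a vertex."""
    incident = defaultdict(list)      # hypergraph vertex -> nets containing it
    for net, verts in nets.items():
        for v in verts:
            incident[v].append(net)
    edges = set()
    for net_list in incident.values():
        # every pair of nets sharing this vertex gets an NIG edge
        for a, b in combinations(net_list, 2):
            edges.add(frozenset((a, b)))
    return edges

nets = {"n1": {1, 2}, "n2": {2, 3}, "n3": {4}}
print(net_intersection_graph(nets))   # only n1 and n2 share a vertex (2)
```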
Incremental Algorithms for Network Management and Analysis based on Closeness Centrality
Ahmet Erdem Sariyuce, Kamer Kaya, Erik Saule, Umit V. Catalyurek
Computer Science, 2013
Abstract: Analyzing networks requires complex algorithms to extract meaningful information. Centrality metrics have been shown to correlate with the importance and load of the nodes in network traffic. Here, we are interested in the problem of centrality-based network management. The problem has many applications, such as verifying the robustness of a network and controlling or improving entity dissemination. It can be defined as finding a small set of topological network modifications that yield a desired closeness centrality configuration. As a fundamental building block for tackling that problem, we propose incremental algorithms that efficiently update closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. Our algorithms are shown to be efficient on many real-life networks, especially on small-world networks, which have a small diameter and a spike-shaped shortest-distance distribution. Beyond closeness centrality, they can also serve shortest-path-based management and analysis of networks. We experimentally validate the efficiency of our algorithms on large networks and show that they update the closeness centrality values of the temporal DBLP coauthorship network of 1.2 million users 460 times faster than recomputing them from scratch. To the best of our knowledge, this is the first work that enables practical large-scale network management based on closeness centrality values.
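For reference, the closeness centrality of a vertex in an unweighted graph is the normalized inverse of the sum of its shortest-path distances, computable from scratch with one BFS per vertex. This from-scratch baseline is what the incremental algorithms above avoid repeating; a minimal sketch:

```python
from collections import deque

def closeness(adj, s):
    """Closeness of s: (reachable - 1) / (sum of BFS distances from s)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# path graph 0-1-2: the middle vertex is the most central
adj = {0: [1], 1: [0, 2], 2: [1]}
print(closeness(adj, 1))  # 1.0
```

An edge insertion or deletion only changes distances from some sources, which is precisely the redundancy the paper's incremental updates exploit.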
GPU accelerated maximum cardinality matching algorithms for bipartite graphs
Mehmet Deveci, Kamer Kaya, Bora Ucar, Umit V. Catalyurek
Computer Science, 2013
Abstract: We design, implement, and evaluate GPU-based algorithms for the maximum cardinality matching problem in bipartite graphs. Such algorithms have a variety of applications in computer science, scientific computing, bioinformatics, and other areas. To the best of our knowledge, ours is the first study focusing on GPU implementations of maximum cardinality matching algorithms. We compare the proposed algorithms with serial and multicore implementations from the literature on a large set of real-life problems; in the majority of cases, one of our GPU-accelerated algorithms is demonstrated to be faster than both the sequential and multicore implementations.
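As background, the maximum cardinality matching problem these GPU algorithms solve can be illustrated with the classic serial augmenting-path approach (Kuhn's algorithm), a simplified stand-in for the paper's parallel variants:

```python
def max_bipartite_matching(adj_left, n_right):
    """adj_left[u] lists right-vertices adjacent to left-vertex u.
    Returns the size of a maximum matching via augmenting paths."""
    match_r = [-1] * n_right          # right-vertex -> matched left-vertex

    def try_augment(u, seen):
        for v in adj_left[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its partner can be rerouted elsewhere
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj_left)))

# left0-{r0, r1}, left1-{r0}: a perfect matching of size 2 exists
print(max_bipartite_matching([[0, 1], [0]], 2))  # 2
```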
Finding the Hierarchy of Dense Subgraphs using Nucleus Decompositions
Ahmet Erdem Sariyuce, C. Seshadhri, Ali Pinar, Umit V. Catalyurek
Computer Science, 2014
Abstract: Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization to name a few. Yet most standard formulations of this problem (like clique, quasiclique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine relationships among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any structural relations. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which enables discovering overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-truss decompositions. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
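Of the classic decompositions that the nucleus decomposition generalizes, k-core is the simplest to illustrate: repeatedly peel off a minimum-degree vertex. A hedged sketch of that peeling idea (not the authors' nucleus algorithm):

```python
def core_numbers(adj):
    """Peeling: remove a minimum-degree vertex at each step; the running
    maximum of removal-time degrees gives each vertex's core number."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    core = {}
    k = 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=lambda u: deg[u])
        k = max(k, deg[v])
        core[v] = k
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return core

# triangle plus a pendant: triangle vertices form the 2-core, pendant is 1-core
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(core_numbers(adj))
```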
Metagenomic Insights into the Carbohydrate-Active Enzymes Carried by the Microorganisms Adhering to Solid Digesta in the Rumen of Cows
Lingling Wang, Ayat Hatem, Umit V. Catalyurek, Mark Morrison, Zhongtang Yu
PLOS ONE, 2013, DOI: 10.1371/journal.pone.0078507
Abstract: The ruminal microbial community is a unique source of enzymes that underpin the conversion of cellulosic biomass. In this study, the microbial consortia adherent on solid digesta in the rumen of Jersey cattle were subjected to an activity-based metagenomic study to explore the genetic diversity of carbohydrolytic enzymes in Jersey cows, with a particular focus on cellulases and xylanases. Pyrosequencing and bioinformatic analyses of 120 carbohydrate-active fosmids identified genes encoding 575 putative Carbohydrate-Active Enzymes (CAZymes) and proteins putatively related to transcriptional regulation, transporters, and signal transduction coupled with polysaccharide degradation and metabolism. Most of these genes shared little similarity to sequences archived in databases. Genes that were predicted to encode glycoside hydrolases (GH) involved in xylan and cellulose hydrolysis (e.g., GH3, 5, 9, 10, 39 and 43) were well represented. A new subfamily (S-8) of GH5 was identified from contigs assigned to Firmicutes. These subfamilies of GH5 proteins also showed significant phylum-dependent distribution. A number of polysaccharide utilization loci (PULs) were found, and two of them contained genes encoding Sus-like proteins and cellulases that have not been reported in previous metagenomic studies of samples from the rumens of cows or other herbivores. Comparison with the large metagenomic datasets previously reported of other ruminant species (or cattle breeds) and wallabies showed that the rumen microbiome of Jersey cows might contain differing CAZymes. Future studies are needed to further explore how host genetics and diets affect the diversity and distribution of CAZymes and utilization of plant cell wall materials.
Graph Coloring Algorithms for Multi-core and Massively Multithreaded Architectures
Umit Catalyurek, John Feo, Assefaw Gebremedhin, Mahantesh Halappanavar, Alex Pothen
Computer Science, 2012
Abstract: We explore the interplay between architectures and algorithm design in the context of shared-memory platforms and a specific graph problem of central importance in scientific and high-performance computing, distance-1 graph coloring. We introduce two different kinds of multithreaded heuristic algorithms for the stated, NP-hard, problem. The first algorithm relies on speculation and iteration, and is suitable for any shared-memory system. The second algorithm uses dataflow principles, and is targeted at the non-conventional, massively multithreaded Cray XMT system. We study the performance of the algorithms on the Cray XMT and two multi-core systems, Sun Niagara 2 and Intel Nehalem. Together, the three systems represent a spectrum of multithreading capabilities and memory structure. As testbed, we use synthetically generated large-scale graphs carefully chosen to cover a wide range of input types. The results show that the algorithms have scalable runtime performance and use nearly the same number of colors as the underlying serial algorithm, which in turn is effective in practice. The study provides insight into the design of high performance algorithms for irregular problems on many-core architectures.
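The serial baseline underlying both multithreaded algorithms is first-fit greedy coloring; a minimal sketch (the speculative/iterative and dataflow parallelizations are not shown):

```python
def greedy_color(adj, order=None):
    """First-fit greedy distance-1 coloring: each vertex receives the
    smallest color not used by its already-colored neighbors."""
    color = {}
    for v in (order or list(adj)):
        taken = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# a triangle needs 3 colors
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(greedy_color(adj))  # {0: 0, 1: 1, 2: 2}
```

The speculative approach in the paper runs this loop concurrently on all threads, then detects and re-colors vertices whose neighbors ended up with the same color.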
A Practical Approach to Setup Scheduling and Lot Sizing in a Textile Industry
Umit Akinc
Gestão & Produção, 1995, DOI: 10.1590/S0104-530X1995000100003
Abstract: This is a study of the scheduling of setups and production activities at a textile firm located in North Carolina, USA. The firm faces the problem of scheduling customer orders on a number of knitting machines, which can be configured differently by installing different knitting cylinders to knit various types of greige cloth. Given a set of requirements for different styles of cloth, the problem is to decide on the specific configurations to be used on each machine and on the specific orders to be run on these configurations. The problem is formulated as an integer linear programming model. The objective is the maximization of the total contribution of all scheduled orders, subject to the capacity constraints of machines and tooling, which explicitly consider the effect of scheduled setups, and to constraints on customer orders. Various solution approaches are discussed. An approximate procedure is devised which incrementally adds new setups based on several heuristics by which the "value" of candidate configurations for the machines is evaluated. These heuristics can either be developed into a self-contained scheduling procedure or be used interactively by a human scheduler in a microcomputer environment.
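The incremental heuristic described above (adding setups by an estimated "value") resembles a contribution-density greedy. A toy sketch under illustrative assumptions; the names, the value function, and the single aggregate capacity are hypothetical, not the paper's model:

```python
def schedule_setups(configs, capacity):
    """configs: {name: (contribution, capacity_used)} for candidate
    machine configurations (hypothetical data). Greedily add the
    configuration with the best contribution per unit of capacity."""
    chosen = []
    remaining = capacity
    for name, (profit, used) in sorted(
            configs.items(),
            key=lambda kv: kv[1][0] / kv[1][1],  # contribution density
            reverse=True):
        if used <= remaining:
            chosen.append(name)
            remaining -= used
    return chosen, remaining

configs = {"cylA": (120.0, 40), "cylB": (90.0, 50), "cylC": (60.0, 10)}
print(schedule_setups(configs, 60))
```

The paper's actual procedure works against the full ILP with per-machine and tooling constraints; this sketch only conveys the "add by value" flavor.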
Punishments Applied to Children by a Group of Pre-School Teachers in Turkey
Umit Deniz
Pakistan Journal of Social Sciences, 2012
Abstract: This study was carried out to determine whether a group of pre-school teachers applied punishment to children, what types of behavior were punished, and what the punishments for those behaviors were. The study is descriptive research. The study group consisted of 41 teachers at official kindergartens in the central towns of Etimesgut and Sincan in the city of Ankara, Turkey. A questionnaire made up of open-ended questions was used to gather data, and the data obtained were analyzed through content analysis. Thirty-four of the teachers included in the study stated that they applied punishment to children. It was found that teachers mostly punished aggressive behavior and disobeying the rules. For these behaviors, teachers reported applying punishments in categories such as keeping children away from favorite activities, time-outs, and having them make amends.
Zoonotics
Umit Khanna
Pharmaceutical Reviews, 2005
Abstract: The use of weapons of mass destruction by international terrorist organizations remains a potential threat, but in my opinion this is not the major risk to the public at large. Biological weapons of mass destruction such as anthrax or other pathogens are relatively difficult to stabilize, transport, and effectively disseminate on a large scale, whereas some agents, known as zoonotic agents, are easy to stabilize and simpler to transport. The term zoonosis derives from the Greek words zoon, meaning animal, and nosos, meaning disease (1, 2). The World Health Organization defines zoonoses (zoonosis, sing.) as "those diseases and infections which are naturally transmitted between vertebrate animals and man". They demonstrate the ability to infect humans and animals. Numerous organisms have zoonotic potential, many of which cause devastating diseases. Both dogs and cats are born with many worms that go dormant in their muscle tissue and emerge throughout the pet's life during periods of stress (especially pregnancy) or sickness. Therefore even pets without exposure to other pets can test positive for worms after previously testing negative on a fecal exam. In addition, they continuously pick up microscopic worm eggs from the environment. The worms go through their life cycle in the dog or cat, causing various degrees of trouble to the pet, and end up spreading more worm eggs via the stool.
Copyright © 2008-2017 Open Access Library. All rights reserved.