Search Results: 1 - 10 of 100 matches
 International Journal of Computer Science Issues, 2011, Abstract: The Internet has experienced phenomenal growth driven by increasing demand for content, content distribution, and other services. CDNs have evolved as cooperative, collaborative groups of networks over the Internet in which content is replicated across surrogate servers to improve delivery performance for clients and reduce service cost for CDN providers. However, a CDN is limited in terms of Points of Presence (PoPs) and scalability. This work is concerned with content object replication among peering CDNs. It provides an analytical model that casts the replication problem as a constrained optimization problem subject to a mix of QoS …
 Journal of Computer Networks and Communications, 2011, DOI: 10.1155/2011/707592 Abstract: This paper proposes a new replica placement algorithm that expands the exhaustive-search limit within reasonable calculation time. It combines a new type of parallel data-flow processor with an architecture tuned for fast calculation. The replica placement problem is to find a replica-server set satisfying service constraints in a content delivery network (CDN). It derives from the set cover problem, which is known to be NP-hard. Exhaustive search for optimal replica placement is impractical in large-scale networks, because calculation time grows with the number of combinations. Heuristic algorithms have been proposed to reduce calculation time, but no heuristic algorithm is guaranteed to find the optimal solution. The proposed algorithm suits parallel processing and pipeline execution and is implemented on DAPDNA-2, a dynamically reconfigurable processor. Experiments show that the proposed algorithm expands the exhaustive-search limit by a factor of 18.8 compared to the conventional algorithm running on a von Neumann-type processor. 1. Introduction Content delivery networks (CDNs) [1–3] are being developed to improve the user experience when downloading voluminous files such as music and videos, a rapidly growing component of Internet traffic. A CDN consists of two types of servers: origin servers and replica servers. The original data is stored on the origin server and then copied to the replica servers, which are geographically distributed. A user requesting content is connected to a replica server automatically selected by the network, which then sends the content to the user. Replica selection is based on the distance between the server and the user; usually, the closest server is selected [4]. One important issue in CDN performance is replica placement [5].
The problem is deciding which servers are to hold which replicas. Replica servers cache the origin servers' contents to prevent traffic congestion and maintain user performance. They allow CDN providers to minimize capital expenditures (CapEx) and operational expenditures (OpEx). Note that cache size is restricted; no replica server can hold all of the content held by the origin server. For each content item, we must pick a subset of servers, not all of them, to hold the replica. In addition, to achieve adequate user performance, each replica server must serve a limited delivery area, expressed as a distance from the replica server. Each user must lie within the delivery area of at least one replica …
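The set-cover view described above can be illustrated with the standard greedy heuristic: each candidate server "covers" the users inside its delivery area, and we repeatedly pick the server that covers the most still-uncovered users. This is a minimal sketch of the formulation, not the paper's parallel DAPDNA-2 algorithm; the server/user names are hypothetical.

```python
# Greedy set-cover heuristic for replica placement (illustrative sketch,
# not the paper's DAPDNA-2 algorithm). coverage maps each candidate
# server to the set of users within its delivery area.

def greedy_replica_placement(coverage):
    """Return a list of servers chosen greedily until all coverable
    users are covered (the classic ln(n)-approximation for set cover)."""
    uncovered = set().union(*coverage.values()) if coverage else set()
    chosen = []
    while uncovered:
        # Pick the server whose delivery area covers the most uncovered users.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:  # remaining users are not coverable by any server
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

coverage = {
    "s1": {"u1", "u2", "u3"},
    "s2": {"u3", "u4"},
    "s3": {"u4", "u5"},
}
print(greedy_replica_placement(coverage))  # -> ['s1', 's3']
```

Exhaustive search, by contrast, would examine every subset of servers, which is what limits it to small networks in the experiments above.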
 Computer Science , 2006, Abstract: In this paper, we discuss and compare several policies to place replicas in tree networks, subject to server capacity and QoS constraints. The client requests are known beforehand, while the number and location of the servers are to be determined. The standard approach in the literature is to enforce that all requests of a client be served by the closest server in the tree. We introduce and study two new policies. In the first policy, all requests from a given client are still processed by the same server, but this server can be located anywhere in the path from the client to the root. In the second policy, the requests of a given client can be processed by multiple servers. One major contribution of this paper is to assess the impact of these new policies on the total replication cost. Another important goal is to assess the impact of server heterogeneity, both from a theoretical and a practical perspective. In this paper, we establish several new complexity results, and provide several efficient polynomial heuristics for NP-complete instances of the problem. These heuristics are compared to an absolute lower bound provided by the formulation of the problem in terms of the solution of an integer linear program.
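The "closest server" policy that the abstract above takes as the standard baseline can be sketched concretely: in a tree rooted at the origin, each client is served by the first replica server encountered on its path toward the root. The parent-map representation and node names here are my own illustration, not notation from the paper.

```python
# Closest-server policy in a tree network (illustrative sketch).
# The tree is given as a parent map; the root (origin) maps to None.

def closest_server(parent, servers, client):
    """Return the first server on the client's path to the root,
    or None if no server lies on that path."""
    node = client
    while node is not None:
        if node in servers:
            return node
        node = parent[node]
    return None  # normally the root/origin is always a server

parent = {"root": None, "a": "root", "b": "a", "c": "a"}
servers = {"root", "a"}
print(closest_server(parent, servers, "b"))  # -> 'a'
```

The paper's first new policy relaxes this by letting the chosen server be *any* node on the root path rather than the closest one; the second lets a client's requests be split across multiple servers.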
 International Journal of Computer Science Issues, 2012, Abstract: Replica placement is one of the important factors for improving performance in data grid systems; a good replica placement algorithm can yield substantial performance gains. It should be noted that these algorithms and strategies depend on the architecture of the data grid. Considering the different kinds of data grid architecture, a general graph is the truest representation of a grid, so we propose a new algorithm for placing replicas on graph-based data grids. The proposed algorithm improves performance by minimizing data access time, avoiding unnecessary replications, and balancing the load across replica servers. The algorithm is simulated using OptorSim, a data grid simulator developed by the European DataGrid project.
 Computer Science , 2015, Abstract: A new model of causal failure is presented and used to solve a novel replica placement problem in data centers. The model describes dependencies among system components as a directed graph. A replica placement is defined as a subset of vertices in such a graph. A criterion for optimizing replica placements is formalized and explained. In this work, the optimization goal is to avoid choosing placements in which a single failure event is likely to wipe out multiple replicas. Using this criterion, a fast algorithm is given for the scenario in which the dependency model is a tree. The main contribution of the paper is an $O(n + \rho \log \rho)$ dynamic programming algorithm for placing $\rho$ replicas on a tree with $n$ vertices. This algorithm exhibits the interesting property that only two subproblems need to be recursively considered at each stage. An $O(n^2 \rho)$ greedy algorithm is also briefly reported.
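The optimization criterion above (avoid placements where a single failure event wipes out multiple replicas) can be made concrete with a toy score on a dependency tree: a failing node takes down everything in its subtree, so replicas that share many ancestors are more likely to be lost together. This is only a rough proxy under my own assumptions; the paper's formal criterion and its $O(n + \rho \log \rho)$ dynamic program are more involved.

```python
# Toy correlated-failure score for a replica placement on a dependency
# tree (illustrative proxy, not the paper's formal criterion). A node
# failure wipes out its whole subtree, so for each pair of replicas we
# count how many shared ancestors could kill both at once.

from itertools import combinations

def ancestors(parent, node):
    """Set of strict ancestors of node, given the tree as a parent map."""
    result = set()
    while parent[node] is not None:
        node = parent[node]
        result.add(node)
    return result

def correlated_failure_score(parent, placement):
    """Lower is better: summed shared-ancestor counts over replica pairs."""
    anc = {r: ancestors(parent, r) for r in placement}
    return sum(len(anc[a] & anc[b]) for a, b in combinations(placement, 2))

parent = {"dc": None, "rack1": "dc", "rack2": "dc",
          "h1": "rack1", "h2": "rack1", "h3": "rack2"}
# Two replicas in the same rack share two failure points (rack1 and dc)...
print(correlated_failure_score(parent, ["h1", "h2"]))  # -> 2
# ...while replicas in different racks share only the data center.
print(correlated_failure_score(parent, ["h1", "h3"]))  # -> 1
```

A placement algorithm in this spirit would prefer the second placement; the paper's dynamic program finds such spread-out placements in near-linear time.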
 Journal of Networks, 2011, DOI: 10.4304/jnw.6.3.416-423 Abstract: Content distribution networks have attracted a great deal of attention in recent years. Replica placement problems (RPPs), one of the key technologies in content distribution networks, have been widely studied. In this paper, we propose an optimization model with server storage capacity constraints for RPPs. Furthermore, part of the objective function is represented as a multiple minimum-cost flow model for the first time. …
 International Journal of Computer Networks, 2011, Abstract: The growth in wireless communication technologies has gained a lot of attention in ad-hoc networks, especially in the area of mobile hosts. As the network topology in this type of ad-hoc network changes dynamically, the network frequently becomes disconnected due to radio links that are unstable over short time intervals. In this paper, we propose a succinct solution to the replica allocation problem in a mobile ad-hoc network by exploiting group mobility. The solution is carried out in three phases. Thus, even if hosts get disconnected, we can replicate data items onto mobile hosts so that those hosts can still access the data. Several experiments are conducted to evaluate the performance of the proposed scheme. The experimental results show that the proposed scheme not only obtains higher data accessibility but also produces lower network traffic than prior schemes.
 International Journal on Computer Science and Engineering, 2009, Abstract: Large-scale content distribution systems have been broadly improved using replication techniques. Demanded content can be brought closer to clients by multiplying the source of information geographically, which reduces both access latency and network traffic. Replication also improves system scalability by distributing load across multiple servers. Clients experience low access latency when a copy of the requested object (e.g., a web page or an image) is located in close proximity. The effectiveness of replication depends to a large extent on the position of the replicas. This paper proposes a QoS-based overlay network architecture with an intelligent replica placement algorithm. Its main goal is to improve the network utilization and fault tolerance of the P2P system. In addition to replica placement, it includes a caching technique to reduce search latency. Simulation results show that the proposed architecture attains lower latency and better throughput with reduced bandwidth usage.
 Information Technology Journal, 2008, Abstract: In this study, we present a review of replication techniques that work under peer-to-peer, hybrid, and domain-based approaches for Grid environments, with specific emphasis on replica placement. We identify that peer-to-peer technology contributes a high proportion of Grid data management and replication. A representative set of replica placement schemes was simulated and compared to show their effectiveness.