oalib
Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Fractal Image Compression Using Quadtree Decomposition and Huffman Coding  [PDF]
Veenadevi S.V., A.G. Ananth
Signal & Image Processing, 2012
Abstract: Fractal image compression can be obtained by dividing the original grey-level image into non-overlapping blocks, depending on a threshold value, using the well-known technique of quadtree decomposition. Using a threshold value of 0.2 and Huffman coding for encoding and decoding, these techniques have been applied to the compression of satellite imagery. The compression ratio (CR) and peak signal-to-noise ratio (PSNR) values are determined for three types of images, namely the standard Lena image, a satellite rural image and a satellite urban image. The MATLAB simulation results show that the quadtree decomposition approach yields a very significant improvement in the compression ratios and PSNR values over those derived from fractal compression with the range-block and iterations technique. The results indicate that the CR and PSNR are 2.02 and 29.92 for the Lena image, 3.08 and 29.34 for the satellite rural image, and 5.99 and 28.12 for the satellite urban image, respectively. The results are presented and discussed in this paper.
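A minimal sketch of the threshold-based quadtree splitting step that the abstract pairs with Huffman coding of the resulting blocks. The homogeneity rule mirrors MATLAB's qtdecomp (split a block when its grey-level spread exceeds the threshold); the threshold, block sizes, and toy image below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def quadtree_split(block, threshold=0.2, min_size=4):
    """Recursively split a block until it is homogeneous or minimal in size.

    A block counts as homogeneous when the spread of its normalised grey
    levels stays at or below the threshold.
    """
    h, w = block.shape
    spread = float(block.max() - block.min())
    if spread <= threshold or h <= min_size or w <= min_size:
        return [block]                      # leaf: keep as one block
    h2, w2 = h // 2, w // 2
    leaves = []
    for rows in (slice(0, h2), slice(h2, h)):
        for cols in (slice(0, w2), slice(w2, w)):
            leaves += quadtree_split(block[rows, cols], threshold, min_size)
    return leaves

# Toy usage: a synthetic 64x64 image with values in [0, 1].
rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 0.1
img[16:48, 16:48] += 0.8                    # a bright, non-homogeneous patch
blocks = quadtree_split(img, threshold=0.2)
print(f"{len(blocks)} leaf blocks")         # these blocks would then be entropy coded
```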
Asymmetrical two-level scalar quantizer with extended Huffman coding for compression of Laplacian source  [PDF]
Zoran Peric, Jelena Nikolic, Lazar Velimirovic, Miomir Stankovic, Danijela Aleksic
Mathematics, 2012
Abstract: This paper proposes a novel model of a two-level scalar quantizer with extended Huffman coding. It is designed so that the average bit rate approaches the source entropy as closely as possible, provided that the signal-to-quantization-noise ratio (SQNR) does not decrease by more than 1 dB from the optimal SQNR value. By assuming asymmetry of the representation levels for the symmetric Laplacian probability density function, unequal probabilities of the representation levels are obtained, i.e. a proper basis for the further implementation of lossless compression techniques is provided. In this paper, we are concerned with the extended Huffman coding technique, which provides the shortest codeword lengths for blocks of two or more symbols. For the proposed quantizer with extended Huffman coding, the convergence of the average bit rate to the source entropy is examined for blocks of two to five symbols. It is shown that a higher SQNR is achieved by the proposed asymmetrical quantizer with extended Huffman coding when compared with symmetrical quantizers with extended Huffman coding having equal average bit rates.
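A small numerical sketch, not the paper's design procedure: quantize a unit-variance Laplacian source with a two-level scalar quantizer whose representation levels are deliberately asymmetric, then collect the two-symbol block probabilities that an extended Huffman coder would operate on. The decision threshold and the two representation levels are assumed illustrative values.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
x = rng.laplace(loc=0.0, scale=1.0 / math.sqrt(2), size=200_000)  # unit variance

t = 0.4                                   # assumed asymmetric decision threshold
y_low, y_high = -0.55, 1.10               # assumed asymmetric representation levels
symbols = (x > t).astype(int)             # 0 -> y_low, 1 -> y_high
y = np.where(symbols == 1, y_high, y_low)

sqnr_db = 10 * math.log10(np.mean(x**2) / np.mean((x - y) ** 2))

# Unequal symbol probabilities are exactly what makes the lossless stage pay off.
p1 = symbols.mean()
h1 = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))   # per-symbol entropy

# Probabilities of two-symbol blocks (the "extended" alphabet of size 4).
pairs = symbols[: len(symbols) // 2 * 2].reshape(-1, 2)
block_ids = pairs[:, 0] * 2 + pairs[:, 1]
p_blocks = np.bincount(block_ids, minlength=4) / len(block_ids)

print(f"SQNR = {sqnr_db:.2f} dB, P(high level) = {p1:.3f}")
print(f"per-symbol entropy = {h1:.3f} bits, block probabilities = {np.round(p_blocks, 3)}")
```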
Enhancing Efficiency of Huffman Coding using Lempel Ziv Coding for Image Compression  [PDF]
Dr. C. Saravanan, M. Surender
International Journal of Soft Computing & Engineering, 2013
Abstract: Compression is a technology for reducing the quantity of data used to represent content without excessively reducing the quality of the picture. The need for an efficient image compression technique is ever increasing, because raw images require large amounts of disk space, which is a big disadvantage during transmission and storage. Compression makes it easier to store large amounts of data and reduces the number of bits required to store and transmit digital media. In this paper, a fast lossless compression scheme named HL is presented, consisting of two stages. In the first stage, Huffman coding is used to compress the image. In the second stage, all Huffman codewords are concatenated together and then compressed with Lempel-Ziv coding. This technique is simple to implement and utilizes little memory. A software algorithm has been developed and implemented to compress and decompress the given image using Huffman coding techniques in MATLAB.
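A self-contained sketch of the two-stage idea under simple assumptions, not the paper's implementation: stage one builds a Huffman code over pixel values and concatenates the codewords; stage two hands the packed bitstream to zlib's DEFLATE (LZ77 plus its own Huffman pass), used here as a readily available stand-in for the Lempel-Ziv stage. The toy image and its distribution are assumptions.

```python
import heapq
import itertools
import zlib
from collections import Counter

import numpy as np

def huffman_codes(freqs):
    """Build a prefix code (symbol -> bitstring) from a {symbol: count} mapping."""
    counter = itertools.count()            # tie-breaker so heapq never compares dicts
    heap = [(f, next(counter), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2]

img = np.random.default_rng(2).poisson(4, size=(64, 64)).astype(np.uint8)
pixels = img.ravel().tolist()

codes = huffman_codes(Counter(pixels))               # stage 1: Huffman coding
bitstring = "".join(codes[p] for p in pixels)
padded = bitstring + "0" * (-len(bitstring) % 8)
packed = bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

second_stage = zlib.compress(packed, level=9)        # stage 2: LZ on the codewords
print(f"raw: {len(pixels)} bytes, stage 1: {len(packed)} bytes, stage 2: {len(second_stage)} bytes")
```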
Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression Standards  [PDF]
Asadollah Shahbahrami, Ramin Bahrampour, Mobin Sabbaghi Rostami, Mostafa Ayoubi Mobarhan
Mathematics, 2011
Abstract: Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data is much faster and more efficient than that of the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression-ratio, performance, and implementation points of view? We have implemented and tested the Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding. In addition, the implementation of Huffman coding is much easier than that of arithmetic coding.
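A small numerical illustration of the trade-off described above, using an assumed toy symbol distribution: Huffman assigns an integer number of bits per symbol, while an ideal arithmetic coder approaches the Shannon entropy. This compares ideal code lengths only; it is not an implementation of either standard's entropy-coding stage.

```python
import heapq
import itertools
import math

probs = {"a": 0.70, "b": 0.15, "c": 0.10, "d": 0.05}   # assumed skewed source

# Huffman codeword lengths via the usual pairwise merge procedure.
counter = itertools.count()
heap = [(p, next(counter), {s: 0}) for s, p in probs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    p1, _, d1 = heapq.heappop(heap)
    p2, _, d2 = heapq.heappop(heap)
    merged = {s: l + 1 for s, l in {**d1, **d2}.items()}   # every merge adds one bit
    heapq.heappush(heap, (p1 + p2, next(counter), merged))
lengths = heap[0][2]

huffman_bits = sum(probs[s] * lengths[s] for s in probs)
entropy_bits = -sum(p * math.log2(p) for p in probs.values())

print(f"Huffman average length : {huffman_bits:.3f} bits/symbol")
print(f"Shannon entropy (~AC)  : {entropy_bits:.3f} bits/symbol")
```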
Quantum-inspired Huffman Coding  [PDF]
A. S. Tolba, M. Z. Rashad, M. A. El-Dosuky
Computer Science, 2013
Abstract: Huffman compression, also known as Huffman coding, is one of many compression techniques in use today. The two important features of Huffman coding are instantaneousness, i.e. the codes can be interpreted as soon as they are received, and variable length, i.e. a more frequent symbol has a shorter codeword than a less frequent symbol. Traditional Huffman coding has two procedures: constructing a tree in O(n^2) and then traversing it in O(n). Quantum computing is a promising approach to computation that is based on equations from quantum mechanics. The instantaneousness and variable-length features are difficult to generalize to the quantum case. The quantum coding field was pioneered by Schumacher's work on a block coding scheme. To encode N signals sequentially, it requires O(N^3) computational steps. The encoding and decoding processes are far from instantaneous. Moreover, the lengths of all the codewords are the same. A Huffman-coding-inspired scheme for the storage of quantum information takes O(N(log N)^a) computational steps for a sequential implementation on non-parallel machines.
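A short sketch of the "instantaneousness" property discussed above: because a Huffman code is prefix-free, a decoder can emit each symbol the moment its last bit arrives, simply by walking a binary trie of the codewords. The code table below is an assumed example, not one derived from real data.

```python
codes = {"a": "0", "b": "10", "c": "110", "d": "111"}   # a valid prefix code

# Build the decoding trie: inner nodes are dicts, leaves are symbols.
root = {}
for symbol, word in codes.items():
    node = root
    for bit in word[:-1]:
        node = node.setdefault(bit, {})
    node[word[-1]] = symbol

def decode(bits):
    out, node = [], root
    for bit in bits:
        node = node[bit]
        if isinstance(node, str):        # reached a leaf: symbol known immediately
            out.append(node)
            node = root
    return "".join(out)

message = "abacad"
bits = "".join(codes[s] for s in message)
print(decode(bits) == message)           # True: each symbol decoded as its codeword ends
```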
Lossless Grey-scale Image Compression using Source Symbols Reduction and Huffman Coding
C. SARAVANAN, R. PONALAGUSAMY
International Journal of Image Processing, 2009
Abstract: Usage of images has been increasing, and images are used in many applications. Image compression plays a vital role in saving storage space and saving time while sending images over a network. A new compression technique is proposed to achieve a higher compression ratio by reducing the number of source symbols. The source symbols are reduced by applying source symbol reduction, and Huffman coding is then applied to achieve compression. The source symbol reduction technique reduces the number of source symbols by combining them together to form a new symbol; therefore, the number of Huffman codes to be generated is also reduced. The Huffman code symbol reduction achieves a better compression ratio. An experiment has been conducted using the proposed technique and Huffman coding on standard images. The experimental results show that the newly proposed compression technique achieves a 10% better compression ratio than regular Huffman coding.
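A loose sketch of the symbol-reduction idea under one plausible reading of the abstract (the paper's exact grouping rule is not spelled out here): adjacent grey levels are combined into a single new symbol, so the sequence handed to the Huffman coder is half as long. The 4-bit toy image is an assumption chosen to keep the combined alphabet small.

```python
from collections import Counter

import numpy as np

img = np.random.default_rng(3).integers(0, 16, size=(32, 32), dtype=np.uint8)
pixels = img.ravel()

# Combine each pair of 4-bit grey levels into one 8-bit symbol.
reduced = (pixels[0::2].astype(np.uint16) << 4) | pixels[1::2]

print("symbols before reduction:", pixels.size, "alphabet:", len(Counter(pixels.tolist())))
print("symbols after reduction :", reduced.size, "alphabet:", len(Counter(reduced.tolist())))
# A standard Huffman coder would then be applied to the reduced sequence.
```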
IMAGE COMPRESSION WITH SCALABLE ROI USING ADAPTIVE HUFFMAN CODING  [PDF]
P. Sutha
International Journal of Computer Science and Mobile Computing, 2013
Abstract: Most commercial medical image viewers do not provide scalability in image compression and/or encoding/decoding of a region of interest (ROI). This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in a heterogeneous network. The methodology involves splitting a given DICOM image into two segments, compressing the region of interest with a lossless, quality-sustaining compression scheme such as JPEG2000, and compressing the non-important regions (background, etc.) with adaptive Huffman coding, an algorithm with a very high compression ratio. With this type of compression, energy efficiency is achieved, and after the respective reconstructions the outputs are integrated and combined with the output of a texture-based edge detector. Thus the required targets are attained and texture information is preserved.
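An illustrative sketch of the ROI/background split described above, with stand-in coders: zlib in lossless mode for the ROI (in place of lossless JPEG2000) and coarse re-quantisation plus zlib for the background (in place of the paper's adaptive Huffman stage). The rectangular ROI and the random image are assumed toy inputs, not real DICOM data.

```python
import zlib

import numpy as np

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
roi = (slice(96, 160), slice(96, 160))          # assumed region of interest

roi_bytes = image[roi].tobytes()
roi_compressed = zlib.compress(roi_bytes, level=9)          # lossless ROI path

background = image.copy()
background[roi] = 0                                          # ROI handled separately
background_coarse = (background >> 4) << 4                   # heavy quantisation
background_compressed = zlib.compress(background_coarse.tobytes(), level=9)

print("ROI       :", len(roi_bytes), "->", len(roi_compressed), "bytes (lossless)")
print("background:", background.size, "->", len(background_compressed), "bytes (lossy)")
```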
Asymmetric numeral systems: entropy coding combining speed of Huffman coding with compression rate of arithmetic coding  [PDF]
Jarek Duda
Computer Science, 2013
Abstract: Modern data compression is mainly based on two approaches to entropy coding: Huffman coding (HC) and arithmetic/range coding (AC). The former is much faster, but approximates probabilities with powers of 2, usually leading to relatively low compression rates. The latter uses nearly exact probabilities, easily approaching the theoretical compression-rate limit (Shannon entropy), but at the cost of much higher computational complexity. Asymmetric numeral systems (ANS) is a new approach to accurate entropy coding which allows this trade-off between speed and rate to be ended: the recent implementation [1] provides about 50% faster decoding than HC for a 256-symbol alphabet, with a compression rate similar to that of AC. This advantage comes from ANS being simpler than AC: it uses a single natural number as the state, instead of two numbers representing a range. Besides simplifying renormalization, this allows the entire behavior for a given probability distribution to be put into a relatively small table defining an entropy-coding automaton. The memory cost of such a table for a 256-symbol alphabet is a few kilobytes. There is large freedom in choosing a specific table; using a pseudorandom number generator initialized with a cryptographic key for this purpose allows the data to be simultaneously encrypted. This article also introduces and discusses many other variants of this new entropy-coding approach, which can provide direct alternatives to standard AC, to large-alphabet range coding, or to approximated quasi-arithmetic coding.
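A minimal range-variant ANS (rANS) sketch to make the single-state idea above concrete. It keeps the whole state in one arbitrary-precision integer and omits the renormalisation and tabled (tANS) machinery the paper actually develops, so it shows the coding principle rather than a fast implementation. The toy frequencies are an assumption and must sum to M (a power of two here for convenience).

```python
freqs = {"a": 5, "b": 2, "c": 1}                 # assumed toy frequencies, M = 8
M = sum(freqs.values())
cum, acc = {}, 0
for s, f in freqs.items():                       # cumulative frequency table
    cum[s] = acc
    acc += f

def encode(message, state=1):
    for s in reversed(message):                  # rANS encodes in reverse order
        f = freqs[s]
        state = (state // f) * M + cum[s] + (state % f)
    return state

def decode(state, n):
    out = []
    for _ in range(n):
        r = state % M
        s = next(sym for sym, c in cum.items() if c <= r < c + freqs[sym])
        out.append(s)
        state = freqs[s] * (state // M) + r - cum[s]
    return "".join(out)

msg = "aababacaa"
x = encode(msg)
print(x.bit_length(), "bits of state for", len(msg), "symbols")
print(decode(x, len(msg)) == msg)                # True: the state round-trips the message
```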
Difference-Huffman Coding of Multidimensional Databases  [PDF]
István Szépkúti
Computer Science, 2011
Abstract: A new compression method called difference-Huffman coding (DHC) is introduced in this paper. It is verified empirically that DHC results in a smaller multidimensional physical representation than other previously published techniques (single count header compression, logical position compression, base-offset compression and difference sequence compression). The article examines how caching influences the expected retrieval time of the multidimensional and table representations of relations. A model is proposed for this, which is then verified with empirical data. Conclusions are drawn, based on the model and the experiment, about when one physical representation outperforms the other in terms of retrieval time. Over the tested range of available memory, retrieval from the multidimensional representation was always much quicker than from the table representation.
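A loose sketch of the general idea behind coding a difference sequence of a sparse multidimensional representation: store the gaps between the sorted logical positions of non-empty cells instead of the positions themselves, since the gaps are small and repetitive and therefore suit Huffman coding. The cell count and density are assumptions, and this does not reproduce the paper's exact DHC header layout.

```python
import math
from collections import Counter

import numpy as np

rng = np.random.default_rng(5)
n_cells, density = 1_000_000, 0.02
positions = np.sort(rng.choice(n_cells, size=int(n_cells * density), replace=False))

gaps = np.diff(positions, prepend=0)             # the difference sequence

def empirical_entropy(values):
    counts = np.array(list(Counter(values.tolist()).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"positions: ~{math.ceil(math.log2(n_cells))} bits each if stored directly")
print(f"gaps     : {empirical_entropy(gaps):.2f} bits each at the entropy limit "
      f"(what a Huffman code over the gap alphabet approaches)")
```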
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding  [PDF]
Song Han, Huizi Mao, William J. Dally
Computer Science, 2015
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding, which work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.
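A small NumPy sketch of the first two pipeline stages on a single weight matrix: magnitude pruning followed by k-means weight sharing. The layer size, pruning threshold, and cluster count are illustrative assumptions; the retraining/fine-tuning step and the final Huffman stage are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(0.0, 0.1, size=(256, 128))        # a stand-in fully connected layer

# Stage 1: prune small-magnitude connections.
threshold = 0.13                                  # assumed magnitude threshold
mask = np.abs(W) > threshold
W_pruned = W * mask

# Stage 2: quantise the surviving weights by k-means weight sharing.
k = 32                                            # 5-bit codebook, as in the abstract
nonzero = W_pruned[mask]
centroids = np.linspace(nonzero.min(), nonzero.max(), k)   # linear initialisation
for _ in range(20):                               # a few plain k-means iterations
    assign = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
    for j in range(k):
        members = nonzero[assign == j]
        if members.size:
            centroids[j] = members.mean()

W_shared = W_pruned.copy()
W_shared[mask] = centroids[assign]                # remaining weights share k values

sparsity = 1.0 - mask.mean()
print(f"pruned away {sparsity:.1%} of weights; "
      f"{k} shared values ({int(np.ceil(np.log2(k)))} bits) per remaining weight")
# The codebook indices (plus the sparse structure) would then be Huffman coded,
# as in the third stage of the pipeline.
```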