- Algorithms and Data Compression
- Advanced Data Compression Techniques
- Cellular Automata and Applications
- Semigroups and Automata Theory
- Advanced Data Storage Technologies
- Complexity and Algorithms in Graphs
- Error Correcting Code Techniques
- Parallel Computing and Optimization Techniques
- Advanced Graph Theory Research
- Interconnection Networks and Systems
- Coding Theory and Cryptography
- Advanced Image and Video Retrieval Techniques
- Distributed and Parallel Computing Systems
- Image Retrieval and Classification Techniques
- Digital Image Processing Techniques
- Video Coding and Compression Technologies
- Graph Theory and Algorithms
- Petri Nets in System Modeling
- Numerical Methods and Algorithms
- Limits and Structures in Graph Theory
- Glycosylation and Glycoproteins Research
- Distributed Systems and Fault Tolerance
- Historical Education and Society
- Computational Geometry and Mesh Generation
- Image and Signal Denoising Methods
Sapienza University of Rome
2008-2023
Armstrong Atlantic State University
2001-2004
Brandeis University
1994-2003
We present a survey of results concerning Lempel–Ziv data compression on parallel and distributed systems, starting from the theoretical approach to time complexity and concluding with the practical goal of designing algorithms with low communication cost. Storer's extension to image compression is also discussed.
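As background for the Lempel–Ziv family the survey covers, the sliding-window (LZ1/LZ77-style) scheme can be sketched with a minimal greedy parser. This is a didactic sketch with hypothetical parameter names (`window`, `max_len`), not any of the surveyed parallel algorithms:

```python
def lz77_compress(data: bytes, window: int = 4096, max_len: int = 15):
    """Greedy LZ77 parse: emit (offset, length, next_byte) triples.
    The window slides left to right over the input, as in the survey."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        for j in range(start, i):          # scan the window for the longest match
            l = 0
            while (l < max_len and i + l < len(data) - 1
                   and data[j + l] == data[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        nxt = data[i + best_len]           # literal following the match
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    """Invert the parse by sequentially copying from the output buffer."""
    buf = bytearray()
    for off, ln, nxt in triples:
        for _ in range(ln):
            buf.append(buf[-off])          # handles self-overlapping matches
        buf.append(nxt)
    return bytes(buf)
```

The quadratic window scan is what practical encoders replace with hashing or suffix structures; it is kept here only to make the parse explicit.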
The authors study parallel algorithms for lossless data compression via textual substitution. Dynamic dictionary compression is known to be P-complete; however, if the dictionary is given in advance, they show that compression can be efficiently parallelized, and a computational advantage is obtained when the dictionary has the prefix property. The approach is generalized to the sliding-window method, where the window passes continuously from left to right over the input string.
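To illustrate static-dictionary compression with the prefix property, here is a hedged sketch (not the authors' parallel algorithm): a trie over the dictionary supports greedy longest-match parsing, and because every prefix of a dictionary word is itself in the dictionary, any segment of the text can be parsed independently — the hook that enables parallelization:

```python
def build_trie(dictionary):
    """Trie over a static dictionary; a node is a dict of children,
    with '$' marking the codeword stored at that node."""
    root = {}
    for code, word in enumerate(dictionary):
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = code
    return root

def greedy_parse(text, trie):
    """Longest-match greedy parsing against the static dictionary.
    Assumes every alphabet symbol is a dictionary entry (guaranteed when
    the dictionary has the prefix property and covers the alphabet)."""
    i, codes = 0, []
    while i < len(text):
        node, last_code, last_len = trie, None, 0
        j = i
        while j < len(text) and text[j] in node:
            node = node[text[j]]
            j += 1
            if '$' in node:                 # longest codeword seen so far
                last_code, last_len = node['$'], j - i
        codes.append(last_code)
        i += last_len
    return codes
```

In a parallel setting, each processor could run `greedy_parse` on its own block of the input; the prefix property guarantees every block is parseable on its own.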
Understanding the associations among data items of a given dataset plays a significant role in data mining. One of the well-known methods that delivers such associations is Formal Concept Analysis (FCA), which is able to represent them as a lattice. FCA generates a context for the data set and then builds the lattice (concepts) from that context. If the decision maker changes the granularity of the data, the process of creating the new lattice is repeated from the beginning. Because association mining deals with high data volumes, lattice creation at every change is so computationally expensive as to be prohibitive. This lack...
In this work, we present a scheme for the lossy compression of image sequences based on the Adaptive Vector Quantization (AVQ) algorithm. The AVQ algorithm is designed for grayscale images and processes the input data in a single pass, using the properties of vector quantization to approximate the data. First, we review its key aspects and, subsequently, outline the basic concepts and design choices behind the proposed scheme. Finally, we report experimental results, which highlight an improvement in performance when our scheme is compared with...
We show an O(m+t)-space algorithm to find all the occurrences of a pattern in a text compressed with the ID heuristic that runs in time O(n(m+t)), where m is the pattern length, n is the size of the compressed text, and t is the maximum target length.
The bottom-up hierarchical clustering methodology introduced in this paper is an extension of the self-organizing map neural network (ESOM), and it provides a remedy for two major problems. The first is related to hierarchical clustering approaches, the second to the ability of the self-organizing map (SOM) to perform a clustering task. The crucial problem that hierarchical approaches (top-down and bottom-up) are faced with is the fact that once merging or decomposing of clusters takes place, it is impossible to undo or redo it. The SOM problem stems from the fact that the initial clusters' weight vectors, generated randomly, highly influence the outcome of the clustering.
We present a survey of results concerning Lempel-Ziv data compression on parallel and distributed systems, starting from the theoretical approach to time complexity and concluding with the practical goal of designing algorithms with low communication cost. An extension by Storer to image compression is also discussed.
The unbounded version of the Lempel-Ziv dynamic dictionary compression method is P-complete. Therefore, it is unlikely to be implementable with sublinear work space unless a deletion heuristic is applied to bound the dictionary. The well-known LRU (least recently used) strategy provides the best performance among the existing heuristics. We show experimental results on the effectiveness of the relaxed LRU (RLRUp) heuristic. RLRUp partitions the dictionary in p equivalence classes, so that all the elements of each class are considered to have the same "age" for...
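The idea of bounding a dynamic dictionary with an LRU deletion heuristic can be sketched on a toy LZW-style encoder. This is illustrative only — `max_entries` is a hypothetical parameter, and RLRUp differs by coarsening the exact LRU order into p "age" classes:

```python
from collections import OrderedDict

def lzw_lru_encode(text, max_entries=16):
    """LZW-style encoder with an LRU deletion heuristic: when the dictionary
    reaches max_entries, the least recently used multi-symbol phrase is
    evicted and its code reused. Single symbols are never evicted."""
    alphabet = sorted(set(text))
    d = OrderedDict((ch, i) for i, ch in enumerate(alphabet))
    protected = set(alphabet)
    out, w = [], ""
    for ch in text:
        if w + ch in d:
            w += ch
            d.move_to_end(w)               # refresh the phrase's "age"
        else:
            out.append(d[w])
            d.move_to_end(w)
            if len(d) >= max_entries:      # dictionary full: evict oldest
                for k in d:
                    if k not in protected:
                        code = d.pop(k)    # reuse the evicted code
                        break
                else:
                    code = len(d)
            else:
                code = len(d)
            d[w + ch] = code
            w = ch
    if w:
        out.append(d[w])
    return out
```

A matching decoder must mirror the evictions deterministically, which is exactly why the choice of deletion heuristic (LRU vs. the relaxed RLRUp classes) matters for implementation cost.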
Summary form only given. In this paper, we showed a work-optimal parallel algorithm using the rectangle greedy matching technique, requiring O(log M log n) time on the PRAM EREW model. We show how it is implemented on a mesh of trees, still with optimal work and in the same time. Differently from arrays and trees, meshes of trees have both small diameter and large bisection width, which makes them as fast as hypercubic networks but simpler to build. In our case, the algorithm can even run without slowing down when the number of processors is increased.
We show nearly work-optimal parallel decoding algorithms which run on the PRAM EREW in O(log n) time with O(n/log^{1/2} n) processors for text compressed with the LZ1 and LZ2 methods, where n is the length of the output string. We also present pseudo-work-optimal decoders for finite-window compression requiring logarithmic time and O(dn) work, where d and n are the alphabet size and the output length, respectively. Finally, we observe that decoders with O(n/log n) processors are possible under the non-conservative assumption that the computer word has Ω(log² n) bits.
In this paper, we show a simple lossless compression heuristic for grayscale images. The main advantage of the approach is that it provides a highly parallelizable compressor and decompressor. In fact, it can be applied independently to each block of 8×8 pixels, achieving 80 percent of the compression obtained with LOCO-I (JPEG-LS), the current standard in low-complexity applications. Since the compressed form employs a header and a fixed-length code, the sequential implementations of the encoder and decoder are 50 to 60 percent faster than LOCO-I.
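The parallelizability claimed above comes from the independence of the 8×8 blocks. A minimal sketch of that block decomposition (the actual per-block coding heuristic is not reproduced here) shows how each block could be handed to a separate processor and the image reassembled afterwards:

```python
def blocks_8x8(image):
    """Split a 2-D grayscale image (list of rows, both dimensions multiples
    of 8) into 8x8 blocks, row-major; each block can be coded independently."""
    h, w = len(image), len(image[0])
    assert h % 8 == 0 and w % 8 == 0
    return [[row[x:x + 8] for row in image[y:y + 8]]
            for y in range(0, h, 8) for x in range(0, w, 8)]

def reassemble(blocks, h, w):
    """Inverse of blocks_8x8: rebuild the image from row-major 8x8 blocks."""
    image = [[0] * w for _ in range(h)]
    bw = w // 8
    for idx, blk in enumerate(blocks):
        by, bx = divmod(idx, bw)
        for r in range(8):
            for c in range(8):
                image[by * 8 + r][bx * 8 + c] = blk[r][c]
    return image
```

Because no block reads another block's pixels, the compressor and decompressor both parallelize trivially over the block list.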
A work-optimal O(log n log M) time PRAM-EREW algorithm for lossless image compression by block matching was shown in L. Cinque et al. (2003), where n is the size of the image and M is the maximum match size. The design of a parallel decoder was left as an open problem. By slightly modifying the encoder, in this paper we show how to implement a parallel decoder with O(n/log n) processors on the PRAM-EREW. With the realistic assumption that the compressed image is O(n^{1/2})...