Universal lossless compression algorithm for textual images
9 April 2012
In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As text data now exceeds 40% of total Internet traffic, compressing textual data has become imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman coding, arithmetic coding, the Lempel-Ziv family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm achieves a compression ratio of nearly 87%, which exceeds published results, including those of JBIG2.
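The abstract does not specify the details of the row and column elimination coding scheme; as a rough illustration of the general idea, the following is a minimal, hypothetical sketch in which all-background (blank) rows and columns of a binary textual image are removed and their positions recorded, so the original image can be reconstructed exactly. The function names and data layout are assumptions for illustration only, not the paper's actual method.

```python
def eliminate(image):
    """Drop blank rows/columns from a binary image (list of 0/1 rows).

    Returns the reduced core image plus the bookkeeping needed for
    lossless reconstruction: blank-row indices, blank-column indices,
    and the original dimensions.
    """
    height, width = len(image), len(image[0])
    blank_rows = [r for r in range(height) if not any(image[r])]
    blank_cols = [c for c in range(width)
                  if not any(image[r][c] for r in range(height))]
    keep_rows = [r for r in range(height) if r not in set(blank_rows)]
    keep_cols = [c for c in range(width) if c not in set(blank_cols)]
    core = [[image[r][c] for c in keep_cols] for r in keep_rows]
    return core, blank_rows, blank_cols, height, width


def restore(core, blank_rows, blank_cols, height, width):
    """Rebuild the original image: start from all-background pixels and
    place the core back into the non-blank row/column positions."""
    out = [[0] * width for _ in range(height)]
    rows = [r for r in range(height) if r not in set(blank_rows)]
    cols = [c for c in range(width) if c not in set(blank_cols)]
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[r][c] = core[i][j]
    return out
```

Because only redundant (blank) slices are removed and their indices kept, the transform is invertible, preserving losslessness; the smaller core would then be fed to the entropy-coding stage (e.g., the fixed-to-variable codebook).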
© 2012 Society of Photo-Optical Instrumentation Engineers (SPIE)
Saif al Zahir, "Universal lossless compression algorithm for textual images," Optical Engineering 51(3), 037010 (9 April 2012). https://doi.org/10.1117/1.OE.51.3.037010
