Nowadays, digital information is growing faster than ever before, and data storage has become an indispensable branch of computer science and technology. Based on a review of the current development of data storage, a novel view on the classification and features of data storage is put forward. Several ideas on mass and high-density storage are analyzed. The theory and key technology of holographic storage are introduced from the perspective of ...
Because Toshiba quit the competition, there is only one blue-laser disc standard, Blu-ray Disc (BD), which satisfies the demand for high-definition video programs. However, almost all the relevant patents are held by large companies such as Sony and Philips, so substantial royalties must be paid whenever our products use BD. As our own high-density optical disc storage system, the Next-Generation Versatile Disc (NVD) proposes a new data format and error-correction code with independent intellectual property rights and high cost performance; it achieves higher coding efficiency than DVD and a 12 GB capacity, which meets the demands of playing high-definition video programs. In this paper, we develop Low-Density Parity-Check (LDPC) codes as a new channel-coding scheme: an application scheme using Q-matrix-based LDPC encoding is applied in NVD's channel decoder. Exploiting the embedded, portable nature of an SOPC system, we implemented all the decoding modules on an FPGA and tested them in the NVD experimental environment. Although LDPC codes conflict with the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the proposed system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of NVD's former decoding system, which was implemented purely in hardware.
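The Q-matrix LDPC construction itself is not detailed in the abstract. As a generic illustration of the decoding side only, the following sketch runs a hard-decision bit-flipping decoder over a small parity-check matrix; the (7,4) Hamming matrix stands in for a real LDPC matrix, which would be far larger and sparser, and a production NVD decoder would use soft-decision message passing:

```python
H = [  # parity-check matrix: Hamming (7,4), a tiny stand-in for a sparse LDPC H
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(word, H, max_iters=10):
    # Hard-decision bit flipping: while any parity check fails, flip the bit
    # that participates in the largest number of unsatisfied checks.
    word = list(word)
    for _ in range(max_iters):
        unsat = [i for i, row in enumerate(H)
                 if sum(h & b for h, b in zip(row, word)) % 2]
        if not unsat:
            break  # all checks satisfied: word is a codeword
        counts = [sum(H[i][j] for i in unsat) for j in range(len(word))]
        word[counts.index(max(counts))] ^= 1
    return word
```

With a single flipped bit, one iteration restores the codeword, which is the intuition behind iterative LDPC decoding at larger scale.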
The traditional method of computing the power spectrum of a modulation code is to first compute the autocorrelation function of the code and then take the Fourier transform of that autocorrelation function to obtain the power spectrum. Unfortunately, it is difficult to derive the autocorrelation function from the coding method, so the traditional approach is not well suited to computing the power spectrum of a practical RLL code. In this article, we use a much simpler method based on the one-step state-transition matrix, in which all the parameters can be computed directly from the RLL coding method. We apply this method to compute the power spectrum of a new modulation code, RLL(2, 12; 8, 15), and evaluate the code's spectral performance from the resulting spectrum. We also improve the new modulation code to obtain better spectral performance.
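The state-transition-matrix derivation is not reproduced in the abstract. As a purely numerical cross-check of such analytic results, one can estimate the spectrum of a (d, k)-constrained NRZI waveform by simulation; the run-length distribution below is a uniform placeholder, not the paper's actual code statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def rll_nrzi_waveform(n_runs, d=2, k=12):
    # Random (d, k)-constrained channel sequence: runs of identical symbols
    # with lengths in [d+1, k+1], mapped to a +/-1 NRZI waveform.
    level, out = 1, []
    for _ in range(n_runs):
        out.extend([level] * rng.integers(d + 1, k + 2))  # high is exclusive
        level = -level
    return np.array(out, dtype=float)

def periodogram(x):
    # Simple FFT-based power-spectrum estimate (mean removed to suppress DC).
    X = np.fft.rfft(x - x.mean())
    return (np.abs(X) ** 2) / len(x)

x = rll_nrzi_waveform(5000)
psd = periodogram(x)
```

Averaging many such periodograms approximates the code's power spectrum, against which a closed-form state-transition-matrix result can be checked.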
Proc. SPIE. 7137, Network Architectures, Management, and Applications VI
KEYWORDS: Modulation, Data storage, Field programmable gate arrays, Control systems, Computer programming, Digital video discs, Forward error correction, Optical storage, Optical discs, Intellectual property
In digital storage systems, the aim of channel coding is to improve the efficiency and reliability of the channel. To improve the capacity and quality of optical storage systems, the study of modulation codes is significant. A new RLL(2, 12; 8, 15) run-length-limited code is presented. The construction method of the code is discussed, and the encoding and decoding procedures are also given. The major characteristic parameters of different run-length-limited codes are compared, and the decoder is implemented on an FPGA. With its high modulation rate, i.e., high encoding efficiency, the RLL(2, 12; 8, 15) code is well suited to high-density optical storage systems. The study of modulation codes for high-density optical discs with independent intellectual property rights is also of great significance.
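Reading the code's name, RLL(2, 12; 8, 15) maps each 8-bit data byte to a 15-bit channel word (rate 8/15) while keeping between 2 and 12 zeros between consecutive ones. The mapping table itself is not given in the abstract, but the (d, k) constraint every channel word must satisfy can be sketched as a checker:

```python
def satisfies_rll(bits, d=2, k=12):
    # Check the (d, k) run-length constraint: between any two consecutive 1s
    # there must be at least d and at most k zeros; runs of zeros at the
    # boundaries are only subject to the upper bound k.
    run = 0          # current run of zeros
    seen_one = False
    for b in bits:
        if b == 0:
            run += 1
            if run > k:
                return False
        else:
            if seen_one and run < d:
                return False
            seen_one = True
            run = 0
    return True
```

A real encoder would also track the zero-run carried across word boundaries so that concatenated channel words still satisfy the constraint.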
This paper presents a novel storage architecture called the Volume Holographic Universal Storage Cache (VHUSC) for the purpose of optimizing disk I/O performance. The main idea of VHUSC is to use Volume Holographic Memory (VHM) as a new layer between main memory and disk. VHUSC can lower disk access latency and provide much higher I/O bandwidth and throughput. An application-independent model based on queuing theory is proposed for performance comparison between VHUSC and a traditional disk. The results show performance improvements of up to one order of magnitude.
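The abstract does not specify the queuing model, so as a toy illustration of how such a comparison works, the sketch below uses the textbook M/M/1 mean response time T = 1/(mu - lambda); the service rates are invented numbers (an ~8 ms average disk access versus an assumed sub-millisecond holographic readout), not figures from the paper:

```python
def mm1_response_time(arrival_rate, service_rate):
    # Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    # Requires lambda < mu, otherwise the queue is unstable.
    assert arrival_rate < service_rate
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical rates in requests per millisecond.
arrival = 0.05
disk_T = mm1_response_time(arrival, 0.125)  # disk: ~8 ms mean service time
vhm_T = mm1_response_time(arrival, 2.0)     # holographic cache: ~0.5 ms (assumed)
speedup = disk_T / vhm_T
```

Under these assumed rates the cache layer cuts mean response time by over an order of magnitude, which is the kind of result the abstract reports.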
Proc. SPIE. 5966, Seventh International Symposium on Optical Storage (ISOS 2005)
KEYWORDS: Beam splitters, Holograms, Holography, Data storage, Computer programming, Spatial light modulators, Very large scale integration, Error control coding, Volume holography, Holographic data storage systems
Volume holographic storage is currently the subject of widespread interest as a fast-readout, high-capacity digital data-storage technology. However, cross-talk noise, scattering noise, and the noise gratings formed during a multiple-exposure schedule introduce many burst errors and random errors into the system. In general, row-and-column (RAC) array codes are built from single-parity-check codes, so their error-correcting ability is weak. To achieve an acceptable bit error rate (BER) of 10^-12, this paper presents a multiblock strategy for handling an entire page of data, which may be as large as 1 Mbit, and we design VLSI encoder and decoder architectures for multiblock RAC array coding in volume holographic storage. We analyze the hardware requirements and time delays associated with the multiblock RAC array code.
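The multiblock construction is not detailed in the abstract, but the single-parity-check RAC idea it builds on is simple: append a parity bit to each row and a parity row over all columns, and a single flipped bit is then located at the intersection of the failing row check and the failing column check. A minimal sketch:

```python
def rac_encode(block):
    # block: list of equal-length rows of bits.
    # Append one parity bit per row, then one parity row over all columns.
    rows = [r + [sum(r) % 2] for r in block]
    parity_row = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity_row]

def rac_correct_single(code):
    # A single bit error makes exactly one row parity and one column parity
    # fail; flip the bit at their intersection.
    bad_rows = [i for i, r in enumerate(code) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*code)) if sum(c) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        code[bad_rows[0]][bad_cols[0]] ^= 1
    return code
```

This corrects one error per block; the paper's multiblock strategy tiles a holographic page (up to 1 Mbit) into many such blocks so that burst errors are spread across independent row/column checks.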
Gigabit Ethernet technology has already matured. Compared with other technologies, Gigabit Ethernet offers higher performance and easier implementation, so in our volume holographic memory (VHM) system we choose Gigabit Ethernet as the channel interface for volume holographic storage.
Proc. SPIE. 5060, Sixth International Symposium on Optical Storage (ISOS 2002)
KEYWORDS: Holography, Clocks, Digital holography, Data storage, Remote sensing, Data processing, Very large scale integration, Volume holography, Holographic data storage systems, Computer architecture
Volume holography is currently the subject of widespread interest as a fast-readout-rate, high-capacity digital data-storage technology. However, cross-talk noise, scattering noise, and the noise gratings formed during a multiple-exposure schedule introduce many burst errors and random errors into the system. Reed-Solomon error-correction codes have been widely used to protect digital data against such errors, but the speed of a Reed-Solomon decoder for a volume holographic storage system is a challenge. This paper presents a high-speed VLSI decoder architecture for decoding (255, 223) Reed-Solomon codes with the Modified Berlekamp-Massey algorithm for volume holographic storage. In contrast to conventional Berlekamp-Massey architectures, the speed bottleneck is eliminated via a series of algorithmic transformations that result in a fully systolic architecture in which a single array of processors computes both the error-locator and the error-evaluator polynomials. The proposed architecture requires approximately 25% fewer multipliers and a simpler control structure than architectures based on the popular extended Euclidean algorithm. By adopting a high-speed CPLD, a data-processing rate of over 200 Mbit/s is achieved. Moreover, for block-interleaved Reed-Solomon codes, embedding the interleaver memory into the decoder yields a further increase in throughput.
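The Modified Berlekamp-Massey variant is not reproduced in the abstract. As a self-contained illustration of the core recursion (discrepancy computation and the conditional register-length update), here is the classic Berlekamp-Massey LFSR synthesis over GF(2); a real RS(255, 223) decoder runs the same recursion over GF(2^8) on the syndrome sequence to obtain the error-locator polynomial:

```python
def berlekamp_massey(s):
    # Classic Berlekamp-Massey over GF(2): find the shortest LFSR
    # (connection polynomial C, length L) generating bit sequence s.
    n = len(s)
    C = [1] + [0] * n  # current connection polynomial
    B = [1] + [0] * n  # connection polynomial before the last length change
    L, m = 0, 1
    for i in range(n):
        # Discrepancy: next LFSR output vs. the actual sequence bit.
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d:
            T = C[:]
            for j in range(n + 1 - m):  # C(x) += x^m * B(x)  (over GF(2))
                C[j + m] ^= B[j]
            if 2 * L <= i:
                L, B, m = i + 1 - L, T, 1
            else:
                m += 1
        else:
            m += 1
    return C[:L + 1], L
```

In the RS setting, L plays the role of the number of errors and the roots of C locate them; the paper's systolic architecture pipelines exactly this iteration across an array of GF(2^8) processors.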