In this paper, we design and implement a data disaster recovery system based on storage virtualization. The system implements its function through the collaboration of multiple sites across a wide area. Each site is an independent data center and disaster recovery unit, built on a storage-virtualization gateway, a storage-virtualization platform that supports multiple storage protocols. Data disaster recovery is then achieved through the gateway's implementation of data replication, failure detection, and service switchover/takeover among the sites.
Nowadays, content-based network storage has become a hot research topic in both academia and industry. To solve the problem of hit-rate decline caused by migration and to support content-based queries, we develop a new content-aware storage system with metadata retrieval to improve query performance. First, we extend the SCSI command descriptor block so that the system can understand self-defined query requests. Second, the extracted metadata is encoded in Extensible Markup Language (XML) to improve interoperability. Third, according to the demands of information lifecycle management (ILM), we store data at different storage tiers and use a corresponding query strategy to retrieve it. Fourth, since the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, experiments indicate that the retrieval strategy and sorting algorithm improve retrieval efficiency and precision.
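As a rough illustration of the second step, metadata extracted from a file could be encoded as XML together with a content identifier derived from the data itself. This is a minimal sketch; the element names and the choice of SHA-1 are assumptions for illustration, not the paper's actual schema:

```python
import hashlib
import xml.etree.ElementTree as ET

def encode_metadata(path: str, content: bytes, tier: str) -> str:
    """Encode extracted file metadata as XML (hypothetical schema)."""
    root = ET.Element("file")
    # Content identifier derived from the data itself, used for
    # locating data and computing block correlation.
    ET.SubElement(root, "content_id").text = hashlib.sha1(content).hexdigest()
    ET.SubElement(root, "path").text = path
    ET.SubElement(root, "size").text = str(len(content))
    ET.SubElement(root, "tier").text = tier  # ILM storage level
    return ET.tostring(root, encoding="unicode")

xml_doc = encode_metadata("/data/report.txt", b"hello world", "online")
print(xml_doc)
```

Because the identifier is computed from the content, any replica of the file yields the same `content_id`, which is what makes location-independent lookup possible.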
This paper proposes a server-free backup model deployed on a Content Aware Storage System (CASS). CASS is a new type of information storage system consisting of three layers; the middle layer, content management and control, is the core of the storage system. Compared with host-based, LAN-based, and LAN-free backup models, this work designs a series of models to realize a server-free backup solution, including backup task detection, a detection agent, backup activation condition query, semantics derivation, a storage resource balancer, de-duplication, and a copy agent. This server-free backup model not only speeds up the backup operation but also off-loads the LAN and the host computer, ensuring optimum performance and continuous data access on the network.
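One building block named above, de-duplication, can be sketched as content-addressed block storage in which identical blocks are kept only once. This is a minimal illustration of the general technique, not the paper's implementation; the block size and hash choice are assumptions:

```python
import hashlib

class DedupStore:
    """Content-addressed block store: identical blocks are stored once."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}        # digest -> block data (unique blocks only)
        self.logical_bytes = 0  # bytes presented for backup

    def put(self, data: bytes) -> list:
        """Split data into blocks, store new blocks, return a recipe."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # skip known blocks
            recipe.append(digest)
        self.logical_bytes += len(data)
        return recipe

    def get(self, recipe: list) -> bytes:
        """Reassemble the original data from its recipe."""
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
r1 = store.put(b"A" * 8192)   # two identical 4 KiB blocks
r2 = store.put(b"A" * 4096)   # a duplicate of an existing block
assert store.get(r1) == b"A" * 8192
assert len(store.blocks) == 1  # only one unique block is actually kept
```

The recipe (a list of digests) is all that must travel over the LAN for a duplicate block, which is one reason de-duplication helps off-load the network.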
This research proposes a new Storage Data Gateway (SDG) for the Content Aware Storage System (CASS). CASS consists of a storage application service, content auto-tiering storage, and a management and control server, the Content Management Server (CMS). The system gives each stored object a unique content address derived from the content itself, so the physical location of stored information becomes irrelevant to users. The CMS takes advantage of the tiered data architecture and decides data migration strategies. However, the CMS becomes overloaded if the system assigns backup and data migration work to it, as in the LAN-based model. In a server-free model, the CMS does not need to control each data duplication or migration. The system instead needs a special device that transfers data between high-speed and low-speed storage nodes and intermixes FC and IP storage nodes using content information, taking over this responsibility from the CMS.
Such a device is a new type of SDG that requires content awareness, high bandwidth, strong computing ability, and high reliability. According to these requirements, we design the Content Aware SDG (CA-SDG) based on ATCA specifications.
Advanced Telecommunications Computing Architecture (ATCA) is a new series of PICMG specifications targeted at the requirements of the next generation of carrier-grade communications equipment. This series of specifications incorporates the latest trends in high-speed interconnect technologies, next-generation processors, and improved reliability, manageability, and serviceability.
The hardware system of the CA-SDG consists of two compute units (one handles commands from the CMS; the other transfers data between storage nodes), two network switch units (one redundant), two network interface units (one group interconnects the application service layer, and the other provides communication with heterogeneous storage nodes), one local storage unit, and one manager unit. The network switch units and compute units form a dual-star switch network. The network interface units and local storage units, implemented as AMC cards, connect to the compute units through the PCI-E bus, supplying extended network interfaces and local storage capacity. The manager unit is responsible for managing the whole CA-SDG device.
Fibre Channel (FC) has become the main storage protocol of the SAN (Storage Area Network), and enterprises are increasingly deploying FC SANs in their data centers. To mitigate the risk of losing data, more and more enterprises are adopting storage extension technologies based on WDM (Wavelength Division Multiplexing) to replicate their business-critical data to a secondary site.
Storage extension based on WDM uses a credit-based flow control mechanism, which offers immediate access to the full link bandwidth without a ramp-up time; however, the update frequency of credits, and with it the throughput of storage extension, is limited by long extension distances. Enlarging the receiver memory can remove this limitation, but it also introduces long queuing delays.
In this paper, a Petri-net model of the FC flow control protocol is constructed to analyze how storage extension performance depends on bandwidth, distance, and the initial number of credits. A new integrated flow control mechanism for WDM-based storage extension is then proposed. It consists of three flow control loops, A, B, and C: A operates between the FC sender and the WDM ingress gateway, B between the WDM ingress and egress gateways, and C between the WDM egress gateway and the FC receiver; the three loops are integrated through the buffer utilization of the ingress and egress gateways. Theoretical analysis and simulation results show that the new flow control method effectively removes the distance limitation and allows data traffic to flow at maximum throughput without adding queuing latency. Implemented in the WDM gateway, the method is transparent to the FC protocol and provides good compatibility.
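The distance limitation that motivates this work follows from the credit mechanism itself: a buffer-to-buffer credit is returned only after a round trip, so throughput is capped at credits × frame size / RTT. A back-of-the-envelope sketch (the 2112-byte frame size, 2 Gb/s line rate, and ~5 µs/km propagation figure are assumed for illustration):

```python
C_FIBER = 2.0e8  # signal speed in fibre, m/s (~5 us per km)

def max_throughput(credits, frame_bytes, distance_km, line_rate_bps):
    """Credit-limited throughput of an FC link extended over WDM.

    The sender may have at most `credits` frames in flight; each
    credit is returned only after one round trip, so throughput is
    capped at credits * frame_size / RTT (and by the line rate).
    """
    rtt = 2 * distance_km * 1000 / C_FIBER          # round-trip time, s
    credit_limit = credits * frame_bytes * 8 / rtt  # bits per second
    return min(line_rate_bps, credit_limit)

# 16 credits, 2112-byte frames, 2 Gb/s FC link at growing distances:
for km in (10, 100, 500):
    print(km, "km:", max_throughput(16, 2112, km, 2e9) / 1e9, "Gb/s")
```

At short distances the line rate is the binding constraint; as distance grows, the credit term takes over and throughput collapses, which is exactly the effect the proposed three-loop mechanism is designed to remove.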
As Fibre Channel becomes the key storage protocol of the SAN (Storage Area Network), enterprises are increasingly deploying FC SANs in their data centers. Meanwhile, organizations face an enormous influx of data that must be stored, protected, backed up, and replicated to mitigate the risk of data loss. One of the best ways to achieve this goal is to deploy SAN extension based on CWDM (Coarse Wavelength Division Multiplexing). Availability is one of the key performance metrics for business continuity and disaster recovery, and it has to be well understood by IT departments when deploying CWDM-based SAN extension, for it determines accessibility to remotely located data sites. In this paper, several architectures of storage extension over CWDM are analyzed and their availabilities are calculated. Furthermore, two high-availability storage extension architectures with 1:1 or 1:N protection are designed, and the availability of these protected CWDM-based storage extension schemes is evaluated.
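The availability arithmetic behind such comparisons can be sketched with the standard steady-state formulas: an unprotected (series) path is the product of its component availabilities, and 1:1 protection places two such paths in parallel. The MTBF/MTTR figures below are assumed purely for illustration, not taken from the paper:

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability of one component: MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*parts):
    """Unprotected path: every element in the chain must be up."""
    result = 1.0
    for a in parts:
        result *= a
    return result

def protected_1to1(a_path):
    """1:1 protection: two independent paths, at least one must be up."""
    return 1 - (1 - a_path) ** 2

# Hypothetical chain: CWDM mux/demux, line amplifier, fibre span
path = series(availability(5e5, 4),   # mux/demux
              availability(2e5, 6),   # amplifier
              availability(1e5, 8))   # fibre span
print("unprotected:", path)
print("1:1 protected:", protected_1to1(path))
```

The parallel formula assumes independent failures of the two paths; shared conduits or equipment would break that assumption and lower the real protected availability.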
Organizations increasingly face an enormous influx of data that must be stored, protected, backed up, and replicated. One of the best ways to meet this need is to interconnect geographically dispersed SANs through reliable, high-speed links. In this storage extension application, flow control deals with the situation where a device receives frames faster than it can process them; without flow control, the device would be forced to drop frames. The FC flow control protocol is a credit-based mechanism commonly used for SAN extension over WDM and over SONET/SDH. With FC flow control, when a source storage device intends to send data to a target storage device, the initiating device must first receive credits from the target. For every credit the initiating device obtains, it is permitted to transmit one FC frame, so congestion is always avoided in the network. This paper analyzes the mechanism of FC flow control and its limitations in SAN extension as the extension distance increases. Computed results indicate that the maximum link efficiency and throughput in SAN extension depend on the number of credits, the frame size, and the extension distance. To achieve maximum link efficiency and throughput, an extended FC flow control mechanism is proposed.
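The dependence of link efficiency on credits, frame size, and distance can be sketched with a simplified model in which the sender bursts its credits and must then wait for the first credit to return. This is a first-order approximation (it ignores R_RDY serialization and processing delay), with an assumed 2 Gb/s line rate and ~5 µs/km propagation:

```python
import math

C_FIBER = 2.0e8  # signal speed in fibre, m/s (~5 us per km)

def link_efficiency(credits, frame_bytes, distance_km, line_rate_bps):
    """Fraction of line rate usable under credit-based flow control."""
    t_frame = frame_bytes * 8 / line_rate_bps  # frame serialization time
    rtt = 2 * distance_km * 1000 / C_FIBER     # propagation round trip
    # The sender transmits credits * t_frame of data per t_frame + rtt cycle.
    return min(1.0, credits * t_frame / (t_frame + rtt))

def credits_for_full_rate(frame_bytes, distance_km, line_rate_bps):
    """Smallest credit count that keeps the link fully utilized."""
    t_frame = frame_bytes * 8 / line_rate_bps
    rtt = 2 * distance_km * 1000 / C_FIBER
    return math.ceil((t_frame + rtt) / t_frame)

print(link_efficiency(16, 2112, 100, 2e9))    # well below 1 at 100 km
print(credits_for_full_rate(2112, 100, 2e9))  # credits needed at 100 km
```

Under this model, required credits grow linearly with distance and inversely with frame size, which matches the qualitative relationship the abstract describes.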
As storage environments and storage area networks (SANs) grow, enterprises increasingly need to extend data transfers beyond the confines of the enterprise over longer distances, such as metropolitan area networks (MANs) and wide area networks (WANs), for disaster-recovery and business-continuity applications. By using virtual concatenation (VCAT), the link capacity adjustment scheme (LCAS), and the Generic Framing Procedure (GFP), next-generation SONET/SDH can move SCSI commands and block-level data over long distances in an efficient and cost-effective manner. This paper analyzes the limitations of traditional SONET/SDH for storage services and the new characteristics of next-generation SONET/SDH. The design approach and steps for a GFP interface based on SOPC are proposed, and the architecture of SAN extension based on next-generation SONET/SDH is presented.
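The core idea of VCAT, splitting one payload stream across several independently routed members and reassembling it at the sink, can be illustrated conceptually. Byte-level round-robin here is a deliberate simplification of the real container mapping, used only to show the split/reassemble symmetry:

```python
def vcat_split(payload: bytes, members: int) -> list:
    """Distribute a payload round-robin across VCAT member channels."""
    return [payload[i::members] for i in range(members)]

def vcat_reassemble(streams: list) -> bytes:
    """Interleave the member streams back into the original payload."""
    out = bytearray()
    for i in range(max(len(s) for s in streams)):
        for s in streams:
            if i < len(s):   # trailing members may be one byte shorter
                out.append(s[i])
    return bytes(out)

blocks = vcat_split(b"SCSI block-level data", 3)  # three member channels
assert vcat_reassemble(blocks) == b"SCSI block-level data"
```

In real next-generation SONET/SDH, LCAS then adds or removes members from such a group hitlessly, letting the pipe's capacity track the storage traffic.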
As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs. To mitigate the risk of losing data and improve data availability, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier-grade environment with zero data loss, scalable throughput, low jitter, high security, and the ability to travel long distances. To address these business requirements, there are three basic architectures for storage extension: storage over Internet Protocol, storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH), and storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency), and multiple-carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost, high-performance connectivity solution for enterprises deploying storage extension. In this paper, we design a storage extension connectivity over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connectivity; the test results show that the performance of the CWDM connectivity is acceptable. Furthermore, we propose three kinds of network architecture for CWDM-based SAN extension. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.