Current and future communications networks must provide QoS guarantees for a rapidly growing number of telecommunications services. Therefore, various communications systems, such as wireless and fixed access networks, apply reservation MAC protocols, which provide good network utilization (particularly important in access networks, where data rates are typically limited) and ensure that the different QoS guarantees of various telecommunications services can be met. This matters because of the intense competition among communications technologies in the access area. The MAC protocols considered here apply a per-packet reservation method to avoid the transmission gaps caused by per-burst reservation and thereby achieve better network utilization. However, per-packet reservation increases the signaling load, which calls for an efficient resource sharing strategy in the signaling channel. There are two basic solutions for capacity sharing in the signaling channel: random access, usually using slotted ALOHA, and dedicated access, realized by a polling method. The performance of the basic protocols can be improved in different ways: by protocol extensions, by combining different protocol solutions, and by applying adaptive protocols that change access parameters according to the current network status. The best network performance is achieved by a two-step reservation protocol, which, combined with additional features such as an appropriate signaling procedure, priority and fairness mechanisms, and combined reservation domains, can fulfill the requirements of services with high QoS demands.
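As a rough illustration of why a random-access signaling channel needs careful load control, the classic slotted-ALOHA throughput formula S = G * e^(-G) can be evaluated directly. This is a textbook model, not taken from the surveyed protocols:

```python
import math

def slotted_aloha_throughput(G: float) -> float:
    """Expected fraction of useful slots when the offered signaling load
    is G reservation requests per slot (Poisson arrivals): S = G * e^(-G)."""
    return G * math.exp(-G)

# Throughput peaks at G = 1 with S = 1/e ~ 0.368: beyond that point,
# extra reservation requests only create more collisions.
loads = [i / 10 for i in range(1, 31)]
best_load = max(loads, key=slotted_aloha_throughput)
```

This saturation at roughly 37% is exactly why adaptive access-parameter control in the signaling channel pays off.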
Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize fine details in an image. Implementing this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises both main types of cortical multistage convergence: one occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy that reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates, in a compact manner, structure at different hierarchical levels of the image. At each processing stage a single output result is computed, allowing a very quick response from the system. The result is represented as an activity pattern that can be compared with previously computed patterns on the basis of the closest match.
As optical networks evolve toward next-generation optical networks, various services are intended to be carried over a single optical network infrastructure. Consequently, different services with different QoS requirements may survive in a set of degraded network working states. Thus, when studying optical network reliability, it is insufficient to regard optical networks as having only binary working states defined by network connectivity. We propose here a new reliability evaluation model for WDM networks that regards the network as a system with multiple working states, defined with reference to route (s-t) capacity and maximum-hop-number constraints. Additionally, since WDM networks contain elements whose failures affect neither the nodes nor entire links but only certain wavelength channels within links, a new kind of network element, the wavelength-channel-related element, is added to the traditional element types (nodes and links) of the network reliability evaluation model. The model was used to simulate the reliability of WC (wavelength continuous), PWC (partial wavelength conversion) and NWC (non-wavelength continuous) WDM networks with the CERNET topology using the Monte-Carlo method. Simulation results indicate that different working-state requirements may lead to different reliability evaluation results, and the differences grow quickly as network element failure rates increase. This implies that WDM network reliability should be studied under a multiple-working-states assumption, especially for multi-service networks, and that the addition of the new network element kind, the wavelength-channel-related element, is necessary.
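The multi-state evaluation idea can be sketched as a small Monte-Carlo experiment. The sketch below uses a toy five-node ring instead of the CERNET topology and models only independent link failures (no node or wavelength-channel elements); the working state is defined by an s-t hop-count constraint:

```python
import random
from collections import deque

def hop_distance(adj, s, t):
    """BFS hop count from s to t over surviving links; None if disconnected."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def reliability(links, s, t, p_fail, max_hops, trials=2000, seed=1):
    """Fraction of trials in which s reaches t within max_hops after each
    (bidirectional) link fails independently with probability p_fail."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        adj = {}
        for u, v in links:
            if rng.random() >= p_fail:  # link survives this trial
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        d = hop_distance(adj, s, t)
        if d is not None and d <= max_hops:
            ok += 1
    return ok / trials

ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
r_good = reliability(ring, 0, 2, p_fail=0.05, max_hops=4)
r_tight = reliability(ring, 0, 2, p_fail=0.05, max_hops=2)  # stricter working state
r_bad = reliability(ring, 0, 2, p_fail=0.4, max_hops=4)
```

Comparing `r_good` and `r_tight` shows how a stricter working-state definition lowers the evaluated reliability, and `r_bad` shows the gap widening with the failure rate.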
To provide application-oriented network services, a variety of overlay networks are deployed over physical IP networks. Since they share and compete for the same physical network resources, their selfish behaviors affect each other and, as a result, their performance deteriorates. In this paper, we propose a mechanism that lets pure P2P file-sharing networks cooperate with each other. In our proposal, a cooperative peer first finds another P2P network and establishes a logical link to a cooperative peer in the discovered network. Both ends of the logical link decide whether to cooperate from the viewpoint of mutualism. When they judge that they benefit from the cooperation, messages and files are exchanged among the cooperative P2P networks through the logical link. For efficient and effective cooperation, our mechanism includes an algorithm for selecting cooperative peers and a caching mechanism to avoid overloading cooperative peers and cooperating networks. Simulation results showed that the number of discovered providing peers and the search hit ratio roughly doubled, while caching roughly halved the load imposed by the cooperation among P2P networks.
Wavelength Division Multiplexing based Passive Optical Networks (WDM-PONs) are subject to a wide variety of incidental failures. In WDM-PONs it is preferable to provide fault management (survivability) at the link layer. In this paper, our objective is to determine the factors that can increase the scope of link-layer survivability beyond what is currently available. We propose a cost-effective recovery mechanism called "Survivable Passive Access Network" (SPAN). SPAN is a multi-level protection scheme with the potential to be highly efficient and scalable in terms of performance budget. BER performance and power penalty factors remain acceptable while traffic is switched from the normal path to the protection path in case of failure.
In this paper, we focus on an information gathering system in which a reader continuously collects information, such as environmental readings from cameras and sensors, from mobile nodes in its access area. We assume that a mobile node is relatively tiny and lacks a high-precision antenna to sense carriers emitted by other nodes. Although a random access method like ALOHA could easily be used, it suffers from poor transmission efficiency and energy consumption. To tackle these problems, we propose a novel method that combines random and selective access. First, the reader sends an ID request to all nodes. Each node then replies with its ID to the reader, with a response probability specified in the request. Finally, the reader selectively gathers information from nodes according to the obtained ID list. In our method, non-registered nodes and non-deleted nodes affect system performance. A non-registered node is one that is in the access area but whose ID is not registered with the reader. A non-deleted node is one that has left the area but whose ID is still registered with the reader. We first derive their numbers analytically using the Inversion Formula of Palm Calculus. We then conduct simulation experiments to verify the analysis. Simulation results show that the proposed method performs well over a wide range of mobility when the response probability is controlled appropriately.
Wireless technology based on the IEEE 802.11 standard is widely deployed. It is used to support multiple types of communication services (data, voice, image) with different QoS requirements. A MANET (Mobile Ad hoc NETwork) does not require a fixed infrastructure; mobile nodes communicate through multihop paths. The wireless communication medium has variable and unpredictable characteristics. Furthermore, node mobility creates a continuously changing communication topology in which paths break and new ones form dynamically. The routing table of each router in an ad hoc network must be kept up to date. MANETs use Distance Vector or Link State algorithms that ensure the route to every host is always known. However, this approach must take into account the specific characteristics of ad hoc networks: dynamic topologies, limited bandwidth, energy constraints, limited physical security, etc. Two main categories of routing protocols are studied in this paper: proactive protocols (e.g. Optimised Link State Routing, OLSR) and reactive protocols (e.g. Ad hoc On-demand Distance Vector, AODV, and Dynamic Source Routing, DSR). Proactive protocols rely on periodic exchanges that update the routing tables for all possible destinations, even when no traffic flows to them. Reactive protocols rely on on-demand route discoveries that update routing tables only for destinations with active traffic. The present paper focuses on the study and performance evaluation of these categories using NS2 simulations. We consider both qualitative and quantitative criteria. The former concern distributed operation, loop-freedom, security, and sleep-period operation. The latter are used to assess the performance of the routing protocols presented in this paper: end-to-end data delay, jitter, packet delivery ratio, routing load, and activity distribution. A comparative study is presented that considers a number of networking contexts, and the results show the appropriate routing protocol for two kinds of communication services (data and voice).
This paper lays the groundwork for modeling and quantifying sensor coverage for swarms of sensor-laden Unmanned Aircraft (UA). The concept of information expectation is defined, elaborated, and illustrated. Areas of interest (AOIs) are analyzed from a swarm standpoint to determine the quantity of coverage afforded by a swarm of multiple sensor-laden UAs. This work also investigates the coverage of AOIs as determined by mission duration, the area of the region, and time-varying swarm geometry. Through simulation experiments, we gain insight into the quantifiable influence of varying swarm sizes and configurations on area coverage. This in turn allows validation of formulae and algorithms for approximating the expected opportunities for relevant information collection.
In this paper, a novel architecture for a fiber Bragg grating (FBG) sensor network with a self-healing function is proposed. It consists of a set of sector sub-networks comprising FBG sensors, a main node, and several remote nodes. The main node is responsible for sending lightwaves from the source to the sensing parts of the network. The remote nodes are built using optical switches and couplers so as to locate breakpoints and reconfigure the sensor network along a different route if any link fails. The simulation and discussion results show that networks consisting of different nodes provide different performance, owing to differences in insertion losses at the nodes and in the reflected spectra from the FBG sensors, and consequently offer different levels of survivability. More importantly, when the sensor network is reconfigured after certain links fail, the new route composed of different remote nodes may influence network performance. The sensor network can also be expanded to large scale by combining three or more sector sub-networks. To meet the demand for survivability, remote nodes must be carefully redesigned for the case when certain links fail. The improved performance is verified by simulation. The results indicate that the proposed architecture can support a reliable, large-scale, multipoint sensor network for smart structures.
P2P systems have been widely used for information exchange between peers in recent years. However, the open and anonymous nature of a P2P network makes it an ideal medium for malicious peers. Existing P2P systems lack efficient mechanisms to prevent free riding, whitewashing, collusion, and malicious attacks. In this paper, we describe a novel role-based trust model for P2P file-sharing systems. First, we give objective criteria to track each peer's contribution to the system. Second, according to their contributions we divide the peers into two classes, super peers and normal peers, each bound to different rights and obligations. Third, we show how to carry out the computation and storage at both the local and global levels. Finally, we discuss how our trust model allows peers to revoke relationships with distrusted peers. We present a concrete method to validate the proposed trust model and report sets of simulation-based experiments showing the feasibility and robustness of the model.
With the fast development of ad hoc networks, SIP has attracted growing attention for multimedia services. This paper proposes a new architecture that provides SIP service to ad hoc users even though no centralized SIP server is deployed. In this solution, we provide the SIP service by introducing two nodes: the Designated SIP Server (DS) and its Backup Server (BDS). The nodes of the ad hoc network designate the DS and BDS when they join the session node set and when certain pre-defined events occur. A new SIP message type called REGISTRAR is introduced, so that nodes can send REGISTRAR messages to others to declare that they want to be the DS. Based on the IP information carried in the message, an algorithm similar to the DR and BDR election in the OSPF protocol is used to elect the DS and BDS SIP servers. Naturally, the BDS replaces the DS when the DS goes down for predictable or unpredictable reasons. To facilitate this, the DS registers with the BDS and transfers a backup of the SIP users' database. Considering the possibility that the DS or BDS may go down abruptly, a special policy is provided: when there is neither a DS nor a BDS, a new election procedure is triggered, just as in the startup phase. The paper also describes how SIP operates in this decentralized model and evaluates its performance. All SIP-based sessions in the ad hoc network, including DS election, were tested in real experiments within a 500m*500m square area containing about 30 randomly placed nodes.
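The OSPF-style DS/BDS election can be sketched as follows. The tie-breaking criterion (numerically highest IP wins, runner-up becomes backup) and the function name are illustrative assumptions, since the abstract does not spell out the exact algorithm:

```python
def elect_ds_bds(candidate_ips):
    """Pick the Designated SIP Server (DS) and its Backup (BDS) from the
    node IPs carried in REGISTRAR messages, OSPF DR/BDR-style: the
    numerically highest IP wins and the runner-up becomes the backup."""
    def ip_key(ip):
        # Compare dotted-quad addresses numerically, octet by octet.
        return tuple(int(part) for part in ip.split("."))
    ranked = sorted(candidate_ips, key=ip_key, reverse=True)
    ds = ranked[0]
    bds = ranked[1] if len(ranked) > 1 else None
    return ds, bds

ds, bds = elect_ds_bds(["10.0.0.7", "10.0.0.23", "10.0.0.5"])
```

When the DS disappears, re-running the election over the surviving candidates (with the BDS promoted first) mirrors the paper's recovery policy.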
In this paper, Extreme Value Theory (EVT) is applied to analyze wireless network traffic. EVT enables the development of scientifically and statistically sound procedures for estimating the extreme behavior of random processes. There are two primary methods for studying extremes: the Block Maximum (BM) method and the Peaks Over Threshold (POT) method. By taking the limited traffic data that exceed the threshold value, our experiments and analysis show that the wireless traffic model obtained with EVT fits the empirical traffic distribution well, illustrating that EVT has promising applications in the analysis of wireless network traffic.
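A minimal sketch of the POT method: choose a high empirical quantile as the threshold, collect the excesses above it, and fit a Generalized Pareto distribution to them. The method-of-moments estimator and the synthetic exponential "trace" are illustrative stand-ins for the paper's traffic data:

```python
import random

def pot_exceedances(samples, quantile=0.95):
    """Peaks-over-threshold: set the threshold at an empirical quantile
    and keep the excesses above it."""
    xs = sorted(samples)
    u = xs[int(quantile * len(xs))]
    return u, [x - u for x in samples if x > u]

def gpd_moment_fit(excesses):
    """Method-of-moments estimates of the Generalized Pareto shape (xi)
    and scale (sigma) for the excess distribution."""
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((x - m) ** 2 for x in excesses) / (n - 1)
    xi = 0.5 * (1 - m * m / s2)
    sigma = 0.5 * m * (m * m / s2 + 1)
    return xi, sigma

# Toy stand-in for a traffic trace: i.i.d. exponential samples.
rng = random.Random(42)
trace = [rng.expovariate(1.0) for _ in range(10000)]
u, exc = pot_exceedances(trace)
xi, sigma = gpd_moment_fit(exc)
# For exponential data the excess distribution is again exponential,
# so xi should be near 0 and sigma near the mean excess (about 1).
```

Real traffic would typically yield a positive shape parameter, signaling the heavy tails that make extreme-value modeling worthwhile.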
Significant TCP unfairness in ad hoc wireless networks has been reported over the past several years. A network layer solution called the Neighborhood Random Early Detection (NRED) scheme was proposed to enhance TCP fairness in ad hoc wireless networks. NRED introduces the concept of a neighborhood and extends the RED mechanism to a distributed neighborhood queue, the aggregation of the local queues in a node's neighborhood. NRED adopts a passive measurement technique to detect early congestion of a neighborhood. However, because NRED measures the channel utilization rate, it operates across layers and is hard to implement in practice. As is well known, packet delay increases when the wireless channel is very busy and the overall traffic load exceeds the channel capacity; thus packet delay can reflect whether the channel is busy. For each packet transmission, greater delay indicates more severe congestion and competition. We believe that the delay of data can reflect the congestion of the shared link promptly. This paper proposes a scheme based on MAC delay that detects congestion, notifies the nodes occupying too much of the channel to drop their packets, and gives the suppressed nodes a chance to transmit. We analyze the average packet delay of IEEE 802.11 DCF, represented by a Markov model. Based on the relationship between MAC delay and the number of competitors, severe competition can be detected.
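The delay-based early-congestion idea can be sketched as a simple smoothed-delay detector. The EWMA weight and the threshold below are illustrative placeholders, not values derived from the paper's Markov analysis:

```python
class MacDelayMonitor:
    """Flags congestion when the smoothed per-packet MAC delay (EWMA)
    exceeds a threshold, sketching the delay-based early congestion
    detection described above (alpha and threshold are illustrative)."""

    def __init__(self, alpha=0.125, threshold_ms=30.0):
        self.alpha = alpha
        self.threshold_ms = threshold_ms
        self.ewma = None

    def observe(self, delay_ms: float) -> bool:
        """Fold one measured MAC delay into the average; return True
        if the shared channel now looks congested."""
        if self.ewma is None:
            self.ewma = delay_ms
        else:
            self.ewma = (1 - self.alpha) * self.ewma + self.alpha * delay_ms
        return self.ewma > self.threshold_ms

mon = MacDelayMonitor()
idle = [mon.observe(d) for d in [5, 6, 4, 7, 5]]            # lightly loaded channel
busy = [mon.observe(d) for d in [60, 80, 75, 90, 85] * 4]   # sustained contention
```

A node whose monitor trips would then drop or defer its own packets, giving suppressed neighbors a chance to transmit.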
HASN is a hierarchical routing protocol for heterogeneous sensor networks, optimized via cross-layer design to save sensor power and improve reliability. There are two kinds of nodes in heterogeneous sensor networks: normal sensors and header nodes, the latter having more powerful batteries and higher-performance antennas. A header and the sensors within its radio transmission range compose a cluster, and the header takes charge of data collection and data aggregation in its cluster. Within a cluster, communication is asymmetric: the header can reach the sensors directly, but the sensors need multiple hops to reach their header. In this paper, a new dynamic address assignment method is introduced to address large numbers of sensors automatically. A mathematical model of the energy-optimal relay tree is designed, which guarantees minimum-energy-cost forwarding and relay load balance, and we give an approximation algorithm to solve the model. A centralized scheduling management scheme is proposed to avoid collisions completely within a cluster. We also introduce a mechanism to suppress data redundancy.
Real-time video transmission over ad hoc networks faces many challenges, including low bandwidth, long end-to-end delay, high packet loss rates, frequently changing topology, and power-limited mobile nodes. This paper presents an effective real-time video transmission scheme and an improved implementation of the DSR (Dynamic Source Routing) protocol. We set up a test-bed using DSR routing in the IP layer and an application transmitting a video stream over UDP. We obtain a continuous JPEG image stream from a ZC0301p web camera and split each image into small blocks along MCU (Minimum Coding Unit) borderlines. The advantage of splitting the JPEG images is that IP-layer fragmentation is avoided, so the receiver can determine which part of a frame was lost and perform loss recovery. Using a JPEG image stream also reduces video encoding complexity, saving the computing power of mobile nodes compared with MPEG and other Multiple Description Coding (MDC) methods. We also improve the DSR implementation to make it suitable for real-time multimedia data. First, different priorities are given to different traffic classes in DSR routing. Second, the route maintenance scheme is modified to decrease overhead and link failure misjudgments. We carried out two experiments, both indoors and outdoors, using six mobile nodes. The first transmits continuous JPEG images using our original DSR implementation, which follows the DSR draft. In the second, we split the JPEG images into blocks and transmit them using the improved DSR implementation. Results show that the latter delivers smoother video and higher image quality.
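One practical way to split a JPEG scan on MCU borderlines is to cut at restart markers (bytes FF D0 through FF D7), which encoders insert exactly on MCU-row boundaries. This sketch assumes the camera stream is encoded with restart markers enabled; it illustrates the idea rather than the paper's actual implementation:

```python
def split_at_restart_markers(scan: bytes):
    """Split JPEG entropy-coded data at restart markers (0xFFD0-0xFFD7).
    Within entropy data a literal 0xFF byte is always stuffed as 0xFF00,
    so any 0xFF followed by 0xD0..0xD7 is unambiguously a marker. Each
    returned block (after the first) starts with its marker, so a lost
    block leaves the rest of the frame decodable."""
    parts, start, i = [], 0, 0
    while i < len(scan) - 1:
        if scan[i] == 0xFF and 0xD0 <= scan[i + 1] <= 0xD7:
            parts.append(scan[start:i])
            start = i       # keep the marker with the following segment
            i += 2
        else:
            i += 1
    parts.append(scan[start:])
    return parts

# Synthetic stand-in for entropy-coded data with two restart markers:
blocks = split_at_restart_markers(b"aaa\xff\xd0bbb\xff\xd1ccc")
```

Each block can then be sent in its own UDP datagram small enough to avoid IP fragmentation; a missing sequence number tells the receiver exactly which MCU rows to conceal.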
Web services enable the integration of applications in a web environment. As web service interoperation becomes increasingly automated, intelligent web services, such as Semantic Web Services, are recommended. In this paper, we first introduce web services and the concept of intelligent web services. We then propose an agent-based web service gateway, focusing on an intelligent web services platform built on existing standards rather than proposing new ones. In particular, we present an agent approach to dynamic web services that automatically invokes the access points of a UDDI registry and returns execution results for web services. The proposed approach is used to build experimental intelligent web service systems. Finally, we give an example illustrating a typical scenario in which consumers search for web services using our web services gateway.
Electronic tags such as RFID are expected to enable new services that cannot be achieved with traditional bar codes. In particular, distribution systems require a method for simultaneously reading a large number of electronic tags embedded in products, in order to reduce cost and time. In this paper, we propose novel methods, called Response Probability Control (RPC), to meet this requirement. In RPC, a reader first sends an ID request to the electronic tags in its access area. It succeeds in reading a tag's information only if no other tag responds at the same time. To improve readout efficiency, the reader controls the response probability according to the number of tags. However, this approach cannot entirely avoid collisions of multiple responses, and when a collision occurs, ID information is lost. To reduce the amount of lost data, we divide the ID registration process into two steps. The reader first gathers the former part of each original ID, called the temporal ID, using the method above. After obtaining the temporal IDs, it sequentially collects the latter part of each ID, called the remaining ID, based on the temporal ID. Note that we set the number of bits of a temporal ID according to the number of tags in the access area so that each tag remains distinguishable. Through simulation experiments, we evaluate RPC in terms of readout efficiency. Simulation results show that RPC achieves readout efficiency 1.17 times higher than the traditional method when there are a thousand electronic tags with 128-bit IDs.
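The core of response probability control is keeping the chance of exactly one reply per slot as high as possible. For n tags each replying independently with probability p, that chance is n*p*(1-p)^(n-1), maximized at p = 1/n, a standard ALOHA-style result the sketch below verifies numerically:

```python
def single_reply_prob(n: int, p: float) -> float:
    """Probability that exactly one of n tags replies in a slot when each
    replies independently with probability p. Only then does the read
    succeed: two or more replies collide, zero replies waste the slot."""
    return n * p * (1 - p) ** (n - 1)

# The reader should steer p toward 1/n. For n = 1000 tags, a fine grid
# search over p lands exactly on 1/n, giving a per-slot success
# probability of roughly 1/e.
n = 1000
grid = [k / 100000 for k in range(1, 1001)]   # p in (0, 0.01]
best_p = max(grid, key=lambda p: single_reply_prob(n, p))
```

This is why RPC adapts the response probability to the (estimated) tag population: a fixed p is only optimal for one value of n.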
In existing peer-to-peer database framework designs, coordination rules are assumed to be already present and never changed during the whole course of operation. This paper investigates how coordination rules are created and changed, thereby easing that procedure. Local databases can go on- and offline dynamically, but this feature of P2P databases is inconsistent with fixed coordination rules, since a dependency path breaks when an intermediate peer is absent. A restoration mechanism is designed for this scenario to realize dynamic coordination rules. To achieve this, the coordination rules on a dependency path must remain available after the path is broken, so that they can be combined to form a new dependency path that bypasses the absent peer. To back up rules before a host goes down, they can be published as resource advertisements to remote peers by the underlying P2P platform facility. In fact, since coordination rules are no longer bound to their host, they can be viewed as independent of the database system, forming a coordination-rule P2P network in which some peers hold no database and act purely as rule caches. The protocols for rule caching, combination, and new-rule creation requests in such a network are discussed. Rules float along dependency paths across the network and combine to form new rules where necessary. A peer wanting to create new coordination rules can publish a query, and if a rule on another peer can be combined with an existing one, a new rule is created and sent back. This dependency-path discovery process is similar to a route discovery process.
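The bypass idea can be sketched by modeling rules as (source, target) pairs and composing the rules that enter and leave an absent peer; real coordination rules would also have to compose their query mappings, which this toy version omits:

```python
def combine_rules(rules, absent_peer):
    """Restore a broken dependency path: a cached rule (a -> b) and a
    rule (b -> c) are merged into (a -> c) when peer b is absent, and
    rules that still reference the absent peer are retired."""
    new_rules = set(rules)
    into = [r for r in rules if r[1] == absent_peer]    # rules ending at b
    outof = [r for r in rules if r[0] == absent_peer]   # rules leaving b
    for a, _ in into:
        for _, c in outof:
            if a != c:
                new_rules.add((a, c))   # bypass the absent peer
    # Rules touching the absent peer can no longer be evaluated.
    new_rules -= set(into) | set(outof)
    return new_rules

rules = {("P1", "P2"), ("P2", "P3"), ("P3", "P4")}
restored = combine_rules(rules, "P2")
```

Here ("P1", "P3") bridges the gap left by P2 while the untouched ("P3", "P4") rule survives, mirroring how rules "float and combine" along the dependency path.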
Conventional approaches to QoS (Quality of Service) provisioning in IP networks are difficult to apply in all-optical networks, mainly because there is no optical RAM (Random Access Memory) to store packets during contention for bandwidth. QoS provisioning with OBS (Optical Burst Switching) is therefore a hot issue that has been widely studied. Most QoS schemes guarantee the performance of high-priority traffic at the cost of low-priority traffic. To solve this problem, a novel wavelength assignment scheme for supporting QoS in OBS networks is proposed in this paper. We give detailed means of increasing the priority of buffered low-priority bursts and show that the scheme improves average network throughput more effectively than the offset-time-based scheme.
This paper investigates the saturation and non-saturation load throughput of nodes in an IEEE 802.11 wireless ad hoc network in the presence of a selfish node. To analyze node throughput, an extended two-dimensional Markov model is used, and a general analytical solution is derived for DCF that can be used to find throughput under various traffic loads. We also performed extensive simulations using QualNet to validate the analytical results. The analytical and simulation results match well and reveal three interesting insights: 1) a selfish node can maximize its throughput by adopting selfish behavior, and as the number of selfish nodes increases, the throughput of each selfish node decreases; 2) as the initial contention window size of the selfish node increases, the throughput it obtains decreases; 3) the effect of selfish behavior grows with the traffic load of the node. When the traffic load is saturated, the throughput that a well-behaved node can obtain approaches zero, which is a highly undesirable result.
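The flavor of the analysis can be reproduced with a heterogeneous extension of Bianchi's Markov-chain model for saturated DCF: one selfish node with a small minimum contention window competes with well-behaved nodes, and a damped fixed-point iteration couples their per-slot transmission probabilities. The parameter values are illustrative, and this is a sketch of the standard Bianchi fixed point rather than the paper's exact extended model:

```python
def bianchi_tau(p: float, W: int, m: int) -> float:
    """Per-slot transmission probability of a saturated DCF node with
    minimum contention window W, m backoff stages, and conditional
    collision probability p (Bianchi's closed form)."""
    return (2 * (1 - 2 * p)) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve(n_normal: int, W_normal: int, W_selfish: int, m=5, iters=2000, damp=0.1):
    """Damped fixed-point iteration for one selfish node (smaller CW)
    competing with n_normal well-behaved saturated nodes."""
    tau_n = tau_s = 0.1
    for _ in range(iters):
        # A node collides if any other node transmits in the same slot.
        p_n = 1 - (1 - tau_s) * (1 - tau_n) ** (n_normal - 1)
        p_s = 1 - (1 - tau_n) ** n_normal
        tau_n += damp * (bianchi_tau(p_n, W_normal, m) - tau_n)
        tau_s += damp * (bianchi_tau(p_s, W_selfish, m) - tau_s)
    return tau_n, tau_s

tau_n, tau_s = solve(n_normal=9, W_normal=32, W_selfish=4)
```

The selfish node's much larger transmission probability translates directly into the throughput capture the paper reports; throughput itself follows by weighting these probabilities with slot, success, and collision durations.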
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers obtain different throughput by cumulatively subscribing to different numbers of layers based on their expected rates. To provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented: each receiver maintains a congestion window, adjusts it according to the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented that combines accurate measurement with rough estimation, and a feedback suppression scheme based on a random timer avoids feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that
FLMCC shows good TCP-friendliness, responsiveness, and intra-protocol fairness, and provides high link utilization.
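The receiver-side mechanism can be sketched with the commonly cited GAIMD parameters (alpha = 0.31, beta = 0.875, chosen so that alpha is close to 4(1 - beta^2)/3 for TCP-friendliness); the layer rates, packet size, and RTT below are illustrative, not values from the paper:

```python
ALPHA, BETA = 0.31, 0.875  # GAIMD parameters; alpha ~ 4*(1 - beta**2)/3

def gaimd_update(cwnd: float, loss: bool) -> float:
    """Per-RTT congestion window update: additive increase by ALPHA,
    multiplicative decrease by BETA on loss."""
    return max(1.0, cwnd * BETA) if loss else cwnd + ALPHA

def layers_to_join(expected_rate: float, layer_rates) -> int:
    """Cumulative layering: subscribe to the largest prefix of layers
    whose total rate stays within the receiver's expected rate."""
    total, k = 0.0, 0
    for r in layer_rates:
        if total + r > expected_rate:
            break
        total += r
        k += 1
    return k

# Toy receiver: fixed per-layer rates in kb/s; the expected rate is
# derived from the window as cwnd * packet_size / RTT.
layer_rates = [64, 64, 128, 256]
cwnd, rtt, pkt_kb = 10.0, 0.1, 1.0
for loss in [False, False, True, False]:
    cwnd = gaimd_update(cwnd, loss)
rate_kbps = cwnd * pkt_kb / rtt
k = layers_to_join(rate_kbps, layer_rates)
```

The gentler beta (compared with TCP's 0.5) keeps the expected rate, and hence the layer subscription, from oscillating on every loss, which suits fixed-rate layered multicast.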