In optical communications, the transportable bandwidth per fiber now exceeds 1 Tb/s, thanks to a dense grid of optical channels known as dense wavelength division multiplexing (DWDM). This capacity does not come without a price: linear and non-linear effects, and the interactions between them, degrade the signal and thus reduce signal integrity and channel performance. Channel performance is greatly improved with error detecting and correcting codes that detect up to sixteen errors and correct up to eight within a block of information. However, BER estimation is only possible after monitoring many blocks, which takes valuable time, particularly when the traffic per channel is huge. Consequently, there is a need for fast estimation of signal integrity and channel performance to ensure that the expected performance metrics are maintained. In this paper, we describe a statistical sampling methodology for estimating performance parameters such as BER, SNR, and Q-factor over a very short interval and, for all practical purposes, in real time.
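As a rough illustration of why sampling can be much faster than block monitoring, the standard Gaussian Q-factor model maps a few hundred sampled decision levels directly to a BER estimate via BER = ½ erfc(Q/√2). This is a minimal sketch of the general idea; the noise model and sample counts are illustrative, not the paper's exact methodology:

```python
import math
import random

def q_factor(ones, zeros):
    """Q-factor from sampled mark/space levels: Q = (mu1 - mu0) / (sigma1 + sigma0)."""
    mu1 = sum(ones) / len(ones)
    mu0 = sum(zeros) / len(zeros)
    s1 = (sum((x - mu1) ** 2 for x in ones) / len(ones)) ** 0.5
    s0 = (sum((x - mu0) ** 2 for x in zeros) / len(zeros)) ** 0.5
    return (mu1 - mu0) / (s1 + s0)

def ber_from_q(q):
    """Gaussian approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Example: a few hundred sampled symbols instead of monitoring many FEC blocks.
random.seed(1)
ones = [random.gauss(1.0, 0.15) for _ in range(500)]
zeros = [random.gauss(0.0, 0.15) for _ in range(500)]
q = q_factor(ones, zeros)
print(q, ber_from_q(q))
```

The point of the sketch: the BER estimate is available after a fixed, small number of samples, independent of how rare the errors themselves are.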
We investigate the detrimental effect of chaotic noise on the performance of an optical chaos cryptosystem. Hyperchaos is generated in the emitter and receiver systems with Mach-Zehnder interferometers fed by semiconductor lasers and subjected to electro-optical feedback. In this optical chaos cryptosystem, (chaotic) noise originates from the parameter mismatch between the emitter and the receiver. We therefore determine the amplitude of this noise as a function of the parameter mismatch and evaluate its effect on the bit error rate (BER) performance of the communication system. Analytical predictions are confirmed by numerical simulations and experimental results.
A novel approach to the theory of phase noise in resonator oscillators will be given, based on a combination of a large-signal/small-signal method, harmonic balance, and a modified Rice model of signal plus noise. The method will be explained using a simple example. Since the type of oscillator under consideration de-attenuates not only eigen-oscillations but also noise in the spectral vicinity of the eigen-frequency, the generated signal is quasi-harmonic and may be described by means of a pseudo-Fourier-series expansion. Owing to the specific description of the internal noise sources, a time-domain description can be used that at the same time reveals information about the spectral components of the signal. By comparing these components, the spectrum of the oscillation may be determined, and relations between the spectrum of the internal noise sources and the generated oscillator signal will become apparent. The novel method will thus enable the designer to predict the phase-noise behavior of a specific oscillator design.
In this paper we discuss the effects of white noise on the spectral lineshape of a simple two-dimensional oscillator and compare the result to the phase noise spectrum predicted by the methods of Demir and Kaertner. A numerical method based on the Fokker-Planck equation is employed to calculate the spectrum of the dynamical system directly. These results are then used as a benchmark to assess the efficiency of a novel small-noise perturbation method we have developed. Owing to the simplicity of the oscillator, it is possible to solve the approximate partial differential equation along both of its characteristics; these solutions are then compared to the exact numerical results. We elucidate the effect of the amplitude-phase coupling term, which causes the spectral lineshape to become non-Lorentzian.
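For reference, with white noise only, Demir-style phase-diffusion theory predicts a Lorentzian lineshape. Writing $c$ for the phase diffusion constant and $\omega_0$ for the carrier frequency, the power spectral density near the carrier takes the standard form (stated here up to normalization):

```latex
S(\omega) \;\propto\; \frac{\omega_0^{2}\,c}{\tfrac{1}{4}\,\omega_0^{4} c^{2} + (\omega - \omega_0)^{2}}
```

a Lorentzian with 3 dB half-width $\omega_0^{2} c / 2$. The amplitude-phase coupling term analyzed in the paper perturbs this pure phase-diffusion picture, which is what drives the exact lineshape away from Lorentzian.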
Cooperative coding is a communication paradigm that pools the distributed resources of different nodes in a network, so that the nodes act as a collaborative system rather than as greedy, adversarial participants. Cooperation has shown promise in increasing throughput and improving power efficiency in wireless networks. In this work, we consider a basic example of cooperative communication, relay coding, and investigate methods to improve power efficiency by employing feedback and power control. We consider power control policies based on the degree of transmitter channel knowledge. First, when perfect feedback is available, we show results for the optimal power control policy for any network code. We show that with the decode-and-forward relaying protocol it is in some cases possible to approach the universal lower bound on the outage probability for the block fading relay channel. Second, when only a finite rate of feedback is available, we find that a few feedback bits suffice to achieve most of the gain that the perfect-feedback policy has over constant-power transmission. Based on these results, it is evident that future network protocols should utilize feedback in order to fully exploit the potential gains of network coding.
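To make the outage metric concrete, here is a toy Monte-Carlo comparison of direct transmission against a simplified decode-and-forward relay under Rayleigh block fading. This is not the paper's power control policy: unit-mean channel gains, equal-power links, MRC combining, and the ignored half-duplex rate loss are all simplifying assumptions:

```python
import math
import random

def outage(snr_db, rate, trials=100_000, relay=False, seed=0):
    """Monte-Carlo outage probability under Rayleigh block fading.
    Direct: outage if log2(1 + snr*|h_sd|^2) < rate.
    Simplified decode-and-forward: the relay helps only if it decodes
    (the S-R link supports the rate); the destination then MRC-combines
    the S-D and R-D branches."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    fails = 0
    for _ in range(trials):
        g_sd = rng.expovariate(1.0)  # |h|^2 is exponential for Rayleigh fading
        if not relay:
            ok = math.log2(1 + snr * g_sd) >= rate
        else:
            g_sr = rng.expovariate(1.0)
            g_rd = rng.expovariate(1.0)
            if math.log2(1 + snr * g_sr) >= rate:  # relay decodes
                ok = math.log2(1 + snr * (g_sd + g_rd)) >= rate
            else:
                ok = math.log2(1 + snr * g_sd) >= rate
        fails += not ok
    return fails / trials

# At 10 dB SNR and 1 bit/s/Hz, the relay path adds a diversity branch:
print(outage(10, 1.0), outage(10, 1.0, relay=True))
```

The second-order diversity of the relayed link is what the optimal power control policies in the paper exploit further, steering power toward the hop that needs it.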
In this paper, we present schedulers that minimize the total transmit power in a multihop wireless network. The focus is on guaranteeing an end-to-end delay bound for a single variable bit rate flow over a multihop fading channel. We first compute an analytical approximation of the transmit power required to send a variable bit rate source over a finite-state fading channel. We then use this approximation to derive schedulers with low complexity and near-optimal performance over multihop networks in which the fading processes on the individual hops are independent. Properties of the optimal delay allocation are also studied, in particular for the special case of a Gaussian network.
Time-varying multipath fading on the wireless link limits the capacity of a wireless system. To adapt efficiently to this adverse radio environment, we investigate the use of a pilot-aided fade-resistant transmission scheme for the uplink of a chip-interleaved code division multiple access (CDMA) system. We analyze the trade-off between the number of diversity branches and the channel estimation error, and we derive the optimum ratio of pilot signal power to information signal power. Our numerical study indicates that, depending on the transmitter power and channel conditions, the proposed system can outperform the conventional CDMA system.
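The pilot/data power trade-off can be sketched numerically. Here the post-estimation SNR is modeled by the standard pilot-aided form ρ_eff = ρ_p ρ_d / (1 + ρ_p + ρ_d); this toy model ignores the time overhead of pilots (so its optimum is always an even power split) and is only an illustration of the kind of optimization the paper carries out, not its exact analysis:

```python
def effective_snr(total_snr, pilot_frac):
    """Post-estimation SNR when total power is split between pilot (rho_p)
    and data (rho_d): rho_eff = rho_p * rho_d / (1 + rho_p + rho_d).
    Illustrative model only."""
    rho_p = pilot_frac * total_snr
    rho_d = (1 - pilot_frac) * total_snr
    return rho_p * rho_d / (1 + rho_p + rho_d)

# Sweep the pilot power fraction at a total linear SNR of 10 (10 dB).
fracs = [f / 100 for f in range(1, 100)]
best = max(fracs, key=lambda f: effective_snr(10.0, f))
print(best, effective_snr(10.0, best))
```

In a real system the pilot occupies fewer symbols than the data, which skews the optimum away from 0.5; the paper's derived optimum ratio accounts for exactly such effects.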
This paper considers the classic problem in channel coding of non-causal channel state knowledge at the transmitter, also known as the Gel'fand-Pinsker problem. It determines a Lagrangian dual expression for this problem, illustrating that Costa's dirty paper coding result can be replicated using the dual-problem methodology.
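For reference, the Costa result in question: for the channel $Y = X + S + Z$ with input power constraint $E[X^2] \le P$, noise $Z \sim \mathcal{N}(0, N)$, and interference $S$ known non-causally at the transmitter but not at the receiver, the Gel'fand-Pinsker capacity expression is maximized by the auxiliary variable $U = X + \alpha S$:

```latex
C \;=\; \max_{p(u,x|s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr]
  \;=\; \frac{1}{2}\log_2\!\left(1 + \frac{P}{N}\right),
\qquad U = X + \alpha S, \quad \alpha = \frac{P}{P+N},
```

the same capacity as if the interference $S$ were absent.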
Multiple-input multiple-output (MIMO) wireless systems can offer significant diversity, and transmit beamforming with receive combining provides a way to achieve this diversity with simple receive processing. The maximum gains in terms of array gain and diversity, however, require perfect channel knowledge at the transmitter. In the absence of perfect channel knowledge, the channel information can be quantized at the receiver and sent back to the transmitter over a low-rate feedback link. For narrowband channels, considerable work has been done on reducing the feedback information while maintaining bit-error-rate performance close to that with perfect channel knowledge. This work, however, does not extend naturally to frequency-selective channels, where it leads to an explosion in feedback overhead. In this paper, orthogonal frequency division multiplexing (OFDM) is considered as a low-complexity implementation of MIMO beamforming with receive combining over frequency-selective channels. Two broad classes of algorithms for quantizing the channel information are discussed: clustering and transform. The clustering algorithms group the subcarriers and choose a common frequency-domain representation of the channel information for each group; the feedback rate thus depends on the number of groups rather than on the number of subcarriers. The transform algorithms quantize the channel information in the time domain, where the transform essentially decorrelates the channel information. Both classes of algorithms provide significant compression of the channel information while maintaining bit-error-rate performance close to that with perfect channel knowledge.
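A minimal sketch of the clustering idea follows. The helper names are hypothetical, and a random unit-norm codebook stands in for whatever structured (e.g. Grassmannian) codebook a real system would use:

```python
import math
import random

def codebook(n_vectors, nt, seed=0):
    """Random unit-norm beamforming codebook (illustrative; practical systems
    would use a structured codebook)."""
    rng = random.Random(seed)
    cb = []
    for _ in range(n_vectors):
        v = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(nt)]
        nrm = math.sqrt(sum(abs(x) ** 2 for x in v))
        cb.append([x / nrm for x in v])
    return cb

def gain(h, w):
    """Beamforming gain |h^H w|^2 for one subcarrier's MISO channel h."""
    return abs(sum(hk.conjugate() * wk for hk, wk in zip(h, w))) ** 2

def cluster_quantize(channels, cb, group):
    """Clustering quantizer: pick one codeword per group of adjacent
    subcarriers (the one maximizing total gain over the group), so the
    feedback is one index per group rather than one per subcarrier."""
    idx = []
    for i in range(0, len(channels), group):
        blk = channels[i:i + group]
        idx.append(max(range(len(cb)),
                       key=lambda k: sum(gain(h, cb[k]) for h in blk)))
    return idx

# 16 subcarriers of a flat 2-antenna channel, 3-bit codebook, groups of 4:
channels = [[complex(1, 0), complex(0, 1)]] * 16
indices = cluster_quantize(channels, codebook(8, 2), group=4)
print(indices)  # feedback shrinks from 16 indices to 4
```

The compression comes purely from the group size: neighboring subcarriers of a frequency-selective channel are correlated, so one well-chosen codeword per group loses little beamforming gain.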
Fifty years ago, while Claude Shannon was developing the mathematical theory of communication for reliable data transmission, which evolved into the subject of information theory, another discipline was developing around the feedback control of dynamical systems, which evolved into a scientific subject dealing with decision, stability, and optimization. More recently, a separate discipline dealing with the robustness of uncertain systems was born in response to the demand for high performance and reliability in the presence of modeling uncertainties. In principle, robustness in dynamical systems is captured through power dissipation via induced norms and dynamic games, while reliable data transmission is captured through measures of information via entropy, relative entropy, and certain laws of large deviations theory. The main ingredient in large deviations is the rate functional (or action functional, in the terminology of classical mechanics), often identified through the Cramér or Legendre-Fenchel transform. On the other hand, the robustness of stochastic uncertain systems is currently under development, using information-theoretic as well as statistical mechanics concepts such as partition functions, free energy, relative entropy, and the entropy rate functional. This lecture will summarize certain connections between the fundamental concepts of robustness, information theory, and statistical mechanics, and will offer some projections on the future convergence of these disciplines.
The goal of this paper is the study of suboptimal quantizer-based detectors. We consider the situation where internal noise is present in the hardware implementation of the thresholds. We therefore focus on random quantizers, showing that they exhibit the noise-enhanced detection property. The random quantizers studied are of two types: time-invariant, when the thresholds are sampled once for all the observations, and time-varying, when they are sampled anew at each time instant. They are built by adding fluctuations to the thresholds of a uniform quantizer. If the uniform quantizer is matched to the symmetry of the detection problem, adding fluctuations degrades the performance; if the uniform quantizer is mismatched, adding noise can improve the performance. Furthermore, we show that the time-varying quantizer is better than the time-invariant quantizer, and that both are more robust than the optimal quantizer. Finally, we introduce the adapted random quantizer, whose levels are chosen to approximate the likelihood ratio.
In this paper, we revisit the problem of detecting a known signal corrupted by independent, identically distributed α-stable noise. The implementation of the optimal receiver, i.e. the log-likelihood ratio, requires an explicit expression for the probability density function of the noise. In the general α-stable case, no closed form exists for this density. To avoid its numerical evaluation, we propose a parametric suboptimal detector based on properties of α-stable noise and on implementation considerations. We focus on several optimization criteria for the parameters, showing that our choice allows the optimization to be carried out without the explicit expression of the noise density. The chosen detector recovers the optimal Gaussian detector (the matched filter) as well as the locally optimal detector in the Cauchy case. The performance of the detector is studied and compared to that of commonly used detectors and of the optimal detector, and its robustness against the signal amplitude and the stability index of the noise is discussed.
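The two limiting cases mentioned above can be sketched with the locally optimal (LO) nonlinearity g = -f'/f. The signal model and amplitudes below are illustrative assumptions, not the paper's parametric detector:

```python
import math
import random

def lo_statistic(x, s, g):
    """Locally optimal detector for a known weak signal s in i.i.d. noise
    with density f: T = sum_k s_k * g(x_k), where g = -f'/f."""
    return sum(sk * g(xk) for xk, sk in zip(x, s))

def g_gauss(sigma):
    """Gaussian noise (alpha = 2): g(x) = x / sigma^2, i.e. the matched filter."""
    return lambda x: x / sigma ** 2

def g_cauchy(gamma):
    """Cauchy noise (alpha = 1): g(x) = 2x / (gamma^2 + x^2), a soft limiter
    that clips the impulsive samples a matched filter would amplify."""
    return lambda x: 2 * x / (gamma ** 2 + x ** 2)

# Weak known signal in standard Cauchy noise (inverse-CDF sampling).
rng = random.Random(3)
s = [1.0] * 200
def trial(present):
    x = [0.1 * sk * present + math.tan(math.pi * (rng.random() - 0.5))
         for sk in s]
    return lo_statistic(x, s, g_cauchy(1.0))
h0 = sum(trial(0) for _ in range(200)) / 200
h1 = sum(trial(1) for _ in range(200)) / 200
print(h0, h1)  # the statistic separates the two hypotheses
```

The bounded Cauchy nonlinearity is what gives robustness against impulsive samples: a single huge noise value contributes at most 1/γ to the statistic, instead of dominating it as in the matched filter.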
This paper investigates important properties of acquisition receivers that employ commonly used serial-search strategies. In particular, we focus on the properties of the mean acquisition time (MAT) for wide-bandwidth signals in dense multipath channels. We show that a lower bound on the MAT over all possible search strategies is the solution to an integer programming problem with a convex objective function, and we also give an upper-bound expression for the MAT over all possible search strategies. We demonstrate that the MAT of the fixed-step serial search (FSSS) does not depend on the location of the first resolvable path within the uncertainty region, thereby simplifying the evaluation of the MAT of the FSSS. The results in this paper can be applied to the design and analysis of fast acquisition systems in various wideband scenarios.
We present a framework for the analysis of frame synchronization based on synchronization words (SWs), where detection follows the common sequential algorithm: the received samples are observed over a window whose length equals that of the SW; over this window a metric (e.g. correlation) is computed; an SW is declared if the computed metric exceeds a suitable threshold, otherwise the observation window is shifted by one sample. We assume a Gaussian channel, antipodal signalling, and coherent detection, with soft values provided to the frame synchronizer. We formulate the problem in terms of hypothesis testing, deriving the optimum metric (the optimum likelihood ratio test (LRT)) according to the Neyman-Pearson lemma. When the data distribution is unknown, we design a simple and effective test based on the generalized LRT (GLRT).
We also analyze the performance of the commonly used correlation metric, in both its "hard" and "soft" versions. We show that synchronization by correlation can be greatly improved upon by the LRT and GLRT metrics, and also that, among correlation-based tests, hard correlation is sometimes better than soft correlation. The closed-form expressions we obtain allow the derivation of receiver operating characteristic (ROC) curves for the LRT and GLRT synchronizers, showing a remarkable gain with respect to synchronization based on the correlation metric. The effect of non-equally distributed data on the performance is also shown.
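The sequential sliding-window test described above can be sketched as follows. Noiseless ±1 samples and a Barker sequence as the SW are illustrative choices; with soft noisy samples the same loop applies with an appropriately scaled threshold:

```python
def soft_corr(window, sw):
    """Soft correlation: sum of received samples times the SW symbols."""
    return sum(r * a for r, a in zip(window, sw))

def hard_corr(window, sw):
    """Hard correlation: correlate the signs of the received samples."""
    return sum((1 if r >= 0 else -1) * a for r, a in zip(window, sw))

def find_sw(samples, sw, metric, threshold):
    """Sequential test: slide a SW-length window by one sample at a time and
    declare the SW at the first position whose metric exceeds the threshold."""
    n = len(sw)
    for i in range(len(samples) - n + 1):
        if metric(samples[i:i + n], sw) > threshold:
            return i
    return None

# Barker-13 as the SW, embedded after 20 alternating data symbols.
sw = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
data = [(-1) ** i for i in range(20)] + sw + [(-1) ** i for i in range(20)]
print(find_sw(data, sw, soft_corr, threshold=11))  # -> 20
```

The Barker sequence keeps every partial-overlap correlation far below the peak of 13, so a threshold near the peak fires only at the true SW position; the LRT and GLRT metrics in the paper replace the correlation in `metric` while the sequential loop stays the same.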
Phase noise may be regarded as the most severe cause of performance degradation in OFDM systems. Hot carriers (HCs), found in CMOS synchronization circuits, are high-mobility charge carriers that degrade MOSFET devices' performance by increasing the threshold voltage required to operate the MOSFETs. The HC effect manifests itself as phase noise, which increases with continued MOSFET operation and results in performance degradation of the voltage-controlled oscillator (VCO) built on the MOSFET; the effect is particularly evident in short-channel MOSFET devices. This MOSFET instability in turn impacts OFDM system performance. The relationship between OFDM system performance and the hot carrier effect can be analyzed in terms of a crucial parameter, the MOSFET threshold voltage. In this paper, we derive a general phase noise model for OFDM systems based on the hot-carrier effect and the corresponding drifted threshold voltage in differential ring oscillators. The expected OFDM performance degradation due to the hot carrier effect is quantified through our simulations. We show that an OFDM BER performance evaluation using the existing phase noise models can differ by up to three orders of magnitude from the results obtained with our phase noise model.
General Additive Increase and Multiplicative Decrease (General AIMD, or GAIMD) congestion control generalizes the standard TCP congestion control mechanism. In this paper, we present GAIMD-SS, an enhanced model for predicting the long-term steady-state mean throughput of GAIMD congestion control that yields more accurate results than previous GAIMD models. We develop a three-state Markov chain to analyze the behavior of GAIMD, enhancing previous work by taking into account the slow start phase and the receiver's maximum window limitation. Our experimental and simulation results show that the GAIMD-SS model predicts the sending rate of GAIMD congestion control more accurately than previous work over a wider range of packet loss rates.
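The AIMD(α, β) dynamics underlying GAIMD can be illustrated with a toy round-based simulation. This is a deliberately simplified model with no slow start and no receiver window cap, i.e. it omits exactly the effects the GAIMD-SS model adds to prior analyses:

```python
import random

def aimd_throughput(alpha, beta, loss_p, rounds=100_000, seed=0):
    """Toy round-based simulation of AIMD(alpha, beta) congestion control:
    the window grows by alpha packets per loss-free round; a round containing
    a loss (each packet lost independently with probability loss_p) multiplies
    the window by beta. Returns the mean window in packets per round.
    Illustrative only."""
    rng = random.Random(seed)
    w, total = 1.0, 0.0
    for _ in range(rounds):
        total += w
        if rng.random() < 1 - (1 - loss_p) ** w:  # at least one packet lost
            w = max(1.0, w * beta)
        else:
            w += alpha
    return total / rounds

# Standard TCP is AIMD(1, 1/2); AIMD(0.31, 7/8) is a commonly cited
# TCP-friendly parameter pair.
print(aimd_throughput(1.0, 0.5, 0.01), aimd_throughput(0.31, 0.875, 0.01))
```

Even this crude model reproduces the characteristic 1/√p throughput scaling; an analytical model such as GAIMD-SS replaces the simulation with a Markov-chain computation that also captures slow start and the receiver window limit.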
In this survey we identify measurement metrics for studying the average noise power variations in typical outdoor power substations. Power substations generally have metallic structures and, despite insulation considerations, exhibit high electric fields. The physical size of a substation does not permit a completely controlled experiment. We arranged a measurement setup to study the noise floor variation in several substation yards in residential and industrial/isolated (sparsely populated) subdivisions, collected empirical data sets, and compared the results with the known noise constituents cited in the literature. A two-week measurement window was chosen to control for any factors that might confound the results. The analysis suggests that the noise floor variation (and hence the link quality) has a dominant underlying semi-deterministic, time-dependent constituent in addition to the classical random component. Although it is no surprise that the semi-deterministic component is associated with the location of the substation yard (e.g. residential or industrial), its dynamic range is significant. The methodology adopted in this study has applications in the analysis of Fixed Wireless Access (FWA) and Local Multipoint Distribution Service (LMDS) systems.
A clock and data recovery (CDR) circuit is one of the crucial blocks in high-speed serial-link communication systems. The data received in these systems are asynchronous and noisy, requiring that a clock be extracted to allow synchronous operations; furthermore, the data must be "retimed" so that the jitter accumulated during transmission is removed. This paper presents a novel CDR architecture that is very tolerant of long runs of consecutive ones or zeros and robust to occasional long absences of transitions. The design is based on the observation that a basic clock recovery scheme with separate clock recovery circuit (CRC) and data decision circuit generates a high-jitter clock when the received non-return-to-zero (NRZ) data contain long runs of ones or zeros. To eliminate this drawback, the proposed architecture incorporates the data decision circuit within the phase-locked-loop (PLL) based CRC. In addition, a new phase detector (PD) is proposed that is easy to implement and robust at high speed; it is functional with random input and automatically disables itself both in the locked state and during long absences of transitions. The voltage-controlled oscillator (VCO) is also carefully designed to suppress jitter; owing to its high stability, the jitter is greatly reduced when the loop is locked. Simulation results for this CDR operating at 1.25 Gb/s, targeted at 1000BASE-X Gigabit Ethernet and using TSMC 0.25 μm technology, are presented to demonstrate the feasibility of the architecture. A second CDR based on an edge-detection architecture is also built in the circuit for performance comparison.
High-frequency (HF) communications is undergoing a resurgence despite advances in long-range satellite communication systems. Defense agencies are using the HF spectrum for backup communications as well as for spectrum surveillance applications, and spectrum management organizations monitor the HF spectrum to control and enforce licensing. These activities usually require systems capable of determining the location of a transmission source, separating valid signals from interference and noise, and recognizing the signal modulation. Our ultimate aim is to develop robust modulation recognition algorithms for real HF signals that propagate by multiple ionospheric modes. One aspect of modulation recognition is the extraction of signal-identifying features; the most common features are instantaneous phase, amplitude, and frequency. Many papers present results based on synthetic data and unproven assumptions. This paper instead continues our previous work by applying the coherence function to noisy real HF groundwave signals, which removes the need for synthesized data and unrealistic assumptions.