While many parallel computers have been built, they have generally been too difficult to program. Now, all computers are effectively becoming parallel machines: most computer vendors plan to double the number of cores on a single chip every two years, or faster, over the coming decade. The parallel programming problem is therefore becoming ever more critical. The only known solution to the parallel programming problem in the theory of computer science is a parallel algorithmic theory called PRAM. Unfortunately, some of the PRAM theory's assumptions regarding the bandwidth between processors and memories did not properly reflect the parallel computers that could be built in previous decades: reaching memories, or other processors in a multi-processor organization, required off-chip connections through pins on the boundary of each chip. With the number of transistors now becoming available on chip, on-chip architectures that adequately support the PRAM are becoming feasible. However, the bandwidth of off-chip connections remains insufficient and their latency too high, creating a bottleneck at the boundary of the chip for a PRAM-On-Chip architecture. This bottleneck also prevents scaling to larger "supercomputing" organizations spanning many processing chips that can handle massive amounts of data. Instead of connections through pins and wires, we introduce power-efficient, CMOS-compatible on-chip conversion to plasmonic nanowaveguides for improved latency and bandwidth. Proper incorporation of our ideas offers exciting avenues toward resolving the parallel programming problem, and an alternative way of building faster, more usable, and much more compact supercomputers.
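To give a flavor of the programming style the PRAM theory enables, the following is a minimal illustrative sketch (not from the text above) of a synchronous PRAM-style reduction: in each round, every "processor" combines a pair of elements, so an n-element sum completes in O(log n) rounds. The sequential Python loop stands in for the processors acting concurrently within one synchronous step.

```python
def pram_parallel_sum(a):
    """Simulate a synchronous PRAM reduction: O(log n) rounds,
    each round performing all pairwise additions 'in parallel'."""
    x = list(a)
    n = len(x)
    stride = 1
    while stride < n:
        # One synchronous PRAM step: conceptually, processor i
        # handles index 2*i*stride; here the loop serializes them.
        for i in range(0, n - stride, 2 * stride):
            x[i] += x[i + stride]
        stride *= 2
    return x[0] if x else 0

print(pram_parallel_sum(range(10)))  # → 45
```

On an actual PRAM, each inner loop iteration would be executed by a separate processor in the same time step, which is where the logarithmic depth comes from.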
A new paradigm for an all-to-all optical interconnect is presented. It could be part of an interconnection fabric between parallel processing elements and the first level of cache in a computer system. Parallel processing has traditionally aspired to improve the performance of such systems. An optical interconnect raises a new possibility: obtaining both improved performance and significant cost reduction relative to standard serial computer system models.
Optical wireless networks are emerging as a viable, cost-effective technology for rapidly deployable broadband sensor communication infrastructures. The use of directional, narrow-beam optical wireless links holds great promise for secure, extremely high-data-rate communication between fixed or mobile nodes, making them well suited to sensor networks in civil and military contexts. The main challenge is to maintain the quality of such networks, as changing atmospheric and platform conditions critically affect their performance. Topology control is used as the means to achieve survivable optical wireless networking under adverse conditions, based on dynamic and autonomous topology reconfiguration. The topology control process involves tracking and acquisition of nodes, assessment of link-state information, collection and distribution of topology data, and the algorithmic computation of an optimal topology. This paper focuses on the analysis, implementation, and evaluation of algorithms and heuristics for selecting the best possible topology so as to optimize a given performance objective while satisfying connectivity constraints. The work at the physical layer is based on link-cost information: a cost measure is defined in terms of bit-error rate, and the heuristics developed seek to form a bi-connected topology that minimizes total network cost. At the network layer a key factor is the traffic matrix, and heuristics were developed to minimize congestion, flow rate, or end-to-end delay.
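A bi-connected-topology heuristic of the kind described can be sketched as follows. This is a hypothetical greedy variant for illustration, not the paper's actual algorithm: candidate links are sorted by a BER-derived cost and added cheapest-first until the topology becomes 2-vertex-connected (connected with no articulation points, checked via Tarjan's DFS low-link method). All function names and the cost table are illustrative assumptions.

```python
from itertools import combinations

def is_biconnected(n, edges):
    """Check 2-vertex-connectivity of an undirected graph on nodes 0..n-1:
    connected and free of articulation points (Tarjan's low-link DFS)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc = [0] * n          # discovery times (0 = unvisited)
    low = [0] * n
    timer = [1]
    articulation = [False]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v]:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # Non-root u is an articulation point if subtree v
                # cannot reach above u.
                if parent != -1 and low[v] >= disc[u]:
                    articulation[0] = True
        if parent == -1 and children > 1:    # root with >1 DFS child
            articulation[0] = True

    dfs(0, -1)
    return all(disc) and not articulation[0]

def greedy_biconnected_topology(n, link_cost):
    """Add candidate links in order of increasing cost (e.g. derived from
    bit-error rate) until the topology is bi-connected."""
    candidates = sorted(combinations(range(n), 2), key=lambda e: link_cost[e])
    chosen = []
    for e in candidates:
        chosen.append(e)
        if is_biconnected(n, chosen):
            break
    return chosen
```

For example, with four nodes whose "ring" links are cheap and whose diagonals are expensive, the heuristic selects the four-link ring, which is the cheapest bi-connected topology here. The network-layer heuristics would then refine such a topology against the traffic matrix.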