We describe wireless networking systems for close-proximity biological sensors, such as those encountered in artificial
skin. The sensors communicate with a "base station" that interprets the data and decodes their origin. Using a large bundle
of ultra-thin metal wires from the sensors to the "base station" introduces significant technological hurdles for both the
construction and maintenance of the system. Fortunately, the Address Event Representation (AER) protocol provides an
elegant and biomorphic method for transmitting many impulses (i.e. neural spikes) down a single wire/channel.
However, AER does not communicate any sensory information within each spike other than the address of the
spike's origin. Therefore, each sensor must emit multiple spikes to communicate its data, typically encoded in
the inter-spike intervals or the spike rate. Furthermore, complex circuitry is required to arbitrate access to the
channel when multiple sensors communicate simultaneously, which delays spikes. This timing error is exacerbated as
the number of sensors per channel increases, mandating more channels and more wires.
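As a concrete illustration of this address-only encoding, the following sketch simulates rate coding over a shared AER
channel: each event carries only a sender address, so the base station must count events over a window to recover each
sensor's value. The Poisson spike model and all parameters are illustrative assumptions, not values from this work.

```python
import numpy as np

# Sketch of wired AER rate coding: each channel event carries only the
# originating sensor's address, so sensor values must be recovered from
# event rates. All parameters here are illustrative assumptions.
N_SENSORS = 4
T_WINDOW = 1.0                                   # observation window (s)
true_rates = np.array([5.0, 20.0, 50.0, 100.0])  # sensor readings as spike rates (Hz)

rng = np.random.default_rng(0)

# Encode: each sensor emits a Poisson spike train at a rate proportional
# to its reading; every spike becomes an (address, timestamp) event.
events = []
for addr in range(N_SENSORS):
    n_spikes = rng.poisson(true_rates[addr] * T_WINDOW)
    events += [(addr, t) for t in rng.uniform(0.0, T_WINDOW, n_spikes)]
events.sort(key=lambda e: e[1])  # the shared channel serializes events in time

# Decode: the base station counts events per address over the window.
counts = np.zeros(N_SENSORS)
for addr, _ in events:
    counts[addr] += 1
print("true rates     :", true_rates)
print("estimated rates:", counts / T_WINDOW)
```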
We contend that despite the effectiveness of the wire-based AER protocol, its natural evolution will be the wireless
AER protocol. A wireless AER system: (1) does not require arbitration to handle simultaneous access to the
channel by multiple sensors, (2) uses cross-correlation delay to encode sensor data in every spike (eliminating the error due to arbitration
delay), and (3) can be reorganized and expanded with little consequence to the network. The system uses spread-spectrum
communications principles implemented with low-power integrate-and-fire neurons. This paper discusses
the design, operation and capabilities of such a system. We show that integrate-and-fire neurons can be used to both
decode the origin of each spike and extract the data contained within it. We also show that there are many
technical obstacles to overcome before this version of wireless AER can be practical.
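The sketch below illustrates the spread-spectrum principle underlying this wireless scheme: cross-correlating the
received signal with each sensor's spreading code identifies the sender, and the position of the correlation peak
recovers the datum encoded as a delay. The codes, delays, and noise level are assumptions, and a simple argmax stands
in for the integrate-and-fire threshold detection described here.

```python
import numpy as np

# Hedged sketch of the wireless AER idea: each sensor owns a unique binary
# spreading code, and its datum is encoded as the code's transmission delay.
# The base station cross-correlates the received superposition with each
# known code; the correlation peak identifies the sender (address) and its
# position recovers the datum (delay). An argmax replaces the paper's
# integrate-and-fire detector. All parameters are illustrative assumptions.
rng = np.random.default_rng(1)
CODE_LEN, FRAME_LEN = 63, 256

codes = rng.choice([-1.0, 1.0], size=(3, CODE_LEN))  # one spreading code per sensor
true_delays = [17, 80, 133]                          # sensor data encoded as delays

# Channel: the delayed codes of all sensors overlap, plus receiver noise.
rx = rng.normal(0.0, 0.5, FRAME_LEN + CODE_LEN)
for code, d in zip(codes, true_delays):
    rx[d:d + CODE_LEN] += code

# Receiver: one matched-filter correlator per address; no arbitration needed.
for addr, code in enumerate(codes):
    corr = np.correlate(rx, code, mode="valid")
    print(f"sensor {addr}: true delay {true_delays[addr]}, decoded {int(np.argmax(corr))}")
```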
The bones of the middle ear are the smallest bones in the body and are among the most functionally complicated. They are located within the temporal bone, rendering them difficult to access and study. An accurate 3D model can offer an excellent illustration of the complex spatial relationships between the ossicles and the nerves and muscles with which they intertwine. The overall objective was to create an educational module for learning the anatomy of the outer, middle and inner ear from MRI data. Such a teaching tool will provide surgeons, radiologists and audiologists with a detailed, self-guided tour of ear anatomy.

MRI images of the auditory canal were acquired using a 9 Tesla MR scanner. The acquired images were reformatted along obliquely oriented axes to obtain the desired orientation relative to anatomical planes. An automated segmentation algorithm was applied to the MRI data to separate the cochlea, auditory nerve and semi-circular canals in the inner ear. Semi-automated segmentation was used to separate the middle ear bones; this was necessary in order to detach the malleus from the incus and the tympanic membrane from the malleus, as the boundaries between these structures were not sufficiently distinct in the data. Each structure became an independent object to facilitate its interactive manipulation. Different angles of view of the 3D structures were rendered, illustrating the anatomic pathway starting at the tympanic membrane, through the middle ear bones, to the semi-circular canals, cochlea and auditory nerve in the inner ear.
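As an illustration of the kind of automated segmentation step mentioned above, the following sketch applies intensity
thresholding and connected-component labeling to a synthetic 3D volume. The threshold, volume, and structure sizes are
assumptions for demonstration; the abstract does not specify the actual segmentation algorithm used.

```python
import numpy as np
from scipy import ndimage

# Illustrative segmentation sketch: intensity thresholding followed by
# connected-component labeling of a 3D volume, one common automated
# approach. Synthetic data and threshold are assumptions, not the
# paper's method.
rng = np.random.default_rng(2)
volume = rng.normal(100.0, 10.0, (64, 64, 64))  # synthetic background intensities
volume[20:30, 20:30, 20:30] += 150.0            # bright "structure" A
volume[40:50, 40:50, 40:50] += 150.0            # bright "structure" B

mask = volume > 180.0                  # assumed intensity threshold
labels, n_found = ndimage.label(mask)  # label connected voxel clusters
print(f"found {n_found} candidate structures")

# Each labeled component becomes an independent object, as in the module,
# so it can be rendered and manipulated separately.
for i in range(1, n_found + 1):
    coords = np.argwhere(labels == i)
    print(f"structure {i}: {coords.shape[0]} voxels, centroid {coords.mean(axis=0)}")
```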