
4.1: Natural Mechano-sensory Systems


    The primary mechano-sensory systems provide the senses of touch and hearing. Neurons are stimulated by contact with objects in the environment or by fluid compression waves caused by movements in the atmosphere or underwater. The primate auditory sense begins with air vibrations against the eardrum, which cause bone vibrations in the middle ear, which in turn cause deformations of the basilar membrane that trace the shape of the frequency spectrum of the incoming sound energy.

    4.1.1 Mechano-sensory capability in simple life-forms

    The most basic sense is the mechano-sensory tactile sense, the response to mechanical distortion. The history of the tactile sense goes back to ancient prokaryotes, cellular organisms with no distinct nucleus, such as bacteria or blue-green algae. For these fundamental life forms, the tactile sense continually monitors the integrity of the cell boundary. The organism can then 1) counteract swelling due to osmotic forces (fluid entering the cell to balance ionic concentrations) and 2) prepare for cell division when the tactile sense detects the swelling that precedes it [Smith08].

    4.1.2 Mechano-sensory internal capability within higher life forms

    The human hypothalamus, located under the brain, serves as an interface between the nervous system and the endocrine (internal secretion) system. The fluid secretions controlled by the hypothalamus are a primary influence on heart rate and other biological rhythms, body temperature, hunger and thirst, digestion rate, and other related functions involving secretions. It is believed to be the center for “mind-over-body” control as well as for feelings such as rage and aggression [Tort84]. Within the hypothalamus is a complex neuronal design based on stretch-sensitive mechanoreceptors that sample the conditions of blood cell membranes in a way analogous to how they serve single-celled organisms. The difference is that the prokaryote stretch-sensitive mechanoreceptors are built into the organism itself, while the hypothalamic mechanoreceptors sample the blood cells from outside the cell [Smith08].

    Mechano-sensors are built around stretch-sensitive channels that allow immediate detection and rapid response. Photo-sensory and chemo-sensory reception involves a complex biochemistry to translate the presence of a photon or a chemical tastant or odorant into an ionic charge presence within the receptor. The ionic charge increase is then translated into nerve impulses to eventually be processed by the higher brain functions. The mechanoreceptors, on the other hand, respond immediately to mechanical distortion.

    4.1.3 The sense of touch

    Mechanoreceptors are fundamental to the detection of tension and the sense of touch. They are also basic components to detecting vibrations, accelerations, sound, body movement and body positions. They play an important role in kinesthesia, which is sensing the relative positions of different body parts. It is believed that all these senses are ultimately derived from stretch-sensitive channels. However, the human understanding of the molecular structure and nature of most mechanosensory channels is still in its infancy.

    4.1.4 Mechano-sensory sensilla

    A distinguishing characteristic of arthropods is their external skeleton, which limits (fortunately!) their overall size. Sensory organs such as the retina cannot develop within the hard exoskeleton. However, the arthropod kinesthetic sense is well developed due to sensory endings in the muscles and joints of appendages. The most common insect sensory unit is the mechanosensory sensilla, each of which includes one or more neurosensory cells within a cuticular (external shell) housing. The cuticle is the external surface of an arthropod. Mechanosensitive sensilla may respond to cuticular joint movements or be positioned to detect movements within the body cavity. The three primary mechanosensory sensilla in arthropods are:

    Hair sensilla

    Neurosensory cells have dendritic inputs from within a hair protruding from the cuticle and axonal outputs from cell bodies located at the root of the hair, embedded in the epidermis underneath the cuticle. Deflection in one direction causes depolarization (an increase from the nominal –70 mV resting potential), while deflection in the other direction causes hyperpolarization (a decrease from –70 mV). Minimum response thresholds have been measured for distortions down to 3-5 nm, with response times down to 100 µs (0.1 ms). These figures imply the direct opening and shutting of ion gates; the biophysics of mammalian hair cells is similar.

    Campaniform sensilla

    The hair has been reduced to a dome on the cuticle. The dendritic inputs are just beneath the external surface, so that the neurosensory cell senses a slight surface deformation. Responses have been shown with deformations as small as 0.1 nm. Directional selectivity is achieved with elliptically shaped domes, where deformation along the short axis produces a greater response than deformation along the long axis.

    Chordotonal organs

    Mechanosensory sensilla developed within the body cavity. These are characterized by a cap or other cell that stimulates numerous dendritic inputs to the neurosensory cell. Chordotonal organs are one of the proprioceptor types, located in almost every exoskeletal joint and between body segments. Many are sensitive to vibrations; for example, one type in the cockroach is sensitive to vibrations from 1 kHz to 5 kHz and amplitudes from 1 nm to 100 nm (0.1 micron). These sensing capabilities are important for the detection of danger and for social communications.

    Other non-mechanosensory sensilla include gustatory (taste), olfactory (smell), hygroscopic (humidity-sensing), and thermal (temperature-sensing) sensilla.

    Separating insect mechanoreceptors into vibration detectors and acoustic detectors is difficult, since the same receptors are often used to detect vibrations in air, water, and ground. Certain water insects (the pond skater, Gerris, and the water-boatman, Notonecta) detect wave amplitudes around 0.5 microns in a frequency range of 20-200 Hz and a time delay range of 1 to 4 ms.

    Hairs and tympanic membranes for auditory sensing

    Two basic types of sound detectors have developed in insects: hairs and tympanic organs. Hairs respond only to lateral distortions of air when the insect is very near the sound source, such as the wing beats of a predator insect or a prospective mate. They are aided in detecting vibrations by Johnston’s organ, which consists of densely packed sensilla. Johnston’s organs also detect flight speed in bees and gravity in the water beetle.

    Tympanic organs (ears) respond to pressure waves and are thus able to respond to sound sources much farther away. Tympanic organs are used for communications, attack, and defense. The basic parts include a tympanic membrane, air cavity, and a group of chordotonal organs that provide the neuronal signaling from the acoustic stimulus. Across the species the tympanic organs have developed on many different parts of the insect body.

    Evasive maneuvers of the lacewing moth

    An interesting use of the tympanic organ is found in the green lacewing (Chrysopa carnea). Military pilots pursued by enemy aircraft may execute maneuvers that mimic the lacewing being pursued by a hungry bat. As the bat detects its prey and closes in, its active sonar pulses increase in frequency. When the search pulses are detected, the lacewing folds its wings and nose-dives out of the sky before the bat’s sonar can lock on. Noctuid moths have two neurons for each tympanic organ. One signals a bat’s detection sonar pulse, while the other responds to the higher-frequency tracking pulses. With the first signal the moth retreats in the opposite direction; with the second it attempts desperate avoidance maneuvers, such as zig-zags, loops, spirals, dives, or falling into cluttering foliage. (Surrounding vegetation “clutters” the sonar pulses echoing off a target moth, much as vegetation “clutters” the radar pulses echoing off a military target.) Some moths will emit sounds during the last fraction of a second; it is not known whether the moth is warning others or trying to ‘jam’ the bat’s echolocation analysis mechanism [Smith08].

    Equilibrium and halteres

    Hair cells in different orientations detect gravitational force along different axes, which leads to balance and equilibrium. Fluid-filled tubes in vertebrates, called the semicircular canals, are oriented orthogonally to each other. Two fluids, called endolymph and perilymph, differ greatly in ionic concentration. K+ ions flow through the stereocilia, which project well into the K+-rich endolymph. The result is a complex system of orientation signals that are processed to achieve balance and equilibrium.

    The membranous labyrinth dates back to the early lamprey (an eel-like fish). It includes the semicircular canals and fluid-filled chambers called the utriculus and sacculus. In the higher species it also includes pre-cochlear organs and the cochlea (the auditory part of the hearing system).

    Many insects have two pairs of wings to help control their flight, but the dipteran (two-winged) insects have developed halteres to replace the hind wings. These organs are attached to the thorax just under each wing and have dumbbell shaped endings causing responses to changes in momentum. Dipteran insects typically have short, stubby bodies that make it particularly remarkable that they can control their flight. The halteres provide inertial navigation information that is combined with optic flow input through the vision system. The head is kept stabilized by its own visual input, while the halteres provide inertial information used to stabilize flight. The halteres can be thought of as vibrating gyroscopes that serve as angular rate sensors [North01]. It can be shown that a system of two masses suspended on a stiff beam at 45° has the capability to provide sufficient information for stabilized flight control. How the neurons are connected and how the information is processed to accomplish stabilized flight control, however, will remain a mystery for a long time to come [North01].

    The halteres have numerous campaniform sensilla nerve endings attached at the end as well as numerous chordotonal organs embedded within. These signals can detect slight motion in each of the three rotational degrees of freedom: pitch, roll, and yaw. Pitch is rotation about a horizontal axis orthogonal to the main horizontal axis, roll is rotation about the main horizontal axis, and yaw is rotation about the vertical axis. To illustrate each of these three, consider the effects of rotational motion when looking ahead from the bow of a ship: pitch causes up-and-down motion, roll causes the left side to go up when the right goes down (and vice versa), and yaw causes the ship’s heading to oscillate to the left and right. Halteres can oscillate through about 180° at frequencies between 100 Hz and 500 Hz [Smith08].
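    The haltere’s role as a vibrating gyroscope can be sketched numerically: a vibrating tip mass on a rotating body experiences a Coriolis force orthogonal to both its velocity and the rotation axis, and that force is what the campaniform sensilla at the haltere base would transduce. The mass, velocity, and rotation rate below are illustrative assumptions, not measured fly parameters.

```python
import numpy as np

# Coriolis force on a vibrating haltere tip mass (order-of-magnitude
# sketch; all numbers are assumed, not measured insect values).
m = 1e-8                           # tip mass, kg
v = np.array([0.0, 0.5, 0.0])      # instantaneous tip velocity, m/s
omega = np.array([0.0, 0.0, 2.0])  # body yaw rate, rad/s

# F = -2 m (omega x v): orthogonal to both the stroke velocity and the
# rotation axis, so a yaw rotation shows up as an out-of-plane bending
# force at the haltere base.
f_coriolis = -2 * m * np.cross(omega, v)
print(f_coriolis)
```

    A rotation about one axis therefore produces a force component along an axis where the stroke itself produces none, which is how the three rotational degrees of freedom can be separated.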

    4.1.5 Mammalian tactile receptors

    In mammalian skin tactile receptors can be classified into fast adapting, which respond only during initial skin deformation, and slow adapting, which continue to respond if the deformation is present. Fast adapters include:

    - Pacinian corpuscles, which are in the deeper layers of glabrous (non-hairy, like the palm) skin and respond to vibrations in the range of 70-1000 Hz

    - Meissner’s corpuscles, which are also in the deeper layers of glabrous skin and respond to vibrations in the range of 10-200 Hz

    - Krause’s end bulbs, like Meissner’s corpuscles but found in non-primates, responding to vibrations in the range of 10-100 Hz

    - Hair follicle receptors, which are located just below the sebaceous (oil) glands; numerous nerve endings give hair follicles a wide range of hair-movement sensitivities and response times.

    The slow adapting tactile receptors in mammalian skin include:

    - Merkel cells, which respond to sudden displacements, such as stroking

    - Ruffini endings, which respond to steady displacement of skin

    - C-Mechanoreceptors, located just beneath the skin at the epidermis/dermis interface, have unmyelinated (uninsulated) nerve fibers extending into the epidermis (the most external layer of skin). These nerves respond with a slowly adapting discharge to steady indentations of the skin. They also respond to temperature extremes and to tissue damage, interpreted as pain.

    Basic hair cells are similar in structure among all vertebrates. Peak sensitivities in the human ear correspond to movements of only a tenth of a nanometer, which is one angstrom. Hair cell sensitivity “is limited by the random roar of Brownian motion” [Smith08]. Hair cell endings are composed of bundles of fine hair-like projections called stereocilia and a single, tall cilium with a bulbous tip called a kinocilium. The receptor potential depolarizes (rises from –70 mV) for motion in one direction and hyperpolarizes (decreases below –70 mV) for motion in the other direction. (Biologists refer to the normal neuronal resting potential of –70 mV as the natural voltage “polarization” state.)

    4.1.6 Human auditory system

    G. S. Ohm, of Ohm’s Law fame, once suggested that the human auditory system does a Fourier analysis of received sound signals, breaking them into separate components with separate frequencies and phases [Kand81]. Although this has proven to be true, the auditory system does more than a simple Fourier analysis. The input is pressure waves (sound in air) from the environment striking the eardrum, and the ear transforms those pressure waves into neuronal signals processed by the auditory cortex in the brain.

    Figure 4.1.6-1 shows a sketch of the key components of the human auditory system. Sound enters the outer ear, and the vibrations are transferred to the middle ear and then the inner ear. The outer ear is composed of the external cartilage, called the pinna, the ear canal, and the tympanic membrane, or eardrum. The middle ear is composed of three bones in an air-filled chamber; the inner ear, or membranous labyrinth, contains the semicircular canals, fluid-filled chambers called the utriculus and sacculus, which are near the semicircular canals (but not labeled in Figure 4.1.6-1), and the cochlea.

    Drawing of the ear and inner ear with labeled features

    Figure 4.1.6-1 Human Auditory System

    Credit: NIH Medical Arts, Picture from https://www.nidcd.nih.gov/sites/default/files/Documents/health/hearing/AgeRelatedHearingLoss.pdf

    The outer ear collects sound waves and directs them through the ear canal to the eardrum. The middle ear ossicles are the malleus, or “hammer (mallet)”, the incus, or “anvil”, and the stapes, or “stirrup”. The names come from the bones’ resemblance to familiar objects. The ossicles provide an acoustic impedance match between the air waves striking the eardrum and the fluid waves emanating from the oval window of the cochlea. Without the impedance matching, most of the air-wave energy would reflect off the surface of the cochlear fluid. Another purpose of the ossicles is to amplify the energy density, exploiting the difference in acoustic surface area: the eardrum surface area is about 25 times larger than that of the oval window.
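    The two amplification mechanisms can be combined into a rough pressure-gain estimate. The 25:1 area ratio comes from the text; the ossicular lever factor of about 1.3 is a commonly quoted figure from the hearing literature and is an assumption here.

```python
import math

area_ratio = 25.0    # eardrum area / oval-window area (from the text)
lever_ratio = 1.3    # malleus/incus lever arm ratio (assumed)

# Pressure gain is the product of the two ratios; express it in dB.
pressure_gain = area_ratio * lever_ratio
gain_db = 20 * math.log10(pressure_gain)
print(round(gain_db, 1))  # roughly 30 dB
```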

    Time delays and sound localization

    For humans, the maximum difference in sound-wave arrival time between the two eardrums is between 350 and 650 microseconds [Mead89], depending on the binaural separation distance. A source directly in front of the listener reaches both ears simultaneously with no time delay, while a source at right angles produces this maximum time delay. The difference in wave-front arrival time is therefore one of the horizontal localization cues for the sound source, as will be shown later for the barn owl.
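    The 350 to 650 microsecond range can be reproduced with a simple geometric model. The sketch below uses the spherical-head (Woodworth) approximation with an assumed head radius; neither the model nor the radius comes from this text.

```python
import math

def itd_us(theta_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Interaural time delay in microseconds for a distant source at
    azimuth theta (0 = dead ahead), spherical-head approximation."""
    theta = math.radians(theta_deg)
    return 1e6 * (head_radius_m / c_m_s) * (theta + math.sin(theta))

print(round(itd_us(0)))   # source dead ahead: 0 us
print(round(itd_us(90)))  # source at right angles: ~656 us
```

    The right-angle result is consistent with the upper end of the quoted range; a smaller assumed head radius gives values toward the lower end.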

    Another horizontal localization cue for humans is the result of high frequency attenuation caused by sound traveling around the head. This is referred to as the acoustic head shadow. A sound source from directly ahead will have the same attenuation effect in both channels, while a source coming from an angle will result in more high frequency attenuation at the contra-lateral (opposite-sided) ear. The sound impulse response from a source between center and right angles shows both a delay and a broadening on the contra-lateral ear with respect to the ipsi-lateral (same-sided) ear.

    Elevation information is encoded in the destructive interference pattern of incoming sound wavefronts as they pass through the outer ear along two separate paths: the first path is directly into the ear canal, and the second is a reflected path off the pinna (see Figure 4.1.6-1) and again off the tragus before entering the ear canal. The tragus is an external lobe like the pinna but much smaller (and not seen in Figure 4.1.6-1); the tragus is easily felt when the finger is at the opening of the ear canal. The delay time in the indirect pinna-tragus path is a monotonic function of the elevation of the sound source. Since the destructive interference pattern is a function of the delay time, this pattern serves as a cue for the elevation of the sound source with respect to the individual.
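    The interference cue can be made concrete: summing the direct path with a copy delayed by Δt cancels frequencies where the delay equals an odd number of half-periods, producing spectral notches at f_k = (2k+1)/(2Δt). Since Δt varies with elevation, the notch positions encode elevation. The 80 microsecond delay below is an illustrative value, not from the text.

```python
def notch_frequencies_hz(delay_s, count=3):
    # Destructive interference occurs where the delayed path lags the
    # direct path by an odd number of half-periods.
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

dt = 80e-6  # assumed pinna-tragus path delay for one elevation
print(notch_frequencies_hz(dt))  # first notch near 6.25 kHz
```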

    Static and dynamic equilibrium

    The three semicircular canals are mutually orthogonal to make available signals from each degree-of-freedom. Two chambers connect to the canals, called the utricle and saccule. Static equilibrium is sensed in regions of the chambers, while dynamic equilibrium is sensed at the cristae located at the ends of the semicircular canals.

    The maculae in the utricle and saccule (inner ear chambers) provide static equilibrium signals. Hair cells and supporting cells in each macula have stereocilia and a kinocilium extending into a gelatinous layer supporting otoliths (oto: ear, lithos: stone). The otoliths are made of dense calcium carbonate crystals that move across the gelatinous layer in response to differential gravitational forces caused by changes in head position. The movement stimulates the hair cells, which provide static equilibrium signals to the vestibulocochlear nerve. (The vestibular branch carries signals from the semicircular canals and the utricle and saccule chambers, while the cochlear branch carries signals from the cochlea.)

    The cristae located in the ends of each semicircular canal serve to provide dynamic equilibrium signals. Head movements cause endolymph to flow over gelatinous material called the cupula. When each cupula moves it stimulates hair cells comprising the ampullar nerve at the end of each of the semicircular canals. These signals eventually cause muscular contractions that help to maintain body balance in new positions.

    Time-to-frequency transformation in the cochlea

    Sound vibrations from the external environment strike the eardrum, causing a chain reaction through the middle-ear ossicles that transforms the air vibrations into fluid vibrations in the basilar membrane of the cochlea. As shown in Figure 4.1.6-2, if the basilar membrane (inside the cochlea) were uncoiled and straightened out, it would measure about 33 mm long, and 0.1 mm (100 microns) wide at the round window end and 0.5 mm (500 microns) wide at the other end [Smith08]:

    Figure 4.1.6-2 Line diagram of the uncoiled basilar membrane (the vertical and horizontal dimensions are scaled differently)

    The basilar membrane is stiffer at the round window end and looser at the apex. This causes the wave propagation velocity to slow as the wave travels along the basilar membrane. Depending on the frequency of the wave, this variable-velocity behavior causes a maximum resonant distortion at a particular location along the path from the round window to the apex. The basilar membrane is quite complicated and includes sensitive inner and outer hair cell neurons that respond to deformations of the basilar membrane at the location of each neuron. The hair cell neurons are located along the entire pathway, so the frequency content of the sound can be determined from the spatial location of the neurons that are firing. Thus, the basilar membrane performs a mechanical Fourier transform on the incoming sound energy, and the spatially distributed neurons sample that signal spectrum.
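    The resulting place-to-frequency mapping can be sketched with the Greenwood function, a standard model of the human cochlea from the hearing literature (not from this text). Here x is the fractional distance along the membrane from the apex (0) toward the base near the oval window (1):

```python
def greenwood_hz(x):
    # Greenwood place-frequency function with published human-cochlea
    # constants; x = 0 at the apex, x = 1 at the base.
    return 165.4 * (10 ** (2.1 * x) - 0.88)

print(round(greenwood_hz(0.0)))  # apex: ~20 Hz (low frequencies)
print(round(greenwood_hz(1.0)))  # base: roughly 20.7 kHz (high frequencies)
```

    Note the logarithmic spacing: each equal step along the membrane multiplies the resonant frequency by roughly a constant factor, which matches the ear’s roughly logarithmic pitch perception.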

    It was mentioned (Chapter 2) that sensory receptors adjacent to each other in a peripheral sensory system (such as the auditory system) ultimately fire neurons adjacent to each other in the cortex. For adjacent neurons in the auditory sensor, namely the hair cell neurons next to each other along the basilar membrane, the relevant signal characteristic is that they correspond to adjacent frequency components in the input sound. This tonotopic map of the basilar membrane neurons is reconstructed in the auditory cortex as well. So, frequency cues are provided by which neurons are firing.

    Data sampling rates and coarse coding

    The rate of neuronal firing in the cochlea encodes the mechanical distortion of the basilar membrane, which is a direct consequence of the sound energy level of the source. This design is quite remarkable considering that the minimum interval between neuronal firings is around 1 to 2 ms. The Nyquist sampling criterion states that sampling every 1 ms (1 kHz) can only encode information up to 500 Hz, yet human hearing can discern frequencies well above 10 kHz. Each neuron alone samples far too slowly, but many neurons fire simultaneously, so the aggregate sampling rate is much more than that required to capture a signal whose bandwidth is that of the typical human hearing range (up to 20 kHz).
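    The aggregate-rate argument can be put in rough numbers. The population size below is an illustrative assumption; the point is only that the effective sampling rate scales with the number of neurons firing at staggered times.

```python
single_neuron_rate_hz = 500  # one spike per 2 ms, per the text
n_neurons = 100              # assumed population size (illustrative)

# If spikes are staggered in time across the population, the aggregate
# event rate is the product, and the usable bandwidth is half of that
# (Nyquist).
aggregate_rate_hz = n_neurons * single_neuron_rate_hz
nyquist_bandwidth_hz = aggregate_rate_hz / 2
print(nyquist_bandwidth_hz)  # 25000.0 Hz, above the ~20 kHz hearing range
```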

    The firing rate of neurons in the cochlea (basilar membrane) encodes sound intensity information, not the sound frequency content. The frequency is coarsely coded: each neuron has a roughly Gaussian frequency response, responding to frequencies within around 10% to 20% of its peak-response frequency. An adjacent neuron has a slightly different peak frequency. If both neurons fire at the same rate, the input frequency lies midway between their peaks; if one response is slightly stronger, the frequency component lies closer to that neuron's peak. With only two broadly overlapping Gaussian-like frequency responses, a specific frequency can be extracted with precision far beyond what either neuron could provide alone.

    This is yet another example of coarse coding. In the vision system we observe four photoreceptor types whose spectral response curves broadly overlap, yet due to the complex post-processing of highly interconnected neuronal tissue, millions of combinations of color, tone, and shade can typically be discerned. Similarly, each auditory mechanoreceptor is sensitive to frequencies in a 10-20% band around its peak, yet we can discern specific frequencies at a much higher resolution.

    Figure 4.1.6-3 shows a Matlab-generated plot of three Gaussian curves centered at 1.0 kHz, 1.1 kHz, and 1.2 kHz. A monotone input somewhere between 0.9 kHz and 1.3 kHz would stimulate all three neurons. Keep in mind that the response intensity of each neuron reflects the intensity of the sound, so a moderate response from one neuron could be a weak signal at its peak frequency or a stronger signal at a slightly different frequency. For the neuron whose peak response is at 1.0 kHz, the response would be about the same for a signal at 1.0 kHz, an 850 Hz signal at twice the strength (where the normalized response is about 0.5), or an 800 Hz signal at four times the strength (where the response is about 0.25). A single neuron cannot use its response alone for very accurate frequency detection.

    Figure 4.1.6-3 Three Gaussian tuning curves. Each neuron responds to a given input frequency between 900 and 1300 Hz; the specific combination of responses identifies a specific frequency, while a single neuron can only give a range of possible frequencies.
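    Curves like those in Figure 4.1.6-3 can be regenerated with a short script (Python here rather than Matlab). The standard deviation of about 127 Hz is an assumption, chosen so that the 1.0 kHz neuron's normalized response falls to about 0.5 at 850 Hz, matching the description above.

```python
import math

SIGMA_HZ = 127.0                     # assumed tuning-curve width
PEAKS_HZ = [1000.0, 1100.0, 1200.0]  # peak frequencies from the figure

def response(f_hz, peak_hz, sigma_hz=SIGMA_HZ):
    # Normalized Gaussian tuning curve: 1.0 at the peak frequency.
    return math.exp(-((f_hz - peak_hz) ** 2) / (2 * sigma_hz ** 2))

# A 1.0 kHz tone stimulates all three neurons to different degrees.
for peak in PEAKS_HZ:
    print(peak, round(response(1000.0, peak), 2))
```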

    It is therefore the relative responses of adjacent neurons (spatially distributed along the basilar membrane) that provide the frequency cues. The following example and exercise illustrate the improved frequency resolution obtained by comparing the responses of adjacent auditory neurons.

    Example 4.1.6-1

    Assume three auditory neurons have Gaussian responses around peak frequencies of 2.0 kHz, 2.1 kHz, and 2.2 kHz, like those shown in Figure 4.1.6-3. Assume the three Gaussian responses have the same variance but these three different peak frequencies. Give an estimate (or a range) of the input frequency for three separate inputs, given that the normalized neuron outputs are measured as

                2.0 kHz Neuron   2.1 kHz Neuron   2.2 kHz Neuron
    Input_1         0.2              0.8              0.2
    Input_2         0.4              0.9              0.1
    Input_3         0.1              0.9              0.4

    Solution:

    For this problem we are not concerned with the significance of any one response value, but with how the response values compare to those of adjacent neurons. Conveniently, the 2.1 kHz neuron gives the strongest response to all three inputs, so each tone must be at least close to 2.1 kHz. Notice for Input_1 that the responses of both adjacent neurons are the same (0.2). Since all three curves have the same variance, and Gaussian curves are symmetric, the only possible frequency giving this set of responses is exactly 2.1 kHz.

    The Input_2 frequency is closer to 2.1 kHz than to 2.0 kHz or 2.2 kHz, but since the response of the 2.0 kHz neuron is greater than that of the 2.2 kHz neuron, the input lies on the low side of 2.1 kHz. If the input frequency were the midpoint 2.05 kHz, we would expect the response values of the 2.0 kHz and 2.1 kHz neurons to be the same, but that is not the case. So the Input_2 frequency should be greater than 2.05 kHz but less than 2.1 kHz, or in the range of about 2.06 kHz to 2.09 kHz.

    The Input_3 frequency is also closer to 2.1 kHz than to 2.0 kHz or 2.2 kHz, but in this case the response of the 2.2 kHz neuron is greater than that of the 2.0 kHz neuron, so the input lies on the high side of 2.1 kHz. If the input frequency were the midpoint 2.15 kHz, we would expect the response values of the 2.1 kHz and 2.2 kHz neurons to be the same, but once again that is not the case. So the Input_3 frequency should be greater than 2.1 kHz but less than 2.15 kHz, or in the range of about 2.11 kHz to 2.14 kHz.

    The following table summarizes our estimates for the tonal input frequencies:

                2.0 kHz Neuron   2.1 kHz Neuron   2.2 kHz Neuron   Estimated tonal frequency (kHz)
    Input_1         0.2              0.8              0.2          f ≈ 2.1
    Input_2         0.4              0.9              0.1          ~2.06 ≤ f ≤ 2.09
    Input_3         0.1              0.9              0.4          ~2.11 ≤ f ≤ 2.14
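    The hand reasoning above can be checked with a closed-form estimate. For two equal-variance Gaussian tuning curves centered at f1 and f2 with responses r1 and r2, taking the log of the response ratio and solving for the input frequency gives f = (f1 + f2)/2 + σ²·ln(r1/r2)/(f1 − f2). The 70 Hz standard deviation below is an assumption; the example only states that the curves share a variance.

```python
import math

def estimate_hz(f1, r1, f2, r2, sigma=70.0):
    # Solve ln(r1/r2) = (f1 - f2) * (2f - (f1 + f2)) / (2 sigma^2) for f.
    return (f1 + f2) / 2 + sigma ** 2 * math.log(r1 / r2) / (f1 - f2)

# Use the two flanking neurons (2.0 kHz and 2.2 kHz) for each input:
print(round(estimate_hz(2000, 0.2, 2200, 0.2)))  # Input_1: 2100
print(round(estimate_hz(2000, 0.4, 2200, 0.1)))  # Input_2: 2066
print(round(estimate_hz(2000, 0.1, 2200, 0.4)))  # Input_3: 2134
```

    With this assumed width the estimates land at 2.1 kHz, ~2.07 kHz, and ~2.13 kHz, inside the ranges obtained by inspection. Input_1 comes out at exactly 2.1 kHz for any σ, because equal flanking responses force the symmetric answer.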

    Exercise 4.1.6-1

    Assume four auditory neurons have Gaussian responses around peak frequencies of 3.0 kHz, 3.1 kHz, 3.2 kHz, and 3.3 kHz, like those shown in Figure 4.1.6-3. Assume the four Gaussian responses have the same variance but these four different peak frequencies. Give an estimate (or a range) of the input frequency for three separate inputs, given that the normalized neuron outputs are measured as

                3.0 kHz Neuron   3.1 kHz Neuron   3.2 kHz Neuron   3.3 kHz Neuron
    Input_1         0.1              0.8              0.8              0.1
    Input_2         0.4              0.9              0.8              0.2
    Input_3         0.7              0.4              0.2              0.1

    Answers:

    Input_1: f ≈ 3.15 kHz; Input_2: ~3.11 ≤ f ≤ 3.14 kHz; Input_3: f ≤ 3.0 kHz
