In audio reviewing, there’s a tension between scientific explanations for the qualities of the sound we hear and how the music, as conveyed through our equipment, makes us feel. Insights from the new field of interpersonal neurobiology can help us understand this conflict (footnote 1).
The distinction between our emotional response to music and our ability to describe and analyze it scientifically arises from the differing functions of the brain’s right and left hemispheres. The right hemisphere is in constant communication with the autonomic nervous system (ANS) via branches of the vagus nerve to the internal organs, informing our sense of the meaning of events. The ANS, which is closely integrated with the right hemisphere and develops earlier than the left hemisphere and language, produces our intuition and preverbal “gut reactions.”
The right hemisphere of the mother is in real communication with the right hemisphere of the infant, forming the template for all authentic emotional communication. Tactile contact, rocking, and cooing are sensory cues for safety; our needs for touch, movement, rhythm, and song, which remain throughout our lives, are centered in this hemisphere. The sing-song speech of the mother adds excitement to this growing connection and establishes a foundation for the child’s growing confidence and self-esteem, giving them courage to explore the world. It’s a love affair, creating joy easily seen in the child’s smiles, gurgles, and arm-waving.
Music will continue to give us joy and connection with others throughout our lives. It is the “balm in Gilead,” the universal salve for the wounds and strains of life.
The left hemisphere begins its development between 10 and 18 months as the child begins to explore and talk. In time, words become the medium of exchange by which we attempt to share experiences. Right-hemisphere phenomena, including rhythm, prosody, vocal inflection, body language, and facial expressions, continue to inform us subconsciously about words’ deeper meaning and import. As we mature, our “right brain” connects us emotionally, via words, images, and music, to family, neighbors, and friends, and eventually to tribes, nations, and humanity.
Words are very good at making primary distinctions between objects, but they are very poor at capturing feelings and gut reactions. Our ability to represent internal sensations is limited; we’re reduced to metaphor, simile, analogy; in other words, to poetry. Science can precisely define electrical variables, frequency, jitter, and other quantitative data but is hopeless at describing inner, qualitative experience. Poetry, then, is the language of the right hemisphere.
When we describe the sound of an audio system, what words do we use? We say it sounds “open” when the aural image seems spacious, even though nothing is actually open. The English like to use the word “jump” to describe dynamism in the sound, but doesn’t it merely evoke the excitement of jumping?
I like to use the term “feathery” to describe the sound of “string carpet”: the sweet sound of strings playing softly in a sustained way, under the melody of other instruments or voices. Of course, there are no “feathers,” just vibrating strings and resonating wood chambers, but “feathery” describes a sensual comfort I hear in the sound, like sinking into a feather bed.
When we try to capture the feelings created by a piece of music through a particular piece of equipment, all we are left with is some form of poetry. We search for and occasionally discover terms to communicate to others what the music helps us feel.
Through music, we connect with the right brains of composers and performers, even with the emotional centers of the designers and builders of the equipment that we love, which we have carefully chosen to transport us to ethereal realms of joy.
The highest expression of universal love of humanity, I would argue, is Beethoven’s Symphony No. 9, which is based on the poetry of Schiller’s “Ode to Joy.” Schiller’s poem describes joy as the “daughter of Elysium” (heaven), and when we are caught up in the pure ecstasy of the final movement, we feel connected by bonds of love to all humanity with something that can only be described as joy.
“You millions, I embrace you. This kiss is for all the world.” The specific love and created joy of the mother and child is universalized to the love of all humanity through the feminine aspect (the daughter) of heaven itself: joy.
Music touches us first in the right hemisphere. It evokes a near-immediate response in our embodied selves. It takes about half a second for the left hemisphere to begin to engage the experience, striving to give words and concepts to what we already feel in our hearts as real and true. The audio reviewer should, as Jim Austin suggested in his As We See It essay in the March issue, pay attention first to the feelings the music evokes, using poetic analogies to describe them, because that is how our brains work. Only then can we try, in a logical, scientific manner, in the left hemisphere, to explain what’s responsible for the sound we are enjoying and why it is enjoyable. (Thank you, John Atkinson.)
As audiophiles, we want to feel like we are there while listening to recorded music. We thrill at the illusion of performance spaces spun from spatial cues. But, as Dr. Sattler tells John Hammond in Jurassic Park, it’s all an illusion: “You can’t think through this one, John. You have to feel it.”
There is no “absolute sound,” just pleasing illusions that are an aural simile of the original experience, created by the mind, heart, and skill of the performers (and the recording engineers), realized in our homes by our carefully curated audio equipment. The “truth” of the illusion is in how it makes us feel. That’s our reality, and it’s what we audiophiles love.
Brian Richardson is a clinical psychologist in Lincoln, Illinois. He is also a longtime Stereophile reader.
Footnote 1: See Affect Regulation and the Origin of the Self, by Allan Schore (1994).