
Music and the Brain

  • Writer: Harshal
  • Feb 3
  • 6 min read

Book Review: 4/5 Impact on Me (Book by Aniruddh D. Patel)


Read more about the book here



I rate this book’s impact 4 out of 5.


It gave me rich insights about music’s evolutionary and neurological role. These insights felt genuinely new and fascinating, especially because I am a musician. At the same time, the book did not trigger the kind of personal behavior change that my highest-rated books caused.


What I learned and what stood out

  • The book says humans can recognize a melody even when it is transposed to a different key. Primates and even songbirds cannot do this.

  • The book says songbirds can learn to tell the difference between patterns like ABCD vs DCBA, or BACD vs ABCD.

  • Adult male and female vocal pitches differ by about 50%.

  • Listening to a melody lights up more of the right side of the brain, based on an fMRI study.

  • Relative pitch uses the same brain areas used for spatial processing, like rotating an object in your mind. This helps explain how you recognize the same tune when someone plays it in a different key (see the sketch after this list).
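
Here is a minimal Python sketch of that idea (the tune, note numbers, and function are my own illustration, not the book's): a transposed melody keeps exactly the same successive intervals, which is what relative pitch latches onto.

    # Minimal sketch: a transposed tune keeps the same interval pattern.
    # Note numbers are MIDI-style semitones (60 = middle C).
    def intervals(melody):
        """Successive semitone steps between consecutive notes."""
        return [b - a for a, b in zip(melody, melody[1:])]

    twinkle_in_c = [60, 60, 67, 67, 69, 69, 67]  # C C G G A A G
    twinkle_in_e = [64, 64, 71, 71, 73, 73, 71]  # same tune, 4 semitones up

    print(intervals(twinkle_in_c))  # [0, 7, 0, 2, 0, -2]
    print(intervals(twinkle_in_e))  # [0, 7, 0, 2, 0, -2] -> same tune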


Why music may have evolved

The book suggests music may have evolved alongside humans’ ability to form cohesive groups and bond with in-group members. It probably is not strongly linked to sexual selection, the way birdsong attracts mates, because men and women do not differ much in musical ability.


I also found this contrast interesting:


  • In music, repetition of phrases can feel normal and even good.

  • In language, repetition often feels like poor style.


The book also says the oldest known musical instrument, a bone flute, is about 40,000 years old.


What animals and babies reveal

  • In experiments, monkeys chose silence over Western classical music.


  • In experiments, chimpanzees liked Indian classical and North African classical music, but they did not like Japanese music that used rhythmic drums.


  • Studies in apes show they act differently with different music: with soothing music they help each other more, and with aggressive music they get more animated.


  • Studies on babies show they can tell sad vs happy music.


  • Babies who are played music are more likely to help each other, because music can increase an in-group feeling.


Emotion in music

People can understand emotion in speech even when they do not understand the language. The book says people can also understand a complex mix of emotions in music, including anger, sadness, fear, happiness, nostalgia, and excitement, even from concert music outside their culture.


Some patterns the book mentions:


  • Minor key feels sadder than major key.

  • Slow vs fast also changes emotion.

  • Most music people choose causes positive emotions, not anger or fear. That fits, because people rarely choose music that makes them feel bad.


Chills and goosebumps

One physical sensation people feel is chills or goosebumps. The book explains this as a two-part reaction:


  • A dramatic change in music can signal danger and trigger the body.

  • Another part of the brain recognizes that it is not real danger but music. That split can make the moment feel pleasurable.


The book also says people like music that connects different parts of the brain, through functional connectivity of auditory regions.


Instinct, culture, and personal memory

The book separates musical feeling into different sources:


  • Instinctual feelings (fast tempo feels energetic, foot tapping changes how you feel).

  • Culturally learned feelings (what “fits” and what feels normal depends on the music you grew up with).

  • Individual feelings (your personal memories, like hearing a song while falling in love or at a party with friends).


It also says there are eight ways music affects you, spread across these categories.


How the brain analyzes sound

The book describes specialized processing:


  • One side focuses more on temporal analysis (timing).

  • The other side focuses more on frequency analysis (precise pitch content).


It also explains two core parts of instrument sound:


  • Pitch: the fundamental frequency.

  • Timbre: the mix of harmonic frequencies that gives an instrument its character.


A key insight: even if you remove the fundamental frequency, you can still guess the pitch from the harmonics.
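
Here is a toy Python illustration of that insight (my own, not the book's model): the harmonics of a 220 Hz tone stay spaced 220 Hz apart even after the fundamental is removed, so the pitch is still recoverable from that spacing.

    # Toy "missing fundamental" demo: drop 220 Hz itself and the
    # remaining harmonics still imply a 220 Hz pitch.
    from math import gcd
    from functools import reduce

    f0 = 220
    harmonics = [k * f0 for k in range(1, 7)]  # 220, 440, ..., 1320 Hz
    without_fundamental = harmonics[1:]        # remove the fundamental

    print(reduce(gcd, without_fundamental))    # 220 -> the pitch survives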


The book also mentions language metaphors people use:


  • People describe high vs low sounds with pairs like young vs old or thick vs thin.


It also says sounds unfold over time: if you play a piano note backwards, it will not sound like a piano.
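
A small Python sketch of why (the exponential decay standing in for a piano envelope is my assumption): reversing the samples keeps every frequency but flips the envelope, so the sharp attack and long decay become a slow swell.

    import math

    rate = 8000                          # assumed sample rate (Hz)
    t = [i / rate for i in range(rate)]  # one second of time points
    # A 440 Hz tone with a piano-like sharp attack and long decay.
    note = [math.exp(-3 * x) * math.sin(2 * math.pi * 440 * x) for x in t]

    backwards = note[::-1]  # same frequencies, but the envelope now
                            # swells in, so it no longer sounds like a piano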


Recognition happens fast

The book says people can identify songs from very short clips:


  • Under half a second for popular songs.

  • People guessed correctly about 25% of the time.


It also says emotional response can happen extremely fast:


  • Within 30 seconds, and sometimes even within 1 second. This suggests that fast recognition of timbre matters for survival-linked reactions.


Harmony, dissonance, and “home”

The book explains the harmonicity theory:


  • When two tones sound together, their harmonics can line up into a single, evenly spaced series (harmonicity).

  • If they do not line up, the result feels disharmonic (see the sketch below).
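
A rough Python sketch of the idea (the 220 Hz base tone and the five-harmonic cutoff are my assumptions): merge the harmonics of two tones and look at the gaps between them. For an octave, every gap is a multiple of one spacing; for a tritone, the gaps are irregular.

    def partials(f0, n=5):
        """First n harmonics of a tone at f0 Hz."""
        return [k * f0 for k in range(1, n + 1)]

    def gaps(freqs):
        s = sorted(set(round(f, 1) for f in freqs))
        return [round(b - a, 1) for a, b in zip(s, s[1:])]

    octave = partials(220) + partials(440)
    tritone = partials(220) + partials(220 * 2 ** (6 / 12))

    print(gaps(octave))   # every gap a multiple of 220 Hz -> one series
    print(gaps(tritone))  # irregular gaps -> feels disharmonic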


It says this is biologically common, but culture shapes preference. Some cultures value disharmonic sounds to express specific emotions.


It also describes tonal “gravity”:


  • The tonic (like C in the C scale) feels like a central tone you return to.

  • When people rate how well notes fit with a C scale, they rate C highest, then G (the dominant).

  • A dominant seventh chord creates tension and incompleteness that resolves with the next chord (sketched after this list). Even disharmony can help music move forward.
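
A tiny sketch of that pull toward "home" in the key of C (note names only, my own illustration): the dominant seventh contains the unstable B and F, and each resolves by a half step into the tonic chord.

    g7 = ["G", "B", "D", "F"]        # dominant seventh: tense, incomplete
    c_major = ["C", "E", "G"]        # tonic triad: the "home" chord

    resolves = {"B": "C", "F": "E"}  # the unstable pair settles by half step
    print(all(note in c_major for note in resolves.values()))  # True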


Rhythm, speech, and language links

The book distinguishes:


  • Rhythmic and periodic beats in songs.

  • Rhythmic but not periodic patterns, like Morse code, which still have temporal structure (see the sketch below).
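
A quick Python way to see the distinction (the timings are invented for illustration): compute the inter-onset intervals. A song's beat gives one repeating interval; Morse code gives structured but non-repeating intervals.

    def inter_onset_intervals(onsets):
        return [round(b - a, 2) for a, b in zip(onsets, onsets[1:])]

    song_beat = [0.0, 0.5, 1.0, 1.5, 2.0]                      # steady pulse
    morse_sos = [0.0, 0.2, 0.4, 1.0, 1.6, 2.2, 2.8, 3.0, 3.2]  # ... --- ...

    print(inter_onset_intervals(song_beat))  # [0.5, 0.5, 0.5, 0.5] -> periodic
    print(inter_onset_intervals(morse_sos))  # mixed gaps -> not periodic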


It also connects speech timing and musical phrasing:


  • When you speak, you often stretch sounds near commas, full stops, or phrase endings.

  • Long pauses and stretched words can signal a pause in meaning.


It says cadence differs across languages, like English vs Japanese vs French. It also suggests composers’ musical spacing may reflect the language they grew up with.


It adds that kids learn some of these patterns quickly across cultures.


It also says people tend to like beats around 100 BPM, which sits near the range of human heart rates.


Vocal learning, beat tapping, and measurement tools

  • Some animals are vocal learners and can recognize a song even when played in a different key.

  • Many apes are not vocal learners.

  • The book suggests sea lions may be vocal learners, and mentions seals and walruses as vocal learners.

  • Parrots are vocal learners.


Humans can tap to a beat. Some apes react about 200 milliseconds after hearing a beat, which means they react rather than predict it. The book treats this as a key difference.
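
The difference is easy to quantify (the tap times below are invented for illustration): a predictive tapper lands on or slightly before each beat, while a purely reactive one lags by its reaction time.

    beats = [0.0, 0.6, 1.2, 1.8, 2.4]        # beat onsets (100 BPM)
    human_taps = [b - 0.03 for b in beats]   # slightly early: anticipation
    ape_taps = [b + 0.20 for b in beats]     # 200 ms late: pure reaction

    def mean_asynchrony(taps, beats):
        return sum(t - b for t, b in zip(taps, beats)) / len(beats)

    print(mean_asynchrony(human_taps, beats))  # ~ -0.03 s -> predicting the beat
    print(mean_asynchrony(ape_taps, beats))    # ~ +0.20 s -> reacting after it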


It also compares brain measurement tools:


  • EEG has high time resolution (down to about 1 millisecond) but cannot pinpoint location precisely.

  • fMRI has lower time resolution but is better at locating activity.


Music training and brain change

The book argues music is not “right brain only”. It connects many regions across both sides, including the front left areas involved in cognitive processing.


It also says musicians show more activity in the brain areas for their fingers, including fingers that non-musicians use less, such as those of the left hand or the left little finger.


The book says the brain is more sensitive before age 7. Training before 7 is more likely to change the brain. It also says students with music training learn reading and other academic subjects faster when they enter school.


Babies, the womb, and calming

The book includes a fascinating example: sound from the father singing can reach the baby in the uterus.


It describes experiments at 37 weeks of pregnancy:


  • Babies in the uterus react to music they heard a couple of times before.

  • After birth, babies react to music they heard in the uterus, but not to other music. This suggests early hearing and learning.


It also says babies prefer hearing an unfamiliar person speak in a language they know, because their native tongue appeals to them most.


When babies feel distress, they calm faster when:


  • The mother sings.

  • The mother speaks.

  • The mother shows physical affection.


The book suggests singing calms babies even more strongly than physical affection.


Music, medicine, and therapy

The book mentions conditions where a person cannot perceive or understand music. It links this to disrupted communication between brain areas, even if some areas still light up with music.


It also claims:


  • Surgical patients who listen to music use about 15% less anesthesia medication.

  • They report about 20% less pain.


It also notes NICU babies face many alarms, injections, and routine interruptions. It suggests they may show anxiety sooner, possibly linked to their cortisol levels.


It says people with Alzheimer’s have better outcomes if they listen to music, and they tolerate their medications better.


It also says people recovering from stroke, or people with speech disorders in therapy, can benefit from melodic intonation therapy (MIT), where they sing short phrases instead of speaking them.


Patterns in birds and early human development

The book says if you slow down parrot or songbird songs, you can hear musical patterns, including repeated short phrases.


It also says human babies babble even if they cannot hear their parents, for example if they are deaf. Over the first few years, they gradually lose the ability to understand and reproduce languages that are not native to their environment.


Similarly, the book says kids at a young age can recognize many pieces of music.

bottom of page