Notes on Music, Brain, and Society
Evolution and uniqueness of human music
- Music as a candidate capacity that may underpin complex sociality; music may have coevolved with humans or emerged only after modern humans appeared, possibly giving early humans a head start in social bonding.
- Humans are argued to be the only truly musical species; other animals (birds, whales) produce beautiful sounds but not music in the human sense: their songs serve only a few functions, are not produced by all groups and ages, and are not used across many contexts.
- Birdsong and whale song are typically tied to specific contexts and functions (e.g., territory defense, mate attraction), tend to be sex-specific (often male) and seasonally timed; human music is produced by both sexes, at all ages, and across a wide range of occasions and functions.
- Neuroimaging shows humans are musically responsive to culturally familiar music even without formal study; music perception is widespread across cultures and times.
- Babbling in infancy is a precursor to speech and is musically rich (pitch variation, glides, squeaks); implies an innate musical capacity that develops alongside language.
- Language and music share pathways but differ in development and use; language learning is highly reinforced by constant exposure, while musical development depends on environment and explicit training.
Music and social function across cultures
- Music is not only an aesthetic activity but also a vehicle for group cohesion and social bonding; in many traditional cultures, music is a central part of socialization and adult roles.
- Venda tribe of South Africa example: pre-literate society where singing and dancing socialize children into adult roles; nearly all children learn to sing and dance, enabling equal participation and social integration.
- In many cultures, music is expected of everyone; even those who are less skilled still participate because engagement is a social norm and a sign of belonging.
- In early development, mother-infant interactions often center on singing (lullabies) and vocal turn-taking, supporting bonding and survival.
Mother-infant bonding and early auditory development
- The mother’s voice is a dominant sound in the early environment; infant-directed speech (motherese) and singing support survival by helping infants locate the caregiver and establish bonds.
- Infants engage in music-like vocalizations (babbling) before language, experimenting with pitch and rhythm; this demonstrates inherent musicality.
- Early exposure to language and musical interaction helps shape neural networks and social development; ubiquitous musical engagement in infancy lays groundwork for later language and social skills.
Talent, practice, and cultural variation in musical achievement
- Innate talent is contested; variation in musical achievement is pronounced within and across cultures.
- Cultural surroundings influence musical attainment: some societies have high baseline musical engagement where most children reach substantial competence, not just a “talented few.”
- In Western industrialized contexts there are large discrepancies in musical performance across individuals (performance, composition, singing).
- In many tribal cultures, music is a collective, normed skill; everyone sings, dances, and participates, suggesting environment and practice drive musical development.
- Group music may facilitate social cohesion; musical practice is tied to communal identity and social norms.
Brain bases of music: networks and plasticity
- Music perception engages widespread brain networks rather than a single center; integration involves temporal, frontal, parietal, occipital regions depending on the task (listening, memory, production, reading music).
- Familiar tunes activate the middle temporal gyrus (MTG) and related auditory memory networks; singing engages working memory and motor planning areas in the frontal cortex and motor cortex.
- Reading music activates parietal and visual areas; emotion processing involves limbic systems.
- There is no single “music center”; rather, music processing involves distributed networks that can overlap with language networks.
Language and music: overlap, differentiation, and training effects
- Early work suggested a divide: language tends to be left-hemisphere-dominant, while music engages both hemispheres; dedicated musical training, however, can sculpt more independent networks.
- In singers, training can lead to a more specialized singing network in the left hemisphere, with reduced overlap with language networks, compared to non-singers who rely more on shared networks.
- A neuroimaging study comparing singers and non-singers showed that language tasks activated left-hemisphere language networks in all participants, but singers developed an independent singing network with less overlap onto language areas.
- Neuroplasticity allows the brain to reorganize its functional networks based on environmental demands (e.g., musical training leading to different neural coupling patterns).
Music therapy and rehabilitation: brain resilience and plasticity
- Music therapy can aid language recovery after left-hemisphere stroke; some patients who cannot speak may sing fluently after music-based interventions (melodic intonation therapy).
- Case example: patient with language impairment after stroke regained fluent singing and improved mood using musical therapy, suggesting cross-hemispheric recruitment and neural plasticity.
- Melodic intonation therapy uses melodic and rhythmic cues to facilitate speech production in aphasic patients; evidence supports cross-hemispheric compensation.
- Music therapy can be extended to other impairments (visual/hearing impairment, motor disorders, orthopedic challenges) by leveraging rhythmic and musical engagement to improve function.
- Training can shift language and music processing toward more independent neural networks, supporting rehabilitation and functional recovery.
Rhythm, timing, and entrainment: social and motor coupling
- Humans can entrain to external rhythmic pulses automatically and unconsciously, enabling social coordination (e.g., two people walking in step).
- Rhythmic entrainment is a robust feature of human behavior and appears to be more natural in human-human interaction than in human-computer interactions (humans synchronize better with others than with a computer).
- Rhythmic auditory stimulation and pacing can assist gait rehabilitation in Parkinson’s disease and stroke patients, helping to retrain motor timing and coordination.
- Active music-making in aging populations yields physical, cognitive, emotional, and social benefits, indicating broad, integrative gains from musical engagement.
Practical and educational implications of music
- Early music training can modestly influence IQ: a randomized study with 144 children across four interventions (keyboard, vocal, drama, and no lessons) showed an approximate 3-point IQ increase in the music groups relative to controls.
- Family and home environment matter: children in music-enriched homes and those who have supportive educational resources tend to show broader cognitive and social benefits.
- The observed IQ gains may reflect broader educational advantages and family contexts rather than music alone; the causal role of music remains nuanced.
- Music education should emphasize musical development for its own sake rather than solely as a means to boost math or language scores, to avoid devaluing music as a discipline.
- Early singing and musical play (e.g., singing time with parents) predicts later musical achievement; the age at which a child first sings a recognizable song correlates with higher future attainment.
Amusia, perception, and the limits of tone perception
- Amusia or congenital amusia (tone-deafness) is distinct from general perceptual ability; some individuals who consider themselves tone-deaf perform well on perceptual tests.
- Many who identify as tone-deaf may struggle with vocal confidence or have had negative social feedback, rather than fundamental perceptual deficits.
- Scaffolding and guided practice can help individuals improve singing ability, suggesting that many cases of perceived tone-deafness reflect missed developmental steps rather than immutable limitations.
Physics of sound and musical acoustics: pitch, timbre, and harmony
- Pitch is a perceptual quality linked to frequency: a tone's rate of vibration, measured in hertz (Hz), determines its perceived pitch.
- Human hearing spans roughly 20 Hz to 20,000 Hz; higher frequencies correspond to higher pitches.
- Octaves: notes separated by a frequency ratio of 2:1, i.e., a doubling of frequency. An octave can be divided into 12 semitones; in equal temperament the ratio between adjacent semitones is the 12th root of 2: $r = 2^{1/12} \approx 1.0595$.
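The equal-tempered semitone ratio above can be sketched numerically; A4 = 440 Hz is an assumed reference pitch (standard concert pitch), and `note_frequency` is an illustrative helper, not a standard function:

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12).
SEMITONE = 2 ** (1 / 12)

def note_frequency(semitones_from_a4: int, a4: float = 440.0) -> float:
    """Frequency of the note `semitones_from_a4` semitones above (or below) A4."""
    return a4 * SEMITONE ** semitones_from_a4

print(round(note_frequency(12), 2))   # one octave up: 880.0
print(round(note_frequency(-12), 2))  # one octave down: 220.0
print(round(note_frequency(3), 2))    # three semitones up (C5): 523.25
```

Twelve semitone steps compound to exactly a factor of 2, recovering the 2:1 octave ratio.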
- The concept of timbre (tone quality) arises from the presence of multiple harmonics (overtones) in a musical note; the same fundamental frequency produced by different instruments yields different timbres.
- Fourier analysis: any periodic waveform can be decomposed into a sum of sine waves; repeated patterns can be constructed from a set of harmonics.
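As a minimal illustration of building a repeated waveform from a set of harmonics, the classic Fourier series of a square wave (odd harmonics with amplitudes 1/n) can be summed directly:

```python
import math

def square_approx(t: float, f: float, n_harmonics: int) -> float:
    """Sum of the first n_harmonics odd harmonics of a square wave of frequency f."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonic number: 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * f * t) / n
    return 4 / math.pi * total

# At t = 1/(4f) (a quarter period) the ideal square wave equals +1;
# the partial sum converges toward that value as harmonics are added.
print(square_approx(0.25, 1.0, 5))
print(square_approx(0.25, 1.0, 50))
```

Adding more harmonics sharpens the waveform's edges, which is exactly what distinguishes the timbre of a harmonically rich tone from a pure sine.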
- Beats: when two tones with close frequencies (e.g., 440 Hz and 442 Hz) are played together, the resulting waveform exhibits a beating pattern at the difference frequency $f_\text{beat} = |f_1 - f_2|$.
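The beat pattern follows from the sum-to-product identity sin a + sin b = 2 sin((a+b)/2) cos((a−b)/2): the sum of the two tones equals a carrier at the mean frequency modulated by a slow envelope. A sketch with the assumed tones of 440 Hz and 442 Hz:

```python
import math

f1, f2 = 440.0, 442.0
beat_freq = abs(f1 - f2)       # 2 Hz: two loudness swells per second
beat_period = 1.0 / beat_freq  # 0.5 s between swells

def combined(t: float) -> float:
    """Direct sum of the two sine tones at time t (seconds)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def product_form(t: float) -> float:
    """Carrier at the mean frequency (441 Hz) times a slow cosine envelope."""
    mean, half_diff = (f1 + f2) / 2, (f2 - f1) / 2
    return 2 * math.sin(2 * math.pi * mean * t) * math.cos(2 * math.pi * half_diff * t)

# The two forms agree at any instant:
for t in (0.1, 0.123, 0.7):
    assert abs(combined(t) - product_form(t)) < 1e-9
```

The envelope's cosine runs at half the difference frequency, but loudness peaks on both its positive and negative extremes, so audible swells occur at the full 2 Hz.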
- Loudness is not linearly related to amplitude; it is typically measured on a logarithmic scale (decibels, dB).
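A small sketch of the logarithmic decibel scale, assuming the conventional 20 µPa reference pressure (the nominal threshold of hearing); `spl_db` is an illustrative helper:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (nominal hearing threshold)

def spl_db(pressure: float) -> float:
    """Sound pressure level in dB for an RMS pressure in pascals."""
    return 20 * math.log10(pressure / P_REF)

# Doubling the pressure amplitude adds ~6 dB; a tenfold increase adds 20 dB.
print(round(spl_db(2 * P_REF), 2))   # ~6.02 dB
print(round(spl_db(10 * P_REF), 2))  # 20.0 dB
```

The log scale is why equal steps in dB correspond to equal *ratios* of amplitude, not equal increments.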
Physics of instruments: resonators, edge tones, and strings
- Instruments produce sound via a primary vibrator (string, reed, air jet), a resonator that amplifies certain frequencies, and an outlet/edge that constrains sound emission.
- Edge-tone and reed mechanisms: an air jet striking an edge can oscillate from side to side due to fluid-structure interaction; reeds oscillate similarly, producing a sustained tone when the air column resonates.
- Pipes and organ-like instruments: frequency depends on pipe length and boundary conditions (closed vs. open ends).
- Closed pipe (one end closed): $f = \frac{N v}{4L}$ with $N = 1, 3, 5, \dots$ (odd harmonics only), where $v$ is the speed of sound and $L$ the pipe length.
- Open pipe (both ends open): $f = \frac{N v}{2L}$ with $N = 1, 2, 3, \dots$ (all harmonics).
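A numeric check of the standard pipe-resonance formulas (closed pipe f = Nv/4L with odd N; open pipe f = Nv/2L with all integer N), assuming v = 343 m/s, the speed of sound in air at about 20 °C:

```python
V_SOUND = 343.0  # m/s, assumed speed of sound in air at ~20 degrees C

def closed_pipe(length_m: float, n_modes: int = 3) -> list[float]:
    """First n_modes resonances of a pipe closed at one end (odd harmonics only)."""
    return [(2 * k + 1) * V_SOUND / (4 * length_m) for k in range(n_modes)]

def open_pipe(length_m: float, n_modes: int = 3) -> list[float]:
    """First n_modes resonances of a pipe open at both ends (all harmonics)."""
    return [(k + 1) * V_SOUND / (2 * length_m) for k in range(n_modes)]

# A 0.5 m closed pipe: fundamental 171.5 Hz, then 514.5 Hz and 857.5 Hz.
print(closed_pipe(0.5))
# The same length open at both ends sounds an octave higher: 343.0 Hz fundamental.
print(open_pipe(0.5))
```

The octave gap between the two fundamentals is why stopping an organ pipe at one end lowers its pitch by a factor of two for the same length.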