Midterm 1 Content: Comprehensive Summary

This summary covers fundamental concepts in sound, acoustics, audio technology, and listening practices.

1. John Cage and Experimental Music

  • John Cage (1912-1992): A highly influential 20th-century American composer known for his experimental approach to music. He explored new possibilities by incorporating non-traditional sounds and silence into his compositions.

  • Sonata V from Sonatas and Interludes for Prepared Piano (1946-1948): A famous example of Cage's experimental work, in which he altered a grand piano by placing various materials (like bolts and erasers) between or on its strings to produce unique and unconventional sounds when played. The technique anticipated aspects of later experimental and ambient music.

  • 4'33": One of Cage's most famous and controversial pieces, composed in 1952. It consists of four minutes and thirty-three seconds of intentional "silence" where performers are instructed not to play their instruments.

    • Purpose: The aim of 4'33" was to draw the audience's attention to the ambient sounds of the environment (e.g., coughs, shuffling feet, external noise) that occur during the performance. It challenged the traditional definition of music and encouraged active listening to the surrounding sounds.

    • Impact: It activated different ways of thinking about sound and music, highlighting that any auditory experience can constitute music, as absolute silence is practically impossible.

  • Deep Listening: A practice developed by the composer Pauline Oliveros (not Cage) that encourages focusing one's attention on all sounds in the surrounding environment, including subtle noises like a shirt rubbing or seats moving. It emphasizes a meditative and expanded awareness of sound.

2. Soundscape Ecology (Bernie Krause)

  • Soundscape: Refers to the acoustic environment as perceived by humans, or in relation to humans. Bernie Krause, a musician and soundscape ecologist, pioneered the field of soundscape ecology, the study of soundscapes and their components.

  • Three Basic Sources of Soundscape: Krause categorized natural and human-made sounds within an environment into three components:

    • Geophony: Sounds produced by non-biological natural sources. Examples include wind in trees, water waves, thunder, and earthquakes.

    • Biophony: Collective sounds generated by all living organisms in a given habitat. This includes animal vocalizations like bird calls, insect chirps, and marine mammal sounds.

    • Anthrophony: Sounds generated by humans, whether coherent (like music, language, theater) or incoherent/chaotic (like noise from machinery, traffic, or human activity).

3. Properties of Sound

Sound is a mechanical wave that travels through a medium by causing vibrations.

  • Acoustics: The branch of physics that deals with the study of sound, including its production, control, transmission, reception, and effects.

  • Psychoacoustics: The scientific study of sound perception, specifically how humans perceive and interpret different sounds and the psychological responses to them. It bridges the gap between the physical properties of sound and its psychological effects (e.g., pitch, loudness, timbre).

  • Semantics of Sound: Refers to the meaning or information conveyed by sound. For example, the semantic meaning of a siren indicates an emergency.

  • Aesthetics of Sound: Deals with the artistic and emotional qualities of sound, exploring how sounds can evoke feelings, beauty, or unpleasantness.

4. The Human Ear and Sound Perception

  • Process of Hearing: Sound waves enter the ear canal, cause the eardrum to vibrate, which then transfers these vibrations to the ossicles (malleus, incus, stapes). The stapes vibrates against the oval window of the cochlea, a fluid-filled structure in the inner ear. Hair cells within the cochlea convert these vibrations into electrical signals, which are sent to the brain via the auditory nerve, where they are interpreted as sound.

  • Frequency Range: The range of frequencies that humans can typically hear is from approximately 20 Hz (Hertz) to 20,000 Hz (20 kHz).

    • Infrasound: Frequencies below 20 Hz, generally inaudible to humans. These can induce feelings of anxiety, fear, or discomfort due to body vibration, even if not consciously heard.

    • Ultrasound: Frequencies above 20 kHz, also inaudible to humans but used by animals (like bats for echolocation) and in medical imaging.

  • Decibel (dB): A logarithmic unit used to express the ratio of two values of a physical quantity, such as sound intensity or power. In acoustics, it measures the loudness or sound pressure level.

  • Threshold of Hearing: The quietest sound a human ear can detect, typically around 0 dB SPL (Sound Pressure Level) at 1 kHz.

  • Threshold of Pain: The sound level at which sound becomes painful to the ear, typically around 120-130 dB SPL. Prolonged exposure above 85 dB can cause hearing damage.
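The logarithmic decibel scale above can be illustrated with a short calculation; a minimal sketch using the standard SPL formula, where 20 µPa is the conventional reference pressure for 0 dB SPL.

```python
import math

P_REF = 20e-6  # reference pressure in pascals: the threshold of hearing, 0 dB SPL

def spl_db(pressure_pa):
    """Sound pressure level in dB SPL for a given RMS pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

# Because the scale is logarithmic, doubling the pressure adds about 6 dB,
# and a tenfold increase adds exactly 20 dB.
quiet = spl_db(P_REF)         # 0.0 dB SPL (threshold of hearing)
double = spl_db(2 * P_REF)    # ~6.02 dB SPL
tenfold = spl_db(10 * P_REF)  # 20.0 dB SPL
```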

5. Sound Wave Characteristics

Sound waves have several key properties that define their characteristics:

  • Wavelength: The spatial period of the wave; the distance over which the wave's shape repeats.

    • Low Frequencies: Have longer wavelengths and diffract more easily, meaning they can spread and bend around obstacles, making them less directional and filling a room more evenly.

    • High Frequencies: Have shorter wavelengths and diffract less, making them more directional and more easily blocked by objects.

  • Frequency: The number of cycles per second, measured in Hertz (Hz). It determines the perceived pitch of a sound (higher frequency = higher pitch).

  • Amplitude: The magnitude of displacement of a sound wave from its resting position. It determines the perceived loudness or intensity of a sound (larger amplitude = louder sound).

  • Envelope (ADSR): Describes how the amplitude of a sound changes over time. It has four stages:

    • Attack: The time it takes for the sound to reach its peak amplitude.

    • Decay: The time it takes for the sound to fall from its peak to its sustain level.

    • Sustain: The level at which the sound holds steady after the decay, for as long as the note is held.

    • Release: The time it takes for the sound to fade from the sustain level to silence after the note is no longer played.

  • Harmonics (Overtones): Integer multiples of the fundamental frequency of a sound. They contribute to the timbre or "color" of a sound, making instruments or voices sound distinct even when playing the same fundamental note.

  • Timbre: The quality or "color" of a sound that distinguishes different types of sound production, such as voices and musical instruments, even when they have the same pitch and loudness. It's primarily determined by the harmonic content and the sound's envelope.

  • Fourier Theorem: States that any complex periodic waveform can be broken down into a series of simple sine waves (a fundamental frequency and its harmonics). This principle is fundamental to understanding and analyzing complex sounds.
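The Fourier idea can be sketched by summing a fundamental and its integer-multiple harmonics; the 1/n amplitude roll-off below is an illustrative choice (an idealized sawtooth-like spectrum), not something from the source.

```python
import math

def additive_wave(fundamental_hz, num_harmonics, t):
    """Sum a fundamental and its integer-multiple harmonics at time t (seconds).
    Amplitudes fall off as 1/n, an idealized sawtooth-like spectrum."""
    return sum(
        (1.0 / n) * math.sin(2 * math.pi * n * fundamental_hz * t)
        for n in range(1, num_harmonics + 1)
    )

# Two sounds with the same fundamental (220 Hz) share a pitch, but different
# harmonic content gives them different timbres and different waveshapes.
sample_rate = 8000
pure = [math.sin(2 * math.pi * 220 * i / sample_rate) for i in range(sample_rate)]
rich = [additive_wave(220, 8, i / sample_rate) for i in range(sample_rate)]
```

The pure sine never exceeds an amplitude of 1, while the harmonic-rich wave peaks higher even though each added harmonic is weaker than the fundamental: the partials reinforce each other at certain phases.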

6. Room Acoustics

  • Reverberation: The persistence of sound in a particular space after the original sound is produced, caused by multiple reflections of sound waves.

  • Echo: A distinct reflection of sound that arrives at the listener's ear after a significant delay (usually > 50ms) from the direct sound, perceived as a separate sound. Reverberation is a continuous series of reflections that blend together.

  • Sound Absorption: Occurs when sound waves strike a surface and part of their energy is converted into another form (e.g., heat), reducing the sound's intensity. Soft materials like blankets, carpets, and curtains absorb sound, reducing reflections.

  • Sound Reflection: Occurs when sound waves bounce off a surface. Hard, smooth surfaces reflect sound, contributing to reverberation and echoes.

  • Sound Diffusion: The scattering of sound waves by a surface, helping to distribute sound energy evenly throughout a space, preventing harsh reflections, and making the sound field more uniform.

  • Room Acoustics: The study of how sound behaves within a room, influencing the sound quality. Good room acoustics aim to control reflections, reverberation, and echoes to achieve desired sound clarity and balance.
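Reverberation can be estimated with Sabine's classic equation, RT60 ≈ 0.161·V/A, where V is room volume in m³ and A is total absorption (surface area × absorption coefficient, summed over surfaces). The equation is standard room acoustics rather than from the source, and the room dimensions and coefficients below are illustrative.

```python
def rt60_sabine(volume_m3, surfaces):
    """Estimate reverberation time (seconds) with Sabine's equation.
    surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 5 m x 4 m x 3 m room: hard painted surfaces reflect (low coefficients),
# carpet absorbs (higher coefficient). Coefficient values are illustrative.
room = rt60_sabine(
    volume_m3=5 * 4 * 3,
    surfaces=[
        (2 * (5 * 3) + 2 * (4 * 3), 0.05),  # painted walls
        (5 * 4, 0.05),                      # ceiling
        (5 * 4, 0.30),                      # carpeted floor
    ],
)

# Adding a soft absorber (e.g. 10 m2 of heavy curtains) shortens the
# reverberation time, as the section above describes.
damped = rt60_sabine(60, [(54, 0.05), (20, 0.05), (20, 0.30), (10, 0.55)])
```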

7. Sound Filters and Related Concepts

  • Cymatics: The study of visible sound and vibration, often observed by vibrating a medium (like sand on a plate) and seeing how patterns emerge from different frequencies.

  • Resonance: The phenomenon where an object vibrates with maximum amplitude at certain frequencies when subjected to a vibrating force or another vibrating system.

  • Eigentone (Room Mode): Natural resonant frequencies of a room, determined by its dimensions. These can cause certain frequencies to be exaggerated or canceled out in different parts of the room, affecting sound quality.

  • Sound Filters: Electronic circuits or software algorithms that modify the frequency content of an audio signal. They are used to shape the timbre of a sound or to remove unwanted frequencies.

    • Low-Pass Filter (LPF): Allows frequencies below a certain "cutoff frequency" to pass through while attenuating (reducing) frequencies above it.

    • High-Pass Filter (HPF): Allows frequencies above a certain "cutoff frequency" to pass through while attenuating frequencies below it.

    • Band-Pass Filter: Allows frequencies within a specific range (a "band") to pass through while attenuating frequencies outside that range.

    • Notch Filter: Attenuates a very narrow band of frequencies while allowing frequencies above and below that band to pass largely unaffected. Used to remove specific unwanted resonant frequencies or hums.
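A minimal low-pass filter can be sketched as a one-pole (first-order IIR) smoother; this is a standard textbook form, not anything specific from the source.

```python
import math

def low_pass(samples, cutoff_hz, sample_rate):
    """First-order IIR low-pass: passes frequencies below the cutoff
    and attenuates those above it by smoothing the signal over time."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # move a fraction of the way toward the input
        out.append(y)
    return out

# A 100 Hz tone passes a 500 Hz cutoff nearly intact;
# a 5 kHz tone is strongly attenuated.
sr = 44100
low_tone = [math.sin(2 * math.pi * 100 * i / sr) for i in range(sr)]
high_tone = [math.sin(2 * math.pi * 5000 * i / sr) for i in range(sr)]
low_out = low_pass(low_tone, 500, sr)
high_out = low_pass(high_tone, 500, sr)
```

Swapping the roles (keeping `x - y` instead of `y`) yields the matching high-pass behavior, which is why LPF and HPF are often described as two sides of the same circuit.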

8. Audio Recording and Digital Audio Basics

  • Analog Recording: Captures and stores sound as a continuous wave, physically representing the sound wave's variations. Examples include vinyl records (groove variations) and magnetic tape (magnetic patterns).

    • Linear Editing: An early form of editing (e.g., with magnetic tape) where edits are made sequentially. It is "destructive" in that each copy or cut can lead to a loss of quality and an increase in noise.

  • Digital Recording: Converts analog sound waves into a series of numerical data points (binary code).

    • Sampling: The process of taking discrete measurements (samples) of an analog audio signal at regular intervals. The "sampling rate" determines how many samples are taken per second (e.g., 44.1 kHz for CDs means 44,100 samples per second). A higher sampling rate allows higher frequencies to be captured: by the Nyquist-Shannon theorem, the highest capturable frequency is half the sampling rate.

    • Quantization: The process of converting the amplitude of each sample into a discrete numerical value.

    • Bit Depth: Determines the number of bits used to represent the amplitude of each sample. A higher bit depth provides more possible amplitude levels, resulting in a wider dynamic range and a higher resolution, especially for quiet sounds, with less quantization noise. (e.g., 24-bit audio offers a dynamic range of 144 dB).

    • Non-linear Editing: A modern editing approach (common in DAWs) where edits can be made in any order and are typically "non-destructive," meaning the original audio files are not permanently altered, and there is no loss of quality with each edit. This offers great flexibility for experimentation and revisions.

  • Signal-to-Noise Ratio (SNR): The ratio of the strength of a desired signal to the level of background noise. A higher SNR indicates a clearer signal with less interference.

  • Dynamic Range: The difference between the loudest and quietest sounds that a system can record or reproduce.

  • Overdubbing: The technique of adding new layers of sound to an existing recording, allowing musicians to record different parts separately and combine them.

  • Multitrack Recording: A method of recording where individual sounds or instruments are recorded onto separate tracks, allowing for independent control, mixing, and processing of each element.

  • Mixing: The process of combining and balancing multiple recorded audio tracks into a stereo or surround sound file. It involves adjusting levels, panning, equalization, and adding effects.

  • Mastering: The final stage of audio production, where a mixed track is optimized for distribution. It involves fine-tuning the overall sound, loudness, and preparing it for various playback formats.
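Sampling, quantization, and bit depth can be sketched together in a few lines; the roughly 6 dB-per-bit dynamic-range rule follows directly from 20·log10(2^bits). Function names here are illustrative.

```python
import math

def sample_and_quantize(freq_hz, sample_rate, bit_depth, duration_s=0.01):
    """Sample a sine at regular intervals (sampling), then round each
    sample to one of 2**bit_depth discrete levels (quantization)."""
    levels = 2 ** bit_depth
    n = int(sample_rate * duration_s)
    samples = [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]
    # Map [-1, 1] onto integer steps and back: the rounding is quantization.
    quantized = [round((s + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
                 for s in samples]
    return samples, quantized

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

raw, q16 = sample_and_quantize(440, 44100, 16)
_, q4 = sample_and_quantize(440, 44100, 4)
# More bits => finer amplitude steps => smaller quantization error.
err16 = max(abs(a - b) for a, b in zip(raw, q16))
err4 = max(abs(a - b) for a, b in zip(raw, q4))
```

This reproduces the figures in the section above: 16-bit audio gives about 96 dB of dynamic range, and 24-bit about 144 dB.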

9. Audio Equipment

  • Preamp (Preamplifier): An electronic amplifier that boosts a weak electrical signal (like from a microphone) to a "line level" that can be used by other audio equipment without significant noise or degradation.

  • Mixing Board (Audio Mixer/Console): A device used to combine, route, and change the level, timbre, and/or dynamics of audio signals. It allows multiple sound sources to be mixed down to one or more output channels.

  • ADC (Analog-to-Digital Converter): A device that converts an analog electrical signal (e.g., from a microphone or instrument) into a digital signal (binary code) that a computer or digital recording system can process.

  • DAC (Digital-to-Analog Converter): A device that converts a digital audio signal back into an analog electrical signal, allowing it to be heard through headphones or speakers. Filters are used to smooth the digital "steps" back into a continuous analog waveform.

  • DAW (Digital Audio Workstation): A software application used for recording, editing, mixing, and mastering audio on a computer. Examples include Pro Tools, Logic Pro, and Ableton Live.

10. Microphones

Microphones convert sound waves into electrical signals.

  • Microphone Types:

    • Dynamic Microphones: Robust and relatively inexpensive, often used for live vocals and loud instruments. They operate on the principle of electromagnetic induction.

    • Condenser Microphones: More sensitive and provide a wider frequency response, commonly used for studio vocals and acoustic instruments due to their detail capture. They require "phantom power."

    • Ribbon Microphones: Known for their warm, natural sound and delicate construction. They are often bidirectional (Figure-8 pattern).

  • Polar Patterns (Pickup Patterns): Describe how sensitive a microphone is to sounds coming from different directions.

    • Omnidirectional: Picks up sound equally from all directions. Less prone to proximity effect.

    • Cardioid: "Heart-shaped" pattern; picks up sound primarily from the front, with some rejection from the sides and good rejection from the rear. Most common type.

    • Figure-8 (Bidirectional): Picks up sound equally from the front and back, with strong rejection from the sides (90 degrees).

    • Hypercardioid: A more directional version of cardioid; has a narrower pickup pattern in front, better side rejection, but picks up a small amount of sound directly from the rear.

  • Microphone Accessories:

    • Shock Mount: Used to prevent vibrations from reaching the microphone capsule and being picked up as unwanted noise.

    • Boom Pole: A long, extendable pole used to position a microphone (often a shotgun mic) overhead, commonly used in film to capture dialogue without the mic being in the shot.

    • Wind Screen (Pop Filter): A foam cover or mesh screen placed over or in front of a microphone to reduce plosives (sudden bursts of air from "p" or "b" sounds) and wind noise.

  • Microphone Placement: Critical for capturing the desired sound. It is often advised to "go and hear from the same position as the mic," since the mic acts as a "bionic ear." Common mistakes include using too many mics, using the wrong mic, hanging a mic in front of an amplifier, improper storage, not cleaning vocal mics, bad placement, and cupping the mic head.
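The polar patterns above have standard textbook sensitivity formulas (omnidirectional = 1, cardioid = ½(1 + cos θ), figure-8 = |cos θ|), which a few lines of code can make concrete; the formulas are general references, not from the source.

```python
import math

def gain(pattern, angle_deg):
    """Relative sensitivity of a microphone to sound arriving at angle_deg
    (0 = directly in front, 90 = side, 180 = directly behind)."""
    theta = math.radians(angle_deg)
    if pattern == "omni":
        return 1.0                          # equal pickup from all directions
    if pattern == "cardioid":
        return 0.5 * (1 + math.cos(theta))  # full front, null at the rear
    if pattern == "figure8":
        return abs(math.cos(theta))         # full front/back, null at the sides
    raise ValueError(f"unknown pattern: {pattern}")

front = gain("cardioid", 0)    # 1.0: full pickup from the front
rear = gain("cardioid", 180)   # 0.0: rejection from the rear
side = gain("figure8", 90)     # ~0.0: figure-8 rejects the sides
```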

11. Audio Cables and Connectors

  • Cables: Designed to maximize the signal-to-noise ratio, ensuring the strength of the audio signal is high relative to background noise.

  • Unbalanced Cables: Use two conductors per channel (a positive signal wire and a ground wire). They are more susceptible to picking up electromagnetic interference and noise, especially over longer runs. Typically use TS (Tip-Sleeve) connectors.

  • Balanced Cables: Use three conductors per channel (a positive signal wire, a negative signal wire, and a ground wire). The signal is sent twice with one wire having inverted polarity. At the receiving end, the inverted signal is flipped back, cancelling out any noise picked up equally on both signal wires. This provides superior noise rejection for longer cable runs. Typically use TRS (Tip-Ring-Sleeve) or XLR connectors.

  • Audio Connectors and Adapters: Various types of connectors (e.g., XLR, 1/4" TRS/TS, RCA) used to connect audio equipment. Adapters convert between different connector types.
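The noise cancellation in balanced cables can be shown with simple arithmetic: the hot and cold conductors carry the signal and its inverted copy, both pick up the same interference along the run, and subtracting them at the receiver removes the noise. The values below are made up purely for illustration.

```python
# Hypothetical signal and interference values, just to show the arithmetic.
signal = [0.3, -0.5, 0.8, 0.1]
noise = [0.07, -0.02, 0.05, 0.04]  # interference picked up along the cable

hot = [s + n for s, n in zip(signal, noise)]    # + conductor: signal + noise
cold = [-s + n for s, n in zip(signal, noise)]  # - conductor: inverted signal + same noise

# The receiver flips the cold leg and sums: (s + n) - (-s + n) = 2s.
# The common-mode noise cancels; halving recovers the original signal.
recovered = [(h - c) / 2 for h, c in zip(hot, cold)]
```

An unbalanced cable has no inverted copy to subtract, so any noise picked up on its single signal conductor stays in the audio.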

12. Auditory Illusions

Phenomena in which the brain misinterprets sounds, perceives sounds that are not physically present, or otherwise distorts acoustic reality.

  • Phantom Words: An illusion where listeners perceive words or phrases in non-speech sounds (like noise or repeating syllables), often influenced by their expectations or suggestions.

  • Shepard Tone Illusion: An auditory illusion that creates the sensation of a tone endlessly rising or falling in pitch without actually getting higher or lower. It's created by layering multiple tones separated by octaves, with varying volumes. Used in film (e.g., Dunkirk) and video games to build tension or create an endless progression effect.

  • Binaural Recording: A method of recording sound using two microphones placed in a dummy head (or special array) with ears, designed to simulate human hearing and create a 3D, immersive sound experience when played back through headphones. This technique leverages how our ears and brain localize sound sources.

  • Binaural Beats: An auditory illusion occurring when two slightly different frequency tones are played simultaneously, one in each ear. The brain perceives a third, "beating" frequency (the difference between the two tones) which can influence brainwave states and mental states like relaxation or focus.
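The binaural-beat frequency is simply the difference between the two ear tones. As a sketch (with illustrative frequencies): acoustically mixing such tones in one channel produces an amplitude envelope that pulses at that same difference rate, which is the effect the brain reconstructs when the tones are presented one per ear.

```python
import math

def beat_frequency(left_hz, right_hz):
    """Perceived binaural beat: the difference between the two ear tones."""
    return abs(left_hz - right_hz)

# 200 Hz in one ear and 210 Hz in the other produce a 10 Hz beat.
beat = beat_frequency(200, 210)

# Mixing the two tones in a single channel shows where the beating comes
# from: the sum is a 205 Hz carrier whose amplitude rises and falls 10
# times per second (loud at t = 0, silent at t = 0.05 s, and so on).
sr = 4000
mixed = [math.sin(2 * math.pi * 200 * i / sr) + math.sin(2 * math.pi * 210 * i / sr)
         for i in range(sr)]
```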

13. DJ Spooky (Paul D. Miller) and Sampling

  • DJ Spooky (Paul D. Miller): A notable artist, composer, and writer known for his work in sampling and digital music. He authored "Sound Unbound: Sampling Digital Music and Culture".

  • Sampling: The act of taking a portion (or "sample") of one sound recording and reusing it in a different recording. It's a fundamental technique in hip-hop, electronic music, and other genres.

    • Cultural Appropriation and Copyright: Sampling often raises complex questions about cultural appropriation (using elements from another culture without proper acknowledgment or respect) and copyright law, as it involves reusing pre-existing intellectual property. Miller views sampling as an externalization of memory and a natural artistic process, likening it to "quotation" in literature and arguing that "everyone who uses language is a sampler". He believes that art has always involved give and take, and youth culture is externalizing this process into a new art form.