Linear Script writing
Scripts are planning documents that describe (in this context, specifically) the audio content of a media project. They are commonly used in a number of media industries. The term "linear" is applied to differentiate this form of media from interactive forms.
Interactive media such as computer games and non-linear video products may also employ scripts to plan their content, but the form is more complex, involving branching paths and other issues.
Video
Video at its simplest is a high speed sequence of images which can convey the illusion of motion over time.
Pre-rendered
Pre-rendering means generating the media in its final form and sequence at some point before delivering it to the consumer. An example would be a cinematic movie.
Real-time rendered resources
Real-time products, such as 3D first-person computer games and virtual worlds, compose the audio and video "on the fly" in response to highly complex interactions by the players. These systems still perform some "assembly", but the media fragments they assemble are much smaller. Often these systems use many small pre-rendered images and audio fragments to compose the final video and audio that the consumer experiences.
Media Production
Pre-Production Phase
Production Phase
Post-Production Phase
Delivery Phase
Analog Sound
Usually the term is used to mean the continuous waveform "sound" that is present in the environment.
The other way "analog" is used is to describe older style equipment that operated on a non-digital principle.
This means that the microphone was connected to an amplifier and the signal was either stored on analog tape or output to a speaker. The key idea being that at no point in the process was the signal "digitized".
Digital sound
In contrast to analog sound, digital sound is stored, manipulated and transmitted as digital information. It still has the potential to contain a mixture of desirable and undesirable information, both noise and signal.
Are YouTube videos considered media resources?
Yes
What are the important elements of a script?
Scripts are planning documents that hold information useful for coordinating the efforts of the production team.
They document the important sound elements that need to be generated and collected for a particular project. Usually they are used to manage voice and dialog but can equally be applied to sequence and plan other sound sequences, such as sound effects to match animation or video tracks. Usually these elements will be linked with other elements in the project using some kind of synchronization system like a time code or storyboard frames.
Scripts like any production planning document are simply a tool to control the production and limit the cost of the production.
What are the two essential elements of video?
Audio and visual
What is a sound sample?
A sample is a single measurement of a waveform's amplitude at a point in time; the terms "recording" and "sampling" are interchangeable in this context.
What scenario would require real-time generated audio?
Web based audio-conferencing
Is analog audio equivalent to digital audio?
No
What are the constraints on audio in portable devices such as smart phones?
Codecs, file format, storage capacity, download capacity, bandwidth, and costs.
What happens during pre-production?
This phase involves the planning and assembling of resources. For audio production this may be a script or sound design notes. For Video it may be a script plus storyboard.
Is the production phase of media production more or less important than the pre-production phase?
Less; the production phase is simply the assembly of the pieces planned during pre-production.
Post-production can only begin when production has completely finished. Discuss.
This is the phase where the editing of the media elements happens. The segments are cleaned (trimmed to size, mistakes removed or fixed, different sections joined together), combined together (mixed for audio, composited for video) and finally rendered to their output form.
ADC and DAC
ADC (analog to digital converter) and its inverse DAC (Digital to Analog Converter) are usually components of the electronics in the system that either perform the sampling (ADC) of a waveform into digital samples or convert digital samples back to an analog waveform (DAC). Like any system, the garbage-in-garbage-out rule holds. They will not make a bad sound better or add quality that is not already there.
Sampling
Sampling the waveform occurs in the ADC (Analog to Digital Converter). It simply measures the amplitude of the waveform and converts that to a discrete number on a scale. How often this happens is referred to as "the sampling frequency" or "sampling rate".
Sampling rate
The sampling rate is the number of samples taken per time unit. Usually this is measured as samples per second described using the term hertz (Hz). Hertz is a unit of frequency defined as the number of cycles per second of a periodic phenomenon.
Nyquist sampling frequency
A mathematician named Harry Nyquist has proved that to digitise a pure sinusoidal sound wave of N Hz, it is necessary to sample the sound at a minimum rate of 2N Hz. A complex sound waveform with maximum frequency component of Nmax Hz must be sampled at least at 2Nmax Hz rate to retain all the component frequencies. Conversely, if a complex sound waveform is sampled at a rate of X Hz, frequency components higher than X/2 Hz (if any) are lost.
In simple terms a signal has to be sampled using a sampling frequency greater than twice the maximum frequency occurring in the signal. Sampling at this frequency (or higher) provides enough samples to reconstruct the original signal.
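The Nyquist condition above can be illustrated with a short Python sketch (the helper name `min_sampling_rate` is hypothetical, used only for illustration). It also demonstrates aliasing: a 3000 Hz cosine sampled at only 4000 Hz produces exactly the same sample values as a 1000 Hz cosine, so the original frequency cannot be recovered.

```python
import math

def min_sampling_rate(max_freq_hz):
    # Nyquist: sample at (at least) twice the highest frequency present.
    return 2 * max_freq_hz

# The full range of human hearing (20 kHz) needs at least 40 kHz sampling.
assert min_sampling_rate(20_000) == 40_000

# Aliasing demo: a 3000 Hz cosine sampled at only 4000 Hz yields exactly
# the same sample values as a 1000 Hz cosine, so after sampling the two
# frequencies are indistinguishable.
fs = 4000
high = [math.cos(2 * math.pi * 3000 * n / fs) for n in range(8)]
low = [math.cos(2 * math.pi * 1000 * n / fs) for n in range(8)]
assert all(math.isclose(h, l, abs_tol=1e-9) for h, l in zip(high, low))
```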
Oversampling
Sampling at a higher frequency than the Nyquist frequency means collecting more information than the minimum required to reconstruct the original signal.
Keep in mind that you cannot get "better" than the original by doing this, so the additional samples are redundant information. This means you are storing (and transmitting) useless information that does not add any value.
Under sampling
Sampling at a frequency lower than the Nyquist frequency means collecting less information than is the minimum required to reconstruct the original signal.
If the original signal cannot be reproduced in some respect, it is degraded. This loss of fidelity cannot be "fixed". Keep in mind that quality loss is relative. A little loss may be imperceptible, while large amounts of loss may be obvious. The tradeoff is a smaller number of samples and thus a smaller amount of information to store (and transmit) which is a very attractive property for many applications.
Time base
Digital sampling has a fixed frequency based on a high frequency, high precision clock. In comparison, analog systems do not use a synching clock and so errors can occur in a recording due to inconsistencies in the mechanisms or electronics that cause a "drift" in the recording relative to time. This may be cyclical acceleration or deceleration errors or constant, additive or subtractive synch drift.
Bit rate
Bit rate is a measure of the size of the information being stored. The reference to bits is about the number of computer bits used for the number that describes each sample (strictly speaking, this is the bit depth, or sample size). In simple terms, more bits equals more numbers, and more numbers can be used to more precisely describe each sample of amplitude. Usually this has an effect on the quality of the sound. While the Nyquist frequency affects the quality of the samples over time, the bit depth affects the potential quality of each individual sample.
Similar to the issues of oversampling and undersampling mentioned above, using an excessive bitrate results in storing information in excess of that needed to record the amplitude with sufficient precision to reproduce the original waveform. Conversely using a low bitrate without sufficient resolution to record the waveform will result in degradation of the signal. Again, similar to undersampling, this may have an imperceptible effect or it may be obvious. This variability is usually due to the actual sounds being recorded and how much degradation is being caused.
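The effect of bits per sample can be sketched in Python (the `quantize` helper is hypothetical): rounding an amplitude to the nearest representable level leaves a smaller error when more bits are available.

```python
def quantize(sample, bits):
    # Round an amplitude in [-1.0, 1.0] to the nearest of 2**bits levels.
    step = 2.0 / (2 ** bits - 1)   # spacing between representable values
    return round(sample / step) * step

# More bits per sample -> finer steps -> smaller quantization error.
err_8 = abs(0.3 - quantize(0.3, 8))
err_16 = abs(0.3 - quantize(0.3, 16))
assert err_16 < err_8
```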
Volume
Volume is the perception of physical power of the sound. This is measured in decibels. The property of the sound data that describes the volume information is the amplitude of the wave form. The higher the amplitude, the louder the sound.
Clipping
Digital data is recorded as numbers. The amount of data used to represent that number places a limit on the amount of amplitude that can be encoded. If the volume of the signal being encoded exceeds the capacity of the spectrum that can be encoded in the number scale used, then the amplitude is limited to the maximum possible value. This effectively flattens the top of the waveform causing distortion to the original sound. This artifact, the flattening of the top (and bottom) of a sound wave form is called "clipping".
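The flattening described above amounts to clamping every sample to the maximum encodable value, as this minimal Python sketch shows (the `clip` helper is hypothetical; amplitudes are normalised to ±1.0):

```python
def clip(samples, limit=1.0):
    # Any amplitude beyond the encodable range is flattened to the
    # maximum, distorting the waveform ("clipping").
    return [max(-limit, min(limit, s)) for s in samples]

assert clip([0.5, 1.4, -1.7, 0.9]) == [0.5, 1.0, -1.0, 0.9]
```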
MIDI
Musical Instrument Digital Interface (MIDI) is a communication protocol originally used to allow communication between musical instruments and sequencers. Rather than communicating data to reconstruct a waveform, MIDI contains instructions about the musical notes, tempo, key etc.
Size of digital sound resources
Sampling rate x bits per sample x channels x time in seconds / 8 = number of bytes of information required to store the sampled sound.
Like any digital file, this can then be compressed using a range of different techniques to try to reduce the size on disk.
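The formula above can be written as a small Python function (the name `pcm_size_bytes` is hypothetical). As a worked example, one minute of uncompressed CD-quality audio (44,100 Hz, 16-bit, stereo) comes to about 10 MB.

```python
def pcm_size_bytes(sample_rate_hz, bits_per_sample, channels, seconds):
    # sampling rate x bits per sample x channels x time / 8 = bytes
    return sample_rate_hz * bits_per_sample * channels * seconds // 8

# One minute of CD-quality audio: 44,100 Hz, 16-bit, stereo.
assert pcm_size_bytes(44_100, 16, 2, 60) == 10_584_000   # ~10 MB, uncompressed
```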
When converting from stereo to mono, what is lost?
One channel, along with the stereo sense of dimension or depth.
What is the frequency range of human hearing?
between 20Hz and 20,000 Hz
How is volume measured?
dB
What is the difference between signal and noise?
This definition is a subjective one. It uses the term "signal" to mean the desirable parts of all the sound being perceived. The undesirable part of the sound is described as "noise". Often you will hear the two terms combined in a phrase like "the signal to noise ratio", meaning the amount of "good stuff" versus the amount of "bad stuff" within the total sound.
Keep in mind that this is often a subjective evaluation, but in a situation where you are trying to capture a human voice (good stuff) while there is a car driving by (bad stuff) this signal to noise ratio is an important feature of the recording process.
What is the difference between vibration and sound?
Vibration is the kinetic movement of particles; sound is the result of an energy wave transmitted from a source, with particles and molecules "passing it on" to their neighbours (at certain frequencies).
How does digital sound differ from analog sound?
Analog sound has distortions from electro-mechanical playback and can retain higher and lower frequencies (higher peaks and lower troughs), while digital sound cuts off peaks and troughs that fall outside its encodable frequency and amplitude range.
What is the minimum sampling rate for recording sound without distortion?
2N Hz, where N is the highest frequency component in the sound (the Nyquist rate).
What is the time base of a sample?
Digital sampling has a fixed frequency based on a high frequency, high precision clock. In comparison, analog systems do not use a syncing clock and so errors can occur in a recording due to inconsistencies in the mechanisms or electronics that cause a "drift" in the recording relative to time. This may be cyclical acceleration or deceleration errors or constant, additive or subtractive synch drift.
Why is bit rate important?
The bit rate (bits per sample) affects the potential quality of each individual sample.
How can clipping be fixed?
Clipping cannot really be "fixed" after the fact; it must be prevented. The simple way to establish the correct volume level is to test the recording system before the recording starts, reducing the amplitude (recording/output level) so the signal stays within range.
How long should a script be?
It varies between directors, but is usually constrained by the budget.
What is the connection between a script and a project budget?
The script documents what must be produced, and the budget limits its scope.
How is the script related to the production of media elements for a project?
The script contains what is required, such as dialogue, sound effects, music, narration, lighting, etc.
List three common elements in a script?
Dialog, sound effects, and music.
What is dialog?
Talking
Differentiate between singing and music in a script?
Singing is performed by the actors, while music adds emotion such as suspense or drama. A musical score is musical notation with accompanying lyrics and much more complex vocal direction.
What is the purpose of a SLUG line?
The SLUG line provides very brief information about the scene.
How would you direct the camera to trolley?
Mark a trolley (tracking) direction in the script, typically for a long take.
What is a CU shot?
Close up
How would you communicate a lighting design involving a flickering light?
Describe it in the scene directions of the script; lighting is detailed when it is an integral part of the narrative of the scene.
The human eye
The human eye can detect electromagnetic radiation in wavelengths from about 390nm to 750nm. This can be expressed in frequency terms as roughly 400THz to 770THz. The maximum sensitivity is at 555nm, or 540THz.
What are the essential issues when using a camera?
Light
Focus
Framing
What is the least expensive storage option for 100GB of project files being used by three production staff?
External hard drive
What are some of the issues with capturing analog broadcast signals?
Special-purpose capture hardware is needed. Analog capture standards provide no temporal compression, working on each individual frame, so quality is traded against compression.
How can quality loss be avoided when converting between analog and digital?
Reduce the compression rate, and use a sufficient sampling rate and bit depth.
Is trial and error a viable method for audio editing? If so why?
Yes; audio renders quickly and cheaply, so operators can experiment to re-create effects and methods.
Is trial and error a viable method for video editing? Contrast with audio editing.
No; rendering video is far more costly than audio, so trial and error is expensive.
What are the strengths of Real-time video editors? To what purpose are they most suited?
Streaming; they also produce fewer sync errors.
What is the value of computer animation? Where is it most applicable?
displaying or explaining a process
Why is microphone position important?
Microphone position determines the sound pick-up and recording level, affecting how much signal versus noise is captured.
What is the clipping threshold?
The maximum amplitude that can be encoded; any signal beyond it is flattened (clipped).
Mid Shot Framing
This shows the subject(s) from about waist upward. It provides some detail while still providing context about the subject's posture and body language.
Narrative Sound Effects
During the recording session with the actors it is often enough to just describe the sound effect to keep everyone moving through the script.
Foley Sound Effects
"Foley" is a particular group of sound effects to do with "body sounds".
Extreme Wide Angle Framing
Extreme Wide Shot (EWS) - This shot is a context view at such a broad view that the subject may be too small to see in the context. This sort of shot is often used as an establishing shot to provide the context for a narrative.
Wide Angle Framing
Wide Shot (WS) - Shows the "whole" of the subject and enough of their surroundings to place them in the scene. May also be called a "Long Shot" or "Full Body Shot".
YUV signals converted from RGB (colour) signals
Y = luminance
U signal = the scaled difference between the blue signal and the luminance (B − Y)
V signal = the scaled difference between the red signal and the luminance (R − Y)
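The conversion can be sketched in Python using the standard BT.601 luminance weights; the 0.492 and 0.877 scale factors are the conventional analog values for U and V (the function name `rgb_to_yuv` is hypothetical):

```python
import math

def rgb_to_yuv(r, g, b):
    # BT.601 luminance weights; U and V carry the colour-difference
    # signals, scaled by the conventional analog factors.
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                      # blue minus luminance, scaled
    v = 0.877 * (r - y)                      # red minus luminance, scaled
    return y, u, v

# Pure white carries no colour-difference information.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
assert math.isclose(y, 1.0) and abs(u) < 1e-9 and abs(v) < 1e-9
```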
Why is synchronization important when scrubbing?
Frames can drop, throwing the sync out.
How does the human eye perceive motion?
The brain perceives motion when something in the visual field changes. The artifact is called "Persistence of Vision" and is the phenomenon of an afterimage persisting in the retina for about 1/25th of a second after the change in the visual field.
How is motion communicated using reel to reel film?
Sequential frames are played through a light projector at a fast rate, exploiting persistence of vision.
What is a colour model?
Computer Monitors and Televisions have displays made up of triplets of Red, Green and Blue colour elements (pixels) which can be combined together to form colours.
This is called an additive colour model because adding all the colours together results in White. (The unlit pixels provide black.)
What is the difference between additive and subtractive colour models?
In an additive model, all colour pixels lit together give white; in a subtractive model, combining all colours gives black.
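Additive mixing can be sketched as summing each RGB channel and clamping to the display maximum (the `add_light` helper is hypothetical; 8-bit channels are assumed):

```python
def add_light(*colors):
    # Additive mixing: sum each RGB channel, clamped to the 8-bit maximum.
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
assert add_light(red, green, blue) == (255, 255, 255)   # all lit = white
```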
What is the frame rate of broadcast PAL television?
25fps
Why are time codes important?
They keep video and audio synchronized.
What method is used to synchronize audio and video in a broadcast stream?
This process is known as multiplexing (time-division multiplexing). Multiplexing avoids many of the synchronization errors possible when audio and video travel as separate streams, and is used very frequently, though it still assumes that the video and audio start "in sync".
Multiplexing is usually shortened to muxing (the "packing" process) and demuxing (the "unpacking" process).
For streaming media, video telephony and broadcast media, keeping audio and video in sync is critical. These kinds of applications tend to use MPEG streams. This format uses multiplexing to interleave chunks of video with chunks of audio, which allows viewers to enter the stream at any point and pick up synced audio and video.
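The interleaving idea can be sketched in a few lines of Python (the `mux`/`demux` helpers are hypothetical, and chunks are plain strings rather than real encoded media):

```python
def mux(video_chunks, audio_chunks):
    # Time-division multiplexing: interleave video and audio chunks so a
    # viewer entering mid-stream soon receives both, still in sync.
    stream = []
    for v, a in zip(video_chunks, audio_chunks):
        stream.extend([v, a])
    return stream

def demux(stream):
    # Unpack the interleaved stream back into separate tracks.
    return stream[0::2], stream[1::2]

stream = mux(["v0", "v1"], ["a0", "a1"])
assert stream == ["v0", "a0", "v1", "a1"]
assert demux(stream) == (["v0", "v1"], ["a0", "a1"])
```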
What is 3D Television?
A TV that displays two offset images, one for each eye; in the simplest (anaglyph) form, these are two colour layers viewed through red/blue glasses.
Differentiate temporal and spacial compression.
Temporal (time) compression looks for patterns and repetitions over time across several frames. Instead of describing every pixel in every frame, temporal compression describes all the pixels in a key frame (a representative frame, for example the 1st frame of a clip), and then for each frame that follows, describes only the pixels that are different from the previous frame.
Spatial compression is where unnecessary information is discarded within each individual image (frame).
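The key-frame-plus-differences idea of temporal compression can be sketched in Python (the `delta_encode`/`delta_decode` helpers are hypothetical; frames are simple lists of pixel values):

```python
def delta_encode(frames):
    # Keep the first (key) frame whole; for each later frame record only
    # the pixel positions whose values differ from the previous frame.
    key = frames[0]
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        deltas.append({i: p for i, (q, p) in enumerate(zip(prev, cur)) if p != q})
    return key, deltas

def delta_decode(key, deltas):
    # Rebuild each frame by applying its stored differences to the
    # previous reconstructed frame.
    frames = [list(key)]
    for delta in deltas:
        frame = list(frames[-1])
        for i, pixel in delta.items():
            frame[i] = pixel
        frames.append(frame)
    return frames

frames = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 2, 3]]
key, deltas = delta_encode(frames)
assert deltas == [{2: 2}, {3: 3}]            # only changed pixels stored
assert delta_decode(key, deltas) == frames   # lossless round trip
```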
Transparency Keying
Many effects rely on manipulating the transparency of either a specific section of the visual area or the whole of the clip. In combination with layering tracks together, you can blend two clips together or "open" a hole in one layer to view another layer below.
How is time represented in a video editor?
in a timeline
Why would you use Chroma Keying?
Chroma keying uses colour values in the frame itself to determine where the transparency occurs. This technique is called "green screening" or "blue screening" when video is shot against a green or blue screen respectively.
What should you see if you have two video tracks of clips occupying the same time segment?
Only one track playing; the top layer hides the one beneath unless transparency is applied.
Is rendering the same as previewing?
No
What is translation in terms of animation?
translate (move) the clip around in the view frame
If you saw a flash of black screen in the middle of a preview where there should not be any, what would you suggest might be wrong?
Two clips on a track are not butted against each other, leaving a gap that plays as black.
Should you render before you preview?
Yes. The process of turning all the instructions, layers and assets into a finished video file is called "rendering".
Why do clips need to be imported into an editor?
This importing stage allows the software to check and pre-process the video data to allow it to be manipulated efficiently.
Frequency Masking
Even if a signal component exceeds the hearing threshold, it may still be masked by louder components that are near it in frequency. This phenomenon is known as frequency masking or simultaneous masking. Each component in a signal can cast a "shadow" over neighboring components. If the neighboring components are covered by this shadow, they will not be heard. The effective result is that one component, the masker, shifts the hearing threshold.