The next few sections will cover how sound is created and stored on a computer. But first we will take a quick look at what sound actually is.
A sound is created whenever anything disturbs the air around it. A twig snapping, a gong being struck, a note being played on a violin: all of these cause the surrounding air to move. This creates a variation in pressure that radiates out as a sound wave.
The effect is rather like throwing a pebble into a pond and watching the ripples radiate outwards. In the case of sound, the wave is a three-dimensional sphere that expands outwards from the source, travelling at the speed of sound, which is about 340 meters per second, or 760 miles per hour. The speed varies slightly depending on conditions such as air temperature, pressure and humidity.
Short percussive sounds, such as the snap of a twig, or hitting a drum, send out a short burst of energy that radiates outwards to our ears.
In music, we are often interested in sound sources that vibrate, emitting a continuous stream of pressure variations at a particular pitch (or frequency). We hear different frequencies as different musical notes, and that is the basis of a lot of traditional forms of music.
Frequency is measured in cycles per second, also called hertz; the two terms are used interchangeably. Cycles per second is usually shortened to cps (which you will sometimes see in the CSound documentation), and hertz is usually shortened to Hz.
For example, if you play middle C on a guitar, the string will vibrate 261.6 times per second (261.6 Hz). This will cause the air pressure to fluctuate at the same rate, and these pressure variations will radiate outwards and reach your ears.
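The relationship between notes and frequencies is worth making concrete. In the standard equal-temperament tuning system (a common convention, not something the text above spells out), each semitone step multiplies the frequency by the twelfth root of two, with the note A above middle C fixed at 440 Hz. A minimal sketch:

```python
def note_frequency(semitones_from_a4: int) -> float:
    """Frequency in Hz of a note a given number of semitones above
    (positive) or below (negative) A4, which is fixed at 440 Hz.
    Uses equal temperament: each semitone multiplies by 2 ** (1/12)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# Middle C (C4) is 9 semitones below A4.
middle_c = note_frequency(-9)
print(round(middle_c, 1))  # 261.6, matching the figure quoted above

# Going up a full octave (12 semitones) doubles the frequency.
print(note_frequency(12))  # 880.0
```

The function name and layout here are just illustrative; the only facts being relied on are the 440 Hz reference pitch and the octave-doubling rule.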
You can usually recognise a tune from its sequence of notes, no matter what instrument it is played on. Of course, the same notes played on a different instrument will sound different. The basic quality that makes, say, a guitar, an oboe and a saxophone sound different is called timbre.
There are also more subtle differences between different instruments - in many cases when you play a note, the timbre and volume will vary over time. Experimenting with time varying timbre is a key part of designing new computer instruments.
In the real world, your senses will often be met with several sound sources, located in different places, each emitting their own stream of sound waves.
For example, a full orchestra might have a hundred different instruments, all around the stage, each creating their own different and interesting sounds.
But it is actually even more complex than that, because sound reflects. If you stand in a mountain range and shout, you will hear an echo coming back: the sound reflected off the side of a nearby mountain. The same happens in a concert hall, where the sound bounces off the walls, ceiling and floor, perhaps many times. Concert halls are actually designed to make that happen; that is what we mean when we say that a room has good acoustics. It makes the sound wonderfully full and rich.
If you could actually see the sound waves when an orchestra is in full flow, with a hundred sources of sound and a thousand reflections, it would be a complex and beautiful sight! But how can a computer program replicate that?
One thing that makes computer sound slightly less daunting is the fact that we can only hear the sound that enters our two ears. Specifically, we can only detect sound waves that hit our eardrums (a small membrane in each ear that transmits sound from the air to the inner ear).
This means that we can create something like the sound of a full orchestra (or any other type of music) using a pair of headphones or two (or more) speakers.
Copyright (c) Axlesoft Ltd 2020