In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Humans can only hear sound waves as distinct pitches when the frequency lies between about 20 Hz and 20 kHz. Sound above 20 kHz is ultrasound and is not perceptible by humans. Sound waves below 20 Hz are known as infrasound. Different animal species have varying hearing ranges.
Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer. An audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound.
Applications of acoustics are found in almost all aspects of modern society; subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electro-acoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration.
Sound is defined as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)." Sound can be viewed as a wave motion in air or other elastic media; in this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound; in this case, sound is a sensation.
Sound can propagate through a medium such as air, water, and solids as longitudinal waves and also as transverse waves in solids (see Longitudinal and transverse waves, below). Sound waves are generated by a sound source, such as the vibrating diaphragm of a stereo speaker. The sound source creates vibrations in the surrounding medium. As the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium vary in time. At an instant in time, the pressure, velocity, and displacement vary in space. Note that the particles of the medium do not travel with the sound wave. This is intuitively obvious for a solid, and the same is true for liquids and gases (that is, the vibrations of particles in the gas or liquid transport the vibrations, while the average position of the particles over time does not change). During propagation, waves can be reflected, refracted, or attenuated by the medium.
The behavior of sound propagation is generally affected by three things:
- A complex relationship between the density and pressure of the medium, which, influenced by temperature, determines the speed of sound within the medium.
- The motion of the medium itself; if the medium is moving, the sound is carried along with it.
- The viscosity of the medium, which determines the rate at which the sound is attenuated.
The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.
Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. It requires a medium to propagate. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angles to the direction of propagation.
Sound waves may be "viewed" using parabolic mirrors and objects that produce sound.
The energy carried by an oscillating sound wave converts back and forth between the potential energy of the extra compression (in the case of longitudinal waves) or lateral displacement strain (in the case of transverse waves) of the matter, and the kinetic energy of the displacement velocity of particles of the medium.
Although there are many complexities relating to the transmission of sounds, at the point of reception (i.e. the ears), sound is readily divisible into two simple elements: pressure and time. These fundamental elements form the basis of all sound waves. They can be used to describe, in absolute terms, every sound we hear.
To understand a sound more fully, a complex wave is usually separated into its component parts, which are a combination of various sound-wave frequencies (and noise).
Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard temperature and pressure, the corresponding wavelengths of sound waves range from 17 m to 17 mm. Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector.
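The wavelength range quoted above follows directly from the relation λ = v/f. A minimal sketch, assuming the common textbook value of about 343 m/s for the speed of sound in air at 20 °C:

```python
# Wavelengths at the limits of human hearing, assuming the speed of
# sound in air is about 343 m/s (a common value for air at 20 C).
def wavelength(frequency_hz, speed_mps=343.0):
    """Return the wavelength in metres for a given frequency in hertz."""
    return speed_mps / frequency_hz

# The audible limits of roughly 20 Hz and 20 kHz give the
# 17 m to 17 mm range quoted above.
print(f"20 Hz  -> {wavelength(20):.1f} m")              # ~17.2 m
print(f"20 kHz -> {wavelength(20_000) * 1000:.1f} mm")  # ~17.2 mm
```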
The speed of sound depends on the medium the waves pass through and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density: c = √(p/ρ).
This was later proven wrong, as the formula yields too low a speed. The French mathematician Laplace corrected it by deducing that sound propagation is not isothermal, as Newton believed, but adiabatic. He added another factor to the equation, gamma (γ), multiplying the pressure by γ to give c = √(γp/ρ). Since K = γp, the final equation becomes c = √(K/ρ), which is also known as the Newton–Laplace equation. In this equation, K is the elastic bulk modulus, c is the velocity of sound, and ρ is the density. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density.
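The size of Laplace's correction can be checked numerically. A sketch, assuming sea-level pressure of 101325 Pa, an air density of about 1.204 kg/m³ at 20 °C, and γ = 1.4 for air:

```python
import math

# Newton's (isothermal) estimate vs. Laplace's (adiabatic) correction
# for the speed of sound in air. Assumed illustrative values:
p = 101325.0   # sea-level pressure, Pa
rho = 1.204    # density of air at 20 C, kg/m^3
gamma = 1.4    # adiabatic index of air

c_newton = math.sqrt(p / rho)            # ~290 m/s, noticeably too low
c_laplace = math.sqrt(gamma * p / rho)   # ~343 m/s, matches observation

print(f"Newton:  {c_newton:.0f} m/s")
print(f"Laplace: {c_laplace:.0f} m/s")
```

The roughly 15 % discrepancy between Newton's prediction and the measured speed is exactly what the adiabatic factor γ accounts for.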
Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In 20 °C (68 °F) air at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph) using the formula v = (331 + 0.6 T) m/s. In fresh water, also at 20 °C, the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph). The speed of sound is also slightly sensitive, being subject to a second-order anharmonic effect, to the sound amplitude, which means there are non-linear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array).
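The linear temperature formula quoted above is easy to apply. A minimal sketch:

```python
# Approximate speed of sound in dry air using the linear formula
# v = (331 + 0.6 T) m/s, with T in degrees Celsius, as quoted above.
def speed_of_sound_air(temp_c):
    """Return the approximate speed of sound in air, in m/s."""
    return 331.0 + 0.6 * temp_c

print(speed_of_sound_air(20))   # 343.0 m/s at 20 C, as in the text
print(speed_of_sound_air(0))    # 331.0 m/s at the freezing point
```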
A distinct use of the term sound from its use in physics is that in physiology and psychology, where the term refers to the subject of perception by the brain. The field of psychoacoustics is dedicated to such studies. Historically the word "sound" referred exclusively to an effect in the mind. Webster's 1947 dictionary defined sound as: "that which is heard; the effect which is produced by the vibration of a body affecting the ear." This meant (at least in 1947) the correct response to the question "if a tree falls in the forest with no one to hear it fall, does it make a sound?" was "no". However, owing to contemporary usage, definitions of sound as a physical effect are prevalent in most dictionaries. Consequently, the answer to the same question is now "yes, a tree falling in the forest with no one to hear it fall does make a sound".
The physical reception of sound in any hearing organism is limited to a range of frequencies. Humans normally hear sound frequencies between approximately 20 Hz and 20,000 Hz (20 kHz). The upper limit decreases with age. Sometimes sound refers only to those vibrations with frequencies that are within the hearing range for humans, or sometimes it relates to a particular animal. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz.
As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, and marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound.
Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see below).
Soundscape is the component of the acoustic environment that can be perceived by humans. The acoustic environment is the combination of all sounds (whether audible to humans or not) within a given area as modified by the environment and understood by people, in context of the surrounding environment.
There are, historically, six experimentally separable ways in which sound waves are analysed. They are: pitch, duration, loudness, timbre, sonic texture and spatial location. Some of these terms have a standardised definition (for instance in the ANSI Acoustical Terminology ANSI/ASA S1.1-2013). More recent approaches have also considered temporal envelope (ENV) and temporal fine structure (TFS) as perceptually relevant analyses.
Pitch is perceived as how "low" or "high" a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound. For simple sounds, pitch relates to the frequency of the slowest vibration in the sound (called the fundamental harmonic). In the case of complex sounds, pitch perception can vary. Sometimes individuals identify different pitches for the same sound, based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them. Specific attention is given to recognising potential harmonics. Every sound is placed on a pitch continuum from low to high. For example: white noise (random noise spread evenly across all frequencies) sounds higher in pitch than pink noise (random noise spread evenly across octaves) as white noise has more high frequency content. Figure 1 shows an example of pitch recognition. During the listening process, each sound is analysed for a repeating pattern (see Figure 1: orange arrows) and the results forwarded to the auditory cortex as a single pitch of a certain height (octave) and chroma (note name).
Duration is perceived as how "long" or "short" a sound is and relates to onset and offset signals created by nerve responses to sounds. The duration of a sound usually lasts from the time the sound is first noticed until the sound is identified as having changed or ceased. Sometimes this is not directly related to the physical duration of a sound. For example, in a noisy environment, gapped sounds (sounds that stop and start) can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth. This can be of great benefit in understanding distorted messages such as radio signals that suffer from interference, as (owing to this effect) the message is heard as if it were continuous. Figure 2 gives an example of duration identification. When a new sound is noticed (see Figure 2, green arrows), a sound onset message is sent to the auditory cortex. When the repeating pattern is missed, a sound offset message is sent.
Loudness is perceived as how "loud" or "soft" a sound is and relates to the totalled number of auditory nerve stimulations over short cyclic time periods, most likely over the duration of theta wave cycles. This means that at short durations, a very short sound can sound softer than a longer sound even though they are presented at the same intensity level. Past around 200 ms this is no longer the case, and the duration of the sound no longer affects its apparent loudness. Figure 3 gives an impression of how loudness information is summed over a period of about 200 ms before being sent to the auditory cortex. Louder signals create a greater 'push' on the basilar membrane and thus stimulate more nerves, creating a stronger loudness signal. A more complex signal also creates more nerve firings and so sounds louder (for the same wave amplitude) than a simpler sound, such as a sine wave.
Timbre is perceived as the quality of different sounds (e.g. the thud of a fallen rock, the whir of a drill, the tone of a musical instrument or the quality of a voice) and represents the pre-conscious allocation of a sonic identity to a sound (e.g. "it's an oboe!"). This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch and the spread and intensity of overtones in the sound over an extended time frame. The way a sound changes over time (see Figure 4) provides most of the information for timbre identification. Even though a small section of the wave form from each instrument looks very similar (see the expanded sections indicated by the orange arrows in Figure 4), differences in changes over time between the clarinet and the piano are evident in both loudness and harmonic content. Less noticeable are the different noises heard, such as air hisses for the clarinet and hammer strikes for the piano.
Sonic texture relates to the number of sound sources and the interaction between them. The word 'texture', in this context, relates to the cognitive separation of auditory objects. In music, texture is often referred to as the difference between unison, polyphony and homophony, but it can also relate (for example) to a busy cafe, a sound which might be referred to as 'cacophony'. However, texture refers to more than this. The texture of an orchestral piece is very different to the texture of a brass quintet because of the different numbers of players. The texture of a market place is very different to that of a school hall because of the differences in the various sound sources.
Spatial location (see: Sound localization) represents the cognitive placement of a sound in an environmental context, including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment. In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification. This is the main reason why we can pick out the sound of an oboe in an orchestra and the words of a single person at a cocktail party.
Common sound measurements and their symbols:
- Sound pressure: p, SPL, LPA
- Particle velocity: v, SVL
- Sound intensity: I, SIL
- Sound power: P, SWL, LWA
- Sound energy density: w
- Sound exposure: E, SEL
- Speed of sound: c
Sound pressure is the difference, in a given medium, between the average local pressure and the pressure in the sound wave. The square of this difference (i.e., the square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and the square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dB SPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between (1 atm − 1.4 Pa) and (1 atm + 1.4 Pa), that is, between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL) or Lp is defined as Lp = 20 log₁₀(p/p₀) dB, where p is the RMS sound pressure and p₀ is the reference sound pressure (commonly 20 μPa in air).
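The decibel scale described above can be sketched in a few lines, assuming the standard airborne reference pressure of 20 μPa:

```python
import math

# Sound pressure level relative to the standard reference pressure
# for sound in air, p0 = 20 micropascals.
P_REF = 20e-6  # Pa

def spl_db(p_rms):
    """Sound pressure level in dB for an RMS sound pressure in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

print(f"{spl_db(1.0):.1f} dB")  # ~94 dB for 1 Pa RMS, as cited above
```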
Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise, and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.
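A sketch of the A-weighting curve, following the analogue-filter approximation given in IEC 61672-1; the +2.00 dB offset normalises the curve to 0 dB at 1 kHz:

```python
import math

# A-weighting gain in dB applied at frequency f (Hz) before levels
# are summed, per the IEC 61672-1 analogue approximation.
def a_weighting_db(f):
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # normalised to ~0 dB at 1 kHz

print(round(a_weighting_db(1000.0), 2))  # ~0.0 dB at 1 kHz by design
print(round(a_weighting_db(100.0), 1))   # low frequencies are attenuated
```

The strong attenuation at low frequencies mirrors the ear's reduced sensitivity there, which is why dBA readings of low-frequency noise are much lower than the unweighted levels.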
Ultrasound is sound waves with frequencies higher than the upper audible limit of human hearing. Ultrasound is no different from 'normal' (audible) sound in its physical properties, except in that humans cannot hear it. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.

Ultrasound is commonly used for medical diagnostics such as sonograms.
Infrasound is sound waves with frequencies lower than 20 Hz. Although such sounds are too low in frequency for humans to hear, whales, elephants and other animals can detect infrasound and use it to communicate. It can be used to detect volcanic eruptions and is used in some types of music.