Musical Note Frequency Calculator
This musical note frequency calculator converts musical notes to their frequencies and wavelengths and vice versa. Although pitch notation is usually used to describe audible sounds, it can also describe higher and lower frequencies. We can hear notes lower than 20 Hz because they usually contain overtones that fall within the human hearing range. Very high and very low frequencies are sometimes described in terms of octaves above or below the middle C note.
Example: Calculate the frequency of the note C4 (middle C), which is 9 semitones below A4: f = 440 × 2^(-9/12) ≈ 261.626 Hz.
Definitions and Formulas
The basic formula to calculate the frequencies of musical notes of the equal-tempered scale is fn = f0 × 2^(n/12), where:
fn is the frequency of the note, which is n semitones (or half steps) away from the standard pitch A440;
f0 is the frequency of a fixed note, which is used as a standard for tuning. It is usually a standard (also called concert) pitch of 440 Hz, which is called A440 or note A in the one-line (or fourth) octave (A4);
n is the number of semitones (half steps) from the standard pitch; n > 0 for notes higher than the standard pitch, and n < 0 for notes lower than the standard pitch.
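As a quick illustration, the formula above can be sketched in Python (the function name is ours; A4 = 440 Hz is assumed as the standard pitch):

```python
def note_frequency(n, f0=440.0):
    """Frequency of the note n semitones away from the standard pitch f0.

    fn = f0 * 2**(n/12); n > 0 above the standard pitch, n < 0 below it.
    """
    return f0 * 2 ** (n / 12)

print(round(note_frequency(-9), 3))  # C4 (middle C), 9 semitones below A4: 261.626
print(round(note_frequency(3), 3))   # C5, 3 semitones above A4: 523.251
```
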
The formula above can be modified to calculate the number n of semitones away from the standard pitch f0: n = 12 × log2(fn/f0).
To calculate the frequencies of musical notes from a known MIDI note number nm and the standard pitch A440, and vice versa, we can use these formulas: fn = 440 × 2^((nm − 69)/12) and nm = 69 + 12 × log2(fn/440),
where 69 is the MIDI note number of A4. There are 128 notes in the MIDI standard (0 to 127), from the note C−1 (minus 1), which is MIDI note 0 with a frequency of 8.176 Hz, to the note G9, which is MIDI note 127 with a frequency of 12543.854 Hz.
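The MIDI conversion formulas can be sketched in Python as follows (function names are ours):

```python
import math

A4 = 440.0  # standard pitch, MIDI note 69

def midi_to_freq(nm):
    """fn = 440 * 2**((nm - 69)/12)"""
    return A4 * 2 ** ((nm - 69) / 12)

def freq_to_midi(f):
    """nm = 69 + 12 * log2(f/440)"""
    return 69 + 12 * math.log2(f / A4)

print(round(midi_to_freq(0), 3))    # C-1, lowest MIDI note: 8.176
print(round(midi_to_freq(127), 3))  # G9, highest MIDI note: 12543.854
```
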
To calculate the frequencies of musical notes from a known standard (88-key) piano note number np and the standard pitch A440, and vice versa, we can use the following formulas: fn = 440 × 2^((np − 49)/12) and np = 49 + 12 × log2(fn/440),
where 49 is the piano note number of A4. There are 88 notes on the standard piano keyboard, from note A0 (sub-contra octave), which is piano note 1 with a frequency of 27.500 Hz, to note C8, which is piano note 88 with a frequency of 4186.009 Hz.
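Similarly, a minimal sketch of the piano-key conversion (assuming the keys are numbered 1 to 88, with A4 as key 49):

```python
import math

def piano_key_to_freq(np_, a4=440.0):
    """fn = 440 * 2**((np - 49)/12), where A4 is key 49 on an 88-key piano."""
    return a4 * 2 ** ((np_ - 49) / 12)

def freq_to_piano_key(f, a4=440.0):
    """np = 49 + 12 * log2(f/440)"""
    return 49 + 12 * math.log2(f / a4)

print(round(piano_key_to_freq(1), 3))   # A0, lowest key: 27.5
print(round(piano_key_to_freq(88), 3))  # C8, highest key: 4186.009
```
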
Note that the ratio a of the frequencies of the two notes in any semitone interval is equal to the twelfth root of 2: a = 2^(1/12) ≈ 1.059463.
The cent is a dimensionless logarithmic unit of measurement of musical intervals; the size in cents of the interval between two frequencies f1 and f2 is equal to n = 1200 × log2(f2/f1).
100 cents make one semitone of the equal-tempered scale. In other words, one cent is 1/100 of a semitone.
To calculate the offset nb in cents of a known frequency fn from a reference note f0, we will use the following formula: nb = 1200 × log2(fn/f0).
For example, A♯4/B♭4 has a frequency of 466.164 Hz. Relative to A4 (440 Hz), the formula above gives nb = 100.0009 ≈ 100 cents.
The frequency 440.2542274 Hz is 1 cent above A4:
Then nb = 0.999999989 ≈ 1 cent.
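Both examples can be checked with a small Python sketch (the helper name is ours; A4 = 440 Hz is the assumed reference):

```python
import math

def cents_offset(f, f_ref=440.0):
    """Offset in cents of frequency f from the reference note f_ref (A4 by default)."""
    return 1200 * math.log2(f / f_ref)

print(cents_offset(466.164))      # A#4/Bb4: ~100.0009 cents above A4
print(cents_offset(440.2542274))  # ~1 cent above A4
```
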
The wavelength λ of a musical note with a frequency fn can be calculated as λ = c/fn,
where c is the speed of sound in air at 20 °C (68 °F), which is approximately 343 m/s or 1125 ft/s.
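A minimal sketch of the wavelength calculation (assuming c = 343 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at 20 degrees C

def wavelength(f):
    """Wavelength in meters of a sound wave of frequency f (Hz) in air."""
    return SPEED_OF_SOUND / f

print(round(wavelength(440.0), 4))   # A4: ~0.7795 m
print(round(wavelength(20.0), 2))    # low end of human hearing: 17.15 m
print(round(wavelength(20000.0) * 1000, 2))  # high end: ~17.15 mm
```
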
Definition of Sound
The Physics of Sound
Experiments with sounds. Chladni figures
Channels through which we learn about our surroundings: sight — 75%, hearing — 13%, touch — 6%, smell — 3%, and taste — 3%.
We receive most information about our surroundings through sight, while hearing is the second most important information channel for us humans. Hearing is more important than all the other traditional channels (touch, smell, and taste) and non-traditional senses such as the sense of pain, the vestibular sense, the senses of heat and cold, and the kinesthetic sense, which provides information about body movement and the relative positions of body parts. Scientists estimate the distribution of information received by humans through various channels approximately as follows: we get 75% of information through the visual channel, up to 13% through the auditory channel, and the remaining 12% through other channels. Of course, these estimates reflect only statistical averages, which can be completely different for each person.
Some animal species can hear well beyond the hearing range of humans. Humans cannot hear infrasound like elephants and whales, or ultrasound like dolphins and bats. At the same time, the use of language as a communication system is unique to humans. The development of speech not only gave humans an additional communication channel but also allowed them to describe past events and pass their experience on to future generations, to coordinate immediate actions (for example, the tactics of gathering, hunting, defense, and attack), and to plan the future behavior of the group and of individuals (survival strategy).
The pain threshold of sound pressure is 20 Pa
The human ear can sense sound pressure from 2×10⁻⁵ Pa (the hearing threshold) and withstand pressure up to 20 Pa (the pain threshold). However, humans distinguish sounds by loudness relatively poorly. That is why a logarithmic scale is used for the power and intensity of acoustic signals. Other mammals and even birds can hear quieter sounds than humans. Owls, foxes, and cats can hear the movements of tiny animals like mice at a much greater distance than humans can, because these animals make up their daily diet.
Human rhythmic cognition is very good. Moreover, compared to animals, humans can accurately sense sound frequency and especially the difference between the frequencies of different sounds. These capabilities led us to the creation of a completely different, artificial world of sounds: music.
The first musical instruments were probably percussion instruments. They have evolved from a hollow half-rotten tree trunk to modern electronic drum kits. Then, most probably, wind instruments appeared as an obvious evolution from seashells and the horns of hoofed animals, from a simple whistle or a pipe made of a piece of hollow cane or a green willow twig to modern flutes, oboes, trumpets, saxophones, and organs. When mankind entered the age of metals, string instruments appeared. However, they could not be used to play military music to inspire combatants or to assist in communication in battle, because no one would have heard the sounds of lyres, citterns, lutes, and harps on the battlefield. A modern music synthesizer, capable of replacing all instruments and, among other things, of playing pre-programmed rhythms imitating percussion instruments and basslines, can be considered the evolutionary peak of musical instruments.
Perhaps the first people who appreciated the fascinating and ecstatic impact of sounds in the form of rhythms, chants, and prayers were representatives of various religious cults, from primitive shamans and priests of ancient gods to the highest hierarchies of various churches and denominations, who commissioned famous composers to write immortal works.
The military has also long appreciated the inspiring and disciplining role of percussion instruments and martial music in general. To the sounds of flutes and military drums, heavily armed Greek hoplites broke through the Persian defense at the Battle of Marathon. To the sound of military music, the deadly phalanx of Alexander the Great, armed with sarissas, swept away everything in its path. Roman turmae and alae swept the battlefields, wedge formations of medieval knights attacked the enemy, and dragoons and hussars of the Napoleonic Wars maneuvered on the battlefield, all of them to the sounds of horns and trumpets signaling maneuvers. Even today, music is used in armies all over the world to achieve the same goals as it did two thousand years ago!
Sound fought not only on land: from time immemorial, boatswain's calls were used to pass various commands on sail handling to a ship's deckhands. Rowers on the war galleys of all ages and nations did their job listening to the beat of drums setting the pace of rowing. Contrary to the popular belief about the sad fate of galley slaves, slaves were not usually put at the oars. Ancient navies of the time of Odysseus and the Argonauts, navies of the Middle Ages, and later the Vikings on their longships preferred to rely on free men as rowers. Only in the Late Middle Ages did navies begin to employ slaves as rowers. Later, the sound of the ship's bell was, and still is, used to regulate the sailors' duty watches and to indicate the time aboard a ship. Ship's bells are also used in foggy conditions for safety and as "boat gongs" for various dignitaries and officers coming aboard and leaving the ship.
Sound has also played and continues to play a leading role in traditional hunting with dogs. Ultrasonic and conventional whistles are used to command the dog to retrieve a downed bird. Experienced hunters can determine by the tone of barking whether their dogs are hunting the prey by sight or by scent.
Not surprisingly, sound itself, its processing, and its generation became a source of income for a large part of humanity. A mere description of the professions associated with sound in one way or another could easily fill an encyclopedic volume. The list of people dealing with sound includes not only singers and musicians who compose and perform music. It also includes music critics and editors, sound engineers, radio and television broadcasters, acoustic engineers, acoustic architects, scientists working with sound and acoustics, and even environmentalists assessing noise pollution. This list can go on and on...
Left to right: Aristotle, Euclid, Ptolemy, and Francis Bacon; source: Wikipedia
People have been interested in how sound propagates, how it can be recorded, and in its other features since time immemorial. The ancient scholars Ptolemy and Euclid understood the nature of sound as mechanical vibrations, describing it in terms relevant to the state of science at that time. Another reputable Greek scholar, Aristotle, suggested that the speed of sound in air is finite. With the development of science and technology related to the measurement of time, it became possible to determine the speed of sound experimentally. The British scientist Francis Bacon, in his work Novum Organum, written in the first half of the 17th century, described a method of determining the speed of sound by comparing the time intervals between gun flashes and the sounds of the shots.
Left to right: Jean Picard, Marin Mersenne, Pierre Gassendi, and William Derham; source: Wikipedia
Using this method, during the 17th century various researchers (Marin Mersenne, William Derham, Pierre Gassendi, Jean Picard, Ole Rømer, Robert Boyle, and others) measured the speed of sound in air, getting results in the range of 350–390 m/s. The spread of values was due to the inaccuracy of the measurements and the inconsistency of units of length at that time.
Left to right: Ole Rømer, Isaac Newton, Hermann von Helmholtz, and Robert Boyle; source: Wikipedia
In his work Philosophiæ Naturalis Principia Mathematica, often referred to as simply the Principia, Sir Isaac Newton gave a theoretical substantiation of the speed of sound. Because of his incorrect assumption that sound transmission is an isothermal process, the speed of sound calculated by Newton, 298 m/s, was approximately 15% lower than the true value. This was corrected by Pierre-Simon, marquis de Laplace, who treated sound propagation as an adiabatic process. According to modern data, the speed of sound in air at normal conditions is 343.2 meters per second, or 1,126 ft/s, or 768 mph.
The plates of the violin body have a number of resonances, that is, they vibrate more easily at certain frequencies and enhance string vibrations
A significant contribution to the development of physiological and musical acoustics was made by the outstanding German physicist, physician, physiologist, psychologist, and acoustician Hermann Ludwig Ferdinand von Helmholtz. As a theorist, he created an acoustic resonance theory, developed the resonance theory of hearing, and examined the harmonic series. He was the first to put forward the theory of combination tones, a psychoacoustic phenomenon in which a person perceives an additional tone or tones when two real tones are sounded at the same time. Helmholtz explained their appearance by the non-linearity of the mechanical system of the human inner ear. He also explained the phenomenon of dissonance by beats between the overtones of simultaneous sounds. To study sound, he invented a device now known as the Helmholtz resonator. A set of resonators of different sizes became the prototype of modern audio spectrum analyzers.
Definition of Sound
The concept of sound is defined differently in various areas of science. In physics, sound is defined as mechanical oscillations in the form of acoustic waves propagating in an elastic solid, liquid, or gaseous medium, as well as in plasma. There is no sound in a vacuum because there is no material of sufficient density to provide a transmission medium. In biology, physiology, and psychology, sound is defined as the reception of such mechanical vibrations and their perception by the brain of humans and other animals. In acoustics, the interdisciplinary branch of physics that studies sound, sound is understood as a rather narrow range of vibrations, from 16–20 Hz to 15–20 kHz, determined by the ability of the human ear to hear them. Sound below the human hearing range is called infrasound; sound above the audible range, up to several gigahertz, is called ultrasound.
As any wave in physics, sound is characterized primarily by its amplitude and its frequency or wavelength; the wavelength is inversely proportional to the frequency. If we take the sound range recommended by the American National Standards Institute (ANSI), exactly 20 Hz to 20,000 Hz, its wavelengths in air at standard conditions for temperature and pressure will be in the range of 17 m to 17 mm.
The Physics of Sound
Because sound can propagate only through a material medium, sound in gaseous and liquid media, as well as in plasma, is transmitted in the form of longitudinal waves of compression and rarefaction. In solids, sound can be transmitted as both longitudinal and transverse waves (oscillating at right angles to the direction of propagation); in this case, we talk about shear stress. The compression and stretching of the turns of a spring under the influence of a source of vibrations is a good example of longitudinal waves. The vibrations of the strings of any bowed (violin, cello, double bass) or plucked (lyre, harp, guitar, gusli) string instrument are an example of transverse waves.
A sound source creates a sound wave by creating vibrations in the medium. This sound wave spreads from the sound source at a certain speed inherent in this particular medium. It should be noted that there is no net movement of the particles of the medium; the particles merely oscillate about their equilibrium positions.
The propagation of sound is determined by four factors:
- The density, elasticity, and temperature of the medium in which the sound travels.
- The movement of the medium itself relative to the stationary source and the receiver of the sound.
- The movement of the sound source relative to the stationary environment and the sound receiver.
- The viscosity of the medium.
The first factor is the main one; it defines the speed of sound in a given medium. The speed of sound is determined by the elasticity and density of the medium, and for gaseous media it increases with temperature. The speed of sound in gases is lower than in liquids, and the speed of sound is highest in solids.
The second factor is intuitive from our everyday experience: the same source of sound is heard differently depending on whether we receive the sound facing the wind or standing downwind of the source.
A sound-absorbing screen is sometimes used to reduce sound reflections from the walls behind the microphone.
The third factor is similar to the second: the sound of an approaching train or car is different from the sound of a receding one. In physics, this change in the frequency of a periodic signal (sound, electromagnetic, mechanical, or any other periodic event) for an observer, depending on the relative motion of the source and the observer, is called the Doppler effect.
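As an illustrative sketch (not part of the original calculator), the classic Doppler formula for sound, f′ = f (c + v_observer)/(c − v_source), can be computed as follows; the sign convention used here is an assumption of this example:

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at 20 degrees C

def doppler_observed(f_source, v_source=0.0, v_observer=0.0, c=SPEED_OF_SOUND):
    """Observed frequency of a sound of frequency f_source (Hz).

    v_source > 0 when the source approaches the observer;
    v_observer > 0 when the observer approaches the source.
    """
    return f_source * (c + v_observer) / (c - v_source)

# A 400 Hz train horn approaching at 30 m/s sounds higher...
print(round(doppler_observed(400.0, v_source=30.0), 1))   # ~438.3 Hz
# ...and lower when receding (negative approach speed).
print(round(doppler_observed(400.0, v_source=-30.0), 1))  # ~367.8 Hz
```
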
The last factor is associated with the attenuation of the sound wave during its propagation. Again, from everyday experience, we know that the distant rumble of thunder is not as deafening as if lightning struck nearby.
When sound, like any other wave, passes through a medium with non-uniform (variable) characteristics, it can be refracted, reflected, focused, or scattered. In addition, diffraction around objects is possible if their size is comparable to or smaller than the wavelength of the sound waves.
At the interface of two sound-transmitting media, elastic energy can be transmitted by surface waves of different types. In this case, the speed of surface wave propagation differs from the speed of propagation of longitudinal and transverse waves. An example of such waves is the ripples spreading out when a stone is dropped into water.
Two Russian words /д/ом (house) and /т/ом (Tom, a name) have a completely different meaning because one phoneme was replaced by another one
In the daily activities of modern humans, sounds in the form of articulate speech play an important role as a means of interpersonal communication and a source of information. Reproduction or perception of sounds is difficult if a person has speech or hearing defects caused by congenital or acquired abnormalities due to various diseases. This hinders communication and understanding.
Speech is the oral form of language, which, in turn, is a set of lexemes (words and word combinations in all their forms; note that all headwords in a dictionary are lexemes) and names constituting the dictionary of the language. They are used according to certain rules specific to the particular language (its syntax). The study of the sounds of human speech is called phonetics, which is a branch of linguistics.
Each word is created from a limited set of units of speech, the vowel and consonant sounds called phonemes. A phoneme (from Ancient Greek φώνημα, sound) is a minimal unit of language that distinguishes one word from another in a particular language. The replacement of one phoneme by another completely changes the meaning of the word; for example, the two English words ki/ss and ki/ll and the two Russian words /д/ом (house) and /т/ом (Tom, a name) have completely different meanings. Different languages use different numbers of phonemes: some have only two dozen, while others have more than a hundred. This is one of the reasons why there are thousands of modern languages and dialects. A combination of phonemes, called a morpheme, is the smallest meaningful grammatical unit of a language. In turn, one morpheme (a root) or several morphemes (prefix + root + suffix + ending) make up a word, the basic unit of language.
Two English words ki/ss and ki/ll have a completely different meaning because one phoneme was replaced by another one
In addition to everyday speech, each language has a special form that uses the aesthetic and rhythmic qualities that can be found in any language. We are talking about poetry, of course. Poetry is a form of literature that uses such qualities of language as harmony, intonation, meter, and sound symbolism. The organization of poetic speech provides a sound structure through the ordering of lines, rhythm, rhyme, and meter. Depending on the poetic style, syllables of a certain length, stress, and strength are put together in an orderly fashion.
Poetry is much older than writing, dating back to the Sumerian Epic of Gilgamesh and the ancient Indian Ramayana and Mahabharata, whose originals existed in the form of oral poetic speech. Geographically and historically closer examples are the European literary monuments: the Castilian epic poem El Cantar de Mio Cid, the Old French epic poem La Chanson de Roland, and the Old Norse poetry collection the Poetic Edda. They all originally existed in the form of oral tales.
Judging by archaeological finds of primitive musical instruments, music has accompanied modern humanity since its evolution between 200,000 and 100,000 years ago in Africa. The oldest known musical instrument (not counting percussion instruments) is considered to be the flute, because flutes are relatively often discovered by archaeologists. Carbon dating of a fragment of one such instrument, made from a cave bear femur, showed that its age is at least 40,000 years! It can be reliably assumed that even more ancient instruments of this type existed. They were made of wood, cane, and other readily available materials, which were much easier to work. For obvious reasons, these artifacts could not survive for such a long period.
Nyckelharpa — a traditional Swedish bowed string musical instrument
In the historical context, different musical cultures used a variety of musical instruments whose design and tuning affected the style and manner of performance of old music.
Music is composed of musical sounds (tones), which are sounds of varying pitch (frequency). A pure tone is a steady periodic sound with a sinusoidal waveform; frequency (pitch) is the main characteristic of a pure tone. A musical tone differs from a pure tone in that it is also characterized by its timbre, duration, and intensity or loudness. Unlike musical tones, musical notes can additionally be characterized by such aperiodic aspects as vibrato and the attack, decay, sustain, and release of their envelope. The duration of musical notes is measured not in seconds or milliseconds but in relative terms (whole note, half note, quarter note, eighth note). Relative terms are also used to describe loudness (piano, forte, and their derivatives). The absolute duration of any note depends on its relative duration and the tempo of the piece; for example, a whole note in pieces of different tempo will have different durations. The tempo is also an important characteristic of a particular piece and is likewise expressed in descriptive relative terms such as largo, lento, adagio, moderato, allegro, vivo, presto, and their derivatives.
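As a small illustration of how relative duration and tempo combine, the absolute duration of a note can be computed; this sketch assumes the tempo is given in quarter-note beats per minute:

```python
# Relative note values, in quarter-note beats (an assumption of this example:
# the beat is a quarter note, as in common time signatures such as 4/4).
RELATIVE_BEATS = {"whole": 4.0, "half": 2.0, "quarter": 1.0, "eighth": 0.5}

def note_seconds(note, bpm):
    """Absolute duration in seconds of a note at a tempo of bpm quarter-note beats per minute."""
    return RELATIVE_BEATS[note] * 60.0 / bpm

print(note_seconds("whole", 60))   # 4.0 s at a slow, largo-like tempo
print(note_seconds("whole", 120))  # 2.0 s at a faster tempo: same note, half the duration
```
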
Unlike the piano, which is tuned to equal temperament, the strings of a violin are tuned against each other in intervals of perfect fifths. First, the A string is tuned to a standard pitch (usually 440 Hz) using a piano or a tuning fork. The E and D strings are then tuned in perfect fifths against the A string, and the G string is tuned against the D string. Note that in equal temperament the frequency ratio of a perfect fifth (1.498307), which is used for tuning a piano, differs from the 3:2 ratio of Pythagorean tuning or just intonation used for tuning a violin. How do they, well, almost perfectly, play together? Who can answer this question?
In addition to these characteristics of musical sounds, there are specific musical terms that describe performance techniques (staccato and legato), which affect the perception of music. Therefore, in physical terms, any piece of music is a temporal sequence of sounds of certain waveforms, frequencies, amplitudes, and durations, together with the pauses between them. This interpretation is close to the modern interpretation of music given by music theorists, who rightly consider all naturally occurring sounds as music. Suffice it to recall the compositions of the outstanding British group Pink Floyd, which perfectly combine alarm clock and Big Ben sounds, a plane explosion, and the sound of coins dropping into the coin tray of a gaming machine.
The Greek music theorists Pythagoras and Aristoxenus related music to mathematics, because musical sounds were reproduced in a harmony that Pythagoras designed on the basis of certain mathematical laws, the relevant representation of his doctrine of harmony. A consonance in its simplest form is two simultaneously reproduced sounds, now called a musical interval, that sound harmonious. According to Pythagoras, the frequencies of the tones are related by ratios of small whole numbers, such as 8 to 9 for the whole tone. This set of euphonious intervals defines the so-called Pythagorean system, which, when a melody is transposed (transferred) to a different key (a change of the original frequency), can give a completely different result: the transposed melody might sound false.
Pure tone with a 640 Hz sinusoidal waveform; the spectrogram (frequency versus time) is shown on the lower chart
A scientific substantiation of this phenomenon (dissonance) was given by the German scientist Hermann von Helmholtz, who introduced the concepts of the natural scale and overtones. An overtone is any frequency greater than the fundamental frequency of a sound. He explained the phenomenon of dissonance by the presence of beats between higher harmonics. For these reasons, the just intonation (or pure intonation) tuning system was created. In this system, the frequencies of sounds are related to each other by ratios of small whole numbers (octave 1:2, fifth 2:3, fourth 3:4, major third 4:5, minor third 5:6, major tone 8:9, minor tone 9:10, semitone 15:16). The result is a diatonic scale, which is absolutely harmonious but produces wolf intervals. Pieces that use this scale cannot be easily transposed to a different key.
As a result, twelve-tone equal temperament was created, in which every pair of adjacent pitches of the 12 sounds is separated by the same interval, with a frequency ratio equal to the twelfth root of two. For classical and Western music, this system, which divides the octave into 12 parts that are all equal on a logarithmic scale, has been the most common for the past several hundred years. Nowadays, instruments are usually tuned relative to a tuning fork with a standard frequency of 440 Hz, which corresponds to the musical note A above middle C. When the tuning fork, used as a standard of pitch for tuning musical instruments, was invented by the British musician John Shore, who was Sergeant Trumpeter to the court, tuning forks of various frequencies, such as 423.5 and 435 Hz, were in use. Now, if a tuning fork is not used, the musical instruments in an orchestra are tuned to an oboe, not only for historical reasons but also because the oboe's tuning is considered very stable.
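The equal-tempered semitone ratio, and the small deviation of the equal-tempered fifth from the pure 3:2 fifth mentioned above, can be verified with a short Python sketch:

```python
import math

SEMITONE = 2 ** (1 / 12)  # frequency ratio between adjacent pitches in 12-tone equal temperament

# An equal-tempered perfect fifth spans 7 semitones.
et_fifth = SEMITONE ** 7
pure_fifth = 3 / 2

print(round(et_fifth, 6))  # 1.498307
# Deviation of the pure fifth from the equal-tempered fifth, in cents:
print(round(1200 * math.log2(pure_fifth / et_fifth), 3))  # ~1.955 cents
```

Twelve such semitones stack up to exactly one octave (a ratio of 2), which is what makes free transposition possible in this system.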
Acoustic beats of two 660 and 640 Hz sinusoidal signals
It should be noted that, in addition to Western tuning systems, other tuning systems developed by great civilizations (classical Indian, Japanese, and Chinese) exist. The Indian system divides the octave into 22 unequal steps, which is why Indian traditional music sounds strange to the Western ear. Japanese and Chinese musical traditions, as well as many European folk songs, are based on scales that coincide with certain pentatonic scales. One of the hit songs written by the former Beatle George Harrison, "My Sweet Lord", combines aspects of Indian music with traditional Western music using the slide guitar technique.
"Puck" Cylinder Player, England 1905
Many people cannot live a minute without music, and scientific studies show that music addiction is as real as drug or sex addiction. With the creation of "canned music", when Thomas Edison invented the phonograph with recordings on cylinders, which, when packaged and made available to the public, resembled canned food, music ceased to be a privilege of the upper classes of society. Having evolved from audio recordings on wax, shellac, vinyl, compact cassettes, and CDs to modern digital media storage and online access to music content, music is now available to all segments of society. It is a pleasure to watch our children and grandchildren trying to dance to music. And the managers of music television channels frantically search for (and find!) native talent among housewives and ordinary workers, who sometimes surpass the vocal range of opera divas and famous tenors and baritones!
Experiments with sounds. Chladni figures
To perform this experiment, we need an audio frequency generator or a computer with an application that generates sound signals, an audio amplifier, an electro-acoustic transducer (loudspeaker), a rigid metal plate, and some sand or other sand-like material (sugar, chalk, flour, semolina). We used a stainless steel tray; for simplicity, we did not connect the tray to a vibration generator but simply taped it to an ordinary loudspeaker with adhesive tape, placed above it, and changed the frequency from 250 Hz to about 1000 Hz. At certain frequencies, the plate vibrates in such a way that it creates areas with virtually no vibration; the sand remains in those areas, creating various patterns. As the frequency increases, the patterns become more complex. You can see the results below.
This article was written by Sergey Akishkin
Unit Converter articles were edited and illustrated by Anatoly Zolotkov
You may be interested in other calculators in the Acoustics — Sound group:
Sound Intensity Level Calculator
Sound Power and Sound Intensity at a Distance Calculator
Sound Frequency and Wavelength Calculator