One of the quietest places in New York City is a 520-cubic-foot room at the edge of the East Village. Tucked behind a heavy, suctioned door, this tiny space is decorated on all sides with wacky wedge-shaped fiberglass protrusions. Instead of a hardwood or carpeted floor, there is a thin metal mesh that visitors can walk on, under which are more fiberglass wedges. It’s a full anechoic chamber, and it’s the only one in the city. There are no secrets to be found inside, only silence. The word anechoic itself means “no echo.” It’s the perfect place to study the science of sound.
Standing inside the room feels like living within a massive pair of noise-canceling headphones. Speech sounds softer, rounder, quieter. Some people’s ears even pop when they enter the chamber. And while it seems silent to me, Melody Baglione, a professor of mechanical engineering at The Cooper Union’s Albert Nerken School of Engineering, proves that we’re not experiencing absolute silence. As we stay still and hushed, the sound level meter she’s holding drops to around 18 decibels (dBA). When we made the same type of measurement outside in the Vibration and Acoustics Laboratory, the ambient sound level was around 40 dBA. The meter measures sound pressure in decibels, weighted toward the frequencies that the human ear is most sensitive to.
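Those decibel figures follow the standard sound-pressure-level formula: 20 times the base-10 logarithm of the measured pressure over a 20-micropascal reference. Here is a minimal sketch in Python, with illustrative pressure values chosen to land near the levels above (A-weighting, the “A” in dBA, would add a frequency-dependent correction on top of this):

```python
import math

REF_PRESSURE_PA = 20e-6  # 20 micropascals, the standard reference in air

def spl_db(pressure_pa: float) -> float:
    """Unweighted sound pressure level in decibels relative to 20 uPa."""
    return 20 * math.log10(pressure_pa / REF_PRESSURE_PA)

# Illustrative pressures, not actual readings from the chamber:
print(round(spl_db(1.6e-4)))  # ~18 dB, near the chamber's measured floor
print(round(spl_db(2.0e-3)))  # 40 dB, near the ambient lab level
```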
Sound, as we experience it, is a pressure wave propagated by a vibrating object. The wave moves particles in surrounding mediums like air, water, or solid matter. When sound waves enter human ears, they pass as mechanical vibrations through a drum-like membrane to hair cells that then send electrical signals to the brain. Hearing loss happens when those hair cells get damaged. The hair cells tuned to the highest frequencies sit at the entrance to the cochlea, where every sound passes through, and they tend to go first.
“The higher frequencies have wavelengths that are shorter, so higher frequency sound waves interact with the wedges in the anechoic chamber more so than lower frequencies,” says Baglione. “The lower frequencies have larger wavelengths, and they are harder to absorb. You need a bigger room.” There’s human error, too: sometimes students drop things through the mesh floor, and whatever lands on the wedges below gives sound a surface to reflect off.
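The relationship Baglione describes is easy to check, since wavelength is just the speed of sound divided by frequency. A quick sketch, assuming room-temperature air and the common quarter-wavelength rule of thumb for absorber depth (a general acoustics heuristic, not a spec from this particular chamber):

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in meters for a tone of the given frequency."""
    return SPEED_OF_SOUND_M_S / frequency_hz

# Rule of thumb: a wedge absorbs well down to the frequency whose
# quarter-wavelength matches the wedge depth.
for f in (100, 500, 1000, 5000):
    lam = wavelength_m(f)
    print(f"{f} Hz: wavelength {lam:.2f} m, quarter-wave {lam / 4:.2f} m")
```

At 100 Hz the quarter-wavelength is nearly a meter, which is why absorbing low frequencies demands deeper wedges and a bigger room.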
[Related: A look inside the lab building mushroom computers]
Real estate in New York is at a premium, so this chamber falls on the smaller side compared to ones at Air Force bases or carmakers’ testing facilities. A wide variety of projects have undergone tests in The Cooper Union’s anechoic chamber. Students have characterized drone noise in order to figure out how to cancel it out. They’ve compared the sound quality of a traditional violin versus a 3D-printed one. They’ve tested an internal combustion engine to inform muffler designs, worked on sound localization for robots, and even evaluated virtual reality headsets. Baglione says that new proposals for using the chamber are always coming in.
How do anechoic chambers work?
Noise, reverberations, and echoes are all around, all the time. Engineering a space that can eliminate everything except the original sound requires a crafty use of materials and a deep knowledge of physics and geometry.
“Sound quality is often very subjective. Sound is as much a matter of perception and our experience and our expectation,” says Paul Wilford, a research director at Nokia Bell Labs. “In our work, largely through the anechoic chamber, we’ve learned that the sound we hear directly from a source may be actually at a lower level than the sound that’s coming from bouncing off of walls or being reverberated through a room.” The chamber he’s referring to is the one in Murray Hill, New Jersey, which was the first of its kind. Originally constructed in 1947, the room is currently undergoing renovations.
Nokia is a communications company, so sound is a big part of what it does. And it needed a way to quantify sound quality in order to design better microphones, speakers, and other devices. “What the anechoic chamber was conceived to be is a powerful environment, a measuring device, an acoustic tool where you can make high-quality, reliable, repeatable, acoustic measurements,” Wilford explains.
[Related: A Silent Isolation Room For Satellites]
Sound can be reflected, absorbed, or transmitted through a medium. By studying the physics of how sound propagates through the air, the researchers at Bell Labs came up with wedges made of foam-based fiberglass encapsulated in a wire mesh. The material’s impedance is matched to that of the incoming sound waves, so the wedges absorb the energy rather than reflect it. The sound waves hit these five-foot-deep wedges and get trapped. Wedges remain the standard design choice for anechoic chambers.
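At normal incidence, that matching behavior follows the textbook reflection-coefficient formula for a wave crossing a boundary between two media: R = (Z2 − Z1) / (Z2 + Z1). A minimal sketch with illustrative impedance values (not Bell Labs’ actual material figures):

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Pressure reflection coefficient for a wave in a medium with
    impedance z1 hitting a boundary with a medium of impedance z2."""
    return (z2 - z1) / (z2 + z1)

AIR = 415.0  # characteristic impedance of air, in rayls (approximate)

# A hard wall has a far higher impedance: nearly total reflection.
print(reflection_coefficient(AIR, 8e6))  # close to 1.0
# A matched absorber reflects almost nothing; the energy goes in.
print(reflection_coefficient(AIR, AIR))  # 0.0
```

The closer the absorber’s impedance sits to that of air, the smaller the reflected fraction, which is exactly what the wedge geometry is engineered to achieve.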
“What that means is that if you’re standing in that room, and there’s a source that’s emitting sound, all you hear is that direct source,” Wilford says. “In some sense, it’s the pure sound that you hear.” If there were two people in the room, and one of them turned around and spoke to the wall, then the other person would not be able to hear them. “There are other anechoic chambers around the world now, and because of these properties, these results are repeatable from room to room to room,” he adds.
What anechoic chambers are used for today
There are lots of ways to analyze sound in this type of echoless room. Scientists can characterize reverberation by timing how long it takes for a certain sound to decay. At Bell Labs, high-quality directional microphones are strategically placed in the room, along with localization equipment that pinpoints where those microphones are in 3D space. Researchers can use audio spectrum analyzers to look at the frequency response of these microphones, or move a speaker in an arc around a microphone to see how the sound changes as a function of position. They can also synchronize sounds and measure interference patterns.
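That decay-timing measurement is conventionally reported as a reverberation time such as RT60, the time it takes a sound’s level to fall by 60 decibels. A minimal sketch of how one might estimate it from a recorded decay curve (the data here is synthetic, and this is not Bell Labs’ actual tooling):

```python
import numpy as np

def rt60_from_decay(times_s: np.ndarray, levels_db: np.ndarray) -> float:
    """Fit a line to a decay curve (level in dB versus time) and
    extrapolate how long a 60 dB drop would take."""
    slope_db_per_s, _ = np.polyfit(times_s, levels_db, 1)
    return -60.0 / slope_db_per_s

# Synthetic decay: the level falls 30 dB over half a second.
t = np.linspace(0.0, 0.5, 50)
levels = -60.0 * t
print(rt60_from_decay(t, levels))  # 1.0 second
```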
[Related: This AI can harness sound to reveal the structure of unseen spaces]
Since its creation, the anechoic chamber has paved the way for many innovations. The electret microphone, which replaced older, clunkier condenser microphones, was invented at Bell Labs; its frequency response and overall performance were tested in the anechoic chamber. An AT&T product that improved voice quality on long-distance phone calls was made possible by understanding the sound reflections occurring in the network and developing the math needed to cancel out the noise signals.
Recently, Wilford notes, the chamber in New Jersey has been used a lot for work with digital twins, a tech strategy that aims to map the physical world in a virtual environment. Part of that faithful recreation needs to account for acoustics. The anechoic chamber can help researchers understand spatial audio, or how real sound exists in a given space and how it changes as you move through it. After the updates, the chamber will have better localization properties, which will allow researchers to figure out how to use sound to locate objects in IoT applications.
Before I visited the anechoic chamber at The Cooper Union, Wilford shared that being in the room “retunes your senses.” He has become increasingly aware of the properties of sound in the real world. He could close his eyes in a conference room and locate where a speaker was, and whether they were moving closer or farther away, just from how their voice changed. The background noises that his mind once blocked out suddenly became apparent.
After I stepped outside the lab in lower Manhattan, I noticed how voices bounced around in the metal elevator, and how the hum from the air conditioner changed pitch slightly as I turned my head. The buzz and chatter of background noise on the streets became brighter, louder, and clearer.