Public performances: music always too loud?

July 12, 2003  |  Edward Tufte

We recently fled in the middle of a disastrous Steve Earle/Jackson Browne concert because of the overwhelming and continuously loud amplification. Largely absent were variations in dynamic range, a major element in any communication. It was almost all continuously, hurtfully loud. It was impossible to hear, let alone understand, the words. Indeed, I’ve never been to a popular music concert where the sound was too soft. Aren’t there sound checks where the main performers walk around the room to get a sense of what the audience might be hearing? Driving home from the concert, we felt such relief at the richness and subtlety of the sound of the CD playing.

There is a thoughtful article on this matter by Lewis Segal of the Los Angeles Times who goes to many concerts (to be “endured rather than enjoyed” because of the over-amplification):

As a critic, you’re supposed to identify and highlight the most significant achievement of an event, and sometimes that responsibility involves acknowledging that music artistically outweighs dancing — as in a collaboration between cellist Yo-Yo Ma and choreographer Mark Morris at the Irvine Barclay Theatre in 1999. Conversely, critics and audiences sit through a lot of awful music in their hunt for great ballet performances. But what happens when sheer volume obliterates not only the dancing but also nearly all the qualities of the music itself?

Amplification in the theater has changed from its original mission: to allow audiences to hear what would otherwise be inaudible and to make it possible for the artists onstage to monitor themselves and one another.

Today, other priorities determine the sound levels we encounter. For example, midway through Viver Brasil’s first act, a call-and-response passage briefly featured the unamplified singing of six dancers onstage. Surprise: They could be heard perfectly unplugged. But hearing isn’t believing anymore, and the need to make the company’s music seem not merely natural but oh, wow, awesome left everyone else in Act 1 singing and playing into microphones — even a drum ensemble powerful enough to waken the ancient Orixa gods.

In our culture, many people live with music every waking moment, but it’s rarely live or acoustic. So when we do encounter live music, we expect it to match what we accept as the norm: the presence, detail and intensity of recordings. We’ve come to prefer processed music to the real thing.

Topics: 3-Star Threads, E.T., Science
Comments
  • Brett Cosor says:

    The tragedy is that there is a way to have a uniform sound pressure level throughout a venue. The problem is that manufacturers and sound companies don’t understand music from the same perspective humans do.

    Instead of creating one or two “piles” of speakers that have to make enough sound for everyone, make lots of smaller lumps. The problem is that the amount of acoustical energy required to fill a large venue is significant. If it all emanates from a single location, it will be unbearable for a significant radius (which is governed by a combination of the inverse square law and the effect of the echoes that hang in the air after the original sound has passed).

    It becomes obvious that the more speakers you have, the lower the sound level that has to come out of each one. The logical extension would be to give everyone headphones. The compromise should be to scatter speakers throughout the audience and then use digital delays to create a natural propagation of the sound through the space, as though it all came from the stage.
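
    A minimal numeric sketch of the two effects described above (round, assumed numbers; not from the original post): a point source loses about 6 dB per doubling of distance, and a distributed fill speaker needs a digital delay of distance divided by the speed of sound to seem to come from the stage.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def spl_at(distance_m, spl_at_1m):
    # Inverse square law for a point source: the level drops by
    # 20*log10(d) dB relative to the level measured at 1 m.
    return spl_at_1m - 20 * math.log10(distance_m)

def fill_delay_ms(distance_from_stage_m):
    # Digital delay for a fill speaker so its sound appears to
    # arrive from the stage rather than from the speaker itself.
    return 1000.0 * distance_from_stage_m / SPEED_OF_SOUND

# A single pile of speakers producing 127 dB at 1 m gives the back
# row (40 m) a reasonable 95 dB, but punishes the front row:
print(round(spl_at(2, 127)))     # ~121 dB at 2 m
print(round(spl_at(40, 127)))    # ~95 dB at 40 m

# A fill speaker 30 m from the stage wants roughly 87 ms of delay:
print(round(fill_delay_ms(30)))  # ~87 ms
```

    The first two figures show why a single source that is merely adequate at the back is unbearable up close; the delay keeps the distributed sound in step with what arrives from the stage.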

    If this were done, the effect of the echoes on the sound could be reduced to zero, increasing the intelligibility of the program. Let me state that no engineering will make someone like Joe Cocker exhibit the diction of Rex Harrison. Actually, I take that back; we could do that, too.

  • Kent Karnofski says:

    Regarding the sound check — the first problem is that a venue sounds completely different when empty (when the sound check occurs) than when full. So just trying to figure out what sounds good, let alone what sounds good everywhere in the room, is pretty difficult.

    The second problem is, the majority of rock culture is “louder is better”.

    Another problem is that most rock concerts are held in rooms not intended for music. I reckon if rock concerts were held in symphony halls, that the richness of sound could be quite different.

    I’ve been going to rock concerts for about twenty years, and I have to say the only reason to go is for the energy put off both by the performers and, perhaps more importantly, the crowd. And in that context, smaller venues (30-100 people) are far better.

    If you want sonic artistry with your pop music, you’ll have to settle for CDs in your living room, I’m afraid.

  • Jeff says:

    Even as a teenager, lo these many years ago, I would always bring a set of old headphones to concerts to protect my hearing. I would jockey them around to let in an acceptable balance of sound. Even now I bring plugs when I go.

    I’ll go you one better though: I resent unnecessary amplification that seems to be everywhere these days. A string quartet, an a capella vocal group, in a small venue often wouldn’t require amplification but receives it anyway. Sadly, all the lovely nuance of live acoustics is subsequently overwhelmed by normalized amplified noise. Sigh.

  • Adam says:

    I have been to a number of gigs where the sound was too soft. This is normally with the more “melodic” performers. The reason? The large numbers of people who seem to treat gigs like the pub. For example, at a Beth Orton gig in Cambridge I had to endure a group of people catching up on their previous two months’ absence from each other at loud volumes. I moved away, but there was plenty of chatter around the hall. Even the relatively expensive tickets didn’t deter these people. So the sound people do have to fight the general crowd noise that is not present during the sound check.

    One band (amongst a few others) who I have found to be consistently good at setting sound levels is Shellac. However at least two of their members work in recording studios as producers and sound engineers, so they probably go to extra efforts to get the levels correct. Of course, their music is fairly harsh anyway, so it may not be immediately apparent.

  • David Glover says:

    There are a couple of reasons (both alluded to above).

    First, most venues have poor acoustics (acoustics was a long way down the list of design criteria after seating capacity, cost and bar access). The ideal venue for an amplified concert would be acoustically “dead” (no reverberation) so the sound engineer has complete control. This would also avoid a problem in intelligibility that arises when the reverberant (indirect) sound field approaches the level of the direct sound field. In other words, your ears hear sound from the speakers, then the sound that’s bounced off the back wall, side wall, floor, ceiling, whatever. Building a very large, acoustically dead space isn’t really practical (flat floors are out!). So we end up with an acoustic mess….that inept sound engineers try to solve by increasing the volume, which of course makes it worse.

    This is the second problem – the engineers. While there are plenty of good ones, there are way too many bad ones. And amazingly, they’re working for big names. I’ve given up on concerts because I’m sick of hearing too-loud, unbalanced sound – usually from perfectly capable sound systems.

    Glad to get that off my chest.

  • Jim Linnehan says:

    For all the importance of acoustical design, there are certainly other matters that can make a difference in a listening experience. Here’s one such matter we can control (though I haven’t tried it myself):

    The late Peter Mitchell at Stereophile magazine related how, as a college student, he enhanced his enjoyment of concerts at Boston’s Symphony Hall. By placing cotton in his ears before leaving home, he didn’t have to be subjected to the frequent screech of trolley wheels while en route to the concert hall.

    He arrived at Symphony Hall with “fresh” ears and took out the cotton immediately before the music began. He reported that this trick provided him a better concert experience, to say the least. But for any concertgoer, even one with a less noisy trip, the cotton (or earplugs) still ought to give the ears a decent rest before they are put to very good use.

    What would it do for music appreciation if that trick were used by everyone, especially those who can’t resist plenty of conversation during performances!

  • David Bishop says:

    Let me add to David Glover’s statement about the annoying distortion caused by reflective surfaces in concert halls by suggesting a good set of foam earplugs. These dampen a majority of the reflected sound, allowing the listener to hear the source much more clearly and with greater fidelity.

    Sad to say, most popular concerts today are mic’d to give an excellent signal to the mixer/recorder (that ominously large booth, usually in the middle of the floor) and the performers themselves, rather than to the audience; the notable exception being the Grateful Dead who have perfected sound amplification for large audiences.

  • Mark L. Hineline says:

    This reply is mere speculation, backed up by one good anecdote.

    I strongly suspect that many musicians, especially guitarists out front, have lost substantial hearing and complain to their sound engineers that the band “isn’t loud enough.” I don’t know what role the fold-back speakers on stage play in this. The anecdote: in an interview, Gregg Allman stated that the guitarists in the Allman Brothers Band are all “deaf as bricks.” He, on the other hand, goes on stage with an earplug in his right ear (his organ is always on the left side of the stage).

    If audiences aren’t complaining that the music is too loud or lacking nuance (and I suspect that, by and large, they aren’t) while the musicians out front are complaining that the sound level is too low, this would account for matters as they are — including a consistently loud dynamic.

  • David Glover says:

    While in some cases Mark’s speculation is right, the stage foldback (or monitor) system is independent of the main sound system and creates an intentionally different mix (often a separate one for each member of the band).

    The level is often extremely high to get control of the mix (eg if you have a double Marshall stack right next to you, the vocals in the foldback have to be loud enough to get above the guitar level).

    This does mean the house system (the audience’s) has to be loud enough to get above any ‘spill’ from the foldback system.

    To counter this (and preserve their hearing) many bands now use “in-ear monitors” rather than foldback speakers. Yet their house sound is often still over-loud.

  • Edward Tufte says:

    Perhaps the hearing loss by members of the band begins at the high end, accounting for the extremely hot treble at many concerts. Or is the sound simply on the edge of feeding back because they are maxing out volume? Can band/audience differences in hearing be adjusted by different mixes for what members of the band hear and what the audience hears? This still leaves the paradox of what the people running the sound board hear and why they have chosen to produce bad sound — is their hearing also impaired? There are, of course, a lot of other variables running around loose here.

    At the Oakdale in Wallingford, CT, a concert by Bruce Springsteen (no band, just Bruce) a few years ago sounded excellent; but Bob Dylan (with band) and Steve Earle (with band) were largely a chronic blare. (And for 30 years John Prine and Joan Baez have always sounded good, regardless of venue!) So it is possible to get competent sound in that house, although again there are plenty of uncontrolled variables in these anecdotes.

    In my sound checks, I always listen to the same songs (“Desolation Row,” “On the Waterfront”) to have some standard across different rooms, to get a feel for the current (albeit empty) room and sound system, and to adjust the EQ appropriately. More importantly, one of our roadies, Kate McDonnell, is a very talented folk singer who does many gigs each year and so she knows how to evaluate and produce decent sound in live performances.

    Compared to band concerts, my work is a very simple situation (house amps, house speakers, voice only, with control only over a small board and mic choice). At least one generalization is possible from my experience mainly in hotel ballrooms and convention centers: newer rooms have better sound than older rooms. A notable exception is The Comedy Connection in Boston, an older room with a beer-drenched floor, which had excellent sound at least for voice, although the air conditioning roared — and where I had the memorable experience of indirectly opening for Frank Santos, the R-rated comedian (Frank even comped me for his show!).

    It is harder to be funny in a room with a very high ceiling — because the all-important start-up laughter from a small part of the audience has little contagion effect with the rest of the audience. The start-up laughter at a remark takes several seconds to go up to the high ceiling and come back down, too faint and too late to reach the yet-to-be amused members of the audience. The Comedy Connection has a low ceiling for good reason.

    A poor sound system will wear out my voice in just a few hours, as I (unconsciously) attempt to fix the sound by altering the pitch, pace, and volume of my voice.

    In adjusting the EQ, it seems to be a good idea to make fairly small simple moves on the board, evaluate what happens, and gradually increment to something acceptable.

    Also, the less one needs to rely on the house AV, the better. Bring your own equipment.

  • Edward Tufte says:

    A Google search on “musicians deafness loud” turns up some harrowing anecdotes and audiology data. Earplugs help. Short-term exposure causes short-term damage to hearing; long-term exposure, forever damage.

    Would a visual artist stare at the sun?

  • Mark Hineline says:

    E.T., you have identified an interesting problem that may have no solution, or at best a partial solution. Boomers (interesting double-entendre) are the first generation to have been weaned on really loud amplified music; it is still difficult to imagine people in their 60s and 70s at rock concerts, but it is beginning to happen.

    I have a complementary complaint. The only rock concerts I regularly attend are Allman Brothers Band, and I have been doing so for a long time. I am the one in 1000 whose favorite parts of an ABB concert are the drum solos, which cross an enormous dynamic range — much of which I cannot hear because the subtler moments are drowned out by enthusiastic audience expression and participation (I’ve put it as euphemistically as I could).

    I complained about this on the ABB website, and was informed that Radiohead audiences are nearly silent throughout that band’s concerts. I am not sure what that means.

    But a website, rating the sound characteristics — especially the dynamic range — of musicians in concert might be helpful, especially in the case of acts like Jackson Browne, who must depend (substantially?) on return business from older and/or more discerning fans.

    A good analogue is the case of Macintosh G4 tower cooling fans. These computers were dubbed “wind tunnels” by some users, and a website formed (a) to provide advice on how to quiet them, and (b) to put pressure on Apple to correct the situation. The effort seems to have been successful, and probably contributed to the design of the G5 towers.

    If you have space on your server, you might think about setting up a website devoted to rating and improving performance acoustics. Certainly, the software you use to maintain the “ask E.T.” forum could be used for such a purpose. You might then want to be sure that the site gets publicized in magazines, such as “Mix,” for sound engineers.

  • Alison Fraser says:

    For the most part, concert sound has to do with economics and personal connections, in my experience. When I worked in the business, there was a rule of thumb that the budget for lighting at a given venue was always much larger than that for sound. That has never made sense to me. “Loud” compensates for quality in the minds of concert promoters.

    I have been to an incredible live performance. The sound system was designed for quality above all else and paid for by the company I worked for — not the event organizers. But there isn’t a lot of incentive for event organizers to spend more on sound (as in lots of smaller piles of speakers). How many more tickets would they sell if they had better sound? Lots of concerts sell out with lousy sound. When the sound is lousy, do you blame the artist? You likely blame the venue, or its management. If there’s an act you really want to see, you are inclined to go where they are playing, even if you know the sound is not great at that venue.

  • Bill Murphy says:

    As a researcher in hearing loss and occupational noise exposure, I find E.T.’s analogy of a visual artist staring at the sun to be one of the best I’ve heard. One that we have used to motivate people is “if the ear bled a drop of blood every time it is exposed to too much noise… people would be running to their audiologist to get it fixed.”

    I had an interaction with a sound engineer setting up a performance. I expressed my concern over the high sound levels. He reassured me that his group had found that if the levels started low and then gradually increased, the congregation was not aware of the high levels of exposure. I promptly replied that in my business, this is called a temporary threshold shift. Get enough TTS and it will become permanent.

  • Edward Tufte says:

    My friend Ken Jacob at Bose has helped develop a new concert sound system for live performances in smaller venues. Each performer has separate speakers and EQ.

    I’m going to try this out in a couple of weeks in my one-day course in Arlington to see if we can improve on the house-sound there. It would be our dream eventually to be free of the house-sound system in any venue.

  • Ken Jacob says:

    We started a research project here at Bose 10 years ago to understand why there are so many complaints about amplified live music. Musicians are very, very unhappy. They say they can’t hear themselves or each other, and have no idea what their audiences are experiencing. They are very concerned about hearing loss because of dangerously high sound levels. Audience members aren’t happy either. They say lyrics are often unintelligible and instrument sounds missing or garbled. Many complain bitterly about excessive sound levels.

    For five years, we worked to understand the root cause of these complaints. One problem we found results from electronic mixing of voices and instruments. When you hear multiple sound sources coming from a single direction (the nearest PA speaker if you’re in the audience, and a monitor if you’re a musician), it’s like a conference call with lots of people talking at the same time: it’s very difficult to hear anything with clarity. We know from many psychoacoustic studies that you can hear much better in a multi-source environment when the sounds of those sources come from different directions. It is not a coincidence that this is exactly the case in an all-acoustic music performance (e.g. a string quartet).

    Another thing we found is that in an amplified performance, it’s difficult to enjoy the profound benefits of using your eyes and ears together. To give a sense of the importance of this, consider one major study showing that when trying to hear one source in a multi-source environment, using eyes and ears together is equivalent to turning down the volume of the competing sources by 15 dB — more than half the loudness.
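
    As a rough check on that 15 dB figure (using the common rule of thumb, an assumption here, that perceived loudness doubles for every 10 dB increase):

```python
def loudness_factor(delta_db):
    # Rule of thumb: perceived loudness doubles per +10 dB, so a
    # level change of delta_db scales loudness by 2**(delta_db/10).
    return 2 ** (delta_db / 10)

print(round(loudness_factor(-15), 2))  # 0.35
```

    A 15 dB reduction leaves the competing sources sounding about a third as loud — more than the straight halving that a 10 dB cut alone would give, consistent with the “more than half the loudness” description.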

    The problem is that in an amplified performance, the sound does not come from the direction of the player. Instead of being able to automatically turn your head to face the sound and employ eyes and ears together, you hear the sound coming from the PA speaker (or for the musician, from his or her monitor) and get distracted visually hunting for the player who has caught your interest. Note that in an all-acoustic performance, the benefits of sight/sound integration are fully intact.

    Finally, consider the fact that the musicians do not control their sound. Instead it is controlled by the person operating the mixing console. In the case of the monitor mixes, major adjustments are being made by someone (the sound operator) who isn’t a member of the musical ensemble and who can’t hear what they’re doing because the sound operator is in the audience.

    In the case of the PA mix for the audience, major adjustments are being made that the people who ARE in the musical group can’t hear at all.

    Imagine a painter whose every mark is altered by someone else, and who furthermore is forced to look away as those alterations are made — imagine how hard it would be to make fine art under these circumstances — and you will have some idea of the handicap that musicians who play with amplification face every time they play.

    These problems all conspire to produce the most commonly heard thing from musicians that use amplification: I can’t hear myself. What they do to compensate is play louder. Because each musician is reacting this way, the sound levels get higher and higher, to the point that players need earplugs — an obvious sign that something is terribly wrong.

    The other five years of the research project involved investigating what might be done to address these fundamental problems.

  • Edward Tufte says:

    The point about how seeing affects hearing is very interesting — seeing the source of the sound helps us sort out one sound in a cluster of sounds. The public address system, by combining all sounds into one flow, dilutes information about the spatial location of the sound.

    In larger venues is there a transmission-time problem for syncing sound and light? This is going to depend on where the PA speakers are located relative to the audience I suppose.

  • Edward Tufte says:

    Among many other things, the escalation of volume during the course of a concert is described in a chapter from Karl Kuenning’s Roadie: A True Story.

  • Niels Olson says:

    A cautionary note on the sight-sound connection: you can perceive an 85 or 90 dB signal in a 100 dB environment, but you’re still getting the 100 dB worth of hearing damage.

    When a pressure wave crosses the tympanic membrane, the malleus, incus, and stapes (the ossicles, or bones of the middle ear) transmit the force to the oval window of the cochlea, which is about 25 times smaller than the tympanic membrane, so the pressure entering the cochlea through the oval window is about 25 times higher. Hearing loss is generally thought to be the result of broken cilia in the hair cells of the organ of Corti, which runs the length of the cochlea. Pressure in the cochlea moves the bodies of the hair cells relative to the tectorial membrane. The hair cells extend their cilia into the tectorial membrane, so the movement causes bending, which depolarizes the hair cells. Each hair cell is innervated by a neuron of the auditory nerve, so the depolarization creates an action potential, which travels to the brain for interpretation.

    Excess pressure can literally rip the cilia from the hair cells. No cilia, no action potential, no hearing. The basilar membrane varies in thickness and stiffness over its length, so different regions are responsible for receiving different frequencies. The high-frequency region happens to break first. The ear does have a couple of safety devices: the stapedius and tensor tympani muscles reflexively dampen the vibration of the ossicles. But 100 dB is still 100 dB. The brain can perceive an 85 dB signal in a 100 dB environment, but the 100 dB of damage still occurs.
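
    The arithmetic behind those numbers can be sketched as follows (the 25:1 area ratio is the figure from the comment; the 85 dBA / 3-dB-exchange exposure limit is the NIOSH recommendation, added here only for scale):

```python
import math

def pressure_gain_db(area_ratio):
    # The same force concentrated onto 1/25th the area gives 25x
    # the pressure: 20*log10(25), or about 28 dB of mechanical gain
    # at the oval window.
    return 20 * math.log10(area_ratio)

def permissible_hours(level_dba, rel=85.0, exchange_db=3.0):
    # NIOSH-style exposure limit: 8 h at 85 dBA, with the allowed
    # time halved for every `exchange_db` dB above that.
    return 8.0 / 2 ** ((level_dba - rel) / exchange_db)

print(round(pressure_gain_db(25)))  # ~28 dB
print(permissible_hours(100))       # 0.25 h, i.e. 15 minutes at 100 dB
```

    By this reckoning a 100 dB concert exhausts a whole day’s recommended noise dose in about a quarter of an hour.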

  • Estes says:

    Rock concerts are loud because, according to its philosophy, rock and roll is loud. Musicians by and large know how loud it is. They want it loud. It’s all part of the deal and a point of pride. This ain’t American Bandstand, and the moniker Rolling Thunder Revue wasn’t inspired by Dylan’s subtle wordplay.

    In fact, some bands made whole careers (the Who are the classic example) out of being loud. Somewhere along the line it’s the same reason some guys (and girls) who ride bikes prefer to do so without helmets on their heads or baffles in the pipes. It’s about being loud and rude and out there.

    Of course, there are exceptions to these generalizations (although I wouldn’t think Steve Earle would be one). I think the volume, the complete abuse of sound, is the fashion of it all. Costello’s lyrics wouldn’t be the same without the loud, biting snarl behind them.

  • PJ Doland says:

    Go to a good audiologist and get fitted for a pair of Sensaphonics ear plugs. They are custom molded for a tight fit and they will come with two sets of interchangeable attenuators that are designed to cut frequencies evenly.

  • Edward Tufte says:

    For my one-day course, I’ve been having very good results from the Bose Cylindrical Radiator system. The idea is that each performer in the band has his/her own independent system, so the sound is heard directly from the performer’s own system (and thus, visually, direct from the performer). For a single voice, my case, the value is in eliminating (1) all the speakers in the hotel ballroom ceiling, which separate the voice from the performer, (2) the 4-second reverb time of some hotel ballrooms, and (3) the hotel A/V prices and skills (one-day rental of a piece of equipment = 20% to 25% of the brand-new purchase price).

  • Seth Joseph Weine says:

    I agree that much of the loudness is a cultural issue: people want it loud, or management at least thinks that they do.

    This is a major problem in many, many NYC restaurants: almost all have music, and it’s usually rather loud. When I’m in a restaurant with company, I don’t want to compete with the music to have a conversation. And when I’m alone, I like to read, which very loud music does not support.

    I’ve asked other restaurant goers if they have trouble hearing what their companions are saying against typical NYC restaurant music volume levels. One revealing answer was: “Well, I’ve developed a way of periodically injecting into the conversation innocuous phrases like ‘Really?’ and ‘Oh yes!’, even though I don’t often hear exactly what my friend is saying”.

    I was in Rome about two years ago. No restaurant I ate in had this problem. I only passed one place that had aggressively loud music: the sign said it was “The New York Bar”.

    My conclusion is that the loudness of our society is as much a cultural issue as anything else.

  • Edward Tufte says:

    Two pieces from the Chicago Tribune:

    The decibel debate: Sound and the symphony

    James R. Oestreich, January 11, 2004

    The problem of hearing loss, stemming both from the player’s own instrument and from those of others, is a real one among classical musicians worldwide. Hearing loss may manifest itself as a decreased ability to perceive high frequencies or slight changes in pitch. It may also extend to tinnitus, a buzzing or ringing in the ear. But as pervasive as hearing loss may be, it’s rarely discussed. Performers are reluctant to mention it, or any other work-related ailment, for fear of losing their standing in the field or their employability.

    “What is beyond dispute is that musicians suffer more damage than age-matched, unselected controls, and brass and woodwind [players] suffer significantly more than the strings,” Alison Wright Reid, an occupational safety expert, wrote in “A Sound Ear,” a widely cited study published in 1999 for the Association of British Orchestras. “Because of the tiny sample sizes, it is difficult to be sure of the percussion, but those players with hearing damage are typically worse than the brass.”

    The problem has grown over the centuries, as composers seeking to expand their expressive possibilities have pushed for ever greater contrast in dynamics; whereas a simple “piano” or “forte” would suffice for Bach, when he bothered to use dynamic markings at all, Tchaikovsky progressed to the likes of pppppp (pianissississississimo, but who’s counting?) and ffff in his “Pathetique” Symphony. And instruments have been modernized and fitted out to carry better in larger halls. Whether composers or instrument-makers led the way at any particular moment, the direction in classical music has been the same as that in rock and musical theater: louder.

    Danger: Music zone

    Louise Roug, September 8, 2004

    An often-cited study by Canadian audiologist Marshall Chasin measured hearing loss among rock musicians and found that about 30 percent were afflicted in some way. Among their classical music counterparts, the figure was 43 percent. Yet while noise-induced hearing impairment is a well-known issue in the rock world, long highlighted in educational campaigns featuring The Who’s Pete Townshend and rapper Missy Elliott, the discomfort from loudness suffered by classical musicians is generally kept hush-hush.

    Again, would an artist or painter stare at the sun?

  • Edward Tufte says:

    This thread opened with a report on the problematic sound at a Steve Earle/Jackson Browne concert.

    Steve Earle’s recent CD, The Revolution Starts Now, is superb, powerful, raw. Nothing like sound directed to one listener rather than 5000.

    On this CD, a beautiful song, “Comin’ Around,” is a duet by Steve Earle and Emmylou Harris. At the very high end, played fairly loud, EH’s sound is etched and, at times, ringing, like the sound of a wet fingertip on the edge of a wine glass. This effect also occurs on her recording of “Every Grain of Sand” from 10 years ago. We first noticed the wet-finger-wine-glass effect while testing the new Bose Cylindrical Radiator speakers for our one-day course. At first we thought it was the speakers, but it turned out that these speakers revealed information on the CD that we hadn’t heard before. A folk-music informant later suggested that this was in fact a sound-engineering feature (by-product, bug) generated by those producing the sound of Emmylou Harris.

  • Dan Spock says:

    I think the observation that hearing impairment is the culprit is dead on. In my rock club experiences, the sound engineer is typically the deafest person in the room. The engineers have subjected themselves to more loud music over the years than even the band members since many of them are “house” engineers or, if touring with bands, are out in front of the band night after night, soaking up the decibels. The ubiquity of “treble creep” is overcompensation caused by hearing that is literally notched out by damage in the higher tonal ranges. This explains the excruciating sharpness so common in live rock audio mixes these days.

    I think another factor here is key, the specious practice of amplifying the drum kit. I think this got started when rock bands began playing arenas, but it then became fashionable to do this in even the most intimate of clubs. For anything but the most expansive club, the typical rock drummer is already playing at ear-splitting levels without any amplification whatsoever. Amping it just makes it worse, and a byproduct is that all the other instruments have to turn up to compete. The resulting muddying effect compounds when the voices get utterly drowned out. This can’t be helped when the drums are miked because if the volume on the voice gets cranked up to a level high enough to compete with the drums, it starts to feed back through the monitors. My advice to performers is to stop miking those dang drums!

  • Edward Tufte says:

    Recent readings on the issue:

    A new book by Clive Young on “live sound secrets of the top tour engineers” is called Crank It Up. The title accurately summarizes the secret. Here is a perverse escalation effect: a monitor engineer (for Tool, a band that prides itself on its loudness) describes a method attempting to save the band’s hearing:

    “They wear the flat-response plugs. I’ve got some of the ear plugs the guys wear, and there is something to be said for them. You cut down a lot of the reverb from the arena, though, so you have to add it back in from the wedges. It means you run everything a bit harder . . .” (p. 185)

    And then, an interesting article on Yo-Yo Ma appears in today’s New York Times: Seth Schiesel, “A Virtuoso and His Technology“.

    Some quotes, this by Yo-Yo Ma:

    “Now, the thing that is really hard to do, that I think may be one of the hardest things to do, is to be in one place and somewhere else at the same time, which means to be empathetic to another space other than your own. What I learned from hearing recordings from, let’s say, a mike that was placed at 20 feet versus 60 feet away is it makes the tempo sound different. It makes what you think may have been the right speed to do something — it may be wrong by the time you go 60 feet away. You can only really know that when there’s evidence. And a tape recorder actually gives you that evidence.”

    And this by Emanuel Ax:

    “There’s just the physical ability to play the instrument; there’s just no one better and probably no one as good,” said Emanuel Ax, the pianist, who has been close friends with Mr. Ma since the early 1970’s. “But one of the things that really distinguishes him from a lot of performers is that he really feels a connection with the audience and audiences are very important to him. You see people who are fantastic communicators, but they may not be at the very top of musical ability. And you see great players who are maybe kind of withdrawn and they commune with the music and the audience is welcome to watch, but they’re not as interested in communicating and being performers, as it were. And then you have Yo-Yo.”

    I will be conducting additional empirical research on the Steve Earle matter at Toad’s Place in a few weeks.

  • Edward Tufte says:

    Notes from Shure on hearing conservation.

  • Niels Olson says:

    I found I had to see many images of the inner ear before I could explain to myself why the structure works the way it does.

    inner ear

    inner ear

  • Alex Merz says:

    Just returned from a wonderful Tom Waits show at the Paramount Theater in Seattle. Wonderful, clear PA, not too loud, great dynamics — even in the back corner where we sat. Another win for clarity, even when Waits was singing in his “ripping canvas” mode.

  • Edward Tufte says:

    Clive Young’s detailed comment is very thoughtful and it was good of Clive to contribute to the thread. Maybe we should change the title of the thread to “Why is the music usually [rather than always] too loud?” In general, in the book, the dB readings are rather stunning although there are in fact several very thoughtful sound engineers who are alert to the issues of deafening sound.

    Perhaps part of the solution in the long run will be in-ear monitors for performers, which will calm down the on-stage sound and perhaps therefore the audience sound. The Shure promo publication (which comes out three times a year) on music concerts and sound systems has been pushing hard on the virtues of in-ear monitors for performers. At least with in-ear monitors, performers will be more likely to retain their hearing, which in turn will help the audience retain theirs.

  • Edward Tufte says:

    Alas, the remarkable Tom Waits was apparently deafening at a recent London concert. A very good review (5 stars for TW) in the Guardian describes the concert as “amplified to deafening volume.”

  • Alex Merz says:

    “Live, amplified to deafening volume,” says the Guardian of Tom Waits. Well, in Seattle last month the PA was probably the best I’ve ever heard. It was anything but deafening. Not even close to as loud as nearly any rock show I’ve seen in the last 20 years. I always bring, and generally have to use, earplugs (these days, Etymotic ER-20s). They were not necessary at this show. It was noisy, yes, in the sense that cacophony has become a significant element of Waits’s music ever since Rain Dogs; but not loud in the sense of sound pressure. Certainly not anything like seeing Primus (the bass player and drummer of Primus have at various times been in Waits’s bands), Anthrax, and Public Enemy at the Salem Armory. THAT was a loud show.

  • Gary Harmon says:

    One of the most frustrating results of the over-amplification of rock concerts is the spillover to Broadway. Pleasant memories of past Broadway musical experiences in which the performers depended on their natural voice projection were blown away for me recently by a performance of Mamma Mia! in which the amplified sound was cranked up to an uncomfortable level. The rest of the (younger) audience, though, seemed to enjoy and expect such an experience.

    As far as unwanted music in public places, perhaps what we need is a universal remote such as the amusing device recently announced for turning off TVs in public places.

  • Edward Tufte says:

    Indicating something about the concert sound discussed above, this photograph of Tom Waits at the Hammersmith Apollo in London appeared in the December 13, 2004 issue of Pollstar (a magazine about the music business).

     image1

  • Kent Karnofski says:

    That’s pretty funny.

    I have seen many performers use megaphones, most notably the World Inferno Friendship Society last summer in NYC. Actually, I have not noticed this to cause an increase in volume; rather, megaphones held up in front of microphones can offer a wonderful sound effect, altering the voice to something scary in a not-human sort of way.

    It also adds an interesting visual as part of the stage show!

  • Edward Tufte says:

    A thoughtful and practical article by Mark Frink, from Mix, “Why Louder Sounds Better“:

    All too often we hear complaints that the sound is too loud at concerts. Further, widespread enactment of sound control regulations often requires concert sound engineers to limit SPLs to mandated levels. However, despite regulations and a growing awareness on the part of concert engineers that high SPLs are dangerous, few seem able to turn it down. Though concerts are louder than ever, and many in the music business suffer some hearing loss, there seem to be hidden forces at work that encourage engineers to turn up the volume. We are painting ourselves into a corner.

    There are many political and career forces that encourage engineers to turn it up; the guitarist’s girlfriend and the band’s manager spring to mind. Plus, the physiological and emotional impact of loud sound simply gets everyone’s heart beating faster. Bad venue acoustics or a terrible mix position often tempt a mixer to turn it up (not always a successful tactic). But there are also subtle mechanisms of human audio perception that tend to make the console’s faders “upwards sticky” and encourage higher concert levels.

    The ear is not a linear device; its response varies with frequency. Hearing sensitivity peaks in the high-mids and falls off at the extremes, and the hearing curve also changes with volume, becoming slightly flatter at higher SPLs. In order to maintain a perceived balance between highs and lows (and mids and low-mids, and so on), a “flat” playback system may need to be EQ’d differently for different levels of reproduction. The “loudness” control on your stereo attempts to correct this problem by applying a progressive EQ that compensates for the well-known “equal loudness contours” of human sound perception. Because our ears become less sensitive to bass and treble at lower levels, a loudness control adds bass and treble when the hi-fi system is idling.

    Some research indicates that the ear is more sensitive to these relative EQ changes than to the volume change itself. As a result, music or other familiar audio sources that sound correctly equalized at one level may sound a little “off” at a different volume. When the level of a concert sound mix changes by 10 dB, it can sound as if an invisible hand is reaching over to the P.A.’s system EQ and changing the curve by a few dB in many places.
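    The frequency dependence Frink describes can be made concrete with the standard IEC 61672 A-weighting curve, which approximates the ear's reduced sensitivity to low and very high frequencies at moderate listening levels. A minimal sketch (the function name is mine, not from the article):

```python
import math

def a_weight_db(f):
    """IEC 61672 A-weighting, in dB relative to 1 kHz.

    Negative values mean the ear is less sensitive at that frequency
    than at 1 kHz; the curve is roughly flat through the midrange and
    rolls off steeply in the bass.
    """
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    # The +2.00 dB offset normalizes the curve to 0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.00
```

At 100 Hz the weighting is roughly -19 dB, which is why a mix balanced at concert volume sounds bass-light when turned down.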

    It’s no surprise that our ears are especially sensitive in the octave around 4 kHz to begin with. But as a concert gets louder, the ear gets even more sensitive there. This corresponds to the resonant frequency of the ear canal, and is the frequency range where hearing overload and damage occurs first. Ears that have sustained damage often experience discomfort at these frequencies earlier. (continue reading)
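    The 4 kHz sensitivity peak Frink mentions corresponds to the quarter-wave resonance of the ear canal, a tube open at one end and effectively closed at the eardrum. A back-of-the-envelope check, assuming a typical adult canal length of about 2.5 cm:

```python
def quarter_wave_resonance(length_m, speed_of_sound=343.0):
    """Fundamental resonance of a tube closed at one end: f = c / (4L).

    The ear canal behaves approximately like such a tube, which is why
    hearing is most sensitive (and most easily damaged) near this band.
    """
    return speed_of_sound / (4.0 * length_m)

# A 2.5 cm canal resonates at 343 / 0.1 = 3430 Hz, close to the
# 4 kHz region cited in the article.
f0 = quarter_wave_resonance(0.025)
```

The 2.5 cm length is a textbook approximation; individual ear canals vary, which shifts the resonance somewhat.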

  • Ken Jacob says:

    If an alien landed on earth and discovered that:

    a) making and enjoying music was as fundamental to being human as breathing, and responsible for some of their most intense and broadest intellectual and emotional responses,

    b) that the most important sense organs for this practice are two passageways through which sound passes into the human brain,

    c) and that more and more, humans are stuffing special earplugs into those passages to BLOCK THE MUSIC…

    …they would surely conclude that WE were the aliens.

    For heaven’s sake, if this isn’t a sign that something is terribly wrong, I for one do not know what is.

  • Edward Tufte says:

    James Fenton, the poet and critic, on the art of voice projection in the Guardian.

  • Mike Prager says:

    The two-word solution: Classical music!

    In classical music and opera concerts, electronic amplification is not [generally] used. Thus, volume levels are reasonable, and one can hear the sound of real voices and real instruments.

    Amplification has become a plague. It is often used when not necessary, and when used is almost always too loud, distorted, and equalized for “punch,” not natural sound. Please do wear earplugs; that ringing in your ears afterwards is not a benign symptom.

  • Edward Tufte says:

    Attending classical music concerts may be all right for the ears, but playing classical music is another matter. A summary of hearing problems of symphony orchestra musicians by Dr. Timothy C. Hain can be found here. The site shows a very good animation of how ears work:

    hearing animation

    Here is the relevant material by Dr. Hain about music-making and hearing:

    Musicians and the prevention of hearing loss:

    Musical instruments can generate considerable sound and thus can also cause hearing loss. The most damaging sounds are in the high frequencies. Violins and violas can be sufficiently loud to cause permanent hearing loss, typically worse in the left ear, which is nearer the instrument. Unlike other instrumentalists, these musicians depend crucially on hearing the high-frequency harmonics. Mutes can be used while practicing to reduce long-term exposure. (Karlsson, Lundquist et al. 1983; Ostri, Eller et al. 1989; Royster, Royster et al. 1991; Sataloff 1991; Palin 1994; Teie 1998; Obeling and Poulsen 1999; Hoppmann 2001; Kahari, Axelsson et al. 2001). In a study of rock/jazz musicians, almost 3/4 had a hearing disorder, with hearing loss, hyperacusis and tinnitus being the most common maladies. (Kaharit, Zachau et al. 2003)

    There are a number of strategies that can be used to reduce the chance of noise injury from other instrumentalists. Musicians’ ear plugs are generally “flat,” so that bass and treble notes are not relatively favored, which would distort perception. Nevertheless, a “vented” ear plug can be used to tune the ear cavity to low frequencies, which are less damaging. Drummers should use musicians’ ear plugs such as the ER-25; guitarists and vocalists can use the less attenuating ER-15. Too much ear protection can result in overplaying, and not enough protection can result in hearing loss.

    Plexiglass baffles can be used to reduce the noise from other instruments. These are particularly relevant for drummers’ high-hat cymbals; drums and brass can be a particular problem. Ear monitors are small in-the-ear devices that look like hearing aids and can be used to electronically protect hearing while allowing musicians to hear themselves. Acoustic monitors are stethoscope-like devices that block sound from others in the group but allow the instrumentalist to hear their own instrument.

    Loudspeakers produce both high- and low-frequency sounds. High frequencies tend to emanate in almost a straight line, while low frequencies radiate in nearly all directions; thus, standing beside a high-frequency source may provide some protection. Humming just prior to, and through, a loud noise such as a cymbal crash or rim shot may also provide some protection: small protective muscles in the ear contract naturally when we sing or hum, and thus humming may protect from other noises.

    • Chasin M. Music appreciation 101. Woodwinds, large stringed instruments, violins and violas. The Hearing Review, Jan 2000, 46.
    • Chasin M. Music Appreciation 101. Bass players and drummers and guitar and rock/blues vocalists.
    • Hoppmann, R. A. (2001). “Instrumental musicians’ hazards.” Occup Med 16(4): 619-31, iv-v.
    • Kahari, K. R., A. Axelsson, et al. (2001). “Hearing assessment of classical orchestral musicians.” Scand Audiol 30(1): 13-23.
    • Kaharit, K., G. Zachau, et al. (2003). “Assessment of hearing and hearing disorders in rock/jazz musicians.” Int J Audiol 42(5): 279-88.
    • Karlsson, K., P. G. Lundquist, et al. (1983). “The hearing of symphony orchestra musicians.” Scand Audiol 12(4): 257-64.
    • Obeling, L. and T. Poulsen (1999). “Hearing ability in Danish symphony orchestra musicians.” Noise Health 1(2): 43-49.
    • Ostri, B., N. Eller, et al. (1989). “Hearing impairment in orchestral musicians.” Scand Audiol 18(4): 243-9.
    • Palin, S. L. (1994). “Does classical music damage the hearing of musicians? A review of the literature.” Occup Med (Lond) 44(3): 130-6.
    • Royster, J. D., L. H. Royster, et al. (1991). “Sound exposures and hearing thresholds of symphony orchestra musicians.” J Acoust Soc Am 89(6): 2793-803.
    • Sataloff, R. T. (1991). “Hearing loss in musicians.” Am J Otol 12(2): 122-7.
    • Teie, P. U. (1998). “Noise-induced hearing loss and symphony orchestra musicians: risk factors, effects, and management.” Md Med J 47(1): 13-8.

  • Drew Knight says:

    An article found on Australian ABC news (also covered in the Guardian here):

    Music festival introduces ‘silent disco’

    Britain’s Glastonbury music festival will feature a “silent disco” this year in an effort to sidestep a noise curfew, festival organiser Michael Eavis said on Tuesday.

    Instead of DJs blasting their sounds through speakers, thousands of revellers partying past midnight at the open-air music event will be given wire-free headphones with volume controls that directly tune in to a sound system.

    “It’s a unique way for people to party without offending those who want to sleep or disturbing the villagers nearby, who have complained about the noise,” said Mr Eavis, who founded the festival in 1970 on his farm in western England.

    “We’ve been looking at a solution like this for ages. The system was developed by a Dutch firm and successfully used at parties in the Netherlands and we hope it works here too,” he told Reuters.

    The idea is one of several introduced in recent years to improve relations with local villagers.

    A giant “super-fence” was erected around the site in 2002 to cut down on crime and foil gatecrashers.

    Glastonbury, an annual three-day festival famous for its mud and mayhem, has attracted some of the biggest names in music over the years, including REM, Radiohead and James Brown.

    Mr Eavis would not spill the beans on who would headline this year’s event, scheduled for June 24-26.

    “I can’t say who’s going top the bill this year, but the act is as big and will be as good as Paul McCartney was last year,” he said.

    – Reuters

  • Edward Tufte says:

    George Varga, San Diego Union Tribune:

    “Too many concerts are aural nightmares: Sounding off on the bad sound of music”

    “We do a soundcheck every day, no matter what,” Don Henley of the Eagles told me last year. “And we have the same house (sound) mixer we’ve had for many, many years now. The problems at concerts are because bands play too loud, period.”

  • Jim Linnehan says:

    From the St. Louis Business Journal:

    Energizer, Mick Fleetwood promote hearing-loss prevention

    Energizer Holdings is teaming up with rocker Mick Fleetwood to promote hearing-loss prevention and treatment in baby boomers, the company said Monday.

    “Energizer is producing a concert hosted by [Mick Fleetwood of Fleetwood Mac] at the Rock and Roll Hall of Fame and Museum in Cleveland, in which the audience will listen through portable FM radio headsets, rather than speakers or amplifiers.”

  • Edward Tufte says:

    Everything in moderation . . . .

    Digital music craze stores up ear trouble in iPod fanatics

    The iPod — like all digital music players — is compact, stores huge amounts of music and can play for many hours. As a result, more people are listening for longer to their favourite tracks.

    But audiologists believe tens of thousands of young people are causing serious damage to themselves, and are likely to suffer tinnitus and loss of hearing in later life. The experts say MP3 players should be designed to prevent people playing music above 90 decibels, about two-thirds of the maximum volume of a typical device.
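    The 90-decibel figure can be put in context with the NIOSH recommended exposure limit: 85 dBA for 8 hours, with permissible listening time halving for every 3 dB above that (the “3 dB exchange rate”). A small sketch of that rule, with my own function name:

```python
def safe_exposure_hours(level_db, ref_level=85.0, ref_hours=8.0,
                        exchange_rate=3.0):
    """NIOSH-style permissible exposure time.

    8 hours at 85 dBA; the allowance halves for every `exchange_rate`
    dB above the reference level, and doubles for every step below it.
    """
    return ref_hours / (2.0 ** ((level_db - ref_level) / exchange_rate))

# At 100 dB the allowance is 8 / 2**5 = 0.25 hours, i.e. 15 minutes;
# a full evening at player volume sails far past the daily dose.
limit_at_100 = safe_exposure_hours(100.0)
```

The 85 dBA / 3 dB parameters are the NIOSH criterion; OSHA uses a more permissive 90 dBA reference with a 5 dB exchange rate, so the safe times depend on which standard one adopts.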

  • Kent Karnofski says:

    I was just pricing amps for my guitar, here is what I found at a local shop’s website:

    Peavey Special 130:

    • 130 Watts at 4 Ohm
    • 1×12″ Combo
    • Incredibly Loud!
    • in a Small Package

    It’s loud, they’re excited about it, and that’s about all they want to say. Just made me laugh and think of this forum topic. More evidence of the ‘loud culture’ discussed above!

  • Edward Tufte says:

    Perhaps music in performance appears to sound worse in the last 25 years because of the high-quality, intensely produced sound now available electronically in our earphones. Subtle thoughts on these matters are found in an excellent New Yorker essay by Alex Ross:

    The Record Effect: How technology has transformed the sound of music

    Ninety-nine years ago, John Philip Sousa predicted that recordings would lead to the demise of music. The phonograph, he warned, would erode the finer instincts of the ear, end amateur playing and singing, and put professional musicians out of work. “The time is coming when no one will be ready to submit himself to the ennobling discipline of learning music,” he wrote. “Everyone will have their ready made or ready pirated music in their cupboards.” Something is irretrievably lost when we are no longer in the presence of bodies making music, Sousa said. “The nightingale’s song is delightful because the nightingale herself gives it forth.”

    Before you dismiss Sousa as a nutty old codger, you might ponder how much has changed in the past hundred years. Music has achieved onrushing omnipresence in our world: millions of hours of its history are available on disk; rivers of digital melody flow on the Internet; MP3 players with ten thousand songs can be tucked in a back pocket or a purse. Yet, for most of us, music is no longer something we do ourselves, or even watch other people doing in front of us. It has become a radically virtual medium, an art without a face. In the future, Sousa’s ghost might say, reproduction will replace production entirely. Zombified listeners will shuffle through the archives of the past, and new music will consist of rearrangements of the old.

    I discovered much of my favorite music through LPs and CDs, and I am not about to join the party of Luddite lament. Modern urban environments are often so chaotic, soulless, or ugly that I’m grateful for the humanizing touch of electronics. But I want to be aware of technology’s effects, positive and negative. For music to remain vital, recordings have to exist in balance with live performance, and, these days, live performance is by far the smaller part of the equation. Perhaps we tell ourselves that we listen to CDs in order to get to know the music better, or to supplement what we get from concerts and shows. But, honestly, a lot of us don’t go to hear live music that often. Work leaves us depleted. Tickets are too expensive. Concert halls are stultifying. Rock clubs are full of kids who make us feel ancient. It’s just so much easier to curl up in the comfy chair with a Beethoven quartet or Billie Holiday. But would Beethoven or Billie ever have existed if people had always listened to music the way we listen now? (continue reading)

  • CJ Alverson says:

    A few notes on this:

    1) Music Source as a Determinant of Volume. I wouldn’t expect that Emmylou Harris, Mazzy Star or the Pierces would engender high levels of amplification. However, I would expect the engineers for any event involving Dinosaur Jr. or Sleater-Kinney to pretty much use the sound system to simulate the shock wave of a low-yield nuclear blast. Certain music is designed to be played loud.

    2) Venue Size. I avoid larger venues, precisely because of the inept, usually front-heavy placement of speakers. By default, we’re blasted by front-dominant arrays of speakers, which yields lousy acoustics in larger rooms. One has a chance with smaller venues.

    3) Venue Engineering. Few sites (besides acoustic studios) are actually engineered with acoustics in mind. Ironically, most of what we want to hear reproduced in these coarse venues are sounds produced with care in carefully designed spaces; of course, live music frequently fails that test. I prefer smaller venues, like Variety Playhouse in Atlanta, or Workplay in Birmingham, which was specifically designed for sound, unlike many venues.

    4) Distance and Initial Strength of Source. Sound diminishes rapidly with distance. When a venue relies on a bank of front speakers, one ends up over-amplifying in order to reach the “cheap seats.” An expensive and complicated solution is to spatially distribute the speakers throughout the venue. Geeeeeeee, someone should develop that concept; “Surround Sound” would be a great name for it. With spatially distributed speakers saturating the venue, modest levels of amplification would create superior sound. The drawback is that the spatial design would be venue-specific, and sound check would be more demanding.
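    The falloff in point 4 can be quantified: for a point source in a free field, sound pressure level drops about 6 dB for each doubling of distance (the inverse-square law). A rough sketch, ignoring reverberation and air absorption:

```python
import math

def spl_at_distance(spl_ref_db, ref_dist_m, dist_m):
    """Free-field SPL at `dist_m`, given a reference SPL measured at
    `ref_dist_m` from a point source: each doubling of distance costs
    20*log10(2), about 6 dB."""
    return spl_ref_db - 20.0 * math.log10(dist_m / ref_dist_m)

# 100 dB at 1 m from the stack falls to 80 dB at 10 m; to deliver
# 100 dB to seats 30+ m away, the front rows must take far more.
back_row = spl_at_distance(100.0, 1.0, 10.0)
```

Real venues are reverberant, so levels fall off more slowly than this free-field idealization, but the front-versus-back disparity it illustrates is exactly what distributed speakers are meant to fix.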

  • Clive Young says:

    Most large venues these days happen to be primarily sports-oriented — say, a football stadium or a basketball arena. These venues are actually designed to be loud, reverberant places so that cheering sports fans sound as loud as possible. It theoretically psyches up the players and gets the fans excited, too. It plays havoc with concerts, however.

    As for meters, almost all outdoor venues have local laws now that restrict their volume levels, where a show has to be within certain dB limits. These get tested throughout a show by someone not associated with the tour, due to legalities. Engineers, too, have assistant engineers who walk the venue, and report back on where the sound needs work. I wouldn’t be surprised if sound levels are occasionally part of the list of things to check for.

  • Edward Tufte says:

    As I venture out into new activities, I’m astonished at the intensity of corporate and government micro-regulation of those activities, although regulation by markets can also be arbitrary and obnoxious. Having the Feds wandering around rock concerts with sound meters is a bit much, even for the Nanny State. Maybe the regulators could economize by equipping the venue’s visitors from the FBI and DEA with sound meters.

    Thank you Kindly Contributor Clive Young, who is a world-class expert on live concert sound and the author of Crank it Up: Live Sound Secrets of the Top Tour Engineers, a book I’ve read cover to cover.

  • Niels Olson says:

    I believe most of the sound laws were put in place at the behest of the people who live in the vicinity of these stadiums, which may seem a little old-grumpish until one contemplates how many people may live within three miles of an inner-city stadium. I pick three miles only because that’s the only number I’ve ever heard mentioned: the Kansas City Star reported that as the furthest call to the police during a concert I attended in 1993 at Arrowhead Stadium.

  • Matso Limtiaco says:

    As Hain points out in his article on symphony musicians and hearing problems, the causes of hearing problems aren’t restricted to amplified music. In my previous life working with a university marching band, it was common knowledge that you had to wear earplugs if you stood in front of the band for any amount of time – especially for an entire Saturday afternoon at the football game. The retired marching band director had lost almost all hearing in his left ear over his 30+ years of work.

    I perform semi-regularly with jazz groups of different sizes, and I’ve noticed that it’s quite difficult to get some jazz drummers to play softly. I’m not a particularly loud player, and I don’t like “overdriving” my horn to compete with the drummer in an acoustic setting. Lately it seems like it’s been easier to get our 16-piece big band to play more quietly, and with more subtlety, than some drummers I’ve worked with!

  • Ginny Nichols says:

    When I began attending popular music concerts 30-something years ago, they were held in the traditional venues of the time. For example, in Oklahoma City, they were held at the Civic Center Music Hall, home to the local symphony orchestra. The sound was phenomenal. Hearing Pink Floyd (upon the release of Meddle) was like a 2-hour vividly intense dream.

    These memorable experiences were short-lived. The concert scene rapidly degenerated, as large numbers of people in altered states came for the social scene. The night someone set the carpet in flames was the last rock concert I saw in that civilized venue. Rock concerts moved into fieldhouses and sports arenas, and volume was added to substitute for the execrable acoustics. To add insult to virtual injury, concert tickets went from $5-$10 to the ridiculous prices we’ve been gouged with for the last 20 years. And how about those venues in which no actual seating is provided? I guess that’s what you call a “festival.”

    I am encouraged by the increasing frequency with which I see musicians willing to play in smaller auditoriums and amphitheaters where they and the audience can have an enjoyable experience together.

  • Paul Iacono says:

    “Aren’t there sound checks where the main performers walk around the room to get a sense of what the audience might be hearing?”

    They do — I actually “met” Mick Jagger once in the crowd of his own concert. The Stones’ warm-up band was playing to a packed house and I thought I should hit the men’s room before the boys came on. As I pressed through the crowd I neared the audio control booth which sat on a platform in the middle of the coliseum floor. Some big guys pushed past me in the other direction and I suddenly found myself in the center of a group of very discreet body guards, face-to-face with Mick, his head sort of down, pretending to be just another long-haired kid at a Stones concert. They all flowed around and past me and escorted Mick through a curtain around the bottom of the platform. I stopped and watched, and a moment later he emerged up top, listened for a minute with the control tech, made a few adjustments and was gone. As far as I could tell, no one else saw him. I was impressed with his courage, though. It could have been quite dangerous.

  • Edward Tufte says:

    Here is another issue in concert sound, described in the New York Times article below. As a public speaker, I experience the hearing-aid effect about one gig in ten. At least I think so, for a very, very high-pitched sound appears when the audience is in the room and never appears during rehearsal. Once I made the polite request of the audience described in the article below and the tone went away. The tone (around the frequency of a television-set tone, 18,000 Hz?) has also been attributed to a ringing halogen bulb or to a harmonic from an alarm system (where the tone is always on, and to set the alarm, a receiver is activated to detect motion, as in a museum alarm system). There was a room in the Princeton University Art Gallery that I found intolerable because of the piercing high-pitched tone of the alarm system. But both the light-bulb and alarm-system tones should also appear during rehearsal, and they don’t, so it’s probably hearing-aid feedback. Any advice?

    Pardon Me, Sir, but Your Auricular Instrument Is Flat

    Every concert hall and opera house has found a way to remind audience members to please turn off all cellphones, pagers and beepers before a performance. At Avery Fisher Hall a message is projected on the back wall of the stage. At the Metropolitan Opera it is posted on each seat’s Met Titles screen, the first thing to greet people as they settle into their seats. Broadway theaters use amusing recorded announcements that list an assortment of mood disturbers, including crackling candy wrappers and crinkling plastic shopping bags.

    Yet one disruptive sound remains unmentionable: the hearing aid. Some performance halls make euphemistic mention of “other electronic devices” in their requests. Mostly, though, everyone has been too discreet to single out the hearing aid.

    That is until recently at Carnegie Hall when Simon Rattle bravely mentioned the unmentionable. He was conducting the third of three triumphant performances by the Berlin Philharmonic, and had just completed an Apollonian account of the stirring first movement of Schubert’s Symphony No. 9 in C. Throughout at least the second half of that movement, many listeners must have heard the high-pitched whistle of feedback from a hearing aid. As Sir Simon took a short pause to ready the orchestra for the slow second movement, the high ringing sound became pervasive.

    Intent on remedying the problem, Sir Simon turned to the audience and explained that the sound seemed to be coming from a hearing aid. Then, with characteristic British courtesy, he asked, if I remember correctly, “Could someone please help that person with this problem?” Or some similarly tactful words.

    Hearing aids have intruded on many rewarding performances. That telltale sound ruined an impassioned performance by the soprano Karita Mattila of Leonore’s great Act I aria from Beethoven’s “Fidelio” at the Met a few seasons ago. A similar sustained high pitch pervaded Alicia de Larrocha’s genial account of Mozart’s Piano Concerto in A (K. 488) this summer, her farewell performance with the Mostly Mozart Festival. Only once before, though, have I seen an artist stop a performance to request that the disturbance be attended to. In April 2001, as the soprano Dawn Upshaw began the first work on a Carnegie Hall recital, she gestured for her accompanist to stop playing just moments into the performance and, turning to the audience, graciously explained that she heard a “whistling sound.” As she waited anxiously, the sound was turned off. Before she started the song again, Ms. Upshaw said, “That’s for all of us.”

    Though what had caused that sound was obvious, Ms. Upshaw refrained from identifying it. This is a delicate matter. Users of hearing aids at performances have an unfortunate impairment and are still, to their credit, trying to enjoy live music. Moreover, a person wearing a hearing aid often cannot hear the whistling that his device sometimes produces. It is a pesky sound to track down for others in the hall. Those high-pitched sustained tones throw you off. You could be sitting just seats away from a malfunctioning hearing aid and think that the whistling is coming from somewhere up in the balcony.

    The sustained high pitch comes from feedback caused by an improperly fitted hearing aid or a buildup of wax or fluid in the ear. Sometimes turning the volume too high can also cause the problem. This summer at the Mostly Mozart Festival a scintillating account of a Haydn piano concerto by Leif Ove Andsnes and the Norwegian Chamber Orchestra was nearly ruined by the buzzlike whistling of a hearing aid that occurred during only the louder passages.

    It is easy to get angry at obtuse concertgoers who hack away, not even trying to muffle their coughs, or at those who fiddle with plastic bags on their laps. But it would be mean-spirited to get upset at hard-of-hearing music lovers. Classical music tends to attract a disproportionate number of older listeners, who would be more likely to have this problem.

    Still, malfunctioning hearing aids are more intrusive than cellphones, which, however mood-destroying, are at least temporary. A hearing aid can whistle through an entire act of an opera. And think of the performers who are trying to produce notes on pitch while the high-pitched feedback distracts them from concentrating.

    Classical music has had to contend with the hurtful perception of the concert hall or the opera house as sacred temple for which one must get dressed up, sit rigidly still and not make a sound. A performance of a Wagner opera at the Bayreuth Festival in Germany can have the air of a religious ritual during which no one would dare to cough or stir.

    Yet in a noisy, hooked-up, fast-paced and overamplified world, the concert hall and the opera house are about the only places left to experience natural acoustics, unamplified voices, the radiant sound of a traditional orchestra. This precious environment is worth protecting.

    What to do? Audiences must try not to disturb the rapt atmosphere that ideally should accompany classical music performances. Patrons with hearing aids must be especially sensitive to the problems these essential devices can cause. More artists and institutions should follow Sir Simon’s example and deal with the problem straight on.

    Those sitting near people with malfunctioning hearing aids should politely point out the problem. Some, with low tolerance for intrusive sounds, are already practiced at this protocol. If you are sitting near Row L at the Met and before the performance a middle-aged guy with thinning brown hair asks you to please place your shopping bag under your seat, it will probably be yours truly.

  • Clive Conway says:

    As a communications person who is also a musician and performer of classical and rock music and who has worked as a sound engineer for many years, I guess I am well placed to comment on this 🙂

    Concert sound is pretty poor the world over. Engineers with no real understanding of sound, physics, acoustics or, for that matter, music, mix bands who know very little about music themselves for audiences who are often drunk, high and noisy in acoustics not designed for anything except flexibility and income-generation.

    Most engineers don’t really know what music is supposed to sound like and so mix ‘additively’: they get the volume about right, then add effects and equalisation till it sounds right to them, which lifts the resulting volume considerably. The other common method is to turn the level up until it starts to feed back, then dial back the frequency at which the feedback occurs. The preferred method is to set a reasonable listening volume first, then apply equalisation subtractively to get it sounding right.
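    Clive’s subtractive approach, and the feedback-hunting method he criticizes, both come down to cutting a narrow frequency band rather than pushing everything else up. A minimal sketch of such a cut, using the standard RBJ audio-EQ biquad notch formulas (the 1 kHz howl frequency and the Q of 5 are illustrative values, not any particular console’s settings):

```python
import math

def notch_biquad(f0, fs, q):
    """RBJ-cookbook notch filter coefficients, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def filter_signal(b, a, x):
    """Direct-form I biquad filter over a list of samples."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0]*s + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

fs = 48_000
# a sustained 1 kHz "howl", as if the system were feeding back
howl = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(4800)]
b, a = notch_biquad(1000, fs, q=5)
cleaned = filter_signal(b, a, howl)

# compare steady-state levels, ignoring the filter's start-up transient
print(rms(howl[2400:]), rms(cleaned[2400:]))
```

    In practice a parametric cut of a few dB at moderate Q is gentler than a full notch, but the arithmetic is the same: attenuate the offending band instead of raising everything else.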

    Precisely because of the sound level in front of house, whenever I have been on stage in a rock performance I have had difficulty hearing the foldback, which makes it very difficult to sing in tune, or play in tune for that matter (I play fretless bass).

    Regular rock concert-goers probably all have considerable ear damage by now, and wouldn’t be satisfied if the music were turned down, so I don’t think we ‘purists’ will ever be satisfied. The only real solution is to increase the already considerable cost of concerts so that young people wouldn’t be able to afford them. May I suggest classical music, in the interim?

  • Michael 'Bink' Knowles says:

    I mix live sound and am in a pretty good position to comment on this subject.

    Indeed, there are sound mixing folks who mix too loudly and/or crush the dynamic range of the music. There are others who handle the material very respectfully and work as hard as they can toward attaining for their audience members a dynamic, exciting and pleasing experience. In the same vein there are venue and concert sound systems that aren’t designed or installed or maintained well, just as there are venue and concert sound systems that sound excellent and provide balanced, even coverage to all seats. When you (as a layperson) go to a concert you kind of roll the dice and get what you get. In fact, you can have the combination of an excellent sound system paired with someone in the driver’s seat running it far too loudly, or a poor sound system run by a caring, knowledgeable professional. (Of the two mismatches, I’d much rather have the latter! A good person at the controls makes the bigger difference. . . )

    By the way, in contrast to a pile of speakers blasting the people in the front rows, there are ways to hang sound systems that provide even coverage front to back. If you hang the speakers up front and tilt them down the right way, you get a similar amount of sound level across the whole seating area. A successful implementation of this makes it unnecessary to have smaller speaker zones dotted around the venue.

    If we assume the sound system is good and has the right coverage, you can often hear the results of two different people controlling things when you hear two bands in a row. There can be drastic changes from opening band to headliner. Some of those changes will be how well one band’s members interact and how cleanly their harmony lines interlace. And how the different instruments sound, of course. But a frightfully squashed or loud mix versus an exciting and dynamic one is all related to the guy or gal doing the mixing.

    As Clive says, the band’s artistic direction to the sound mixing person may include unyielding demands for a too-loud sound level or an unpleasing, unbalanced mix of the various musicians in the group such as “more guitar out front” — one famous ’80s era rock singer lets her husband direct the sound mixing person to boost the lead guitar and her voice and the kick drum and bury the bass, keys, rhythm guitar, and the rest of the drums until they are inaudible. You can see the guy hitting the cymbals as hard as he can but you can’t hear them at all. . . Trying to counterbalance such ridiculous demands may work to a degree and may also land you on the street looking for a new job. It’s a very frustrating tightwire act.

    One thing that separates the great soundguys and gals from the rest is their ability to mix in such a way that conveys all the excitement you came looking for in a live performance situation and gives you goosebumps when the music swells. Among many of the elements required for this is to make sure your loudest times are balanced across the audio spectrum and not spiking one or two frequency areas harder than the rest. Another is the introduction of a small amount of synthetic distortion added to parts of the mix to make it sound like your speakers are nearing their limits of ability (though they will likely have quite a bit more ‘go’ left.) This trick will help satisfy the crowd members that arrived expecting a bigger experience than the last time without exposing them to damaging sound levels. It also helps to satisfy the kind of artistic director that might be breathing down your neck to increase the volume.
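    The “synthetic distortion” trick Bink mentions is often done with a waveshaper: a transfer curve that leaves quiet samples almost untouched but rounds off loud peaks, mimicking a speaker near its limits. A sketch using a tanh shaper (the drive value is an illustrative choice, not Bink’s actual settings):

```python
import math

def soft_clip(sample, drive=2.0):
    """tanh waveshaper, normalized for unity gain on quiet signals.
    Small inputs pass almost unchanged; loud peaks are squeezed
    toward a ceiling of tanh(drive)/drive (about 0.48 here)."""
    return math.tanh(drive * sample) / drive

print(soft_clip(0.1))   # quiet passage: essentially unchanged
print(soft_clip(0.95))  # loud peak: rounded off well below its input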

    As far as everything being the same amount of loud, it’s all in the way you use the sound gear. Compression is your friend in a live situation, but it’s also the culprit of a squashed mix. On the good side, compression is useful to make sure the very softest musical sounds made on stage are carried to the last row, though at a level appropriate to the source. Compression can easily be overdone, though. The best mixes are assembled by people who carefully adjust the amount of compression everywhere it’s used.

    Contrary to some of the experiences shared here, I use a sound pressure level meter at mix position and walk around making sure of coverage at all seats. I wear earplugs on planes and during other soundchecks and shows to save my ears for the main mix. I use a professional sound analysis program coupled to an accurate test microphone to help me make sure my mix has no nasty spikes sticking out above my target sound level. When I’m allowed to, I mix in such a way that represents the music as cleanly as possible. I get the impression that more and more people in my line of work are taking these kinds of steps to improve their mixes. I hope you all get one of these people at your next concert!

  • Don says:

    Outside of RF interference or an electronic device causing noise, the ringing in auditoriums is usually caused by several factors. The most obvious is feedback and its elimination — most speakers/singers know to compensate for it immediately. Less obvious is what is known as a “mode” in the room — a single frequency band or set of frequencies that resonate in the room at much higher levels due to poor acoustic response of the space. Even less obvious is the effect of an average generated tone by a crowd and the resulting overtones/harmonics; when a chord is tempered correctly by either a chorus or instruments, additional notes will sound above and below the performed notes.

    Here are some very basic steps I take when supporting voice indoors, regardless of speaker array (which in my case is almost always stereo, to support stereo mixes). Note that I use a dbx DriveRack PA or some other type of real-time audio analysis.

    1. Maximise gain structure throughout the system to support the greatest amount of dynamic headroom possible.

    2. Pink noise the room with an RTA mic to equalize the speakers and place them in phase and to do any time correction needed. Check for any modes and correct with a parametric eq.

    At this point the system is flat or responsive to the desired eq curve.

    3. With the channel eq flat, the mic turned up to just below feedback, and no vocal material, I use a noise gate to eliminate any hiss or line noise picked up from the ambient space.

    4. First mic check with the vocalist is done to set the compression needed on the mic. Compression is usually applied with fast attack, high ratio of compression, and pretty quick release; the threshold is specific to the vocalist (some whisper into the mic, others scream).

    5. Second mic check with the vocalist is done with analysis to check for any modes generated by the nature of the performer’s voice reacting to the acoustics of the space. At this point the frequencies that need to be tamed are “ducked out” with a parametric eq — good mixing boards have one on each channel. This is a crucial step in supporting voice indoors.

    6. Third and final mic check with the vocalist is to eq the channel for the desired performance characteristics and to apply any reverb or chorus desired. A good number of singers/rappers want some mid and a bit of reverb along with a flat mic without effects so they can A/B compare. I usually run a dry mic with the same ducked out eq settings for this purpose.

    After these steps I then continue working with the visiting sound engineer to ensure they can achieve the mix they desire which can include non-obvious things like automatic feedback elimination with the dbx gear.

    Does everyone follow these basic steps of sound reinforcement and use the technology available, or do they just plug it in and hope for the best acoustical response? Multiply the above steps by the number of vocalists. Unfortunately, few bands or individuals can block out an entire afternoon before the show to ensure that these basic steps are followed, yet the house engineer is expected to pull good response out of a hat.
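    Step 3 of the checklist above, the noise gate, lends itself to a small sketch. Here it is modeled on per-block signal levels in dB; the thresholds are illustrative, and the small gap between the open and close thresholds (hysteresis) is a common refinement that keeps the gate from chattering on borderline hiss:

```python
import math

def noise_gate(levels_db, open_db=-50.0, close_db=-55.0):
    """Mute the channel until the signal exceeds the 'open' threshold;
    stay open until it falls below the lower 'close' threshold.
    Returns the gated level sequence (-inf means fully muted)."""
    gate_open = False
    out = []
    for lvl in levels_db:
        if gate_open and lvl < close_db:
            gate_open = False
        elif not gate_open and lvl > open_db:
            gate_open = True
        out.append(lvl if gate_open else -math.inf)
    return out

# line hiss around -60 dB stays muted; speech at -30 dB passes through
print(noise_gate([-60.0, -30.0, -52.0, -60.0, -30.0]))
```

    A real gate would also apply attack and release ramps so the muting itself is inaudible, but the threshold logic is the heart of it.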

  • Edward Tufte says:

    Kindly Contributor Don should visit us on the road sometime to show how to perfect one voice on 2 Bose cylindrical radiators. I notice, as the speaker, a big difference between rehearsal and then a room filled with people. I have noticed how I’ll shift my voice to EQ a room once the talk has started, and strain my voice by the end of 6 hours as a result of this intuitive self-EQ process. I am, by the way, very happy in hotel ballrooms with the cylindrical radiators, although we do not have room-to-room variation under control, nor empty/filled room variation under control.

  • Don says:

    The Bose system as you are running it is a mono channel single point source of sound. Because of the coverage pattern of the speakers there are several things to note about running a system like this.

    Contrary to stereo systems, where spreading the speakers apart creates a wide stereo field and a potential increase in perceived volume and coverage, spreading a pair of wide-coverage mono sources apart will only benefit the far left and right close to the stage. Better would be to place each speaker as close together in the center as possible, splaying the speakers outward only very slightly to achieve left/right coverage for any situations where the room is very wide. The goal is to achieve audio coupling with those two towers so they act as one big speaker with wide coverage, and to mitigate any variance in speaker spread from room to room. A funny thing happens when you audio couple two speakers: the perceived volume more than doubles. Because these speakers were designed to provide more even front-to-back coverage than traditional speakers, coupling them should provide a stronger, more manageable signal to everyone.

    Spreading two mono sources can actually introduce more problems than it solves: excess side reverb, possible frequency cancellation center stage when one speaker opposes the other, and a mismatched arrival of the same mono signal resulting in artificially introduced reverb — the very cocktail party effect that the Bose engineers were trying to mitigate by using a single point source philosophy.

    Keep the speakers as close to each other at center stage as possible and keep that distance the same from room to room and you will notice a drop in room to room variance.
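    There is simple level arithmetic behind the “more than double” effect Don describes. When the two cabinets are close enough to couple, their pressures add coherently, a 20·log10(2) ≈ 6 dB gain; widely spaced, effectively uncorrelated sources add only in power, about 3 dB. A sketch:

```python
import math

def db_gain_coherent(n):
    """n identical speakers close enough to couple: pressures add,
    so the level rises by 20*log10(n) dB."""
    return 20 * math.log10(n)

def db_gain_uncorrelated(n):
    """n widely spaced (uncorrelated) sources: only powers add,
    so the level rises by 10*log10(n) dB."""
    return 10 * math.log10(n)

print(round(db_gain_coherent(2), 1))      # coupled pair: ~6 dB louder
print(round(db_gain_uncorrelated(2), 1))  # spread pair:  ~3 dB louder
```

    Since roughly 10 dB is perceived as a doubling of loudness, neither pair literally doubles the volume on its own, but the coupled pair clearly gets you much further for the same amplifier power.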

  • Don says:

    One problem you might face is rooms that are very deep, where it can be hard to hear in the rear of the room and where the audio travels so far as to create a delay problem against hard reflective surfaces at the back. There is nothing quite like hearing the room repeat everything you have to say back to you half a second after you have said it. I would experiment with a low-profile mono fill speaker on the center aisle in these situations, perhaps something like the wide-coverage Bose 901 on a short stand? Start at about room center, moving rearward, taking note of what happens both on stage and in the back quarter of the room. Listen for the correct level balance between main and fill systems.

  • Edward Tufte says:

    Very interesting. We use 2 with one subwoofer in order to easily fill a large room. When the speakers are more toward the center, I sound odd to myself and have more feedback sensitivity as I walk around a lot in front of the speakers. To reduce feedback, I do use a really helpful Madonna-type wireless lav made by a Dutch manufacturer. But we’ll try the speakers a bit closer together. I should say that we’re very happy with the sound, and our test music (Desolation Row) sounds excellent as I walk around the room during set-up. Hotel ballrooms are sure a mixed lot, some with mirrored or acoustically hard walls.

    Thank you for your good comments.

  • Don says:

    You should not have to change your natural speaking voice in pitch or volume for a room, even if you whisper. You should be able to speak as if your audience is one person standing 3 feet in front of you.

    I would place either the RNC or RNLA between your mic setup and the inputs of your Bose system. Compression is your friend: it makes audio signals manageable prior to the main volume control and eq.

    How it works is that a threshold is set, for example -12 dB. Any signal above this threshold has gain reduction applied. The amount is expressed as a ratio, for example 2:1: for every 2 dB the input rises above -12 dB, only 1 dB of extra output gain is produced. This has the effect of reducing the dynamic range, but, as you will notice, the final stage is to reintroduce the gain that was taken away.

    The effect is that whispers become more audible and any loud spikes in volume are mitigated. The original program material contains less dynamic variance but the gain can be increased making everything more defined.

    This results in being able to turn it up without hitting the upper performance ceiling of your system with regard to distortion and clipping. Most folks feel forced to turn down the mic to avoid feedback, popping, and spikes. Compression with a high ratio lets you keep the input volume up in a safe way and gives you greater control over dynamics. The perceptual change is that the speaker’s voice often seems closer to the listener in physical space.

    Ever hear a recording that sounded like the singer was whispering in your ear? Compression was used.
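    Don’s numbers can be worked through directly. A static compressor curve with a -12 dB threshold and a 2:1 ratio, plus make-up gain (the 5 dB figure here is an assumed value for illustration, not one Don specifies), lifts the whisper and tames the spike at the same time:

```python
def compress_db(level_db, threshold_db=-12.0, ratio=2.0, makeup_db=5.0):
    """Static compressor transfer curve, all levels in dB.
    Above the threshold, every `ratio` dB of input yields 1 dB of
    output; make-up gain then restores the overall level."""
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

print(compress_db(-40.0))  # whisper: -40 dB in -> -35 dB out (more audible)
print(compress_db(0.0))    # shout:     0 dB in ->  -1 dB out (spike tamed)
```

    The 40 dB input spread shrinks to 34 dB, which is Don’s “less dynamic variance,” while the make-up gain moves the whole signal closer to the listener.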

  • Niels Olson says:

    What Don describes with compression is exactly what the tensor tympani and stapedius muscles do by changing tension on the tiny bones of the middle ear. The nerves that supply these two muscles respond to increasing volume by increasing nervous impulses to the muscles, which in turn contract, introducing more tension in the middle ear linkage, thus requiring more energy for the same displacement at the oval window, the entrance of the sound wave into the sound-sensing inner ear. The idea of doing that electronically before the sound wave ever enters the ear sounds like an excellent idea! That you’re doing it before it ever gets to the eq also sounds like the ear’s design, where this modulation occurs before the wave is detected by the hair cells of the spiral nerve. At that point the nervous system starts modulating what you think you hear.

  • Don says:

    Thanks Niels, your added detail really validates my feelings regarding sound support. The science of audio DSP continues to advance and we come closer to creating the sound outside the ear that is naturally produced within the ear. I use Ultrasone headphones which claim to use the physics of the entire ear rather than sending audio directly down the ear canal, it is a key element in their S-logic technology which provides a very expansive natural surround sound type response. They are a headphone company using technology to enable listeners to perceive the same loudness at lower pressure levels.

    Similarly I feel the science of raytracing is advancing in ways which enable us to create similar processing before it reaches the eye. In reviewing some of the discussion and illustrations regarding mapped images I can’t help but wonder if there are any true three dimensional representations we can see.

    Apparently we have come a long way from the “Help me Obi-wan Kenobi” days of projected holograms with some current examples exhibiting touchscreen like interactivity.

  • Niels Olson says:

    Don,

    I’ve dissected the middle ear three times in the last two months and so far I’ve been the only person amongst two medical school classes (Tulane and Texas A&M) able to demonstrate the stapes, the smallest of the bones in the middle ear. I also majored in physics and spent more of my life than I care to admit wearing headphones in the combat information centers of Navy ships. Ultrasone’s claim about using the entire ear to deliver sound is a marketing claim with no basis in fact, for several reasons: the shape of the outer ear (the auricle), the dynamics of hearing, the electronic requirements involved, and market competition.

    The shape of the auricle is too variable amongst people, and I doubt those headphones or any others have the equipment on board to sample that tight acoustic environment and then manipulate the incoming electronic waveform to adjust. In addition, the ears, that is, the organs that transmit sound to the spiral nerves in the internal acoustic meatus, can’t distinguish a pure tone as being in front of or behind the head. The ear-brain system actually manages quite a feat by distinguishing the direction of both high and low frequency sounds. For that, the ear-brain’s methods are phase shift and time delay, respectively.
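    The time-delay cue Niels mentions, the interaural time difference, is easy to quantify with a simplified spherical-head model; the head width and speed of sound below are assumed typical values, not measurements from the discussion:

```python
import math

HEAD_WIDTH_M = 0.175     # assumed typical ear-to-ear distance
SPEED_OF_SOUND = 343.0   # m/s in room-temperature air

def itd_seconds(azimuth_deg):
    """Simplified interaural time difference: a source at angle theta
    off dead-center reaches the far ear roughly d*sin(theta)/c later."""
    return HEAD_WIDTH_M * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for angle in (0, 30, 90, 150):
    print(angle, round(itd_seconds(angle) * 1e6), "microseconds")

# Note sin(30 deg) == sin(150 deg): a source 30 degrees ahead and one
# 30 degrees behind produce the same delay, which is exactly the
# front/back ambiguity for pure tones described above.
```

    Real listeners break the front/back tie with the spectral filtering of the auricle, which is the part that varies too much from person to person for a one-size-fits-all headphone to reproduce.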

    Our experienced ear-eye-brain systems learn how to use environmental cues to locate sound sources in 3 dimensions. Even in the middle of a still, cool, dry, windless, empty, flat desert the trained ear-brain system might be able to distinguish a pure tone coming from in front or behind because the waves might behave differently at the auricle. From the front waves would be distorted one way as they bounce off the auricle and into the canal; from the back the waves would be distorted a different way as they are reflected off the back of the auricle. The experienced brain might be able to detect the difference. However, Ultrasone headphones surround the entire ear, just like any other headphone, and they certainly have a nice big speaker cone, which would make for greater fidelity to low frequencies, but argues against the phased array that would be required to direct sound toward the front or back of the auricle.

    Another electronic argument that is very easy to test is that to get another axis of freedom in a headphone, you’d need another input. Do these headphones have special jacks that plug into special equipment that provide that other input?

    My final argument against Ultrasone is this: it is very unlikely that Ultrasone figured out what the hordes of engineers at Sony, Bose, and Sennheiser haven’t.

    Honestly, looking through Ultrasone’s materials, they may make great headphones, but I’d be pretty suspicious of their marketing claims.

    The hologram thing, on the other hand, is interesting news. The physics of holograms has been around for a while. Last I’d heard was about, wow, five years ago that a solid state blue laser had been developed, which completed the complement of green, blue, and red lasers necessary to produce full color holograms in a way that has a chance of becoming inexpensive.

  • Don says:

    D.Sc. Florian Koning, the founder of Ultrasone, responded to an inquiry about exactly how the S-Logic technology achieves its goal. First I have to admit that when I picked up these headphones I was as skeptical as could be, simply because there is no pure surround sound for human beings (we do not have a third ear to receive and process this data), and all of my previous experiences with surround involved the phased arrays mentioned by Niels. I do feel the words “surround sound” were used to place this technology amidst an audio standards marketplace. How can a two-point system provide surround? That was the headscratcher question I asked myself prior to listening and falling in love with my new headphones.

    More accurately these headphones can be described as creating a front spatialization auditory event through the means of off axis placement of the driver (down, to the front, and canted inward) with special design of the headphone buffer board and ear cup and specification of the earpad material. Indeed when I remove the earpads in my headphones I can see that the driver is not mounted in the center of the ear cup and there is a type of surface that resembles a scalloped mini-bandshell to provide hard surface directionality. The Ultrasone manual states that to achieve the correct S-Logic performance the headphones must be worn with the headband over the top of the head.

    D.Sc. Florian Koning does agree with Niels’ description of the variance in pinna/auricle response from person to person; from his response:

    “Reading your lines I aggree, that the outer ear shape / pinna reactions are fluctuationg very much, so that you have a standard deviation maximum at 4 – 7 kHz of 7 dB and max. differents of > 15 dB comparing individual humans! Normal or one mean headphone can’t work with all head-related hearing people world wide equalized! You need an individual adjustment to produce main anatomic filter effects for instance for a horizontal plane hearing image (stereo / surround) – de-centrics speaker placement of S-LOGIC … plus some acoustics adjusments to reduce standing waves for a point sound source near-by the pinna. “If you got it naturally perceived” the brain switches to an enhanced distance perception of auditory events and this causes a “subjectiv” SPL reduction. . .”

    Interesting to note in the AES technical papers submitted in 1995 and 1997 is the exploration of multi-driver designs within each ear cup to provide response similar to what occurs naturally in different parts of the pinna/auricle. For anyone interested I can forward D.Sc. Florian Koning’s response to me which includes the two technical papers submitted to AES. In the meantime I found a less technically detailed summary here: http://www.eastcoastaudio.com.au/news.html

    Subjectively. . . I was watching some major studio released feature film DVDs with these headphones alone in my studio and spun around to tell the film crew to stop shuffling around. I felt like I was on the set hearing the actors in the room in which they were speaking.

  • Niels Olson says:

    I can understand how this geometry and surface engineering would create the perception that sound is coming from the front, so the brain could then use its phase-shift/time-delay methods to decide where in the front 180 degrees the sound is coming from.

    Is it possible your experience in studios and stages cued your brain to interpret the shuffling as behind you? I wonder if a child would come to the same conclusion. I’ll check with the neuroscience folks. Do you have any sound engineering sources that can provide some insight on perceiving sound in the posterior 180 degrees?

  • Don says:

    There are a good number of factors that could lead my brain to interpret the sound of shuffling as behind me, and after I experienced it with what I thought would be a really clean signal source (a major motion picture on DVD) I started to listen for it elsewhere, along with other audio artifacts. Good microphones pick up practically everything unless noise gated, and if the capsule picks up anything from the rear, even slightly… you’ll hear folks shuffling around low in the mix.

    Part of my measure of a good microphone is if I can hear a cricket fart outside with all the sound dampened studio doors closed. (The studio is located in the center of a 100,000 square foot cinderblock warehouse.) I joke and say cricket fart because a good number of sound sources contain artifacts that are either missed or can’t be reasonably ducked out with a noise gate or any eq notching type of attenuation without the loss of some material… so the sound engineer leaves it in there.

    See Head Related Transfer Functions for Immersive Audio Telepresence.

  • Andrew Nicholls says:

    It’s possible to play music too loud, but is it possible to play it too slowly?

    The New York Times reports on the changing of a chord in John Cage’s ongoing piece, “Organ2/ASLSP”:

    John Cage’s Long Music Composition in Germany Changes a Note

    HALBERSTADT, Germany, May 5 — “Three, two, one, out!” With those words, two organ pipes were lifted from their position on Friday. Like a sudden change in light, the chord that had been continuously sounding inside an ancient church here shifted, growing thinner and higher.

    It was another milestone — well, inchstone — in the performance of “Organ2/ASLSP,” a version of the John Cage composition titled “As Slow as Possible.” And slow means slow. The piece, which began on Sept. 5, 2001, is not scheduled to end until 2640. But there will probably be a break after the first movement, which lasts a mere 71 years.

    Here it is in full, at a much faster tempo:

  • Edward Tufte says:

    Here’s an interesting account of sensory overload from a personal blog:

    November 05, 2005

    Khaled, and why concerts are too loud

    We’ve got several CDs of Khaled. I like them for his voice, and the swing and rhythm of the music. Much of it is very "danceable", but at the same time the rhythms are more than the simple ONE-two-THREE-four of western pop.

    So I thought I’d really enjoy him in concert this evening. I enjoyed it so little that it made me wonder about things.

    #1: The setup.
    On the CDs he’s often accompanied by only two or three instruments – acoustic guitar + accordion for example, or drums + violin + piano. His voice gets a lot of space, and has a lot of depth. Today, he was backed by lute, bass guitar, 2 electric guitars, one whole rock-style drum set, one hand drum, two keyboards, and a 3-man brass section. His voice had two layers of effects (vibratos and echo) and during some songs, one of the keyboard players was doing more singing than Khaled himself. The net effect was that his singing got blended into a general mass of sound and didn’t stand out, and it all sounded more like a standard rock concert than rai.

    Does his voice no longer work on its own – has he lost it? Or is this an attempt to capture larger Western audiences by adapting the style to what the average European is used to?

    #2: The lighting.
    A floodlight of pure white, aimed at the faces of the audience, and about 4 times larger and stronger than anything aimed at the stage. Not just a little spotlight, this was so bright that it made my eyes water even when I closed them; I had to block it with my hand. What does a lighting designer think when doing something like this? “Let’s weed out the weak ones?”

    #3: The volume.
    Start out somewhat loud-ish. Turn it up. (We don earplugs.) Turn it up some more. And then a little bit more. Until it got to the point where we found it physically painful, couldn’t stand it any more, and walked out.

    This was even more of a surprise because the Barbican can usually be relied on to provide good (or at least reasonable) sound quality – unlike the South Bank Centre (Royal Festival Hall / Queen Elizabeth Hall) that we’ve stopped going to for concerts, because their sound has been bad far more often than good.

    This is not the first time we’ve left a concert because it actually hurts, so we’ve asked ourselves the same questions before. How can everybody stay there and seem to enjoy it? Are they all half deaf, since they’ve been hearing music at this volume for years? Do they hear but don’t mind?

    And more importantly, why is it done this way? Do people like it? Are the sound engineers deaf themselves? Or does everybody in the audience have tiny tinny speakers at home, so that they don’t know what music sounds like when it’s good — when the sound is well balanced and the volume is appropriately loud?

    So I Googled for a bit (“concert too loud”). The most informative page I found was Edward Tufte commenting on the same issue on his web site (which has a whole lot of other interesting stuff too). Here are some of the responses:

    The stage foldback (or monitor) system is independent of the main sound system and creates an intentionally different mix (often a separate one for each member of the band). The level is often extremely high to get control of the mix (eg if you have a double Marshall stack right next to you, the vocals in the foldback have to be loud enough to get above the guitar level). This does mean the house system (the audience’s) has to be loud enough to get above any ‘spill’ from the foldback system.

    I had an interaction with a sound engineer setting up a performance. I expressed my concern over the high sound levels. He reassured me that his group had found that if the levels started low and then gradually increased, the congregation is not aware of the high levels of exposure.

    In my rock club experiences, the sound engineer is typically the deafest person in the room. The engineers have subjected themselves to more loud music over the years than even the band members since many of them are “house” engineers or, if touring with bands, are out in front of the band night after night, soaking up the decibels. The ubiquity of “treble creep” is overcompensation caused by hearing that is literally notched out by damage in the higher tonal ranges. This explains the excruciating sharpness so common in live rock audio mixes these days.

    I think another factor here is key, the specious practice of amplifying the drum kit. I think this got started when rock bands began playing arenas, but it then became fashionable to do this in even the most intimate of clubs. For anything but the most expansive club, the typical rock drummer is already playing at ear-splitting levels without any amplification whatsoever. Amping it just makes it worse, and a byproduct is that all the other instruments have to turn up to compete.

    And a related comment regarding sound quality (from a standup comedian):

    “It is harder to be funny in a room with a very high ceiling — because the all-important start-up laughter from a small part of the audience has little contagion effect with the rest of the audience. The start-up laughter at a remark takes several seconds to go up to the high ceiling and come back down, too faint and too late to reach the yet-to-be amused members of the audience. The Comedy Connection has a low ceiling for good reason.” All quite interesting. I think the only conclusion from this is that in the future I will think twice before buying tickets for a concert by one of the big-name artists. The less mass-market ones are likely to care more about sound quality.

    Today, we went home and enjoyed Khaled on CD instead.

    The comment quoted from a “standup comedian” was written by me, and that description is a definite first.

  • Tchad says:

    Sir Harrison Birtwistle joins our exclusive club of people against over amplification. Taking the stage at the 51st Ivor Novello Awards, to receive an award for his contribution to modern British composing, Sir Harrison Birtwistle launched into a tirade against the largely pop-orientated crowd:

    “Why is your music so effing loud?” demanded the knight. “You must all be brain dead. Maybe you are: I didn’t know so many cliches existed until the last half-hour.” (from the Guardian)

    Fuller comments from Sir Harrison a few days after the ceremony can be found in the Telegraph:

    Why lash out at pop music in general for being too loud? And isn’t that a bit rich coming from the composer of The Mask of Orpheus – which Birtwistle himself describes as possibly the loudest piece ever written? “Yes, but we’re talking about contrast. Everything was so relentless at the Novello awards, I felt deadened. I’ve also written some of the quietest music ever.

    “The point I really wanted to make is the distinction between volume and energy. These days, so many things have a technologically produced noise but no energy. It’s the same in film. Compare the recent King Kong with the original. In terms of technology, the old one is primitive, but what feeling it has. The new one shows you everything, it’s got much better technology, but it’s empty.”

    Well, maybe, but it’s hard to see how this applies to music. “It’s a mental thing, it’s to do with something containing an energy that’s hidden and not to do with battering the senses. The other day I heard Alfred Brendel play Beethoven’s Fourth Piano Concerto. You remember how it begins with just that simple quiet chord? Think what’s required to achieve that.”

    So is he talking about classical performance? “No, it’s there in the music itself; the performer discovers it and makes it real. There’s a mystery in classical music, an energy that comes from avoiding routine. Listen to Bach, see how those progressions are always typical but always surprising. There’s always something that escapes analysis.

    “And, in my music, I’m always concerned not to repeat myself. Composing is all about getting out of a corner, and I’m always thinking about a different way to do that.”

    And an interview, that same week in the Telegraph:

    ‘Why me? How come I’m always cast as this bogeyman who makes horrible modern music?” Harrison Birtwistle sounds plaintive, but you could say he only has himself to blame. He writes pieces which are often very big, hugely noisy, and fantastically complex (he describes his opera The Mask of Orpheus as “one of the most complex works of art ever devised”). And as if that weren’t enough, he attacks pop culture, which is guaranteed to get him bad publicity. He still hasn’t lived down the time he sounded off against pop music at the 2006 Ivor Novello Awards. “Well, I had had a lot of champagne,” he says ruefully.

    So how come he’s now on the bill at Ray Davies’s Meltdown Festival at London’s Southbank Centre, alongside musicians such as Madness and Ron Sexsmith? He hasn’t a clue. Has he heard of Davies, famous frontman of the Kinks? “Of course I have. I saw him in that concert on Buckingham Palace roof some time back. I thought he was much the best.”

    Ah, so not all pop is bad? “No, just most of it,” he says with a little laugh, as if knowing he’s about to get himself into trouble again. “But I think my sensitivity to music means I can spot good things in pop, as in any genre.” I try to lead the conversation towards the difference between pop and classical music. “Hmm…” he says. There’s a long pause, and eventually he says: “The trouble is most people think classical music is something like film music. My taxi driver said he’d heard my orchestral piece Night’s Black Bird and told me ‘that would make really good film music’. He just didn’t get it.”

    So what is the difference? “I don’t know… it’s very difficult. Film music is a collection of sound-bites, art music is a journey.” Talking to Birtwistle is like following a pig snouting for truffles. It looks aimless, but you know something good will turn up if you just wait.

    Can what Birtwistle does be put side by side with a Beethoven string quartet? Isn’t he really doing something completely different? He doesn’t answer the question head-on. But eventually we arrive at a reply via a back route, when he strays on to the subject of classical pieces that mean the most to him.

    They turn out to be surprisingly traditional. “Mendelssohn’s violin concerto… it’s perfect.” What strikes him about it? “I love the way you’re flung straight into the heart of something really intense.” Then he mentions another piece straight out of Classic FM’s Hall of Fame: Schubert’s Unfinished Symphony. He loves it for its melancholy, which in Birtwistle’s lexicon is a term of high praise (he’s written a fine clarinet concerto entitled Melencolia 1).

    But more than that, he loves its mysteriousness. “The horn melody that begins it,” he says, humming it in a quavery tenor. “You think that’s the essence of the piece. Then you get that tremor on the violins, and then the oboe comes in with that lovely melody. You’ve got three things, but what is the relationship between them? Which is foreground, which is background? Exploring that relationship is what the piece is all about.”

  • hazelle jackson says:

    Just to let you know that yesterday I went to see Dwight Yoakam in concert in London on his 2006 European tour (at the Hammersmith Apollo theatre). The show was late starting because, rumour had it, Dwight was not happy with the initial sound check. But boy was it worth it when it finally got underway – the balance between band and singer was perfect – you could hear Dwight clearly even at the back of the circle where we were sitting. And the instruments of his four-man backing band were just right too: not overloud but crisp and clear. The audience was ecstatic.

    Shows it can be done. Maybe you could find out who Dwight’s sound engineer is for the tour and ask him how he does it.

  • Ken says:

    I took my kids to see Alice Cooper perform in a small theater last night. Like most of the concerts that I have seen recently (Springsteen, Dylan), the sound was an extremely loud, mushy muddle of noise. Don’t get me wrong: I love loud music, but I want to be able to hear the vocals and instrumental solos, hopefully with some changes in dynamics, separately and distinguishably from the backing instruments. As other posters have stated, it can be done. For example, I recall a Tina Turner show in a 14,000-seat arena where every voice and instrument was perfectly clear – and still loud enough to leave my ears ringing 24 hours later. Please, let’s keep up the pressure on performers and promoters to improve sound quality. Otherwise, it hardly makes sense to go to shows anymore.

  • Gordon Clark says:

    I went to a Franti concert at Higher Ground last Saturday, and had never been exposed to such a deafening, relentless volume. I meant to bring earplugs after a concert experience a year ago, but forgot. Will never make that mistake again! The volume was so high that my internal organs were hurting. I would have gone outside to listen, but it was cold and I depended on the people who brought me there to get home.

    I am 53 and have been to some great concerts (Hendrix, Zappa, Dead, Miles, and many more) but NEVER was exposed to this loudness. The bass hurt, but the mid- and high-range was excruciating.

    I guess I don’t get around much anymore, as this seemed normal to all present. My pal said he thought it wasn’t as loud as the last show. I think everybody there must be half deaf, or they would have found the sound level painful. I worry for the futures of the young people there who will suffer significant hearing loss before they are half my age. Very sad. I couldn’t help thinking, is there no legal limit to loudness in these clubs? This is no joke; it is a public health problem.

  • Bob says:

    My wife and I had to bail out of a wedding reception for a close friend when the band was so loud that my wife was having a serious anxiety episode from it. I could just barely tolerate it. This was in a room perhaps 40 x 40 feet, with the amps up at outdoor concert levels.

    It was quite unfortunate, and I now wish I’d spoken to my friends about this. Perhaps something could have been done. I’m sure I wasn’t the only person who felt that way.

  • Matt says:

    Having worked in concert lighting when I was younger, I’ve had many occasions to talk with good audio engineers. You can generally tell them from the others, since they are the ones with earplugs.

    In talking with them, they often complain of getting overridden in sound decisions by the performers, who may be able to play their instruments but generally are horrible sound engineers. The view that louder is better pervades the industry.

    By putting earplugs in, you effectively dampen the low-frequency noise, which goes a long way toward making the show more enjoyable, not to mention staving off damage. If you attend concerts frequently, I recommend getting a box of disposable plugs. Don’t get the ones for construction/airport use; those will block too much of the sound.

  • Eric Isaacson says:

    Last week I attended a Los Angeles Philharmonic concert in the new Walt Disney Concert Hall. Among the works was Brahms’ Violin Concerto, with Joshua Bell performing. The acoustics in the room are so elegantly designed that even when Bell was playing the softest, most delicate moment in the solo cadenza, all the notes and the transitions between them could be heard in full clarity. And I was sitting in the last row of the top balcony. It was one of the most beautiful musical moments I’ve ever experienced. The trade-off (I suppose there must be one) was that I felt the bass range of the orchestra was a bit thin. Still, I’ll take this over an over-amped experience 100 times out of 100.

  • Bill Sharpe says:

    The sound at Walt Disney Concert Hall is fantastic. We attended an afternoon Philharmonic concert. There was a power failure delaying the start of the concert and the microphones weren’t working during the first part of the performance. However, the conductor’s explanatory remarks were very easy to hear throughout the hall with no mikes.

    My experience with loud music was limited to being a chaperone at some of our children’s dances in the school hall. After the first dance I always took ear plugs with me.

  • Jon Gross says:

    I would like to share a recent concert experience. We saw Larry Coryell (guitar), Paul Wertico (drums) and Larry Gray (bass) at the Jazz Showcase in Chicago recently. The Jazz Showcase is a venerable Chicago institution, now in its 60th year.

    During an extended bass solo, Larry Coryell switched off his guitar amplifier completely — yet his comping (accompanying playing) was audible in the small, quiet club. I don’t think I’ve ever experienced this. It was great!

    Paul Wertico played with great, expressive dynamic range, all the way to playing just his sticks.

    Overamplification is such an issue with me, I will simply avoid entire genres and facilities. I am happy to report this exception.

  • Kevin says:

    Just came home from a Billy Talent concert. My ears are sore and ringing, and I have a pounding headache. The mix was simply way too hot for the venue. As others have mentioned above, I witnessed the abuse of a perfectly capable system.
    The reinforcement was so loud that the vocals were distorting and clipping right up to the amps. I’m definitely not new to loud music: I own a touring PA system and I have worked in professional audio and lighting. When I was trained to do a concert house mix, the engineer always stressed going for a “natural” representation of the music being performed. What I witnessed tonight was everything but natural.
    For an aggressive rock group such as Billy Talent it was really unnecessary to mix that hot. At times the vocals completely overlapped the instruments. With that much raw screaming and yelling into the mic, open gain without a pad and an engineer who was already high/mid-frequency deaf made for a very painful experience for everyone.

    It’s simple: if your concert rig is going scratchy from being overdriven and hot-mixed, you are damaging the gear and the hearing of the audience. Have fun changing roasted drivers after the show.

  • Fraser Moffatt says:

    The post about hearing loss in symphony musicians struck a chord (pun intended) with me. I always wondered how loud dozens of stringed, wind, and reed instruments playing in close proximity could be.

    Really, I shouldn’t have been wondering too much as I have spent the majority of my life playing in pipe bands (as in bagpipes) as a snare drummer. You have yet to meet a more generally hard-of-hearing group of people than members of a pipe band.

    If the children on the side of a parade route hold their hands to their ears as the pipe band passes, just imagine how it feels to be playing in the middle of at least a dozen bagpipes and half a dozen drummers!

  • Matthew says:

    I went to a concert last night: Massive Attack, not normally a class of music that I would associate with good mixing, but I was in for a treat. The music wasn’t loud enough to get my ears ringing; I could clearly hear the vocals, the highs, and the lows. I wish I knew who the sound engineer was; he or she deserves a medal.

    It was the first time in about five years that I didn’t need my ear plugs.

  • Michael Wright says:

    I don’t object so much to poor acoustics at a rock concert, but I do object to the continuous barrage of noise from muzak, TVs and radios placed in “public” spaces like malls, grocery stores, restaurants and fast-food joints. I can choose not to attend a concert, but I cannot escape from the noise that ruins my peace of mind and does not contribute to my experience of these places. Can’t we have quiet?

  • john says:

    Here’s another factor: Nearly every time I’ve heard a band or amplified singer/songwriter in a bar or coffee house, I’ve felt that the volume was too loud, detrimental to the music, and at times potentially harmful: I’ve feared and perhaps actually suffered permanent hearing loss from a night in a bar.

    I’ve played music in bars and coffee houses for years, and I make a point of keeping the volume down so that people can talk and enjoy themselves and because it’s a bummer when someone comes up and says, “Hey, can you turn it down? My friends and I are trying to talk.”

    Over the years, many people have thanked me for not playing too loud, and these comments have helped me to keep the volume down in subsequent gigs, even when people don’t respond/applaud after the songs and I kinda start to wonder what I’m doing there and wish I were someplace else. Without comments or complaints, I might assume based on audience reaction that everyone except me likes it loud.

    But, I have played for an hour at a reasonable volume with a smattering of applause after every other song (and silence after the others), then turned up the volume too loud (for one reason or another), and people start clapping after the songs. Nothing’s changed except the volume; and to me, it’s just gotten worse. People aren’t necessarily listening more or better or enjoying themselves differently; they’re just responding to the volume. It’s a sort of Call and Response: I’m loud, now you be.

    Because I’m so strongly against TOO LOUD, I’d rather keep the volume reasonable. Also, I see my role as background/ambience, while others may see themselves as, and may be, the main attraction. Of course performers judge how well things are going by the audience’s reaction: foot-tapping, comments, tips, no complaints, singing along, dancing, and APPLAUSE. I know from experience that one way to get cheap applause (or what some people might consider earned applause, positive feedback, appreciation, acknowledgement, love) is to simply turn up the volume. (I think some players simply turn it up as loud as possible. That is, until feedback’s a problem.)

    If you build it, they will turn it up: technology drives the bus. Give a band a loud PA, and many are going to turn it up, just because it’s there. And, I think, after you play awhile (a couple hours or a couple years) at a certain volume, that volume doesn’t sound as loud to YOU, you may even have trouble hearing it, so you … turn it up.

    Perhaps this is some universal truth: Over time, volume escalates.

    I think for some folk louder’s better just because it’s louder (faster is better just because it’s faster, and more is better just because it’s more).

    11’s one more than 10.

  • Edward Tufte says:

    The reference is to the famous discussion of rock amplifiers in the film This Is Spinal Tap:

    Nigel Tufnel: The numbers all go to eleven. Look, right across the board, eleven, eleven, eleven and...

    Marty DiBergi: Oh, I see. And most amps go up to ten?

    Nigel Tufnel: Exactly.

    Marty DiBergi: Does that mean it’s louder? Is it any louder?

    Nigel Tufnel: Well, it’s one louder, isn’t it? It’s not ten. You see, most blokes, you know, will be playing at ten. You’re on ten here, all the way up, all the way up, all the way up, you’re on ten on your guitar. Where can you go from there? Where?

    Marty DiBergi: I don’t know.

    Nigel Tufnel: Nowhere. Exactly. What we do is, if we need that extra push over the cliff, you know what we do?

    Marty DiBergi: Put it up to eleven.

    Nigel Tufnel: Eleven. Exactly. One louder.

    Marty DiBergi: Why don’t you just make ten louder and make ten be the top number and make that a little louder?

    Nigel Tufnel: [pause] These go to eleven.

  • Edward Tufte says:

    See Adam Sherwin, “Why music really is getting louder,” from the Times (UK). The key point is about the loss of dynamic range.

    Dad was right all along – rock music really is getting louder and now recording experts have warned that the sound of chart-topping albums is making listeners feel sick.

    That distortion effect running through your Oasis album is not entirely the Gallagher brothers’ invention. Record companies are using digital technology to turn the volume on CDs up to “11”.

    Artists and record bosses believe that the best album is the loudest one. Sound levels are being artificially enhanced so that the music punches through when it competes against background noise in pubs or cars.

    Britain’s leading studio engineers are starting a campaign against a widespread technique that removes the dynamic range of a recording, making everything sound “loud”.

    “Peak limiting” squeezes the sound range to one level, removing the peaks and troughs that would normally separate a quieter verse from a pumping chorus.

    The process takes place at mastering, the final stage before a track is prepared for release. In the days of vinyl, the needle would jump out of the groove if a track was too loud.

    But today musical details, including vocals and snare drums, are lost in the blare and many CD players respond to the frequency challenge by adding a buzzing, distorted sound to tracks.

    [Illustration in the original article: distortion of dynamic range]

    Oasis started the loudness war and recent albums by Arctic Monkeys and Lily Allen have pushed the loudness needle further into the red.

    The Red Hot Chili Peppers’ Californication, branded “unlistenable” by studio experts, is the subject of an online petition calling for it to be “remastered” without its harsh, compressed sound.

    Peter Mew, senior mastering engineer at Abbey Road studios, said: “Record companies are competing in an arms race to make their album sound the `loudest’. The quieter parts are becoming louder and the loudest parts are just becoming a buzz.”

    Mr Mew, who joined Abbey Road in 1965 and mastered David Bowie’s classic 1970s albums, warned that modern albums now induced nausea.

    He said: “The brain is not geared to accept buzzing. The CDs induce a sense of fatigue in the listeners. It becomes psychologically tiring and almost impossible to listen to. This could be the reason why CD sales are in a slump.”

    Geoff Emerick, engineer on the Beatles’ Sgt. Pepper album, said: “A lot of what is released today is basically a scrunched-up mess. Whole layers of sound are missing. It is because record companies don’t trust the listener to decide themselves if they want to turn the volume up.”

    Downloading has exacerbated the effect. Songs are compressed once again into digital files before being sold on iTunes and similar sites. The reduction in quality is so marked that EMI has introduced higher-quality digital tracks, albeit at a premium price, in response to consumer demand.

    Domino, Arctic Monkeys’ record company, defended its band’s use of compression on their chart-topping albums, as a way of making their music sound “impactful”.

    Angelo Montrone, an executive at One Haven, a Sony Music company, said the technique was “causing our listeners fatigue and even pain while trying to enjoy their favourite music”.

    In an open letter to the music industry, he asked: “Have you ever heard one of those test tones on TV when the station is off the air? Notice how it becomes painfully annoying in a very short time? That’s essentially what you do to a song when you super-compress it. You eliminate all dynamics.”

    Mr Montrone released a compression-free album by Texan roots rock group Los Lonely Boys which sold 2.5 million copies.

    Val Weedon, of the UK Noise Association, called for a ceasefire in the “loudness war”. She said: “Bass-heavy music is already one of the biggest concerns for suffering neighbours. It is one thing for music to be loud but to make it deliberately noisy seems pointless.”

    Mr Emerick, who has rerecorded Sgt. Pepper on the original studio equipment with contemporary artists, admitted that bands have always had to fight to get their artistic vision across.

    He said: “The Beatles didn’t want any nuance altered on Sgt. Pepper. I had a stand-up row with the mastering engineer because I insisted on sitting in on the final transfer.”

    The Beatles lobbied Parlophone, their record company, to get their records pressed on thicker vinyl so they could achieve a bigger bass sound.

    Bob Dylan has joined the campaign for a return to musical dynamics. He told Rolling Stone magazine: “You listen to these modern records, they’re atrocious, they have sound all over them. There’s no definition of nothing, no vocal, no nothing, just like – static.”
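
    The “peak limiting” the engineers describe is easy to demonstrate numerically. The sketch below is purely illustrative (a toy two-section signal and arbitrary gain/ceiling values, not any studio’s actual mastering chain): it boosts everything into a hard ceiling and measures how the verse-to-chorus contrast collapses.

```python
import math

def rms(xs):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Toy "song": a quiet verse followed by a loud chorus (same 440 Hz tone).
SR = 8000
verse  = [0.1 * math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]
chorus = [0.9 * math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]

def limit(xs, ceiling=0.3, makeup=3.0):
    """Crude peak limiter: apply makeup gain, then clip hard at the ceiling."""
    return [max(-ceiling, min(ceiling, makeup * x)) for x in xs]

def range_db(loud, quiet):
    """Level difference between two sections, in decibels."""
    return 20 * math.log10(rms(loud) / rms(quiet))

before = range_db(chorus, verse)                # ~19 dB of verse-to-chorus contrast
after  = range_db(limit(chorus), limit(verse))  # contrast collapses to ~3 dB
print(f"before: {before:.1f} dB   after limiting: {after:.1f} dB")
```

    The quiet verse comes out louder and the loud chorus comes out clipped, so the whole track sits near the ceiling: exactly the loss of dynamic range, and the added distortion, that the article complains about.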

  • Edward Tufte says:

    Physics of the human voice in Scientific American:

    Why can an opera singer be heard over the much louder orchestra?

    John Smith, a physicist at the University of New South Wales in Sydney, Australia, belts out an answer to this query.

    In both speech and singing, we produce sustained vowel sounds by using vibrations of our vocal folds–small flaps of mucous membrane in our voice box (or Adam’s apple)–that periodically interrupt the airflow from the lungs. The folds vibrate at a fundamental frequency, fo, which determines the pitch of the sound. In normal speech fo is typically between 100 and 220 hertz (Hz), or vibrations per second. In contrast, a soprano’s fundamental frequency ranges anywhere from 250 to 1,500 Hz. The complicated motion of the vocal folds means that speech and singing also contain a series of harmonics–which are basically multiples of the frequency in question–with frequencies of 2fo, 3fo, 4fo, and so on.

    Usually, the fundamental frequency has the greatest acoustic power, but the very high harmonics, although less powerful, have the advantage of residing in a range above about 3,000 Hz, where the orchestral accompaniment provides less competition. Sopranos have an advantage over lower voices, such as the bass and tenor: Due to their higher range, the auditory frequency at which they sing, as represented by their fo, lies in the neighborhood of frequencies to which the ear is most sensitive. In contrast, the lower fundamental frequencies of male voices cannot compete as easily with the power of an orchestra; male singers, therefore, must often rely on their higher harmonics in order to be heard.

    Classically trained sopranos also make use of a technique called “resonance tuning” to intensify the vibrations of the vocal folds and increase the power of the voice. The vocal tract–the “pipe” between the voice box and the mouth–has a series of resonance frequencies (R1, R2, R3, et cetera), which provide an effective transfer of acoustic power from the vibrating vocal folds to the surrounding air. Harmonics that fall at or near these resonance frequencies are most efficiently radiated as sound.

    The two lowest frequency resonances, R1 (approximately 300 to 900 Hz) and R2 (approximately 800 to 3,000 Hz), play an important role in speech, providing high power for harmonics of fundamental frequencies that are close to them. Further, we can vary resonance frequencies by moving our tongue, lips, jaw, and so on. Thus, adjusting the configuration of the mouth alters R1 and R2, and this in turn changes which harmonics are emphasized.

    In singing, the fundamental frequency determines the pitch, which is specified by the composer or performer. Singers can significantly increase their loudness by adjusting the resonance frequencies of their vocal tract to closely match the fundamental frequency or harmonics of the pitch. Sopranos can sing with fundamental frequencies that considerably exceed the values of R1 for normal speech, but if they left R1 unaltered, they would receive little benefit from this resonance. Consequently they tune R1 above its value in normal speech to match fo and thus maintain volume and homogeneity of tone. This is often achieved by opening their mouth wide as if smiling or yawning for high notes, which helps the tract act somewhat like a megaphone. (This tuning of R1 away from its values in normal speech, as well as the large spacing between harmonics, has implications for intelligibility, and is one reason why singers can be hard to understand at very high pitch.) Singers at lower pitch sometimes tune R1 to match harmonics (for example, 2fo) rather than the fundamental, but do not usually practice resonance tuning as consistently as sopranos.
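
    The arithmetic behind the harmonic argument is simple to check. A minimal sketch, using illustrative fundamentals (150 Hz for speech, within the 100-220 Hz range quoted; 1,046.5 Hz, the standard soprano high C, which falls in the article’s soprano range), finds the first harmonic of each voice that clears the ~3,000 Hz band where the orchestra offers less competition:

```python
def harmonics(fo, n=24):
    """First n harmonics of fundamental fo: fo, 2*fo, 3*fo, ..."""
    return [k * fo for k in range(1, n + 1)]

for label, fo in [("speech (150 Hz)", 150.0), ("soprano high C (1046.5 Hz)", 1046.5)]:
    above = [(k + 1, f) for k, f in enumerate(harmonics(fo)) if f > 3000]
    print(label, "-> first harmonic above 3 kHz:", above[0])
```

    The soprano’s third harmonic (about 3,140 Hz) already clears the band, while the speaking voice does not get there until its 21st harmonic, which is why lower voices must lean so heavily on weak high harmonics to be heard.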

  • Edward Tufte says:

    Nascar racing sound levels by Viv Bernstein in the New York Times:

    The Sound and the Fury, and Possibly the Danger

    The roar of nearly 50 revving engines reverberated through the mostly empty stands Friday at Bristol Motor Speedway as Nascar’s elite Nextel Cup teams took to the half-mile oval racetrack for practice.

    It was loud — painfully loud. That seemed fine with Josh Whitt, 28, a fan from Trussville, Ala., who watched from the infield a few yards from the track.

    “I think that the exciting part about Nascar is the noise,” Whitt shouted as the cars raced past. “It’s energy. Energy makes it that much better.”

    The noise also makes it more hazardous not only for fans, but also for drivers, crew members and everyone else who spends time at a racetrack during a Nascar event. That is the finding from two studies by the National Institute for Occupational Safety and Health, or Niosh, which reports that sound levels at tracks reach dangerously high decibel levels. (continue reading)
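
    For a sense of scale: Niosh’s recommended exposure limit is 85 dBA averaged over 8 hours, with the allowable time halving for every 3 dB above that. A small sketch of that criterion (the formula is Niosh’s; the example levels are illustrative, not figures from the racetrack studies):

```python
def niosh_allowed_hours(level_dba):
    """Allowable daily exposure under the NIOSH criterion:
    85 dBA for 8 hours, halved for every 3 dB above (3-dB exchange rate)."""
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

for level in (85, 94, 100, 115):
    hours = niosh_allowed_hours(level)
    if hours >= 1:
        print(f"{level} dBA: {hours:g} hours")
    else:
        print(f"{level} dBA: {hours * 3600:.0f} seconds")
```

    At 100 dBA the allowable exposure is only 15 minutes; at 115 dBA, about half a minute, which is why an afternoon at an unprotected trackside is hazardous.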

  • Tchad says:

    Tesla Grand Prix?

    Imagine what an electric race would sound like; there might even be an opportunity to layer events. Perhaps a race on the track and a concert in the middle (infield?). I don’t follow Nascar, but I would imagine that only the first few laps and the last ones are actually interesting as a spectator… it might be nice to have other options.

  • Edward Tufte says:

    An intriguing article by composer Andrew Waggoner:

    The Colonization of Silence

    The colonization of silence is complete. Its progress was so gradual that even those who watched it with alarm have only now begun to take stock of the losses. Reflection, discernment, a sustainable sense of tranquility, of knowing where and how to find oneself–these are only the most obvious casualties of marauding noise’s march to the sea. Much more insidious has been the loss of music itself.

    But wait, this can’t be: Music is everywhere; we have more of it, available in more forms, more often, than at any time in human history. I can go to the web and find O King of Berio, Baksimba dances from Uganda, something really obscure like Why Are we Born (not to have a good time) of the young Buck Owens, even Pat Boone’s version of Tutti Frutti; I can find all of the same at the mall. Surely this is a good thing. I can find renewal of spirit in Sur Incises of Boulez or stand aghast at the toxic grandiloquence of Franz Schmidt’s Book of the Seven Seals. Music is everywhere. Long live it.

    Just give me five minutes without it; that’s all I ask, perhaps all I’ll need to bring it back into being for myself. Imprisoned by it as I am now, assaulted in every store, elevator, voice-mail system, passing car, neighbor’s home, by it and its consequent immolation in the noise of the quotidian, it is lost to me as anything other than a kind of psychic rape, a forced intimacy with sonic partners not of my choosing. When music is everywhere, it is nowhere; when everything is music, nothing is. Silence is as crucial to the musical experience as any of its sounding parameters, and not merely as a kind of acoustical “negative space.” Silence births, nurtures, and eventually takes back the musical utterance; it shapes both the formation of its textures and the arc of its progress through time.

    And, of course, since silence–unless one is in an anechoic chamber–is never wholly silent, its presence, its expression, allows us to distinguish between sounds that are and are not music. Thus is not-music given entry into music; Cage aside, the sounds in the hall, in the street, in the club, are not music, though they become part of the shared discourse that is; they are the fragments of conversation overheard but not comprehended by speakers of a different language, passionately engaged in their own dialogue amidst the whirl of an alien culture. Thus is music’s wonderful strangeness amplified by a silence that is always trying to tell us something. Exactly what it tells us can be very precise, at least in musical terms.

    A few examples: The spaces that enfold the last phrases of Webern’s Three Orchestral Songs of 1913-14 open a door for us into an understanding of “O neige Dich, O komme wieder, Du grüßt und segnest…” (O incline to us, O come back… You greet and bless / The breath of evening takes away the light / I see your dear face no more) that most of us cannot access through a reading of the text alone. We are drawn into a time-sense that seems to pull us across an event horizon, where things seem both very big and very small. Thus, with the solo whispering of “Du grüßt und segnest,” the surrounding silence amplifies the softness of Du, the sibilance of segnest; the intimacy of this direct entreaty is made almost overpowering, embarrassing even, by the varied acoustical richness of each syllable. “You greet and bless” becomes a sacred utterance of hushed sensuality that transforms the “Mother of grace” whose “dear face [we] see no more” into a departed lover.

    In Webern’s Variations Op. 30 something wonderfully different happens. At the work’s outset, individual attacks are set off from each other by the silences that separate them. Our sense of the attacks, their lengths and intensities, is determined by the lengths of the silences. As we move through the piece and individual attacks are layered with others of varying durations, finally yielding extended lines, our apprehension of shape is fueled by the recollection of past silences in our present, echoic memory, on that level of consciousness where past remains present; we continue to experience silence through the welter of polyphony, a silence of varying degrees, each given its own feeling-tone by the different sounds that eventually overtake it. This form-giving potentiality of silence, that is to say the active memory of silence as an agent in the musical discourse, is so important in Webern’s music as to be generalized as a basic principle.

    This can be said of much other music as well, of course, really of all musics on some level. But what was a tacit understanding for most composers before Webern became for him a core expressive value. Thus we can say that without a silence within which to develop, and in which the listener is deeply immersed, Webern’s music only half-exists at best.

    This is true also of Morton Feldman, in whose late works a different–but no less dynamic–sense of silence is at play. In For Samuel Beckett, to take just one example, a massive texture unfolds slowly, almost imperceptibly, with columns of sound fanning out inch by sonic inch, their relationships to each other constantly changing through a gradual process of temporal displacement. The effect at first could be described as prismatic, with shafts of light intermittently piercing the slow turning of these huge shapes. Further on, however, it begins to feel as if the texture is breathing, with dynamic swells resulting naturally from the juxtaposition of different timbres. Listen further still and we now find silences parting the texture and defining the large-scale motion of the piece. In constructive terms these silences are the result of ever-widening spaces created by the gradual slowing of the canons that run from the work’s beginning to its end. The affective sense, however, is just the opposite: It is silence that drives the work’s pulmonary rhythm; silence asserts itself with greater and greater confidence, with a stronger sense of self, until it clears away the texture and is left with only itself, only its own perfect wholeness. For those who love Feldman’s music and are able to stay with it to the end the effect is tonic, serving to endow one’s own cluttered life with a sense of space.

    What do Webern and Feldman have to tell us about our lives and our time? Do they speak directly to the experience of life in a century of clanging metal and unabated trivia? Or does their work simply tell us about them? For Webern it seems to have been the composing-out of a singular view of art as the willful emanation of nature, the incarnation in pure, formal sentience of the structural purity of the cosmos, that constellated his unique sense of the delicate and the hushed. Feldman’s view was equally rich and evocative, colored as it was by his friendship with Cage, though expressed in more workaday terms (he once remarked that like a tailor, he was a craftsman, committed to quality of detail; “the suit fits better” he said.). On a personal level one could conclude that Webern, a small, modest, and introspective man was born to write small, quiet music. How then to explain Feldman? Big, garrulous, fun, brilliant, obscene, he seemed to present a persona that defied connection with his art.

    Whatever their sources in the personal histories and genetics of their composers, the musics of Webern and Feldman (and countless others from recent years) make possible for us a relationship to silence, and the room it gives us, that would seem otherwise impossible in a deafening age. The properties in their musics that accomplish this are not unique to them, of course, any more than the perception that the world is getting louder is unique to us as moderns. These are both matters of degree. It is probable that in two hundred years–if we are still in a position to have this discussion–the level and din of information exchange, aural, digital, and (who knows) psychic, may make our current age seem a veritable Walden. That doesn’t mean we don’t have a problem.

    For us to be able to enter the world that music creates for us, we need a silence within which to listen. It will be said in response that in many cultures music is not presented as an object of veneration within a temple of adoring quietude, but rather as part of the rush and tumult of everyday life; thus we should not need the expectant hush of the concert hall ourselves in order to go into our music. These are valid points that do challenge the clear subject/object separation that classical music traditions have tended to enforce.

    In many world societies, however, there are still spaces–if only interior, or metaphorical, or temporal–set aside for contemplation, for noiseless recalibration of the soul, and in contemporary American culture there are almost none. Our social rituals are constrained by the incessant soundtrack imposed in our public spaces, and our places of worship, by and large, have given themselves over to a muzak-based sense of liturgy that tells us at every step of the way what to feel and with what intensity. Many of us, turning away from both mainline- and mega-church, have sought peace in new-age bookstores, but these, even with their palmists and meditation rooms, surround their patrons with a noxious haze of synthesizers, pennywhistles, and Inuit drums. But beyond shopping, what primary experience are we having here? Are we listeners seeking an archetype of beauty or seekers listening for the godhead? It turns out we are neither–though we may have been duped into one or the other conviction. We are simply consumers. The hope is that, like dairy cattle, we will become more productive if encouraged in our purchases by this kind of marginal musical discourse.

    This, of course, is the common denominator in all the examples above, and it extends beyond the ritual into the political. If we frequent any number of the hipper clothing chains we will find ourselves buoyed by emo or hip-hop beats that serve to wash away the sense of complicity we feel in supporting a sweatshop economy; the music is telling us that we belong here, that we’re different, we’re aware, we’re not the problem. We’re down with all the world’s peoples, with the losers and dreamers, with the left and the right. We’re down with EVERYONE; we don’t want any trouble, we just want to buy a pair of cargo pants. Once again, the absence of silence makes it impossible for us to decode the onslaught before we’ve succumbed to it. And this is not just a function of capitalism. It’s worse.

    We find ourselves as a culture unable to assuage our loneliness except through the ceaseless accompaniment of our everyday actions. In such a world buying a book or a shirt is not merely to acquire a thing, to fill a need; it is, rather, to participate in the forced scripting of our lives according to commercial archetypes that tell us, through the imaginary film score by which we buy, eat, make love, crap, worship, and, eventually, die, not who we are but who we wish we were, who the music tells us we want to be. Even our sense of time becomes hopelessly distorted, as we float through our lives according to the dreamlike spans of musical phrases rather than the waking rhythms of clock-time. Thus our capacity to be present for our lives, for our work especially, is compromised by a time-sense that is artificially constructed along unconscious models in order to give perspective on the conscious experience of time’s passing, not to replace that experience entirely. In losing silence, and the corresponding potential for musical discernment that silence engenders, we lose ourselves, our native sense of our motion through life.

    This, perhaps, is what Ligeti had in mind in the fourth movement of his Piano Concerto, where silence eventually succumbs to madness. As in all of Ligeti’s music, spaces in the work, both textural and temporal, are gradually filled in through a dizzied layering of canon and pitch-cycle. In most of his later works, however, there is always space through which to “view” the process; the greatest premium is placed on clarity of both gesture and phrase trajectory (such is the case with the other four movements of the Concerto). In the fourth movement of this work, however, early silences of anywhere from six to eight beats are gradually compressed, until by the movement’s climax silence has been obliterated through an orchestral canon of nine parts, which eventually spins itself into nothingness. Ligeti provides no program, but the effect is that of any number of daily contemporary situations wherein sound is deafening but sense is absent.

    Sense may seem to return, of course, the moment we strap into our iPods. The “personal music system” makes available to us conditions that are near-anechoic. Crowned with headphones, we plunge into a total aural void, within which the silences of Webern, Feldman, Ligeti, Haydn, Ravi Shankar, or a templeful of Tibetan monks become real and rich for us once again. This would seem the ultimate solution to our problem, the most perfect means of apprehending music’s symbiosis with silence, until we consider two discomfiting realities: in voluntarily wearing headphones we are agreeing to the taming, by actual, physical sound, of our own interior landscapes, and we agree to go through the process alone, for no one can reach us when we are plugged in in this way. In this state, we become slaves to a jealous god who seeks to rob us of our deepest capacities for expression and relationship. For the silences of Webern, of Feldman, Haydn, Beethoven, Miles Davis, and Jimi Hendrix were conceived of as shared fields of sonic space. It is only now that they are being parceled off and sold, one by one, to individual buyers. The unanimous hush that fell with Hendrix’ final, plaintive, nightmarish phrases in the live New Year’s Eve 1970 recording of Machine Gun has now receded into the ether; the sense of wonder it inspired has long since been replaced with a knowing sense of style, an aesthetic shorthand of narrowing signification. This is fundamental to the transformation of any art, of course, and is not necessarily a terrible thing. What is pernicious here, pernicious simply because of the ease and semi-conscious assent with which it is happening, is that the common sense of “Oh my God” that greets the best work of any artist is now more likely to occur as a singular, individual event, outside the frame of any human relationship.
The hush that Hendrix inspired, the result of genuine stylistic, technical, and expressive transcendence acted out in the real presence of an engaged community, has little to no counterpart in today’s media culture. Instead we are offered unlimited choice in our listening, unlimited as to what, where, when, and in what format, but alone, all alone, outside any cultural frame other than that of the marketplace.

    Thus is the communal experience, the sharing, of music made a solitary enterprise; thus is the antidote for the poison of unwanted sound made an instrument of isolation and estrangement; thus are thoughts which once sounded only in the imagination drenched in a shower of tones; thus is music rendered impotent to the point of non-existence; thus is the colonization of silence complete.

    What then to do? On the level of culture, my hunch is that with the implosion of the CD industry and the (probably coincident) resurgence of so many different kinds of live performance, serving so many different constituencies, we will, not as one mass but rather as a linked set of smaller musical communities, find our way back to a shared musical life. Indeed this is already happening, in every genre of contemporary music, at least in cities and in virtual communities defined by specific musical tastes. What it will mean for the society as a whole, however, for the exurbs, the strip malls, the churches, even the edge of the wilderness (where canned music is increasingly common) is difficult to say; no definitive answer is on the horizon.

    One thing is certain: No luddite sensibility will save us; we’ve come too far, too fast. Even as I write this angry missive I, like every other musician I know, am striving to hear through the noise and find what is essential in it, what speaks uniquely of my and my neighborhood’s experience and to sing of that in my music. To hate the media is to hate ourselves: we all want the big medicine in the magic box to touch us, to dazzle us, to heal us. We know that ultimately it can’t, but we simply don’t know how or when to stop, we’re children eating Skittles; our mouths are full and we just want more. To pretend otherwise is, I think, poignant at best. But, at some point, stop we must. For now perhaps the best we can do as individuals is try not to be complicit in the occupation of our lives by music made noise. We don’t have to listen to music all the time; we still have some, though not much, degree of choice when it comes to the quantity and quality of sound we experience in our everyday world. Exercising that choice wisely, with an ear for the complexity of the aural environment and the need for space within it, will constitute a big first step toward righting the imbalance.

    In the meantime, the problem of silence remains.

  • Edward Tufte says:

    Long article by Robert Levine, “The Death of High Fidelity,” Rolling Stone. Includes insightful Dylan quote.

    In the Age of MP3s, Sound Quality is Worse Than Ever

    David Bendeth, a producer who works with rock bands like Hawthorne Heights and Paramore, knows that the albums he makes are often played through tiny computer speakers by fans who are busy surfing the Internet. So he’s not surprised when record labels ask the mastering engineers who work on his CDs to crank up the sound levels so high that even the soft parts sound loud.

    Over the past decade and a half, a revolution in recording technology has changed the way albums are produced, mixed and mastered — almost always for the worse. “They make it loud to get [listeners’] attention,” Bendeth says. Engineers do that by applying dynamic range compression, which reduces the difference between the loudest and softest sounds in a song. Like many of his peers, Bendeth believes that relying too much on this effect can obscure sonic detail, rob music of its emotional power and leave listeners with what engineers call ear fatigue. “I think most everything is mastered a little too loud,” Bendeth says. “The industry decided that it’s a volume contest.”
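    To make “reduces the difference between the loudest and softest sounds” concrete, here is a minimal sketch of a hard-knee compressor acting on raw sample values. The threshold and ratio are illustrative numbers only, not anything from Bendeth’s actual mastering chain:

```python
# Minimal sketch of hard-knee dynamic range compression on raw samples
# (values in -1.0..1.0). Threshold and ratio are hypothetical settings.
def compress(samples, threshold=0.5, ratio=4.0):
    out = []
    for x in samples:
        mag = abs(x)
        if mag > threshold:
            # Gain above the threshold is divided by the ratio,
            # pulling loud peaks down toward the soft parts.
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

quiet, loud = 0.1, 1.0
c_quiet, c_loud = compress([quiet, loud])
print(loud / quiet)      # dynamic range before: 10x
print(c_loud / c_quiet)  # after: ~6.25x, the range has shrunk
```

    At a 4:1 ratio, the loud peak drops from ten times the quiet level to about six times; raise the ratio (and then the overall gain) far enough and “even the soft parts sound loud.”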

    Producers and engineers call this “the loudness war,” and it has changed the way almost every new pop and rock album sounds. But volume isn’t the only issue. Computer programs like Pro Tools, which let audio engineers manipulate sound the way a word processor edits text, make musicians sound unnaturally perfect. And today’s listeners consume an increasing amount of music on MP3, which eliminates much of the data from the original CD file and can leave music sounding tinny or hollow. “With all the technical innovation, music sounds worse,” says Steely Dan’s Donald Fagen, who has made what are considered some of the best-sounding records of all time. “God is in the details. But there are no details anymore.”

    The idea that engineers make albums louder might seem strange: Isn’t volume controlled by that knob on the stereo? Yes, but every setting on that dial delivers a range of loudness, from a hushed vocal to a kick drum — and pushing sounds toward the top of that range makes music seem louder. It’s the same technique used to make television commercials stand out from shows. And it does grab listeners’ attention — but at a price. Last year, Bob Dylan told Rolling Stone that modern albums “have sound all over them. There’s no definition of nothing, no vocal, no nothing, just like — static.”

    In 2004, Jeff Buckley’s mom, Mary Guibert, listened to the original three-quarter-inch tape of her son’s recordings as she was preparing the tenth-anniversary reissue of Grace. “We were hearing instruments you’ve never heard on that album, like finger cymbals and the sound of viola strings being plucked,” she remembers. “It blew me away because it was exactly what he heard in the studio.”

    To Guibert’s disappointment, the remastered 2004 version failed to capture these details. So last year, when Guibert assembled the best-of collection So Real: Songs From Jeff Buckley, she insisted on an independent A&R consultant to oversee the reissue process and a mastering engineer who would reproduce the sound Buckley made in the studio. “You can hear the distinct instruments and the sound of the room,” she says of the new release. “Compression smudges things together.”

    Too much compression can be heard as musical clutter; on the Arctic Monkeys’ debut, the band never seems to pause to catch its breath. By maintaining constant intensity, the album flattens out the emotional peaks that usually stand out in a song. “You lose the power of the chorus, because it’s not louder than the verses,” Bendeth says. “You lose emotion.”

    The inner ear automatically compresses blasts of high volume to protect itself, so we associate compression with loudness, says Daniel Levitin, a professor of music and neuroscience at McGill University and author of “This Is Your Brain on Music: The Science of a Human Obsession.” Human brains have evolved to pay particular attention to loud noises, so compressed sounds initially seem more exciting. But the effect doesn’t last. “The excitement in music comes from variation in rhythm, timbre, pitch and loudness,” Levitin says. “If you hold one of those constant, it can seem monotonous.” After a few minutes, research shows, constant loudness grows fatiguing to the brain. Though few listeners realize this consciously, many feel an urge to skip to another song.

    “If you limit range, it’s just an assault on the body,” says Tom Coyne, a mastering engineer who has worked with Mary J. Blige and Nas. “When you’re fifteen, it’s the greatest thing — you’re being hammered. But do you want that on a whole album?”

    To an average listener, a wide dynamic range creates a sense of spaciousness and makes it easier to pick out individual instruments — as you can hear on recent albums such as Dylan’s Modern Times and Norah Jones’ Not Too Late. “When people have the courage and the vision to do a record that way, it sets them apart,” says Joe Boyd, who produced albums by Richard Thompson and R.E.M.’s Fables of the Reconstruction. “It sounds warm, it sounds three-dimensional, it sounds different. Analog sound to me is more emotionally affecting.”

    Rock and pop producers have always used compression to balance the sounds of different instruments and to make music sound more exciting, and radio stations apply compression for technical reasons. In the days of vinyl records, there was a physical limit to how high the bass levels could go before the needle skipped a groove. CDs can handle higher levels of loudness, although they, too, have a limit that engineers call “digital zero dB,” above which sounds begin to distort. Pop albums rarely got close to the zero-dB mark until the mid-1990s, when digital compressors and limiters, which cut off the peaks of sound waves, made it easier to manipulate loudness levels. Intensely compressed albums like Oasis’ 1995 (What’s the Story) Morning Glory? set a new bar for loudness; the songs were well-suited for bars, cars and other noisy environments. “In the Seventies and Eighties, you were expected to pay attention,” says Matt Serletic, the former chief executive of Virgin Records USA, who also produced albums by Matchbox Twenty and Collective Soul. “Modern music should be able to get your attention.” Adds Rob Cavallo, who produced Green Day’s American Idiot and My Chemical Romance’s The Black Parade, “It’s a style that started post-grunge, to get that intensity. The idea was to slam someone’s face against the wall. You can set your CD to stun.”

    It’s not just new music that’s too loud. Many remastered recordings suffer the same problem as engineers apply compression to bring them into line with modern tastes. The new Led Zeppelin collection, Mothership, is louder than the band’s original albums, and Bendeth, who mixed Elvis Presley’s 30 #1 Hits, says that the album was mastered too loud for his taste. “A lot of audiophiles hate that record,” he says, “but people can play it in the car and it’s competitive with the new Foo Fighters record.”

    Just as CDs supplanted vinyl and cassettes, MP3 and other digital-music formats are quickly replacing CDs as the most popular way to listen to music. That means more convenience but worse sound. To create an MP3, a computer samples the music on a CD and compresses it into a smaller file by excluding the musical information that the human ear is less likely to notice. Much of the information left out is at the very high and low ends, which is why some MP3s sound flat. Cavallo says that MP3s don’t reproduce reverb well, and the lack of high-end detail makes them sound brittle. Without enough low end, he says, “you don’t get the punch anymore. It decreases the punch of the kick drum and how the speaker gets pushed when the guitarist plays a power chord.”

    But not all digital-music files are created equal. Levitin says that most people find MP3s ripped at a rate above 224 kbps virtually indistinguishable from CDs. (iTunes sells music as either 128 or 256 kbps AAC files — AAC is slightly superior to MP3 at an equivalent bit rate. Amazon sells MP3s at 256 kbps.) Still, “it’s like going to the Louvre and instead of the Mona Lisa there’s a 10-megapixel image of it,” he says. “I always want to listen to music the way the artists wanted me to hear it. I wouldn’t look at a Kandinsky painting with sunglasses on.”
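    The bit rates quoted above translate directly into file sizes, which is the whole point of the format. A back-of-the-envelope calculation, assuming standard CD audio (44.1 kHz, 16-bit, stereo) and a hypothetical four-minute track:

```python
def file_size_mb(bitrate_kbps, seconds):
    # kilobits per second -> megabytes for the whole track
    # (using 1 MB = 10**6 bytes)
    return bitrate_kbps * 1000 * seconds / 8 / 1e6

cd_kbps = 44100 * 16 * 2 / 1000  # uncompressed CD audio: 1411.2 kbps
print(round(file_size_mb(cd_kbps, 240), 1))  # ~42.3 MB for four minutes
print(round(file_size_mb(256, 240), 1))      # ~7.7 MB at 256 kbps
```

    A 256 kbps file is less than a fifth the size of the CD original; everything discarded to get there is the “musical information that the human ear is less likely to notice.”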
    Producers also now alter the way they mix albums to compensate for the limitations of MP3 sound. “You have to be aware of how people will hear music, and pretty much everyone is listening to MP3,” says producer Butch Vig, a member of Garbage and the producer of Nirvana’s Nevermind. “Some of the effects get lost. So you sometimes have to over-exaggerate things.” Other producers believe that intensely compressed CDs make for better MP3s, since the loudness of the music will compensate for the flatness of the digital format.

    As technological shifts have changed the way sounds are recorded, they have encouraged an artificial perfection in music itself. Analog tape has been replaced in most studios by Pro Tools, making edits that once required splicing tape together easily done with the click of a mouse. Programs like Auto-Tune can make weak singers sound pitch-perfect, and Beat Detective does the same thing for wobbly drummers.

    “You can make anyone sound professional,” says Mitchell Froom, a producer who’s worked with Elvis Costello and Los Lobos, among others. “But the problem is that you have something that’s professional, but it’s not distinctive. I was talking to a session drummer, and I said, `When’s the last time you could tell who the drummer is?’ You can tell Keith Moon or John Bonham, but now they all sound the same.”

    So is music doomed to keep sounding worse? Awareness of the problem is growing. The South by Southwest music festival recently featured a panel titled “Why Does Today’s Music Sound Like Shit?” In August, a group of producers and engineers founded an organization called Turn Me Up!, which proposes to put stickers on CDs that meet high sonic standards.
    But even most CD listeners have lost interest in high-end stereos as surround-sound home theater systems have become more popular, and superior-quality disc formats like DVD-Audio and SACD flopped. Bendeth and other producers worry that young listeners have grown so used to dynamically compressed music and the thin sound of MP3s that the battle has already been lost. “CDs sound better, but no one’s buying them,” he says. “The age of the audiophile is over.”

  • ET says:

    The aesthetics of restaurant noise, from the LA Times:

    Din and bear it?

    Restaurant diners — when they can make themselves heard above the blaring music from a chef’s iPod playlist, the clatters and shouts from an open kitchen, and the roar of the cocktail drinkers in an adjacent lounge — are talking about restaurant noise these days more than the food. And the sound of that is finally reaching management ears.

    To address higher than anticipated noise levels — and diner complaints — the new Los Angeles brasserie Comme Ca has put carpets under tables, and Pizzeria Mozza has installed acoustic panels on its high walls. But don’t look for either popular restaurant to change its ethos, or radically alter those noise levels.

    Although restaurant designers, acoustics experts, industry professionals and restaurant owners agree that noise is increasingly a problem, the solution is not as simple as issuing a call for silence.

    How loud a restaurant is — or isn’t — has to do with the quality of noise as well as the quantity. The challenge is not necessarily to quiet a restaurant but to successfully manage its sound level, and, in the process, allow the ambient noise to be a complementary part of the mood communicated by the food, the chef, the location, the entire dining experience.

    In the sedate beige dining temples of decades past, this wasn’t really an issue. Chefs in white toques did their work largely behind closed doors; diners ate in respectful, if slightly bored, silence. But these days restaurateurs want “high energy,” and night-on-the-towners want a “scene.” Translation: Both want the noise and bustle that we have come to associate with good fun — and good business.

    Maybe that’s because, accurately or not, we now often associate quiet restaurants with empty restaurants.

    But the ideal noise level at a particular restaurant isn’t just about the decibel count. It’s about the combined effect of those decibels with all the other factors that contribute to how the restaurant sounds. There’s a texture to that sound, a way it operates in a given space: Call it the art of noise. (continue reading)

  • Tchad says:

    AEG recently conducted an advertising campaign that doubled as a public service bringing awareness to noise pollution.

    AEG street noise billboard

    This is a picture of the current level of street noise. It’s clever because it not only teaches the consumer about a measure the manufacturer wants to use as a differentiator, it also provides a reference point.

  • peufeu says:

    Yeah, it is always too loud.

    The musicians are deaf.
    The sound engineers are deaf.
    The public has been listening to earphones at full blast for years, so half of them are deaf, too.

    I tend to prefer outdoor concerts. The best concert hall, acoustically, is no concert hall at all: no reverberation. Of course, this only works with amplified sound, not for chamber music or classical…

    I always use ear protection at concerts. This is a matter of self-preservation. Besides, music is supposed to elicit pleasure and emotion, not to make ears bleed.

    I think another problem is the PA itself. Most of them, even at huge concerts, use rather crummy lo-fi speakers. The sound of a badly designed high-efficiency speaker is unmistakable. And it is so common that it tends to be perceived as normal, even desirable, just like the “theater sound”.

    At small concerts, like jazz gigs, I find the question to ask is: is the drum set amplified? An angry drummer hitting his set will already produce enough sound for a venue larger than a pub. And you’d need several kilowatts of amps and a truckload of speakers just to reproduce a drum set without killing the dynamics, which is the best part of a good drum sound. So, if you go to a pub or small hall to listen to a band and the drum set is amplified, this is bad news, because it’s going to go through a set of cheap PA speakers and the amps are going to clip on each drum hit.

    Then, in a typical (read: badly designed) high-efficiency speaker, you will get a 15″ midwoofer which would perform OK up to about 500 Hz, or even 1K if it’s a good-quality, expensive part. However, those are usually crossed over to a horn and compression driver at about 2K.

    Boxes (plywood usually) have lots of resonances all over the spectrum due to the fact that they must be light for easy shipping, and cheap.

    Therefore, the typical sound of an average sound-reinforcement kit comes out like this:

    – Low bass (< 50 Hz): In a large venue this needs a few trucks’ worth of bass bins. Loud bass needs brute force, so bass quality is mostly a matter of budget. If the budget is absent, low bass will be replaced by distortion.

    – Bass (up to about 100 Hz): Generally OK, but that depends on your position in the room, if it is small.

    – 200–600 Hz (male voices): Box and panel resonances will destroy the intelligibility. The singer seems to have a cold. If this is corrected with EQ, the singer will seem healthier, but it will still suck. Room acoustics will take care of destroying what remains of the lyrics.

    – 500 Hz–1K (female voices): Same effect, plus the sound of a woofer entering breakup above 1K. Screaming, screeching.

    – 1K–3K (female voices again, harmonics from lots of instruments, drums, etc.): At this stage the big woofers are experiencing terrible breakup modes and beaming. Symptoms are sibilance, a screeching sound, cymbal strikes replaced by bursts of white noise, and a generally very aggressive sound. Fortunately you can move around to avoid the beaming, unless there are big tower arrays projecting in every direction. There are also generally a few holes in this range because of anti-Larsen (feedback) EQ and the overstretched woofer meeting the tweeter. Sometimes you’ll get no upper midrange at all. Drums will sound OK, but voices won’t.

    – Upper range: Here the horns used, which are optimized for maximum output rather than maximum quality, coupled with drivers pushed to their limit, will generate all sorts of very nasty artifacts and distortion that are difficult to tolerate unless your high-frequency hearing is already gone. It sounds basically like cat scratchings.

    Therefore I always wear wax earplugs. Their absorption rises with frequency, which tends to compensate for the screaming highs of most PAs.

  • DC says:

    I’ve been playing in rock bands for 50 years (yeah, really). Don’t blame the band, don’t blame the engineer, don’t blame the venue. As in most things commercial, look to the customer. I can remember clearly the day I first realized the ‘loudness thing’. My band was playing at a bar in Alamogordo, NM called Red’s; I was 17 years old. The audience was loving it. One of the bartenders came up to the stage between songs and said, “Can’t you guys turn it down some? You can’t hear yourself think in here.” The little light bulb came on over my head. The audience came here to escape their thoughts. After all, this is a bar. If you want great performances, you don’t plan on getting so hammered you can’t appreciate them. When the music is loud you can’t hear yourself think. There is nothing but the music. For a few hours all your problems are gone.

    There’s a second issue: compression. Since the early days of recording, music has been compressed to fit onto the records or tapes it was recorded on. This compression dramatically reduces the dynamic range of the music. (Dynamic range is the difference between the softest and loudest parts of a recording; compression raises the level of the soft parts and lowers the level of the loud parts.) It is further compressed to be broadcast on radio or TV. The result is music that varies very little in volume. The album Californication by the Red Hot Chili Peppers is reputed to have a 4 dB difference between its softest and loudest parts (while music in clubs and concerts runs in excess of 100 dB). When musicians perform in clubs, the audience wants them to sound like the record. The extent to which you sound like the record is how you are judged. If you sound just like the record, you’re terrific; if you sound unlike the record, well…

    Because of this pressure to sound like the record, bands compress their sound too. In the old days we did it with sheer volume: when you play really loud, the ears of the audience supply the compression internally. It’s part of the ear’s self-defense mechanism. Today we use electronic compressors. The band I play in today has about 15 dB of dynamic range in live performance. Without compression it would be more like 40 or 50 dB. Audiences love us. They say all the time that we sound just like the record.
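
    The compression described above can be sketched as a toy downward compressor working in decibels. The threshold, ratio, and level figures below are illustrative values only, not settings or measurements from any real band:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Downward compressor: levels above the threshold are scaled by 1/ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A set swinging from a quiet verse at -45 dB to a loud chorus at 0 dB:
soft, loud = -45.0, 0.0
print(loud - soft)  # uncompressed dynamic range: 45.0 dB

squeezed = compress_db(loud) - compress_db(soft)
print(squeezed)     # compressed dynamic range: 30.0 dB
```

    Chain a second stage, or raise the ratio, and a 40-50 dB performance collapses toward the 15 dB range mentioned above.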

    Yes, the discriminating few are made to pay for the lack of discrimination by the many. So, what else is new. Bands have to make a living and the discriminating few are too few to pay the bills. Look at any other industry and you’ll see the same problem. The unsophisticated often demand things that are not in their own best interests and they certainly don’t consider the interests of their more sophisticated fellows.

    So… Don’t blame the band, don’t blame the engineer, don’t blame the venue. The audience (at least the undiscriminating majority) is getting exactly what they want.

  • Edward Tufte says:

    See Terry Teachout, “Musical Torture Instruments: Can Being Forced to Listen Really Be That Painful,” Wall Street Journal:

    What do you fear more than anything else? In “Nineteen Eighty-Four,” his 1948 novel about life under totalitarianism, George Orwell describes a mysterious torture chamber called Room 101 where prisoners are exposed to “the worst thing in the world” in order to make them talk. “It may be burial alive, or death by fire, or by drowning, or by impalement, or 50 other deaths,” the chief interrogator explains. I thought of Room 101 when I read that the U.S. military uses loud music to soften up detainees who refuse to talk about their terrorist activities. Not surprisingly, some (though by no means all) of the musicians whose recordings have been used for this purpose want to have it stopped. Reprieve, a British legal charity that defends prisoners whose human rights are allegedly being violated, has gone so far as to launch Zero dB, an initiative specifically aimed at practitioners of what it calls “music torture.”

    President Obama’s decision to close the U.S. detention center at Guantanamo Bay and conduct a review of CIA interrogation techniques will doubtless have some as-yet-unknown impact on the use of music for coercive purposes. But speaking strictly as a critic, what I find most intriguing about this practice is the list of songs and performers reportedly used to “torture” detainees that Reprieve has posted on its Web site, http://www.reprieve.org.uk. It is an eclectic assemblage of tunes ranging from AC/DC’s “Hell’s Bells,” a heavy-metal ditty that sounds as though it had been recorded by an orchestra of buzzsaws, to such seemingly innocuous fare as Don McLean’s “American Pie” and the Bee Gees’ “Stayin’ Alive.” To be sure, most of the records cited by Reprieve have one thing in common: They’re ear-burstingly loud. But the presence on the list of “I Love You,” the chirpy theme song of “Barney & Friends,” a longtime staple of children’s programming on PBS, suggests that the successful use of music as a tool of coercion entails more than mere volume.

    I’m also struck by the fact that music is, so far as I know, the only art form used for such purposes. No doubt it would be unpleasant to be locked in a windowless room that had bad paintings hung on all four walls, but I can’t envision even the most sensitive of spies blurting out the name of his controller to escape the looming presence of Andy Warhol or Thomas Kinkade. Yet I have no trouble imagining myself reduced to hysterical babbling after being forced to listen to shred, grunge and “I Love You” for 16-hour stretches, a technique said to have been employed by Guantanamo interrogators. (continue reading)

  • Byron Estep says:

    I’ve been both a performer and musical theater director for 15 years, and much of what has already been said on this topic resonates with me, so to speak. Here’s what rings true in my experience:

    1) Front-of-house sound engineers are the most likely culprits here. They are the ones who directly control the levels one hears, and they are also likely to be at least somewhat hearing impaired, especially in the higher frequencies (sorry, FOH guys). As a result, they tend to boost high-end in a way that makes loud music seem even louder (and harsher). It may sound good to them, but it doesn’t sound so great to someone who hasn’t been listening to crushingly loud music night after night for (in many cases) years.

    2) Musicians generally do NOT hear onstage what the audience hears in the house. Most musicians hear either a) an in-ear monitor mix, b) a sub-mix coming from a wedge speaker right next to or in front of them, or c) the amplifier/instrument (uh, drums) they are closest to because there are no monitors (in small venues). In all cases, this is nothing like what an audience member, who is in front of the house speakers, hears, even if the mix in the house and the monitors is identical. This is because the volume of the FOH speakers is almost always unrelated to the volume of the monitor mixes, and the FOH sound interacts with the larger room (reverb, etc.) in a way that is barely perceptible onstage. As someone who has played many a venue, from small clubs to large stadiums, I have never once gotten the sense that what I was hearing onstage was exactly like what someone standing in the house was hearing. Maybe the mix was roughly or even exactly the same, but certainly not the volume and overall sound quality.

    3) As has been pointed out repeatedly in previous posts, the acoustics of most venues are horrendous. This often exacerbates the problem by creating unpleasant and obfuscating reflections, as well as encouraging sound people to turn things up even more in an effort to overwhelm the reflected sound with “direct” signal. Anyone who has attended a concert in a dome-shaped tent or in a hall with a very high ceiling or a lot of hard walls/surfaces can relate to this.

    4) This may just be an old fogey prejudice on my part, but I get the impression that the newer, mind-numbing speaker arrays, which are standard issue these days, are harsher than the older, less “perfect” sounding speaker “stacks” of yesteryear. Something about the “accuracy” of the newer speakers detracts from their warmth and the natural smoothing of high-end that occurred in older, less efficient/accurate speakers. (Maybe someone who does FOH sound can weigh in on this.)

    5) Since most of the above problems/dynamics are unlikely to go away anytime soon, I suggest the following “solution.” Get yourself a pair of molded earplugs with a set of flat-frequency-attenuating filters you can pop in and out. The molds require a visit to an audiologist, but, once you have them, they should last years. Then, you can get two or three different filters for different amounts of attenuation and essentially “turn down” the volume at all the future concerts you attend. Of course, the advantage of this particular solution is that you don’t lose all the high end in the process (like you would with foam plugs). You will essentially hear the music as it is, only softer.

  • Edward Tufte says:

    A fascinating article, a bit off-topic, on pitch and word clarity in opera. Findings need replication.

    The Wagnerian Method

    When physicist John Smith spent the night in his garden with the score to Gotterdammerung, the final opera in Richard Wagner’s four-part, 15-hour epic, Der Ring des Nibelungen, he wasn’t interested in its account of the apocalyptic struggle of Norse gods for control of the world. Smith was concerned with a struggle of a different sort–one between the opera’s words and music that might elucidate the controversial German composer’s peculiar vision for the future of art.

    On Smith’s mind was an age-old difficulty all soprano singers face: They mispronounce lyrics when singing powerfully in the top half of their range. This “soprano problem” was formally recognized at least as far back as 1843, when French composer Hector Berlioz wrote in his Treatise on Instrumentation that “[sopranos] should not be required to sing many words on high phrases, since this makes the pronunciation of syllables very difficult if not impossible.” It does not appear, however, that Berlioz–or anyone else–ever understood why this problem occurred.

    In 2004, Smith and his colleagues Joe Wolfe and Elodie Joliveau at the University of New South Wales published a study in the Journal of the Acoustical Society of America that revealed the physiological cause of the soprano problem for the first time. They sent an acoustic signal through the vocal tracts of nine sopranos and used a microphone to measure how the signal changed when the sopranos sang vowel sounds at various pitches. They found that when a soprano sings at high pitches, she adjusts her vocal tract to make her voice resonate. In effect, she “tunes” the resonance frequency of her vocal tract to match the frequency of the pitch at which she is singing. This vocal-tract tuning, which gives a soprano’s voice enough power to fill an opera house, is what makes certain words at high pitches difficult for the audience to understand. (It is joked by singers that Wagner’s character of Siegfried in Der Ring des Nibelungen ought to have been called Sahgfried, as his name is sometimes pronounced that way by sopranos looking to get the most volume out of their voices.) Jane Eaglen, a critically acclaimed soprano who has performed Wagner’s works in opera houses worldwide, explains that sopranos must try to find a balance between power and clarity. “It’s really about how you modify the vowels at the top of the voice so that the words are still understandable but so that you are also making the best sound that you can make,” she says. (continue reading)

  • Mike H. says:

    I think the loudness at concerts is at least in part because the sound-people are trying to duplicate (unsuccessfully) the visceral effect produced by modern pop/rock recordings.

    Modern recording, mixing, and mastering technology has given engineers the ability to create the illusion that what you are hearing is very loud. It involves more than just dynamic range compression. Computer analysis tools ensure that every millisecond of a recording takes maximum advantage of the available frequency spectrum (and the stereo spectrum) in order to pack the maximum possible information into the 96 dB dynamic range of a CD.

    In fact, a common technique is to allow the mix to actually distort(!) very slightly at the peaks, further increasing the perception of loudness. As has been mentioned, the ear itself distorts at high levels. If you can mimic that effect digitally, the listener gets the impression of a wall of sound that is very loud, even if the actual level is very reasonable.

    This kind of painstaking processing isn’t usually present in live music. So a live performance at 110 dB may sound dull compared to the CD version of the same song you listened to at 90 dB in the car on the way there.
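
    The 96 dB figure is the theoretical dynamic range of 16-bit linear PCM: each bit contributes about 6.02 dB (20·log10 2). A quick check:

```python
import math

def pcm_dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20 * log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(round(pcm_dynamic_range_db(16), 1))  # CD audio: 96.3 dB
print(round(pcm_dynamic_range_db(24), 1))  # 24-bit studio masters: 144.5 dB
```

    The irony, as noted elsewhere in this thread, is that heavily mastered releases use only a few dB of that available range.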

  • Jon Gross says:

    Indeed, a “loud” (i.e., distorted) sound can be low in volume at the same time.

    Scan through any guitar magazine, and you will find at least a half-dozen examples of “distortion boxes” for sale.

    My bass amplifier has a convenient “grunge” setting that adds distortion without additional hardware, and without tinnitus-inducing volume. Every now and then, playing with a distorted tone really satisfies. It’s the aural version of Jackson Pollock flinging paint at canvas…

    Wikipedia has a lengthy article on distortion as it applies to music.

  • John Fennessy says:

    It’s come to this…
    A few years ago a band, whose name I don’t recall, hired a well-known and respected sound designer (Broadway, 30 shows, concerts, etc.; a theatrical household name) to do a site survey of a new concert hall in North Carolina. They wanted to know how much extra sound equipment they would need. His recommendation: The hall is acoustically perfect – bring your instruments and nothing else.

    The result: They refused to pay him.

  • Edward Tufte says:

    Metallica drummer struggles with tinnitus, from CNN:

    Albany, New York (CNN) — The noise in the concert hall is loud, throbbing. The crowd chants, “Metallica … Metallica!”

    Lars Ulrich holds a drumstick high above his head. For a split second, the frenzy quiets to a dull roar. Ulrich brings his drumstick down with a crash and is swallowed by astonishing noise — wailing guitars, thumping bass and his own furious banging on the drums.

    “I’ve been playing loud rock music for the better part of 35 years,” said Ulrich, 46, drummer for the heavy metal band Metallica. “I never used to play with any kind of protection.”
    Early in his career, without protection for his ears, the loud noise began to follow Ulrich off-stage.

    “It’s this constant ringing in the ears,” Ulrich said. “It never sort of goes away. It never just stops.” (continue reading)

  • Dominic Brown says:

    NPR visual history of loudness

    A decent explanation of the problem, and a decent graphical presentation — better printed than viewed on-screen, though.

  • Edward Tufte says:

    From The New York Times, a report on a change of venue by the Allman Brothers Band:

    The Beacon is Booked, So Allmans Will Move

    On Tuesday the Allman band announced that when it came to New York in March, it would not appear at the Beacon, where it has played 190 shows over the past 20 years. . . the Beacon had been booked for a new show by Cirque du Soleil, the Montreal circus troupe.

    In the meantime, the Allman band and its representatives contemplated several other New York spaces for its residency. But the Hammerstein and Roseland Ballrooms and the Nokia Theater were deemed too small, as were several Broadway houses, which do not typically host rowdy Southern rock bands and their fans. Radio City Music Hall, another MSG property, seemed better suited to other acts.

    “Forget about it,” Mr. Allman said. “That’s Hannah Montana’s place. I think she’s great, by the way.” So, “by process of elimination,” Mr. Holman said, the group came to the United Palace, a 3,293-seat auditorium, where it will play eight shows from March 11 through 20.

    Though the space has been criticized by concertgoers for its echo-filled acoustics, Mr. Allman said, “With our sound system, we’ll make our own damn acoustics.”

  • Sean Mullen says:

    The concert sound of the Grateful Dead (ca. 1993), an excerpt; FD is Frank Doris, the interviewer, DH is Dan Healy, the live sound engineer.

    DH: We design each setup for the particular hall. We use AutoCAD (computer-aided design), an architectural drafting program. We scan in the dimensions of the halls – literally, the architectural drawing – so we can set up the sound system “in the computer” before we set it up in real life. The software can also run tests: dispersion, amplitude and frequency characteristics, standing-wave characteristics.

    FD: The reflectivity, reverberation time and so on. . .

    DH: Right. What we really use it for is [to determine the] 3dB down points and so on, so we know how to overlap the speakers. What we call a 3dB down point is really a figure for the worst case; we allow ourselves a plus or minus 1-1/2 dB variance in the SPL [sound pressure level] at any point in the room.

    That’s [determined] before we even leave home. When we get to the hall, it’s up to Uwe to see that [everything is] interpreted and installed right. The rigging is based on the computer-determined points. Then I come in and determine how many speakers to put up, in what arrays, how much curve and how much tilt and so on in order to get everything to converge properly with smooth coverage throughout the room.

    FD: What are the tolerances involved?

    DH: When it’s all working the way it’s supposed to, it’s within plus or minus an inch or two.

    FD: How do you do your measurements?

    DH: I do scale drawings before the fact. When we get to the hall, everything references off a stake that’s in the front center lip of the stage. Everything is measured from that, so the stage is vectored out on angles from that [which is] known and predetermined. They literally take a transit and set it up just as if you were surveying, and you take that pole and go [to the location points]. But you have to use surveying tools. You can’t use a tape measure — you have to be serious about it!

    As I recall, they also used a weather station at the soundboard for their outdoor shows; when the temperature or humidity changed, the sound was affected and required adjustment.
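
    The weather station makes physical sense: the speed of sound in air rises with temperature, so the time alignment between the stage and distant delay speakers drifts as conditions change. A back-of-envelope sketch using the standard linear approximation for dry air; the distance and temperatures are hypothetical, and the Dead's actual correction procedure is not described in the interview:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air, m/s (linear fit: 331.3 + 0.606*T)."""
    return 331.3 + 0.606 * temp_c

# A delay tower 60 m from the stage, time-aligned at a warm afternoon soundcheck:
distance_m = 60.0
delay_warm_ms = distance_m / speed_of_sound(30.0) * 1000  # arrival time at 30 C
delay_cool_ms = distance_m / speed_of_sound(15.0) * 1000  # arrival time at 15 C

# By a cool evening show the alignment has drifted by several milliseconds,
# enough to audibly smear the sound between stage and tower.
print(round(delay_cool_ms - delay_warm_ms, 1))  # prints 4.6
```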

  • Martin Brooks says:

    I haven’t been to a concert in years in which the levels weren’t absurdly loud and frequently (in my opinion) beyond the threshold of pain.

    This has been going on for so long that I don’t think there are any engineers left who understand good live sound and as others have stated, they all have substantial hearing loss anyway. This is as true for Broadway shows as it is for rock concerts (The Lion King being an exception — I found the sound levels quite reasonable at that show.)

    I do wear high quality, custom made ear plugs at shows, but in spite of the hype, they still limit high frequencies more than other frequencies, so the music loses its life. And if the levels are high enough, they won’t necessarily protect you. Some years ago, I attended a concert and used the plugs, but the levels were incredibly loud and I had a cold. When I left the concert, my ears were ringing, and I experienced tinnitus for a year. It probably also increased the rate at which I’m losing the ability to hear high frequencies. (Young kids can hear to 20 kHz, some even to 22 kHz, but most adults are in the 15-18 kHz range, and by the time you’re 50 you can easily be down to 12-13 kHz, especially if you listen to loud music, work in a loud factory without protection, or ride the NYC subway. Those with damaged hearing frequently can’t hear above 7-8 kHz. Aside from high-frequency loss, you lose threshold: the lowest level at which you can begin to hear a sound at a particular frequency. Young kids use ring tones pitched above the frequencies most adults can hear, so you won’t know when their phones ring. Some shopping centers do the opposite: they generate annoying high-frequency tones so young kids won’t hang around.)

    There was a time when most sound engineers would limit loudness at the point where the signal became audibly distorted. Since the amplifiers of the time weren’t all that powerful, that capped the levels. Most bands now have more power on stage just for stage monitors than was once used to amplify the entire auditorium. And distortion no longer stops engineers from constantly raising the levels.

    Why the bands and their engineers don’t understand that the key to excitement is dynamic range, not constant loudness, I’ll never know. A thunderclap is exciting when it’s surrounded by silence, not when surrounded by other thunderclaps. Inevitably, the levels rise during a concert, because listening fatigue dulls the engineer’s ears and everything starts to seem quieter than it is. What they do in audio is the equivalent of boldfacing, highlighting, underlining, and italicizing every field and field label in a chart (and entering the data in all CAPS).

    Part of this is the usual ego problem. The bands, even the good ones, feel that they won’t have impact without loudness. And of course, the band isn’t hearing on stage what the audience is hearing anyway. And the house engineer wants to feel like he/she (but usually he) is having an impact on the show, so they are constantly playing with the levels. If something is buried in the mix, they’ll never bring down the other instruments; they’ll always raise the level of the signal that’s buried.

    Dynamic range is also an issue for recording. CDs promised a dynamic range of 96 dB, but since every band wants their recording to be the loudest, there is actually less dynamic range on most CDs than there was on most vinyl recordings. That’s one of the reasons why many contend that vinyl sounds better. It has more to do with dynamic range than with analog vs. digital recording.

    The one thing that surprises me is that there hasn’t yet been a class action lawsuit by employees of such venues who now have severe hearing loss. Levels in most concerts exceed OSHA regulations for factories and other workplaces.

  • Andrew Howe says:

    Perceived volume, at least up to the point where noticeable distortion happens in your eardrum, is extremely subjective. Our sense of volume seems to be based entirely on recency bias: we only know whether something is ‘louder’ or ‘quieter’. Once a certain volume threshold is passed, however, all engineers’ mixes (no matter how good) sound compressed all the way to mashed potatoes, completely losing their nuance. This subjectivity is so remarkable that often, if a musician asks me to “turn it up” when it seems unnecessary, I pantomime an adjustment at the board without actually making any changes, only to hear “that’s much better” after they give it a try.

    Having mixed live events on everything from glorified home stereos to 30,000 Watt touring systems, I have found that often the most demanding audiences are found in houses of worship. The hand of the operator becomes extremely clear in this environment, as there is a constant battle between improving how audible music and spoken words are without violating the relationship between the people in the room.

    I recently spent a few Sunday mornings at the board for a large contemporary (folk-meets-rock) service in a traditional high-steeple stone church. They had a new digital mixing console, and the musicians handed me a decibel meter and told me to push the music to 90 dB as read at the back of the sanctuary, because otherwise people would miss the music.

    I made good use of the musicians’ practice and soundcheck time, and with proper balancing of all the instruments I never felt the need to cross the 75 dB mark. I didn’t get a single complaint from musicians or lay-people.

    Perhaps live sound engineers who push things too far should be sentenced to a few weeks ‘repenting’ at a mixing console in some house of worship.

    One final note: there ARE high-end earplug solutions for those who want to keep experiencing live performances while preserving their hearing and preventing headaches. I believe the Hearos brand sold a “Hi-Fi” product for around $20 that used varying-sized silicone baffles to provide 20 dB or so of noise reduction with a more balanced frequency response. These products make the sound quieter but more or less retain the frequency balance of the concert. For several hundred dollars, an audiologist can make custom molded plugs with favorable frequency characteristics as well.
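
    For scale, decibel attenuation maps onto sound-pressure ratios at 20 dB per factor of ten, so flat 20 dB filters pass about a tenth of the pressure. (The 33 dB figure below is a typical foam-plug noise reduction rating, used only for comparison.)

```python
def pressure_ratio_after(db_attenuation):
    """Fraction of sound pressure remaining after attenuating by the given dB."""
    return 10 ** (-db_attenuation / 20)

print(pressure_ratio_after(20))            # flat 20 dB plugs: 0.1 of the pressure
print(round(pressure_ratio_after(33), 3))  # typical foam plugs: 0.022
```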

    It would be much better if more sound engineers thought critically about making real use of the incredible dynamic range our ears can handle and worked to improve their skills. There are a few lines of defense, however, against enjoyment-destroying dB abuse.

  • Joseph Hickman says:

    This is an interesting perspective from a punk rock pioneer, via NPR:

    The Evens: The Power Of Turning Down The Volume

    MacKaye: “It’s crazy the amount of money it costs to put a show on, so if you’re trying to put a show on for a low ticket price, you’re up against it. So we discussed finding a way to split off from that system, and one way to do it was just to turn down the volume. Turning down allowed us to play basically anywhere. … It’s so great to play in a barn, or a museum, or an art gallery, or a theater lobby. Quite often, when you put music into an unusual or untraditional space, in many cases, the music really steps up. It’s not being filtered through the venue experience as much.”
