
Wednesday, March 28, 2018

About #Popmusic And Rhythm - History, Examples, Techniques

Pop music is a genre of popular music that originated in its modern form in the United States and United Kingdom during the mid-1950s. The terms "popular music" and "pop music" are often used interchangeably, although the former describes all music that is popular and includes many different styles. "Pop" and "rock" were roughly synonymous terms until the late 1960s, when they became increasingly differentiated from each other. Pop music, as a genre, is seen as existing and developing separately.

Although much of the music that appears on record charts is seen as pop music, the genre is distinguished from chart music. Early pop music drew on the sentimental ballad for its form, gained its use of vocal harmonies from gospel and soul music, instrumentation from jazz and rock music, orchestration from classical music, tempo from dance music, backing from electronic music, rhythmic elements from hip-hop music, and spoken passages from rap.

Pop music is eclectic, and often borrows elements from other styles such as urban, dance, rock, Latin, and country; nonetheless, there are core elements that define pop music. The beat and the melodies tend to be simple, with limited harmonic accompaniment. Identifying factors include generally short to medium-length songs written in a basic format (often the verse-chorus structure), as well as common use of repeated choruses, melodic tunes, and hooks. David Hatch and Stephen Millward define pop music as "a body of music which is distinguishable from popular, jazz, and folk musics". During the mid-1960s, pop music made repeated forays into new sounds, styles, and techniques that inspired public discourse among its listeners.

The music charts contain songs from a variety of sources, including classical, jazz, rock, and novelty songs. According to Pete Seeger, pop music is "professional music which draws upon both folk music and fine arts music". Although pop music is often identified with the singles charts, it is not the sum of all chart music. According to The New Grove Dictionary of Music and Musicians, popular music is defined as "the music since industrialization in the 1800s that is most in line with the tastes and interests of the urban middle class." The term "pop song" was first recorded as being used in 1926, in the sense of a piece of music "having popular appeal". From the mid-1960s, it was common for pop producers, songwriters, and engineers to freely experiment with musical form, orchestration, unnatural reverb, and other sound effects. Thus "pop music" may be used to describe a distinct genre, designed to appeal to all, often characterized as "instant singles-based music aimed at teenagers" in contrast to rock music as "album-based music for adults".

Pop music continuously evolves along with the term's definition. Hatch and Millward indicate that many events in the history of recording in the 1920s can be seen as the birth of the modern pop music industry, including in country, blues and hillbilly music. The main medium of pop music is the song, often between two and a half and three and a half minutes in length, generally marked by a consistent and noticeable rhythmic element, a mainstream style and a simple traditional structure. Common variants include the verse-chorus form and the thirty-two-bar form, with a focus on melodies and catchy hooks, and a chorus that contrasts melodically, rhythmically and harmonically with the verse. The lyrics of modern pop songs typically focus on simple themes – often love and romantic relationships – although there are notable exceptions.

Harmony and chord progressions in pop music are often "that of classical European tonality, only more simple-minded." Clichés include the barbershop quartet-style harmony (i.e. ii – V – I) and blues scale-influenced harmony. There was a lessening of the influence of traditional views of the circle of fifths between the mid-1950s and the late 1970s, including less predominance for the dominant function.

Throughout its development, pop music has absorbed influences from other genres of popular music. According to Grove Music Online, "Western-derived pop styles, whether coexisting with or marginalizing distinctively local genres, have spread throughout the world and have come to constitute stylistic common denominators in global commercial music cultures". In the 1960s, the majority of mainstream pop music fell into two categories: guitar, drum and bass groups, or singers backed by a traditional orchestra. Some of the best-known examples are Phil Spector's Wall of Sound and Joe Meek's use of homemade electronic sound effects for acts like the Tornados. At the same time, pop music on radio and in both American and British film moved away from refined Tin Pan Alley to more eccentric songwriting and incorporated reverb-drenched rock guitar, symphonic strings, and horns played by groups of properly arranged and rehearsed studio musicians.

The word "progressive" was frequently used, and it was thought that every song and single was to be a "progression" from the last. Music critic Simon Reynolds writes that beginning in 1967, a divide would exist between "progressive" pop and "mass/chart" pop, a separation which was "also, broadly, one between boys and girls, middle-class and working-class." Before the progressive pop of the late 1960s, performers were typically unable to decide on the artistic content of their music. Assisted by the mid-1960s economic boom, record labels began investing in artists, giving them freedom to experiment, and offering them limited control over their content and marketing. This situation fell into disuse after the late 1970s and would not reemerge until the rise of Internet stars. Indie pop, which developed in the late 1970s, marked another departure from the glamour of contemporary pop music, with guitar bands formed on the then-novel premise that one could record and release their own music without having to procure a record contract from a major label. In 2014, pop music worldwide was permeated by electronic dance music.

Pop music has been dominated by the American and (from the mid-1960s) British music industries, whose influence has made pop music something of an international monoculture, but most regions and countries have their own form of pop music, sometimes producing local versions of wider trends, and lending them local characteristics. Some of these trends (for example Europop) have had a significant impact on the development of the genre.

Some non-Western countries, such as Japan, have developed a thriving pop music industry, most of which is devoted to Western-style pop. Japan has for several years produced a greater quantity of music than anywhere except the USA. The spread of Western-style pop music has been interpreted variously as representing processes of Americanization, homogenization, modernization, creative appropriation, cultural imperialism, or a more general process of globalization.

Take the ubiquitous "Let It Go" from the Disney movie Frozen (I know, I can hear the groans already – bear with me!). In its original form, this piece is in F minor and features some quite tricky syncopation if the melody is played with the full accompaniment. You can take a look at the original version here on MusicNotes.

Given they’ve probably listened to it a million times, this should come after a bit of trial and error. While they might not have actually “read” this from the music, you can now make a connection back to what that rhythm looks like on the music and how it feels and sounds to play. In this way, we’re building on students’ pattern recognition skills, which are so vital for fluid music reading.

The great thing now is that students can play this chord progression pretty easily keeping a steady pace, while you can play the melody over the top.

The reality is that when you distil this song down to its underlying chord progression, you’ll quickly realise that the chorus is just a I V vi IV progression. In C, this would be: C G Am F.
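If you want to transpose that loop into another key for a student, a quick sketch (the helper name and sharp-only note spelling are my own simplifications, and flat-key spellings are ignored):

```python
# A quick sketch for spelling out the I-V-vi-IV loop in any major key.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def progression_I_V_vi_IV(key):
    root = NOTES.index(key)
    scale = [(root + step) % 12 for step in MAJOR_STEPS]
    # 0-based scale degrees: I=0, IV=3, V=4, vi=5 (vi is a minor chord)
    return [NOTES[scale[0]], NOTES[scale[4]], NOTES[scale[5]] + "m", NOTES[scale[3]]]

print(progression_I_V_vi_IV("C"))  # ['C', 'G', 'Am', 'F']
```

The same call with "G" gives G D Em C, which is handy when a student finds the original key awkward under the fingers.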

You can then swap roles and they can try the first few melody notes (in time, of course), while you play the chords.

For those who are interested in playing familiar pop music like this, I often just have them figure it out by ear anyway, so that eliminates the discrepancy between how they think it should sound and how it is notated.

Remember, students are already reading music in many other aspects of their piano training. If playing pop involves using their ears more than reading, in my opinion, this is a good thing.

There arose the idea that rhythm, like a language, needs to be studied through prosody and rhythmic patterns. I personally use the melodica for this, with much success.

About the Author: Best-known for his blogging and teaching, Tim is also a well-respected presenter, performer and accompanist based in Melbourne, Australia. You can check him out on Google+, Facebook and Twitter.

Learn How To Incorporate Latin Rhythms In Pop Music – DRUM .... (2018). Retrieved on March 21, 2018, from http://drummagazine.com/how-to-incorporate-latin-rhythms/.

Pop music. (2018). Retrieved on March 21, 2018, from https://en.wikipedia.org/wiki/Pop_music.

Teaching Rhythm in Pop Music. (2018). Retrieved on March 21, 2018, from https://timtopham.com/teaching-rhythm-in-pop-music/.


Pop Music and the Loudness War
The loudness war (or loudness race) refers to the trend of increasing audio levels in recorded music which many critics believe reduces sound quality and listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7" singles. The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and Compact Cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.

With the advent of the Compact Disc (CD), music is encoded to a digital format with a clearly defined maximum peak amplitude. Once the maximum amplitude of a CD is reached, loudness can be increased still further through signal processing techniques such as dynamic range compression and equalization. Engineers can apply an increasingly high ratio of compression to a recording until it more frequently peaks at the maximum amplitude. In extreme cases, efforts to increase loudness can result in clipping and other audible distortion. Modern recordings that use extreme dynamic range compression and other measures to increase loudness therefore can sacrifice sound quality to loudness. The competitive escalation of loudness has led music fans and members of the musical press to refer to the affected albums as "victims of the loudness war".

Similarly, starting in the 1950s, producers would request louder 7-inch singles so that songs would stand out when auditioned by program directors for radio stations. In particular, many Motown records pushed the limits of how loud records could be made; according to one of their engineers, they were "notorious for cutting some of the hottest 45s in the industry".

Because of the limitations of the vinyl format, the ability to manipulate loudness was also limited. Attempts to achieve extreme loudness could render the medium unplayable. Digital media such as CDs remove these restrictions and as a result, increasing loudness levels have been a more severe issue in the CD era. Modern computer-based digital audio effects processing allows mastering engineers to have greater direct control over the loudness of a song: for example, a "brick wall" limiter can look ahead at an upcoming signal to limit its level.
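The look-ahead idea can be sketched in a few lines. This is a toy model under my own naming, not production code: a real limiter would also smooth the gain curve over time to avoid distortion.

```python
import numpy as np

def lookahead_limit(x, ceiling=0.5, lookahead=32):
    # Toy look-ahead brickwall limiter: the gain applied at sample n is
    # derived from the loudest sample in the *next* `lookahead` samples,
    # so gain reduction begins before a peak arrives rather than after.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for n in range(len(x)):
        upcoming = np.max(np.abs(x[n:n + lookahead]))
        gain = min(1.0, ceiling / upcoming) if upcoming > 0 else 1.0
        out[n] = x[n] * gain
    return out

# A burst that would overshoot the ceiling is caught in advance:
sig = np.concatenate([0.2 * np.ones(100), 1.0 * np.ones(10), 0.2 * np.ones(100)])
limited = lookahead_limit(sig)
print(np.max(np.abs(limited)))  # never exceeds the 0.5 ceiling
```

Because the gain is chosen from the upcoming window, the quiet material well ahead of the burst passes through unchanged, while the samples just before the burst are already being pulled down.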

Since CDs were not the primary medium for popular music until the late 1980s, there was little motivation for competitive loudness practices then. The common practice of mastering music for CD involved matching the highest peak of a recording at, or close to, digital full scale, and referring to digital levels along the lines of more familiar analog VU meters. When using VU meters, a certain point (usually −14 dB below the disc's maximum amplitude) was used in the same way as the saturation point (signified as 0 dB) of analog recording, with several dB of the CD's recording level reserved for amplitude exceeding the saturation point (often referred to as the "red zone", signified by a red bar in the meter display), because digital media cannot exceed 0 decibels relative to full scale (dBFS). The average level of the average rock song during most of the decade was around −16.8 dBFS.

The concept of making music releases "hotter" began to appeal to people within the industry, in part because of how noticeably louder some releases had become and also in part because the industry believed that customers preferred louder-sounding CDs, even though that may not have been true. Engineers, musicians, and labels each developed their own ideas of how CDs could be made louder. In 1994, the digital brickwall limiter with look-ahead (to pull down peak levels before they happened) was first mass-produced. While the increase in CD loudness was gradual throughout the 1990s, some opted to push the format to the limit, such as on Oasis's widely popular album (What's the Story) Morning Glory?, which averaged −8 dBFS on many of its tracks—a rare occurrence, especially in the year it was released.

In 2008, loud mastering practices received mainstream media attention with the release of Metallica's Death Magnetic album. The CD version of the album has a high average loudness that pushes peaks beyond the point of digital clipping, causing distortion. This was reported by customers and music industry professionals, and covered in multiple international publications, including Rolling Stone, The Wall Street Journal, BBC Radio, Wired, and The Guardian. Ted Jensen, a mastering engineer involved in the Death Magnetic recordings, criticized the approach employed during the production process. A version of the album without dynamic range compression was included in the downloadable content for the video game Guitar Hero III.

In late 2008, mastering engineer Bob Ludwig offered three versions of the Guns N' Roses album Chinese Democracy for approval to co-producers Axl Rose and Caram Costanzo. They selected the one with the least compression. Ludwig wrote, "I was floored when I heard they decided to go with my full dynamics version and the loudness-for-loudness-sake versions be damned." Ludwig said the "fan and press backlash against the recent heavily compressed recordings finally set the context for someone to take a stand and return to putting music and dynamics above sheer level."

In March 2010, mastering engineer Ian Shepherd organised the first Dynamic Range Day, a day of online activity intended to raise awareness of the issue and promote the idea that "Dynamic music sounds better". The day was a success and its follow-ups in the following years have built on this, gaining industry support from companies like SSL, Bowers & Wilkins, TC Electronic and Shure as well as engineers like Bob Ludwig, Guy Massey and Steve Lillywhite. Shepherd cites research showing there is no connection between sales and "loudness", and that people prefer more dynamic music. He also argues that file-based loudness normalization will eventually render the war irrelevant.

One of the biggest albums of 2013 was Daft Punk's Random Access Memories, with many reviews commenting on the album's great sound. Mixing engineer Mick Guzauski deliberately chose to use less compression on the project, commenting "We never tried to make it loud and I think it sounds better for it." In January 2014 the album won five Grammy Awards, including Best Engineered Album (Non-Classical).

In October 2013, Bob Katz announced on his website that "The last battle of the loudness war has been won", claiming that Apple's mandatory use of Sound Check for iTunes Radio meant that "The way to turn the loudness race around right now, is for every producer and mastering engineer to ask their clients if they have heard iTunes Radio. When they respond in the affirmative, the engineer/producer tells them they need to turn down the level of their song(s) to the standard level or iTunes Radio will do it for them. He or she should also explain that overcompressed material sounds 'wimpy' and 'small' in comparison to more open material on iTunes Radio." He believes this will eventually result in producers and engineers making more dynamic masters to take account of this factor. His point of view has been widely reported and discussed.

Broadcasting is also a participant in the loudness war. Competition for listeners between radio stations has contributed to a loudness "arms race". Loudness jumps between broadcast channels, between programmes within the same channel, and between programmes and intervening adverts are a frequent source of audience complaints. The European Broadcasting Union is addressing this issue in the EBU PLOUD Group, which includes over 230 audio professionals, many from broadcasters and equipment manufacturers.

This practice (excessive compression, dynamic range reduction, loudness level enhancement, etc.) has been condemned by several recording industry professionals including Alan Parsons, Geoff Emerick (noted for his work with the Beatles from Revolver to Abbey Road), and mastering engineers Doug Sax, Steve Hoffman, and many others, including music audiophiles, hi-fi enthusiasts, and fans. Musician Bob Dylan has also condemned the practice, saying: "You listen to these modern records, they're atrocious, they have sound all over them. There's no definition of nothing, no vocal, no nothing, just like—static." The compact disc editions of Dylan's more recent albums Modern Times and Together Through Life are examples of heavy dynamic range compression, although Dylan himself might not have been responsible for it.

When music is broadcast over radio, the station applies its own signal processing, further reducing the dynamic range of the material to closely match levels of absolute amplitude, regardless of the original recording's loudness. This technique is also used as a security feature to prevent quiet passages or fade-outs from becoming dead air.

Opponents have called for immediate changes in the music industry regarding the level of loudness. In August 2006, the vice-president of A&R for One Haven Music, a Sony Music company, in an open letter decrying the loudness war, claimed that mastering engineers are being forced against their will or are preemptively making releases louder to get the attention of industry heads. Some bands are being petitioned by the public to re-release their music with less distortion.

The nonprofit organization Turn Me Up! was created by Charles Dye, John Ralston, and Allen Wagner to certify albums that contain a suitable level of dynamic range and encourage the sale of quieter records by placing a "Turn Me Up!" sticker on albums that have a larger dynamic range. The group has not yet arrived at an objective method for determining what will be certified.

In 2007, Suhas Sreedhar published an article about the loudness war in the engineering magazine IEEE Spectrum. Sreedhar said that the greater possible dynamic range of CDs was being set aside in favor of maximizing loudness using digital technology. He added that overcompressed modern music was fatiguing and did not allow the music to "breathe".

But is Dylan's remark just a replay of the quarrel between the ancients and the moderns? It would not be the first time the old guard despises what the new generation does. True, many sound engineers have joined the cause of "more dynamic” music. But are they speaking out for what is objectively better — or are they simply voicing their preference for a particular style of sound? My research aims to answer this question. We'll find out whether recent music is really louder, and whether it's really less dynamic. We'll also consider the hypothesis that loudness may be a stylistic marker for specific recent music styles, instead of being a bad habit only motivated by despicable commercial reasons. Finally, we'll take a close look at Metallica's notorious Death Magnetic, and see why so many people claim it doesn't sound good.

Is recent music really louder? Yes it is, and there is no doubt about that. Let's take a large number of best-selling and/or very well received 'pop' music pieces recorded and produced between 1969 and 2010, normalise them so they peak at 0dB full scale, and measure their RMS value. Then let's sort all the values according to the year of release of the track to which they correspond. The first diagram, left, shows the experiment's outcome, and it is indeed spectacular! The red line shows the RMS median value for each year, and the rectangles give an indication of the distribution: the darker the rectangle, the more pieces showing such a level. There is, without question, a constant growth in average levels between 1982 and 2005, and today's records are roughly 5dB louder than they were in the '70s.
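The measurement itself is easy to reproduce. A minimal sketch with NumPy (just the peak normalisation and RMS step, not the corpus study):

```python
import numpy as np

def rms_dbfs_after_peak_norm(x):
    # Normalise so the peak sits at 0dB full scale, then measure RMS in dBFS,
    # mirroring the experiment described in the text.
    x = np.asarray(x, dtype=float)
    x = x / np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms)

# Sanity checks: a full-scale square wave has RMS equal to its peak (0 dBFS),
# while a sine's RMS sits 1/sqrt(2) below its peak (about -3 dBFS).
square = np.array([1.0, -1.0] * 100)
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
print(round(rms_dbfs_after_peak_norm(square), 1))  # 0.0
print(round(rms_dbfs_after_peak_norm(sine), 1))    # -3.0
```

Run over a corpus of normalised tracks, the yearly median of this single number is what the first diagram plots.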

Admittedly, measuring the signal's RMS value only gives information about the 'electrical' or 'physical' content of the audio file, not a measure of loudness as we perceive it. For that, we evaluate the 'integrated loudness', as defined by the EBU 3341 normative recommendation. As seen on the second diagram to the left, in the context of our corpus of songs such a measure is highly correlated to the signal's RMS value, and the two graphs are very similar to each other. This second set of results confirms the first one.

Let's repeat the experiment using other criteria. For instance, one criterion commonly used to describe the dynamic behaviour of a piece of recorded music is the 'crest' factor. Put simply, the crest factor is the difference between the RMS level and the peak level over the course of the song. Intuitively, it measures the amplitude of the emerging 'peaks' in the audio stream. It's considered a good marker of the amount of dynamic compression that was applied to the music: more compression generally means a lower crest factor. Some professionals consider good handling of the crest factor as the cornerstone of successful mastering. Also, still generally speaking, the lower the crest factor, the louder the music.
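In code, the crest factor is one line once peak and RMS are in hand. A small sketch, with hard clipping standing in as a crude stand-in for heavy limiting:

```python
import numpy as np

def crest_factor_db(x):
    # Crest factor: peak level minus RMS level, in dB.
    x = np.asarray(x, dtype=float)
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
# A pure sine has a crest factor of ~3 dB; hard-clipping it towards a
# square wave (roughly what extreme limiting approaches) drives it to 0 dB.
print(round(crest_factor_db(sine), 1))                       # 3.0
print(round(crest_factor_db(np.clip(sine * 10, -1, 1)), 1))  # close to 0
```

The second figure drops because clipping flattens the peaks while barely touching the RMS, which is exactly the "lower crest factor means louder music" relationship described above.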

The third diagram on the first page shows the evolution of a measure that's analogous to the crest factor. Based on the same 4500 tracks, this simplified crest factor is shown falling by 3dB since the beginning of the '80s, reinforcing the suspicion that the increase in loudness we've been witnessing since the '90s was brought by dynamic compression. You'll see that the evolution of the crest factor can be divided into three stages. First, from 1969 to 1980, the crest factor increases, probably due to the improvement of studio gear in terms of signal-to-noise ratio and dynamic transparency. From 1980 to 1990, the crest factor remains relatively stable. Then, from 1990 to 2010 — the era of the loudness war — the crest factor is dramatically reduced.

Finally, another relevant and helpful descriptor is the proportion of samples in a piece of recorded music that are close to 0dBFS once the piece is normalised. A high density of very loud samples suggests that the master recording has been allowed to clip, or that a lookahead brickwall limiter such as the Waves L-series has been employed. The fourth diagram traces the density of peak samples in the same 4500-track corpus. The first two diagrams show that music has got louder; the third indicates that this evolution is probably due to dynamic compression; and this illustration shows that such compression is probably applied via digital brickwall limiters.
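This density descriptor is equally simple to compute. A sketch (the -1dBFS threshold is my own choice of "close to 0dBFS"):

```python
import numpy as np

def hot_sample_density(x, threshold_db=-1.0):
    # Fraction of samples above threshold_db once the piece is peak-normalised.
    x = np.asarray(x, dtype=float)
    x = x / np.max(np.abs(x))
    threshold = 10 ** (threshold_db / 20)  # -1 dBFS -> ~0.891 linear
    return float(np.mean(np.abs(x) >= threshold))

t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
print(hot_sample_density(sine))                       # ~0.3 for a sine
print(hot_sample_density(np.clip(sine * 10, -1, 1)))  # ~0.94 when heavily clipped
```

A brickwall-limited master spends far more of its time pinned near full scale than an unprocessed one, which is why this descriptor separates the two so cleanly.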

What exactly is 'dynamic range'? This is a surprisingly difficult question to answer. Intuitively, we feel that dynamic range ought to measure how 'variable' or 'mobile' the music level is. Let's try to give this intuition some substance. The first diagram on the previous page compares the evolution of the signal's RMS value for extracts from two songs: 'Fuk' by Plastikman, and 'Smells Like Teen Spirit', by Nirvana. Apparently, the level of 'Smells Like Teen Spirit' is more mobile than that of 'Fuk'. This is no surprise, considering that Plastikman's music is minimalist techno, whereas Nirvana's productions often feature soft verses and loud choruses.

However, the results change radically if we perform the analysis using an analysis window of 100 milliseconds instead of two seconds. Over the long term, Plastikman's music is more stable in terms of RMS levels — but in the short term, as you can see from the second diagram, it appears to feature more variations in level, because of its loud, dry drums. So if we want to establish a measure of 'level mobility', we need to think about what time scale to employ.

In practice, however, this method proves to be unreliable. Amongst other problems, an isolated peak in an otherwise flat RMS curve would distort the measure, giving a false impression of significant RMS mobility. A better method, similar to the one used by the EBU to evaluate loudness range, consists of dealing with the RMS variability instead of its mobility. Instead of directly evaluating an 'RMS mobility', we compute the distribution of RMS values encountered during the analysis. Such a distribution is shown on the third diagram of the group I've been referring to. Then we measure the 'spread' of the distribution curve using a trick similar to the 'interquartile range method' in descriptive statistics: the spread is measured after discarding the top five percent and the bottom 10 percent of values. We can see that for an analysis window of two seconds, 'Smells Like Teen Spirit' has a greater RMS spread than 'Fuk'.
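This trimmed-distribution idea can be sketched directly. The function below is my own simplified version (ungated, plain percentiles), not the official EBU algorithm:

```python
import numpy as np

def rms_spread_db(x, sr, window_s=2.0):
    # Short-term RMS over consecutive windows, then the 'spread' of the
    # resulting distribution: discard the top 5% and bottom 10% of values
    # (the trimming trick described in the text) and take the remaining range.
    n = int(sr * window_s)
    frames = len(x) // n
    rms = [np.sqrt(np.mean(x[i * n:(i + 1) * n] ** 2)) for i in range(frames)]
    rms_db = 20 * np.log10(np.maximum(rms, 1e-12))
    return np.percentile(rms_db, 95) - np.percentile(rms_db, 10)

# A synthetic 'soft verse / loud chorus' signal: alternating two-second
# blocks whose amplitudes differ by a factor of 10 (i.e. 20 dB).
sr = 1000
t = np.arange(2 * sr) / sr
quiet = 0.1 * np.sin(2 * np.pi * 50 * t)
loud = 1.0 * np.sin(2 * np.pi * 50 * t)
song = np.concatenate([quiet, loud] * 5)
print(round(rms_spread_db(song, sr), 1))  # 20.0
```

The window length is an explicit parameter, which is exactly the point made next: change it and the ranking of two songs can flip.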

Let's change the time scale again and measure this RMS 'spread' with RMS values every 0.1s. The outcome of the experiment is shown in the fourth diagram, and again the results are reversed: the spread for 'Fuk' is greater than it is for 'Smells Like Teen Spirit'. Suppose that we now repeat the same experiment for a variety of analysis windows. The result is shown on the last diagram of the same group. Interestingly, level variability for 'Smells Like Teen Spirit' is always greater, except for windows below 0.18 seconds, where the drum parts in 'Fuk' show a decisive influence.

What is shown in the fifth diagram is a very good candidate for a measure of 'dynamic range' of a piece of music. Suppose now that instead of dealing with the signal's RMS, we deal with a measure of perceptual loudness, such as the one mentioned in the ITU recommendation BS 1770: we would now be dealing with 'loudness range'. This is, in fact, the basis of how the EBU defines 'loudness range' in their EBU Tech 3342 document, as explained in the 'EBU Measure Of Loudness Range' box.

There remains the question of whether one should use such a term as 'dynamic range' at all: there is no official definition for it, and it may be confused with the dynamic range of a recording medium, which is basically the difference between the highest and lowest level it can handle. During the course of this article, therefore, I won't talk about 'dynamic range' in relation to a piece of music. Instead, I will be using 'RMS variability', or more generally 'dynamic variability'. The term 'dynamic range' will be reserved for the measure of signal-to-noise ratio of a recording medium. I will use the term 'loudness range' in strict reference to the EBU 3342 document, and the term 'loudness variability' in other cases involving loudness instead of RMS.

As we saw above, descriptors such as RMS level, integrated loudness, simplified crest factor, and proportion of samples above -1dBFS show spectacular evolution from the beginning of the '90s until sometime near 2005. This is the effect of the loudness war. So surely the EBU's loudness range measure should do the same? As shown on the first diagram of the group on page 179, it doesn't. What we see is that loudness range appears to be decreasing from 1969 to 1980, then stabilises until 1991. After 1991, instead of going down as expected, it follows a rather inconclusive evolution, and certainly doesn't decrease in any clear manner.

As we also saw above, the density of high-level samples in the audio signal rises spectacularly after the beginning of the '90s. This indicates increasing use of compression, and, more particularly, digital brickwall limiters, which in turn raise the overall level of the music corpus we're dealing with. But can the use of such limiters be linked to a diminution in loudness range? Let's answer that question by displaying EBU 3342 values versus high-level sample density — in other words, by plotting loudness range versus the amount of limiting applied. This is what is displayed in the second diagram, which shows extremely clearly that the answer is no. The increasing amount of limiting performed during the loudness war era didn't decrease the observed loudness range in any way.

This is not to say that processing audio with a brickwall limiter will not reduce its loudness range. As we'll see later in the article, it does. The observation here is just that from the analysis of actual records, the loudness war did not result in any obvious reduction in the loudness range of music.

Still, 'loudness range' as defined by EBU 3342 deals with time scales near and above three seconds. Let's see what happens using other window analyses. For that, let's evaluate the gated RMS variability based on 0.05 to 12.8s-long windows. And to be even more specific, let's modify the evaluation of RMS variability so that it singles out the respective influence of each time scale. This way, we will be able to see whether the loudness war reduced level variability at any time scale. The results for both experiments are shown in the third diagram. Not only do they corroborate the previous findings, they also go much further, showing that the loudness war has had no clearly identifiable influence on level variabilities at any scale. This is quite a drastic conclusion: contrary to what one can often read on the Internet, the loudness war did not cause any reduction in level variability. There is as much level variability now as there was in the '70s or '80s.

As we saw earlier, the amount of compression/limiting used in mastering drastically increased between 1990 and 2000. Yet at the same time, and even though limiting may in many cases reduce the loudness range of a piece of music (see 'Loudness Range & Limiters' box), it isn't possible to observe an overall reduction in loudness range in productions. How can we resolve this apparent contradiction?

The first possibility is that mastering engineers may actually have been reasonable after all, only applying an amount of limiting that hasn't led to obvious loss of loudness range. This, as shown in the 'Loudness Range & Limiting' box, is theoretically possible, since the audio material's RMS variability may show a certain amount of resilience to limiting. I don't believe this is the case, though. Significant limiting can be measured or observed on the waveform, and can easily be heard: attacks are modified in a very specific way, everything seems to be more dense, more solid, and often brighter. Having listened to a very large number of tracks from the corpus I used for this article, it's obvious that a large proportion of recent tracks are limited in quite a heavy manner.

There remains only one solution I can think of: the loudness range of the music prior to mastering or even mixing has been increasing at the same time as compression/limiting has been getting more drastic. In other words, the source material has more initial variability, and is more resilient to limiting. This is borne out by stylistic changes in music during the era of the 'loudness war'. The beginning of the '90s, which corresponds to the start of the loudness war, witnessed the emergence of mass-audience rap artists, and rap music typically has sparse production with very loud kick and snare parts, which increase level variability at very small scales (0.1s or so). Around the same time, metal music evolved into 'nu metal', which integrated elements of funk and rap, and with them more percussive elements. On a slightly larger time scale, patterns at the end of musical phrases also evolved around the beginning of the '90s. Whereas many hits from the '80s would transition from one musical phrase to another using a mellow tom roll, hip-hop producers from the '90s preferred drastic 'cuts' in the sound, which are liable to increase level variability at scales near 0.5s.

On a still wider time scale, related to the structure of songs, one could put forward the idea that modern productions use contrasts in level, where older pop songs might have employed key or chord changes to delineate different song sections. It's quite common to hear rap or even R&B tracks where the verses are so minimalist it's difficult to even extract a chord sequence from them, while at the same time, the chorus is buried under dense vocal harmonies and/or lavish tonal keyboard parts, which increase the RMS level quite a bit. 'Lollipop' by Lil Wayne or 'Gangsta's Paradise' by Coolio are reasonably good examples, and so is, to a certain extent, 'Single Ladies' by Beyoncé. In productions like this, level variation is being used to create a structure for the song.

But the way musical dynamics are expressed may change. Imagine you're listening to some music. You want it louder. You walk to the volume control and simply raise the volume. By doing so, you increase the signal's RMS, increase its peak level, and leave its crest factor untouched. We'll call that the 'first loudness paradigm'. Suppose now that you've got a region in Pro Tools that peaks at 0dBFS. You can't raise its volume in the traditional way, or it's going to distort. But you can insert a limiter and lower its Threshold slider. By doing so, you still increase the signal's RMS, but this time its peak level remains stable and its crest factor is reduced. That's what we'll call the 'second loudness paradigm'.
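The two paradigms can be illustrated numerically. The sketch below stands in a bare sample-by-sample clipper for a real lookahead limiter (a deliberate simplification), and shows how each route to 'louder' affects RMS, peak and crest factor:

```python
# First paradigm: plain gain raises RMS and peak together, leaving the
# crest factor untouched. Second paradigm: gain into a hard ceiling
# raises RMS against a fixed peak, shrinking the crest factor.
import numpy as np

def stats_db(x):
    """Return (RMS, peak, crest factor) of a signal, all in dB."""
    rms = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    peak = 20 * np.log10(np.max(np.abs(x)))
    return rms, peak, peak - rms

sr = 44100
t = np.arange(sr) / sr
# Toy signal: a 100Hz tone with a slow amplitude envelope
sig = 0.5 * np.sin(2 * np.pi * 100 * t) * np.abs(np.sin(2 * np.pi * 2 * t))

# First paradigm: turn the volume up 6dB
louder = sig * 2

# Second paradigm: add 6dB of gain, then clip at the original peak level
peak = np.max(np.abs(sig))
limited = np.clip(sig * 2, -peak, peak)

for name, x in [("original", sig), ("gain +6dB", louder), ("limited", limited)]:
    rms, pk, crest = stats_db(x)
    print(f"{name:10s} RMS {rms:6.1f} dB  peak {pk:6.1f} dB  crest {crest:4.1f} dB")
```

The printout makes the contrast explicit: the gain version shifts RMS and peak by the same amount, while the limited version keeps the peak pinned and eats into the crest factor instead.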

When Wagner writes an orchestral crescendo, he uses the first paradigm, by adding more instruments. But, using limiters, you can create a crescendo that employs the second paradigm. The difference in terms of resulting waveform is shown in the top image opposite: Mike Oldfield uses the first paradigm at the end of the first part of Tubular Bells, while the second is used in Trent Reznor's 'Closer'.

To get a more precise idea of the difference between the two paradigms, let's take six crescendos from six different recordings, three using the first paradigm and three the second, and analyse them in terms of RMS, peak level and crest factor. The result of this analysis is shown in the second diagram, right. The first graph shows that all the crescendos are based on an increase in RMS level. The second graph clearly separates the two paradigms: in the second-paradigm tracks, the peak level is constant. The third graph shows the crest factor systematically decreasing in those crescendos, but suggests that in the first-paradigm ones, there is no link between crest factor and loudness.

It could be argued that crescendos using the second paradigm are not 'pure' dynamic events: the louder the music gets, the more the limiter is allowed to change the signal, and the more it will modify the original timbre. But is the same not true of traditional crescendos? Performing a crescendo on a single violin note will not only change its level, it will change its timbre. And most orchestral crescendos incorporate additional instruments as they develop. The combination of the two factors results in a much more drastic change to timbre than any brickwall limiter could ever cause.

Metallica's most recent album has become a cause célèbre for opponents of current mastering practices. As far as I can tell, the main problem with Death Magnetic is a collision between the way it has been mastered and its guitar sound. The very aggressive mastering simply is not suited to Metallica's production style, which dates back to the '80s and relies heavily on solid, distorted guitars. To sum it up, the result is music whose level is generally stable and which at the same time features very low crest-factor values. From a perceptual point of view, this translates as 'compact all the time'.

Such crest-factor values are comparable to what can be found on tracks from Kanye West's My Beautiful Dark Twisted Fantasy, or 50 Cent's Get Rich Or Die Tryin'. Those are stylistically loud urban music albums with really strong percussive elements that articulate the writing, and are better suited to low crest-factor values than Metallica's constantly buzzing guitars. They are also comparable to tracks from MGMT's Oracular Spectacular or Congratulations, two albums with a sound so distinctive that a constant use of the second loudness paradigm and/or dynamic compression artifacts is not a problem at all. But Metallica's 'classic' sound simply doesn't easily allow for sonic extravaganza.

To answer that question properly, it may be useful to adopt a point of view generally used in image processing, where it's possible to analyse a photograph or any picture in terms of luminance distribution. Photoshop does this in a dialogue called 'Levels'. To evaluate such a distribution, an algorithm takes an inventory of all the pixels in the image and sorts them according to their luminance. This results in a distribution graph that shows whether the picture, as a whole, includes predominantly light, medium or dark areas, and to what degree. The same process can be followed with audio files: we take an inventory of all the samples in a song and sort them according to their absolute level. As shown in the image overleaf, the resulting distribution curve can teach us many things.
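A minimal sketch of this sample-inventory idea, applied to a synthetic two-section 'song' (NumPy assumed; a real analysis would of course load an actual audio file):

```python
# 'Levels'-style analysis for audio: bucket every sample by absolute
# level in dB and print a crude distribution, the audio equivalent of
# Photoshop's luminance histogram.
import numpy as np

def level_histogram(signal, n_bins=8, floor_db=-48.0):
    """Count samples per absolute-level band between floor_db and 0dBFS."""
    levels = 20 * np.log10(np.maximum(np.abs(signal), 1e-9))
    edges = np.linspace(floor_db, 0.0, n_bins + 1)
    counts, _ = np.histogram(np.clip(levels, floor_db, 0.0), bins=edges)
    return edges, counts

rng = np.random.default_rng(0)
quiet = 0.05 * rng.standard_normal(44100)   # verse-like, low level
loud = 0.7 * rng.standard_normal(44100)     # chorus-like, high level
song = np.concatenate([quiet, loud])

edges, counts = level_histogram(song)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:6.1f} to {hi:5.1f} dB: {c:7d} samples")
```

A heavily limited master would show the same signature as an over-brightened photo: the top few bands of the histogram become disproportionately crowded.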

To extend the comparison with images: it's as if, for the last 20 years, all pictures in books and magazines had been getting brighter and brighter. There are still deep blacks, the contrast remains intact, but all images look brighter. This is illustrated by the Tower Bridge pictures in the image. It's as if everything these days is supposed to look 'flashy', even though common sense suggests there are some images that shouldn't look flashy at all, in any situation. This is all the more true in the case of audio content, for which 'brighter' doesn't simply mean a higher density of bright pixels. It also means reduced crest factor, envelope modifications, use of the second loudness paradigm and, in the worst cases, distortion. Common sense suggests that although there is nothing wrong with these characteristics as such, they shouldn't be found on virtually all records.

In the end, it's all about style. Reduced crest-factor values bring a 'compact' aspect to the sound; Waves describe it as a "heavily in-your-face signal that rocks the house" on their MaxxBCL page. It may be suited to your kind of music, or it may not. You might want to remain 'soft' on purpose. If you're doing heavy techno music, though, 'compact' is probably a good idea. Similarly, the two loudness paradigms described earlier each have a very distinctive 'flavour', and you may prefer one or the other. Do you want every loud attack modified by a compressor/limiter? It might be a good idea in many cases, but it might prove disastrous in others. Do you want to reduce the loudness range of your music without changing anything else? Then you're probably better off with volume automation than with a limiter, since we saw that loudness range is naturally resilient to a certain amount of limiting.

In December 2010, the EBU released the Tech 3342 document as part of the loudness recommendation EBU R128. It gives very precise guidelines for measuring 'loudness range', a descriptor that may very well become a standard for measuring the dynamic variability of audio content, so it's worth taking a few minutes to study in detail what is in fact a measure of the 'three-second window, gated K-weighted RMS variability' of audio content. Let's break that down.

The analysis window length is three seconds, sampled every second. This means that the measure concerns dynamic phenomena more than three seconds in length. Thus, at one extreme, it will not take percussive sounds into consideration. At the other, loudness variations due to structural changes may not be clearly visible: they can be masked by variations happening at smaller scales. It's a compromise chosen by the EBU.

Instead of looking at RMS values, the measurement protocol looks at loudness values as defined in ITU-R BS.1770. This measure of loudness is simple: take the original file, EQ it, and then evaluate its RMS. The filter used in that case is quite basic, as shown in the diagram. It may come as a surprise that the ITU uses such basic filtering to define the difference between RMS and loudness, but as they put it, "for typical monophonic broadcast material, a simple energy-based loudness measure is similarly robust compared to more complex measures that may include detailed perceptual models". The ITU calls such a filter 'K-weighting', and gives 'LKFS' as its loudness unit. At this point, the descriptor we're dealing with is a sequence of loudness values, which, on a side note, corresponds to "short-term loudness" as defined in EBU 3341. Though those values are measured in LKFS, the EBU favours the acronym 'LUFS' (Loudness Unit Full Scale) in that case.

This sequence of values is now gated, via two successive gating processes. The first, 'absolute gating', excludes from the measurement all values below -70LKFS, and is supposed to ensure that silence and background noise are not wrongly included in the measurement. The second gating process is called 'relative'. Once the very soft parts of the signal have been removed, a mean loudness is evaluated, and relative gating then excludes all loudness values more than 20dB below that mean. If the mean loudness after absolute gating is, say, -15LKFS, then all values below -35LKFS will be removed from the loudness range evaluation. This relative gating is used to remove 'atypical' parts of the signal. At this point, the descriptor we're dealing with is a sequence of 'three-second window, gated K-weighted RMS' values.

And now for the crucial part: the loudness range evaluation. It is done by computing the variability of this sequence of "three-second window, gated K-weighted RMS” values, using the statistical method described above, and illustrated by diagrams three and four in the group on the previous page. As such, we're really in the presence of a "three-second window, gated K-weighted RMS variability”, and the unit for it is LU (Loudness Unit).
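Putting the steps above together, here is a hedged sketch of the pipeline in Python. Two simplifications relative to the real specification: the K-weighting pre-filter is replaced by plain mean-square level, and the final spread statistic is taken as the distance between the 10th and 95th percentiles of the gated values, which is how the published EBU 3342 document defines it.

```python
# Simplified EBU Tech 3342 'loudness range' pipeline: 3s windows
# hopped every 1s, absolute gating at -70, relative gating at
# mean - 20, then a percentile spread over the surviving values.
# K-weighting is omitted (plain RMS stands in for loudness).
import numpy as np

def short_term_levels(signal, sr, window_s=3.0, hop_s=1.0):
    """Mean-square level in dB for each 3s window, sampled every second."""
    n, h = int(window_s * sr), int(hop_s * sr)
    out = []
    for i in range(0, len(signal) - n + 1, h):
        ms = np.mean(signal[i:i + n] ** 2)
        out.append(10 * np.log10(max(ms, 1e-12)))
    return np.array(out)

def loudness_range(levels_db):
    """Gate the short-term levels, then return the 10th-95th percentile spread."""
    levels = levels_db[levels_db > -70.0]               # absolute gate
    levels = levels[levels > np.mean(levels) - 20.0]    # relative gate
    lo, hi = np.percentile(levels, [10, 95])
    return hi - lo

# Synthetic example: a quiet 'verse' followed by a loud 'chorus'
sr = 8000
rng = np.random.default_rng(1)
verse = 0.1 * rng.standard_normal(10 * sr)
chorus = 0.8 * rng.standard_normal(10 * sr)
lra = loudness_range(short_term_levels(np.concatenate([verse, chorus]), sr))
print(f"loudness range ~ {lra:.1f} LU")
```

On this synthetic signal the quiet and loud sections sit roughly 18dB apart, and the computed loudness range comes out close to that figure, as expected.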

The idea that a compressor or limiter might expand the available dynamic range is interesting, but not new. Many decades ago, engineers would compress the signal between the microphone and the recorder in order to increase the usable dynamic range of the recording medium, so that its then-low signal-to-noise ratio was less of a problem.

However, a 1dB loss in RMS variability is a very small amount. The threshold below which limiting really begins to affect the signal depends on the music you're processing. The second diagram shows the evolution of RMS variability at different scales for three pieces of music. Notice how the pop/rock piece on the right shows RMS variabilities that are more resilient to limiting than the other two pieces, which are opera and jazz. This is especially true at the lower time scales: in that particular case, the limiter's threshold had to be set to at least -6dB to get a noticeable decrease in RMS variability. This might very well be caused by the presence of a loud, very prominent kick drum part in this piece, which may indicate that the higher the initial RMS variability, the greater its resilience to limiting. From that point of view, high variabilities are not easily reduced. This initial resilience to limiting is another argument for the contention that limiting doesn't automatically mean a reduction in loudness range, especially if the initial material is highly variable.
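The resilience experiment can be sketched as follows. A bare hard clipper again stands in for a real lookahead limiter (a strong simplification), driving a toy 'kick over quiet bed' signal at progressively lower thresholds while we watch the short-window RMS variability:

```python
# Drive a toy signal into a hard clipper at several thresholds and
# measure how the 0.1s-scale RMS variability responds. Only fairly
# deep limiting produces a clear drop in variability.
import numpy as np

def rms_variability(x, sr, window_s=0.1):
    """Standard deviation (in dB) of per-window mean-square levels."""
    n = int(window_s * sr)
    lv = [10 * np.log10(max(np.mean(x[i:i + n] ** 2), 1e-12))
          for i in range(0, len(x) - n + 1, n)]
    return float(np.std(lv))

sr = 8000
t = np.arange(sr * 4) / sr
# 'Kick drum' bursts over a quiet bed: high initial level variability
bed = 0.05 * np.sin(2 * np.pi * 110 * t)
kick = np.where((t % 0.5) < 0.05, np.sin(2 * np.pi * 60 * t), 0.0)
sig = bed + 0.9 * kick

for thresh_db in [0, -3, -6, -12]:
    ceiling = 10 ** (thresh_db / 20)
    y = np.clip(sig, -ceiling, ceiling)
    print(f"threshold {thresh_db:4d} dB: variability {rms_variability(y, sr):5.2f} dB")
```

Even at -12dB of clipping the quiet bed is untouched, so a good deal of the level contrast between kick windows and quiet windows survives, which is the 'resilience' discussed above.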

Many albums from before the digital era have been remastered. As an example, let's focus on the Cure's discography. Since 2004, each of their pre-1990 albums has been remastered and released with extra material. Diagram 1 from the group below compares the original editions with the remastered ones in terms of RMS level. The 'Deluxe' editions are indeed louder than the originals, with RMS levels generally 5dB higher. That being said, they're not as loud as the albums released after 1995. On a side note, notice how recent Cure albums are definite victims of the loudness war: between Wish and Wild Mood Swings, there is a sudden jump of 6dB, bringing the Cure's albums, previously quieter than the prevailing trend, up to the same level as everyone else's.

Let's focus on Pornography, originally released in 1982. The waveform capture on the same image compares the waveforms of the original and remastered editions of the entire album. Obviously, the 2005 remaster relies heavily on digital lookahead brickwall limiters. Is that good or bad? I personally enjoy listening to both editions. From a more objective point of view, let's focus on the highlighted part of the waveform, which corresponds to the end of 'A Strange Day'. On the original edition, just before the short pause, we can see a slight decrescendo, followed by a short crescendo. Readers who know the song will agree that these loudness variations are very relevant to the actual musical content (song climax and then pause). In the original edition, those loudness variations use the first loudness paradigm as described in the main text. Now, look at the same part of the waveform corresponding to the remastered edition. The loudness variations are now of a very different nature, and that may not be such a good idea. In my opinion, this is the main danger of remastering albums from before the digital era: if one is not cautious, it raises the density of very high-level samples, reduces the crest factor, and turns the first loudness paradigm into the second.

Records from famous and venerable bands such as the Beatles or Pink Floyd are often remastered several times, to the point where it becomes difficult to find a reference version for any of their albums. Let's take Dark Side Of The Moon, for example. Diagram 3 shows high-level sample density for five of its releases: each and every one is mastered or remastered differently. Even the two editions labelled "Original Master Recording" are not the same, probably because one is a vinyl record and the other a CD.


'Dynamic Range' & The Loudness War. (2018). Sound On Sound. Retrieved on March 21, 2018, from https://www.soundonsound.com/sound-advice/dynamic-range-loudness-war.

Create Digital Music. (2018). Retrieved on March 21, 2018, from http://cdm.link/2007/05/loudness-war-music-over-compression-demonstrated-on-youtube/.

Loudness war. (2018). Wikipedia. Retrieved on March 21, 2018, from https://en.wikipedia.org/wiki/Loudness_war.
