This chapter explores some of the terminology relating to music for interactive media, much of which is also applicable to sound effects. We will explore the degrees to which music in games is interactive, as well as some of the methods used to create sound that can repeat indefinitely. These two concepts, interactivity and continuous sound, are specific to music for interactive media; music for film does not share these qualities because film is a permanent, fixed, and linear medium, so its sound is always synchronized to the picture in the same way. This may evolve in the future, but probably not unless films become interactive media themselves. In addition to terminology specific to interactive media, this chapter also explores some concepts from film music theory that apply to games as well.
By the end of this chapter, you should be able to:
- Label a piece of music (with justification for such labels) using the terminology discussed,
- Understand some of the challenges of creating music for essentially unending media, and
- Describe some of the techniques used by composers of interactive and unending media.
6.1 What is Interactive?
It is easy to assume that since video games are by nature interactive, all music for games is inherently “interactive music”. However, this is not necessarily the case. Theories on what interactive music precisely means are plentiful, and a consensus on the matter has only been tentatively reached. We can look to the discussion surrounding interactivity in concert music for some parallels: Jon Drummond states in “Understanding Interactive Systems”, with regard to interactive concert music technology, that “the description of interactive in these instances is often a catchall term that simply implies some sense of audience control or participation in an essentially reactive system.” This is primarily due to the large quantity of works, for example for live instrument and electronics, being termed “interactive” when the instrument is merely amplified and processed during performance of a notated score. This in itself does not produce any interactivity between the performer and the electronics; the performer executes a score, and the electronics react in whatever way they are programmed to. Robert Rowe, in his 1991 publication Interactive Music Systems, proposes the following definition of interactive music, which does not exclude the performance situation described above: “Interactive computer music systems are those whose behaviour changes in response to musical input. Such responsiveness allows these systems to participate in live performances, of both notated and improvised music.” This definition, however, does not address the fact that interaction should also involve responsiveness on the part of the performer to the electronic sound. The study of game music is younger than the study of concert electroacoustic and electronic music, so theories regarding the interactivity of music in games are fewer.
However, the usual definition of interactive music in games is closer to Rowe’s, setting aside Drummond’s concern that such music may be reactive instead. Game music has therefore been generally accepted as interactive music, but there remains significantly less discourse surrounding the nature of interactivity in game sound. In the approach taken in this text, I expand upon Rowe’s definition, which applies to electronic concert music, and suggest that for game music to be truly interactive, the music’s behaviour must change actively in response to an intentional musical input by the player. Additionally, true interactivity of music should be a two-way exchange between the “performer” and the technology. This distinction is more difficult to draw in media such as video games because the exchange between player and technology is not always obvious or intentional. I may enter a dungeon, and as a result the music changes to become ominous, so I decide to back out of the dungeon accordingly. This is somewhat of an interactive exchange, but at the same time, I am not engaging with the game to intentionally change the music; I am engaging with the game, and the music changes as an inherent result. Therefore, I also suggest that music for games is an interactive medium that exists on a continuum containing varying degrees of interactivity: reactive, adaptive, and fully interactive. I will describe each in detail below, including examples.
6.2 Reactive Music
The quote from Drummond earlier stated that most music we consider interactive in the concert hall is essentially reactive. This is because some works with live electronics claim to be interactive, or “with interactive electronics”, without careful consideration of the implications of the term. A work for flute and electronic processing, for example, may not require any response on the part of the player to the electronic sound, merely requiring that the flautist play the written notes while the electronics process this music according to prewritten algorithms. There is no interaction here, and such a work could therefore be termed reactive: the performer plays, the music responds, and the performer continues to play, without adjusting to the electronic effects or responding in kind. An extension of this can be applied to video game music. The term reactive refers to music that changes in response to a singular non-musical action of the player. One example of such music occurs when Mario goes down a pipe and the music changes from overworld to underworld music. Another occurs in early Final Fantasy games, when the music cuts from dungeon music to battle music as a random encounter begins. Both of these situations involve the character executing an action and the music reacting to the player’s action with a singular musical response. There are two important distinctions that separate reactive music from adaptive and fully interactive music. Reactive music is not fully interactive because the player does not actively participate in modifying the music; rather, the music changes as a result of an unrelated, non-musical action. It is not adaptive because it does not continuously update and modify itself. There is a singular action that has a singular result, and this result remains until another modifying action is made by the player.
Reactive music is less common in modern video games than in games of the PlayStation era and before because there are more opportunities for creating smooth musical response within adaptive music. However, there are situations in which composers may seek reactive music for aesthetic reasons.
6.2.1 Reactive Music Examples
Reactive music was widespread during the 8-bit and 16-bit console generations, and persisted through 32-bit systems. It is a less common form of interactive music today, but still exists in certain game genres. Of all the types of interactivity, reactive music may seem to be the simplest, and this interpretation is not wrong. However, the concept and origins of reactive music show it to be innovative, setting a precedent for adaptive music. It is also important to consider the music as a function of the environment: current games involve fully immersive 3D worlds that a player can seamlessly explore, often without fade screens. This was not the case on 8-bit through 32-bit consoles, where continuous motion occurred on a two-dimensional plane, or players explored a pseudo-3D space, and upon reaching the end of a screen, the screen would fade to the next one. This fading is a visual analogue to reactive sound, as music fades out and then fades back in. Examples are easily noticed in games such as Final Fantasy: a player reaches the end of a location, the screen fades out, the music fades out, and then a new screen and new music fade in. Another very clear example occurs in the original Super Mario Bros., when Mario goes down a pipe into the underworld. In this case, rather than combining a fade of audio and visuals, the overworld music cuts out abruptly as Mario slides down the pipe, a pipe sound effect plays, and as soon as Mario emerges from the pipe, the underworld music begins. Both the fade and the sound effect/action separation are means by which music changes reactively while incorporating a smooth transition.
6.3 Adaptive Music
Adaptive music is probably the most commonly integrated type of interactive music discussed in this text, and its use has steadily increased since the late 1990s. Adaptive music involves continuous modification of the music based on the player’s actions, but without an intentional and active correlation between player action and sound. Much like reactive music, the changes that occur in adaptive music happen as a result of non-musical player actions. However, unlike reactive music, the process is continuous. A definition of adaptive music, taken from the Electroacoustic Resource Site (EARS), states:
“The non-linear medium of computer gaming can lead a player down an enormous number of pathways to an enormous number of resolutions. From the standpoint of music composition, this means that a single piece may resolve in one of an enormous number of ways. Event-driven music engines (or adaptive audio engines) allow music to change along with game state changes. Event-driven music isn’t composed for linear playback; instead, it’s written in such a way as to allow a certain music sequence (ranging in size from one note to several minutes of music) to transition into one or more other music sequences at any point in time.”
Adaptive music has been steadily replacing reactive music in contemporary games because sound engines are more flexible and allow for more incremental changes in music. Adaptive music is also the most diverse; the adaptation can be as subtle as instruments dropping in and out of the mix or the overall volume decreasing slightly, or can involve the entire soundtrack continuously changing from note to note, depending on actions taken by the player.
6.3.1 Adaptive Precedents: Monkey Island 2: LeChuck’s Revenge (1991)
The term adaptive derives from the process by which the music is affected during gameplay, rather than simply the result. This process originated with the sound engine iMUSE, developed by Michael Land and Peter McConnell at LucasArts in the early 1990s. The engine improved the interactive capabilities of sound by changing the way sound is modified during the game. The designers wanted the ability to change music fluidly throughout the game, in contrast to the abrupt changes that occur during reactive sound transitions. The iMUSE system was a forerunner in revolutionizing game sound, and would leave a lasting impression on the interactivity of game audio. iMUSE uses several techniques to allow the music to adapt organically to gameplay. One way it enables smooth transitions is by playing back smaller portions of loops at certain points while waiting to check whether certain gaming conditions are met. Since early computers varied in processing power, a cut scene would take longer on one computer than another; this waiting process enabled the music to remain consistent during the cut scene and adapt to the processing speed of the computer. Another example is demonstrated in Monkey Island 2: LeChuck’s Revenge, in which the main character wanders around and different variations of the main theme play, with different instrumentations. This is a precedent of a technique called vertical re-orchestration, which will be discussed in detail later in the chapter.
6.3.2 Vertical Re-orchestration and Horizontal Re-sequencing
Two techniques that are commonly associated with adaptive music are vertical re-orchestration and horizontal re-sequencing. Horizontal re-sequencing involves the breaking down of a musical composition into several smaller sequences or segments that can be re-arranged to create variant copies of the same work. In A Composer’s Guide to Game Music, Winifred Phillips describes horizontal re-sequencing as an analogue to Mozart’s musical dice game:
“In the musical dice game attributed to Mozart, musical pieces are broken down into segments consisting of the contents of a single measure. These segments are assigned numbers. Rolling the dice results in numbers that are used to determine which of these musical segments comes next in the resulting composition. Mozart composed the segments so that they could be juggled and recombined in nearly endless combinations. His game is, in fact, a low-tech but mathematically complex demonstration of a horizontal re-sequencing method.”
The process described above is similar to that of horizontal re-sequencing, in which algorithms within the game’s programming decide the order in which segments are played. Instead of rolling dice as in the example above, however, the choices are determined by computer code in the game. There are several approaches to segment selection, including random selection, selection by probability, selection from a specific set of audio files at a specific time, and many more. The end result is a soundtrack that is continuous but never repetitive. When writing the music, the composer must keep the combination possibilities in mind and compose music and sounds that work acceptably in many combinations. As a result, music that uses horizontal re-sequencing may not always have the individual, distinct voice that through-composed music does. However, discussions on the aesthetics of horizontally re-sequenced music, as it becomes more common, may lead to new compositional developments and techniques.
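The segment-selection logic described above can be sketched in a few lines of Python. This is a minimal illustration, not any engine's actual implementation: the segment names and transition weights are hypothetical, and a real engine would reference audio files and handle crossfades rather than return strings.

```python
import random

# Hypothetical segment names; a real engine would map these to audio files.
SEGMENTS = ["intro", "theme_a", "theme_b", "bridge", "theme_a_var"]

# For each segment, the segments that may legally follow it, with weights
# (selection by probability, as described above). All values are illustrative.
TRANSITIONS = {
    "intro":       [("theme_a", 1.0)],
    "theme_a":     [("theme_b", 0.5), ("bridge", 0.3), ("theme_a_var", 0.2)],
    "theme_b":     [("theme_a", 0.6), ("bridge", 0.4)],
    "bridge":      [("theme_a", 0.5), ("theme_b", 0.5)],
    "theme_a_var": [("theme_b", 0.7), ("bridge", 0.3)],
}

def next_segment(current):
    """Pick the next segment to queue, weighted by transition probability."""
    choices, weights = zip(*TRANSITIONS[current])
    return random.choices(choices, weights=weights, k=1)[0]

def generate_sequence(length, start="intro"):
    """Produce a playback order of the given length, starting from 'start'."""
    order = [start]
    for _ in range(length - 1):
        order.append(next_segment(order[-1]))
    return order
```

Because the composer must write every segment so that any legal transition sounds acceptable, the transition table doubles as a compositional constraint: a pair absent from the table is a combination the composer never has to make work.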
Vertical re-orchestration, also sometimes referred to as vertical layering or interactive stems, involves breaking a musical composition down into several smaller components that can be layered on top of one another simultaneously to form different orchestrations of the same composition. Vertical re-orchestration is conceptually similar to horizontal re-sequencing, except that rather than sections of music being re-arranged in time, sections are re-arranged vertically by changing the instrumentation. The technique can be used to great effect, especially in situations where a short loop plays over and over; the continuously changing instrumentation gives the loop a dynamism it would not otherwise have. Vertical re-orchestration is therefore an excellent approach for creating dynamic loops, which will be discussed later in the chapter.
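A common way to drive vertical layering is to tie each stem to an intensity threshold, so that layers enter and drop out as a game-state value rises and falls. The sketch below assumes hypothetical stem names and thresholds; a real implementation would adjust mixer channel gains in an audio engine rather than return a list.

```python
# Hypothetical stems of one composition, all recorded at the same tempo
# and length so they stay time-aligned when layered.
STEMS = ["drums", "bass", "pads", "melody", "choir"]

# Each stem becomes audible at or above a given intensity threshold
# (values are illustrative).
THRESHOLDS = {"drums": 0.0, "bass": 0.2, "pads": 0.4, "melody": 0.6, "choir": 0.8}

def active_layers(intensity):
    """Return the stems that should sound at a game intensity in [0.0, 1.0].

    The loop itself never stops or restarts; only which layers are
    audible changes, which is what makes the loop feel dynamic.
    """
    return [s for s in STEMS if intensity >= THRESHOLDS[s]]
```

For example, a calm exploration state (intensity 0.1) would sound only the drums, while a climactic battle (intensity 1.0) would sound all five stems at once.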
6.3.3 Other Adaptive Musical Changes: Tempo and Volume
Changes in tempo and volume are also used in adaptive game music. A very early example occurs in Space Invaders (1978): the music speeds up as the player approaches failure and slows down as the player begins to succeed. Such changes can be very valuable, giving the player feedback related to current conditions in the game (e.g., player succeeding, player failing, player about to die). This type of adaptive music tends to be more perceptibly linked to the game, as re-sequencing and re-orchestration are often either random techniques (even though they may have their own hierarchical decision-making models) or linked to game actions that have little to do with the soundtrack music. Another example of music speeding up is in Tetris (1984): once the blocks reach a certain height, the music plays back extremely fast to signify that the player is about to lose, then slows once the player has cleared enough blocks to be safe again. Unlike in Space Invaders, however, the Tetris music only speeds up once the blocks have reached a certain point, remains at that speed, and then slows down once the blocks are below that point. These speed changes in Tetris are therefore actually somewhat reactive. Volume changes are also a common element of adaptive game music. Music may fade out when sound effects play, for example, or when the user walks toward or away from a certain area. In Bioshock (2007), as the player nears certain objects, such as radios that are playing music, their volume increases. Volume and tempo changes do not create as apparent a change to the music as vertical re-orchestration and horizontal re-sequencing, but they can give valuable aural feedback to the player.
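Both kinds of mapping can be expressed as simple functions from game state to playback parameters. The sketch below is illustrative only: the function names, the linear curves, and all numeric values are assumptions, not taken from Space Invaders or Bioshock.

```python
def adaptive_tempo(base_bpm, danger, min_scale=1.0, max_scale=1.6):
    """Scale playback tempo with a 'danger' value in [0, 1], in the
    spirit of Space Invaders: more danger, faster music.
    A continuous mapping like this is adaptive; snapping between two
    fixed speeds at a threshold (as in Tetris) would be reactive."""
    danger = max(0.0, min(1.0, danger))
    return base_bpm * (min_scale + (max_scale - min_scale) * danger)

def proximity_volume(distance, max_distance=20.0):
    """Bioshock-style source falloff: full volume at the source,
    silent beyond max_distance. Linear falloff chosen for simplicity."""
    if distance >= max_distance:
        return 0.0
    return 1.0 - distance / max_distance
```

The tempo function also makes the Tetris distinction concrete: replacing the continuous scaling with `1.6 if blocks_height > limit else 1.0` turns the same mechanism into a singular, reactive response.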
6.3.4 Algorithmically Adaptive Music
Algorithmically adaptive music includes soundtracks in which a majority of the music is generated by algorithms as a result of gameplay. While this may sound a lot like fully interactive music, it is different because the player generates the music passively; there is no active participation in the musical creation. One of the most overt examples of this type of music is in the game Rez (2001), released for the PlayStation 2. Rez is a sci-fi based shooter, in which the player’s movements and actions determine every component of the soundtrack. All of the sound effects are replaced with synthesized musical sounds, and the movements of the player determine the speed of an electronic beat. The result is a soundtrack that is very connected to player performance, but the lack of sound effects, especially because they blend in with the music, can be quite disorienting to those accustomed to shooters that have a large separation between sound effects and music. Nevertheless, the concept is intriguing, and results in unique gameplay. Another example of algorithmically adaptive music is presented in the game Red Dead Redemption, released in 2010. This game involves the use of several different pre-recorded stems, which are then combined and played back based on algorithms that select those stems depending on certain game parameters. The connection between the sound and the gameplay is significantly looser, but it provides a soundtrack that is dynamic and does not get repetitive. The soundtrack is also different every time the game is played. This technique represents a type of horizontal re-sequencing, although it is not considered looping due to the continuously generated nature of the soundtrack. While horizontal re-sequencing contains small audio components that can be re-arranged to create individual pieces within the game, every audio stem in Red Dead Redemption can be combined with any other audio stem in any order.
6.4 Fully Interactive Music
Fully interactive music, as defined by this text, is only possible when a player performs an action within a game intended to have a direct impact on the music. This process does involve a two-way exchange between the player and the music: the player performs an action that actively results in a sound, listens for the result, and performs the subsequent action accordingly. Fully interactive music is present in its most obvious form in music games like Rock Band or Guitar Hero, but also exists in areas within games, in mini-games, and when musical actions are a component of the game.
6.4.1 Music-based Games
Music-based video games enjoyed their peak during the late 2000s, following the rise in popularity of Guitar Hero, originally released in 2005, and its subsequent competitor, the multiplayer game Rock Band (2007). Karaoke games also saw popularity during this time. Donkey Konga, a music-based game starring the familiar Nintendo characters Donkey and Diddy Kong, preceded the Guitar Hero and Rock Band series, appearing on the GameCube in 2003. In Donkey Konga, the player uses peripheral conga controllers to play along with on-screen instructions, presented as a scrolling tablature that tells the player to either hit the conga or clap. This type of scrolling tablature would persist through later music-based games such as Guitar Hero. Guitar Hero popularized the music-based game genre, eventually becoming one of the best-selling games for the PS2 despite the high retail price of the guitar controller required for gameplay. Rock Band was released following the unexpected success of Guitar Hero, and featured a full band set-up including vocals, drums, guitar, and bass. The Rock Band instrument kit retailed at nearly 250 USD upon release, but the game would still prove popular, and inspired the Guitar Hero franchise to release a full-band equivalent as well. Sequels for both have been continuously released (the most recent Rock Band iteration hit shelves in late 2015). However, their popularity has waned somewhat over time, likely due to the expensive peripherals, the release of other musical party games such as Dance Central, and a general decline in interest in party games. Nevertheless, music-based games continue to be produced and released to good reception, indicating that they provide a solid platform for gameplay.
6.4.2 In-game Activities
Sometimes interactive music exists not as part of the overall gameplay but within small mini-games or tasks the player must complete during the game. An extended example occurs in Ocarina of Time and Wind Waker, in which the player has to press certain buttons to play learned melodies (as in Ocarina of Time) or conduct (as in Wind Waker). These tasks create fully interactive music because the user is actively engaging with the sound to achieve a desired musical result. These results also happen to have an impact on the gameplay: in both Zelda games they are integral to the storyline and its advancement. Sometimes in-game interactive music does not affect the overall gameplay, such as in Grand Theft Auto. In GTA, players hear a radio when they are in a car, and they have the option to change the station or turn the radio off entirely. Again, this is an instance in which the player intentionally changes a musical component of the game, rendering it fully interactive. This element of GTA is one of the components that give the game its open-world, sandbox feel. Interactive in-game music is also present in another sandbox game, Minecraft, as the player can choose what music to play on a jukebox. These in-game musical activities can also appear as very small mini-games or components of larger quests, as in Eternal Sonata (2007), when Allegro and his team have to replicate a Chopin melody on a large floor piano in order to proceed. In-game interactive music, therefore, can have a large-scale impact on the game or no real impact on gameplay at all, but it always gives the player a feeling of greater control over the musical environment of the game.
6.4.3 Music as Primary Game Component: Sound Shapes (2012), Various Composers
Sound Shapes (2012), developed by Queasy Games in Toronto, is a side-scrolling game in which the creation of music is a primary component of the game. The game follows the player, represented by a ball on the screen, as they proceed through the adventure, avoiding obstacles and attempting to collect notes. Collecting notes builds up the music during gameplay. Two effects result when a player collects a note: first, the player receives immediate pitched musical feedback that a note has been collected; this is followed by an increase in musical density as a layer is added to the soundscape. Essentially, proceeding through the levels creates a song. There is also a gameplay type called creation mode, similar to a looping sequencer, in which the player places notes on a screen as a scroll bar continuously loops through the sequence. This enables players to create their own levels and shapes and, essentially, their own music. While some of the elements may not seem as precisely one-to-one correlated as in the other fully interactive music we have discussed, this game involves an intentional, player-driven musical result. The music is the object of the gameplay, not a passive result of gameplay with other, non-musical goals. Therefore, for the purposes of the classification in this text, we will refer to a game such as Sound Shapes as having fully interactive sound.
6.5 Not All Music
As you play through games on your own, you may realize that not all game music fits neatly into these categories. Sound Shapes, for example, is described here as interactive, but examining the background music alone, one could also make a case for it being adaptive. Especially as the possibilities in games increase, music in games will continuously evolve. The important message to take away, however, is that we cannot group all video game music together as interactive simply because gameplay is interactive; if we are to properly study video game music, we must assess it as its own entity rather than as an extension of gameplay. This also brings to the forefront the importance of music in games, whether as a passive response to player actions that enhances player feedback or as a component of the gameplay itself. Unlike films, video games involve interactive participation by the player, and feedback is essential in situations involving human-computer interaction. Sound is just one of the feedback systems that games use (in addition to visual and, now, haptic feedback), and it is therefore important to evaluate the function of the sound within the player experience.
6.6 Unending Music
One of the elements that sets gameplay apart from film is that a game is not a linear medium that lasts the same duration with the same events every time it is played. It is not even possible to determine the length of specific levels and areas within a game, because every player will spend a different amount of time on each task. Therefore, composers and sound designers must create music that can play indefinitely during gameplay. The easiest way to do this is to have music that loops for as long as some condition is met in the game (e.g., the player is in a specific area). Looping music became a standard for video game music, although other types would emerge: linear music, in response to FMVs, and generative music, which became possible as sound engines grew more advanced and allowed music to be generated in real time. In this section we will examine how the loop has evolved, especially the ways that sound teams implement variations in loops to keep the music similar but not tedious. We will also examine linear and generative music, with examples of both within games.
6.7 Looping Music
Looped music exists dating as far back as some of the early arcade games, with Rally-X (1980) being the first example of a truly musical looping soundtrack (Space Invaders had a continuous background soundtrack that looped, but it consisted of only four notes repeated over and over). Loops were originally static, unchanging, and very short, due to space, processing, and programming restrictions. Looping length and dynamism increased over time, giving rise to the extended linear loop, and dynamic loops that are continuously changing.
6.7.1 Linear Loops
The majority of early looping music is linear: the loop progresses, and once it reaches a certain point, it simply repeats, exactly the way it was played the first time. Linear loops do not change based on the gameplay, unless the gameplay directs the music to change to a different linear loop. Composers sought some diversity within linear loops so that they would not become boring, but the loops still repeat endlessly, without variation. An example of a linear looping piece of music is the Mario overworld theme. Every time it completes, it repeats exactly as it was stated before. However, Koji Kondo creates some variation within this loop by structuring it in a less predictable way: he reorders smaller segments of the composition so that the themes are not always played in the same order. The resulting pattern is not simply a repeating alternation of two or three themes, and the adjustments to the order of the sections suggest that keeping the loop interesting was a concern for the composer. During the 16-bit and 32-bit eras, it became possible to make longer linear loops, which allowed for more variance. However, these loops would still repeat endlessly. While this may be a composer’s intention, especially where character or location themes, rather than interactivity, are at the forefront of the compositional intent, sound teams increasingly sought ways to vary these loops and add dynamism to the music.
6.7.2 Dynamic Loops
It is also possible to have music that loops or repeats itself in the game with minor variations. One example of such variation is a piece of music that both loops and uses vertical re-orchestration: while the loop occurs continuously, the instruments change, making the loop dynamic. This method of looping is very common in games with very short loops, especially mobile and casual games. Transposition can also be used to change loops; an early example of using transposition to add dynamism to a loop occurs in Rally-X, as the 2-bar theme is transposed to a lower pitch the second time. The soundtrack for Rally-X itself, however, is not a dynamic loop. For such a technique to be dynamic, the transposition would have to happen differently each time the loop occurs, or at least during some iterations of the loop. This is now possible, and adding a slight adjustment such as a transposition during loops can be subtle but effective. The general intent behind dynamic loops is that the music is allowed to remain the same without becoming tedious. Martin O’Donnell stated, regarding the Halo soundtrack, that:
“The most important feature… is that it contains enough permutations and the proper randomization so that players do not feel like they’re hearing the same thing repeated over and over. Even the greatest and most satisfying sound, dialog or music will be diminished with too much repetition. It is also important to have the ability to randomize the interval of any repetition. It might be difficult to get the sound of one crow caw to be vastly different from another, but the biggest tip off to the listener that something is artificial is when the crow always caws after the leaf rustle and before the frog croak every thirty seconds or so. The exception to that rule are specific game play sounds that need to give the player immediate and unequivocal information, such as a health meter.”
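The randomized-transposition idea described above can be sketched minimally in Python. The pitches and the set of candidate intervals below are illustrative, not taken from Rally-X or any actual game; weighting the unison more heavily follows O'Donnell's point that variation should be present but not constant.

```python
import random

# A short loop as MIDI note numbers (a made-up two-bar figure,
# not the actual Rally-X theme data).
LOOP = [60, 64, 67, 64, 60, 64, 67, 72]

def next_pass(loop, transpositions=(-5, 0, 0, 0, 2)):
    """Return one pass of the loop, transposed by a randomly chosen
    interval in semitones.

    Listing 0 three times weights the unweighted-choice toward the
    original pitch level, keeping the variation subtle: most passes
    repeat exactly, some are shifted down a fourth or up a whole tone.
    """
    shift = random.choice(transpositions)
    return [note + shift for note in loop]
```

Because each pass chooses its shift independently, the same loop can be held indefinitely without the listener ever being certain which pitch level comes next, which is exactly the kind of repetition-masking the quoted passage advocates.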
6.8 Linear Music
Not all situations in games are nonlinear, and therefore not all music is required to loop. Especially with the rise of FMVs, linear music is needed at certain times in games. FMV cut scenes that break out of gameplay mode generally use linear music. Title sequences can be another example. While title music does loop, the video loops along with it, creating a picture lock. The difference between this and looping music is the function and compositional intent: rather than interactive gameplay with music that matches the visuals differently on each loop, the music and the picture are always the same, essentially as if you were replaying the same movie over and over. The exception is title sequences in which the screen locks on the menu select once the sequence has finished, but for the purposes of this book I will still term this music linear because of its function and intent. Another important clarification is that linear music is not the same as linear looped music. While the two terms sound very similar, linear looped music is designed to accompany gameplay, creating a never-ending musical background to ever-changing visuals, whereas linear music is locked to fixed visuals. As the need for FMVs to cut away from gameplay decreases with the higher processing power of consoles, linear music within the game (outside the title sequence) becomes less common.
6.9 Generative Music
Generative music includes any type of music that is generated in-game, based on pre-written algorithms that determine what musical sounds to play next. The game Rez, examined above, is (loosely) one example: the player generates a soundtrack during gameplay. Another example is Spore (2008), which uses the interactive music software Pure Data (Pd) to generate the score from pre-written algorithms. The game follows the player as they develop a species from the very beginnings of single-celled life through complex civilization. This method of gameplay is essentially generative, with the results of the development dependent on player actions and pre-written game algorithms; generative music therefore presents an appropriate backdrop, as the music is generated from the actions of the game, which are in turn generated by the actions of the player. While generative music can sound a lot like dynamic looping, the difference between the two is that generative music progresses endlessly, with no components that behave in a looping manner. Horizontal re-sequencing can be an element of a type of generative music, but conceptually it is a form of dynamic looping music. There are, however, situations in which a large number of pre-recorded stems are played back throughout the game by a pre-designed algorithm, without the stems belonging to any specific composition in the game. An example is Red Dead Redemption, whose soundtrack consists of a multitude of stems, recorded by the music team, that are selected for playback algorithmically throughout the game. To make it possible for any stem to accompany any other, the musicians intentionally recorded all of the stems in the same key. Evaluated solely on this basis, one could describe the soundtrack as a single large dynamic loop.
However, the composers created the stems with the knowledge that there would be several different tracks within the game. Therefore, just as with the distinctions between interactive, reactive, and adaptive music, the distinction between generative music and dynamic looping music lies in the musical intention: one is designed to provide variety within looping music, the other to create music that is continuously new.
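The stem-based playback described above can be sketched as follows. The stem names, the game-state tags, and the selection rule are all hypothetical assumptions for illustration; they are not the actual Red Dead Redemption system:

```python
import random

# Hypothetical stem library. Every stem is assumed to be recorded in the
# same key, so any combination can be layered without clashing harmonically.
STEMS = {
    "bass_drone":  {"tension": "low"},
    "guitar_line": {"tension": "low"},
    "snare_pulse": {"tension": "high"},
    "horn_stabs":  {"tension": "high"},
    "string_pad":  {"tension": "any"},
}

def pick_stems(game_tension: str, max_layers: int = 3) -> list[str]:
    """Algorithmically select stems matching the current game state."""
    candidates = [name for name, tags in STEMS.items()
                  if tags["tension"] in (game_tension, "any")]
    random.shuffle(candidates)   # vary the combination on each selection
    return candidates[:max_layers]

# A shuffled selection of stems tagged "high" (or "any"):
print(pick_stems("high"))
```

Because the selection is driven by game state rather than by a fixed composition, the layering changes continuously, which is what distinguishes this approach from a single authored loop.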
6.10 Character Perception of Music
For the most part, game soundtracks exist outside of the world that the characters experience. Sometimes, however, there is music and/or sound that a character in the game can hear. Sounds that exist within the game's world are called diegetic, whereas soundtrack elements that the characters cannot hear within that world are called non-diegetic. These terms are derived from film music theory, but are applicable to game music as well. Non-diegetic music in video games generally consists of soundtrack elements, although some soundtrack elements (such as a radio playing) may be diegetic. Diegetic music is present in many games, consisting of music that the player may create, as well as music that is heard within the environment. This may include music on radios and TVs in the background, or music that is performed by other characters during the game. Diegetic music is becoming more common as game environments become more immersive, serving either to enhance the environment or to provide an interactive element. Radios are often used to add gravity to game sound environments, as in Portal and BioShock. As the player approaches a lift in BioShock, for example, a melody plays inside the lift; because of the setting and the way the sound is used to enhance the environment, these radios add a dimension of realism. Portal contains no background music, only sound effects and occasional dialogue from GLaDOS. However, there are radios throughout the game that do play music, and the scarcity of these radios, along with the lack of music elsewhere, gives their appearances more gravity and impact. Diegetic sounds can also be used for interactive elements, as in Minecraft, where a player can place a jukebox and select which music disc it plays; this selection also gives the player a feeling of control over the environment.
The interactive nature of gameplay results in the need for music with special qualities. Most of these qualities derive from the unique requirement that game music be unending, and from the lack of the sound-picture lock present in film. Video games are simply not capable of producing picture lock, except in FMV sequences, and even then it is not always precise. Terminology surrounding video game music is emergent, and varies depending on whether the source is academic or industry-based. Unlike electroacoustic concert music, video game music lacks a unified body of scholarship, and this text serves to provide a terminology that is accessible and that describes the music as it behaves functionally in game. Game interactivity does not automatically mean that the music is interactive, nor does infinite music mean that it is simply looping. As interactive media persists and continues to comprise a large portion of media consumption, this terminology may evolve and change. Many years ago most game music was reactive, and generative music was not prevalent. Each generation of games presents its own challenges and musical needs, and this, too, is unique to interactive media: in film, even though musical styles have vastly changed, musical function and purpose have not. It is also important to remember that not all of these terms describe video game music immutably; there are continuums between them, and new games are continuously released that challenge standards, especially in sound. However, knowing these terms allows you to apply them to the music you hear in the games you play, and to understand why there may even be a question of whether a piece of music is, for example, adaptive or interactive. These terms can also help begin a conversation about unifying video game music scholarship with industry-focused publications.
 Drummond, Jon. “Understanding interactive systems.” Organised Sound 14.02 (2009): 124-133.
 Rowe, Robert. Interactive Music Systems: Machine Listening and Composing. MIT Press, 1992.
 EARS ElectroAcoustic Resource Site, “Definition of Adaptive Sound.” http://ears.pierrecouprie.fr/spip.php?article17, accessed May 21, 2016.
 Collins, Karen. Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design. MIT Press, 2008, p. 51.
 Mackey, Bob, “Day of the Tentacle Composer Peter McConnell on Communicating Cartooniness.” US Gamer online, March 7, 2016, http://www.usgamer.net/articles/day-of-the-tentacle-composer-peter-mcconnell-on-communicating-cartooniness, accessed May 6, 2017.
 Phillips, Winifred, A Composer’s Guide to Game Music, MIT Press, 2014.
 Parkin, Simon. “Oral history of Rez recounts a marriage of game and music.” Gamasutra online, March 17, 2016, http://www.gamasutra.com/view/news/268364/Oral_history_of_Rez_recounts_a_marriage_of_game_and_music.php, accessed May 6, 2017.
 See video online at: http://www.rockstargames.com/newswire/article/7361/behind-the-scenes-of-the-red-dead-redemption-soundtrack.html, accessed May 6, 2017.
 Zezima, Katie. “Virtual Frets, Actual Sweat.” NY Times online, July 15, 2007, http://www.nytimes.com/2007/07/15/fashion/15guitar.html, accessed May 6, 2017.
 Sound Shapes, Sony Computer Entertainment, 2013.
 Collins, Karen, Playing With Sound: A Theory of Interacting With Sound and Music in Video Games, MIT Press, 2013.
 Stuart, Keith, “Redemption Songs: the Making of the Red Dead Redemption Soundtrack.” The Guardian online, May 26, 2010, https://www.theguardian.com/technology/gamesblog/2010/may/26/red-dead-redemption-soundtrack, accessed May 6, 2017.