Archive for the ‘butter’ Category
Former Disney audio experience engineer Mr. Q reveals how he helped develop a complex algorithm for arranging over 15,000 speakers around the Disney World theme park, all in pursuit of the ideal ambient music for “manufacturing emotion.”
The last time I visited Disney World, I was a bit distracted by the nausea that followed one too many rides after five too many scoops of ice cream. On the visits before that, though, I was entirely clearheaded. Yet not once did I notice the ever-present background music switch tunes.
Mr. Q would be laughing maniacally if he read those words. That’s because those words mean that his baby, the project he worked on in the 1990s, grew up to be a success.
Apparently the original Disney World speaker system, set up in 1968, had a nearly unnoticeable flaw: minuscule variations in sound volume along the pathways. As someone walked closer to a speaker, the music seemed louder than it had a few steps away. Although not a single visitor ever complained about this everyday acoustic quirk, twenty years later good ol’ Mickey decided to do something about it. After some work and a team effort, they had Mr. Q’s system and algorithm:
The system he built can slowly change the style of the music across a distance without the visitor noticing. As a person walks from Tomorrowland to Fantasyland, for example, each of the hundreds of speakers along the way slowly fades in different melodies at different frequencies, so that at any point you can stop and hear a complete, coherent piece of music. Yet by the time you have walked 400 feet, the entire song has changed, and no one has noticed.
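The gradual fade described above can be made concrete with an equal-power crossfade, a standard audio technique for blending two sources without a dip or bump in loudness. This is only a minimal sketch of the idea, not Disney's actual algorithm; the path length, zone names, and gain curve are illustrative assumptions.

```python
import math

def crossfade_gains(position_ft: float, path_length_ft: float = 400.0):
    """Return (gain_a, gain_b) for the outgoing and incoming themes.

    Uses an equal-power crossfade so perceived loudness stays roughly
    constant: gain_a^2 + gain_b^2 == 1 at every point along the path.
    """
    t = min(max(position_ft / path_length_ft, 0.0), 1.0)  # progress in [0, 1]
    gain_a = math.cos(t * math.pi / 2)  # outgoing theme fades out
    gain_b = math.sin(t * math.pi / 2)  # incoming theme fades in
    return gain_a, gain_b

# Each speaker along the walkway would mix both themes at its own position:
for pos in (0, 100, 200, 300, 400):
    a, b = crossfade_gains(pos)
    print(f"{pos:3d} ft: theme A {a:.2f}, theme B {b:.2f}")
```

At the midpoint both themes play at about 0.71 gain, which sums to full perceived loudness; a naive linear crossfade would instead sound quieter in the middle.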
So how does a system that strives to go unnoticed manufacture emotion? According to Mr. Q, the “life is sucked out of” the park when the speakers fail. Even a slightly flawed speaker system can bring on frowns, while a flawless musical ambiance leaves only Goofy’s creepiness to cause them.
The Audeo, invented by Michael Callahan, works like this: three pill-size electrodes on the throat pick up electrical signals generated between the brain and the vocal cords. A processor in the device then filters and amplifies the signals and sends them to an adjacent PC, where software decodes them and turns them into words spoken through the PC’s speakers. By placing the electrodes on the neck and “speaking” silently through vocal-cord movements (but without moving the mouth), the wearer generates enough neural activity to trigger this chain of events.
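The chain described above, capture, filter, amplify, decode, can be sketched as a toy pipeline. Every stage here is an illustrative stand-in (a moving-average filter, a fixed gain, a threshold "decoder"), not the Audeo's real software, whose internals are not public.

```python
def smooth(signal, window=3):
    """Crude noise filter: moving average over the raw electrode samples."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def amplify(signal, gain=10.0):
    """Boost the filtered signal, as the device's processor would."""
    return [s * gain for s in signal]

def decode(signal, threshold=5.0):
    """Stand-in decoder: report whether the activity crosses the level
    a real classifier would map to an intended word."""
    return "word detected" if max(signal) > threshold else "silence"

raw = [0.1, 0.2, 0.9, 1.2, 0.8, 0.1]   # fake electrode samples
print(decode(amplify(smooth(raw))))
```

A real decoder would run a trained speech model over the signal rather than a single threshold, but the staged structure is the same.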
Audeo is capable of more than just giving a voice to the physically impaired, though. It could be used to speak on the phone without ever actually vocalizing anything, opening up possibilities for fantastical spy or military applications. And it could one day get rid of that is-he-talking-to-me-or-someone-on-the-phone confusion around people wearing Bluetooth headsets.
Frozen is an exhibition of experiments in representing sound in media beyond the auditory. It examines the sound signal as a virtual space, presenting possible mappings that visualize or interpret the structures contained within the soundwaves, and it represents sound as spatial structures, realized as physical objects through the use of digital fabrication technologies.
Frozen pulls the plug and presents audio art, prints, and sculptures as independent, but interconnected works of art.
Multi-channel sound pieces can be experienced over an advanced speaker setup, accompanied by sound in a “frozen” form: images and sculptural objects made using sound as input. These artworks use audio analysis and custom software processes to extract meaningful data from the sound signal, creating a mapping between audio and other media. Frozen features digital prints as well as four “sound sculptures” created with digital fabrication technologies such as rapid prototyping, CNC milling, and laser cutting, which allow for the direct translation of a digital model into physical form.
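The "sound as input" idea can be sketched in a few lines: extract a simple feature from the signal (here, a windowed RMS loudness envelope) and map it to heights of a fabricatable relief. Both the feature and the mapping are illustrative assumptions; the exhibition's custom software is not public.

```python
import math

def rms_envelope(samples, window=4):
    """Windowed RMS: one loudness value per block of samples."""
    env = []
    for i in range(0, len(samples), window):
        block = samples[i:i + window]
        env.append(math.sqrt(sum(s * s for s in block) / len(block)))
    return env

def to_relief(envelope, max_height_mm=30.0):
    """Map loudness to z-heights (mm) for a fabricated strip."""
    peak = max(envelope) or 1.0
    return [round(e / peak * max_height_mm, 1) for e in envelope]

# A fake burst: quiet, loud, quiet.
signal = ([0.1 * math.sin(i) for i in range(8)] +
          [0.9 * math.sin(i) for i in range(8)] +
          [0.2 * math.sin(i) for i in range(8)])
print(to_relief(rms_envelope(signal)))
```

The resulting list of heights is the kind of digital model that a CNC mill or 3D printer could translate directly into physical form.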
With our eyes we see the world in perspective, and as such we look at only a fragment of the world surrounding us. Our vision is selective, since we have to look in the direction of what we wish to see. Vision also places us at the periphery of our world: from there we look into our environment, finding ourselves at a distance from what we see. This distance is a necessary one, since we cannot see or survey things that are too close.

Our ears, on the contrary, situate us in the middle of our environment. The sounding space is anthropocentric (regarding humankind as the center of existence) in nature. Literally speaking, we cannot close our ears the way we can close our eyes. Sound, as a consequence of our spherical hearing, therefore informs us about actions and sounding phenomena taking place outside our visual perspective. In other words, sound enhances the visual space, and the form of our ears and the distance between them enable us to position a sounding object quite precisely in that space.

Furthermore, the eye cannot see through objects. Since we do not have x-ray vision, the eye reaches only the surface of things. Sound, on the other hand, can be heard through solid objects like a wall, and it runs around corners because of the way sound waves diffract. Consequently, we are able to hear what is happening behind a closed door. As formulated by Victor Zuckerkandl, “the eye discloses space to me in that it excludes me from it,” while the ear “discloses space in that it lets me participate in it” [1, p. 291].

We also participate in space in that sound moves us, not just in the psychological sense but also in the physical sense. Sound grasps the body and shakes it, so to speak. In short, sound immerses the listener in the world. It makes the environment come alive.
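The claim that the distance between our ears lets us position a sounding object can be made concrete with the interaural time difference: a sound off to one side reaches the nearer ear slightly earlier, and the brain reads that delay as an angle. A back-of-envelope sketch, using the common simplified formula ITD ≈ (d / c) · sin θ, with typical approximate values:

```python
import math

EAR_SPACING_M = 0.21    # roughly 21 cm between the ears
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(angle_deg: float) -> float:
    """Delay between the two ears for a source at angle_deg
    from straight ahead (simplified ITD = d/c * sin(theta))."""
    return EAR_SPACING_M / SPEED_OF_SOUND * math.sin(math.radians(angle_deg))

for angle in (0, 30, 90):
    print(f"{angle:3d} deg -> {itd_seconds(angle) * 1e6:.0f} microseconds")
```

Even at its maximum (a source directly to one side), the delay is only about 600 microseconds, yet that is enough for the auditory system to localize the source.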