Sound, Movement, and Imagination
In a recent post, I discussed my findings and understanding of Laban Movement Analysis (LMA) and Labanotation (LN) and their relationship with dance composition. Beyond dance, LMA and LN can also be used in conjunction with sound/music and computer interaction, offering novel ways of exploring the relationship between movement, perception, and sonic narratives, and opening new possibilities for understanding and co-authoring sonic experiences through body movement and generative sound.
One major goal of this recent research was to explore the viability and potential of the intersection of dance, procedural audio, and narrative as a research topic. This led me to dig deeper into the literature on the subject. The Oxford Handbook of Sound and Imagination, volumes one and two, explores the creative potential of these techniques (among many other topics and areas of research), which allow artists, designers, and researchers to create dynamic and ever-evolving sonic environments. Various chapters from these two volumes were the main focus of my recent exploration.
Procedural and generative audio techniques involve the creation of sound through algorithmic processes, as opposed to traditional sound recording and synthesis methods, though electronic synthesis and DSP can themselves be performed generatively or procedurally (Unreal Engine's MetaSounds is one example). Auditory imagery refers to the mental representation of sound, which plays a critical role in how we perceive and engage with audio experiences.
Part of what needed to be accomplished was to identify how relationships between procedural sound, auditory imagery, and movement could begin to constitute narratives. By using algorithms that respond to external inputs, procedural audio can adapt to changes in a user's environment or behavior, creating unique and immersive experiences. And by understanding how the brain processes and recreates sound, we can develop techniques that better engage listeners' imaginations and create more immersive narrative experiences.
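The idea of procedural audio responding to external inputs can be sketched in a few lines. This is a minimal, hypothetical example, not from the handbook or any specific engine: a sine tone is generated algorithmically, and an external input (here a normalized movement speed from 0 to 1) drives its pitch and loudness. The specific mapping ranges are my own assumptions for illustration.

```python
import math

def generate_tone(input_speed, duration=0.5, sample_rate=44100):
    """Procedurally generate a sine tone whose pitch and loudness
    adapt to an external input (a normalized movement speed, 0..1).

    The mapping is a hypothetical example: faster movement raises
    both the pitch and the amplitude.
    """
    freq = 220.0 + 660.0 * input_speed   # 220 Hz (slow) .. 880 Hz (fast)
    amp = 0.2 + 0.6 * input_speed        # quieter when the user is slow
    n = int(duration * sample_rate)
    return [amp * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n)]

slow_buffer = generate_tone(0.1)   # low, quiet tone
fast_buffer = generate_tone(0.9)   # higher, louder tone
```

In a real system the input would arrive continuously from sensors and the buffer would be streamed to an audio device, but the core principle, sound as the output of an algorithm driven by live data, is the same.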
Perception and Understanding
The principles of LMA can be combined with the analysis of sound and music, offering insights into the perception of auditory events and their relationship with human movement. By examining the Effort components of LMA – Space, Weight, Time, and Flow – we can better understand the dynamics, articulation, and expressiveness of sound and music. We are combining these two concepts and mapping them to parameters that can be manipulated in real-time.
For example, a heavy, sustained sound may be associated with a strong Weight effort, while a light, staccato sound may be linked to a light Weight effort. Similarly, the Time effort factor can be used to describe the rhythmic aspects of music or its perceived duration, while the Space and Flow factors can be employed to explore the spatial aspects of sound.
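The correlations above can be made concrete as a parameter mapping. The sketch below is a hypothetical illustration, assuming each Effort factor has been normalized to a value in [-1.0, 1.0] (e.g. light vs. strong Weight, sustained vs. sudden Time); the specific sound parameters and ranges are my own choices, not a standard.

```python
def effort_to_sound_params(weight, time, space, flow):
    """Map LMA Effort factors, each normalized to [-1.0, 1.0], onto
    hypothetical sound-synthesis parameters.

    weight: -1 = light .. +1 = strong     -> loudness
    time:   -1 = sustained .. +1 = sudden -> envelope attack
    space:  -1 = indirect .. +1 = direct  -> stereo width (spatial focus)
    flow:   -1 = free .. +1 = bound       -> filter damping
    """
    return {
        "gain_db": -18.0 + 12.0 * (weight + 1) / 2,            # strong Weight = louder
        "attack_ms": 250.0 - 230.0 * (time + 1) / 2,           # sudden Time = sharp attack
        "stereo_width": 1.0 - 0.8 * (space + 1) / 2,           # direct Space = narrow focus
        "filter_cutoff_hz": 8000.0 - 6000.0 * (flow + 1) / 2,  # bound Flow = damped timbre
    }

heavy_sustained = effort_to_sound_params(weight=1.0, time=-1.0, space=0.0, flow=0.0)
light_staccato = effort_to_sound_params(weight=-1.0, time=1.0, space=0.0, flow=0.0)
```

A heavy, sustained gesture then yields a loud sound with a slow attack, while a light, staccato gesture yields a quiet sound with a sharp attack, mirroring the correspondences described above.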
I recently put some of these correlations into practice through a small experimental prototype built with Unreal Engine 5.1 and Kinect body tracking, testing how a game-engine pipeline could support this form of interaction. These posts document that exploration.
While this type of experimental media project is not new (look no further than the performative works of composer, artist, and performer Pamela Z), there is still a lot of room in the field(s) of research around these topics. For instance, using LMA concepts to analyze sound and music can help us better understand the expressive qualities and emotional content of auditory experiences, and how they relate to our movements and perceptions of shared events and spaces, potentially creating novel forms of play and interaction in gaming, XR, and immersive installations, to name a few contexts.
Co-authoring Sonic Narratives through Body Movement and Generative Sound
Findings published in a chapter of The Oxford Handbook of Sound and Imagination by Tuuri and Peltola suggest that listening to and imagining sounds together, in real or virtual shared worlds, is a form of pretend play enacted through embodied self-agency. Experiencing sounds together presents opportunities for users/listeners to engage in the formation of identities and narratives through self-reasoning and shared expression.
The integration of movement concepts with sound and music concepts (specifically Western traditions of music theory/notation, and electronic music) opens up new possibilities for co-authoring sonic narratives using body movement and generative sound technologies together. By mapping the parameters of LMA to the control of sound synthesis or processing algorithms, we can create interactive systems that allow for the real-time generation of soundscapes and musical compositions based on the user's movements.
As a theoretical example, a performer's movement qualities, as analyzed through LMA components, can be used to control the pitch, timbre, dynamics, and spatialization of sound. This enables the performer to "shape" the sonic narrative through their physical movements, effectively "composing" music or soundscapes in real-time.
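This theoretical example can be sketched as a per-frame control loop. The code below is a simplified illustration under my own assumptions: a single tracked joint position (such as a hand from a Kinect skeleton, in meters) yields speed and height features, which are then mapped to hypothetical pitch and dynamics parameters. A real system would track many joints, derive fuller Effort qualities, and feed the parameters to a synthesizer.

```python
import math

def movement_features(prev_pos, curr_pos, dt):
    """Derive simple per-frame features from one tracked joint.
    Positions are (x, y, z) tuples in meters; dt is the frame time in seconds."""
    speed = math.dist(prev_pos, curr_pos) / dt  # meters per second
    height = curr_pos[1]                        # vertical position
    return speed, height

def control_synth(speed, height):
    """Hypothetical mapping: hand height shapes pitch (one octave per meter,
    clamped to a two-octave range), movement speed shapes dynamics."""
    pitch_hz = 110.0 * 2 ** max(0.0, min(height, 2.0))
    gain = max(0.0, min(speed / 2.0, 1.0))  # 2 m/s or faster = full volume
    return {"pitch_hz": pitch_hz, "gain": gain}

# One frame of a fast upward hand movement over half a second:
speed, height = movement_features((0.0, 0.5, 0.0), (0.0, 1.5, 0.0), 0.5)
params = control_synth(speed, height)
```

Each frame, the resulting parameters would be sent to the audio engine, letting the performer "shape" the sound continuously as they move.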
If a multi-user application were built around a similar system, a form of improvisational or play-like interaction could arise: two or more individuals, along with the system, creating soundscapes and narratives together, playing off each other's real-time manipulations of auditory data. By empowering users to explore and manipulate sound through movement, these systems can foster a deeper connection between the body and the sonic environment and facilitate a more intuitive understanding of the relationship between movement, sound, and music.