This is my first "performance" test of MetaSounds for Airports.
This first working prototype/experimentation test was achieved by combining the 3D Labanotation for visual feedback of audio parameters (along with the pivot arrows from the Kinect) and the PD patch turned MS source discussed in earlier posts.
There are some fundamental differences between the earlier version and this new one. Mainly, the audio playback has been broken into parts, with the left hand controlling note randomization triggers and the right hand controlling random chord selection.
There is also one big difference in the basic node structure. Previously there were two related issues: audio parameters were being set outside of the same execution stack as the audio spawning, and things were triggering every frame. The first part of the fix was to have the MS control the "once only" triggering of sounds using a Trigger Once node. The second part was to pump the hand velocity data into the MS and let the MS determine when to play, rather than having the BP start and stop the audio stream. See the example below.
Updated triggers in the MS Source
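The once-only start and the velocity-based triggering now live inside the MetaSound graph shown above. As a rough illustration of that logic in plain C++ (this is not MetaSounds API code; the struct, member names, and threshold value are invented for the example):

```cpp
// Plain C++ illustration of the gating logic now handled inside the MetaSound graph.
// NOT MetaSounds API code; names and the threshold value are invented for this sketch.
struct FHandTriggerGate
{
    bool  bPlaybackStarted = false;   // "Trigger Once" idea: playback is only started a single time
    bool  bWasAboveThreshold = false; // edge detection so notes trigger on crossings, not every frame
    float VelocityThreshold = 50.f;   // assumed hand-velocity threshold

    // Called once per frame with the hand velocity streamed in from the BP.
    // Returns true only on frames where a note trigger should fire.
    bool ShouldTrigger(float HandVelocity)
    {
        const bool bAbove = HandVelocity > VelocityThreshold;
        const bool bRisingEdge = bAbove && !bWasAboveThreshold;
        bWasAboveThreshold = bAbove;

        if (bRisingEdge && !bPlaybackStarted)
        {
            bPlaybackStarted = true; // first crossing starts the sound once
        }
        return bRisingEdge;          // later crossings retrigger note randomization
    }
};
```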
Making these changes allowed parameters in the BP to be created and executed on different stacks: the audio source can be spawned with a Do Once node on an execution stack driven by a custom event, while audio parameters that need to be updated/checked every frame are handled on the Event Tick execution stack. See the updated node network below.
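For reference, here is a minimal C++ sketch of that same two-stack layout. The actor class, parameter names ("LeftHandVelocity", "RightHandVelocity"), and the way the Kinect data arrives are all assumptions for illustration, not the actual project's Blueprint.

```cpp
// Sketch of an actor mirroring the Blueprint layout described above.
// Class, variable, and parameter names are placeholders, not the project's own.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"
#include "MetaSoundPerformerActor.generated.h"

UCLASS()
class AMetaSoundPerformerActor : public AActor
{
    GENERATED_BODY()

public:
    AMetaSoundPerformerActor()
    {
        PrimaryActorTick.bCanEverTick = true; // parameter updates run on Tick
    }

    // Custom-event / Do Once equivalent: spawn the MetaSound source exactly once.
    UFUNCTION(BlueprintCallable)
    void StartPerformance()
    {
        if (!MetaSoundComponent && PerformanceMetaSound)
        {
            MetaSoundComponent = UGameplayStatics::SpawnSound2D(this, PerformanceMetaSound);
        }
    }

    // Event Tick equivalent: stream per-frame hand data into the MetaSound
    // and let the graph decide when anything actually plays.
    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        if (MetaSoundComponent)
        {
            // Hypothetical float inputs exposed on the MetaSound source.
            MetaSoundComponent->SetFloatParameter(TEXT("LeftHandVelocity"), LeftHandVelocity);
            MetaSoundComponent->SetFloatParameter(TEXT("RightHandVelocity"), RightHandVelocity);
        }
    }

    UPROPERTY(EditAnywhere)
    USoundBase* PerformanceMetaSound = nullptr; // the PD-patch-turned-MetaSound source

    UPROPERTY()
    UAudioComponent* MetaSoundComponent = nullptr;

    // Assumed to be fed from the Kinect tracking elsewhere.
    float LeftHandVelocity = 0.f;
    float RightHandVelocity = 0.f;
};
```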