Bringing the A.O.S.C. project to the next level – the agents play their music!
In the previous setup the agents functioned only as the ears of the system: they listened to the acoustic environment and sent audio spectrum data to the main server. Everything else – analysing the data streams, calculating the parameters for the sound synthesis, and playing back the music – happened on the server side.
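To give a feel for that agent role, here is a minimal sketch of the analyse-and-send loop. It assumes the agents talk OSC over UDP via python-osc; the server host, the port 9000, the address /aosc/spectrum and the band count are illustrative assumptions, not the project's actual protocol.

```python
# Hypothetical sketch of the old agent role: analyse one block of audio
# and send its magnitude spectrum to the main server.
# Assumes OSC over UDP (python-osc); host, port and OSC address are made up.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

SERVER = SimpleUDPClient("aosc-server.example.org", 9000)  # hypothetical server

def send_spectrum(samples: np.ndarray) -> None:
    """Window one block of audio, take its magnitude spectrum, send it off."""
    windowed = samples * np.hanning(len(samples))
    mags = np.abs(np.fft.rfft(windowed))
    # reduce to a handful of bands so the message stays small
    bands = [float(chunk.mean()) for chunk in np.array_split(mags, 16)]
    SERVER.send_message("/aosc/spectrum", bands)

# in the real agent the samples would come from the microphone input;
# here we just demonstrate with noise
send_spectrum(np.random.randn(1024))
```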
Now we bring the music back to the agents!
The server now only does the calculations and sends the stream of parameters back to the agents, where the music is generated by a Pd patch.
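On the agent side, the incoming parameter stream has to reach the local Pd patch somehow. Here is a minimal sketch of such a relay, assuming OSC over UDP for the network leg and Pd's FUDI protocol (a [netreceive] object) for the local leg; the ports, the /aosc/params address and the "params" selector are illustrative, not the project's actual wiring.

```python
# Hypothetical agent-side relay: receive the parameter stream from the
# server via OSC and hand it to the local Pd patch through [netreceive].
# Ports, the OSC address and the FUDI selector "params" are assumptions.
import socket
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# the Pd patch is assumed to contain a [netreceive 3000] object
pd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
pd.connect(("127.0.0.1", 3000))

def on_params(address, *values):
    # FUDI: space-separated atoms terminated by a semicolon
    msg = "params " + " ".join(str(v) for v in values) + ";\n"
    pd.sendall(msg.encode("ascii"))

dispatcher = Dispatcher()
dispatcher.map("/aosc/params", on_params)
BlockingOSCUDPServer(("0.0.0.0", 9001), dispatcher).serve_forever()
```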
That means you can sit beside an agent, listen to your actual acoustic environment and – through headphones – to the collective electronic soundscape composed by all the networked agents.
This system now makes it possible to directly observe the impact of acoustic events at the agent's location, and allows direct acoustic interaction with the system.
At the moment we have four agents running: Toronto, Lviv, Sydney and Bonaforth.
Currently we are working on the algorithm that receives the data streams and calculates the control data for the music.
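Since that algorithm is still in development, the following is only a hypothetical illustration of the kind of mapping involved: reducing an incoming spectrum to a single feature (the spectral centroid) and scaling it logarithmically onto a pitch range. Nothing here reflects the actual A.O.S.C. mapping.

```python
# Purely illustrative mapping from an incoming spectrum to one control
# value; the real A.O.S.C. algorithm is still being worked out.
import math
import numpy as np

def spectral_centroid(mags: np.ndarray, sr: int = 44100) -> float:
    """Centre of mass of a magnitude spectrum, in Hz."""
    freqs = np.linspace(0.0, sr / 2.0, len(mags))
    total = mags.sum()
    return float((freqs * mags).sum() / total) if total > 0 else 0.0

def centroid_to_pitch(hz: float, lo: float = 36.0, hi: float = 96.0) -> float:
    """Map a centroid between 50 Hz and 10 kHz logarithmically onto a MIDI pitch range."""
    hz = min(max(hz, 50.0), 10000.0)
    t = (math.log(hz) - math.log(50.0)) / (math.log(10000.0) - math.log(50.0))
    return lo + t * (hi - lo)

mags = np.abs(np.fft.rfft(np.random.randn(1024)))    # stand-in spectrum
print(centroid_to_pitch(spectral_centroid(mags)))    # one control value
```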
Here are some snippets.
aosc170731 is a YouTube video with synchronized visualisation of the data streams.