bird_12-9-7b

Recorded in the installation at the Haus der Braunschweigischen Stiftungen – in the garden – within range of sensor units 12, 9 and 7.
Each of these three sensor units is equipped with two piezo sensors that detect the movements of the branches of the plants. Unit #7 is in a tree, unit #9 on a bush and unit #12 on a hedge. The sensor data is sent via a radio network to the central computer, where it is converted into control data for the sound synthesis of these three locations. Via Wi-Fi, the data is sent to the user’s smartphone at the respective location and, together with the smartphone’s motion sensors, controls the sound generated in the app.
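
How this conversion might look in code: the sketch below is a simplified illustration under assumptions, not the installation code, showing how a raw piezo reading could be smoothed and scaled into a normalized control value for the sound synthesis.

    #include <algorithm>

    // Simplified sketch under assumptions, not the installation code:
    // smooth a raw 10-bit piezo reading and scale it to a normalized
    // control value for the sound synthesis.
    struct PiezoMapper {
        float smoothed = 0.0f;  // exponentially smoothed sensor value
        float alpha    = 0.1f;  // smoothing factor (assumed)

        // raw: ADC reading from the piezo sensor, 0..1023
        // returns a control value in 0..1
        float process(int raw) {
            float x = std::clamp(raw, 0, 1023) / 1023.0f;
            smoothed += alpha * (x - smoothed);
            return smoothed;
        }
    };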

The Agents Play Their Music


Bringing the A.O.S.C. project to the next level – the agents play their music!

In the previous setup the agents functioned only as the ears of the system, listening to the acoustic environment and sending the audio spectrum data to the main server. Everything else – the analysis of the data streams, the calculation of the parameters for the sound synthesis, and the playback of the music – happened on the server side.

Now we bring the music back to the agents!

The server now only does the calculations and sends the stream of parameters back to the agents, where the music is generated by a Pd patch.
That means you can sit beside an agent and listen to your actual acoustic environment and – through headphones – to the collective electronic soundscape composed by all the networked agents.
The system thus makes it possible to directly observe the impact of acoustic events at the agent’s location, and to interact acoustically with the system.

At the moment we have four agents running: Toronto, Lviv, Sydney and Bonaforth.

Currently we are working on the algorithm that receives the data stream and calculates the control data for the music.
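
As a rough illustration of the direction this work is taking – a hypothetical sketch, not the actual algorithm – the server could reduce each incoming spectrum frame to a few features and map them to control data for the agents:

    #include <algorithm>
    #include <vector>

    // Hypothetical sketch, not the actual algorithm: reduce one frame of
    // spectrum magnitudes to two features and map them to control data.
    struct ControlFrame {
        float pitch;      // base frequency for the agent's Pd synth, in Hz (assumed)
        float amplitude;  // overall level, 0..1
    };

    ControlFrame analyze(const std::vector<float>& magnitudes, float sampleRate) {
        if (magnitudes.empty()) return {40.0f, 0.0f};
        float energy = 0.0f, weighted = 0.0f;
        for (size_t i = 0; i < magnitudes.size(); ++i) {
            // center frequency of bin i for an FFT of size 2 * magnitudes.size()
            float freq = i * sampleRate / (2.0f * magnitudes.size());
            energy   += magnitudes[i];
            weighted += magnitudes[i] * freq;
        }
        float centroid = weighted / std::max(energy, 1e-6f);
        return {
            std::max(centroid, 40.0f),                  // assumed 40 Hz floor
            std::min(energy / magnitudes.size(), 1.0f)  // crude level estimate
        };
    }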

Here are some snippets.

aosc170731 is a YouTube video with synchronized visualization of the data streams.

Dance_Code

 

Dance_Code
Photo: blackhole-factory

Dance_Code is a project by the dancer Agnetha Jaunich (Kassel) in collaboration with blackhole-factory, exploring the possibilities of transforming movement and sound into 3D graphics in an improvisation.

The movements of the dancer are tracked by a Kinect sensor and mapped to the position and shape of the 3D object. The frequency spectrum and amplitude of the sound change the texture and distortion of the object.
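
As an illustration of such a mapping – a sketch under assumptions, not the project code – a tracked joint position and two audio features could drive the object parameters like this:

    // Sketch under assumptions, not the project code: a tracked joint
    // position and two audio features drive the parameters of the object.
    struct ObjectParams {
        float x, y, z;      // object position, following the tracked joint
        float scale;        // overall shape/size
        float distortion;   // vertex distortion driven by the spectrum
        float brightness;   // texture brightness driven by the amplitude
    };

    ObjectParams mapInputs(float jointX, float jointY, float jointZ,
                           float spectralFlux, float amplitude) {
        ObjectParams p;
        p.x = jointX; p.y = jointY; p.z = jointZ;  // position follows the dancer
        p.scale      = 0.5f + amplitude;           // louder -> larger (assumed)
        p.distortion = spectralFlux;               // busier spectrum -> more distortion
        p.brightness = amplitude;                  // level drives the texture
        return p;
    }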

Agnetha Jaunich – dance
Elke Utermöhlen – voice
Martin Slawig – percussion + live processing, programming

The project is supported by the Kulturinstitut der Stadt Braunschweig.

Watch an excerpt on Vimeo:

SeaSwallow Project – Network

Overview of the network built for the SeaSwallow Project
blackhole-factory

 

For the SeaSwallow project we set up a network of Max/MSP patches, connected locally at the performance space in Braunschweig and over the internet to the two remote places.

These patches receive sensor data from the three places and visualize it, manage a database as a shared memory for the performers, display it as an OpenGL graphical user interface (SwallowWorld), and play back the audio and video files from the database through this 3D interface.

In addition to this Max/MSP network we used the eJamming platform for realtime audio networking.
The original plan was to do this in Max as well, but bandwidth limitations (an upload speed of 450 kbit/s at one place) forced us to switch to eJamming.

Each of the three places is equipped with the SeaSwallow SensorKit, a Max patch called SwallowWorldNy / Syd / Bs, and eJamming.
All the other Max patches are running on computers in Braunschweig.

The SeaSwallowSensorKit
It consists of an Arduino UNO board running a sketch that reads analog sensor data and sends it to a serial object in the SwallowWorld… Max patch.
Attached to the Arduino are three different sensors:
1. an LDR (light dependent resistor) to measure the amount of light in the environment
2. a temperature sensor (DS1820 Dallas 1-Wire Digital Thermometer, which requires the OneWire.h and DallasTemperature.h libraries on the Arduino)
3. a 3-axis accelerometer (MMA7361LC) fixed to a wrist band. Each performer wears one of these motion sensors, sending a continuous stream of hand positions to the other places.
In combination with a button in the Max patch that switches navigation mode on or off, the performer can navigate the OpenGL world (SwallowWorld): moving the arm looks around, a fast shake moves forward or stops, and a double shake moves backwards.
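
A minimal Arduino sketch in this spirit could look as follows; the pin assignments and the serial output format are assumptions, not the original code:

    // Minimal reconstruction of a SensorKit-style sketch. Pin numbers and
    // the output format are assumptions; the Max patch reads the values
    // from the serial port.
    #include <OneWire.h>
    #include <DallasTemperature.h>

    const int LDR_PIN      = A0;  // light dependent resistor (assumed pin)
    const int ACC_X_PIN    = A1;  // MMA7361 accelerometer axes (assumed pins)
    const int ACC_Y_PIN    = A2;
    const int ACC_Z_PIN    = A3;
    const int ONE_WIRE_PIN = 2;   // DS1820 data line (assumed pin)

    OneWire oneWire(ONE_WIRE_PIN);
    DallasTemperature tempSensor(&oneWire);

    void setup() {
        Serial.begin(9600);
        tempSensor.begin();
    }

    void loop() {
        tempSensor.requestTemperatures();
        float celsius = tempSensor.getTempCByIndex(0);

        // one line per frame: light, temperature, accelerometer x/y/z
        Serial.print(analogRead(LDR_PIN));   Serial.print(" ");
        Serial.print(celsius);               Serial.print(" ");
        Serial.print(analogRead(ACC_X_PIN)); Serial.print(" ");
        Serial.print(analogRead(ACC_Y_PIN)); Serial.print(" ");
        Serial.println(analogRead(ACC_Z_PIN));

        delay(50);  // roughly 20 frames per second (assumed rate)
    }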

the SeaSwallowSensorKit
Photo: blackhole-factory

 

SwallowWorldNy / Syd / Bs – Max patch
This patch manages the peer-to-peer network connections to the other places. It displays the SwallowWorld and the three hand models as OpenGL graphics and contains the button to switch one’s own navigation on or off.
The sensor data coming from the Arduino is sent to the other places using the udpsend Max external.
The camera images from New York and Sydney are sent to Braunschweig as MPEG-4 compressed video using the vipr external from Benjamin Day Smith in combination with jit.net.send.
The patch receives the motion data of all three performers and displays it by controlling the OpenGL hand models: moving them and changing their color to indicate navigationOnOff, move forwards, move backwards or stop.
It also receives the data for the viewpoint and lookat in the SwallowWorld, calculated in the NavigationControl patch in Braunschweig, to synchronize the movements of all three SwallowWorld graphics.
Finally, it receives the processed video stream from the DataBase in Braunschweig.

SwallowWorldSyd Max patch
Photo: blackhole-factory

 

The Max patches running only in Braunschweig, connected over the internet to the remote places:

NavigationControl – Max patch to receive and manage the incoming motion data and use it to control the OpenGL world and the playback of the files in the database.
It receives the data from the three motion sensors and the information about who is in navigation mode.
As soon as one performer is in navigation mode, the patch uses their motion data to control the viewpoint and lookat parameters in the SwallowWorld.
For this we modified the z.glNav abstraction by Zachary Seldess.
If more than one performer switches on navigation mode, the patch calculates the average of the data. This can be used for group flights (what we call swarm navigation).
The SwallowWorld functions as a 3D sound/video map.
An algorithm (Pythagoras in 3D) permanently calculates the distance from the viewpoint to the positions of the files in the SwallowWorld to decide which files to play. Files are chosen when the viewer comes close. For audio files the distance defines the volume and the mix of the closest points; for video files the distance defines the degree of distortion using alpha masking.
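
The core of this distance logic is simple; here is a sketch with an assumed playback radius and a linear volume falloff, not the patch itself:

    #include <cmath>

    // Sketch of the distance logic, not the patch itself: Pythagoras in 3D
    // from the viewpoint to a file position, with an assumed playback
    // radius and a linear volume falloff.
    struct Vec3 { float x, y, z; };

    float distance3D(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Volume is 1.0 at the file position and fades to 0 at 'radius'.
    float volumeForFile(const Vec3& viewpoint, const Vec3& filePos, float radius) {
        float d = distance3D(viewpoint, filePos);
        if (d >= radius) return 0.0f;  // out of range: the file is not played
        return 1.0f - d / radius;
    }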

DataBase – Max patch containing the coordinates of all places and files integrated in the SwallowWorld.
The files are sorted by type: basic point (a geographical place positioned on the edge of the globe using its GPS coordinates), video, interview, music, field recording. Each type has a different color in the 3D world.

DataBase view from outside
Photo: blackhole-factory

SeaSwallow Database view from inside
Photo: blackhole-factory

 

RemoteCameras – Max patch to project the camera images coming from the remote places onto two balloons on stage.

SeaSwallow Project RemoteCams
Photo: blackhole-factory

 

StageLight – Max patch that receives the temperature and light data from the three places to control the color and intensity of three LED spots on stage via a LAN Box.
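
How such a mapping could look – with assumed value ranges, not the actual patch:

    #include <algorithm>

    // Illustration with assumed value ranges, not the actual patch:
    // remote temperature sets the color of an LED spot (cold = blue,
    // warm = red), the remote light level scales its intensity. Output
    // values are 8-bit, as a DMX interface expects.
    struct RGB { int r, g, b; };

    RGB spotColor(float celsius, float lightLevel) {
        float t = std::clamp((celsius + 20.0f) / 60.0f, 0.0f, 1.0f);  // -20..40 degC (assumed)
        float i = std::clamp(lightLevel, 0.0f, 1.0f);                 // normalized light level
        return { int(255 * t * i), 0, int(255 * (1.0f - t) * i) };
    }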

