SeaSwallow Project – Network


An overview of the network built for the SeaSwallow Project


For the SeaSwallow project we set up a network of Max/MSP patches, connected locally at the performance space in Braunschweig and over the internet to the two remote locations.

These patches collect and visualize the sensor data from the three places, manage a database that serves as a shared memory for the performers, display it as an OpenGL graphical user interface (the SwallowWorld), and play back the audio and video files from the database through this 3D interface.

In addition to this Max/MSP network we used the eJamming platform for real-time audio networking.
The original plan was to do this in Max as well, but bandwidth limitations (an upload speed of 450 kbit/s at one place) forced us to switch to eJamming.

Each of the three places is equipped with the SeaSwallow SensorKit, a Max patch called SwallowWorldNy / Syd / Bs and eJamming.
All the other Max patches run on computers in Braunschweig.

The SeaSwallowSensorKit
It consists of an Arduino UNO board running a sketch that reads analog sensor data and sends it to a serial object in the SwallowWorld… Max patch.
Attached to the Arduino are 3 different sensors:
1. an LDR (light dependent resistor) to measure the amount of light in the environment
2. a temperature sensor (DS1820 Dallas 1-Wire Digital Thermometer which requires the OneWire.h and DallasTemperature.h libraries on the Arduino)
3. a 3-axis accelerometer (MMA7361LC) fixed to a wrist band. Each performer wears one of these motion sensors, sending a permanent stream of their hand position to the other places.
In combination with a button in the Max patch that switches navigation mode on or off, the performer can navigate the OpenGL world (SwallowWorld): moving the arm looks around, a fast shake moves forward or stops, and a double shake moves backwards.
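The gesture logic described above can be sketched roughly as follows; the threshold and window values here are illustrative guesses, not the ones used in the actual Max patch:

```python
import math

def magnitude(ax, ay, az):
    """Euclidean magnitude of one 3-axis accelerometer reading."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_gesture(samples, shake_threshold=2.0, double_window=10):
    """Classify a window of (ax, ay, az) accelerometer readings.

    Returns 'double_shake' if two separate spikes above the threshold
    occur within the window (move backwards), 'shake' for a single
    spike (move forward / stop), and 'none' otherwise. Threshold and
    window size are placeholder values for illustration.
    """
    spikes = [i for i, s in enumerate(samples)
              if magnitude(*s) > shake_threshold]
    if not spikes:
        return "none"
    # Two spikes separated by a quiet gap count as a double shake;
    # directly adjacent samples belong to the same shake.
    for a, b in zip(spikes, spikes[1:]):
        if 1 < b - a <= double_window:
            return "double_shake"
    return "shake"
```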

the SeaSwallowSensorKit
Photo: blackhole-factory


SwallowWorldNy / Syd / Bs – Max patch
This patch manages the peer-to-peer network connections to the other places. It displays the SwallowWorld and the three hand models as OpenGL graphics and contains the button to switch one’s own navigation on or off.
The sensor data coming from the Arduino is sent to the other places using the udpsend Max external.
The camera images from New York and Sydney are sent to Braunschweig as MPEG-4 compressed video using the vipr external from Benjamin Day Smith.
The patch receives the motion data of all three performers and displays it by controlling the OpenGL hand models: moving them and changing their color for navigation on/off, move forwards, move backwards or stop.
It also receives the viewpoint and lookat data for the SwallowWorld, calculated in the NavigationControl patch in Braunschweig, to synchronize the movements of all three SwallowWorld graphics.
Finally, it receives the processed video stream from the DataBase in Braunschweig.
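Conceptually, each sensor frame travels to the other places as one small UDP datagram. The sketch below illustrates that idea with a hypothetical binary format; it is not the OSC-style wire format that Max’s udpsend object actually uses:

```python
import socket
import struct

# Hypothetical wire format for one accelerometer frame (ax, ay, az)
# as three network-order 32-bit floats. Max's udpsend object uses
# its own (OSC-compatible) format; this is only an illustration.
FRAME = struct.Struct("!3f")

def encode_frame(ax, ay, az):
    """Pack one sensor reading into a datagram payload."""
    return FRAME.pack(ax, ay, az)

def decode_frame(data):
    """Unpack a datagram payload back into (ax, ay, az)."""
    return FRAME.unpack(data)

def send_frame(addr, ax, ay, az):
    """Send one sensor frame to a remote place (addr = (host, port))."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(encode_frame(ax, ay, az), addr)
```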

SwallowWorldSyd Max patch
Photo: blackhole-factory


The Max patches running only in Braunschweig, connected over the internet to the remote places:

NavigationControl – Max patch to receive and manage the incoming motion data and use it to control the OpenGL world and the playback of the files in the database.
It receives the data from the three motion sensors and the information about who is in navigation mode.
As soon as one performer is in navigation mode, the patch uses their motion data to control the viewpoint and lookat parameters in the SwallowWorld.
For this we modified the z.glNav abstraction from Zachary Seldess.
If more than one performer switches on navigation mode, the patch calculates the average of their data. This can be used for group flights (what we call swarm navigation).
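A minimal sketch of that averaging step, assuming each performer contributes a navigation flag and a motion vector (the data layout is our illustration, not the patch’s actual message format):

```python
def swarm_target(performers):
    """Average the motion data of all performers currently in
    navigation mode ('swarm navigation').

    performers: list of (in_nav_mode, (x, y, z)) tuples.
    Returns the averaged vector, or None if nobody is navigating.
    """
    active = [vec for nav, vec in performers if nav]
    if not active:
        return None
    n = len(active)
    # Component-wise mean over all active performers.
    return tuple(sum(axis) / n for axis in zip(*active))
```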
The SwallowWorld functions as a 3D sound/video map.
An algorithm (Pythagoras in 3D) permanently calculates the distance from the viewpoint to the positions of the files in the SwallowWorld to decide which files to play. Files are chosen when the viewer comes close. For audio files the distance defines the volume and the mix of the closest points; for video files it defines the degree of distortion applied by alpha masking.
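The distance-based selection could be sketched like this; the linear volume law and the audible radius are assumptions for illustration, not values taken from the actual patch:

```python
import math

def distance(p, q):
    """3D Euclidean distance ('Pythagoras in 3D')."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def audio_mix(viewpoint, sound_points, radius=10.0):
    """Map distances to playback volumes.

    A sound is audible only inside `radius`, with volume rising
    linearly toward 1.0 as the viewer approaches its position.
    sound_points: dict mapping a name to an (x, y, z) position.
    """
    mix = {}
    for name, pos in sound_points.items():
        d = distance(viewpoint, pos)
        if d < radius:
            mix[name] = 1.0 - d / radius
    return mix
```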

DataBase – Max patch containing the coordinates of all places and files integrated in the SwallowWorld
The files are sorted by type: basic point (a geographical place positioned on the edge of the globe using its GPS coordinates), video, interview, music and field recording. Each type has a different color in the 3D world.
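Positioning a basic point on the globe from its GPS coordinates amounts to a standard spherical-to-Cartesian conversion. A sketch, where the axis orientation (y as “up”, common in OpenGL scenes) is our assumption:

```python
import math

def gps_to_globe(lat_deg, lon_deg, radius=1.0):
    """Convert GPS latitude/longitude (degrees) to a point on a
    sphere of the given radius, with the y axis pointing 'up'."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.sin(lon)
    return (x, y, z)
```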

DataBase view from outside
Photo: blackhole-factory
SeaSwallow Database view from inside
Photo: blackhole-factory


RemoteCameras – Max patch to project the camera images coming from the remote places onto two balloons on stage.

SeaSwallow Project RemoteCams
Photo: blackhole-factory


StageLight – Max patch that receives the temperature and light data from the three places and controls the color and intensity of three LED spots on stage via a LAN Box.
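One way such a mapping could look; the hue/intensity scheme below is purely an illustrative guess at how temperature and light might drive an RGB spot, not the actual patch logic:

```python
def sensor_to_rgb(temperature_c, light_level, temp_range=(0.0, 40.0)):
    """Map one place's temperature and light readings to an RGB value
    for its LED spot: temperature picks the hue between blue (cold)
    and red (warm), ambient light sets the overall intensity.
    light_level is assumed normalized to 0.0-1.0.
    """
    lo, hi = temp_range
    warmth = min(max((temperature_c - lo) / (hi - lo), 0.0), 1.0)
    intensity = min(max(light_level, 0.0), 1.0)
    r = warmth * intensity
    b = (1.0 - warmth) * intensity
    return (round(r * 255), 0, round(b * 255))
```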

SeaSwallow Project RemoteCams
Photo: blackhole-factory