A standalone intelligent multiplayer instrument

The massive improvements in performance and the simultaneous price drop of small computing devices enable the implementation of highly integrated and intelligent sound tools. It is now possible to realize sound interaction solutions that combine sound analysis, sound generation, user feedback and user input by means of sensors. This paper describes a possible standalone combination of these aspects using a Raspberry Pi running Pure Data, different sensors connected through micro-controllers, and a compact sound interface. The system makes it possible to musically accompany a sound source by analyzing its frequency content and then generating suitable additional sounds through a Markov chain. Because the external sensors can be connected and exchanged freely, the results and the soundscape vary with the value patterns produced by the different physical sensors, providing a platform for musical interaction.
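The paper does not detail which Pure Data objects perform the frequency-content analysis, so as a language-agnostic illustration, here is a minimal pure-Python sketch of the first step of that pipeline: estimating the dominant frequency of an incoming signal with a naive DFT. All names (`dominant_frequency`, the test-tone parameters) are illustrative assumptions, not taken from the paper.

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Estimate the strongest frequency via a naive O(n^2) DFT."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC bin, stop at Nyquist
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_bin, best_mag = k, abs(s)
    # Convert the winning bin index back to a frequency in Hz.
    return best_bin * sample_rate / n

# Synthetic 440 Hz test tone, sampled at 1000 Hz for 200 samples
sr = 1000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(200)]
print(dominant_frequency(tone, sr))
```

In a real-time system this would of course be replaced by an FFT-based analysis inside the Pure Data patch; the sketch only shows the idea of mapping the spectrum to a single pitch estimate that the generation stage can react to.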

With the progress of small integrated circuits, electronic music interfaces have come to play a very important role in music creation and live performance. The affordability of very powerful microprocessors in recent years has opened the way for new concepts and innovation in this domain. Innovative control elements and methods allow for a complete abstraction from conventional control methods for musical instruments and sound synthesis systems. Instead of the usual potentiometers, switches and touch sensors, other controls are also conceivable. The use of such diverse sensors as input elements may make musical interaction and creative signal generation much more intuitive in the future. This is why only this kind of input element was chosen for this project: the use of the instrument should be intuitive yet unusual.

To achieve this, multiple sensors are installed outside of the main system and can easily be connected and interchanged. Visual feedback is built into the main system as well as into the sensor extensions to make the sensor functions and output values more comprehensible. This ensures that the system remains transparent despite the multitude of possible setups. At the same time, Cubertus is not only a musical instrument based on physical control but also an intelligent sound analysis and autonomous synthesis system: it is able to analyze incoming sounds and generate matching patterns and melodies to accompany the user musically.
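The generation of matching patterns through a Markov chain can be sketched as a random walk over a note-transition table. The table below is a hypothetical example (the paper does not specify the actual states or how the transition weights are derived from the analyzed input); the mechanism of choosing each next note from the successors of the current one is the core idea.

```python
import random

# Hypothetical first-order Markov chain over MIDI pitches.
# In the real system the table would be derived from the analyzed input.
TRANSITIONS = {
    60: [62, 64, 67],  # from C4 move to D4, E4 or G4
    62: [60, 64],
    64: [62, 67],
    67: [60, 64],
}

def generate(start, length, rng):
    """Random-walk the transition table to produce an accompaniment line."""
    note, line = start, [start]
    for _ in range(length - 1):
        note = rng.choice(TRANSITIONS[note])
        line.append(note)
    return line

print(generate(60, 8, random.Random(42)))
```

Because every generated note is drawn only from the successors of the previous one, the output stays inside the harmonic material encoded in the table while still varying from run to run.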

Below is our product video, which aims to get people excited about the project. The video was recorded and edited at the university by Sebastian Pieper, who also produced its sound. In the middle of the video you can hear a recorded excerpt of Cubertus.

As we wanted to ensure that the system works out of the box with no extra device needed, we used a Raspberry Pi 3 Model B as the basic framework. With "Raspbian Jessie with Pixel" as the operating system and its monolithic Unix-like Linux kernel, setup and operation are straightforward. Using the integrated wireless LAN adapter of the Raspberry Pi, it is possible to set up an ad-hoc connection with any wireless-enabled device. This allows any user to view the desktop with a standard VNC client, making the Pure Data patch and configuration available to everyone. In addition to running the operating system, the Raspberry Pi delivers power through USB to the Arduino Mega and the audio interface, which features an XLR input and a stereo headphone output. Consequently, only a single power source is needed to supply the entire system including the external sensors.
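The paper does not reproduce the exact network configuration; on Raspbian Jessie, one conventional way to declare such an ad-hoc wireless interface is via `/etc/network/interfaces` with the `wireless-tools` options. The address, channel and ESSID below are illustrative assumptions, not values from the project.

```shell
# /etc/network/interfaces fragment -- one possible ad-hoc setup
# (assumed example; addresses, channel and ESSID are placeholders)
auto wlan0
iface wlan0 inet static
    address 192.168.4.1
    netmask 255.255.255.0
    wireless-channel 6
    wireless-essid Cubertus
    wireless-mode ad-hoc
```

With the Pi reachable at a fixed address like this, any VNC client on the connecting device can open the desktop session and thereby the running Pure Data patch.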

Patch structure

Signal flow

Final presentation

Used software

Work progress