
Victoria is a basic Raspberry Pi audio sampler that can play back audio samples from a USB thumb drive. It is named after the 2015 movie by Sebastian Schipper.


It uses Pimoroni's Piano HAT, Drum HAT, and pHAT Stack, plus Adafruit's I2S Audio Bonnet (Pimoroni's pHAT DAC also works; for direct output to speakers the HiFiBerry MiniAmp is a good option). A Raspberry Pi Zero was soldered directly onto the pHAT Stack with enough spacing to fit the Audio Bonnet on top. Only one micro USB cable is needed to power the sampler, so a power bank can be used to play on the go. The Transcend JetFlash 880 is the perfect thumb drive for this project because it plugs directly into the Pi Zero's micro USB port while also providing a USB Type-A connector.

The USB thumb drive holds two directories: drums and piano. The drums folder can hold up to 8 samples, which are played via the Drum HAT; the piano folder can hold hundreds of samples, which are played via the Piano HAT. With the Octave Up / Down buttons one can cycle through the samples in batches of 13.
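The batch cycling can be sketched as a sliding window over the sorted sample list; the following is a hypothetical sketch of the idea, not code from the actual script (all names are assumptions):

```python
# Hypothetical sketch of the Octave Up / Down batching: the piano
# samples are treated as one flat sorted list and mapped to the 13
# Piano HAT keys in windows of 13. Not the original script's code.
BATCH_SIZE = 13  # one batch per "octave" of Piano HAT keys

def current_batch(samples, octave):
    """Return the (up to 13) samples mapped to the keys at this octave."""
    start = octave * BATCH_SIZE
    return samples[start:start + BATCH_SIZE]

def max_octave(samples):
    """Highest octave index that still contains at least one sample."""
    return max(0, (len(samples) - 1) // BATCH_SIZE)
```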

Holding the Instrument button and pressing either Octave Up or Down changes the output volume. Holding Instrument and pressing Drum HAT pad #8 twice shuts down the Pi.
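The volume combo boils down to a clamped step function; here is a hypothetical sketch (function name and step size are assumptions, not taken from the script):

```python
# Hypothetical sketch of the Instrument + Octave Up / Down volume
# control: step the output volume and clamp it to the 0.0-1.0 range
# that pygame's mixer expects. Names and step size are assumptions.
VOLUME_STEP = 0.1

def adjust_volume(volume, direction):
    """Step volume up (direction=+1) or down (-1), clamped to [0.0, 1.0]."""
    return min(1.0, max(0.0, volume + direction * VOLUME_STEP))
```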

The code is written in Python and based on the Piano HAT's simple-piano.py example and the Drum HAT's drums.py example.

If you want to use this script, make sure that:

  • you have pygame, pianohat and drumhat installed
  • you have the Adafruit I2S Audio Bonnet script or the pHAT DAC script installed
  • the directory /mnt/victoria_usb exists so the USB thumb drive can be mounted (you can change this with the MOUNT_PATH variable)
  • the thumb drive's device path is /dev/sda1, which the script assumes (if not, change the MOUNT_VOLUME variable)
  • you can create a config.txt file in the root directory of the thumb drive to override some options; it must look like this:
    samplerate = 48000
    folder_piano = /piano
    folder_drums = /drums
  • the samples need to be mono .wav or .ogg files; stereo is not possible because of a limitation of pygame's Sound object, which is used to play the samples
  • on the thumb drive there must be a folder called drums with up to 8 samples and a folder called piano with 13 or more samples; the folder names can be changed in the options variable or via the config.txt file
  • the default sample rate is 44100 but can be changed in the options variable or via the config.txt file
  • there are two folders (piano and drums2) with default samples that are used if the USB thumb drive is not present; they live in a subfolder called sounds next to the script (change the variables BANK_PIANO and BANK_DRUMS if you want to move them)
  • the Pi is set to Autologin (Console) and starts the script automatically on login
  • you can activate the Overlay FS option in raspi-config to protect the SD-card from data loss
  • the thumb drive gets mounted as read only (-o ro argument)
  • the I2S Audio Bonnet and pHAT DAC have a line out, not a headphone jack; if you plug in headphones the output volume will be too loud
  • to find out which HATs work together you can use the pinout.xyz pHAT Stack Configurator
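The config.txt override from the checklist above can be sketched in a few lines of Python; this is a minimal sketch of the idea, not the original script's code (the function name and the defaults handling are assumptions):

```python
# Minimal sketch of reading config.txt ("key = value" lines) from the
# thumb drive's root and overriding the default options with it.
def read_config(text, defaults):
    """Return defaults updated with any "key = value" lines found in text."""
    options = dict(defaults)
    for line in text.splitlines():
        if "=" not in line:
            continue  # skip blank lines and anything malformed
        key, _, value = line.partition("=")
        options[key.strip()] = value.strip()
    return options
```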

Thanks to zimoshka for sound & performance.

» Published on 29.05.2020 in the categories Code, Interaktiv, Musik & Sound.

When the DAF – Dynamische Akustische Forschung was looking for a way to organize an algorave over the internet during the pandemic, I started working on an audio-stream mixer that would allow multiple people to stream high-quality audio with low latency over the internet, directly in the browser.

There would be multiple senders: people coding music on their machines and sending it via a WebRTC audio stream. One person (or more) would act as the mixer, mixing all incoming streams and sending one final mix back to every sender. Every sender could listen to the final mix and their own audio stream at the same time.

There is some delay between the audio streams because of the internet transmission, so this would not be suitable for performing traditional music where synchronicity really matters. But for the type of music the DAF produces at their algoraves, it would work.

Here are some screenshots of the development version:

  • a basic first prototype, using HTML audio elements for playback
  • the first version using the Web Audio API; there was no sender/mixer concept yet, everyone would receive all audio streams and could do their own mix
  • the start screen with lots of options, so I could test which combination works best for music streaming
  • the sender just sees their own stream and the final mix
  • the mixer can manipulate and mix all streams; they also send out the final mix to all senders

I started by using this WebRTC example by anoek. It provides basic functionality to send audio streams over WebRTC and play them via the HTML audio element. I then replaced the audio elements with the Web Audio API and custom-designed controls so that I could add high/mid/low sliders and other things.

To get better audio quality, I disabled some speech enhancements that getUserMedia applies automatically (they make sense for speech streaming, but reduce audio quality for music):

    const constraints = {
        audio: {
            autoGainControl: false,
            echoCancellation: false,
            noiseSuppression: false,
            volume: 1.0,
            // ...
        },
        video: false
    };

I also experimented with playoutDelayHint and jitterBufferDelayHint (both Chrome-only) on the WebRTC audio streams, hoping to add some buffer to the stream and get better streaming quality (at the cost of more delay), but that did not work. I also tried to use TCP instead of UDP to eliminate packet loss, but couldn't get WebRTC to use only TCP connections.

It is also possible to set the maxaveragebitrate in the SDP description to a higher value:

    let answer = await peer.conn.createAnswer(offerOptions);
    answer.sdp = answer.sdp.replace('useinbandfec=1', 'useinbandfec=1; stereo=1; maxaveragebitrate=510000');
    await peer.conn.setLocalDescription(answer);

and manually set the audio codec to Opus (which should yield the highest audio quality while maintaining a relatively low latency).
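One way to prefer a codec is to reorder the payload types in the SDP so that Opus comes first. The helper below is a sketch of that idea, assuming a standard m=audio line; it is not code from the actual mixer:

```javascript
// Hypothetical sketch: move Opus's payload type to the front of the
// m=audio line so the peer prefers it. Assumes a well-formed SDP;
// this is not code from the original project.
function preferOpus(sdp) {
  const lines = sdp.split('\r\n');
  const mIndex = lines.findIndex(l => l.startsWith('m=audio'));
  const opusLine = lines.find(l => /a=rtpmap:\d+ opus\//i.test(l));
  if (mIndex === -1 || !opusLine) return sdp; // nothing to do
  const opusPt = opusLine.match(/a=rtpmap:(\d+)/)[1];
  // m=audio <port> <proto> <pt> <pt> ...: put Opus's pt first
  const parts = lines[mIndex].split(' ');
  const otherPts = parts.slice(3).filter(pt => pt !== opusPt);
  lines[mIndex] = parts.slice(0, 3).concat(opusPt, otherPts).join(' ');
  return lines.join('\r\n');
}
```

In newer browsers, RTCRtpTransceiver.setCodecPreferences offers a cleaner way to do this without string munging.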

In the end, I couldn’t get the audio quality high enough for this to really work, so the DAF ended up using audiomovers in their live performance (which you can watch on the BR KulturBühne).

Nevertheless, I learned a lot about the Web Audio API and WebRTC. I plan to try using RTCDataChannels for the audio streams so I can build and control my own audio buffer. There are also some WebRTC proposals that would allow control over the audio buffer, so one could balance audio quality against latency, but it may take some time for these to be implemented in browsers.

» Published on 09.05.2020 in the categories Code, Interaktiv, Musik & Sound.

Videodrome is a small Raspberry Pi video player that plays random segments of random videos from the archive.org VHS Vault. It is named after the 1983 movie by David Cronenberg.


To download the movies I used wget, based on this article. The command itself is a one-liner, executed in multiple shells to download in parallel:

wget -r -H -nc -np -nH --cut-dirs=1 -e robots=off -A .mp4,.m4v,.mov,.webm,.avi -l1 -i ./source.txt -B 'http://archive.org/download/'

The source.txt file has about 20,000 lines and looks something like this:


I let the download run for about 30 hours, which gave me 320 videos. That's 120 GB, or 24 hours of playtime in total. The complete VHS Vault is much larger.

The Raspberry Pi is a model 3B and uses a small 2 GB microSD card with Raspbian Buster Lite. I installed some dependencies with

sudo apt install omxplayer mediainfo timelimit

I use omxplayer as the media player because it is hardware-accelerated on the Pi. mediainfo is used to get the length of a video file, and timelimit ends the omxplayer process after a specific time if the video is still playing.

The script that plays the videos is a small bash script that mounts a USB thumb drive, selects a random video from the drive, gets its length, selects a random position between 0 and the video length, and generates a random clip length. It then uses omxplayer to play the video file. The Pi 3B is fast enough to play the videos from archive.org without converting them to a more suitable format. The Pi is set to "Autologin (Console)" and starts the bash script on login. I also enabled the "Overlay FS" option in raspi-config and mount the USB drive read-only to reduce the risk of data loss when unplugging the Pi. Because I use an old tube television with a SCART connector, I use the Pi's composite output.
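The random-clip logic can be sketched roughly like this; it is a hypothetical sketch of the steps described above, not the original script (the variable names, clip lengths, and the exact omxplayer/mediainfo invocations in play_clip are assumptions):

```shell
# Hypothetical sketch of the Videodrome clip picker; not the original
# script. Assumed clip-length bounds in seconds:
MIN_CLIP=10
MAX_CLIP=60

# random integer in [0, $1)
rand_below() {
  shuf -i 0-$(( $1 - 1 )) -n 1
}

# given a video duration in seconds, pick a start position (START)
# and a clip length (LEN) within the assumed bounds
pick_clip() {
  START=$(rand_below "$1")
  LEN=$(( MIN_CLIP + $(rand_below $(( MAX_CLIP - MIN_CLIP ))) ))
}

# not called here; assumed shape of playing one random clip of a file
play_clip() {
  duration_ms=$(mediainfo --Inform="General;%Duration%" "$1")
  pick_clip $(( duration_ms / 1000 ))
  timelimit -t "$LEN" omxplayer --pos "$START" "$1"
}
```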

Long live the new flesh!

» Published on 03.05.2020 in the category Code.

For Beppo's master's thesis I programmed a Raspberry Pi so that you can rate images with two buttons and afterwards get a summary of your vote (and those of the other participants) printed on a receipt printer. Implemented with Python & OpenCV.

Photos: Tim Böhmerle

» Published on 08.08.2019 in the categories Code & Interaktiv.

At the Nürnberg Digital Festival 2019 I gave a workshop at bayern design on "Live Visuals with p5js"; over two days I taught the participants how to program live visuals with p5js that react to music. The results were shown at the end of the workshop at the closing party of the Bachelor Design summer semester 2019.

Thanks to Basti and bayern design for organizing the workshop, Beppo and High Life Low Budget for organizing the party, Tim Böhmerle for design support, Paris Potis for the live music during the workshop's final sprint, Sebastian Lock for the photos, and all workshop participants for two super exciting and productive days.

Photos: Sebastian Lock

» Published on 29.07.2019 in the categories Code, Interaktiv & Visuals.

» Published on 26.07.2019 in the categories Code, Interaktiv & Visuals.