
Victoria is a basic Raspberry Pi audio sampler that can play back audio samples from a USB thumb drive. It is named after the 2015 movie by Sebastian Schipper.


It uses Pimoroni's Piano HAT, Drum HAT and pHAT Stack together with Adafruit's I2S Audio Bonnet (Pimoroni's pHAT DAC also works; for direct output to speakers the HiFiBerry MiniAmp is a good option). A Raspberry Pi Zero was soldered directly onto the pHAT Stack with enough spacing to put the Audio Bonnet on top. Only one micro USB cable is needed to power the sampler, so a power bank can be used to play on the go. The Transcend JetFlash 880 is the perfect thumb drive for this project because it plugs directly into the Pi Zero's micro USB interface while also providing a USB Type-A connector.

On the USB thumb drive are two directories: drums and piano. The drums folder can hold up to 8 samples, which are played via the Drum HAT; the piano folder can hold hundreds of samples, which are played via the Piano HAT. With the Octave Up/Down buttons one can cycle through the samples in batches of 13.
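
The octave paging described above can be sketched as a small helper. This is a hypothetical illustration; the function and constant names are my own, not taken from the actual Victoria script:

```python
# Sketch of cycling through piano samples in batches of 13
# (one batch per Piano HAT "octave"); names are assumptions.
BATCH_SIZE = 13

def page_index(current_offset, direction, total_samples):
    """Move the sample window up (+1) or down (-1) by one batch,
    clamped so the window never leaves the sample list."""
    new_offset = current_offset + direction * BATCH_SIZE
    max_offset = max(0, total_samples - BATCH_SIZE)
    return min(max(new_offset, 0), max_offset)
```

Pressing Octave Up would then map to `page_index(offset, +1, len(samples))`, and the 13 Piano HAT keys would index `samples[offset:offset + BATCH_SIZE]`.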

When holding the Instrument button and pressing either Octave Up or Octave Down, one can change the output volume. Holding Instrument and pressing Drum HAT pad #8 twice will shut down the Pi.
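
The double-press shutdown combo boils down to a small timing check. The following is a sketch under my own naming; the 1.5 second window and the function names are assumptions, not the original script's code:

```python
import time

def make_shutdown_checker(window=1.5):
    """Return a callable that reports True once the pad has been
    pressed twice within `window` seconds (e.g. while the
    Instrument button is held)."""
    last_press = [None]  # mutable cell holding the previous press time
    def pressed(now=None):
        now = time.monotonic() if now is None else now
        if last_press[0] is not None and now - last_press[0] <= window:
            return True  # second press arrived in time
        last_press[0] = now
        return False
    return pressed
```

Once this returns True, the script would trigger the actual shutdown, for example via `subprocess.run(["sudo", "shutdown", "now"])`.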

The code is written in Python and based on the Piano HAT's simple-piano.py example and the Drum HAT's drums.py example.

If you want to use this script, make sure that:

  • you have pygame, pianohat and drumhat installed
  • you have the Adafruit I2S Audio Bonnet script or the pHAT DAC script installed
  • the directory /mnt/victoria_usb exists so the USB thumb drive can be mounted (you can change this with the MOUNT_PATH variable)
  • the script assumes the thumb drive's device path is /dev/sda1 (if not, change the MOUNT_VOLUME variable)
  • you can create a config.txt file in the root directory of the thumb drive to override some options; it must look like this:
    samplerate = 48000
    folder_piano = /piano
    folder_drums = /drums
  • the samples need to be mono .wav or .ogg files; stereo is not possible because of a limitation of pygame's Sound object, which is used to play the samples
  • on the thumb drive there must be a folder called drums with up to 8 samples and a folder called piano with 13 or more samples; the folder names can be changed in the options variable or via the config.txt file
  • the default sample rate is 44100 but can be changed in the options variable or via the config.txt file
  • there are two folders (piano and drums2) with default samples that are used if the USB thumb drive is not present; they live in a subfolder called sounds next to the script; change the variables BANK_PIANO and BANK_DRUMS if you want to move them
  • set the Pi to Autologin (Console) and auto-start the script on login
  • you can activate the Overlay FS option in raspi-config to protect the SD card from data loss
  • the thumb drive gets mounted read-only (the -o ro argument)
  • the I2S Audio Bonnet and the pHAT DAC provide a line-level output, not a headphone output; if you plug headphones in directly, the output volume will be too loud
  • to find out which HATs work together you can use the pinout.xyz pHAT Stack Configurator
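
The config.txt shown above is plain key = value lines, so a minimal parser could look like this. This is a sketch under my own naming and defaults; the real script may parse the file differently:

```python
# Hypothetical parser for the key = value config.txt format
# described above; defaults mirror the options listed in the post.
DEFAULTS = {
    "samplerate": 44100,
    "folder_piano": "/piano",
    "folder_drums": "/drums",
}

def parse_config(text, defaults=DEFAULTS):
    """Merge key = value lines from config.txt over the defaults."""
    options = dict(defaults)
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank or malformed lines
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key in options:
            # keep numeric options (e.g. samplerate) as integers
            options[key] = int(value) if value.isdigit() else value
    return options
```

Unknown keys are ignored here, so a stray line in config.txt cannot inject unexpected options.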

Thanks to zimoshka for sound & performance.

» Published on 29.05.2020 in the categories Code, Interaktiv, Musik & Sound.

When the DAF – Dynamische Akustische Forschung was looking for a way to organize an algorave over the internet during the pandemic, I started to work on an audio stream mixer that would allow multiple people to stream high-quality audio with low latency over the internet, directly in the browser.

There would be multiple senders: people who would code music on their machines and send it via a WebRTC audio stream. One person (or more) would act as the mixer, mixing all incoming streams and sending one final mix back to every sender. Every sender could listen to the final mix and to their own audio stream at the same time.

There is some delay between the audio streams because of network latency, so this would not be suitable for performing traditional music where synchronicity really matters. But for the type of music the DAF produces at their algoraves it would work.

Here are some screenshots of the development version:

  • a basic first prototype, using HTML audio elements for playback
  • the first version using the Web Audio API; there was no sender/mixer concept yet, everyone would receive all audio streams and could do their own mix
  • the start screen with lots of options so I could test which combination works best for music streaming
  • the sender just sees their own stream and the final mix
  • the mixer can manipulate and mix all streams; they also send out the final mix to all senders

I started from this WebRTC example by anoek. It provides basic functionality to send audio streams over WebRTC and play them via the HTML audio element. I then replaced the audio elements with the Web Audio API and custom-designed controls, so that I could add high/mid/low sliders and other things.

To get better audio quality, I disabled some speech enhancements that getUserMedia automatically applies (they make sense for speech streaming, but reduce audio quality for music):

    const constraints = {
        audio: {
            autoGainControl: false,
            echoCancellation: false,
            noiseSuppression: false,
            volume: 1.0,
            // ...
        },
        video: false
    };
    const stream = await navigator.mediaDevices.getUserMedia(constraints);

I also experimented with playoutDelayHint and jitterBufferDelayHint (both Chrome-only) on the WebRTC audio streams, hoping to add some buffering and get better streaming quality (with the drawback of more delay), but that did not work. I also tried to use TCP instead of UDP to eliminate packet loss, but couldn't get WebRTC to only use TCP connections.

It is also possible to set the maxaveragebitrate in the SDP description to a higher value:

let answer = await peer.conn.createAnswer(offerOptions);
answer.sdp = answer.sdp.replace('useinbandfec=1', 'useinbandfec=1; stereo=1; maxaveragebitrate=510000');
await peer.conn.setLocalDescription(answer);

and manually set the audio codec to Opus (which should yield the highest audio quality while maintaining a relatively low latency).

In the end, I couldn’t get the audio quality high enough for this to really work, so the DAF ended up using audiomovers in their live performance (which you can watch on the BR KulturBühne).

Nevertheless, I learned a lot about the Web Audio API and WebRTC. I have plans to try RTCDataChannels for the audio streams so I can build and control my own audio buffer. There are also some proposals for WebRTC that would allow control of the audio buffer, so one could balance audio quality against latency, but it may take some time for these to be implemented in browsers.

» Published on 09.05.2020 in the categories Code, Interaktiv, Musik & Sound.