
Bornhack retrospective (including shader jam)

Date: 2024-08-04 16:08
Tags: bornhack, shader-jam, demoscene

At Bornhack 2024 I intentionally ran a shader jam, accidentally got involved in Pixelflut, and did not find time to do anything with DN42 or Infiniband.

Shader jam

moods plateau made a cool trailer (download via pouet) for the Shader Showdown event at the Revision demoparty, which made the whole idea seem pretty appealing to me. More importantly, shader coding is probably the demoscene event with the lowest barrier to entry if you don't care about winning a competition, which makes it great to bring to non-demoscene parties.

Various people recommended that I use Bonzomatic, the same tool used at Revision's Shader Showdown and Shader Jam, but I wasn't a fan, for two reasons. First, Bonzomatic seems to be intended for live competitions, though that could be worked around. Second, I'd just been playing with Shader Toy's multiple buffers to make recursive effects, and I wanted the same possibilities in my system. So I decided to write my own system, a knockoff of Shader Toy (which isn't open source). This went okay: the visual design is lacking compared to Shader Toy, but it is functional.

After the first night, I ported the system to C++ and ran it on a projector on the side of our tent. This worked well. At night the picture was clearly visible through the tent. After the second night, Pixelflut was set up at the bar, and since I was already connected to Pixelflut and hadn't set up my own bar stream yet, I quickly set up a process to copy the projector image out to Pixelflut. This worked well (when I was the only person broadcasting on Pixelflut :) ). The image was transferred to the Pixelflut process by a shared memory file; a single thread converting it to Pixelflut format and sending it out with sendmmsg achieved about 7.5 Gbps. I thought that was satisfactory and chose not to add multithreading - that left a margin of 2.5Gbps on my shared link for actual Internet traffic.
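
For the curious, the forwarder looked roughly like the sketch below. This is a minimal reconstruction, not the actual code: it assumes the renderer exports the frame as 32-bit RGBA in a shared memory file and that the Pixelflut server accepts the usual ASCII "PX x y RRGGBB" commands over UDP; the shared memory name, frame size, server address and port are all made up for illustration, and error handling is omitted.

    // Minimal sketch of the shared-memory-to-Pixelflut forwarder (not the real code).
    // Assumed: 32-bit RGBA frame in a shared memory file, ASCII "PX x y RRGGBB\n"
    // commands over UDP. Error handling omitted.
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    constexpr int W = 1280, H = 720;        // assumed frame size
    constexpr size_t PIXELS_PER_DGRAM = 60; // keep each datagram well under the MTU
    constexpr size_t BATCH = 64;            // datagrams handed to sendmmsg() at once

    int main() {
        // Map the frame that the renderer exports via a shared memory file.
        int shm = shm_open("/shader_frame", O_RDONLY, 0);
        auto* px = static_cast<const uint32_t*>(
            mmap(nullptr, size_t(W) * H * 4, PROT_READ, MAP_SHARED, shm, 0));

        // Connected UDP socket towards the (made-up) Pixelflut server address.
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(1234);
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
        connect(sock, reinterpret_cast<sockaddr*>(&dst), sizeof dst);

        std::vector<std::string> payloads(BATCH);
        std::vector<iovec> iov(BATCH);
        std::vector<mmsghdr> msgs(BATCH);

        auto send_batch = [&](size_t count) {
            for (size_t i = 0; i < count; i++) {
                iov[i] = {payloads[i].data(), payloads[i].size()};
                msgs[i] = {};
                msgs[i].msg_hdr.msg_iov = &iov[i];
                msgs[i].msg_hdr.msg_iovlen = 1;
            }
            sendmmsg(sock, msgs.data(), count, 0);  // many datagrams, one syscall
        };

        for (;;) {  // keep repainting the whole frame
            size_t slot = 0, in_dgram = 0;
            payloads[0].clear();
            for (long i = 0; i < long(W) * H; i++) {
                uint32_t p = px[i];
                char line[32];
                int len = snprintf(line, sizeof line, "PX %ld %ld %02x%02x%02x\n",
                                   i % W, i / W,
                                   p & 0xffu, (p >> 8) & 0xffu, (p >> 16) & 0xffu);
                payloads[slot].append(line, len);
                if (++in_dgram == PIXELS_PER_DGRAM) {
                    in_dgram = 0;
                    if (++slot == BATCH) { send_batch(BATCH); slot = 0; }
                    payloads[slot].clear();
                }
            }
            if (slot || in_dgram) send_batch(slot + (in_dgram ? 1 : 0));
        }
    }

Batching many pixel commands into each datagram, and many datagrams into each sendmmsg() call, is what keeps the per-pixel syscall overhead low enough for a single thread to push several gigabits per second.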

All published shaders are available here as self-contained HTML files including the editor. Note: cubemap images are not included here because nobody used cubemaps.

Partial success: network video stream

The original plan was to use the camp network to stream a shader display to the bar. I thought I could easily use ffmpeg or VLC to send pixels over the network and receive them somewhere else. This worked but not as well as desired.

It was halfway through the camp before I had a stream working at all, but that was mostly my fault for not chasing down the bar team sooner. After setting it up, I discovered that the stream had a disappointing amount of latency, which would have made live music reactivity impossible. I was able to write a program with libavcodec/libavformat to transmit video with zero latency - meaning that each frame was encoded and sent immediately as it was generated, without buffering more frames first. For some reason this wasn't easy to do with the ffmpeg command-line tool.
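
The core of that zero-latency approach looks something like the sketch below - a simplified reconstruction rather than the actual program. It assumes an x264 software encoder and MPEG-TS over UDP; the destination URL, resolution and frame rate are placeholders, and error handling is omitted.

    // Simplified sketch: encode each frame and push it onto the network immediately.
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    }

    int main() {
        const int W = 1280, H = 720, FPS = 60;   // placeholder resolution and rate

        const AVCodec* codec = avcodec_find_encoder_by_name("libx264");
        AVCodecContext* enc = avcodec_alloc_context3(codec);
        enc->width = W; enc->height = H;
        enc->pix_fmt = AV_PIX_FMT_YUV420P;
        enc->time_base = {1, FPS};
        enc->gop_size = FPS;        // one keyframe per second
        enc->max_b_frames = 0;      // B-frames would force reordering delay
        av_opt_set(enc->priv_data, "tune", "zerolatency", 0);  // no encoder lookahead
        avcodec_open2(enc, codec, nullptr);

        AVFormatContext* mux = nullptr;
        avformat_alloc_output_context2(&mux, nullptr, "mpegts",
                                       "udp://192.0.2.10:5004");  // placeholder target
        AVStream* st = avformat_new_stream(mux, nullptr);
        avcodec_parameters_from_context(st->codecpar, enc);
        st->time_base = enc->time_base;
        avio_open(&mux->pb, mux->url, AVIO_FLAG_WRITE);
        avformat_write_header(mux, nullptr);

        AVFrame* frame = av_frame_alloc();
        frame->format = enc->pix_fmt; frame->width = W; frame->height = H;
        av_frame_get_buffer(frame, 0);
        AVPacket* pkt = av_packet_alloc();

        for (int64_t i = 0; ; i++) {
            // ... fill frame->data[] with the current rendered picture here ...
            frame->pts = i;  // one tick per frame in the encoder time base
            avcodec_send_frame(enc, frame);
            // With the zero-latency settings above, the packet(s) for this frame
            // come out right away instead of being held back by the encoder.
            while (avcodec_receive_packet(enc, pkt) == 0) {
                av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
                av_interleaved_write_frame(mux, pkt);  // sent to the network now
            }
        }
    }

The important parts are disabling B-frames and using the x264 zerolatency tuning; without them, the encoder holds on to several frames before producing any output.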

The receiver was VLC running on a Raspberry Pi (Pi 3 model B+), and this also added latency on the receive side. It started out in real time (easily visible by comparing the shader stream to the Pixelflut monitor next to it) but gradually accumulated buffered frames and played farther and farther behind realtime. This might have been caused by incorrect PTS (presentation timestamps) generated by my transmitter program, although I didn't find a problem there. I looked for an option to make VLC ignore PTS and display frames as soon as it received them, but didn't find one, and didn't have time to investigate further.

The Pixelflut group used HDMI over fiber to transmit video in real time and at full quality. For a shader jam this is likely a much better alternative than a network stream, since no codec is involved: some shaders (though none that were published at this event) generate a lot of flashing pixels or noisy patterns that encode poorly. Next time I run a similar event I will either need a direct video connection instead of using the network, or I will need to validate the network stream's performance in advance.

Failure: music reactivity

The same Raspberry Pi that was placed at the bar to receive the video stream was also plugged into the bar's sound mixer to receive the live audio from there. Receiving it over the network was as simple as running ssh arecord (but with more options) and the latency was no problem. This part was successful.

However, it turns out there's no good way to estimate the tempo of an arbitrary sound input. I didn't achieve my goal of locking the shader animation speed to the music tempo.

I also tried inputting some kind of perceptual 'energy' measurement, which worked, but it didn't play nicely with the overall design, because there's no audio input in the web app. Perhaps with more time to prepare, the bar audio stream (or some internet radio stream, for example) could also have been supplied to the web app for testing. The way it was actually implemented, the audio energy variable just didn't exist in the web app, so you'd have to supply a dummy one and then comment it out before saving, which is very inconvenient. If the proper design is figured out, it's a no-brainer to add an FFT input as well.

At the suggestion of someone (I don't remember who) I also added an audio-energy-weighted time variable: when the audio energy is 20%, it increases by 0.2 per second, and so on.
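
In code terms it's just an accumulator weighted by the current energy - something like the hypothetical sketch below, where the RMS-based energy estimate is only a stand-in for whatever measurement is actually used.

    #include <cmath>
    #include <cstddef>

    // Stand-in for the perceptual energy estimate: RMS of the latest audio block,
    // clamped to 0..1. (Assumption - not necessarily what was actually used.)
    double measure_energy(const float* samples, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) sum += samples[i] * samples[i];
        double rms = std::sqrt(sum / n);
        return rms > 1.0 ? 1.0 : rms;
    }

    double weighted_time = 0.0;

    // Called once per frame with the wall-clock time elapsed since the last call.
    void advance(double dt, const float* samples, size_t n) {
        weighted_time += measure_energy(samples, n) * dt;  // 20% energy -> +0.2/s
    }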

Failure: Uptime

The power at the camp was much less reliable than last year, and it caused a lot of downtime - more than just the time the power was out. At first I had to go and restart the system after every power outage (1-2 times a day). Then I adjusted my firmware (BIOS) settings to boot the computer automatically when power comes back. Then the power team had to cycle the power many times in a row to troubleshoot something, causing the firmware to revert to failsafe default settings because it thought it was failing to boot. It took several people 8 hours to figure out which default BIOS setting prevented the machine from booting. (It was "Display SMART errors during POST". Without this enabled, the firmware doesn't see any SATA hard drives. Seems like a bug.)

This was acceptable for the projector, Pixelflut, and network stream transmitter, but not for the web server since that is the main point of access to the system. It's okay if there aren't shaders on the screens at the bar, but without the web server you can't edit the shaders you previously made or make new ones.

Lesson learned: the most critical, user-facing component of the system should be running on a device with a battery? Or not! During the 8-hour outage, I moved the web server over to my PineTab2, but later someone knocked out the USB cable and the battery went flat. Sometimes you just can't win. A cloud server would have been the most reliable option.

At least I achieved nine fives. I think.

Fiber-to-the-tent and Pixelflut

At the last Bornhack in 2023 I brought a bag full of small electronic components, soldering tools and solar panels. In 2024 I brought a bag full of networking equipment and my flight case PC (TODO: more information on the PC).

The camp network team provided gigabit networking throughout the camp. 10-gigabit ports ("fiber to the tent") were provided at a central location, a large village with room to host the equipment, which happened to be close to my tent. As I got hooked up I found out the same switch also ran a Pixelflut VLAN, so I got a VLAN trunk service. My PC would have to send and receive packets with 802.1Q VLAN tags. A certain tag number indicates the packet is for/from the camp network (including the Internet) and a different tag number indicates the Pixelflut network. I also had a third tag number for DN42 which I didn't end up using. Since I use Linux without NetworkManager, this was easy to set up on my end.

10-gigabit fiber-optic equipment mostly uses the SFP+ transceiver module standard. The switch or network card includes a long rectangular slot which fits an optical transceiver that you buy separately. This allows the same cards and switches to be used with different kinds of fiber optics. In my case I used the 10GBASE-LR standard, which runs up to 10 km on a pair of single-mode fibers. At higher bitrates, single-mode transceivers are more expensive, so multi-mode transceivers are more commonly used for shorter links; however, at 10Gbps the price difference was minimal. The transceivers cost about 20 euros each (versus 10 for multi-mode), and the cable about 50 cents per meter. The dual-port NICs were 40 euros each on eBay.

SFP+ modules have metadata ROM chips to identify them. Some switches are very picky about the modules they will accept, so vendors sell otherwise identical modules with different ROM contents. The camp network team recommended Juniper metadata for their end. I only realized after I had all the equipment together and tested it at home that my NICs are from Intel and only accept Intel transceiver modules. Luckily, this check can easily be disabled with a driver option: sudo rmmod ixgbe; sudo modprobe ixgbe allow_unsupported_sfp=1,1

Traffic usage

The Bornhack network team always asks campers to Use More Bandwidth! The camp gets a 10Gbps dedicated fiber uplink to a data center in Copenhagen.

I managed to sustain about 500Mbps up and 500Mbps down of useful traffic, so really, I didn't need the 10G except for Pixelflut. Another camper managed to exceed this level, but I found out he was using several times more hardware, for only 60% more traffic, so that means I did just fine in the traffic "competition".

The highest level of useful traffic I managed was about 4500Mbps, but it wasn't sustainable.

I installed netdata (open source edition) to have a dashboard. It's a bit weird in that upload traffic is shown as negative numbers.

DN42

The plan was to set up a network that people could connect to in order to be present on DN42. The network team allocated me a VLAN and a "fake SSID" - wifi credentials that put clients onto that VLAN. My tent neighbour brought a bunch of Mikrotik routers supporting BGP. The shader jam took more time than anticipated and I didn't end up doing any of this.

Infiniband

I brought 15 kilograms of Infiniband switches, as well as compatible NICs and cables, on the train, at the expense of a proper sleeping bag, in the hope that someone would be interested in experimenting with it. Nobody was.