After a month of running Clojure REPLs and trying to get Quil to talk to Overtone, I was ready for my first livecoding performance. The Trix algorave was a great event to meet like-minded people talking about code, music and creativity.
Using Quil meant I was doing graphics, supporting the music of bohrbug. The hardest part was getting two computers in sync on both sound and visuals; I could use some insights on that.
For now we used OSC and two separate SuperCollider servers. I have a hunch that running one SC server for both music and visuals could greatly improve the sync and the amount of data that can be shared in real time between the two interfaces. Any ideas on that are likely to be moderated very quickly 🙂
The evening continued with Insomniark and Exoterrism, both using pure SuperCollider and hardware controllers to interface quickly with the code, and was concluded with a streamed set from Alex McLean. That last set showed off an impressive coding style, both fast and well executed, clearly showing that experience matters…
On to the next algorave! Here’s the full code repo for the visuals, and the full set is on YouTube below (currently without sound; I hope to get the recording ASAP).
For the mix office redesign I designed a magnetic, writable wall (9.6 by 2.4 m). But a plain empty whiteboard would be a bit boring, so I added a cityscape at the border, generated with Processing, and an “oblique sky” of clouds formed by the words of Brian Eno’s Oblique Strategies, also generated with Processing. Both ideas were developed by trying and testing, getting immediate (visual) feedback while coding. At each run of the code I took a snapshot. Next time, I’ll look into auto-committing with a tag on GitHub so I can link snapshots back to the right code. If you put the snapshots in a little movie, you can see the evolution of the idea, and also the mistakes along the way 🙂
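The auto-commit idea could look something like this: derive one tag name per run from a timestamp, save the frame under that name, and commit + tag the code with it. This is just a sketch of the command construction (class and method names are mine, not from the repo); actually running the commands, e.g. via `ProcessBuilder`, is left out.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;

// Hypothetical sketch: one timestamp-based tag links a snapshot image to the
// exact code that produced it.
public class SnapshotTag {
    static String tagName(LocalDateTime t) {
        return "snapshot-" + t.format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
    }

    // The git invocations that would be run at each snapshot, in order.
    static List<List<String>> gitCommands(String tag) {
        return List.of(
            List.of("git", "add", "-A"),
            List.of("git", "commit", "-m", tag),
            List.of("git", "tag", tag)
        );
    }

    public static void main(String[] args) {
        String tag = tagName(LocalDateTime.of(2013, 5, 1, 12, 0, 0));
        System.out.println(tag); // snapshot-20130501-120000
        System.out.println(gitCommands(tag).size()); // 3
    }
}
```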
Feedback also came from the Oblique Strategies themselves: seeing them rendered on screen hinted at other things to try.
I quickly added real data to the visualisation, so what you see now actually means something. Every circle represents 20 persons, either older or younger than 20, living in Muide-Meulestede-Afrikalaan, a district in Ghent, Belgium.
Then I made a toggle button to make kids take up more space. In areas with less open space, an extra factor makes them bigger still. The results seem readable to me, and the toggle makes it easy to compare the plain view with the one that takes the extra parameters (open space and demographics) into account.
But who am I to decide on the importance of kids and open space… let’s add sliders.
The sliders let the user decide which parameters are important and get immediate feedback. Neat!
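The sizing logic could be sketched roughly like this (the constants, weights and scaling formula here are my assumptions, not the actual sketch code): each circle starts at a base radius, kids get scaled up by a slider-controlled weight, and a second weight grows circles further where open space is scarce.

```java
// Hypothetical sketch of the circle sizing. kidWeight and spaceWeight are the
// slider values (1.0 = no extra emphasis); openSpaceRatio is 0..1, where low
// values mean a cramped area.
public class CircleSizing {
    static final float BASE_RADIUS = 5.0f; // pixels per 20-person circle (assumed)

    static float radius(boolean isKid, float openSpaceRatio,
                        float kidWeight, float spaceWeight) {
        float r = BASE_RADIUS;
        if (isKid) r *= kidWeight;                    // kids take up more space
        r *= 1.0f + spaceWeight * (1.0f - openSpaceRatio); // less open space -> bigger
        return r;
    }

    public static void main(String[] args) {
        // an adult in an open area vs. a kid in a cramped one (made-up numbers)
        System.out.println(radius(false, 0.8f, 2.0f, 1.0f)); // ≈ 6.0
        System.out.println(radius(true, 0.2f, 2.0f, 1.0f));  // ≈ 18.0
    }
}
```

With both sliders at 1.0 the toggle is effectively off and every circle keeps its base size, which gives the plain comparison view.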
Data. We’re creating more of it every day than ever before. Big companies are using your data, and you might (not) be happy with what you get in return. Either way, they are crunching your big data to come up with big numbers, or even a score for your profile.
Besides usually not being very transparent about the algorithms, they create a currency you need to trust based on a gut feeling, but that’s another story.
A good trend is that more and more governments and institutions are opening up their datasets to let the public access their data. Whether it’s because a government is legally obliged or because a company or institute believes it can benefit from opening up its sources, we have data to play with.
I’ve been reading up lately on the subject of data, and I went to resonate.io where I was inspired. @flowingdata also compiled a nice list of blogs/sites on the subject on his site.
Working with visual data has always been of interest:
Armed with a Silicon Graphics Indigo 2 Extreme desktop workstation with an R4400 processor, a four-gigabyte hard drive, and 128 megabytes of RAM, Borchers uses IRIS Explorer, an interactive 3D data visualization system, to analyze data. The team’s datasets range in size from four to 20 megabytes (based on foot scans producing 300,000 points in x-y-z spatial datasets). 1999 web 1.0 link
Now that I have 128 MB of RAM, I thought I should give it a go. I chose Processing as I’ve used it before for different tasks, and it’s a good prototyping tool used by dataviz pros to make things like Cascade (built by @blprnt at @nytlabs). Besides that, I recently had a nice re-intro to Processing by @vormplus and I followed a workshop at @p5Ghent.
Having just missed the Ghent appsforghent challenge, it felt a bit odd at first to use the data.gent.be sets, but I’m a local, and you should think local (but act global, which is why this post is in English).
An interactive, explorative tool that lets you visualise data on a map of (part of) Ghent.
I wanted to express that this particular place where I live is crowded and densely populated, which is why I started thinking about using physics: particles bumping into each other, fighting for space.
This is far from finished but I thought I’d share this already.
It’s on GitHub. Next up is giving kids more space by mapping the demographic pyramids on this data and making the particles that represent kids bigger. With the press of a button I could show what impact building a new high-rise flat would have on a certain area.
The user could set the desired square metres of outdoor area per capita and see which neighbourhoods are comfy and which are crowded. Much more is possible, but a little more time is needed too.
Right now it reads map data from an SVG file, converts those shapes to Box2D entities and populates them with particles.
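The first step of that pipeline, pulling a vertex list out of an SVG `<polygon points="...">` attribute, could be sketched like this (class and method names are mine, and the Box2D part is omitted; in the real sketch those vertices would feed a Box2D shape to form the district boundary):

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: parse an SVG polygon "points" attribute into x,y vertices.
public class SvgPoints {
    static List<float[]> parsePoints(String points) {
        List<float[]> verts = new ArrayList<>();
        for (String pair : points.trim().split("\\s+")) {
            String[] xy = pair.split(",");
            verts.add(new float[]{Float.parseFloat(xy[0]), Float.parseFloat(xy[1])});
        }
        return verts;
    }

    public static void main(String[] args) {
        // points attribute of a (made-up) district outline
        List<float[]> v = parsePoints("0,0 100,0 100,60 0,60");
        System.out.println(v.size());    // 4
        System.out.println(v.get(2)[0]); // 100.0
    }
}
```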
The screenshot isn’t using the correct data yet, but you don’t need a lot of imagination to see that it has more potential than the original.
Just using a map to display map-related data is a start, and while that is an option on the site (which is a great source of info by the way, well presented and all that), this static mapping doesn’t really say a lot. It’s crowded. Period.
I’m just using this example to explore the possibilities of using physics to make sense of data. This doesn’t necessarily mean you need to map physical properties to data that exhibits the same properties in real life. I can imagine that if you need to sort certain types of data, it could make sense to give them different densities, so some data floats while other data sinks, while you’re actually visualizing, e.g., Twitter vs. Facebook.
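That float-or-sink sorting could be as simple as assigning each data category a density relative to the surrounding liquid (density 1.0). The categories and values below are made up for illustration:

```java
import java.util.Map;

// Hypothetical sketch: per-category densities decide whether particles of
// that category float or sink in a liquid of density 1.0.
public class DensityMapping {
    static final Map<String, Float> DENSITY = Map.of(
        "twitter", 0.6f,   // lighter than the liquid -> floats
        "facebook", 1.4f   // heavier -> sinks
    );

    static String behaviour(String category) {
        return DENSITY.get(category) < 1.0f ? "floats" : "sinks";
    }

    public static void main(String[] args) {
        System.out.println("twitter " + behaviour("twitter"));
        System.out.println("facebook " + behaviour("facebook"));
    }
}
```

In a physics engine these densities would just be set on the particle bodies, and the sorting happens by itself as the simulation runs.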