What if the patterns and themes in online discussions made sound? What would it be like? What would it look like?

https://www.youtube.com/articonf/live

‘Contemporary cultural meanings about the social and technical dimensions of digital technologies circulate in social media’, says Pedro Jacobetty of the University of Edinburgh. ‘On Twitter, different actors express their understandings and imaginaries in constant digital exchanges. The goal of Soundchain is to raise awareness and foster debate around blockchain technologies through what amounts to both an aesthetic intervention and a form of social commentary.’ Soundchain is part of the public action dimension of ARTICONF, a European project led by universities and European startups centred on the democratisation of social media. It is a way to help everyone hear what the buzz about blockchain really is.

Soundchain uses topic modelling, a machine learning technique for natural language processing (NLP). Topic modelling is useful for analysing large collections of documents, treating the content of each document as a mixture of meaningful latent structures (“topics”). Its objective is to uncover those semantic structures, the latent structures of meaning that organise the distribution of words into topics across the documents. The Soundchain music generator monitors online discourse about blockchain technology in real time through the Twitter streaming API, filtering for the hashtag #blockchain. Tweets are interpreted as they arrive, while the “genesis tweet”, the first tweet captured when the generator starts, is used to create the melody and the rhythm. The topics detected in each tweet are then mapped to parameters of the music generator, transforming the Twitter stream into a never-ending sound piece.
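In rough outline, that pipeline might look like the sketch below. This is an illustration rather than Soundchain’s actual code: it assumes the tweepy library (3.x streaming interface) for the Twitter streaming API and a pre-trained gensim LDA model standing in for whatever topic model Soundchain uses; the credentials, file paths and handle_tweet() function are placeholders.

```python
# Sketch: stream #blockchain tweets and infer each tweet's topic mixture.
# Assumes tweepy 3.x for the Twitter streaming API and a pre-trained gensim
# LDA model; credentials, file paths and handle_tweet() are placeholders.
import tweepy
from gensim import corpora, models
from gensim.utils import simple_preprocess

dictionary = corpora.Dictionary.load("blockchain.dict")   # hypothetical vocabulary
lda = models.LdaModel.load("blockchain_lda.model")         # hypothetical trained topic model

def handle_tweet(topic_weights):
    # Stand-in for the mapping step: in Soundchain this would update the
    # music generator's parameters.
    print(topic_weights)

class TopicListener(tweepy.StreamListener):
    def on_status(self, status):
        tokens = simple_preprocess(status.text)            # basic tokenisation
        bow = dictionary.doc2bow(tokens)
        # List of (topic_id, weight) pairs describing the tweet's topic mixture
        topic_weights = lda.get_document_topics(bow, minimum_probability=0.0)
        handle_tweet(topic_weights)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=TopicListener())
stream.filter(track=["#blockchain"])                       # real-time filter on the hashtag
```

Each incoming tweet thus ends up as a list of (topic, weight) pairs, which is the kind of signal that can then be fed into a music generator.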

ARTICONF proudly presents the work of Pedro Jacobetty, from the sociology department of the University of Edinburgh. Hi Pedro, can you introduce Soundchain to us?

Hello there. So Soundchain seeks to sonify discourse about blockchain on social media. I use the Twitter Live API to capture the stream of all tweets using the blockchain hashtag. I capture them in real time as they are tweeted, and then I apply a machine learning technique called topic modelling to identify which topics or themes are present in each tweet. Finally, I map the presence of different topics to different parameters of the music generator.
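To illustrate that final mapping step (again, not Pedro’s actual code), a tweet’s topic mixture could be turned into generator parameters along these lines; the parameter names and ranges are invented for the example.

```python
# Hypothetical mapping from a tweet's topic mixture to music-generator
# parameters; the parameter names and ranges are invented for illustration.
def topics_to_params(topic_weights):
    """topic_weights: list of (topic_id, weight) pairs, weights summing to ~1."""
    weights = {topic_id: weight for topic_id, weight in topic_weights}
    dominant = max(weights, key=weights.get)
    return {
        "scale_index": dominant,                         # dominant topic picks the scale
        "amp": 0.3 + 0.7 * weights.get(0, 0.0),          # topic 0 drives loudness
        "dur": 0.25 + 0.75 * (1 - weights.get(1, 0.0)),  # topic 1 speeds up the rhythm
    }
```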

Many different digital art projects are focused on very technical aspects of technology. As a sociologist I’m particularly interested in the meanings that circulate in our society and guide our behaviour. That’s why I decided to use Natural Language Processing, social media discourse and topic modelling to create Soundchain.

That sounds pretty cool. Would you be able to show us how it works?

Soundchain runs on a Linux server and is written in Python. I can show you how it works inside. So here is the code. It uses FoxDot, a Python live coding library, which actually operates another piece of open source software, a software synth called SuperCollider. They work together on the same server. So here you see bits of the code that generate the music. And now I'll show you how things fit together. As you can see, there is a script that starts FoxDot and Python. SuperCollider, which also runs on the server, generates the sound. Then I use another open source tool called FFmpeg, which streams to video streaming platforms in real time. And finally, there is the Twitter Live API, which monitors new tweets that use the blockchain hashtag. So one machine gathers the tweets, generates the sounds and streams to the video streaming platform.
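For a flavour of the live coding side, here is a minimal FoxDot sketch, assuming SuperCollider is already running on the same machine; the note pattern and the way topic-derived parameters are applied are illustrative, not Soundchain’s actual patch.

```python
# Minimal FoxDot sketch (FoxDot drives SuperCollider, which must already be
# running). The note pattern and the topic-driven updates are illustrative;
# in Soundchain the melody and rhythm come from the "genesis tweet".
from FoxDot import *

Clock.bpm = 110
Scale.default.set("minor")

# A simple melodic player whose attributes can be updated while it plays.
p1 >> pluck([0, 2, 4, 7], dur=1/2, amp=0.8)

def apply_params(params):
    # params is a dict such as the one produced by topics_to_params() above
    p1.amp = params["amp"]
    p1.dur = params["dur"]
```

FFmpeg then only has to capture the server’s audio output and push it to the video streaming platform, which is how the whole system stays on a single machine.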

How can this project have a positive impact on society?

I think it’s important right now to raise public awareness and start collective debates about the role of new technologies. These technologies are still being developed, and they may well shape our lives and our collective futures. It’s high time to debate what kind of technologies we want, and high time for people to get familiar with them and understand what kind of impact they may have, so they can make better decisions.

Thank you so much, Pedro, for showing us Soundchain. For all the people out there, make sure to watch this livestream on ARTICONF’s YouTube channel, and subscribe whilst you are there.

ARTICONF addresses issues of trust, time-criticality and democratisation for a new generation of federated infrastructure, to fulfil the privacy, robustness and autonomy-related promises that proprietary social media platforms have failed to deliver so far.

Soundchain was also supported by the mur.at artist collective (Graz, Austria).

For more information please contact Pedro Jacobetty at p.jacobetty@ed.ac.uk or Media Coordinator David Sarlos at david@vialog.io.

26 March 2020