Interactive Media Technology and the Arts
I have FINALLY finished composing all of my PhD music portfolio works, and writing up the technical report on how they were composed. You can now see them on the portfolio page on this website!
Please let me know what you think - and feel free to ask me any questions about how I did any of it!
My practice-led research centred on using biometric data from wearable (Myo) sensors to gesturally control virtual instruments and spaces (including VR/MR) in real-time, opening new avenues and methods for musical creativity. This was made possible by a number of software tools, but Wekinator (an interactive machine learning tool) was particularly useful and pertinent for marrying the technical enquiry with the artistic side of things.
I've just published a new work of mine, Membrana Neopermeable, which you can see via the Portfolio page on this site.
It uses the Oculus Quest 2 to create a Mixed Reality experience, allowing the performer to play both an acoustic guitar - and a virtual one. Check it out and let me know what you think! It has recently been selected for performance at NIME2022.
Hi, the internet!
It has been a long time since I posted anything new, but watch this space. I am in the process of completing a PhD within the realm of interactive sound technology - hence the radio silence!
For now, check out my research and music work within the site - enjoy and see you soon.
Real-time audio technology is powerful. Very powerful.
Imagine being able to explore a digital world where the environment (where this article is concerned, the sound!) is so realistic that we can't differentiate between the real world and the digital - i.e. the Matrix (but, perhaps, without the agents...). That is the power of real-time audio technology.
The gaming industry is currently responsible for the commercial pioneering of interactive audio technologies. To make a digital world 'believable' and much like our own, these technologies should mimic how we listen to the real world - so that we relate directly to the game we are playing and feel better immersed within it - and video games do this to the best of their ability (e.g. Grand Theft Auto). So what audio technologies are being used within video games at the moment, and how far away is real-time audio technology?
To date, the gaming industry uses a type of audio technology that mimics the behaviour of real-life sound through the use of computer algorithms. This is called procedural audio. Simply put, a procedural audio system generates convincing music or sound effects on the fly by making clever use of a finite set of previously recorded (the important point, here) sound files; I'll briefly explain the technical points of how this works later. Perhaps the most famous example of procedural audio is Hello Games' No Man's Sky.
So, what is procedural audio and why was it created?
How do we make gaming, or interactive art, more immersive? As briefly mentioned earlier, we simply mimic how we perceive real life through our senses (sight and hearing - in the case of this article, the latter). Procedural audio was a step closer to achieving that, by allowing the computer to fool us into thinking that what we are hearing is unique every time we hear it (think of this in the context of an explosion sound effect).
Technically, this is done by randomising the crucial components (ADSR) of a sound wave, drawn from a finite number of sound files:
- To start, I take 5 recordings of an explosion.
- I cut each explosion sound wave into its ADSR envelope components (Attack, Decay, Sustain, Release).
- I import all of the cut components into my game engine.
- I feed each cut component of a specific envelope stage (e.g. Attack) into a randomisation mixer for that individual stage (A, D, S, R); this way, the engine randomises, for example, which Attack of a sound wave (explosion) is played every time I trigger it.
- Finally, I feed each stage's randomisation mixer into a main randomisation mixer for output.
- The output is then a randomly generated, convincing sound - an explosion.
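The randomisation chain above can be sketched in a few lines of Python. This is a hypothetical illustration, not engine code: the file names, the pool sizes and the two-stage mixer structure are my own assumptions, standing in for the per-stage and main randomisation mixers described in the list.

```python
import random

# Hypothetical pools of pre-cut envelope segments, one pool per
# ADSR stage (e.g. 5 Attack cuts taken from 5 explosion recordings).
SEGMENT_POOLS = {
    stage: [f"explosion_{i}_{stage}.wav" for i in range(1, 6)]
    for stage in ("attack", "decay", "sustain", "release")
}

def stage_mixer(stage):
    """Per-stage randomisation mixer: pick one cut segment for this stage."""
    return random.choice(SEGMENT_POOLS[stage])

def main_mixer():
    """Main randomisation mixer: assemble one full A-D-S-R playback sequence.

    Each trigger yields one of 5**4 = 625 possible combinations,
    built from only 5 raw recordings.
    """
    return [stage_mixer(s) for s in ("attack", "decay", "sustain", "release")]

print(main_mixer())
```

Each call to `main_mixer()` stands in for one trigger of the sound event: four independent random picks, one per envelope stage, concatenated into a single 'new' explosion.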
This system creates the potential for huge variation in what we hear, using only a finite number of raw recordings - and it does a good job (see the image below for an example of how it works in Unreal Engine 4). The method saves strain on the CPU and offers a bullet-proof way of creating 'convincing' audio. But the sky is the limit and, naturally, we need more immersion; procedural audio can only get us so far. The answer is real-time audio.
What is Real-time audio?
Real-time audio is the natural successor to procedural audio because it allows us to create better aural immersion within video games. It is achieved through the instant generation of sound and, unlike procedural audio, through a system that can generate an infinite number of sounds in response to human interaction. With an infinite, self-generative audio system we move closer to how we perceive real-world sounds, since the sounds we encounter in life are never finite and never sound the same twice.
Think of the complexity of how we perceive sounds in real-life through the following question:
'If I place a glass down upon a table, will it always sound the same?'
The answer is quite clearly 'no'.
This is because there are so many real-world (physical) variables at play when we hear the glass being placed: the force of the placement, the material and contents of the glass, the surface of the table, the acoustics of the room, and our position relative to the sound source.
All of these elements would affect our aural perception of sound - something which procedural audio systems do not have the capability to consider. Real-time audio technology, however, does.
Therefore, real-time audio's mission is to create a convincing and sophisticated system that will allow us to hear digital worlds that feel and behave just like our own. Then, and only then, will we be able to feel more immersed within digital worlds.
So, how do we achieve real-time audio?
The process of creating real-time audio systems is currently undergoing research and is not quite as simple as implementing procedural audio systems, for a number of reasons: mainly, the potential strain real-time systems would place on the CPU and, perhaps more importantly, aesthetic considerations - i.e. how do we create convincing and aesthetically pleasing audio via computer generation and, ultimately, artificial intelligence? Can computers be artfully intelligent? Watch this space for a more detailed discussion of these questions and of the technical mechanics of real-time audio.
So what can we achieve from real-time audio within video games? Simply put, the potential for better and total immersion.
We have always been obsessed with escapism, visiting or feeling a part of another world or space (see image below for an example of this in popular culture) and real-time audio technology is our next step in achieving that.
Thanks for reading.
Remember to follow this blog for more discussion on interactive audio technology!
A friend recently, and indirectly, pointed out that I am increasingly in danger of overusing the word 'real-time' - so what is it, and why is it important in the world of sound technology today?
I like to think of the term real-time as the instantaneous supply of something that is, well, needed; in the world of sound technology (and, more specifically, gaming), a cyclic supply-and-demand chain within coding systems/environments. The gaming industry as we know it today is composed of procedural audio systems (which you can read about in detail here), which try their best to mimic the physical behaviour of sound. So, for a more immersive and progressive level of play, how can we take this further, past mimicking? As future technologies become more sophisticated, better funded and more accessible, so must other areas of the gaming industry - here, sound. And how do we achieve the notion of real-time audio, and what are its key benefits?
Real-time audio is currently in development: several leading institutions and gaming companies (behind closed doors, for now!) are looking at ways to create, design and implement real-time systems within video games. The aim, as previously mentioned, is a completely immersive experience for the player and a way of reducing vanilla digital environments. My own research to date has focused on the cross-communication between Unreal Engine 4 (UE4) and Max, for the real-time transfer of visual data to sound synthesis (UE4 <-> Max). The research I am currently shaping is an extension of this: a more sophisticated version of what I have already conducted, concerning the real-time production of visuals and audio, synchronously. As for a more elaborate response to the integral questions within this field, which I have highlighted above - I'm afraid you'll have to wait.
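To give a flavour of what a UE4 <-> Max data link involves: Max can receive network data via its [udpreceive] object, which speaks the OSC protocol, so one plausible transport (an assumption on my part, not necessarily the setup used in the research above) is to encode a visual parameter as an OSC message and fire it over UDP. A minimal hand-rolled sketch, with a hypothetical address and port:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate a byte string and pad it to a 4-byte boundary (OSC rule)."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address, typetag ',f',
    then the value as a big-endian 32-bit float."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")
            + struct.pack(">f", value))

def send_to_max(address: str, value: float,
                host: str = "127.0.0.1", port: int = 7400):
    """Send one visual parameter to a Max patch listening on [udpreceive 7400].
    The address and port here are illustrative, not a real patch."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))

# e.g. map a UE4 actor's height to a synthesis parameter in Max:
send_to_max("/ue4/actor/height", 142.5)
```

On the Max side, [udpreceive 7400] followed by a [route /ue4/actor/height] object would unpack the float and drive whatever synthesis parameter it is mapped to; the UE4 side would emit the same packets from Blueprint or C++ each tick.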
So why should we be interested in real-time audio? Because, simply, there is no shred of doubt that real-time mechanics, within the gaming industry (and beyond!), are the future of immersive and interactive technologies - as a brief search (here) within the industry itself makes clear. Here's to the future.
Welcome to my (hopefully up-to-date) blog about all things Sound Technology - mostly within the realm of interactive sound technology.
...watch this space!
Astounding piece of technology and exciting to see how audio, in the not too distant future, will follow suit!