Looking Ahead: The Future of The Ring

As The Ring continues to evolve, reflecting on its development reveals opportunities to refine, expand, and elevate its impact as an immersive audio-visual installation. From overcoming technical challenges to exploring new creative dimensions and exhibiting in more suitable environments, the future of The Ring holds potential for meaningful adjustments and growth. Below, I discuss key aspects of its ongoing transformation and how these align with its ambitions, including the possibility of showcasing the installation in a multi-channel format at events like Amoneus.

Troubleshooting and Technical Refinement

One of the persistent challenges during the installation’s journey has been technical reliability, particularly regarding its LED animation and control systems. Previous iterations revealed vulnerabilities in the hardware, such as microcontroller malfunctions, which impacted the intended immersive experience. For The Ring to reach its full potential, addressing these challenges is a priority.

Future iterations will benefit from proper testing protocols and improved design architectures. For example, implementing modular systems that allow for easy troubleshooting and replacement on-site will minimise downtime. Additionally, having backup hardware readily available can mitigate unforeseen failures during transportation or setup. Such refinements will ensure the installation operates seamlessly, allowing the creative vision to shine through without technical disruptions.

Expanding the Scope: New Audio-Visual Games

A key element of The Ring’s appeal is its ability to transform participants into performers, using their movements to shape soundscapes and visuals. Building on the foundation of the Entry Scene and Scale Game, I aim to develop a broader series of audio-visual games. These new games will enhance interactivity and expand the creative possibilities for participants.

One concept in development involves introducing percussive elements that respond to gestures, enabling participants to create dynamic, rhythm-based compositions. Additionally, integrating adaptive visuals that react to movement speed and proximity can deepen the connection between sound and light, resulting in a more immersive experience. These new games will challenge participants to explore not just the installation’s features, but also their own creativity and physical expression.

Extending the Sound Design

Sound design is central to the immersive experience of The Ring, and there is significant potential to make it even more sophisticated and diverse. Incorporating percussive elements is a natural next step, enabling participants to trigger beats or rhythmic patterns through specific gestures. The tactile quality of percussive sounds can enhance engagement, adding a visceral dimension to the installation.

Beyond percussion, expanding the range of MIDI CC parameters controlled by the gloves will provide more nuanced sonic manipulation. For instance, allowing participants to adjust reverb, delay, and distortion dynamically will give them greater creative agency. Pairing this with harmonic complexity, such as customisable scales and tonal palettes, will enrich the auditory experience, making it more versatile and expressive.
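As a rough sketch of how this expansion might look, the mapping from glove axes to extra MIDI CCs could be as simple as a scale-and-clamp. The CC numbers and axis names below are illustrative assumptions, not what currently runs on the gloves:

```python
# Illustrative future mapping of glove axes to MIDI CC messages.
# CC 91 and CC 93 are the General MIDI reverb and chorus depth controllers;
# the axis assignment itself is a hypothetical example.
CC_MAP = {"roll": 91, "pitch": 93}

def axis_to_cc(angle_deg, lo=-90.0, hi=90.0):
    """Scale a glove axis reading in degrees to a 0-127 MIDI CC value."""
    clamped = max(lo, min(angle_deg, hi))
    return round((clamped - lo) / (hi - lo) * 127)
```

Each new gesture axis would then only need an entry in the map and a CC message per update, keeping the expansion incremental rather than a firmware rewrite.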

Finding the Right Environment

Exhibiting The Ring in environments that support its immersive qualities is crucial. While previous showcases in club settings offered energy and spontaneity, the chaotic soundscape and limited control over space created challenges for both the installation and the audience experience. Moving forward, The Ring would thrive in venues that allow for a more focused interaction, such as gallery spaces or dedicated areas within larger events.

Gallery spaces offer controlled acoustics and lighting, enabling the intricate details of The Ring to be fully appreciated. If presented at high-energy events, having a dedicated, quieter area would allow participants to engage with the installation without external distractions. By carefully selecting exhibition environments, The Ring can deliver its intended impact more effectively.

Taking The Ring to Multi-Channel Formats

One of the most exciting opportunities lies in showcasing The Ring in a multi-channel audio setup, where sound can move spatially around the audience for a fully immersive experience. Multi-channel formats elevate the auditory dimension, creating a 360-degree sound field that interacts with the visual elements in real time.

Platforms like Amoneus provide an ideal stage for this next phase. Known for its focus on innovative and immersive installations, Amoneus fosters a space for cutting-edge art to thrive. Submitting The Ring to such platforms opens the possibility of exhibiting it in a format that aligns with its ambitious goals. A multi-channel setup would allow participants to experience sound and light in an entirely new way, enveloping them in a dynamic interplay of movement, sound, and visuals.

Challenges of ‘The Ring’ Installation

Renowned industrial and goth venue Electrowerkz in Angel, London

1. Experimentation: Trying New Things

One of the core goals of this iteration of The Ring was to integrate new features, particularly synchronizing LED animations with the sonic elements. This was an ambitious addition that aimed to enhance the audience’s immersive experience, creating a seamless connection between sound and visuals. However, as with any new feature, it came with significant technical challenges. The animation relied on a network of microcontrollers to drive the LED strips, a system I had never tested under the constraints of a live club environment.

2. Technical Malfunctions and a Cyberpunk Rescue

The day of the exhibition turned into a chaotic rush to fix critical failures. Several microcontrollers burned out during last-minute adjustments to the power supply. This forced me to drastically scale down the visual aspect of the installation, reducing it to only eight LED strips. On top of that, during transportation, the remaining microcontroller responsible for the animations was damaged, rendering the visuals almost entirely dysfunctional.

In a dramatic turn, a friend saved the day by delivering spare Teensy boards I had ordered as a contingency. His arrival at the club on a bike, handing over the parts while I wore a gimp mask as part of my costume, attracted the attention of the club’s security. The situation, as surreal as it was stressful, felt like something out of a cyberpunk novel. After explaining that the “suspicious” bag contained microchips, we were allowed through. Despite our efforts, the animations remained glitchy and erratic throughout the night. The sound elements, however, worked beautifully, salvaging the overall performance.

3. Budget Constraints

The exhibition proved to be an expensive endeavor. The cost of replacing burned microcontrollers and purchasing spare components quickly added up to hundreds of pounds. Budget overruns due to technical malfunctions underscored the importance of contingency planning and financial flexibility when working with complex installations.

4. Navigating the Stressful Club Environment

Electrowerkz, with its multi-floor layout and pulsing energy, was an exhilarating but challenging venue. The sheer volume of attendees made it nearly impossible to track the order of audio and video recordings during the event. The constant movement of people added a layer of unpredictability to the interaction with the installation. While the chaotic environment suited the experimental nature of The Ring, it also highlighted the difficulty of maintaining control over the documentation process.

5. Overemphasis on Technical Aspects

In hindsight, I recognize that I placed too much focus on the technical and visual components of the installation, to the detriment of the sonic elements. While the soundscapes and interactive scales worked well, they didn’t receive the same level of attention during development, which might have enhanced the overall experience. Striking a balance between the auditory and visual aspects is a key takeaway for future iterations.

6. The Impact of Deadlines

The tight deadline compounded the stress of preparing The Ring. Late nights and last-minute fixes led to technical oversights, such as the power supply issues that caused microcontroller failures. The pressure of time emphasized the importance of thorough testing and preparation well ahead of an event.

7. Loudness Interference

As mentioned in a previous blog post, the loud club environment created a significant challenge for the installation. Despite being placed in a chill-out area, sound from a nearby speaker interfered with the experience. Wireless headphones helped mitigate this issue, but it was a far cry from the intended multichannel audio setup.

Execution of ‘The Ring’

The Ring was exhibited on the 22nd of November 2024 during the queer art rave Riposte at Electrowerkz, a vibrant multi-floor venue located in the Angel area of London. This exhibition provided a unique opportunity to test audience engagement and interaction with the installation within the dynamic context of a club night. My objective was to explore diverse behaviours, degrees of individual engagement, and the overall impact of the piece on attendees who, through their interaction, effectively became performers themselves.

Positioning The Ring within the bustling, energetic environment of a club presented a mix of challenges and insights. The goal was to transform the typical club-goer experience, immersing participants in a blend of sonic and visual stimuli while encouraging them to engage beyond passive observation. By doing so, the installation sought to blur conventional boundaries—not just between body and sound, but also between audience and artwork.

Implementing The Ring in this particular setting proved both stressful and complex. The venue’s acoustically congested environment necessitated significant adaptations to the original design. Although the installation was conceived as a multichannel setup, the loud music from the surrounding rooms made this approach impractical. Instead, wireless headphones were utilized to create a more focused auditory experience. Additionally, the installation was placed in the club’s designated chill-out area, yet it was not immune to external noise interference, as a nearby loudspeaker directed music from one of the other rooms toward the installation space.

Despite these challenges, the installation successfully garnered a notable amount of interest and participation. Attendees engaged with The Ring in varied and often unexpected ways. Some individuals immersed themselves fully in the audiovisual (AV) games, exploring the interactivity and engaging with the installation’s sonic and visual elements as intended. Others, however, treated the installation more superficially, using it primarily as a backdrop for social media photos, with minimal interest in its interactive or auditory components.

Observing these interactions highlighted intriguing patterns in audience behaviour. The majority of participants seemed compelled to engage with the scales and explore the installation’s interactive features, demonstrating curiosity and playfulness. This response was particularly encouraging given the auditory challenges posed by the nearby loudspeaker.

This context-specific performance of The Ring offered valuable insights into how interactive installations function in non-traditional art spaces. The club night setting, with its inherent distractions and competing sensory stimuli, presented an entirely different dynamic than a gallery or more controlled environment might. These challenges underscored the importance of adaptability in interactive art and revealed fascinating tensions between audience intentions, environmental constraints, and the installation’s immersive potential. By inviting the audience to become co-creators, The Ring succeeded in fostering a participatory atmosphere, even if the depth of engagement varied widely among attendees. This experience will undoubtedly inform future iterations of the project, particularly in balancing accessibility, interactivity, and the intended impact of the installation.

Technology behind ‘The Ring’

This blog post is dedicated to the hardware side of the installation. The final execution differed from the intended form; here I describe the intended form, and the differences from the executed version (and the reasons behind them) will be explained in another blog post. The intended form of the installation consists of:

  • 16 programmable LED strips, approx. 190 cm tall (107 programmable LEDs each), positioned in a circle
  • A pair of WiFi motion-tracking gloves
  • Control Station
  • A pole with a ToF sensor and Bluetooth for triggering the animation (or AV Game)

Programmable LED strip WS2812B – the building block of the visual interface for The Ring

Each glove contains an ESP32 microcontroller, a BNO055 IMU sensor for tracking motion, and a DC-DC buck converter that steps the small LiPo battery's 3.7 V down to the desired 3.3 V (the operating voltage of both the ESP32 and the BNO055). Raw x, y, z data from the sensor are processed on the microcontroller and sent via WiFi on separate channels to the Control Station.

Control Station

The Control Station contains two ESP32 receivers, one for each glove. The incoming data is split and sent to a Teensy 4.0 containing logic which converts specific angles of the 360-degree circle into specific MIDI notes of pre-programmed scales, as well as one MIDI CC control. Each MIDI note covers 45 degrees of the circle. The Teensy 4.0 can be connected directly to any DAW. The second avenue from the data split leads to two Teensy 4.1 boards, which control the LED interface; each Teensy 4.1 handles 8 strips. I chose this approach because I found it necessary to use the OctoWS2811 library, which enables very fast simultaneous LED animations compared to libraries like FastLED or NeoPixel. I tried those too and found them too inefficient for fast, real-time applications. The Control Station also contains a Bluetooth module which receives data from the pole's ToF sensor.
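The actual firmware runs in C++ on the Teensy, but the core angle-to-note logic can be sketched in a few lines of Python. The scale notes and function name here are illustrative, not copied from the installation's code:

```python
# Sketch of the angle-to-note mapping: 8 notes per scale, 45 degrees each.
G_MINOR = [55, 57, 58, 60, 62, 63, 65, 67]  # G natural minor, one octave, as MIDI notes

def angle_to_note(angle_deg, scale=G_MINOR):
    """Map a 0-360 degree glove angle to a scale note; each note spans 45 degrees."""
    sector = int(angle_deg % 360) // 45  # 360 degrees / 8 notes = 45 degrees per note
    return scale[sector]
```

Because the mapping is pure arithmetic, swapping scales or changing the sector width (for scales with more notes) only means changing the list and the divisor.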

Pole with ToF sensor and the rest of the Control Station

The pole contains a ToF (Time of Flight) sensor, which measures proximity to an object. The distance data is sent via a Bluetooth module to the Control Station. In the installation it is used to trigger the initial animation and sound upon entering The Ring.
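A minimal sketch of how such a trigger might behave: fire once when a participant crosses the threshold distance, then re-arm once they step away. The threshold value is an assumption for illustration, not the installation's actual setting:

```python
# Hypothetical one-shot entry trigger driven by ToF distance readings.
class EntryTrigger:
    def __init__(self, threshold_mm=800):  # illustrative trigger distance
        self.threshold_mm = threshold_mm
        self.inside = False

    def update(self, distance_mm):
        """Return True exactly once per entry into The Ring."""
        if distance_mm < self.threshold_mm:
            if not self.inside:
                self.inside = True
                return True  # rising edge: someone just crossed into the ring
            return False
        self.inside = False  # re-arm once the reading clears the threshold
        return False
```

The edge detection matters here: without it, a participant standing in the entrance would retrigger the animation on every sensor reading.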

Body as Musical Instrument by Atau Tanaka and Marco Donnarumma

The chapter “The Body as Musical Instrument” explores the concept of the human body serving as an integral musical instrument through embodied interaction, gesture, and physiological engagement. This framework synthesizes phenomenology, body theory, and human-computer interaction, examining the physical and technological extensions of the human form in musical performance.

Body and Gesture in Musical Contexts

The body’s involvement in music extends beyond tactile manipulation of instruments to a profound interplay between physicality, sound, and space. For example, brass instruments engage the player in a feedback loop, where acoustic resistance informs and adapts the performer’s physiological response, creating an interactive system of sound production and embodiment (p. 2). This phenomenon is tied closely to proprioception, the body’s innate sense of position and movement. Proprioception bridges conscious and unconscious motor control, allowing musicians to refine gestures and adapt their performance dynamically, as seen in how instrumentalists use diaphragmatic control to modulate tone or avoid injury (pp. 3–4).

The concept of body schemata, as discussed by Merleau-Ponty, highlights how the body integrates tools and instruments into its sensory and motor systems. For instance, the example of an organist illustrates how performers do not rely on the objective positions of pedals or stops but incorporate these elements into their extended proprioceptive field, creating a seamless interaction between body and instrument (p. 5). Musicians thus engage instruments affectively, using gestures that are intrinsically tied to their expressive intent, rather than merely mechanical actions (p. 6). As I understand it, the concept of body schemata extended through digital technology can be seen explicitly in the video below.

Atau Tanaka, Suspensions for Piano & Myo Armband, performed by Giusy Caruso

Embodied Interaction and Technological Extensions

Technological advancements have amplified the role of gesture and the body in music, creating opportunities for innovative embodied interactions. Biosensors, such as EMG (electromyogram) and EEG (electroencephalogram), detect physiological signals directly from the body, transforming muscle movements or brain activity into musical control inputs. These devices exemplify the transformation of the body into a musical medium, a development highlighted by early gestural electronic instruments like the Theremin (pp. 7–9). I found particularly interesting the note about posthuman hybridisation of the body with technology. These advancements align with Donna Haraway’s concept of the cyborg, where human and machine interact to form hybrid entities, expanding the expressive potential of the human body beyond traditional boundaries (p. 6).

Paul Dourish’s perspective on embodied interaction further situates these developments, emphasizing that interfaces should not merely represent physical interaction but actively become mediums of interaction (p. 8). In this context, technologies like biosensors and motion capture systems enable performers to seamlessly integrate their physiological and gestural inputs into musical creation, fostering more profound connections between body, instrument, and sound.

Gestural and Physiological Performance Practices

Recent works demonstrate the evolving interplay between body and technology. Atau Tanaka’s Kagami (1991) transformed muscle tension, detected via EMG signals, into MIDI data to control digital sound, establishing a direct and intuitive connection between gesture and sonic output (p. 13). Marco Donnarumma’s Ominous (2013) extended this approach, using mechanomyogram signals to create interactive soundscapes shaped by whole-body gestures, effectively molding sound like a sculptural material in space (p. 14). These examples emphasize the transition from static instrument manipulation to adaptive systems where performer and instrument co-evolve (pp. 16–17).

These practices challenge traditional control paradigms by fostering adaptive configurations in which the instrument responds dynamically to the performer’s physiological and gestural inputs. For instance, in Ominous, the interplay between the performer’s muscular activity and the neural networks driving the instrument illustrates a symbiotic relationship, blurring the boundaries between human control and technological agency (p. 16).

The integration of gesture, body, and technology redefines the concept of musical instruments, positioning the human body as a central, adaptable, and dynamic component in sound creation. Through physiological processes and technological extensions, performers achieve novel interactions with space, sound, and audience. As this chapter demonstrates, the body as a musical instrument not only adapts to evolving technologies but also transforms them, extending the boundaries of human expression in music (pp. 17–18).

This synthesis of embodied interaction, gesture, and physiological integration creates emergent musical forms, aligning with the posthuman notion of hybridized entities that merge physical and digital realms in artistic practices (p. 18).

Atau Tanaka has been a significant inspiration for my practice, particularly as I reflect on the similarities and differences between our approaches, especially in relation to The Ring. While we both explore the concept of the human body as a musical instrument, our perspectives diverge. Tanaka primarily focuses on internal aspects, such as muscle tension and physiological signals, whereas my work emphasizes external bodily movements. Additionally, The Ring seeks to extend this exploration by engaging the audience, aiming to dissolve another layer of duality—not only between body and sound performance but also between the audience and the art piece itself.

Bibliography:

Tanaka, A. and Donnarumma, M., 2018. The Body as Musical Instrument. In Y. Kim and S. Gilman (eds.), The Oxford Handbook of Music and the Body. [online] Oxford University Press. Available at: https://doi.org/10.1093/oxfordhb/9780190636234.013.2 [Accessed 27 Nov. 2024]

Sound design for ‘The Ring’ installation

At this stage of the installation’s development, I have created two distinct “sonic situations” that are integral to the experience: the Entry Scene (also known as the Entry AV Game) and the Scales (or Scale Game).

The Entry Scene: Adjustments and Execution

The Entry Scene was initially designed to create visual and auditory transition for participants as they stepped into the circle. My original concept was to have 16 LED strips progressively light up from left and right to the middle in the front, converging to form a complete ring that encloses the participant. However, due to a severe hardware malfunction the day before the exhibition, I had to scale back the animation to utilise only 8 LED strips. Despite this limitation, the adjustment preserved the core concept of creating an engaging and immersive entry point for the installation.

The sonic aspect of the Entry Scene complements the visual elements by employing rhythmic drum steps that align with the LED animation. As the LED strips transition step by step from white to red, the accompanying drum sounds build intensity, shrinking visually and aurally into a thin ring of light. The drum sound itself is a processed sample, crafted from a recording of a rusty metal tank in my basement. The raw, industrial quality of the sample adds a tactile and somewhat primal atmosphere to the scene, reinforcing the visceral nature of the installation’s aesthetic.

The initial 2×8-step Entry Scene featured distinct spatial imaging compared to the final 8-step version. In the original setup, 8 drum steps moved to the left and right, converging in the centre at a distance from the listener. Reverb was applied to enhance the perception of depth, while panning emphasized the directional movement, creating a more immersive spatial experience.

The final 8-step Entry Scene progresses from left to right, featuring a different panning approach. The reverb, used to convey a sense of distance, is also applied differently, resulting in a distinct spatial perception compared to the original version.

Final 8-drum-step arrangement of the Entry Scene in Ableton Live. I am using my favourite reverb, Valhalla VintageVerb.

A few seconds after the Entry Scene, The Scale AV Game begins. Each glove is programmed to control a different musical scale—G Minor for the left glove and D Major for the right. The tilt of the left glove also controls a MIDI CC parameter. Currently, only one parameter is implemented to keep the setup simple, but I plan to expand this functionality by adding more MIDI CC parameters to control additional effects in future iterations.

I’ve chosen a simple Saw64 wave in Operator, with a touch of reverb and delay. When the left hand is tilted, the pitch of the note shifts by up to 100 cents, creating a “pulling” or “tuning string” effect in the sound.
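For illustration, that tilt-to-pitch behaviour can be sketched as a conversion from tilt angle to a 14-bit MIDI pitch-bend value. This assumes the synth uses the common default bend range of ±2 semitones (200 cents); the 90-degree tilt range is also an assumption:

```python
# Hypothetical sketch: glove tilt -> MIDI pitch bend, up to +100 cents.
def tilt_to_pitch_bend(tilt_deg, max_tilt=90.0, max_cents=100.0, bend_range_cents=200.0):
    """Map glove tilt (0..max_tilt degrees) to a 14-bit pitch-bend value (centre 8192)."""
    cents = max(0.0, min(tilt_deg, max_tilt)) / max_tilt * max_cents
    # 8191 steps above centre cover the synth's full upward bend range.
    return 8192 + round(cents / bend_range_cents * 8191)
```

At full tilt this lands halfway up the bend range, i.e. exactly the +100-cent "pulling string" effect; a synth configured with a different bend range would need `bend_range_cents` adjusted to match.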

An audience member / performer playing the scales

Inspiration for a New Project: A MIDI Theremin with Visual Feedback

After extensive work with LED strips, particularly the WS2812B, I found myself inspired to create something new while working on The Ring. One improvised idea that emerged during the development of the scales in The Ring was to design a Theremin-like MIDI controller. This device would allow the user to trigger MIDI notes from a pre-programmed scale without physical contact, relying on motion sensors and providing visual feedback through LEDs. The concept was a natural extension of my work, utilising my growing expertise in coding, microcontrollers, and sensor integration.

To bring this idea to life, I built the MIDI controller using two ultrasonic sensors (HC-SR04), a 60 cm programmable WS2812B LED strip, and an Arduino Micro. The Arduino Micro, equipped with the ATmega32U4 chip, was particularly suitable for this project as it supports direct MIDI communication with DAWs (Digital Audio Workstations) and other MIDI-compatible instruments. This eliminated the need for additional hardware or software bridges, making the device streamlined and efficient.

I utilized the MIDIUSB and NeoPixel libraries in C++ to program the device. The ultrasonic sensors were configured to detect hand movements within a certain range, triggering MIDI notes based on the distance of the user’s hands from the sensors. Each sensor was assigned to a different musical scale, similar to the gloves in The Ring, creating a dual-layered experience. To add a layer of visual feedback, I programmed the LED strip to light up in distinct colors corresponding to each scale. This ensured that users could easily distinguish between the two scales, enhancing both the functionality and the aesthetic appeal of the device.
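The device's firmware is C++ with MIDIUSB, but the distance-to-note quantization at its heart can be sketched in Python. The range limits and scale notes below are illustrative assumptions:

```python
# Sketch of the theremin's quantization: slice the sensing range into
# equal bands, one per scale note; out-of-range readings trigger nothing.
D_MAJOR = [62, 64, 66, 67, 69, 71, 73, 74]  # D major, one octave, as MIDI notes

def distance_to_note(distance_cm, scale=D_MAJOR, min_cm=5.0, max_cm=60.0):
    """Quantize an ultrasonic reading into one of the scale's notes.

    Returns None when the hand is out of range (no note triggered)."""
    if distance_cm < min_cm or distance_cm > max_cm:
        return None
    step = (max_cm - min_cm) / len(scale)
    index = min(int((distance_cm - min_cm) / step), len(scale) - 1)
    return scale[index]
```

The second sensor would use the same function with a different scale list, which is what gives the instrument its dual-layered feel.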

The result was a responsive and visually striking MIDI instrument that combined gesture-based control with dynamic lighting. The experience of using this MIDI Theremin went beyond sound; it became a multisensory interaction where movement, sound, and light converged seamlessly.

The MIDI Theremin was successfully performed during the Chronic Illness XXIII event, showcasing its potential in a live performance setting. Watching it in action during the event confirmed its versatility, not just as a standalone instrument but also as a tool for enhancing interactive installations or live sets. I definitely plan to incorporate this MIDI Theremin as a permanent feature in my setup for live musical performances.

Performing the MIDI Theremin at Chronic Illness XXIII

Creating ‘AV’ Games

As I touched on briefly in my previous blog post, the Circle series is centered around the concept of audio-visual games, where a designated “conductor” takes control. Positioned within the circle and equipped with motion-tracking gloves, the conductor manipulates sound and visuals in real time, creating an immersive, interactive experience. The LED interface, consisting of 16 LED strips arranged in a ring, serves as the visual canvas for this dynamic interplay.

Building on my prior experience with creating interactive gloves and using motion to control sound, I feel confident in generating and manipulating audio elements through hand gestures. This familiarity has allowed me to focus more intently on exploring and refining the visual components of the installation. My goal is to design an engaging and intuitive system where light and sound not only complement but also amplify each other.

The “Entry Game”: A Gateway to Interaction

The first element I’ve programmed for The Circle series is the “Entry Game.” This game is designed to trigger automatically as the conductor steps into the circle. The concept behind the Entry Game is to provide an immediate, engaging introduction to the system. Upon entry, the motion-tracking gloves activate a sequence of lights on the LED strips, signaling that the conductor has entered a new interactive domain. This game acts as a gateway, setting the stage for deeper levels of interaction while ensuring the conductor feels immersed from the outset.

“Digital Hula Hoop”: A Work in Progress

Another game currently in development is the “Digital Hula Hoop.” This element focuses on creating a visual and sonic interplay that responds dynamically to the conductor’s movements. The idea is to program two light circles in different colors, representing the conductor’s hands. These circles will move and tilt within the LED ring based on the motion data captured by the gloves.

At this stage, the animation for the Digital Hula Hoop is automated and does not yet include sound integration. However, the visual elements are being refined to ensure smooth and intuitive responsiveness. The next step involves linking the motion-tracking data to control the position and orientation of the light circles dynamically.
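One possible sketch of that linkage, assuming each hand's yaw angle simply selects the nearest of the 16 strips in the ring (all names and values here are illustrative, since this part is still unimplemented):

```python
# Hypothetical mapping: a hand's yaw angle picks the LED strip its
# "hoop" circle should sit on; two hands give two coloured circles.
NUM_STRIPS = 16
SECTOR_DEG = 360 / NUM_STRIPS  # 22.5 degrees of the circle per strip

def hand_to_strip(yaw_deg):
    """Pick the LED strip closest to a hand's yaw angle (0-360 degrees)."""
    return int((yaw_deg % 360) / SECTOR_DEG) % NUM_STRIPS
```

Running this per glove, per frame would let the two light circles chase the conductor's hands around the ring; smoothing or interpolation between adjacent strips could be layered on top for fluid motion.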

On the auditory side, I envision pairing the light movements with evolving drone sounds. The amplitude and distortion of the sound would change in response to swift horizontal hand movements, creating a sense of energy and tension. Additionally, vertical hand motions could modulate the pitch, adding depth and variety to the soundscape. The ultimate goal is to achieve seamless synchronization between sound and visuals, where each gesture transforms the conductor into a performer and the LED ring into a living, reactive instrument.

I added dramatic sound design which is supposed to evoke entering a cybernetic liminal space.

Arrangement of samples in Ableton Live – 8 drum hits slowly panning to the left correspond with the animation movement of 8 LED strips, with the ‘shrink’ drone at the end.
Test of the ‘Entry’ scene and the shrink drone with a limited light sequence (strips 1, 2 and 8 only) – the number of lights is limited this time due to space restrictions in the studio.

While The Ring series is still in its early stages, the progress so far has been exciting and illuminating. The combination of intuitive hand-controlled soundscapes and visually dynamic LED animations offers immense creative potential. Moving forward, I aim to refine the interaction mechanics, ensuring that the system is not only responsive but also rewarding for both the conductor and the audience. Each game in the series will build on the others, gradually increasing in complexity and encouraging deeper engagement with the installation.

Automated Digital ‘Hula Hoop’ animation on LED interface recently expanded in length.

Interactive Audio-Visual Installation ‘The Ring’

I have decided to incorporate my Sonokinetic Arduino gloves to complement the programmable LED strips in The Ring. This marks an initial step toward a broader interactive audio-visual installation.

The Ring explores the convergence of audience and artist roles within club culture through sonic and visual mediums. It examines how the commercialization of DJ culture and the rise of social media impact immersive experiences in nightclubs, where the focus often shifts from artistry to consumerism. The installation also considers whether the constant evolution of digital technologies might provide an alternative solution through the creation of hybrid art forms.

The piece aims to democratize and decolonize club spaces by encouraging direct audience participation, disrupting the traditional dynamic where the DJ serves as the central focus while the crowd remains passive. Instead, The Ring invites attendees to actively engage, turning them into co-creators of the experience.

Drawing on Haraway’s (Haraway, 2016) cyborgian narrative, The Ring integrates the communal aspects of a club night, affective immersion, and the blurring of boundaries between artist, artwork, and audience. It creates a semi-virtual space where sonic, visual, and social elements converge into a hybrid form, challenging conventional distinctions and offering a reimagined experience of club culture.

Bibliography:

Donna Jeanne Haraway (2016). Manifestly Haraway. University of Minnesota Press.

Commission for motion tracking gloves

The inspiration for The Ring began in an unexpected and somewhat serendipitous way. I came across a striking visual of an unknown installation while scrolling through social media. The image captured my imagination, and I immediately thought, “I’d like to replicate something like this.” What began as a visual exercise—a simple attempt to recreate the aesthetic appeal of the installation—soon evolved into a much more ambitious project. As I delved deeper, I realized the potential to expand the concept by incorporating sonic and interactive elements. These additions aligned with my broader interest in creating immersive, multisensory experiences.

The idea began to take form after a shift in priorities around a separate commissioned piece I was working on at the time. Suddenly free to explore my own creative directions, I decided to use this opportunity to build upon the initial inspiration. What started as a purely visual experiment grew into an exploration of audience interaction, embodiment, and the integration of sound and motion.

In September, my friend Matteo Chiarenza Santini approached me with an intriguing request. Matteo was collaborating on a live performance for FKA Twigs and had been tasked with sourcing a pair of simple, interactive motion-tracking gloves for the performance. He reached out to me, asking if I could create a prototype that would meet the technical requirements.

Excited by the challenge, I began working on the gloves. Using my experience with Arduino and similar technologies, I designed a simplified version of an earlier prototype. The gloves featured BNO055 IMU sensors for precise motion tracking and ESP32 microcontrollers for data collection and Wi-Fi transmission. Each glove was capable of sending raw x, y, z axis motion data to a Teensy board, which interfaced with Max For Live, enabling users to control parameters in real-time. Additionally, the gloves supported direct MIDI communication, making them compatible with Ableton Live and other DAWs.
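Before any motion value can drive a parameter, the raw axis data has to be scaled into MIDI's 0–127 range. A minimal sketch of that scaling step (in Python for illustration; the function name and the ±180° range are my assumptions, not the actual glove firmware):

```python
def axis_to_cc(value, lo=-180.0, hi=180.0):
    """Scale a raw orientation-axis reading (e.g. an Euler angle in
    degrees from the BNO055) to a 0-127 MIDI CC value, clamping
    out-of-range input so the controller never sends invalid bytes."""
    value = max(lo, min(hi, value))
    return round((value - lo) / (hi - lo) * 127)

# A glove tilted to +90 degrees on one axis lands at CC value 95
print(axis_to_cc(90.0))   # 95
```

The same function can serve every axis; only the `lo`/`hi` calibration range changes per sensor.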

Although the gloves were completed, they were ultimately not used in FKA Twigs’ performance. Initially, this was disappointing. However, the experience of building the gloves and the creative potential they represented sparked a new wave of ideas for me. What if these gloves became the foundation for the interactive installation I had been contemplating? The thought of integrating motion-tracking gloves into an installation seemed like the perfect opportunity to explore the interplay between movement, sound, and interactivity on a deeper level.

This unexpected twist marked a turning point in the development of The Ring. The gloves became the starting point for an installation that not only reflected my fascination with visual aesthetics but also pushed me to explore how movement could shape soundscapes and create immersive environments. What began as a technical experiment transformed into a project driven by the potential to blur boundaries between performer, audience, and artwork.

Custom LED system for a music performance by Ona Tzar

One idea that emerged from my growing interest in creative technology was a custom light system that could add a strong visual aspect to a live music performance.

Building on my previous work with programmable LED strips and microcontrollers, I made MIDI-reactive strips. The strips are circuit-bent floor lamps. The lamps originally contained a chip for automated animations, but I found the animations unattractive and the audio reactivity quite basic. The lamps’ design, their portability, and the fact that they contain easily programmable LED lights inspired me to create something new. I removed the chip and replaced it with a 3.5mm female jack, so each lamp becomes part of a “screen” of seven lamps.

Each lamp is connected to a separate output of an Arduino Leonardo (its ATmega32U4 chip allows direct MIDI control). The Leonardo is programmed to receive MIDI notes from a DAW or MIDI instrument. Each lamp is addressed by a single MIDI note on the 0–127 scale, and a single note can trigger a different colour, pattern, or animation. With 7 lamps, the 128 notes give 18 complete series of animated colour patterns on a single MIDI channel. If we need more, we can simply program further animations on other MIDI channels (16 in total, so 16 × 18 = 288 variations).
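The note-to-lamp arithmetic above can be sketched as follows (Python for illustration; the grouping of consecutive notes is my assumption of one plausible layout, not the actual Arduino firmware):

```python
LAMPS = 7
CHANNELS = 16

def note_to_lamp_animation(note):
    """Map a MIDI note (0-127) to a (lamp, animation) pair.
    Consecutive groups of 7 notes address the 7 lamps with the same
    animation, so one channel holds 128 // 7 = 18 complete sets."""
    return note % LAMPS, note // LAMPS

# 18 complete animation sets per channel, 16 channels -> 288 variations
sets_per_channel = 128 // LAMPS
print(sets_per_channel, CHANNELS * sets_per_channel)  # 18 288
```

Notes 0–6 light lamps 0–6 with animation 0, notes 7–13 repeat the lamps with animation 1, and so on; the two leftover notes (126, 127) belong to an incomplete nineteenth set.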

Light patterns can either be played live from MIDI controllers, with Ableton Live sending MIDI note messages to the Arduino Leonardo, or written by hand in the piano roll.

For Ona Tzar’s live performance, we decided to write the MIDI clips by hand in the piano roll so they can run as live light loops together with the audio clips.

Light pattern written by hand in the piano roll – allows exact timing and an adequate response to the music
Test of various animations with Ona Tzar’s single Hypnagogia
Test of various animations with Ona Tzar’s single Hypnagogia, with additional strobe lights connected to relays, also triggered by MIDI notes
Final video of Ona Tzar’s live performance of Hypnagogia. The first video from an upcoming triptych of live performances.

Sonokinetic gloves 2.0 – Laying out ideas for improvements

Creating the prototype of the Sonokinetic gloves was an important step toward understanding the possibilities and limits of wireless wearable MIDI controllers. After further research and new knowledge of digital technologies, I decided to build new gloves that would be more stable, faster, and more accurate in performance. The crucial improvements are switching from Bluetooth to WiFi and making greater use of electronic textiles. Building the gloves from scratch now seems the more logical approach.

Test performance of the sonokinetic crown and limb stretch sensor prototypes

The final outcome of the collaboration was to be a short performance of all the assembled pieces (the core station attached to the crop top, the latex-based sensors, and the 3D-printed crown) in the octophonic ring, executed by the choreographer and kinaesthetic artist Ona Tzar to a composition and sound design by Declan Agrippa.

Unfortunately, Ona Tzar couldn’t join due to travelling and work obligations. The latex-based sensors by Ella Powell are not finished at this stage, so we used prototypes made from orthopaedic support sleeves.

The performance was recorded with a Sennheiser Ambeo VR ambisonic microphone and a MixPre-6 II audio recorder. First you can listen to an export of Declan’s abstract composition made from field recordings, and then to the ambisonic recording together with the video.

Tilting the head to the left controls the reverb. Both elbows control the amount of high-pass filtering on the two main tracks, which is why those sometimes go completely silent. The knees control the Brownian Delay, an ambisonic delay device from the Envelope4Live package of ambisonic Max For Live plugins.
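The mapping just described can be written down as a simple routing table from sensor streams to effect parameters. A minimal sketch (Python for illustration; the sensor and parameter names are hypothetical, and this is not the actual Max For Live patch):

```python
# Hypothetical routing table matching the performance description:
# each sensor stream drives one effect parameter.
ROUTING = {
    "head_tilt_left": ("reverb", "dry_wet"),
    "left_elbow":     ("track_1", "hipass_freq"),
    "right_elbow":    ("track_2", "hipass_freq"),
    "left_knee":      ("brownian_delay", "amount"),
    "right_knee":     ("brownian_delay", "feedback"),
}

def route(sensor, value):
    """Return the (device, parameter, value) triple a sensor reading
    should be dispatched to, or None for unmapped sensors."""
    target = ROUTING.get(sensor)
    if target is None:
        return None
    device, param = target
    return device, param, value
```

Keeping the mapping in one table like this makes it easy to re-patch the body between rehearsals without touching the rest of the signal chain.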

Work-in-progress V: Latex Sensor prototypes and attaching lights to the Crown

Ella has been working on a prototype of a sensor based on the AI sketch presented earlier. She has made a 3D latex tube which will probably be attached to the conductive rubber, together with other latex pieces glued to the rubber. The piece is still quite abstract at this stage, but further ideas about how to achieve the right functionality are emerging.

LED strips were attached to the inner support structure of the crown, using resin glue cured with UV light. There will be four strips in total: two short ones on the sides and two forks on the top. Each strip or fork can be controlled and programmed separately. For now the strips were tested with an external Arduino, mainly to verify functionality after attachment, but the plan is to connect them to the Core Station on the performer’s back. The light pattern will react to motion data from the BNO055 sensor on the crown.

Field recording of the Sonokinesis performance in Brno

I decided to record the performance, with the help of my friend Martin Janda, on a field recorder, as well as making a binaural recording in Ableton Live using the Envelope4Live 4D sound Max For Live plugins. Unfortunately, the binaural recording failed during the performance, so I was left with the field recording only.

In both performances, in Brno and Ostrava, I used two rubber sensors, the crown with an IMU sensor, and one Arduino glove.

Arduino glove containing 5 flex sensors for controlling effect levels and an IMU sensor for surround panning
Knee sensor made of conductive rubber, controlling effect levels
3D-printed crown designed by @Elixir31012, with an IMU sensor

During the first performance in Brno, I encountered technical issues with one of the knee sensors. I found myself needing to engage with the laptop to raise certain effect levels, which somewhat undermined the initial intent of controlling the sound exclusively through bodily movements. Nevertheless, I’ve come to understand that the issue is primarily an engineering challenge: by modifying how the conductive rubber is assembled, I should be able to prevent malfunctions in the future.

https://drive.google.com/file/d/1pJnEyY8OSx1JGoNOESh9QmsGVUmJhLTq/view?usp=sharing

Sonokinesis performance and collaboration in the Czech Republic

For the composition I would like to create something using the wireless cybernetic MIDI controller, which I have been developing since September 2023. Since then I have made various advancements and identified many flaws.

Last week I travelled to the Czech Republic and joined two collaborations. I wanted to experiment with what it is like to play with other musicians. The Sonokinesis suit and gloves are, at this stage, simply MIDI controllers: they don’t generate any sound themselves but only wirelessly control parameters within Ableton Live.

In Brno I performed on 9th May 2024 at Rello Il Torrefattore, Rybarska 13. The venue is primarily a coffee roastery, but it resides in the same building as Bastl Instruments, a world-renowned developer of Eurorack modules. Many experimental musicians and sound artists naturally gravitate towards this hidden and little-known venue.

We built an improvised octophonic ring in the garden, and two friends of mine, Valentino and Jakub, played their modular systems, sending me two stereo signals. The movements of my limbs, head, and fingers affected and modulated their audio, and together we created an improvised noise-ambient spatial soundscape.

Sonokinesis Performance at Rello Il Torrefattore, Brno, Czech Republic

During the performance I realised the importance of the spatial setup for the project. It could certainly be used in stereo or in ambisonics, but I believe that since the body exists in 3D space, the whole performance should exist sonically in space as well. I will therefore aim for a spatial setup wherever it proves possible in the future.

Sonokinesis Performance at Rello Il Torrefattore, Brno, Czech Republic

The next performance took place in the small bar KMart in Ostrava on Friday 10th May. This time I joined, with my cybernetic wireless MIDI controller, the noise and techno musician Jeronym Bartos aka Laboratorium Expiravit and the violinist and vocalist Samuela aka Soleom.

The situation was fairly similar: they sent me their audio signals and I modulated them with the movement of my body, this time in a quadraphonic setup. Given the nature of their musical taste, we created an improvised dark-ambient soundscape.

Sonokinesis Performance at KMart, Ostrava, Czech Republic

At this stage I have been thinking about how different this approach is from what I have been used to until now. My body is becoming a musical instrument in the sphere of electronically produced live music. I have never been a dancer, yet during both performances I was acting and then reacting to the sonic situations that emerged. From a choreographic point of view I have no idea how the performance might be perceived, as I feel I am at the stage of getting to know my new instrument and discovering what my own body can do in a sonic context.

Sonokinesis Performance at KMart, Ostrava, Czech Republic

Work-in-progress IV: Prototype of the Central Station completed and taking measurements for latex sensors

Last weekend I finished assembling the central station, which will collect data from all the motion sensors in real time and transfer it via WiFi to the computer. I have attached it to an old crop top of mine until a more elaborate piece is created.

Prototype of the central station for sensors attached to the crop-top
Future idea for the central station piece (A.I. sketch)

Measurements of the limbs were taken and sent to the latex maker Ella, who will be working on the piece this week.

A.I. sketch of the latex based motion sensors using conductive rubber
Example of motion sensors prototypes controlling effects

Inspiration for the new piece

The 3D printing process raised a few questions for me. I learned that the resin material is not recyclable. This made me feel quite uncomfortable, and I began thinking about how I could use the waste from the resin support structure, which I found quite beautiful and interesting and which eventually gave me some inspiration.

Resin waste from 3D printing

I got an idea for an audio-visual installation using primarily the resin waste. I would like to create a ‘plant cyborg’. The plant will consist of resin waste crystals and natural materials arranged into a structure. The crystals will glow from below using LEDs, and speakers will be hidden in the sculpture.

The plant will work as a clock: different crystals will express different time frames – hours, minutes, seconds – and sounds will occur at specific times too.

The idea is to experiment with changing the perception of time. The cyborg artist Neil Harbisson, who was born with achromatic vision, had an antenna implanted into his skull in 2004, allowing him to hear colours as sine-wave notes. Over time, he described these new sensations becoming perception. My assumption is that having such a clock around for long enough may eventually change the way we perceive time – in this case, a specific hour or time of day may become associated in our minds with a specific colour and/or sound. This could perhaps result in new ways of thinking and new inspirations.

Work-in-progress III: Preparation and 3D printing of the headpiece design

Jing Xu and Tsunagi sent me the final piece as an .stl file. I uploaded it to PreForm to prepare it for 3D printing. Since I had never done 3D printing before, I encountered a few problems, such as fitting the piece onto the right printer in virtual space and creating the support structure. All of these were quickly resolved with the help of the technician in the 3D Workshop at LCC.

The support structure created in PreForm, ready for printing

Tuesday 23rd April 2024: Last week I booked the 3D workshop for today at 12 pm. The 3D printing process takes about 11 hours and 35 minutes. Then I will need to remove the support structure with snips, sand the surface, and UV-cure the piece.

Freshly 3D printed headpiece before removing the support structure
Sanding the headpiece
Final headpiece

The headpiece came out of the 3D printer exceeding our expectations. The measurements were made to fit Ona’s head, yet it comfortably fits everyone who has tried to wear it. The next stage will be attaching the sensor. The original idea was to paint it white, but we agreed to keep it transparent.

Since the headpiece will remain transparent, I decided to attach programmable WS2812B LED strips to the inside; these will create light patterns based on accelerometer data from the sensor and work in parallel with the sound-control aspect.
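One simple way such accelerometer-driven patterns can work is to map the magnitude of motion to LED brightness. A sketch of that idea (Python for illustration; the full-scale constant is my assumption, not the final Arduino code):

```python
import math

def accel_to_brightness(ax, ay, az, full_scale=2.0):
    """Map the magnitude of a 3-axis accelerometer reading (in g)
    to a 0-255 WS2812B brightness value. At rest the magnitude is
    ~1 g (gravity), so a still head sits at mid-brightness and
    sharp movements push the strips toward full brightness."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    level = min(mag / full_scale, 1.0)  # clamp to the 2 g full scale
    return int(level * 255)
```

The same magnitude value could equally drive hue or animation speed; brightness is just the most legible starting point.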

Experimenting with WS2812B RGB and Arduino

Site-situated practices

The task we were given during the class was to position a Bluetooth speaker somewhere in the space of LCC, play some sound, and observe people’s reactions. We had to make sure the sound wouldn’t cause any distress.

Baria chose the sound of glass breaking, on a loop. We first positioned the speaker on the glass table in the main gallery on the first floor. The sound certainly drew some attention, and I noticed one person being diverted from their previous trajectory. After that we put the speaker in the middle of the corridor, right next to an artwork made of glass. This also drew attention, and people at first couldn’t figure out where the sound was coming from. Some told us later that they thought it might be part of the art piece itself. This means we could temporarily and completely change the perception of the artwork, and also draw more attention to it than it would get under normal, ‘no sound involved’ circumstances. I noticed some people even started dancing to the loop of breaking glass.

The overall reaction ranged from amusement and curiosity to mere mild attention. This made me think about the context of this particular sound being present inside an art school. The sound of breaking glass would normally be considered distressing, or a sign of a dangerous situation nearby. My assumption is that the reaction and emotional response would probably be different if the sound were placed in, for example, a shopping centre or a train station. Considering that we were playing the sound inside an art school, next to an artwork made of glass, the reactions were mild. The third location we chose was a narrow glass window with no artwork around, but reactions were still mild. My theory is that people inside an art school are somewhat cognitively desensitised to ‘weird’ or ‘uncommon’ sound events which, under other circumstances, would trigger a different and more acute response: anything ‘weird’ occurring is more likely to be some sort of artwork than a dangerous event. Of course, amplitude played a role as well; I believe that if we had made the speaker louder, the response could also have been different.

Work-in-progress II

In the past few weeks, @xiaji2tsunagi2 and @abadteeth have been designing the headpiece together. After they presented the first sketch (see previous post), @ona.tzar and I had a few notes regarding the wearability of the design.

Original sketch – front part of the headpiece
Original sketch – back part of the headpiece

We both agreed that the front part looks very interesting as an idea, but in reality wearing it could become uncomfortable or perhaps even dangerous for the eyes. We proposed removing the endings that cover the eyes. Ona also pointed out that the V-split at the back could be lower, to create space for her ponytail.

Improved sketch of the headpiece
Improved sketch of the headpiece placed on the head model

The next stage will be creating a 3D scan and taking measurements of Ona’s head. These will be sent back to @abadteeth and @xiaji2tsunagi2 so they can fit the size appropriately in the software and prepare the project for 3D printing.

Work-in-progress I

At my end I have started to build the ‘Core Station’, which will process data from all the sensors and transmit it via WiFi. It will be attached to the performer’s back. Based on my research, I decided to upgrade the microcontroller from the ESP32 of the previous prototype to a Teensy 4.1; ESP32s are still used, but only for the WiFi connection. The Teensy 4.1 contains an ARM Cortex-M7 microprocessor clocked at 600 MHz, which, compared to the ESP32’s Tensilica Xtensa LX6 at 240 MHz, is a significant improvement, allowing fast, real-time data transfer from multiple sensors at the same time.

Teensy 4.1

The Teensy will gather data from two GY-521 (MPU6050) accelerometer sensors attached to the feet, two elbow and two knee stretch sensors, and a BNO055 (9-DOF absolute orientation sensor) situated in the headpiece. Data from the sensors is sent via a UART connection to an ESP32-WROOM-32U acting as an access point. I considered an SPI connection, but I struggled to find appropriate libraries for the Arduino IDE and learned it would require learning a different IDE. I tested UART, which I am familiar with, and it proved sufficient, though I may still sort out an SPI connection in the future.
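A text-based frame is one simple way to move several sensor readings over a UART link like this. The sketch below (Python for illustration; the field names and format are hypothetical, not the actual Teensy firmware) packs one sample per newline-terminated line and parses it back on the receiving side:

```python
def frame_readings(readings):
    """Pack one sample of named sensor readings into a single
    newline-terminated ASCII line for UART transfer, e.g.
    'foot_l:0.12,head_yaw:91.50\n'. A text protocol like this is
    easy to inspect and to parse on the receiving ESP32/Max side."""
    body = ",".join(f"{name}:{value:.2f}" for name, value in readings.items())
    return body + "\n"

def parse_frame(line):
    """Inverse of frame_readings: recover the name -> value mapping."""
    out = {}
    for field in line.strip().split(","):
        name, value = field.split(":")
        out[name] = float(value)
    return out
```

A binary frame would be faster, but at the data rates a handful of sensors produce, readable ASCII frames are usually fast enough and far easier to debug.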

On the receiving end there is another ESP32-WROOM-32U, which is connected to the computer and sends the raw numerical data to Max MSP. The ESP32-WROOM-32U has a specific feature: the possibility of attaching an external WiFi antenna, which significantly improved the data transmission and range.

ESP32 WROOM-32U with the antenna.
Prototyping the Core Station on the breadboard – Teensy board, ESP32 Access Point device (Sender) and IMU sensors.
ESP32 Client device (Receiver)
Testing the speed of data transfer
Testing the range of the WiFi connection

Emerging team

I have been seeking for a while a fashion designer or maker with an appreciation of similar aesthetics to mine, so that the result could become a joint effort rather than a commission.

After discovering conductive rubber, I realised it could be efficiently combined with latex. I approached a friend of a friend, the latex maker and designer Ella Powell @exlatex.

Ella Powell has been creating latex clothes and sheeting for the past two years. She studied a short course in latex making at Central Saint Martins in the summer of 2022, and is currently studying for a master’s degree in computer science and AI.

After an initial meeting, we drafted some ideas for latex-based, organic-looking, futuristic sensors that will efficiently collect data from bending knees and elbows. Below you can see an AI-generated idea of the direction in which the piece might evolve.

Other artists joining the team are @Elixir31012, a multimedia artist duo formed in 2023 by digital artist Jing Xu @abadteeth and sound artist Jiyun Xia @xiaji2tsunagi2. Both graduated from the Royal College of Art with a degree in Fashion in 2023. Elixir31012 creates an otherworldly landscape of non-linear time through digital animation, experimental music making, wearable technology, and performance; cyborg study, myth, ritual, and feminist narratives are intertwined in their work. Elixir31012 takes its name from the Chinese Taoist vocabulary for “elixir”: 3 represents “Synthesis”, 10 the “Sacred”, and 12 the “Eternal Return”. Their sound art performance at the event Chronic Illness XX greatly intrigued me, and we started talking. The idea to collaborate emerged quickly and organically from our shared interests in creative technology, cyborg art, and sound art. Elixir31012 proposed making a headpiece that would carry the motion sensor for the Sonokinesis performance.

Elixir31012 performing at IKLECTIK Art Lab
Elixir31012 performing at IKLECTIK Art Lab

Declan Agrippa @hiyahelix, a second-year Sound Arts student at University of the Arts London, London College of Communication, is going to create a sound design using the virtual wavetable synthesiser Serum.

Below you can see work-in-progress sketches of the sensor headpiece in ZBrush.


The multi-disciplinary and kinaesthetic artist Ona Tzar @ona.tzar is joining the team as a performer. Her creative input has been very important for developing the whole system, because we would like the garments to be as user-friendly as possible. We have been actively discussing materials, sensor positions, and the shape of the garments and headpiece, trying to find the right balance between an ‘experimentalist aesthetic’ and keeping all the pieces comfortable, functional, and reliable for performance.

Sonokinesis: Part II – Drafting ideas for the collaboration

Last term I introduced the foundations of the project Sonokinesis – the idea of controlling sound through movement and other aspects of the human body. I made a pair of wireless, interactive, Arduino-based gloves which allow the sound to be controlled in the visual programming language Max MSP and mapped into Ableton Live via Max For Live. The piece has been performed on two occasions so far: at Gallery 46 in Whitechapel and at Chronic Illness XXI, an underground event featuring performance art and experimental sound. Those performances revealed many flaws, and I started troubleshooting and upgrading the project – mainly the unstable Bluetooth connection and the significant fragility of the assembled pieces.

The idea of Sonokinesis certainly doesn’t stop at a pair of Arduino gloves: I aim to develop a more stable and durable version of the gloves, followed by other garments allowing the performer to encompass other parts of their body.

I have been experimenting with flexible pads for the knees and elbows, and I created a simple accelerometer-based headpiece triggering MIDI notes or samples. All of these connect to a central mini-computer attached to the lower back with a belt. The central mini-computer is this time based on a different microcontroller, the ESP32-WROOM-32, and wireless connectivity is handled by WiFi, which has proved more stable and faster than Bluetooth.

Assembling wearable station ESP32 Wroom-32
Headpiece carrying sensor MPU6050 (accelerometer and gyroscope)

For the knees and elbows I first assembled wearable pads based on the same flex sensors I used for the fingers of the gloves. Unfortunately, they turned out to be highly inefficient in terms of durability: their function was limited by fragility, and the sensors started to break and rip after even a single use, which must be avoided at all costs, since the piece has to remain stable during performance and reusable. The cost of flex sensors is also quite high considering their fragility (about £15 per sensor).

Not long ago I discovered conductive rubber, which changes its conductive properties based on stretch. I tested a strip cut from a sheet, attached to the knee pad, and it proved very efficient and durable and, compared to flex sensors, also much cheaper.

A strip cut from the sheet, attached to the knee pad, changing its electrical resistance based on the stretch applied by knee bending.
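Reading such a stretch sensor typically means placing the rubber in a voltage divider and inferring its resistance from an ADC reading. A minimal sketch of the arithmetic (Python for illustration; the 10 kΩ fixed resistor and 10-bit ADC are assumed values, not the actual build):

```python
def divider_resistance(adc, vref_counts=1023, r_fixed=10_000):
    """Estimate the conductive-rubber resistance from a 10-bit ADC
    reading of a voltage divider: fixed resistor to ground, rubber
    strip to Vcc, ADC tapping the midpoint.

    adc = vref_counts * r_fixed / (r_fixed + r_rubber), solved for
    r_rubber below. As the rubber stretches and its resistance rises,
    the ADC count falls."""
    if adc <= 0:
        return float("inf")  # open circuit / fully stretched beyond range
    return r_fixed * (vref_counts - adc) / adc
```

The resistance estimate can then be rescaled to 0–127 for MIDI, ideally after calibrating the relaxed and fully bent readings for each limb.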

Case Study: Neil Harbisson

Neil Harbisson (born 27th July 1982) is a Catalan-raised, British-Irish-American cyborg artist and activist for transspecies rights. He was born with a rare condition, achromatic vision – total colour blindness. As he grew older, he recognised that the inability to sense colour put him at many disadvantages, because the surrounding society is inherently designed for people who perceive colour. In 2003, he and Adam Montandon started to develop an antenna that would allow him to hear colours as musical notes generated by simple sine waves. Over time, the antenna evolved from a connection to headphones and a 5 kg backpack worn on his back into a small chip, eventually implanted into his skull in 2004, conducting the sound to his ears via bone conduction.

The surgery was performed by an anonymous surgeon, since the bio-ethical committee did not approve it, considering it unethical. Neil Harbisson then ran into further legal obstacles, as he had to convince the British authorities to allow him to appear with the antenna on his passport. At first they denied his request, since it is not permitted to appear in a passport photograph with electronic devices, but after many months he convinced them that the antenna is not a device but a new organ, and that he cannot simply remove it. He is also in touch with the Swedish authorities, claiming Swedish citizenship, since the antenna’s parts were made in Sweden. Under Swedish law, one can become Swedish after living in Sweden for at least five years; Harbisson argues that a part of Sweden has been living in him for much longer than that.

Neil Harbisson described the process of becoming a cyborg as primarily psychological. First he memorised notes as colours; later they became perception, as he no longer needed to think about them and the “sound-colour” information became automatic. When he started to dream in colour, he realised that he had truly become a cyborg, as his consciousness had merged with the software on a deeper level.

The antenna can perceive colours beyond the range of human perception, sensing infrared and ultraviolet. It can also be connected to the internet and sense colours sent from other parts of the world, even from deep space when connected to satellites. From this point of view, Harbisson considers himself a ‘sensetronaut’, as he can explore outer space with his newly obtained sense while physically remaining on this planet.

Harbisson considers himself a cyborg artist: he enjoys creating sound portraits of human faces, or dressing not according to what ‘looks good’ but what ‘sounds good’. In 2010, he and Moon Ribas, another cyborg artist, founded the Cyborg Foundation, an institution that helps people become cyborgs. They believe that extending human perception by creating new senses brings new forms of intelligence and wisdom, and often has a strong ecological aspect: Harbisson has said, for example, that if everyone had night vision, cities wouldn’t need to consume vast amounts of electric energy at night, which would benefit the environment.

Essay Ideas

At this stage I still need to narrow the research down to a more specific question, but I would like to explore the topic of post-humanism and trans-humanism overlapping with Sound Arts. Firstly, I would like to focus on various definitions of what post-humanism and trans-humanism generally mean, establish the difference between the terms, and explore critiques of them.

The essay will continue with a case study of the cyborg artist Neil Harbisson, a Catalan-raised British-Irish-American cyborg artist and activist for transspecies rights. He was born with achromatic vision and is best known for having an antenna permanently implanted into his skull. The antenna allows him to hear colours, even beyond the classical human vision spectrum, translating them into audible notes generated by sine waves.

The next part will explore the idea of humans merging with and being enhanced by technology in a more non-invasive way, from the point of view of my own practice, which I call Sonokinesis, and of artists I have taken great inspiration from, such as Onyx Ashanti and Imogen Heap.

Selected bibliography:

Robert Ranisch and Stefan Lorenz Sorgner – Beyond Humanism: Trans- and Posthumanism

Donna Haraway – A Cyborg Manifesto

Andrew Pilsch – Transhumanism: Evolutionary Futurism and the Human Technologies of Utopia

Oliver Krüger – Virtual Immortality: God, Evolution, and the Singularity in Post- and Transhumanism

N. Katherine Hayles – How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics

Neil Harbisson – TED Talk

Lecture by Berk Yagli about Acousmatic Music

Acousmatic music is a conceptual genre defined mainly by sound being separated from its origins. Ideally, the listener should not be able to identify the source of the sound – both in recorded and produced pieces and during performances.

The composition is often based on several prominent elements. It usually takes long forms and contains so-called “gestures” and “textures”. Gestures are usually very prominent sounds standing more upfront; to me they were reminiscent of glitches. Textures are usually ambiences, something we might commonly call “pads”, although they aren’t necessarily melodic. Sound objects within the composition also change their pitch. Compositions are often punctuated by dramatic hits or booms followed by quieter or silent parts, introducing an element of negative space.

After listening to several examples, I found the element of separation particularly interesting. Separation is achieved not only by concealing the sound source; from my observation, acousmatic music also appears somehow separated from its creator, especially compared to commonly listened-to music, where – in performances as in recordings – the focus is often kept on the artist, and there is an inherent element of the artist’s ego. Acousmatic music is usually very abstract, and my mental and emotional processes remained quite different from those I have when listening to “usual” music. I found myself intellectually stimulated by its abstraction and by the almost mathematical precision within the apparent chaos – which was in fact a sophisticatedly constructed order – rather than being entertained or emotionally carried away; this doesn’t mean, however, that acousmatic music does not or cannot contain an emotional aspect. Its significant abstraction led me, as a listener, to perceive the artist or composer as absent or non-existent, creating another level of separation from the source – even from the artist themselves.

The piece called ‘Klang’ was a little confusing, because the source of the sound is quite obvious, considering even the cover of the piece. It almost feels like a game at this point, since we still cannot say for sure whether the sound source is the clang of a pot or something else. This suggests another important characteristic of acousmatic music: creative freedom. Within the boundaries that make music acousmatic, there are few other limits on how to compose or perform.

The acousmatic piece by Berk Yagli, and other examples like Murmures by Robert Normandeau, reminded me of my favourite experimental electronic music composer, Roly Porter. Roly Porter’s music probably isn’t inherently acousmatic, but to me it seems to overlap with and carry many of its elements.

Troubleshooting and rethinking various types of Bluetooth connection

The previous way of transferring data from the Arduino to the computer via Bluetooth turned out to be ineffective and, in fact, worked only once. I couldn’t figure out why I was never able to connect the Arduino and the laptop via the HC-05 again. I started researching the history of Bluetooth and its various types over the years, and arrived at a Bluetooth connection based on Central and Peripheral devices (Master and Slave in the older terminology).

I kept experimenting with the HC-05 module, but as I learnt, it is not ideal for every application. It is an older technology based on Bluetooth 2.0, introduced to the market in 2005; it consumes more energy and is generally slower than Bluetooth 4.0, aka BLE (Bluetooth Low Energy), introduced in 2010. The HM-10 is a BLE module which I experimented with until I discovered the Arduino Nano 33 BLE, which has, as the name suggests, Bluetooth Low Energy built into the microcontroller itself. The Arduino Nano 33 BLE also has a built-in LSM9DS1 sensor, which combines an accelerometer, gyroscope and magnetometer, similarly to the BNO055.

I experimented with the LSM9DS1 as well, but realised that for my purpose I would need the BNO055. Why? The BNO055 can produce absolute-orientation Euler angle data with a single call in the code; I didn’t find anything like that for the LSM9DS1. I found out that it is possible to program the LSM9DS1 with a sensor-fusion algorithm based on specific equations in order to obtain Euler angle values, but implementing them in functional C++ code currently goes beyond my beginner’s abilities. However, it is definitely something I will look into in the near future, because getting Euler angle values from the built-in sensor would mean overall optimisation by getting rid of a potentially redundant external sensor.
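The fusion algorithm mentioned above can at least be outlined. As a rough illustration – not the BNO055’s actual on-chip firmware, and ignoring the gyroscope entirely, so it is only valid for a stationary sensor – roll and pitch can be derived from the accelerometer’s gravity vector and a tilt-compensated heading from the magnetometer:

```python
import math

def euler_from_accel_mag(ax, ay, az, mx, my, mz):
    """Simplified Euler angles from raw accelerometer (ax, ay, az) and
    magnetometer (mx, my, mz) readings. Returns (heading, roll, pitch)
    in degrees. No gyroscope fusion, so the output is noisy and only
    meaningful when the sensor is more or less still."""
    # Roll and pitch from the direction of gravity
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay ** 2 + az ** 2))
    # Tilt-compensate the magnetometer before taking the heading
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    heading = math.atan2(-my2, mx2)
    return tuple(math.degrees(a) for a in (heading, roll, pitch))
```

With the sensor flat and level (gravity on z, field on x), all three angles come out as zero, which matches the BNO055’s convention of a neutral starting orientation.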

For now, I have stuck with the BNO055. I found code for Peripheral and Central devices for the Arduino Nano 33 BLE on GitHub and modified it to get Euler angle values from the BNO055.

CODE FOR PERIPHERAL DEVICE:

CODE FOR CENTRAL DEVICE:

The result of these sketches is that x, y, z data from the BNO055 connected to the Peripheral Arduino is sent via BLE to the Central Arduino, which is connected to the computer and prints the plain numerical x, y, z Euler angle values in the serial monitor.
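On the computer side, those printed values eventually have to be read back into a program. A minimal Python sketch of the parsing step, assuming a hypothetical comma-separated line format – the real sketch’s print format may differ:

```python
def parse_euler_line(line):
    """Parse one line printed by the Central Arduino to the serial
    monitor into an (x, y, z) tuple of floats. Assumes a hypothetical
    format like '359.44,-1.25,3.06'. Returns None for partial or
    garbled lines, which serial buffers routinely produce."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        x, y, z = (float(p) for p in parts)
    except ValueError:
        return None
    return x, y, z
```

Dropping malformed lines instead of raising keeps a live data stream running even when a read lands mid-line.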

Getting the numerical values into the serial monitor in a specific format and at a specific speed will allow me to work with them further in Max for Live. I have created a Max device which is mappable to anything in Ableton Live, but with the specific focus that this device will control a Surround Panner connected to the octaphonic ring.

Technical parts of the process of making the audio paper

The audio paper about The Chronic Illness and The Dungeon Of Polymorphous Pan consists of three main parts, made in different ways and at different times. The most interesting and humorous part was recorded during the event itself, which happened on 1st December 2023. I interviewed random attendees and some performers about the Polymorphous Pan. All performances and interviews during the event were recorded with a Zoom H6.

The introduction and the interview with the curator, Neo Fung, were recorded in my room, upstairs from The Dungeon Of Polymorphous Pan (I am not sure if I mentioned that I have been living in the building since 2016), on a Tonor TC20 condenser microphone.

I recorded enough material to create a podcast lasting 25–30 minutes. I had to think hard about which three of the seven recorded questions I would use for the interview. Although I wanted to keep the pace and form of the interview flowing as naturally as possible, cutting out a lot of stuttering certainly made room for more information. I will definitely make an extended version over the Christmas break. I ended up modulating my own voice with a downward pitch shift and a flanger on several occasions in order to emulate the voice of a mutant – mutation is, after all, an important part of the topic discussed in the interview.

Mixing layout in Ableton Live
Time-lapse of the LUFS metering

LUFS metering was a tedious but helpful process. It pushed me to get the levels right, and the whole piece sounds more cohesive than before mixing. I kept the integrated loudness target under -23 LUFS, which the free version of YOULEAN Loudness Meter 2 allowed me to monitor.

The Final Audio Paper

Thoughts on ‘Ways of Hearing’ by Damon Krukowski

The podcast series Ways of Hearing explores how our perceptions of sound and recording were shaped in the analogue era, then in the digital era, and in the switch between them. The first episode of the podcast talks about how our perception has changed from the point of view of time.

After listening to the podcast, I realised a few differences between the two eras. Analogue time feels more present (‘real-time’ recording), while digital time provides a certain realm of timelessness (the easier possibility of endless back-and-forth editing). Recording in the analogue era often required real musical skill and far more precision in executing it: recording had to be done precisely in order to save material (tape) and therefore money. In the digital world the same skill is still appreciated, but it doesn’t seem as essential, since on computers we can edit much more easily than by cutting tape or re-recording whole takes. At the end of the day it comes down to preference, and there are people who these days prefer to edit as well as people who record whole takes precisely. However – and this was another important point in the podcast – in the analogue era this wasn’t an option.

Coming back to analogue-present versus digital-timeless: digital technology also entered music composition. Drum machines put music more firmly onto the grid, together with evolving but constant repetition. A lot of popular electronic music endeavours to reach some sort of timelessness – being able to tap in at any moment. This extends to the ways we nowadays reproduce and stream sound.

Of course, both eras have pros and cons. Digital technology made sound, and media in general, more affordable and attainable for almost everyone, but according to Damon Krukowski, this came at a cost:

“We give up on the opportunity to experience time together – in the same instant – through our media.”

This also extends to today’s common ways of communicating via texting and social media. In the analogue era, a conversation often had a start and an end, and more often encompassed the goal of its content. These days we have the option to step out of a text conversation at any time and come back to it later – but by then the momentum of the conversation, or one participant’s need to communicate with the other, might be long past.

Connecting BNO055 sensor with Arduino and sending data to the computer via Bluetooth

The other week, I progressed with connecting the Arduino Nano to the BNO055 via the Bluetooth module HC-05. As described in a previous post, the BNO055 is an accelerometer, gyroscope and magnetometer in one device. I intend to implement the BNO055 in the glove and use it to control spatial panning – essentially sending the sound to a particular spot by pointing in that direction inside the octaphonic ring.

I used the help of ChatGPT to generate the code for the Arduino Nano, which wirelessly sends the raw position data of the three axes, x, y and z, to the computer.

The video below shows the Serial Monitor of the Arduino IDE receiving raw data in the form of numbers from the three axes: x, y and z. The cable here is used only to power the Arduino; it will, of course, be replaced by battery power.

The next stage will be creating a Max device so I can take the x, y and z data and translate it into MIDI, which can then be mapped to the Max for Live device called Surround Panner.
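The core of that translation is just a rescaling. As a sketch of what the Max device will need to do – written here in Python for illustration, since the real patch does this with Max objects – a 0–360° heading from the BNO055 maps onto the 0–127 MIDI range like this:

```python
def heading_to_midi(heading_degrees):
    """Map a BNO055 heading in degrees onto a 7-bit MIDI value (0-127),
    so that pointing direction can drive a surround-panner angle.
    Headings outside 0-360 are wrapped back into one rotation first."""
    wrapped = heading_degrees % 360.0   # e.g. 365 deg -> 5 deg
    return round(wrapped * 127 / 360)
```

The obvious cost is resolution: 360 degrees squeezed into 128 steps gives roughly 2.8° per MIDI step, which in practice is fine for pointing gestures inside the ring.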

Creating a Calibration device for sensors in Max and implementing it into Ableton Live

In the previous post I described a problem I encountered: a flex sensor not covering the whole MIDI scale of 0–127, and having its initial point somewhere in the middle of the scale. My idea is that flex sensors will be carefully positioned on the fingers of the gloves as well as at the bending points of the suit (under the knee, in the crook of an arm, and potentially other bending parts of the body). I want the flex sensor to control the entire MIDI scale from 0 to 127 and to be mappable to any parameter in Ableton in cooperation with the Arduino Max for Live device.

Non-calibrated flex sensor mapped to the Dry/Wet parameter of a reverb via the Arduino device from Max for Live’s Connection Kit

In the video above, you can see that the initial point of the flex sensor sits at around 40% (approx. 43 on the MIDI scale) of the Dry/Wet parameter and reaches around 80% (approx. 86 on the MIDI scale). I will set the same input values in the calibrator to demonstrate its function in the video below. This time I mapped the output to the decay of the reverb.

As you can see, once the value sent by the sensor reaches the threshold set on the MIDI input (42), it triggers the output mapped to the reverb’s decay and controls its full scale until it reaches the highest possible value of the sensor (86).

Calibrated flex sensor ‘attached’ on the elbow

This calibration device has the potential to calibrate any other sensor sending unstable values, stabilising them into the desired MIDI parameters. It can, for example, also be used vice versa, when the input MIDI information sends the full 0–127 scale and the output has a particular threshold and limit.

The core of the device is a zmap object, which maps the input range of values to the output range, together with a map button patch. The schematic for the Max patch is below.

The calibration device will eventually be extended to accommodate multiple parameters (for example, controlling five different flex sensors attached to the fingers of the hand) with map buttons added to its inputs.
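The mapping that zmap performs can be written out explicitly. A Python sketch of the same scale-and-clamp logic, using the thresholds from the videos above (42–86 in, 0–127 out) – the clamping outside the input range is what distinguishes zmap from a plain linear scale:

```python
def zmap(value, in_low, in_high, out_low, out_high):
    """Mimic Max's zmap object: linearly rescale value from the input
    range [in_low, in_high] to [out_low, out_high], clamping any value
    that falls outside the input range to its edges."""
    value = max(min(value, in_high), in_low)   # clamp to input range
    span = in_high - in_low
    return out_low + (value - in_low) * (out_high - out_low) / span
```

Swapping the two ranges gives the “vice versa” use mentioned above: a full 0–127 input is compressed into a thresholded, limited output range.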

Honourable mention for the film ‘Indochina Jungle’

Festival de Cinema de Girona 2023 awarded ‘Indochina Jungle’ an Honourable Mention for its creative approach to animation as a documentary film medium.

Film by Lucie Trinephi and Piotr Bockowski aka Fung Neo

Sound by Vit Tzar Trojanovsky

Concept, drawings, text and voice by Lucie Trinephi

Editing by Piotr Bockowski

Commissioned by Chronic Illness XIX and presented by Morbid Books and Girona Film Festival

‘A Quiet Place’ – Organising the sound

I have separated the sound design for the film into three main groups – foley, effects and atmospheres.

Foley was recorded on a single AKG C414 microphone. Unfortunately, I encountered a lot of background noise, but I removed it using the iZotope RX 7 Spectral De-noise plugin. The atmospheric drones are layered from recordings of various electromagnetic waves – for example from the interior of an Overground train, and other random sources captured with the SOMA Ether – run through a reverb with a long decay. The horns are a preset of the virtual instrument Analog from Ableton Live 11.

You can see the arrangement of all tracks in Ableton Live into three groups below:

Most of the scenes in the clip take place inside a building, possibly a former grocery store. To create the sense of a large empty store, I applied a small-sized reverb with a very short decay to all the foley:

For one of the atmospheres, which mimicked the sound of a refrigerator, I used a slightly larger reverb to create some distinction in the perception of the space:

I originally experimented with creating the effect of sensing physical movement – especially in the scenes with very fast tip-toe running – using a phaser and phase shifting, but the desired effect didn’t work. What did work to achieve a similar effect was simply automating hard panning to follow the direction of the runner from left to right.
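For reference, the same left-to-right movement could also be expressed with a standard equal-power pan law rather than hard panning. A generic Python sketch of the idea – Ableton’s internal pan law may differ:

```python
import math

def equal_power_pan(pos):
    """Equal-power pan law: pos runs from -1.0 (hard left) to 1.0
    (hard right). Returns (left_gain, right_gain). Using cos/sin over
    a quarter circle keeps the summed power (L^2 + R^2) constant, so
    the runner doesn't get quieter in the middle of the stereo field."""
    angle = (pos + 1) * math.pi / 4   # maps -1..1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)
```

Automating `pos` from -1 to 1 across the shot reproduces the hard-pan sweep, but with a smooth centre instead of an abrupt jump.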

Mixing was done in the composition studio at London College of Communication.

Thoughts on ‘The Walkman Effect’ by Shuhei Hosokawa

When the Walkman emerged in 1980, it started a revolution in how we consume music and other sonic media. There were, of course, negative and fearful opinions predicting that the Walkman would make people psychotic and disconnected from their surroundings. There have always been fears and uncertainties attached to new inventions throughout history, and not only in the media realm. For example, the printing press, invented by Johannes Gutenberg around 1440, was considered by certain groups a threat to public morality, because the sudden access to and democratisation of knowledge that books provided to common people could allegedly cause chaos in what we are taught. Indeed, at the time it accelerated the questioning of the status quo, and the printing press played a significant role in enabling the Christian Reformation in Europe in the 16th century. Nowadays, we encounter similar discussions regarding Virtual Reality headsets, with fears of people becoming utterly detached from reality. Did the Walkman become comparatively influential politically? It may have become another expression, or by-product, of individualism.

When we discussed in class how the invention of portable audio devices (we can extend the Walkman to the Discman, MP3 players, iPods and, these days, smartphones) influences our interactions with the environment and our reasons for using them, people described similar things. The Walkman became a predominantly urban audio device, and many people use it to conceal or shield themselves from the surrounding chaos of the city environment.

Portable audio devices also change our imagination as induced by music (or other sonic art forms). Before, it was exclusively tied to a place – produced live at a gig, or reproduced at home by a record or a radio station. The Walkman suddenly opened the door to ‘scoring one’s life’ on a different level. New sceneries (urban and natural alike), supported by our choice of favourite music, suddenly provided new scales of imagination and possibly even inspiration.

It also changed the way we walk in the environment. I found myself extending my journeys on streets to wherever I was heading on many occasions so I could listen to my favourite music for longer. Occasionally, it became an unconscious ritual reminding me that ‘the journey is the goal’.

All this supports what the article says about the Walkman providing a certain autonomy (Hosokawa, 1984: 166). Having the option to opt in and out of whatever is happening around you is undoubtedly an advantage, but there are also negatives that come with it. I certainly haven’t encountered anybody losing their mind or having psychotic attacks from prolonged headphone-wearing. However, hearing damage from quite a young age is real – reduced sensitivity to specific frequency ranges contributes to developing severe and long-term tinnitus (both based on my own experience).

Bibliography:

Hosokawa, S. (1984) ‘The walkman effect,’ Popular Music, 4, pp. 165–180. https://doi.org/10.1017/s0261143000006218.

Recording Foley for the scene from the film ‘A Quiet Place’

Here are a few photos and videos from the first foley recording session, which happened on 11th November 2023. During the recording I encountered several problems, which I more or less resolved. The main problem was that the monitoring room wasn’t available, so all the recording happened in the actual live room. I used my favourite condenser microphone, the AKG C414, and the Focusrite 2i4 audio interface.

Ropes used for recording of the dry grass and leaves
Recording of pill bottles and bottle of water
Recording of paper posters in the wind

The fan of the laptop was quite loud, so I had to improvise by building a sound barrier, as you can see below.

I have recorded foley for 2 minutes and 30 seconds. The second session is planned for 13th November. Some of the clips contain quite harsh noise and I am not entirely sure where it comes from. I will either re-record them or use a noise-reduction plug-in from iZotope.

Tom Fisher

Tom Fisher, aka Action Pyramyd, makes sample-based music from tiny sounds in his field recordings. He considers field recording and composition a mode of thinking and experiencing the world. His experimental audio recordings have a very ecological aspect, often based on recordings of water plants. How does he record plants? Firstly, he listens to the environment and examines the scale and overlaps of sound types. Then he uses a hydrophone to capture the sound of photosynthesis.

I found it interesting how Tom Fisher can contribute to biological and ecological research by applying sound arts – for example, mapping the acoustic diversity of various ponds. Collecting data with a hydrophone can reveal a lot about life in a pond in a non-invasive way: night is sonically dominated by the activity of aquatic insects, and early afternoon, just after the solar zenith, by aquatic plants, due to the high amounts of energy received from the sun (this is when he can listen to photosynthesis actually happening in the plants).

In relation to hydrophones, Tom Fisher pointed out that our ears cannot function underwater to pick up the same frequencies of sound – so what are we looking to recreate? Even conventional microphones lack the capacity to depict a soundscape in the same manner as our ears perceive it in the situation. It is also all temporal, a construct; these moments aren’t happening simultaneously everywhere. He treats recorded material with sensitivity and reverence. He recreates the ‘realistic’ illusion of an environment or sonic situation while acknowledging that editing and his creative decisions are part of the process; he still tries to create an engaging narrative for the listener, raise awareness (about something undervalued, like a pond, for example) and break down hierarchies.

Circuit-bending different strobe lights, enabling them to be triggered by the signal from a CV gate

The idea was to have an analogue Eurorack kick module triggering the strobe light so that their rhythm patterns match on the go during a live performance. Firstly, I researched how to achieve such an effect digitally, but in the end I chose an analogue approach, which appeared to be the simplest, most straightforward and cheapest.

I used a relay to trigger the light with the gate signal. Most relays require at least 5V to switch, but I found a relay which switches with 3V. The trigger is set as ‘NO’ (normally open), so it closes the circuit only momentarily, in a similar way to pressing a button.

Battery-powered strobe light triggered by a gate from the Arturia BeatStep Pro. The Arduino here is used only as a 3.3V power supply.
The Arturia BeatStep Pro gate triggers the kick module together with the strobe light via a signal splitter
Sketch of the schematic
Application of the relay to another strobe light powered by 240V. The discharge lamp peaks at 400V.
Application of the gate-triggered strobe light in performance – I positioned the strobe within a coffin chamber of the crypt under the church at St Pancras, where I performed live on 29th October 2023.

Thoughts on the podcast ‘Sounding History – Data in the Anthropocene: Carbon Footprint & the Environmental Endgame’

In their podcast, music historians Chris Smith and Tom Irvine bring their points of view on the environmental impact of digital media and streaming services in comparison with other musical media of the ‘post-consumption era’ as well as the ‘pre-consumption era’. Those eras were marked by the transition from purely acoustic music to electronically recorded and reproduced music on media like shellac records, vinyl, CDs, etc.

“Every system of inscription is tied to a system of extraction. Every discourse network is a resource network.” (Devine, 2019)

Chris and Tom’s main conclusion, based on Kyle Devine’s work, is that every type of recording medium that emerged in capitalist consumer society is somehow linked to the extraction of natural resources, and contributes to environmental problems like many other human activities. An interesting point is that current streaming services are apparently no less damaging than, for example, CDs, vinyl or shellac records in the past: behind every streaming service stand huge servers consuming enormous amounts of electricity so they can work continuously.

They also came up with what is, in my opinion, a very interesting historical analogy, comparing this invisible impact or toll to the invisibility of the sugar-cane business’s prosperity being inherently rooted in, and dependent on, the trans-Atlantic slave trade. There was apparent progress in British cities like Bristol or Southampton in the 18th century, but not many people could actually see that this ‘progress’ was built on the exploitation and enslavement of a large part of humanity. Similarly, people cannot see the invisible impact of the many data servers on the carbon footprint and climate change.

Another interesting topic Chris and Tom touched on is AI-generated music, particularly AI jazz. Is AI jazz problematic? They came to the conclusion that these algorithms work as ‘risk reducers’ – in this case reducing the risk of a ‘wrong’ musical decision – and thus push the composition towards something less creative, basically getting stuck with the same-sounding composition over and over again. The human element of jazz’s improvisational basis encompasses remembering what other people have done and picking up what works for the musician, not calculating which note to play at a certain time, which is what the machine does – quickly computing musical decisions.

Personally, I see problems in other areas of AI music too. I do not fear musicians being replaced. AI can be utilised as part of performances and compositions, but fully generated AI music might misplace the historical background of music genres, or even erase those backgrounds; this could extend to the whole ‘genre’ or approach of AI music. An AI algorithm doesn’t know where, for example, jazz comes from, and doesn’t know its historical background.

https://www.soundinghistorypodcast.com/episodes/episode-8

Devine, K. (2019). Decomposed : the political ecology of music. Cambridge, Ma: The Mit Press.

‘A Quiet Place’ – sound design research

Directed by: John Krasinski

Composer: Marco Beltrami

Sound Design: Brandon Jones

Supervising Sound Editors: Erik Aadahl, Ethan Van der Ryn

A big part of the narration through sound in ‘A Quiet Place’ consists of the foley recording and the score, both very specifically executed, since the movie is based on a groundbreaking lack of sound – which, of course, doesn’t mean the complete absence of it.

The sound designers who created the sound design for the film are Erik Aadahl and Ethan Van der Ryn. They mentioned that the sound was actually written into the script and played a very significant role in the storytelling, which doesn’t seem to be the usual practice.

Aadahl and Van der Ryn admitted the difficulty of working on a movie with this much absolute silence. With barely any spoken dialogue or loud sound effects, “the quiet becomes loud and the loud becomes ear-piercing” (Alkhulaifi, 2022). They mention that a very interesting part of the post-production was really scaling back and being more minimalistic with the use of sound.

The lack of sound in the movie tells the story in a way where any loud sound inherently means death. This is John Krasinski’s main premise in building the constant underlying tension. Classical horror movies are often based on ‘jump scares’, which can be criticised for not being genuinely scary but only startling: the tension of the scene dissolves quickly, since the body’s physiological response doesn’t recognise a jump scare as an actual threat. You will not find many jump scares in ‘A Quiet Place’. Any louder sound, whether happening suddenly or brought up slowly with building tension, creates a different physiological response, which can prolong the feeling of thrill or even fear, because the viewer sympathises with the characters, knowing that the mere existence of sound means danger.

The storytelling with sound in ‘A Quiet Place’ doesn’t end with building thrill and tension through silence. The focus on quiet sounds plays a big role in the narrative, too – for example, checking the heartbeat of the unborn child with a stethoscope, the romantic moment of sharing Walkman earphones while the mother and father of the family dance together, or the masking of quieter sounds by louder ones when father and son talk freely while hidden behind the waterfall.

One of the main characters, the daughter, is deaf and played by the deaf actress Millicent Simmonds. The film has several scenes that are nearly quiet or totally soundless to highlight her point of view. ‘A Quiet Place’ has been praised for its representation of the Deaf community, its use of American Sign Language (ASL), and for being one of the first films to showcase the cochlear implant (Mendoza, 2021). In spite of these positive aspects, ‘A Quiet Place’ overcomes ableism in the film industry only to a certain extent and has also received critiques. I will talk about those more in the reflective writing.

https://www.vox.com/2018/5/26/17396174/a-quiet-place-sound-design-loud

https://www.motionpictures.org/2018/04/the-a-quiet-place-sound-design-that-makes-audiences-afraid-of-their-own-noise/

https://digitalcommons.butler.edu/cgi/viewcontent.cgi?article=1419&context=the-mall

Mendoza, A. (2021). How a Quiet Place is Harmful to Those in Quiet Worlds. The Mall, [online] 5(1). Available at: https://digitalcommons.butler.edu/the-mall/vol5/iss1/11 [Accessed 22 Nov. 2023].

sites.northwestern.edu. (n.d.). Analysis of a Sound Design Piece — A Quiet Place 2018 – Mariam Alkhulaifi. [online] Available at: https://sites.northwestern.edu/mariamalkhulaifi/analysis-of-a-sound-design-piece-a-quiet-place-2018/

Foley plan for the clip from the movie ‘A Quiet Place’ – scene breakdown / script extraction

This is the plan for brainstorming initial ideas about the sound design and eventually making a more systematic plan for recording the foley. I watched the clip with no sound, thinking about possibilities for the sound design based only on my brief knowledge of the film, without having seen it before or done any research. I broke it down scene by scene, describing the events and actions, also considering the movement of the camera. The next step will be to compare my initial thoughts with the vision of the director and see how close to (or far from) his ideas I was. This plan will be continuously updated with sound design, production and recording ideas.

0:00 – 0:05 (Camera static)
– Wind blowing into the dry grass

0:05 – 0:10 (Camera static)
– Leaves dancing on the quiet empty road as the wind blows

0:10 – 0:15 (Camera static)
– Quiet wind, paper posters on the wall (on the left) quietly rustling

0:15 – 0:19 (Camera static)
– Paper posters still rustling quietly, but louder than in the previous scene

0:19 – 0:23 (Camera static)
– Dark empty store ambience – silence

0:23 – 0:28 (Camera static)
– Dark empty store ambience – silence interrupted by eerie crackling noises in the distance

0:28 – 0:34 (Camera static)
– Kid runs quietly, tip-toeing from the left to the right in the store’s ambience [use of the phaser within the quiet ambience to create a sense of movement by someone’s quiet physical presence]

0:34 – 0:41 (Camera static)
– Teenager walks slowly tip-toeing and looking around from the left to the right in the store’s ambience [use of the phaser…]

0:41 – 1:01 (Camera is slowly moving from the left to the right)
– Slow tip-toeing from the left to the right very close [use of the phaser…]
– Quiet run from behind to the front [use of the phaser…]
– Slow tip-toeing from the left to the right and then approaching closer from the far in the store [use of the phaser…]

1:01 – 1:11 (Camera is slowly moving from the left to the right)
– Quiet slow steps from the left to the right [use of the binaural microphone to capture ‘sensation’ of the person wearing hearing aid; use of the phaser…]

1:12 – 1:13 (Camera is slowly moving from the left to the right)
– Kid runs very fast but quietly from the left to the right [use of the phaser…]

1:13 – 1:18 (Camera is slowly moving from the left to the right)
– Kid runs moderately fast but quietly from the left to the right [use of the phaser…]

1:18 – 1:33 (Camera is slowly moving from the bottom to the top)
– Adult woman is quietly approaching from the back of the store and turning to her right and searching through shelves for the medicine.
Kid is sitting on the floor and quietly sobbing.

1:34 – 1:37 (Camera static)
– Kid quietly sobbing on the floor
Ambience of the store

1:37 – 02:10 (Camera is slowly moving from the top to the right bottom)
– Handling plastic medicine bottles
– Taking one and bringing it to the child, walking away from the shelf to the back
– Administrating a pill to the child
– Opening the bottle of water

02:10 – 02:22 (Camera static)
– Child quietly washing the pill down with water
– Closing the bottle of water
– Person quietly approaching from the right behind the shelf, quietly observing the woman and child
– Woman is turning around and using the sign language to communicate

02:22 – 02:31 (Camera goes slightly down)
– Person behind the shelf slightly in the back walks away to the right

02:31 – 02:55 (Camera slowly zooming in)
– Little boy is sitting on the floor, drawing something in the middle of the corridor
– Teenage girl slowly approaches him and kneels down to see the drawing
– Girl is asking boy in sign language about the drawing

02:55 – 02:56 (Camera going up following the hand)
– Boy is expressing a rocket in sign language

02:57 – 03:00 (Camera static)
– Girl is responding to boy in the sign language

03:00 – 03:06 (Camera static)
– Boy is responding in sign language

03:06 – 03:12 (Camera static)
– Girl is quietly sitting being concerned and not responding

03:12 – 03:14 (Camera slightly moving to the right)
– Boy is standing up and walking away to the right

03:14 – 03:18 (Camera slightly zooming in)
– Boy walks away to the left, out of sight; girl observes, sitting in the same spot

03:18 – 03:32 (Camera slowly go to the right bottom towards the drawing on the floor)
– Boy quickly turns left behind the shelf in the back
– Girl stands up and, tip-toeing, follows the boy to the back

03:33 – 03:37 (Camera slowly going up)
– Boy balancing on a plastic box, trying to reach something on the shelf

03:37 – 03:41 (Camera slowly going to the right)
– Boy awkwardly trying to reach a space shuttle toy on the shelf

03:41 – 03:42 (Camera slowly going up to the right)
– Boy pulling the shuttle toy from the shelf; it is about to fall on the floor

03:42 – 03:44 (Camera moves shortly but swiftly to the right)
– Girl swiftly but quietly runs towards the boy and catches the falling shuttle toy, landing on the floor on her knees
– Person carrying a basket at the very back of the corridor

03:44 – 03:47 (Camera slowly going up)
– Girl is nervously gasping but staying quiet

03:48 – 03:52 (Camera slowly going up)
– Girl is nervously gasping but staying quiet looking up towards the boy and then to the back of the corridor


Making an analog Arduino MIDI controller and testing the flex sensor

I have assembled the controller according to instructions in the video below.

It is a very simple four-knob analog controller which can be connected to Ableton Live via the Connection Kit from Max for Live.

The purpose of this exercise is to find out how a flex sensor behaves when it replaces a potentiometer. A flex sensor, or bend sensor, measures the amount of deflection or bending. Usually the sensor is stuck to a surface, and the resistance of the sensing element varies as the surface bends. A flex sensor therefore behaves in a similar way to a potentiometer: by changing its resistance it changes the amount of electric current in the circuit, which in turn changes the parameters mapped within the Connection Kit.
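To illustrate the principle, here is a minimal Python sketch (not my actual Arduino code) of the voltage-divider maths behind reading a flex sensor on an analog pin. The resistor values are hypothetical, taken from typical datasheet figures rather than measured from my circuit:

```python
def divider_voltage(r_flex, r_fixed=10_000, vcc=5.0):
    """Voltage at the midpoint of a divider: flex sensor on the top leg,
    fixed resistor to ground, read by the analog pin."""
    return vcc * r_fixed / (r_flex + r_fixed)

def to_adc(volts, vcc=5.0, steps=1023):
    """Convert the voltage to the 10-bit value analogRead() would report."""
    return round(volts / vcc * steps)

# Typical flex-sensor datasheets quote roughly 25 kOhm flat and ~100 kOhm
# fully bent, so the ADC reading drops as the sensor bends.
print(to_adc(divider_voltage(25_000)))   # 292
print(to_adc(divider_voltage(100_000)))  # 93
```

Because the resistance never sweeps from zero to infinity the way a potentiometer wiper sweeps end to end, the ADC only ever covers part of its range, which matches the limited parameter travel I observed.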

Flex sensor
Testing flex sensor replacing potentiometer

The flex sensor does not, however, react in the same way as the potentiometer. I mapped it to the Dry/Wet parameter of a reverb: it only starts at about 30% and reaches up to 60% when fully bent. The next step will be to figure out how to calibrate the flex sensor so it sweeps smoothly through the full 0–100% range of the Dry/Wet parameter.
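One likely route for that calibration is simply rescaling the observed range in software before the value reaches the mapped parameter. A minimal Python sketch of the idea, using the 30–60% figures observed above (the function name and clamping choice are mine):

```python
def calibrate(value, lo=0.30, hi=0.60):
    """Rescale a reading that only ever spans lo..hi back to the full
    0..1 range, clamping anything outside the observed span."""
    scaled = (value - lo) / (hi - lo)
    return min(1.0, max(0.0, scaled))

print(calibrate(0.30))  # 0.0 – sensor flat
print(calibrate(0.60))  # 1.0 – sensor fully bent
```

The same two-point linear rescale could live in the Arduino sketch or in a Max for Live patch; the endpoints would be measured once per session, since flex sensors drift with temperature and mounting.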

Sound for the clip from the movie ‘A Quiet Place’

A family struggles for survival in a world where most humans have been killed by blind but noise-sensitive creatures. They are forced to communicate in sign language to keep the creatures at bay.

In a devastated Earth overrun by invincible predators of a possible extraterrestrial origin, the Abbotts are struggling to survive in the desolate urban jungle of New York City: a death trap defined by a new era of utter silence. Indeed, as noise attracts this new type of invader, even the slightest of sounds can be deadly. However, even though it’s already been twelve months since the powerful monsters’ first sightings, the resilient Abbott family still stands strong. In this muted dystopia, learning the rules of survival is crucial. And now, more than ever, the Abbotts must not make a sound. —Nick Riganas (IMDb)

The clip from the film with no sound

I have chosen to make sound for this film for several reasons: 1) I like sci-fi movies with a haunting, eerie atmosphere. 2) Considering the narration of the film, there is a lot of potential for creating tension with silence and with the creative use of sound effects that play with psychoacoustics, without necessarily having loud sounds or any sound at all – for example the phaser, which can create a sense of presence and movement without actually being heard.

Foley Recording

Foley recording for the scene from ‘We need to talk about Kevin’ with a focus on different types of sounds for footsteps

This Foley recording was created by Baria Qureshi, Dani Dasero and Vit Trojanovsky.

For the recording, we used an AKG C414 large-diaphragm condenser microphone. At first we used two of them to capture a possible stereo image, but then decided that for the purpose of this exercise there was no need for that and used only one. All of us tried every role during the recording – performing as the foley artist, and monitoring and recording in the studio.

We performed the foley sounds in real time while watching the action on the screen. It didn’t always match 100%, so we adjusted the recorded clips on the grid afterwards.

Placement of the microphone AKG C414

Recording outdoor environments

Parabolic Microphone

The parabolic microphone is highly directional and suitable for capturing sounds from very specific directions. A dish-shaped reflector focuses sound waves onto a small microphone located at the focal point of the dish. They are known for their ability to capture sounds from a relatively long distance.

Sound of the busy street – parabolic microphone
Sound of the plane – parabolic microphone
Sound of the plane 2 – parabolic microphone
Truck on the crossroad – parabolic microphone
Cranes of the construction site in Elephant and Castle
Shotgun Microphone

A shotgun microphone is directional, though not as much as a parabolic mic. It is good for capturing sound from a specific direction whilst also picking up some of the ambience.

Swing in the playground – shotgun microphone
Wheels of the luggage – shotgun microphone
Van approaching on the street – shotgun microphone
Aquarian hydrophone

A hydrophone is a contact microphone specialised for recording underwater. With a special cup, it can also be used as a classic contact microphone.

Electric transformer on the street – hydrophone/contact microphone
Fountain on the street

Moushumi Bhowmik 

Moushumi Bhowmik is a singer, writer and practice-led researcher based in Kolkata, India. She collects sounds and recordings from outside the periphery of our listening orbit – sounds unheard, left behind and hard to listen to – featuring questions of borders and displacement. She is drawn to sounds from South Asia, such as Nepal and Bengal; by ‘Bengal’ she refers to both Indian West Bengal and Bangladesh. In her research and practice of collecting songs she highlights the importance of similarity, for example between songs from Nepal and Bangladesh. Even though she could not understand Nepalese, she looked for the familiar in them, thereby emphasising the search for what connects rather than what divides and differs, whilst also acknowledging one’s own point of view and perspective.

She participated in the exhibition A Slightly Curving Place at Haus der Kulturen der Welt in Berlin in 2020, exploring the practice of acoustic archaeology. It was about recording in ‘pre-recording time’, before modern recording machines arrived. The practice was based on visiting archaeological sites and trying to listen to them. Uncovering layers of soil at the sites is an analogy for uncovering layers of sound in the record through constant listening. The technique is, as Moushumi points out, based on speculation and imagination. I very much agree with that, because I struggled to understand the concept (if I grasped its actual meaning correctly at all).

Moushumi did a workshop on the sound of memory. She said that people often bring up memories from childhood based on sounds. She recorded the story of Apple, a girl working in the gallery, originally from the Philippines, who manages the kitchen. Apple mentioned how the sounds of the kitchen and cooking instantly remind her of her parents, and how, when she sees or hears planes, she is reminded that she cannot go back because they died. This story brings thoughts back to the idea of displacement, and it instantly reminded me of a piece I was delighted to work on last year. I was scoring a short film based on the childhood memories of my friend Lucie Trinephi, who as a five-year-old escaped the war in Vietnam with her parents. Lucie had a flashback triggered by the sound of a helicopter many decades later – an actual sound memory, which opened up a whole chain of other visual as well as sonic memories.

Even if I could not entirely understand all the concepts Moushumi was talking about, I really appreciated her lecture, because it was delivered in a very poetic way and because of her determination to amplify the sounds and voices of people who often come from a place of struggle – as did, for example, the story of my friend Lucie.

Industrial Violin (work in progress)

The other week I got myself a very cheap beginner’s violin. My friend asked me, ‘Oh, so are you going to learn how to play the violin now?’ I answered, ‘No, I will destroy it!’ Of course, I was joking – that would be a horrendous thing to do! Although some aspects of what I am going to do with it could be considered destruction, I will actually be reframing the original instrument and using the beautiful resonance of the violin’s wooden body for something else.

The project is inspired by the Instagram feed of musician Denis Davydov (@davdenmusic): a classical violin with a contact microphone attached inside the body. I will be attaching various metallic objects such as springs, a kalimba or a reassembled music box to the body of the violin. Some objects haven’t been found or decided on yet. It is a fluid work in progress, with many variables coming in and out whilst crafting this instrument.

The industrial violin will eventually become part of my live performance and sound design practice.

Preparing the audio paper – The Chronic Illness of Mysterious Origin and Polymorphous Pan

I have decided to create an audio paper about the underground art event called The Chronic Illness (previously The Chronic Illness of Mysterious Origin) and its venue, ‘The Dungeon of Polymorphous Pan’. It will be an interview with the curator Piotr Bockowski, aka Fung Neo, about the event and the venue in the context of his research into fungi, post-internet performance art and squatting.

Some possible questions to be asked:

What is The Chronic Illness of Mysterious Origin, and when and why did it start?

Who or what is Polymorphous Pan?

Where does The Dungeon of Polymorphous Pan sit in the wider context of squatting in London?

https://research.gold.ac.uk/id/eprint/31708/

https://libsearch.arts.ac.uk/cgi-bin/koha/opac-detail.pl?biblionumber=1450625&shelfbrowse_itemnumber=1774022#shelfbrowser

https://libsearch.arts.ac.uk/cgi-bin/koha/opac-detail.pl?biblionumber=1535659&query_desc=kw%2Cwrdl%3A%20squatting%20in%20london

Wireless Data Streaming Using BNO055 and Bluetooth and Estimating Orientation Using Sensor Fusion

The first way of translating movement into sound via motion capture which I decided to experiment with uses the Adafruit BNO055 absolute orientation sensor. Its sensor fusion algorithms blend accelerometer, magnetometer and gyroscope data into a stable three-axis orientation output, which is processed in Arduino and sent to Ableton Live via Bluetooth.

Bosch Adafruit BNO055

During programming and calibration I encountered several problems. Programming and calibration are done in MATLAB, which at first didn’t allow me to upload the code onto the Arduino so it could work standalone. The Bluetooth module I ordered was not supported, so I ordered a different one; in the meantime, programming and calibration of the Arduino were executed via a USB cable. Programming and calibration were somewhat successful, and I managed to connect the device to Ableton Live via the Max for Live plugin Arduino, although after I disconnected the USB cable I had to run the whole code again and keep the device connected to the laptop. Later the device stopped reacting, and after I enabled the Arduino plugin in Ableton Live, the program in MATLAB started showing an error. At this point I need to figure out how to upload the program from MATLAB onto the Arduino chip.

From the top: Bluetooth Module (unsupported type – will be replaced by HC-05), Arduino Uno, BNO055
Simple wireless motion capture data streaming device assembled
Programming and calibration of the device in MATLAB
Various positions of the device register in the Arduino plugin in Ableton Live and change the parameters of the mapped low-pass filter in quite a simple way.

Loading the code onto an Arduino chip from MATLAB appeared to be a very complicated process, so I decided to try a different route: loading the code from the original Arduino IDE. I managed to load the code and run the calibration test via a Chrome browser extension, still connected via USB. The next step will be to figure out how to connect the BNO055 and make it work with Ableton Live, either via the existing Connection Kit or by finding or creating a device in Max for Live, and to establish the connection via Bluetooth instead of the USB cable.
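Whichever route wins, the core mapping I am after stays simple: turn one fused orientation angle from the BNO055 into a controller value a DAW can use. A hypothetical Python sketch of that mapping (this is not the code running on the device, and the function name is mine):

```python
def orientation_to_cc(heading_deg):
    """Map the BNO055's fused Euler heading (0-360 degrees) onto a
    7-bit MIDI CC value (0-127), e.g. for a low-pass filter cutoff."""
    return round((heading_deg % 360.0) / 360.0 * 127)

print(orientation_to_cc(0))    # 0
print(orientation_to_cc(180))  # 64
print(orientation_to_cc(360))  # 0 – heading wraps around
```

The same shape of mapping applies to roll and pitch, each driving its own parameter, which is essentially what the low-pass filter experiment above is doing in a cruder form.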

Sound Suit – Developing the idea and exploring Motion Capture

I have been intrigued for a while by the idea of translating physical movement into sound. The idea, which isn’t new at all, still fascinates me from the point of view of reversed dance. People react to structured sound in the form of music all the time. What if we did it the other way around and composed music by dancing, turning our bodies into musical instruments?

Probably the best-known similar concept was developed by British composer and singer Imogen Heap, who in 2010 introduced her MI.MU Gloves, musical gloves which allow her to control her music performance on the move.

I am aiming to create an interactive body suit, which will include gloves too, partially inspired by Imogen Heap’s concept, but employing other parts of the body – all the limbs and possibly the neck and hips as well – and combining the use of the suit with an interactive laser installation.

I will be developing this concept in collaboration with music producer, singer, performer and kinaesthetic artist Ona Tzar. The creative input of ‘a dancer’ will be an important part of the project, in order to adapt its function to live performance.

Solo Duel by Ona Tzar

What will be used in the final piece, and how it will work, is at this point unknown, but there are several features which I would like to achieve and keep as a basis. I am aiming for the actual devices to be as discreet as possible, and for the final suit to be fashionable and to become an art piece in itself whilst maintaining high functionality. The suit will become an art piece at the intersection of sound arts, music performance, dance performance and fashion.

I have started to research various technologies and concepts which could potentially be included in the suit. At this early stage, I am exploring wireless data streaming using the BNO055 absolute orientation sensor, Bluetooth technology and Arduino; estimating orientation using sensor fusion; the HC-SR04 ultrasonic ranging module; and flex bend sensors, which were, by the way, used in Imogen Heap’s gloves as well.

Wireless Data Streaming System using Arduino, BNO055 and Bluetooth
Motion Sensing Device Controller using Ultrasonic Ranging Module HC-SR04
Flex Bend Sensor

CREATIVE SOUND PROJECTS (ELEMENT 2) – PART 6: FINAL COMPOSITION, MIX AND MASTERING 

The piece I made is a rather structured musical composition – we can call it a track – somewhere between electronica and ambient. I did not initially have a clear idea of the structure or how it would sound. The core of the composition is a melody played on chimes, the percussion is made from a few objects in my room, and everything else grew from there.


I did the sound design and structure in Ableton Live 11. I focused as much as possible on getting all the sounds and samples balanced at their source, so that mixing later would be effortless.

All tracks divided into four groups

To create more atmosphere, the effects include a hybrid reverb from Ableton Live with a long decay, a simple delay, a moderately long Valhalla reverb and further types of long reverb – one hybrid and one from an external effects processor – and finally a grain delay, which creates a layer of low end in the ambience of the outro.

Return tracks with effects

I mixed all the tracks in ProTools. Since I was happy with how the grouped tracks already sounded on their own, I approached the mixing process in a simpler way and only balanced the grouped tracks against the exported effects from the return tracks.

Mixing session in ProTools
Mixing session in ProTools

For mastering I created a separate session in ProTools and only used a maximiser to raise the overall loudness of the track.

Mastering in ProTools
Final composition

CREATIVE SOUND PROJECTS (ELEMENT 2) – PART 5: Recording the lead part on modular synthesiser 

I started building my own modular synthesiser in Eurorack format in 2020, mainly based on the open-source schematics from YuSynth, designed by French cellular and molecular biologist and synth hobbyist Yves Usson. The modules can be found on his website: https://yusynth.net/Modular/index_en.html.


The synth part of the track is made from a simple saw wave shaped by an ADSR envelope, run through YuSynth’s copy of the Moog low-pass filter and modulated by an LFO. The notes were played on the Arturia BeatStep Pro CV controller, which works perfectly with modular synthesisers for triggering the gates of various modules.
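As a rough digital analogy of that signal path, here is a short Python sketch of a saw oscillator shaped by a linear ADSR envelope. The modular patch itself is of course analogue, and every number here is illustrative rather than taken from the patch:

```python
SR = 44_100  # sample rate in Hz

def saw(freq, n):
    """Naive sawtooth oscillator: ramps from -1 to 1 once per cycle."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def adsr(n, a=0.01, d=0.1, s=0.6, r=0.2):
    """Linear ADSR envelope over n samples
    (a/d/r in seconds, sustain s as a level between 0 and 1)."""
    an, dn, rn = int(a * SR), int(d * SR), int(r * SR)
    sn = max(0, n - an - dn - rn)
    env = []
    env += [i / an for i in range(an)]                # attack:  0 -> 1
    env += [1 - (1 - s) * i / dn for i in range(dn)]  # decay:   1 -> s
    env += [s] * sn                                   # sustain: hold s
    env += [s * (1 - i / rn) for i in range(rn)]      # release: s -> 0
    return env[:n]

# One half-second 110 Hz saw note, amplitude-shaped by the envelope;
# the low-pass filter and LFO stages of the patch are not modelled here.
n = SR // 2
note = [x * e for x, e in zip(saw(110.0, n), adsr(n))]
```

In the hardware patch the envelope also drives the filter cutoff rather than just the amplitude, which is where much of the character of the Moog-style filter copy comes from.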


The lead part was recorded and then processed with Ableton Live’s grain delay and hybrid reverb. I also used my digital multi-effect module from Lexicon.

YuSynth VCO
YuSynth VCA
YuSynth’s copy of Moog Low Pass filter
Double LFO from Hampshire Electronics
Digital multi-effect Lexicon MPX100 Dual-Channel processor

CREATIVE SOUND PROJECTS (ELEMENT 2) – PART 4: Field recordings and natural reverberation in a tunnel at Parkland Walk

The other week, when I was out at work on a night outreach shift, we had a referral to visit Parkland Walk, an abandoned rail track turned into a green walk connecting Finsbury Park and Highgate. There was a bridge, and I immediately noticed a very interesting reverberation underneath it. I like the natural reverberation of various spaces, and this one in particular is very intriguing, primarily because of its size: it is a small space, but the reverb’s decay is between 3 and 4 seconds.
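For context, Sabine’s classic formula shows why a small space with hard surfaces can still have a long decay. A quick Python estimate with guessed dimensions for a brick underpass – none of these numbers are measurements of the actual tunnel:

```python
def rt60_sabine(volume_m3, surface_m2, avg_absorption):
    """Sabine's reverberation-time estimate: RT60 = 0.161 * V / (S * a),
    where a is the average absorption coefficient of the surfaces."""
    return 0.161 * volume_m3 / (surface_m2 * avg_absorption)

# Hypothetical brick underpass: ~4 m wide, 3 m high, 15 m long.
volume = 4 * 3 * 15                    # about 180 m3
surface = 2 * (4 * 15) + 2 * (3 * 15)  # floor, ceiling and two walls; open ends ignored
print(round(rt60_sabine(volume, surface, 0.03), 1))
```

With an absorption coefficient around 0.03, typical of painted brick, the estimate lands in the 4–5 second region, which is at least consistent with the surprisingly long decay I heard.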

Underpass of the bridge in Parkland Walk at 4:30 A.M.

I decided to come back on a night later the same week with a field recorder and a few of my favourite items which I like to sample (a metal wine calyx, a metal cooking pot and a metal kitchen tray) and capture their sound within the space. Some items worked better than others at exciting the reverberation; for example, the calyx was not as prominent as the pot hitting the ground. For capturing the actual reverberation of the space, the recording from my iPhone 11 sounds much better to me than the one from my Zoom H6, for some reason – at least as a matter of preference.


As the night progressed towards morning, birds started to sing and were captured in the background. I will need to come back another night, earlier – around 2 A.M. – to record the same sounds again, but without the birds.


In the end, I used the field recording of footsteps and the sound of me switching on the recording button on my iPhone in the intro of the track. I pitched the sample down by 12 cents. The sound of the iPhone nicely lingers after one of the chime hits and resembles a chime sounding with a delay. They are two samples randomly sitting next to each other, yet they work well together. From the samples recorded on the iPhone I used only one, which became the clap at the climax of the track.
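For reference, a pitch shift specified in cents corresponds to a playback-rate ratio of 2^(cents/1200), which makes clear how subtle a 12-cent shift is compared with a 12-semitone (1200-cent) one:

```python
def cents_to_ratio(cents):
    """Playback-rate ratio for a pitch shift given in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

print(round(cents_to_ratio(-12), 4))    # 0.9931 – barely lower in pitch
print(round(cents_to_ratio(-1200), 4))  # 0.5 – a full octave down
```

Ableton’s sampler splits this into a coarse transpose in semitones and a fine detune in cents, so a 12-cent drop is a gentle detune rather than an audible interval.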

Recording sounds at the site with a Zoom H6 and a candle

Sound recording from iPhone 11

CREATIVE SOUND PROJECTS (ELEMENT 2) – PART 3: RECORDING SAMPLES FOR RISERS 

Recording metal tray in stereo

I have always been captivated by metallic and industrial sounds. This piece from a dining set in our kitchen recently became ‘an instrument’ and a standard part of my experimental live sets and compositions.

To record the sounds that come from playing this fantastic piece of metal, I used an AKG C414 condenser stereo microphone set. The sound is produced by hitting the tray with mallet sticks and ‘slicing’ its edges with a metal knife. The tray hangs down from another microphone stand so that it can rotate if needed. I was particularly intrigued by the sweeping low end and its movement within the stereo field when captured during rotation.

Recording metal tray in stereo

By applying low-pass filters, pitching the tone of the sample down by as much as 24 cents and reversing samples, I achieved a variety of sounds used as risers and atmospheres in the background of the composition.


The AKG C414 mics possess fantastic clarity, but despite that, I would sometimes enhance the metallic feel by applying a small amount of very short reverb.

CREATIVE SOUND PROJECTS (ELEMENT 2) – PART 2: RECORDING SAMPLES FOR PERCUSSION AND CREATING DRUM KIT IN ABLETON LIVE

I was delighted with the results of the percussion for Creative Sound Projects – Element 1. Since I am playing with the idea of recording other tracks similarly and creating a conceptual album based on those two projects, I decided to use the same drum kit, with different processing and enriched by a few new samples.

I created quite a minimalistic drum kit from recorded samples of only a few items in my room. The kicks and toms were made by hitting empty laundry baskets with mallet sticks, and the hats were made from ceramic teapots. The cymbals come from metal trays from the kitchen, which I also used for creating risers (I will talk about those separately in another blog post).

I recorded the sound of duct tape with no idea of how I would use it; I simply enjoyed the sound and texture when the tape was stretched and peeled. I used the tape sound in the previous track for various glitchy textures throughout the composition, but at this point I am still deciding whether I will use it in this one.

I used the condenser stereo microphone sets listed below to record every single sample. I compared how they sound and chose between the various samples based on personal preference.

The AKG C414 condenser stereo microphone set generally captured more of the higher mids and frequencies, while the AKG C451 condenser stereo set was richer in the low end.

The drum kit was processed with Ableton Live EQs, Drum Buss compression and, in some places, added Valhalla reverb.

Set up for recording laundry basket for kicks and toms

CREATIVE SOUND PROJECTS (Element 2) – PART 1: Recording chimes at the playground

Once, during an afternoon walk, I discovered a set of large metal chimes at the playground of an estate between the Arsenal Stadium and Finsbury Park. I decided to come back to record them at a different time of day. I did so on 25th April at about 1 a.m., when I expected the site to be quiet and, ideally, free of people.

For the recording I used several types of microphones, compared them during the session and later decided which sound best suited my liking. The idea was to capture the sound and tones of the resonating pipes as cleanly as possible. I used one Oktava MK-012-01 condenser microphone, a Zoom H6 and two Korg contact microphones. First I recorded each pipe separately to capture single tones with each microphone. Using the condenser microphone outdoors turned out not to be the best idea due to its high sensitivity: even with a foam filter, the wind destroyed any sense of clarity with an overwhelming hum. The result from the Zoom H6 stereo microphone was slightly better; however, the surrounding city still produced a severe amount of background noise even at night (there was always someone parking a car in the street around the corner or a dog barking). The clip-on contact mics from Korg were attached to the frame of the chimes (not to the actual pipes – I tried that too, and it completely damped the resonance of the pipe into a dull sound with, to me, an unlikeable character).

After recording every chime separately, I attached the contact mics to the left and right sides of the frame to create a stereo image and played a few different melodies. Two melodies captured that night became the core element of the track.

Set of chimes on the playground

Auto-ethnography of audio-visual exhibition ‘Thin Air’

‘Thin Air’ is an immersive large-scale exhibition taking place at the East London industrial venue ‘The Beams’, featuring seven captivating audio-visual installations. Light, sound and space are mutually interlaced – examining the boundaries of these crucial subjects pulls at the visitor’s senses on a possibly introspective journey whilst wandering through the maze of dark rooms. In this essay I will dedicate my attention to the chosen installations ‘3.24’ by the duo 404.zero, ‘LINES’ by the international studio S E T U P and ‘Cleanse/Mantra (110Hz)’ by James Clar.

The first installation worth mentioning is James Clar’s ‘Cleanse/Mantra (110Hz)’, a silent laser installation in the entry corridor that invites visitors into the whole space. The frequency of 110 Hz is known as the ‘human pitch’, stimulating the right side of the brain, where art, creativity, spirituality and emotion are centred. Buddhist and Hindu mantras are often chanted at the same frequency (Thin Air – The Beams, 2023). Although the mantra is visual, it provides a certain synaesthetic experience by visually expressing the sine wave of the 110 Hz frequency. In spite of the installation itself being sonically ‘silent’, the ominous bass drones from the impending installation ‘3.24’ already enter the sonic space of ‘Cleanse/Mantra’ and somehow become part of it. It feels as if the mantra of creativity is meant to prepare you for the upcoming feeling of astonishment and immersion.

404.zero is a collaborative project of the artists Kristina Karpysheva and Alexandr Letsius. They specialise in real-time, generative and code-based art, presented in large-scale installations, performance and music. Through combining noise with randomised algorithms, they question the power structures of the Anthropocene and global politics, revealing them as invisible yet impregnable environments of the contemporary condition (Thin Air – The Beams, 2023). ‘3.24’ challenges visitors to delve deeper into their own perceptions and explore the depths of their personal experiences. The installation consists of carefully positioned lights flickering across the ceiling beams and pillars throughout the space. A dozen sound systems are positioned on each side of the humongous warehouse space. The sound design is ominous and deep; the light works in juxtaposition with the darkness of the vast, fog-filled space. The space is sparsely filled with people, who reminded me of entities of unknown origin slowly roaming, lost in the dark.

S E T U P is an international studio operating between multimedia art, lighting and stage design, and performance programming. The studio was founded by Dmitry Znamensky, Stepan Novikov, Pavel Zmunchila and Anton Kochnev in 2018. The team explores the expressive opportunities provided by the latest digital technologies. The group creates installations that challenge physical perception by working with light, programming and sculpture. They specialise in image and spatial distortion, and by using high-tech media they transform the spaces they work in (Thin Air – The Beams, 2023). ‘LINES’ is situated in a warehouse smaller than the previous one. The initial impression created by the light systems and sound design seems to share similarities with ‘3.24’; however, the space is equipped with beanbags, suggesting to visitors that it may be suitable for relaxation. Three layers of LED lines hang from the ceiling, spread across the warehouse.

Both installations, through their atmospheres and soundscapes, inevitably reminded me of the environment of techno clubs, albeit deconstructed into its basic elements of immersiveness. The sound in the space varied between deep drones, electric-current-like sounds, industrial metallic hits and heavy bass rumble. The sound, together with the visuals, could easily be found in a variety of techno shows. The immersive aspect may result in escapism similar to that provided by techno clubs and raves. Many modern humans have an urge to escape the stress of fast-paced life in big cities and recover by enclosing themselves in the post-modern cave environments of high-tech techno clubs or abandoned warehouses at underground raves, in order to internally reunite with ‘..ritual-spiritual and meditative space [and] to think through affective citizenship of socio-sonic dance space along processes such as belonging/alienation, self/’ (Zebracki, 2016, p. 118). Often loud, hypnotic and heavily percussive electronic music, combined with artificial smoke and darkness in juxtaposition with flashing lights, may bring a feeling of relaxation or mental restart, via the human mind reaching ecstatic or even psychedelic states. Here, however, the soundscapes of the installations, alongside the lights, are generated randomly by coded algorithms and create an ambience which encourages the visitor to (perhaps consciously) contemplate rather than to dance ecstatically.

In spite of the ambiguity and complicated definition of the concept of the soundscape, I have used this term to capture a representation of a certain environment with its own sonic reality, tied to a specific socio-cultural context (Bull and Cobussen, 2020, p. 37). What could the soundscape of a post-modern, post-industrial society sound like, and how does it inspire and influence humans living in a big city, constantly surrounded by technology? Do the sounds of traffic, factories and computers in office spaces, all naturally occurring in the city landscape, inspire us to purposely and carefully fold them into the patterns of compositions, thus creating industrial and techno music?

The important element in which both pieces in the exhibition differ from the club and party environment is what I call the ‘deconstruction of the urban techno soundscape’. The percussive element of electronic dance music played in nightclubs draws long-standing inspiration from, and has roots in, the percussive music of tribal traditions, e.g. West African drumming (Zebracki, 2016, p. 112), later reframed with the use of modern technologies in Detroit and Chicago. ‘…House music was created by Black men in the late 1970s in New York and most famously, Chicago, after the “death of disco”. Techno was born in Belleville, Detroit by young Black men mixing tracks together with drum machines, synthesizers and turntables. After the wall fell in Berlin in 1989, Techno music made its way to Germany as the sound of a new, inclusive future. Detroit and Berlin have since had a symbiotic relationship when it comes to techno…’ (Goodwin, 2023).

Both installations, ‘3.24’ and ‘LINES’, are highly immersive experiences, and the sound design contains many elements of techno soundscapes; however, they are stripped of the repetitive rhythms and structures of dance music. If percussive moments appear, they are random and unstructured. There is no sense of development, as there is, for example, in DJ sets. By stripping urban techno music of its repetitive percussion and compositional structure, yet retaining the elements of raw industrial/techno sound design, the sound installations still induce a similar transcendental effect on the human mind. They offer a different way of reaching a clearer consciousness compared with the overly busy club and rave environments, where people may experience affective citizenship, a feeling of belonging, and oneness on the dance floor, especially within the context of techno and rave culture (Zebracki, 2016, p. 111). Here we move more towards contemplation, based on the experience of solitude within the vast space (other visitors rarely interact, and there are only a handful of them, as mentioned above).

‘Cleanse/Mantra (110Hz)’ by James Clar
‘3.24’ by 404.zero



Bibliography:

Boudreault-Fournier, A. (2020) ‘Sonic Methodologies in Anthropology’, in Bull, M. and Cobussen, M. (eds) The Bloomsbury Handbook of Sonic Methodologies. Bloomsbury Publishing USA.

Goodwin, T. (2023) ‘A Brief History of EDM’s Black Roots’, iHeartRaves. Available at: https://www.iheartraves.com/en-gb/blogs/post/a-brief-history-of-edms-black-roots.

Thin Air – The Beams (2023). Available at: https://thebeamslondon.com/thin-air/

Zebracki, Martin (2016) “Embodied techno-space: An auto-ethnography on affective citizenship in the techno electronic dance music scene,” Emotion, Space and Society, 20, pp. 111–119. Available at: https://doi.org/10.1016/j.emospa.2016.03.001.

Derek Baron

Derek Baron is a composer, musician, and writer living in New York City. They have released a number of solo recordings of chamber, computer, and concrete music on record labels such as Recital, Pentiments, Penultimate Press, and Regional Bears. (Sound Arts Lecture Series | CRiSAP research centre, UAL, 2023)

I found particularly interesting Derek Baron’s inspiration from the very abstract concepts of Jewish mysticism and cosmogonic mythology about ‘sparks and vessels’ scattered across the created cosmos, partly because these topics have always been close to me as well. This theme follows him across various art forms into the sound piece To The Planetarium. The piece is made from gathered old family interviews captured on tapes, and its aim is to ‘let them be’ at their full length and in their own space instead of cutting a short version. As a result, the piece is extremely long (about four hours). Derek Baron recognises the difference between the material and the work; he found himself more in the position of a researcher and listener than of a creator in control of the content. This methodology reminded me of an analogy in the Jewish mystical concept of the creation of the Universe, Tzimtzum, which literally means stepping back to allow for there to be an Other, or something or someone else, as Derek Baron mentioned earlier.

Derek Baron sources inspiration for his music and compositions from various other art forms, especially paintings, and often puts random ideas and pictures together based on a ‘spark of momentum’, even if the result later no longer makes any sense to him. He compares such a spark to the mythical spark from the creation.

If I didn’t know about Derek’s fascination with mythology, his avenues of inspiration would seem very random to me and wouldn’t make much sense either; however, I can somehow perceive what Derek sees behind all the nuances of the art pieces that inspired his own creations and the approaches he presented during the lecture. There remains an inherent difficulty in comprehending and describing these inspirations in detail, because the whole journey seems very internal and personal.

Bibliography:

Sound Arts Lecture Series | CRiSAP research centre, UAL (2023). Available at: https://crisap.org/research/projects/sound-arts-lecture-series/.

Vicky Browne

Vicky Browne is an installation artist who utilises everyday objects such as walkmans, iPods, clothing and furniture to comment on Western systems of consumption and networked relationships to ecologies. (Sound Arts Lecture Series | CRiSAP research centre, UAL, 2023).

She builds sculptures of turntables, CD players and recorders from various materials. The sculptures are interactive and can actually ‘play’, although what is being played may be very different from what one expects of usual players and turntables. The playing sculptures sonically represent the material from which they are created, whether metal, glass, wood or a stick from the forest (which is still wood, but here attention is drawn to its place of origin, the forest).

Some of the ways Vicky Browne stages an exhibition in the gallery reminded me of the approach of Rie Nakajima. She positions various sculptures made from various materials across the space and lets visitors walk through it and immerse themselves in a cacophony of sound. However, she would call such a setup an installation rather than a sculpture. Like Rie Nakajima, Vicky Browne’s approach is very ecological: she uses a lot of recycled and old material.

Bibliography:

Sound Arts Lecture Series | CRiSAP research centre, UAL (2023). Available at: https://crisap.org/research/projects/sound-arts-lecture-series/.

Mélia Roger

Mélia Roger (*1996, France) is a sound artist and sound designer for film and installations. Her work explores the sonic poetics of the landscape through field recordings and active listening performances. Exploring human and non-human relations, she tries to inspire ecological change with environmental and empathic listening (Sound Arts Lecture Series | CRiSAP research centre, UAL, 2023). She works a lot with the voice and very recent technologies.

I found her piece ‘Voice as matter, matter of voice’ particularly interesting. She says a sentence to a machine translator, which translates it from her native French into Spanish. She then repeats what she hears, and the translator re-translates it. At some point this becomes a loop, and the translator starts creating new, random sentences. Mélia wants to see how the machine reacts to nonsensical sentences and how the application creates links between the two languages. The technique is somewhat reminiscent of Alvin Lucier’s piece ‘I Am Sitting in a Room’, in which he re-records a spoken sentence over and over again until only the pure resonance of the space remains. At the end of Mélia’s exercise with the translator, a point is reached where the translation stops changing between the languages; similarly, at the end of Lucier’s piece there is only a never-ending, indistinguishable resonating hum, over and over again.
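The feedback-translation process can be thought of as iterating towards a fixed point: a sentence that survives the round trip unchanged. Below is a toy sketch of that idea only; the real piece uses a live machine translator, whereas here `translate` is a hypothetical stand-in driven by a small lookup table.

```python
# Toy sketch of the feedback-translation idea: round-trip a sentence
# through a "translator" until it stops changing (a fixed point).
# `translate` is a hypothetical stand-in, not a real translation API.

def translate(sentence, table):
    """Map each word through the table; unknown words pass through unchanged."""
    return " ".join(table.get(word, word) for word in sentence.split())

def iterate_until_stable(sentence, fr_to_es, es_to_fr, max_rounds=50):
    """Round-trip the sentence until two consecutive rounds agree."""
    for round_no in range(max_rounds):
        next_sentence = translate(translate(sentence, fr_to_es), es_to_fr)
        if next_sentence == sentence:  # fixed point reached
            return sentence, round_no
        sentence = next_sentence
    return sentence, max_rounds

# Tiny toy dictionaries: "chat" round-trips back to itself, while
# "maison" drifts to "casa" and stays there (no reverse entry).
fr_to_es = {"chat": "gato", "maison": "casa"}
es_to_fr = {"gato": "chat"}

stable, rounds = iterate_until_stable("chat maison", fr_to_es, es_to_fr)
print(stable, rounds)  # → chat casa 1
```

Of course a statistical translator is far less predictable than a lookup table, which is exactly what makes the drift in Mélia’s piece interesting.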

Another piece of hers, ‘The voice is voices’, explores vocal cloning with online tools and IRCAM TTS, a program which synthesises speech. The artificial voice was constructed from many hours of voice recordings, and each word is generated entirely by the machine via text-to-speech synthesis. The installation plays with the listener’s doubt: one speaker plays Mélia’s real voice and another plays her synthesised voice. The idea is to create an uncanny feeling based on the impossibility of distinguishing the real voice from the synthesised one, and thus to question which identity is real and which is fake. Mélia realised that the mouth noises produced during speech become the point at which the organic and the artificial can be told apart. This piece may indirectly point to current questions and fears around constantly evolving Artificial Intelligence, as interacting with chatbots slowly becomes indistinguishable from interacting with humans.

The voice is voices

Bibliography:

Sound Arts Lecture Series | CRiSAP research centre, UAL (2023). Available at: https://crisap.org/research/projects/sound-arts-lecture-series/.

Rie Nakajima

Rie Nakajima is a sculptor living in London. She has been working on creating installations and performances by responding to physical characters of spaces using a combination of motorised devices and found objects. (Sound Arts Lecture Series | CRiSAP research centre, UAL, 2023)

She creates extensive interactive mechanical acoustic sound sculptures consisting of very random objects. By positioning various objects in different scenarios, combinations and on different flooring, she can achieve very different results in terms of sound and loudness. During a performance, objects are positioned in random places across the whole space she occupies. The sounds of the ‘mechanical creatures’ slowly take over the whole space, and the audience is continuously and fully immersed in a strange surround orchestra, with an ongoing tension created by ever new sounds coming from different directions. Rie doesn’t like to call them ‘her objects’ and likes to give them space to express themselves; over time she realised there is no need for her to name the objects, or her pieces and performances.

Rie points out that in Japan the culture around sculpture is very technical and material-based, but after she moved to London to study sculpture at Chelsea College of Arts she decided on a different avenue and started to experiment with incorporating sound into sculpture. Later, when she joined the Slade School of Fine Art in London, she also introduced an element of performance. When she performs her sculptures she has no set intentions or theme. The performance is improvised and always evolves into very different results and scenarios, also because the audience often has its own unconscious input through its position and interaction in the space.

I really appreciate the ecological, recycling approach in her art. She doesn’t like to use expensive objects or materials, and the whole practice is very compact: Rie mentioned that she has never had a studio and that the whole ‘sculpture scene’ travels in her luggage.

Bibliography:

Sound Arts Lecture Series | CRiSAP research centre, UAL (2023). Available at: https://crisap.org/research/projects/sound-arts-lecture-series/.

Rory Salter and Ecka Mordecai

Rory considers himself more of a musician than a sound artist. His music is formed through experimentation with electronic instruments, field recordings, amplified objects, cassette tape, feedback and voice. He is motivated by a relationship to changing and chaotic environments, objects, and scores made from walking. As an artist he works mostly with walking, text, feedback systems and participatory projects, often with a focus on actions and performance scores (CRiSAP, n.d.). He converts his drawings into musical compositions; the drawings work as a form of score.

Ecka is an artist whose work intersects music (cello, horsehair harp, voice, eggflute), performance and sensation (scent). She moved to London in 2020 to pursue a career in sound art; however, after the start of the pandemic she had to find an alternative and began working in a laboratory with scents, creating perfumes and perfumed candles. Her cello composition ‘Study of a flame’ was inspired by observing the flame of a perfumed candle, its movement and smell inspiring her to compose a cello piece. The process made her ask: can the process be reversed? This concerns not only the burning flame of the candle but also its scent.

Ecka started to develop sound-inspiring scents and perfumes, noticing details of an object and its environment (e.g. a tree) that inspire the subject (the perfume). She thinks about scents as she does about notes and sound waves, and considers her invention an intersensory recording device providing a form of synaesthetic experience.

‘Some perfumes are loud’, and at the end of the day, ‘the scent and sound are both airborne’.

Bibliography:

CRiSAP. (n.d.). Sound Arts Lecture Series | CRiSAP research centre, UAL. [online] Available at: https://crisap.org/research/projects/sound-arts-lecture-series/ [Accessed 4 May 2023].

Audrey Chen

AUDREY CHEN is a 2nd generation Chinese/Taiwanese-American musician who was born into a family of material scientists, doctors and engineers, outside of Chicago in 1976 (AUDREY CHEN, n.d.).

In Audrey’s performance I found particularly interesting the intersection between sound art, music, linguistics and body performance. Her vocal/synth performances, with their non-rigid flow, stand in opposition to the structures of the classical music in which she was trained as a cellist and vocalist. Her use of voice becomes more than mere instrumentation, since she is creating a new sonic language. In this way Audrey Chen points to an inherent attribute of music: being a form of communication that goes beyond human language, able to communicate the unspeakable, such as certain emotions.

Audrey Chen stated that during her performances she may even reach different states of mind through the way she breathes (hyperventilating) during her vocal expressions. This brings her into the realm of her own sonic language, which she has been creating and elaborating for the past 20 years; she admitted that it sometimes becomes difficult to tune back into classic, language-based human speech right after she finishes a performance.

This brings me to thoughts and questions about how many forms of non-verbal communication may exist and arise from the realm of the human mind and body. Art in general has certainly been one of them, and Audrey Chen’s vocal performance challenges the boundaries of everyday spoken language and of traditional singing at the same time.

Audrey Chen performing at London College of Communication on 24th April 2023

Bibliography:

AUDREY CHEN. (n.d.). AUDREY CHEN. [online] Available at: http://www.audreychen.com [Accessed 24 Apr. 2023].

CREATIVE SOUND PROJECTS (Element 1) – PART 6: FINAL COMPOSITION

The final composition became a track of 3 minutes and 23 seconds, suspended between jungle, IDM and electronica.

Another way of stepping out of my comfort zone and pushing creative boundaries was certainly the amount of time spent creating this track. I recorded, arranged and mixed it over a period of a few days, and the total time spent on production was somewhere between 24 and 30 hours. This is far less than I would usually dedicate to a track, since I can easily return to a sound or music piece again and again and work on it for months.

The deadline definitely pushed me to make sharp, quick creative decisions and not to overthink certain moves. Creating this track was very inspirational and made me consider recording a full album leaning towards glitchy IDM compositions based on electroacoustic recordings of organic sounds and field recordings.

CREATIVE SOUND PROJECTS (Element 1) – PART 5: FINAL MIXDOWN

The final mixdown was another challenge, since for this purpose I used a different DAW from Ableton Live, which I would otherwise have used for mixing as well. I chose Pro Tools, and this track became my very first project mixed in it.

Since I had focused on sound design and on recording quality samples from the very beginning, the mixing session turned out to be a rather pleasant and very subtle polishing job.

I mostly only adjusted the volume levels of the tracks, added compression on the drum group, applied EQ on a few selected tracks where I felt certain frequencies needed enhancement or a slightly different colour of sound, and improved the panning automation on the glitchy sounds.

Of course, I left a headroom of -6 dB.
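Leaving -6 dB of headroom just means scaling the mix so its peak sits 6 dB below full scale. A minimal sketch of that arithmetic, on a hypothetical list of float samples (full scale = 1.0):

```python
import math

def peak_dbfs(samples):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def gain_for_headroom(samples, headroom_db=6.0):
    """Linear gain that places the peak `headroom_db` below full scale."""
    target_peak = 10 ** (-headroom_db / 20)   # -6 dBFS is roughly 0.501
    peak = max(abs(s) for s in samples)
    return target_peak / peak

signal = [0.0, 0.25, -0.9, 0.5]               # peak is the -0.9 sample
g = gain_for_headroom(signal)                 # scale so the peak hits -6 dBFS
scaled = [s * g for s in signal]
print(round(peak_dbfs(scaled), 1))            # → -6.0
```

In practice the DAW’s master fader does this; the sketch only shows what the numbers mean.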

Edit window of Pro Tools project for the final mix
Mix window of Pro Tools project for the final mix

CREATIVE SOUND PROJECTS (Element 1) – PART 4: ARRANGEMENT AND SOUND DESIGN

For the arrangement of the composition and the sound design I used Ableton Live 10. I attempted to keep the whole composition quite minimal, without too many tracks and instruments. For example, I kept the drum rack as a single track and focused on levels and sound design beforehand in order to leave it in the mix as a single track. I used six effects on Send/Return channels to enhance the instruments and create space within the track.

Drum Kit
Glitch sample from the duct tape
Another glitch sample from the duct tape
Hard panned saw wave synth created from the lead synth by pitching one octave higher
Saw wave Lead synth
Saw wave Climax
Saw wave Verse 2 Bass
Accordion sound created from the Climax synth reversed
Triangle wave ‘bell’ synth
Saw wave ‘horn’ synth
Risers made from the guitar
Guitar
Long Reverb
Grain Delay
Delay1
Pedal Distortion
Delay 2
Medium Reverb

CREATIVE SOUND PROJECTS (Element 1) – PART 3: MELODY – RECORDING MODULAR SYNTH AND ELECTRIC GUITAR

Creating quite fast percussion (170 BPM) together with a layer of delay gave the composition quite a jungle feel, while manipulating the duct tape samples with grain delay and pitch shifting created glitchy layers that brought the track closer to the realm of IDM.

For the melodic part I decided to use my home-made modular synthesizer connected to an Arturia BeatStep Pro controller in order to create various sequences. The main theme of the track is a simple sequence originating from a single oscillator with a saw wave, evolving through a low-pass filter into the climax, where the sequence becomes groovy through adjustment of the attack (from short to long) of the ADSR envelope. The second synth layer of ‘bell’ sounds is again a single oscillator, this time with a triangle wave. The ‘horn’ sound is another simple saw wave, recorded and pitched down and enhanced by a very long reverb; the envelope on the horn’s filter is manipulated by hand during recording.
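The saw-plus-ADSR idea above can be sketched digitally. This is a toy software stand-in for the analog patch (a naive sawtooth and a linear envelope, with made-up parameter values), just to show how lengthening the attack softens each note’s onset and changes the groove:

```python
SR = 44100  # sample rate in Hz (assumed)

def saw(freq, seconds):
    """Naive sawtooth oscillator, values in [-1, 1]."""
    n = int(SR * seconds)
    return [2 * ((i * freq / SR) % 1.0) - 1 for i in range(n)]

def adsr(n, attack, decay, sustain, release):
    """Linear ADSR envelope over n samples.
    attack/decay/release are in samples; sustain is a level 0..1."""
    env = []
    for i in range(n):
        if i < attack:                       # ramp up
            env.append(i / attack)
        elif i < attack + decay:             # fall to sustain level
            env.append(1 - (1 - sustain) * (i - attack) / decay)
        elif i < n - release:                # hold
            env.append(sustain)
        else:                                # ramp down
            env.append(sustain * (n - i) / release)
    return env

note = saw(110, 0.25)  # one A2 note, a quarter of a second
short = adsr(len(note), attack=50, decay=2000, sustain=0.6, release=3000)
long_ = adsr(len(note), attack=4000, decay=2000, sustain=0.6, release=3000)

# With a long attack the early samples are far quieter: a softer onset.
print(short[100] > long_[100])  # → True
```

On the modular itself the same change is a single knob on the envelope generator; the sketch just makes the shape explicit.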

Process of finding the ‘right’ sequence…
Arturia Beatstep Pro – Analog / Digital sequencer used for creating the main sequence theme via modular synthesizer
Electric guitar ESP-LTD EC 1000 VB used for recording of guitar parts

CREATIVE SOUND PROJECTS (Element 1) – PART 2: RECORDING SAMPLES FOR PERCUSSION AND CREATING DRUM KIT IN ABLETON LIVE

To start, I created a quite minimalistic drum kit from recorded samples of only a few items present in my room. Kicks and toms were made by hitting an empty laundry basket with mallet sticks, and hats were made from a ceramic tea pot.

I recorded the sound of duct tape with no clear idea, at that moment, of how I would use it. I simply enjoyed the sound and its texture as the tape was being stretched and peeled. I later used the tape sound as miscellaneous glitchy textures throughout the composition.

For the recording of every single sample I used the condenser stereo microphone sets listed below. I compared how they sound and chose the various samples based on personal preference.

The AKG C414 condenser stereo microphone set generally appeared to catch more high mids and high frequencies, while the AKG C451 condenser stereo set was richer in the low end.

Recording laundry basket for kicks and toms

Recording tea pot for hats
Recording the cover of laundry basket for snare

I loaded the selected samples into a Drum Rack in Live and added various effects to enhance their sound. I started with EQ, added various amounts of compression on kicks and toms using the Drum Buss compressor, and added some colour by using Amp as a distortion. I also achieved interesting sounds by bending the pitch of various samples, either steadily or over the timeline with automation (mainly on the glitch sounds).
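The pitch bending on the glitch samples can be illustrated with the simplest repitch method, varying playback rate, which is not Ableton’s actual algorithm but the way a classic hardware sampler does it (pitch and duration change together). A minimal sketch using linear interpolation on a made-up clip:

```python
def repitch(samples, ratio):
    """Sampler-style repitch: read the clip at `ratio` times normal speed
    (2.0 = one octave up). Pitch and duration change together."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighbouring samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

clip = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # toy waveform
octave_up = repitch(clip, 2.0)   # half the length, double the pitch
print(len(octave_up))            # → 4
```

Automating `ratio` over time would give the sliding pitch bends described above; Live’s warp modes do something more sophisticated to preserve duration.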

CREATIVE SOUND PROJECTS (Element 1) – PART 1: CREATING THE CONCEPT AND SEEKING IDEAS

Our collective release will consist of various sonic works brought together under a common, collectively agreed concept: stepping out of our comfort zone.

I had been questioning what this would mean for me as a sonic artist and music producer, and couldn’t find an answer for quite a while. Then I realised that I should probably create something different from what I usually tend to create, while also using different techniques, instruments and tools, and diverting myself from the sound I naturally incline towards.

My usual music production could be described as dark experimental electronics leaning into industrial, with a lot of ritualistic percussion. It usually has a very slow BPM, a lot of metallic sounds, deep atmospheric ambience and no firm compositional structure.

As an example you can listen to the EP ‘Theatre of Plague’ released on The Judgement Hall Records in 2022 below:

I decided to create something where I would take most of the steps and creative decisions in exactly the opposite way to how I would normally think, act or be drawn.

I recorded my own samples from the very few items present in my room at that moment and used as few instruments as possible. In terms of composition, I decided to make something adhering to the grid and following bars, in contrast to my usual less structured tracks, using no metallic but rather organic sounds. I also went for a much faster BPM than usual.

Sound design for ‘THE VILLAGE’ – a site specific immersive theatre by Persona Collective

Over the past few months Persona Collective has been working closely with LGBTQ+ communities and Soho’s local businesses and residents to stage this multi-sensory, intimate journey across multiple secret locations and passageways throughout Soho and Chinatown.

When? From 30th March until 19th April 2023.

Poster design @emilygeorge.jpg

Poster photography @emilygeorge.me

Supported using public funding by the National Lottery through Arts Council England @aceagrams

Host + partner @the_koppel_project

Cast + Creators: @valentinebordet, @intact_sofa, @emilygeorge.me, @harperwalton_, @francescakos, @s__plowman, @gladyswen, @melanie__gautier, @_a_sf, Amanda Kamanda, Pilar Morales Perez, John Quan, Kieran Saikat Das Gupta, Tony Towell, Kim Way

The Collective: Artistic Director + Producer @rocio_ayllon; Assistant Director + Choreographer @gladyswen; Choreographer @georgiamay.__; Creative producer @abbie.madams; Producer, Photographer, Props + Costume @yagasovinska; Graphics, Props + Art Direction @emmaIddesign; Art installation + Set Design @jackwates; Costume Designer @jakubxnowacki

Lighting Design Team: Design Director + Labs Facilitator Satu Streatfield; Lighting Designers @_a_sf & @annng.y; Lighting technician Steve Lowe

Sound Design Team: Sound Design Director + Labs Facilitator @jmacabra; Sound Designers Enrico Lovatin & Vit Trojanovsky

Sound Design Assistants: Zell Couver; @juice_shuting; @joviennw; Haein Kim; Ella Macfarlane; Artem Spivak; @lilbobagginz

JOHN QUAN & CASTING FULL METAL JACKET by Vit Trojanovsky

This scene took place in the first venue of the immersive theatre, the hairdresser Cuts in Soho. It is a rework and adaptation of my previous sound piece ‘Indochina Jungle’.

DUMPLING SOUNDSCAPE by Vit Trojanovsky
DUMPLING SCENE by Jose Macabra and Vit Trojanovsky

The dumpling soundscape was mixed into the full scene, which took place in the second venue, The Koppel in Piccadilly Circus. It is a field recording of the playful manipulation of raw dough and boiling water.

HELL SOUNDSCAPE by Vit Trojanovsky
CHURCH SCENE by Jose Macabra and Vit Trojanovsky

The hell soundscape was mixed into the full scene, which took place in the third venue, St Patrick’s Church in Soho Square. It is a recording of electromagnetic waves from inside an Overground train, made with a Soma Ether and enhanced with a very long reverb.

Video installation “Indochina Jungle” by Lucie Trinephi

I was delighted to create the sound for the video installation ‘Indochina Jungle’ by Lucie Trinephi, based on her childhood memories of the Vietnam War in 1970s Saigon.

“Operation Popeye was a geo-engineering programme to control weather. The “make mud not war” programme used Cloud seeding technology to extend the monsoon season for as long as possible. Rainmaking as weapons was in operation for five consecutive years… Silver iodide required as condensation agent is highly poisonous to aquatic life, vegetation and humans.”

The installation was presented at The Chronic Illness XIX event, situated at a secret underground venue in North London, and was extended into a performance act by Lucie Trinephi herself, Neo Fung & Laboranta during her DJ set.

“The bird flies home and finds no nest.”

Thoughts on Mandy-Suzanne Wong’s Sound Art summary

Mandy-Suzanne Wong discusses different approaches to what sound art may and may not mean, based on insight into the history of the various ways sound has been used for creative purposes. She points out that the term ‘sound art’ was first used by the American composer William Hellerman in 1983; however, her reach into the possibility of artistic expression through sound outside conventional music goes back to the beginning of the 20th century, via the aestheticisation of noise and machinery by the Italian Futurists.

I found particularly interesting her points questioning the relation between sound art and other art forms. Mandy-Suzanne Wong says that sound art may explicitly include or exclude other art forms. This imbalance brings me to the thought that sound art, as a quite modern art form, seems rooted in questioning its own existence and, in comparison to other art forms, appears inherently philosophical.

Another interesting point is that research into sound art has been undertaken by visual artists rather than musicologists. This suggests an obvious divergence from conventional music and a perception of sound art as something else. Is music, as an organised artistic expression through sound waves, part of sound art, or is sound art an experimental form of music? That is another question that arises.

Week 10: I am a Sound Artist

1. What context or genre is your work situated in: what artists does your work relate to?

My current sound work has mainly taken place in the field of experimental electronic music, as a producer, live act and DJ. I have mostly been designing sounds in a DAW, but recently I have started to incorporate analog synthesis, field recordings, acoustic instruments and musique concrète. There are many musicians I look up to, though this has changed a lot over the years; currently my favourite artists are experimental electronic musicians like Roly Porter, Nastika or OAKE.

2. What are the key ideas and motivations for your work?

This has always been a difficult question, as I often struggle to find words to describe my music and the sources of its inspiration. Sound and music are, for me, somehow the most natural way to express myself artistically. I believe that art is a form of communication. People commonly communicate through language, but the human mind and life are far too complicated to be fully expressed in spoken or written language alone. Sound and music can communicate indescribable feelings and emotions. My motivation is to communicate my inner world to others, within space and time, via sound waves.

3. What form and media does your work take/hope to take?

Immersion in music and sound has always been the element I am ultimately drawn to. The idea of exploring the field of audio-visual immersive installations is currently very appealing to me: creating an experience that induces the fantasy of an altered state of mind, not necessarily as a form of escapism (though it could be), but as meditation and relief.

4. What do you want your art/practice to do?

As I suggested above, I consider art a form of communication based on a different kind of human experience from, for example, language. I want to induce an emotional response in other people in a way that touches their soul, finding those who can feel through the sound: ‘this sound is familiar to me, this reminds me of who I am and who we are’. I want to induce such a feeling to an extent that no spoken or written words could express.

Premiere of the track ‘From Dust To Flame’

On Friday 2nd December 2022 my new track was premiered by the Berlin label and music platform THE BRVTALIST. It will be part of a compilation alongside the likes of Orphx, 3.14 and Comarobot, released on 13th December on the Korean label GWI MYEON Records.

The track was recorded and produced in collaboration with the musician Elfeira. The core of the composition is a melody played on handpan by Elfeira, recorded with two Oktava MK012 condenser microphones. The overall distortion of the climax was achieved by running the same melody through the plate reverb of a Lexicon MPX 100 Dual Channel Processor and overloading its preamp.

The track carries industrial and ritualistic vibes. Its name was chosen to celebrate the power of the subconscious mind (the word ‘dust’ appeared in a dream in which I was thinking about how to name this particular track).

WEEK 9: TEXT SCORES

Liquid Scream

Fill up the bathtub with water.

Submerge yourself in the water and scream.

Listen to bubbles of air coming out of your mouth and how the sound spreads within the bathtub.

Focus on the sound of your voice in the water and resonance of the bathtub spreading through the water.

Record it with a hydrophone, a contact microphone attached to the body of the bathtub, and a field recorder positioned above the bathtub.

Make sure you don’t drown.

The Sound of Sleep

Position a field recorder close to the pillow in the bed where you sleep.

Attach contact microphones to your abdomen, the structure of the bed, the duvet and the pillow.

Record for the whole length of your sleep.

Look for peaks in the sound wave to identify louder or interesting moments within the period of your sleep.

Listen to the sounds of your bedroom, your breath, sleepwalking, sleep talking and the sounds your body makes during sleep.

Sound portrait of Polymorphous Pan

Polymorphous Pan is an entity dwelling in its own Dungeon and living off the energy provided by mutagenic performance art and experimental music. These creations have been occurring for the past seven years in the basement of a squatted bookshop in North London, as part of the events Chronic Illness of Mysterious Origin (renamed Chronic Illness in 2017) curated by Neo Fung, who writes:

Inspired by rotten fetishism, various acts of body performance that engage with fungi on a visual and material level have been featured at Chronic Illness to explore the possibility of enacting alternative sexualities and non-normative lifestyles as key ecological processes within the present-day, decomposing civilisation (Bockowski, 2022).

The sound piece explores a sonic fantasy of an abandoned, mouldy industrial basement which sometimes comes alive while hosting gatherings of strange and uncommon expressions of art and individuality, creating a backdrop for emerging varieties of new identities and non-conforming ideas about life, in juxtaposition with what commonly occurs above ground.

Polymorphous Pan is a collection of field recordings gathered in the actual basement during late October and November 2022, using various types of microphones: a dynamic Shure SM58, a pair of small-diaphragm Oktava MK-012-01 condensers (often positioned at opposite sides of the space or object to capture stereo), a Tonor TC20 condenser, a ZOOM XYH-6, a Soma Ether and 13 piezo contact microphones. I recorded the space itself during two overnight recording sessions, using every type of microphone and only objects present in the basement or close to its entrance. The only acoustic object used during recording which did not originate in the basement was a pair of mallet drumsticks. For recording the metal cage I used twelve contact microphones via an aggregate of the audio interfaces Focusrite Scarlett 18i8 and M-Audio M-Track Eight.

I have chosen particular samples based on various characteristics of the sound like texture, frequency range and amount of the natural echo of the basement. After I listened to the final selection of samples I edited and composed them in Ableton Live 10. To process the sound I have often used a resonator called Corpus and four different effects on Send/Return channels (short reverb, long reverb, ping pong delay and echo).

During the recording I encountered several problems which led me to rethink the process and influenced my further creative choices. For example, after the first overnight recording I realised that the gains on the ZOOM H6 recorder were set too high, so a lot of hiss appeared. The second overnight recording had much better levels, but far fewer interesting events were captured in comparison to the first night. I chose the tapping water leak from the first, rainy night, applied a low-pass filter and processed it with Corpus; this created the lead bass drone present through most of the piece.

My favourite part of the process was the application of Alvin Lucier’s technique of extracting room resonance articulated by speech. I recorded Neo Fung reading the article about Polymorphous Pan in the Dungeon and re-recorded the speech thirty-one times.
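The principle behind the Lucier technique is that each playback-and-re-recording pass filters the signal through the room’s frequency response, so after many generations only the room’s resonances survive. A cartoon simulation of that idea, with a simple one-pole low-pass standing in for the room (the real process is acoustic, of course, and a room has many resonant peaks rather than one smooth slope):

```python
def room_pass(signal, alpha=0.5):
    """One-pole low-pass standing in for the room's frequency response:
    each pass keeps what the 'room' resonates with and smooths the rest."""
    out, prev = [], 0.0
    for s in signal:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def normalise(signal):
    """Rescale so the peak is 1.0, keeping levels comparable between passes."""
    peak = max(abs(s) for s in signal) or 1.0
    return [s / peak for s in signal]

def rerecord(signal, generations):
    """Play back into the 'room' and re-record `generations` times."""
    for _ in range(generations):
        signal = normalise(room_pass(signal))
    return signal

def sharpness(signal):
    """Largest jump between adjacent samples: a crude high-frequency gauge."""
    return max(abs(signal[i + 1] - signal[i]) for i in range(len(signal) - 1))

# A click is broadband; after 31 simulated re-recordings only a smooth
# 'room resonance' shape survives, so the signal is far less sharp.
click = [1.0] + [0.0] * 255
once = rerecord(click, 1)
thirty_one = rerecord(click, 31)
print(sharpness(thirty_one) < sharpness(once))  # → True
```

Thirty-one generations matches the number of re-recordings used for the piece; in the Dungeon the same convergence happened through its actual walls and standing waves.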

List of equipment used for recording and production:

Microphones: Shure SM58, a pair of Oktava MK-012 condensers, Tonor TC20 condenser, 13 piezo contact mics, Soma Ether

Sound interfaces and recorders: Zoom H6, Focusrite Scarlett 18i8, M-Audio M-Track Eight

Analog compressor Focusrite Compounder for drum group

DAW: Ableton Live 10

Bockowski, P. (2022) ‘Fungal Fetishism, rotten performance at the Dungeons of Polymorphous Pan’, CLOT Magazine, 8 (September). Available at: https://www.clotmag.com/ (Accessed: 20 November 2022).

Entrance to the Dungeon of Polymorphous Pan
Recording the carpet which covers the entrance to the Dungeon

The cage setup

Playing the cage

Sealed rusty door ready to get smashed

Sealed rusty door getting smashed
Arrangement of the tracks in Ableton Live project
Application of Alvin Lucier's 'I Am Sitting in a Room' – extracting resonances of the Dungeon of Polymorphous Pan articulated by the speech of Neo Fung

Week 8: Listening and Hearing

An immediate idea comes to mind about what the dichotomy of listening and hearing might be. One happens constantly and can hardly be avoided; the other is done by willing choice. It is impossible to switch off hearing in the way we can close our eyes in order not to see. Hearing is highly automated, so much of the information coming from this particular sense becomes suppressed and non-conscious. Listening, on the other hand, is a conscious activity requiring our attention in the moment. Listening harnesses hearing together with mental focus in order to perceive a particular sound or sounds.

Listening depends on hearing. We would not be able to listen without hearing, but we often hear without listening. A few questions arise. What makes us listen? When and how does hearing become listening? Which elements of sound(s), and which situations, trigger that transition from non-conscious hearing into conscious listening?

I have created a typology of listening based on how hearing becomes listening, considering the relation between object and subject. Listening can serve the purpose of art or the purpose of survival. In both situations the sound attracts our attention for some reason. The trigger can be automatic (it comes from outside: we hear something and are made to listen) or conscious, a matter of choice (it comes from inside: we decide to listen and use hearing consciously).

Here are some examples:

Artistic Automatic Listening – There is a sound piece, music or soundscape happening around us. We simply resonate with it for whatever reason; it draws our attention and we become conscious listeners.

Hearing the object ➔ Subject is listening

Artistic Conscious Listening – There is a sound piece, music or soundscape happening somewhere and we know about it beforehand: somebody told us about it, or we presume it is happening based on other information. We decide to approach and listen to it with an intention which precedes hearing it.

Subject uses hearing ➔ listening to the object

Survival Automatic Listening – We are crossing the street and unfortunately paying less attention than we are supposed to. A car honks to attract our attention and, within milliseconds, transforms hearing into listening and stops us from being hit.

Hearing the object ➔ Subject is listening

Survival Conscious Listening – A group of humans decides to go hunting in order to obtain food. The conscious use of silence whilst observing the prey is hearing transformed into conscious listening.

Subject uses hearing ➔ listening to the object

Week 7: Sonic Materialism

When I think about three everyday sounds containing qualities like rhythm, texture and pitch, I don't need to go very far at all. Those sounds can be found and identified simply in connection with my own physical body, as in general human experience.

Rhythm – Breath

A very quiet, textured, rhythmical sound whose pace depends on the activity we are doing at the moment or on our current mental state. We cannot avoid this sound, since we need to breathe. It is only a matter of finding a quiet moment and corner to hear it. Consciously or not, we find such a moment and hear it clearly at least once every day anyway: before we fall asleep in bed.

Texture – Crunching and Chewing Food

I once realised that sounds related to my mouth are the loudest ones I perceive, assuming the surrounding environment is quiet. We not only hear what comes out of our mouth; we perceive further textures of resonance through the mass of our body directly touching the inner ear, especially when eating food or drinking water. It is as if the jawbone were a contact mic. Again, we need to eat every day, so we hear this every day.

Pitch – Voice

The sound of our voice changes pitch naturally all the time. Whether we are talking or even singing, the change of pitch is constant, unless we are willingly trying to impersonate a robot as a performance.

Week 6: Acoustic Ecology of LCC

Sound Signal – sounds which are meant to be listened to, measured or stored.

A very explicit sound signal is, for example, the beeping of the card reader upon entering classes, or of the gates upon entering the building.

Keynote sound – Sounds which are heard by a particular society continuously or frequently enough to form a background against which other sounds are perceived

The background noise of the corridors, or dishes and cutlery in the canteen.

Soundmark – a community sound, which is unique or possesses qualities which make it specially regarded or noticed by people in that community

The silent ambience of the library and the way to it. The bridge right before the entrance has a high ceiling which amplifies surrounding sounds, followed by a transition into the hyper-quiet environment of the library with the occasional sound of turning pages.

Week 5: Gallery visit

Two pieces in particular drew my attention during the visit to Tate Modern: Brain Forest Quipu by Chilean artist Cecilia Vicuña and 112L by Leonardo Drew.

Brain Forest Quipu struck me straight away with its contradictions: massiveness and softness. At first I thought this multi-media installation was made of dream-catchers. Two 27-metre-long sculptures hang from the ceiling of the Turbine Hall. The piece consists of fabric sculpture, sound, music and video. It mourns the destruction of nature (the rain forest) and the loss of Indigenous history and culture (Anon, n.d.).

I visited the piece several times in the three days following our first visit. The physical presence of the sculpture is quietly monumental, like the rain forest itself. The sonic element gives the piece a dimension of change spanning over 8 hours. It was conceived by Vicuña and directed by Colombian composer Ricardo Gallo, and it brings together indigenous music from several regions, compositional silences, new pieces by Gallo, Vicuña and other artists, and field recordings from nature.

I particularly indulged in listening to the sound and music from different distances from the piece within the majestic Turbine Hall. The reverberation of the huge space brought a whole new perspective to the compositions. Their clarity dissolved within the long decaying echo of the Turbine Hall, while approaching the core of the sculpture brought you back to the original sound of the compositions. But then, where was the sound coming from? It took me a good few minutes to locate the source: many little speakers wrapped up in little cocoons, blending with the many other knots and fabrics.

What I didn't enjoy about the piece was inherently necessary due to its placement, and I myself was part of it: the chatter of people visiting the gallery somewhat disturbed the beauty of the reverberation experienced whilst observing and listening to the sculpture from a distance.

The other piece I liked probably wasn't a sound piece in its original intention; what intrigued me was its relation to sound through the bias of my mindset as a sound artist.

112L by Leonardo Drew is a sculpture made of wood. It made me think about how different interpretations of art pieces may be within fields, or even across them, based on our own history and interests.

The wooden sculpture immediately reminded me of the visual representation of sound waves (most likely white noise) in a wavetable generator.

Anon (n.d.) [online] Available at: https://www.tate.org.uk/whats-on/tate-modern/cecilia-vicuña.

Week 4: Research and Writing Skills

“In the form of unfelt activities, what could be called matrix ‘of advancing acts that have already arisen from previous situations’ (Langer 1967: 281), constitutes the mood of a feeling, and this mood shapes the kind of conceptual moves that can be made in an occasion of feeling, then this dimension of organic activity should be regarded as a structured and structuring ground that determines the kind of abstractions or abstractive tendencies that can take place within it without, however, determining the characteristics of these abstractions.”

This paragraph is extracted from the essay 'Felt as thought (or, musical abstraction and the semblance of affect)' by eldritch Priest. The essay is part of a wider collection which '…features new essays that bring together recent developments in sound studies and affect studies' (Thompson and Biddle 2013: 250).

Considering the title of the essay, the paragraph may be suggesting a dichotomy and a mutual relation between rational thinking and the emotions induced by music or sound; possibly we could extend this to the experience of any art form in general. The phrase '…form of unfelt activities, what could be called matrix "of advancing acts that have already arisen from previous situations" (Langer 1967: 281)…', cited from S. Langer's Mind: An Essay on Human Feeling, indicates a collection of experiences derived from the 'unfelt', which could be considered the rational. This rational thinking may somehow be shaping ('biasing'?) our momentary emotional experience (abstractions) whilst we are exposed to an art piece. This suggests that our emotional experience of art never exists fully independently of certain preliminary structures of our own individual experience of life, history and mind.

Bibliography:

Priest, e. (2013) Felt as thought (or, musical abstraction and semblance of affect), in: Thompson, M. and Biddle, I. (eds.) Sound, Music, Affect: Theorizing Sonic Experience. Bloomsbury Academic, pp. 45-64.

Schizophonia vs. l’objet sonore: soundscapes and artistic freedom – by Francisco López

As sound artists we inevitably have an object of interest which is extracted and/or executed in a way that expresses an artistic idea. The sound is that object. López's article discusses two schools and negotiates two different points of view on the same phenomenon: real sound environments, the soundscape.

On one hand there is the school of 'Schaferians', named after the Canadian composer Murray Schafer. Schafer starts with a critique of the 'silencing' effect of the 'noisy' post-industrial world, which diverts humans from the natural sounds of the environment surrounding us. He therefore considers any systematic attempt to isolate a sound from its natural environment a form of divergence, which he calls schizophonia.

Pierre Schaeffer talks about the sound object (objet sonore), which is exactly the opposite of what the Schaferians wish to achieve. A sound object is a sound isolated from its environment in order to create artistic expression. Schafer says that keeping sounds within their natural environments is artistic expression in itself; Schaeffer criticises him for restricting creative freedom in favour of acoustic ecology.

I can agree with both points of view, but only in specific parts. Schafer sees art in something naturally occurring and raises a critique of the common sonic-art expression we call music (in order to create an art piece, music isolates tones; musique concrète further isolates sounds, and so on). This may bring a lot of new inspiration, but at the same time it stops there. That is where I agree with Pierre Schaeffer, claiming that the Schaferians reduce opportunities for artistic freedom and expression.

Field Recording Trip

Thames South Bank 24th October 2022

In front of the Sea Containers Hotel a security guard arrived to check on the animalistic sounds and the application of suspicious-looking devices. He came worried and paranoid, and left genuinely intrigued. A music producer from Southbank University liked it too.

See It, Record It, Sorted.
This piece was recorded on my way home from the field recording trip. I was playing around with the Soma Ether at London Bridge station. A homeless guy was busking, playing a few chords on guitar. He was facing directly towards a glowing advert screen in a tunnel where people were passing by during the rush hour. When I scanned the screen with the Soma Ether I realised that the sounds of its electromagnetic fields created two major tones which exactly matched the chords of the guitar riff he played, without stopping, over and over again…

Chris Watson: The Art of Location Recording

The article provides a very interesting and valuable insight into the technical background, recording methods and use of various equipment in Chris Watson's field recordings in various outdoor environments, sometimes even in extreme conditions.

An extensive list of tips on how and when to use specific types of gear to achieve specific sonic results can almost be extracted from it.

I found Watson's comparison of the pros and cons of digital and analog recorders interesting. The digital domain provides more portability and reliability: the gear is getting lighter and smaller, and the amount of audio recorded larger. Analog gear, which often doesn't possess the qualities mentioned above, stands out by its physical aspect and distinctive sound.

For sound design he prefers the original sound of hardware synthesisers, due to their harmonic frequencies being better than those contained in software synths.

Watson's approach to post-processing recordings depends on the goal or the task, but personally he insists on doing as little as possible. Here Watson emphasises the importance of the quality of the recorded audio at its source. He thinks very carefully about the nature of the sound before pressing the record button, avoiding overwhelming himself with an extensive library of low-quality samples later. He then carefully chooses specific microphones and pieces of gear, based on experience gathered over time and around places, in order to capture those thoroughly premeditated, coherent sounds.

Week 3: Electrical Walks of Christina Kubisch – an extension of, or diversion from, the soundwalking of Hildegard Westerkamp?

In this blog post I would like to compare and explore the possible relation between two sound art practices: the soundwalking of Hildegard Westerkamp and the Electrical Walks of Christina Kubisch. The practice of soundwalking emerged in the 1970s, inspired by the practice of conscious listening to the environment and by acoustic ecology (Staśko-Mazur, 2015: 440-441). Westerkamp's soundwalking is an inherently ecological activity. She questions the background noise and sonic pollution of human cities, which creates an 'authoritarian environment' causing the inability to hear and listen to the tiny sounds of barnacles during Kits Beach Soundwalk, and this concerns her. Westerkamp isolated those tiny noises with the use of bandpass filters and equalisation, and so gave unheard voices the chance to be heard; she doesn't promote a complete abandonment of technology (Kobler, 2002: 41-42).
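Westerkamp's isolation of the tiny barnacle sounds can be approximated with a simple bandpass filter. A minimal sketch using SciPy (the cutoff frequencies and the synthetic signals below are my own illustrative choices, not Westerkamp's actual settings):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(audio, sample_rate, low_hz, high_hz, order=4):
    """Attenuate everything outside [low_hz, high_hz]. Second-order
    sections keep the filter numerically stable; sosfiltfilt runs it
    forwards and backwards for zero phase shift."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)

# Pull a quiet 3 kHz "barnacle" tone out of a loud low rumble.
sr = 44_100
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 60 * t)          # loud city rumble
tiny = 0.05 * np.sin(2 * np.pi * 3000 * t)   # tiny high-pitched sound
filtered = bandpass(rumble + tiny, sr, 2000.0, 4000.0)
```

After filtering, the quiet tone can be amplified without also amplifying the rumble, which is essentially what Westerkamp's equalisation does for the barnacles.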

Christina Kubisch started her Electrical Walks in 2004, inspired by her earlier sound installations from the 1980s based on listening to the amplified electromagnetic fields of wires. She created headphones with built-in induction coils and amplifiers which allow listeners to walk freely in the city and listen to the multitude of electromagnetic fields that constantly surround us in the urban environment. Kamila Staśko-Mazur considers Electrical Walks a new strategy of soundwalking (Staśko-Mazur, 2015: 441).

The two soundwalking practices differ from each other but also have a lot in common. Westerkamp tries to escape human cities so she can listen to the silenced sounds of nature, and maybe find her own inner voice too (Kobler, 2002: 42). Kubisch, on the other hand, dives right into the noise-polluted centres of human cities. She also uses technology for a similar reason: to listen to the unheard. Electromagnetic fields and barnacles are both silent by nature; both can be made louder and heard through the use of technology. However, human-induced electromagnetic fields are a by-product of the very noise pollution that silences the tiny noises of nature. Westerkamp once said that 'we should listen to our cities as the native did to the forest' (Westerkamp, 1974: 18-24), and that is what Kubisch did, in more depth and precision. A certain sense of ambivalence arises when comparing these two practices.

I really enjoyed the idea of Electrical Walks by Christina Kubisch. It reveals sonic realms constantly present in the city environment but hidden from our hearing, and brings a completely different perspective and inspiration. I am particularly interested in drone sounds, and electromagnetic fields induce these in many forms, textures and pitches. Regular percussive sounds can be found as well. In contrast, the randomness and irregularity of the barnacle sounds is what I liked a lot in Hildegard Westerkamp's Kits Beach Soundwalk.

Week 2: Keywords to my practice and aspirations

Immersion

I have always been intrigued by immersive performances and audio-visual installations. The main reason is the possibility of escaping the reality that surrounds us into realms of dreams, timelessness and abstraction. Certain types of sounds (e.g. drones), their characteristics (e.g. reverberation), shapes and lights induce in me very particular states of mind, close to meditation.

Installation

Music production and music performance have been part of my artistic expression for most of my life. Over time, mainly thanks to electronic music club environments, I started to develop an interest in different avenues for expressing my need to play with and manipulate sound. Sound installations and sound sculptures work at different scales of time and space compared with music performances and productions, which take the form of a release, for example. I would like to work with vibes and emotions similar to those I have always liked in performances, and to reframe and execute them within different time-scales and spaces.

Psychoacoustics

Certain frequencies and their combinations can make us feel and react in certain ways. I am keen to explore how sound installations and sound/music performances impact human emotions and the psyche when executed in specific ways: playing with specific frequencies to induce particular states of mind.

TROJΛNOVSKX – my journey through electronic and experimental music, sound design and performance art

I first touched a DAW in 2014, but my music production, experiments and collaborations started to evolve more consciously in 2016, after I moved into a squatted bookshop in Holloway, North London. I instantly started collaborating with my 'squatmate', performance artist Piotr Bockowski a.k.a. Neo Fung, and joined an underground art event under his curation called The Chronic Illness of Mysterious Origin, which focuses mainly on underground performance art and experimental electronic music. The event takes place in the basement of the squatted building, The Dungeons of Polymorphous Pan, often described and conceptualised by Neo Fung as an independent fungal post-internet entity or life-form.

Over the last six years I have been scoring various live performance acts and Neo Fung's short movies whilst living in the building, often bringing the equipment down to the Dungeon and rehearsing in its constantly changing environment. The basement has been flooded several times by underground rivers pressing on the local sewage system. A couple of Chronic Illness events happened in spite of these circumstances of nature, with electronic music replaced by acoustic experiments with the metal structures of the dark and moist environment.

Under the moniker Aurelia Trojanovska, later transformed into TROJΛNOVSKX, I have played several experimental ambient live sets which created the foundations for many of my tracks. Some of them have been released as digital EPs on London's underground experimental electronics/techno/avantgarde labels: Edited Arts in 2018 ('Hemlock'), then Metempsychosis Records ('Blaec Waella') and The Judgement Hall Records ('Theatre of Plague') in 2022. An upcoming track will soon join a few other industrial and techno artists on a VA compilation released on the Korean label GWI MYEON Records, coming out in a few weeks.

Focusing mainly on performing live sets and scoring performance art acts, I have mostly used the DAW and MIDI controllers. However, in the last two years I have started including live recordings and the raw sounds of guitar effects, analog and modular synthesisers in my production.

When I decide to mix and layer the music of my favourite artists, I perform DJ and hybrid sets. In those I often blend dark ambient, industrial, broken-beat techno, ritualistic electronics, witch house and other sub-genres of dark wave, and over the fast-paced repetition I favour storytelling and narration.

Hildegard Westerkamp’s Kits Beach Soundwalk – shifting perspectives in real world music & Soundwalk as a multifaceted practice

Soundwalk

Westerkamp's approach to the soundwalk is inherently ecological and reaches into ideas that are almost spiritual. A regularly practised experience of the present moment during soundwalks may become meditative. She juxtaposes the 'authoritarian environment' of human cities, which induces a huge amount of noise, with the 'tiny unheard voices' of the barnacles, and points out the imbalance which concerns her. By practising conscious and focused listening to natural environments and their marginalised aspects, one can regain the lost balance, with an almost meditative effect, and hear not only 'barnacles in their whole tinyness' but also 'reach their own inner voice'.

It is important to point out that Westerkamp doesn't refuse the use of technology. For example, the use of equalisers and bandpass filters allows us to bring to life all those marginalised and silenced voices. Westerkamp says that 'we should listen to our cities as the native did to the forest' (1974)[1: The quote is from a text by Westerkamp entitled Soundwalking, which sets out the fundamental goals of the practice (Westerkamp, 1974: 18-27)].

It is, again, about finding the right balance between the use of technology and nature by '…actively paying attention to current needs, and adopting an approach of awareness'[2: Kamila Staśko-Mazur, Soundwalk as a multifaceted practice, p. 448 (2016)].

Tim Ingold's concept of being-in-sound follows up Westerkamp's conscious environmental listening by becoming present in the moment, which may again result in meditative states of mind.

Reflecting on my own experience of a soundwalk whilst blindfolded, together with Westerkamp's ideas, I found this practice to be good sonic and attention-span hygiene. It is not only the auditory noise of human cities which suppresses the marginalised voices of the barnacles, but nowadays also the 'visual and attention noise' of social media, which drags us out of the experience of the present moment and often devalues various sensory experiences like conscious listening.

What Sound Arts means to me

For most of my life I have been intrigued by probably the most common and widespread artistic expression through sound waves: music. I have been listening to and indulging in many genres and trends of acoustic, electroacoustic and electronic music. The logical outcome of my love of music was starting to play electric guitar, and later electronic music production.

Through my interaction with music, both as a listener and as a musician/producer, I have observed several aspects which always caught my attention, for example the physicality of reproduced sound during live performances and in club environments. I started to break down other aspects of recorded and live musical compositions and sounds which drew my attention, and questioned why they make me feel a certain way in certain moments. It became a constant back and forth between acoustic/electroacoustic and electronic music, perceiving the sound and its aspects more and more consciously, until I isolated them, noticed them more in natural and synthetic environments, and felt an urge to explore them as phenomena in themselves.

Drones – The pleasant and calm drone of beehives versus the nervous drones of insects high in the tree crowns of the forest just before the storm hits. Drone music and ambient can be very meditative and calming, but they can also create tension by oscillating in lower frequencies and by adding darker textures through distortion.

Reverberation – The sense of space and depth within music as well as natural environments. I particularly enjoy the echoing of very large spaces. A visit to the Chislehurst Caves in South East London struck me particularly with their extremely long and pleasant decay.

These were only two examples of the many phenomena I would like to explore.