Last weekend I finished assembling the central station, which will collect data from all motion sensors in real time and transfer them via WiFi to the computer. I have attached it to an old crop-top of mine until a more elaborate piece is created later.
Prototype of the central station for sensors attached to the crop-top
Future idea for the central station piece (A.I. sketch)
Measurements of limbs were taken and sent to the latex maker Ella, who is going to be working on the piece this week
A.I. sketch of the latex-based motion sensors using conductive rubber
Example of motion sensor prototypes controlling effects
The 3D printing process raised a few questions for me. I learned that the resin material is not recyclable. This made me feel quite uncomfortable, and I started to think about how I could use the waste from the resin support structure, which I found quite beautiful and interesting, and which eventually gave me some inspiration.
Resin waste from 3D printing
I got an idea for an audio-visual installation using primarily the resin waste. I would like to create a ‘plant cyborg’. The plant will consist of resin waste crystals and natural materials arranged into a structure. The crystals will glow from the bottom with the use of LEDs, and speakers will be hidden in the sculpture.
The plant will work as a clock. Different crystals will express different time frames – hours, minutes, seconds – and sounds will occur at specific times too.
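To make the clock idea concrete, here is a minimal sketch of how the time could be mapped onto the crystal groups. Everything here is a placeholder assumption – the grouping into three hypothetical crystal groups and the 0–255 colour wheel are mine, not a final design decision:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical mapping from clock time to LED colour for three crystal
// groups (hours, minutes, seconds). Each group gets a hue on a 0-255
// colour wheel, so every hour/minute/second lands on its own colour.
struct CrystalState {
    uint8_t hourHue;    // 12 hours spread across the wheel
    uint8_t minuteHue;  // 60 minutes spread across the wheel
    uint8_t secondHue;  // 60 seconds spread across the wheel
};

CrystalState crystalStateFor(int hour, int minute, int second) {
    CrystalState s;
    s.hourHue   = static_cast<uint8_t>((hour % 12) * 255 / 12);
    s.minuteHue = static_cast<uint8_t>(minute * 255 / 60);
    s.secondHue = static_cast<uint8_t>(second * 255 / 60);
    return s;
}
```

The point of a fixed mapping like this is exactly the Harbisson-style association I describe below: the same hour always produces the same colour, so the association can settle in over time.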
The idea is to experiment with changing the perception of time. Cyborg artist Neil Harbisson, who was born with achromatic vision, had an antenna implanted into his skull in 2003, which allows him to hear colours as sine wave notes. Over time, he described these new sensations becoming a perception. My assumption is that having such a clock around for long enough may eventually change the way we perceive time – in this particular case, a specific hour or time of day may become associated in our mind with a specific colour and/or sound. This could perhaps result in new ways of thinking and new inspirations.
Jing Xu and Tsunagi sent me the finished piece as an .stl file. I uploaded the final .stl file to PreForm to get it ready for 3D printing. Since I had never done 3D printing before, I encountered a few problems, like fitting the piece into the right printer in virtual space and creating the supporting structure. All of these were quickly resolved with the help of the technician in the 3D Workshop at LCC.
PreForm for printing – created support structure.
Tuesday 23rd April 2024: I booked the 3D workshop last week for today at 12 pm. The 3D printing process takes about 11 hours and 35 minutes. Then I will need to remove the support structure with snips, sand the surface and UV-cure the piece.
Freshly 3D printed headpiece before removing the support structure
Sanding the headpiece
Final headpiece
The headpiece came out of the 3D printer exceeding our expectations. The measurements were made to fit Ona’s head; however, it comfortably fits the head of everyone who has tried to wear it. The next stage will be attaching the sensor. The original idea was to paint it white, but we agreed to keep it transparent.
Since the headpiece will remain transparent, I decided to attach programmable WS2812B LED strips to the inside, which will create light patterns based on accelerometer data from the sensor and work in parallel with the sound control aspect.
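The mapping from accelerometer data to light I have in mind could look something like this. It is only a sketch: the function name, the 2 g ceiling and the gravity-subtraction approach are my placeholder assumptions, and the actual firmware would feed the result into a WS2812B library:

```cpp
#include <cstdint>
#include <cmath>
#include <algorithm>

// Sketch: take the accelerometer magnitude in g, subtract gravity
// (1 g at rest), and scale what remains into an LED brightness 0-255.
// The 2 g ceiling is an arbitrary placeholder, not a tuned value.
uint8_t accelToBrightness(float ax, float ay, float az) {
    float magnitude = std::sqrt(ax * ax + ay * ay + az * az);  // total accel in g
    float motion = std::fabs(magnitude - 1.0f);                // remove gravity component
    float scaled = std::min(motion / 2.0f, 1.0f);              // clamp at 2 g of motion
    return static_cast<uint8_t>(scaled * 255.0f);
}
```

At rest the strip stays dark; the harder the head moves, the brighter the pattern, which keeps the light honest to the motion data driving the sound.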
The task we got during the class was to position a Bluetooth speaker somewhere in the space of LCC, play some sound and observe people’s reactions. We had to make sure that the sound was not causing any distress.
Baria chose the sound of glass being broken, played on a loop. We first positioned the speaker on the glass table in the main gallery of the first floor. The sound certainly drew some attention, and I noticed one person being diverted from their previous trajectory. After that we put the speaker in the middle of the corridor, just next to an artwork made of glass. This also drew attention, and at first people couldn’t figure out where the sound was coming from. Some people told us later that they thought it might be part of the art piece itself. This means that we could temporarily, completely change the perception of the art piece and also draw more attention to it than it would get under normal ‘no sound involved’ circumstances. I noticed some people even started dancing to the sound of breaking glass.
The overall reaction ranged from amusement and curiosity to just mild attention. This made me think about the context of this particular sound being present inside an art school. The sound of breaking glass would normally suggest a distressing or dangerous situation happening nearby. My assumption is that the reaction and emotional response would probably be different if the sound were placed in, for example, a shopping centre or a train station. Considering we were playing the sound inside an art school and next to an art piece made of glass, reactions were mild. The third location we chose was a narrow glass window with no artwork around, but reactions were still mild. My theory is that people inside an art school are somehow cognitively desensitised to ‘weird’ or ‘uncommon’ sound events which under other circumstances would trigger a different and more acute response. It is predictable that anything ‘weird’ occurring will more likely be some sort of artwork than a dangerous event. Of course, the amplitude played a role as well. I believe that if we had made the speaker louder, the response could also have been different.
@xiaji2tsunagi2 and @abadteeth have spent the past few weeks designing the headpiece together. After they presented the first sketch (see previous post), @ona.tzar and I had a few notes regarding the wearability of the design.
Original sketch – front part of the headpiece
Original sketch – back part of the headpiece
Both of us agreed that the front part looks very interesting as an idea; however, in reality, wearing it could become uncomfortable or maybe even dangerous for the eyes. We proposed removing the endings which cover the eyes. Ona also pointed out that the V split at the back could be lower in order to create space for her hair ponytail.
Improved sketch of the headpiece
Improved sketch of the headpiece placed on the head model
The next stage will be creating a 3D scan and taking measurements of Ona’s head. The 3D scan and head measurements will be sent back to @abadteeth and @xiaji2tsunagi2 so they can make an appropriately sized fitting in the software and prepare the project for 3D printing.
At my end, I have started to build the ‘Core Station’, which will process data from all sensors and transmit them via WiFi. It will be attached to the back of the performer. Based on the research I did, I decided to upgrade the microcontroller from the previous ESP32-based prototype to a Teensy 4.1. ESP32s are still being used, but only for the WiFi connection. The Teensy 4.1 contains an ARM Cortex-M7 microprocessor clocked at 600 MHz, which, compared to the ESP32’s Tensilica Xtensa LX6 clocked at 240 MHz, is a significant improvement, allowing fast, real-time data transfer from multiple sensors at the same time.
Teensy 4.1
The Teensy will gather data from two GY-521 (MPU6050) accelerometer sensors attached to the feet, two elbow and two knee stretch sensors, and a BNO055 (9-DOF absolute orientation sensor) which will be situated in the headpiece. Data from the sensors are sent via a UART connection to an ESP32 WROOM-32U access point. I considered an SPI connection, but I struggled to find appropriate libraries in the Arduino IDE and learnt that it would require learning a different IDE. I tested UART, which I am familiar with, and it proved sufficient; however, I am still considering sorting out the SPI connection in the future.
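For the Teensy-to-ESP32 UART link, one way to keep the readings of multiple sensors from getting mixed up is to frame them into packets. Below is a sketch of that framing idea – a start byte, a count, the 16-bit values, and a simple XOR checksum so corrupted packets can be dropped on the receiving end. It illustrates the approach; it is not the exact firmware:

```cpp
#include <cstdint>
#include <vector>

// Frame a set of signed 16-bit sensor readings into one UART packet:
// [0xAA][count][hi,lo ... hi,lo][xor-checksum]
std::vector<uint8_t> framePacket(const std::vector<int16_t>& values) {
    std::vector<uint8_t> packet;
    packet.push_back(0xAA);                                 // start-of-frame marker
    packet.push_back(static_cast<uint8_t>(values.size()));  // number of readings
    uint8_t checksum = 0;
    for (int16_t v : values) {
        uint8_t hi = static_cast<uint8_t>((v >> 8) & 0xFF);
        uint8_t lo = static_cast<uint8_t>(v & 0xFF);
        packet.push_back(hi);
        packet.push_back(lo);
        checksum ^= hi;
        checksum ^= lo;
    }
    packet.push_back(checksum);  // receiver recomputes and compares
    return packet;
}
```

The receiving ESP32 would scan for the 0xAA marker, read the count, and verify the checksum before trusting a frame – cheap insurance on a high-rate serial link.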
On the receiving end there is another ESP32 WROOM-32U, which is connected to the computer and sends raw numerical data to Max MSP. The ESP32 WROOM-32U has a specific feature – the possibility to attach an external WiFi antenna. This significantly improved the data transmission and range.
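On the computer side, Max can split a space-separated line of numbers back into individual values, so on the receiver I format each frame as one text line before it goes out. A sketch of that formatting step – the route label "accel" here is a placeholder name, not necessarily what the final patch uses:

```cpp
#include <string>
#include <vector>

// Turn a route label and a list of readings into one newline-terminated,
// space-separated message, e.g. "accel 12 -3\n", ready for Max to unpack.
std::string formatForMax(const std::string& route, const std::vector<int>& values) {
    std::string line = route;
    for (int v : values) {
        line += ' ';
        line += std::to_string(v);
    }
    line += '\n';
    return line;
}
```

Keeping the data as plain text makes the link easy to debug with any serial monitor before the Max patch is even open.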
ESP32 WROOM-32U with the antenna.
Prototyping the Core Station on the breadboard – Teensy board, ESP32 Access Point device (Sender) and IMU sensors.
ESP32 Client device (Receiver)
Testing the speed of data transfer
Testing the range of the WiFi connection
For a while I have been seeking a fashion designer or maker – someone with an appreciation of aesthetics similar to mine – so that the result could become a common effort rather than an order.
After discovering conductive rubber, I realised that it could be efficiently combined with latex. I approached a friend of a friend, latex maker and designer Ella Powell @exlatex.
Ella Powell has been creating latex clothes and sheeting for the past two years. She studied a short course in latex making at Central Saint Martins over the summer of 2022. Currently she is studying for a master’s degree in computer science and AI.
After an initial meeting, we drafted some ideas about creating latex-based, organic-looking, futuristic sensors which will efficiently collect data from the bending of knees and elbows. Below you can see an AI-generated idea of the direction in which the piece might evolve.
Other artists joining the team are @Elixir31012. Elixir31012 is a multimedia artist duo formed in 2023 by digital artist Jing Xu @abadteeth and sound artist Jiyun Xia @xiaji2tsunagi2. Both graduated from the Royal College of Art with a degree in Fashion in 2023. Elixir31012 creates an otherworldly landscape of non-linear time through digital animation, experimental music making, wearable technology, and performance. Cyborg study, myth, ritual, and feminist narratives are intertwined in their work. Elixir31012 takes its name from the Chinese Taoist vocabulary for “elixir”: 3 represents “Synthesis”, 10 the “Sacred”, and 12 the “Eternal Return”. Their sound art performance at the event Chronic Illness XX intrigued me greatly, and we started talking. The idea to collaborate emerged very soon and organically, based on similar interests in creative technology, cyborg art and sound art. Elixir31012 proposed that they would make a headpiece to carry the motion sensor for the Sonokinesis performance.
Elixir31012 performing at IKLECTIK Art Lab
Elixir31012 performing at IKLECTIK Art Lab
Declan Agrippa @hiyahelix, a second-year Sound Arts student at University of the Arts London, London College of Communication, is going to create the sound design using the virtual wavetable synthesiser Serum.
Below you can see the work-in-progress sketches of the sensor headpiece in ZBrush.
Multi-disciplinary and kinaesthetic artist Ona Tzar @ona.tzar is joining the team as a performer. Her creative input is very important for developing the whole system, because we would like to make the garments as ‘user friendly’ as possible. We have been actively discussing materials, positions of sensors, and the shape of the garments and the headpiece, trying to find the right balance between ‘experimentalist aesthetics’ and keeping all pieces comfortable, functional and reliable for performance.