Performer-audience communication is arguably one of the most significant and most debated problems the contemporary computer musician faces in a live performance situation. This can be attributed to a disassociation between the performative gestures and the sounds they produce. This disconnection, absent in most traditional instruments, is intrinsic to the computer musician's instrument. Furthermore, it is inextricably linked to the idiomatic nature of the compositions that are to be performed on it.
While the choices made in designing the instrument cannot be seen as detached from the idiomatic nature of the performed compositions, the instrument in turn exerts an undeniable influence on the aesthetic character of those compositions. This conundrum is a familiar reality for the multi-tasking performer, who is burdened with balancing all of these factors in order to create as much artistic freedom as possible, both in the construction of the instrument and in the aesthetic language applied in the compositions.
In order to guarantee a meaningful electronics performance, I will propose a performance practice based on connecting performance gestures to visual animations projected onto the performance platform of the computer musician's instrument. These visual animations can, in turn, be linked to the sonic result of the performer's actions, thus creating a positive feedback loop capable of optimising the communication model that exists between the computer musician and the audience in a concert situation.
With this research project I aim to demonstrate that the computer musician is indeed an artistic performer, maintaining a degree of artistry comparable to that of acoustic instrumentalists.