
Wednesday 27 February 2013

Future of Music == Gesture + AI + Singing

While doing some research, I came across a talk in which musician Imogen Heap explains how she uses different gestures to compose and perform a song. The actual talk can be seen below:

[Embedded video: Imogen Heap talks about her Ableton-controlling gloves]
What is the first thing that comes to your mind after watching this? Yes, for a music fanatic it is amazing to see such a performance. But for me it is even more interesting to see the different aspects of data fusion involved. Looking at her performance, a number of things come to my mind:

1. Hand gesture recognition using data gloves (a toy sketch of this follows the list).
2. Body posture recognition using the Kinect sensor.
3. Localization of the performer on stage using the Kinect sensor.
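To give a feel for the first item, here is a minimal sketch of how gesture recognition from data-glove flex sensors could work, using simple nearest-template matching. The sensor layout, gesture names, and template values are all my own assumptions for illustration; the gloves in the talk are far more sophisticated.

```python
import math

# Hypothetical normalized flex-sensor readings from a data glove,
# one bend value per finger in the order [thumb, index, middle, ring, pinky]
# (0.0 = fully straight, 1.0 = fully bent). Templates are illustrative.
GESTURE_TEMPLATES = {
    "fist":      [1.0, 1.0, 1.0, 1.0, 1.0],
    "open_palm": [0.0, 0.0, 0.0, 0.0, 0.0],
    "point":     [0.8, 0.0, 1.0, 1.0, 1.0],  # index extended, rest bent
}

def classify_gesture(reading: list[float]) -> str:
    """Pick the gesture whose template is closest (Euclidean distance)
    to the current sensor reading."""
    return min(GESTURE_TEMPLATES,
               key=lambda g: math.dist(reading, GESTURE_TEMPLATES[g]))

if __name__ == "__main__":
    # A noisy reading with the index finger extended classifies as "point".
    print(classify_gesture([0.7, 0.1, 0.9, 0.95, 1.0]))
```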

You might have noticed that different hand gestures are used to start the recording/editing or instrument-playing sequences. Hand gestures also trigger specific notes, while body posture controls the different after-effects/post-processing. Similarly, the singer's location on stage is mapped to different audio effects, as the sketch below illustrates.
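To make the data-fusion idea concrete, here is a minimal sketch of how one fused sensor frame could be turned into a single musical action. This is not how Imogen Heap's actual system works; every event name, mapping, and parameter here is a hypothetical assumption, continuing the gesture names from the sketch above.

```python
from dataclasses import dataclass

# Hypothetical fused sensor frame: a gesture label from the data gloves,
# a posture label from the Kinect skeleton, and a stage position from
# Kinect-based localization. All names and values are illustrative.
@dataclass
class SensorFrame:
    gesture: str       # e.g. "fist", "open_palm", "point"
    posture: str       # e.g. "upright", "lean_forward", "crouch"
    position_x: float  # normalized stage position, 0.0 (left) to 1.0 (right)

# Illustrative mappings; a real performance system would be far richer.
GESTURE_TO_NOTE = {"fist": "C4", "open_palm": "E4", "point": "G4"}
POSTURE_TO_EFFECT = {"upright": "dry", "lean_forward": "delay", "crouch": "reverb"}

def fuse(frame: SensorFrame) -> dict:
    """Combine the three sensing modalities into one musical action."""
    return {
        "note": GESTURE_TO_NOTE.get(frame.gesture),      # gloves -> which note
        "effect": POSTURE_TO_EFFECT.get(frame.posture),  # posture -> post-processing
        "pan": frame.position_x * 2.0 - 1.0,             # stage position -> stereo pan
    }

if __name__ == "__main__":
    frame = SensorFrame(gesture="fist", posture="crouch", position_x=0.25)
    print(fuse(frame))  # {'note': 'C4', 'effect': 'reverb', 'pan': -0.5}
```

The point of the sketch is the fusion step itself: three independent sensing channels are read in one frame and resolved into a single coherent musical command, which is exactly the kind of coordination the performance depends on.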

This really shows the potential of natural interaction technology, and what might be achieved as new ideas are built on top of these interfaces.

Reference:
http://www.kinecthacks.com/imogen-heap-talks-ableton-controlling-gloves/