Sunday, October 31, 2010

robotic chair is robotic.

12 days, 1 robot.

I am very close to getting this thing going, I think. Changed my ideas for locomotion... the new approach was actually my dad's idea. Turns out what I wanted to do was too complicated... at least for the time frame.

Bluetooth chip is up and working.
Servos are up and working, and via Bluetooth I can control them wirelessly (roughly as in the sketch below).
Started on the SuperCollider program / framework for controlling the robot.
Still thinking of programming a particular rhythm / dance beforehand... maybe I will do that... and also sort of let myself correct / add things live. Or maybe I will just do the sound processing live.
Ordered spare servos, another Arduino, some moldable plastic, battery cases, etc.
Feeling less panicked than I was feeling the last two days.
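
Roughly, the laptop side boils down to writing bytes at a serial port. A minimal sketch of the idea (the device name, the 9600 baud, and the two-byte command format are placeholders, not my actual code):

    // control sketch: drive the servos over the BlueSMiRF serial link
    // (device name and the two-byte protocol are assumptions, not final code)
    (
    ~robot = SerialPort("/dev/tty.BlueSMiRF-SPP", baudrate: 9600);

    // one command = servo index byte, then angle byte (0-180)
    ~setServo = { |index, angle|
        ~robot.put(index);
        ~robot.put(angle.clip(0, 180).asInteger);
    };
    )

    // e.g. sweep servo 0 back and forth
    Routine({
        loop { (0, 15 .. 180).mirror.do { |a| ~setServo.(0, a); 0.1.wait } }
    }).play;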

2ish months is actually quite a short timeline for building one's first robot for performative purposes.

I feel like I need to take some basic mechanical engineering courses now, though. Also, it's funny how I dove right into stuff like the Arduino, Bluetooth, control, etc. issues but totally neglected the mechanical part until recently. Maybe that's not funny; it's just that I'm a programmer, etc., so I did all the things I *knew* about and knew I should tackle, but I was like, oh, I'll just wing that silly mechanical shiz! (hahahaha!!!!)

ETA: just read over my blog, and it's a bit disturbing how many things I *say* I'm gonna/should do and don't... like 'in transit', 'n is for neville', 'incognito choir'... all projects that have been floating around and always put off for another day... I mean... hopefully after this robot thing, and this round of grad apps, I'll at least start recording the subway for 'in transit'... but I think maybe my expectations are too high. I can't do 2 projects at once like I think I can, really. And it *does* always make sense to work on the project that has a performance scheduled... so...

ETA2: I think, post-robot-show, I should really book an NYC cabaret show for Feb or early March. That way, post-grad-apps, while waiting for shiz, etc., I have a music project, etc. to buffer the inevitable crushing rejections...

ETA3: I was thinking about writer's block and how I hardly ever have it... and when I do, it's because I have this fixed idea of a project/piece and/or it has to be PERFECT, etc. I was thinking maybe others have writer's block because they aren't absorbing enough outside-the-box information and experiences... because laser specialization on subjects, etc. isn't so good for creativity (maybe). I think I'm thinking of a specific person that I used to talk to a lot years ago. HMM... 'tis late, I must be getting slightly judgmental.

Monday, October 25, 2010

idea! light bulb!

an installation piece.

A giant, anatomically correct human hand... people can control the hand like a puppet... either by robotics or actually... like being a puppet... (I can envision this.) The work is the hand itself, plus instructions on how to make the hand snap. Then, I have a percussion score that people must work together to realize... perhaps, an invitation? I could also make a graphic score that non-musicians could read. hmmmm....

it's official.

I'm in the Provocative Objects show on November 12. Or rather, the robot will be in that show. ACK! Must. get. moving!

Wednesday, October 20, 2010

robot news, ircam app, october deadline madness

First things first: I found a chair for the robot; it is nice and has its own character. The way it's put together is definitely changing my vision for the way the robot will move. I think I might have to saw some parts away, too.

Also, I realized I need a separate power source for my servos... I very likely have enough torque with the large servos I got from SparkFun (oh good!!), and I learned that NYC has shit for electronic components stores. I actually need to order break-away headers and such, since RadioShack doesn't have them, and my search for electronic components in NYC only came up with a very sketchy store-ish thing I'm not sure still exists.

Here's my IRCAM research proposal. Yessss.... kinda sloppy (esp. the references) but dude, it was last minute and I kinda had to turn it in as is. I won't print my personal statement, since it was a horrible amalgam of previous personal statements copy-and-pasted and hastily made to fit in, oh, about the 15 minutes I had before the midnight Paris-time deadline. Also, yes, it's long. AND somehow when I copied and pasted it, strange line breaks got inserted.


Project Description and Work Plan for Ircam Music Research Residency
N is for Neville who died of Ennui : An Interactive Performance System to Explore Musical Learning in Humans and Machines

Context and Description
The process of learning an instrument involves many small failures of the body. For example, one's fingers may not press hard enough on the guitar string to let the note fully sound, or the other hand may pluck the wrong string entirely. Such incidents disrupt the mode of embodied interaction, that is, the subjective experience of the body interacting with the external world. The beginning guitar player becomes aware of her body in these moments of failure, and she must develop a set of mental representations for how and where to move her hands and fingers. In contrast, once she is an accomplished player, she does not have to think about where to put each finger in particular.

This project is an interactive performance system, and a series of musical works for that system, which will explore this process of learning an instrument by changing the mappings between the performer's actions and the resulting sound during the course of performance. The system will change the mappings at the moment when it can successfully predict the human performer's musical gestures. The title of the project refers to boredom as a metaphor for this process, that is, the idea that the musical instrument gets bored with its performer, so it changes the pattern.

There has been much research in creating novel interactive systems for controlling music via gesture recognition, from The Hands (1984) by Michel Waisvisz, which used two wooden frames held in the hands, to the more recent EyesWeb, which attempts to extract emotional affect from musical and dance movement. Most of these systems use gestural recognition to control sound in a controlled, pre-specified way, and the machine learning is performed off-line before the system is used in performance. Often, fuzzy logic is used in order to accommodate performers and make the novel musical paradigm more intuitive. This system will be novel in that it adapts to the performer's movements during performance, in order to take the performer out of her comfort zone. The goal is not to create an intuitive instrument but to explore how musical gestures and movements progress from feeling unnatural to being intuitive. In this sense, it is somewhat similar to the i-Maestro gestural learning system, AMIR, although N is for Neville is not an educational tool, but an artistic exploration of embodiment during musical learning and performance.

Research Questions
1) How does a musician’s relationship to her body change when her actions do not have the intended effect? Does it seem foreign and externalized? Does this sensation go away as she learns how to play the instrument in its new form? When does this process of learning start to occur?
2) What changes in musical mappings can a performer adapt to quickly, and which ones take a long time to learn? Which changes lead to an interesting experience for the performer? For the audience?
3) To what extent can particular neural networks predict repetitive gestural information in an improvisatory context? Does this prediction correlate with a performer’s mode of embodied interaction with the system? Are there additional measures from the gestural inputs that correlate with the mode of embodied interaction?

Aims and Objectives
1) To explore how the experience of embodied interaction with a musical instrument changes during the process of learning.
2) To explore the connections between machine learning of musical gesture and movement and human embodied learning of the same system.
3) To gain new insight into how people learn to play music, and how their gestures and movements change as they start to acquire skill and automaticity.
4) To develop a system that can learn to predict the periodicity of musical gestures in the context of a changing interactive paradigm in real-time.

Detailed Work Plan and Methodology
I wish to spend six months in Paris working with the Real-Time Musical Interactions team at IRCAM to develop and research gestural analysis techniques to create a novel interactive system. I would prefer to conduct my research and development from March 2012 to August 2012.

First, I plan to set up a system for motion capture of the right arm using sensors such as accelerometers, along with image tracking. Then, I will develop a set of elementary motions of the hand and arm that will form a basic alphabet for the gesture recognition system. I will also address the problems of motion segmentation and the representation of motion, possibly by using or modifying existing gestural analysis systems, such as the one used by i-Maestro's AMIR.
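
To make the segmentation problem concrete: the simplest possible approach is an energy threshold on the accelerometer magnitude. A toy SuperCollider sketch (the threshold values are invented, and a real system would need something far more robust):

    // toy segmentation: split a stream of accelerometer magnitudes into
    // gestures wherever the signal rests below a threshold (values invented)
    (
    ~segment = { |mags, thresh = 0.1, minRest = 3|
        var segs = List.new, cur = List.new, rest = 0;
        mags.do { |m|
            if(m < thresh) {
                rest = rest + 1;
                if((rest == minRest) and: { cur.notEmpty }) {
                    segs.add(cur.asArray);
                    cur = List.new;
                };
            } {
                rest = 0;
                cur.add(m);
            };
        };
        if(cur.notEmpty) { segs.add(cur.asArray) };
        segs.asArray
    };
    )

    ~segment.([0.0, 0.5, 0.9, 0.4, 0.0, 0.0, 0.0, 0.7, 0.8, 0.0]);
    // -> [[0.5, 0.9, 0.4], [0.7, 0.8]]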

Second, I plan to develop the sounds and sound parameters that the performer will be able to control with her movements. I will then compose a simple musical work for this changing musical instrument, and experiment with learning to play this work using the motion capture system with different sound-to-movement mappings. Then, I will implement a system that introduces novelty into the mappings between gesture and sound, and experiment with performing my simple piece while the mappings change. In this stage, the mappings will change at timed intervals; no prediction of gestures occurs yet.

I plan to generate novel, unpredictable mappings via a network of nodes with weighted edges. The inputs to this network are the descriptor values of the motion, and the outputs are mapped to sound parameters. Each time a value passes through an edge, it is multiplied by the weight of the edge. When two or more values are sent to the same node from different edges, those values are summed before being sent to the next node. This map could have varying numbers of inner nodes and edges connecting the input nodes to the output nodes. Changing the weights of the edges will change the mapping between the motion and the sound in an unpredictable way. Some output nodes may initially be disconnected from the input nodes (reachable only through zero-weight edges), but adding weight to those edges will allow the performer to suddenly be in control of a new aspect of the sound.
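
To make the node-and-edge map concrete, a minimal SuperCollider sketch (sizes, names, and weights here are arbitrary; in the real system, the inputs would come from the gestural analysis):

    // weighted-edge mapping: motion descriptors in, sound parameters out.
    // values are scaled by edge weights and summed at each node, as above
    (
    ~makeNet = { |nIn = 3, nInner = 4, nOut = 2|
        (
            w1: Array.fill2D(nInner, nIn, { 1.0.rand2 }),  // input -> inner weights
            w2: Array.fill2D(nOut, nInner, { 1.0.rand2 })  // inner -> output weights
        )
    };

    ~mapThrough = { |net, inputs|
        var inner = net.w1.collect { |row| (row * inputs).sum };
        net.w2.collect { |row| (row * inner).sum }
    };

    // nudging the weights changes the mapping unpredictably but controllably
    ~perturb = { |net, amount = 0.1|
        net.w1 = net.w1.collect { |row| row.collect { |w| w + amount.rand2 } };
        net.w2 = net.w2.collect { |row| row.collect { |w| w + amount.rand2 } };
    };
    )

    ~net = ~makeNet.(3, 4, 2);
    ~mapThrough.(~net, [0.2, 0.8, 0.1]);  // -> two sound-parameter values
    ~perturb.(~net, 0.3);                 // the instrument "gets bored"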

This design allows flexible, unpredictable mapping in which the amount of change between different mappings can be carefully controlled.

Concurrently with the first two stages of the implementation, I will be researching and designing the predictive system that will learn and train in real-time. In the third stage, I will begin implementing and developing the first system to predict the performer's gestures. The input to this system will be the sequence of symbols from the gestural alphabet developed in the first stage, that is, the output of the gestural analysis system after motion segmentation and representation. This predictive system will first be tested with manufactured input that has known patterns, in order to judge its effectiveness in a controlled setting. Initial measures and values for prediction success will be developed at this time. After initial development, the predictive system will be integrated into the interactive musical system, where it will determine when to change the musical mappings. Then, through a series of experiments, the effectiveness and results of the first version of N is for Neville who died of Ennui will be tested and explored. Different performers will learn and test the system by trying to play the musical work that was previously composed and tested with the timed system of changes. Further, the system will be tested during improvisatory sessions, to see if it can learn a performer's gestures in that context as well. After this stage is complete, I will evaluate the predictive system and determine whether it needs changing and refinement, and whether there are new approaches that would improve its predictions. After any such refinements, further testing will follow.

I will also keep a record of my experiences with embodied interaction while performing with my interactive system, and will ask other experimental subjects (performers) to write down their experiences as well. During this last stage, I will compose a work of music for this novel interactive system, informed by my experiences developing, testing, and experimenting with the system.
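
As a concrete baseline for these controlled tests, even a simple transition-count table is informative before any neural network is involved. A toy SuperCollider sketch, with placeholder gesture symbols:

    // baseline predictor: count symbol-to-symbol transitions and guess the
    // most frequent successor (a stand-in for the real predictive system)
    (
    ~counts = ();
    ~observe = { |prev, next|
        var row = ~counts[prev];
        if(row.isNil) { row = (); ~counts[prev] = row };
        row[next] = (row[next] ? 0) + 1;
    };
    ~predict = { |prev|
        var row = ~counts[prev];
        row !? { row.keys.asArray.maxItem { |k| row[k] } }
    };
    )

    // train on a looping gesture sequence, then predict what follows \sweep
    [\pluck, \sweep, \tap, \pluck, \sweep, \tap, \pluck, \sweep]
        .doAdjacentPairs { |p, n| ~observe.(p, n) };
    ~predict.(\sweep);  // -> \tap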

Outcomes
The outcome of this project will be a novel interactive system that changes the mappings from movement to sound when it can predict the performer's gestures, and at least two musical compositions written to explore the potential of this system. I also anticipate that this project will lead to better knowledge about the predictive behavior and training phases of neural networks and other pattern recognition algorithms.

Further, I hope to gain from this process a better understanding of the relationships between learning, bodily movement, and the mode of embodied interaction.

Bibliography
Benbasat, Ari Y., and Joseph A. Paradiso. "An Inertial Measurement Framework for Gesture Recognition and Applications." Revised Papers from the International Gesture Workshop on Gesture and Sign Languages in Human-Computer Interaction, April 18–20, 2001, pp. 9–20.

Bevilacqua, Frédéric, Fabrice Guédy, Norbert Schnell, Emmanuel Fléty, and Nicolas Leroy. "Wireless Sensor Interface and Gesture-Follower for Music Pedagogy." Proceedings of the 7th International Conference on New Interfaces for Musical Expression, June 6–10, 2007.

Camurri, A., B. Mazzarino, M. Ricchetti, R. Timmers, and G. Volpe. "Multimodal Analysis of Expressive Gesture in Music and Dance Performances." In A. Camurri and G. Volpe (eds.), Gesture-Based Communication in Human-Computer Interaction, LNAI 2915. Springer Verlag, 2004.

Camurri, A., B. Mazzarino, and G. Volpe. "Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library." In A. Camurri and G. Volpe (eds.), Gesture-Based Communication in Human-Computer Interaction, LNAI 2915. Springer Verlag, 2004.

Cumming, Naomi. The Sonic Self: Musical Subjectivity and Signification. Bloomington: Indiana University Press, 2000.

Godøy, Rolf Inge, Egil Haga, and Alexander Refsum Jensenius. "Playing 'Air Instruments': Mimicry of Sound-Producing Gestures by Novices and Experts." Gesture in Human-Computer Interaction and Simulation, February 2006.

Merrill, D., and J. A. Paradiso. "Personalization, Expressivity, and Learnability of an Implicit Mapping Strategy for Physical Interfaces." Proceedings of the CHI 2005 Conference on Human Factors in Computing Systems, Extended Abstracts. ACM Press, Portland, OR, April 2–7, 2005, pp. 2152–2161.

Schroeder, Franziska. "The Old and the New and the New Old: A Conceptual Approach towards Performing the Changing Body." Hz #7, December 2005.

———. "Re-situating Performance within the Ambiguous, the Liminal, and the Threshold: Performance Practice Understood through Theories of Embodiment." Diss., University of Edinburgh, 2006.

Waisvisz, Michel. The Hands. 2005.

Okay, the end. My Other Minds project proposal was quite a bit less dry:

For the Other Minds Festival, I propose to create a piece for violin, cello, live electronics, and two robots made from found objects.


The frame of the first robot is made from a small, used wooden chair, and it makes percussive sounds by walking, its legs hitting the floor. This robot is intended to be precarious and seem child-like. The chair will travel around the space in loping circles, and sometimes I will have to physically restrain it or move it back into place if it tips over… however, this robot will be able to avoid most obstacles using the input of a sonar sensor. I also intend to mic the floor (using my contact microphones) in order to amplify and process the sound. The second robot will use a bicycle wheel to create percussion effects. When the wheel turns enough, its spokes will strike across metal bars. These sounds will also be amplified and processed by my laptop running SuperCollider.
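
The floor-mic processing can start very simply. A SuperCollider sketch along these lines (the input bus and effect parameters are placeholders, not the actual patch):

    // amplify a contact mic on the floor, add a short echo to each footstep
    // (input channel and parameter values are placeholders)
    (
    SynthDef(\floorMic, { |out = 0, amp = 4|
        var sig = SoundIn.ar(0) * amp;              // contact mic on input 0
        sig = sig + CombC.ar(sig, 0.3, 0.12, 1.5);  // rhythmic echo
        Out.ar(out, Pan2.ar(sig, 0));
    }).add;
    )

    x = Synth(\floorMic, [\amp, 6]);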


I will control the motions of the robots via the laptop and include their percussion parts in my score. During the performance, I will be handling the live electronics and keeping the robots out of trouble. The performance should last around 8-10 minutes. I would be able to send a recording of the robot parts prior to the performance for rehearsal purposes.

I intend to write a score including many syncopated rhythms inspired by tango music and dancing. I am a tango dancer myself, and I am particularly interested in the physicality of the piece: the juxtaposition of the awkward mechanical robots with skilled, embodied musicians. I also intend for this piece to evoke the connections between rhythm, movement, and feeling, and how we perceive rhythm and movement in both human and vaguely human-like mechanical motion and sound.


AND... I totally still need to finish my origami piece. I ended up sending all old pieces to festivals but Oct. is the month of mad deadlines, so.

Monday, October 4, 2010

random idea.

So, I watched this documentary on Paul Erdős a few nights ago, and it mentioned he got together this weekly group to talk about math... and that group was SO productive... and it was really informal, etc. I think it would be cool to have a similar composers' group... hmm...

minor progress report.

I got the motor/servo to work with the Arduino, so that's pretty cool. Turns out that servo was too small (I think), so I bought 2 large ones. Also, the Bluetooth chip I got wasn't on a breakout board, so... uh, that kinda sucked. I ended up getting the BlueSMiRF thing... plus some crimp pins, since apparently RadioShack doesn't have them (??!!). At this point, I really need to find/buy/get the chair I am going to use.

I'm also thinking of making the robot avoid obstacles, but maybe, hmm... badly. I don't need the robot to be very good at it.
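
Sketching what "badly" could look like: assuming the Arduino streams one sonar distance byte and accepts single-letter drive commands (a totally made-up protocol at this point), the laptop side could be as dumb as this:

    // crude avoidance loop: veer left when anything is closer than ~30 cm
    // (protocol -- one distance byte in, 'L'/'F' commands out -- is hypothetical)
    (
    ~robot = SerialPort("/dev/tty.BlueSMiRF-SPP", baudrate: 9600); // name is a guess
    Routine({
        loop {
            var cm = ~robot.next;   // non-blocking read; nil if no new byte
            cm !? {
                if(cm < 30)
                    { ~robot.put($L.ascii) }   // too close: veer left
                    { ~robot.put($F.ascii) };  // otherwise keep walking forward
            };
            0.1.wait;
        }
    }).play;
    )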

I also started on some apps. Apparently the IRCAM thing got extended to Oct. 8, so I might as well apply right? The Interactivos? thing is due Tues. but I'm just not sure I can come up with an appropriate project... maybe I will wait for the June Neighborhood Science thing that organization puts on.

I did something of an overhaul on "I will grow teeth" in order to make it easier to play. Apparently my clarinet parts were really fast, too... so I have changed some things to triplet 16ths instead of 32nds in some places... I'm not sure how much easier that is (six notes per beat instead of eight, so each note gets a third more time)... it is slower, but there are other factors, of course.

Also, I think I'm going to try to finish and record "Origami II" for this festival deadline Oct. 15. And maybe polish up "in which these implications appear to cause difficulties" a bit more. Both of these pieces have been hovering on the edge of being completely done for a while.

Thursday, September 30, 2010

more robot & progress

So, I mostly finished "in which these implications appear to present difficulties" ...got a lot of the live electronics started... decided I mostly liked all the piano notes after all. Feeling meh about the piece, so maybe that means it's not actually finished. Originally, it was for toy piano, but then I got all large with the range and stuff. Like... I'm not sure why I'm finishing it, bc I really have no burning desire to get it played... esp. now, since I made it not exactly playable on a toy piano. But maybe I'm just feeling meh right now. Or I suppose it could work for 2 toy pianos. Maybe.

I applied to some stuff this week... made all my intended deadlines except one, which isn't bad. Also, found a show for my robot in mid-November in Boston. I'm not definitely in, but I'm likely to get in (says the curator). So, I really gotta start crackin'. That is actually a deadline that is coming up fast. Oh, and what I proposed was my chair robot that walks in rhythm while I mic the floor, etc. Also, moving it with a Wiimote. Chairbot is fragile, blind, and can fall over. I intend NOT to name it Chairbot, tho. I need something snappy or cute, but I'm bored of 'bot' on the end of everything.

I feel like I get so off-track... I guess it's hard to finish pieces that I have no definite deadline / show to get ready for. I'm wondering if I should try & book a Dec. show like I said... since, what with applying to grad school + the Nov. robot show... I'm going to be burning the candle at both ends... but... I am only working part-time this year.

One thing that occurred to me: I sometimes feel bad since my performances have gone down a bit. But I have been composing fairly consistently (except for when I took a break last winter)... and when I had a lot of performances (in 2008 I had 22, which is a lot, I think)... I was often performing the same show over and over again. And I didn't have a dayjob. And I was getting performances bc I was in grad school for part of the year. Having a dayjob totally brings down my productivity... or music output, actually. Esp. the time that I spent doing it 40 hrs. a week. Anyways, I'll probably book a tour of my robots some time next spring. we shall see.....