Emotive Motion:

Analysis of Expressive Timing and Body Movement in the Performance of an Expert Violinist

Sheena Chandran, John Stoecker, Matt Wright

Feb 17, 2006

Introduction

The vast majority of the music heard in our culture is considered some kind of art or expression, but a scientist can use musical elements to sonify data, revealing trends or patterns in a way analogous to using graphical elements to visualize the data. Sonification assigns sound to data using musical elements such as pitch, volume, and multiple dimensions of timbre to color each individual sound, along with infinitely many ways to combine sounds to control rhythm, density, harmony, and many other percepts, all potentially varying as continuous functions of time. Using sonification, the scientist controls the aural expression of the data, consciously mapping data parameters to sound parameters. Therefore, much like an artist, the scientist determines whether or not the sonification will elegantly reveal the structure of the original data. Sonification thus combines artistic expression with scientific rigor. Our project will explore this relationship by distilling aspects of motion data from a violin performance into a recognizable form through sonification.

In this study, we will use sonification to show how the timing of the motion of Barry Shiffman, a concert violinist, corresponds to the temporal evolution of aspects of the music while he plays an excerpt of J.S. Bach's Chaconne (the fifth and final movement of Bach's Partita No. 2 in D minor, BWV 1004). We hope that a combination of sonification and visual display of motion and force-plate data from the Lucile Packard Children's Hospital Motion and Gait Analysis Lab will show how rhythmic expression manifests in specific parts of Mr. Shiffman's body.

Based on our observation of Mr. Shiffman's highly expressive body movement during performance, and on our own experience in music, we hypothesize that the motion data collected from Mr. Shiffman's performances will reflect the emotional state under which the piece is played. Furthermore, we expect that playing the same piece with different emotional interpretations will result in different uses of expressive timing, but in a more natural, non-intellectualized way than directly asking the performer to vary the expressive timing.

We also hypothesize that the periodicity of the motion of various body parts will relate to the periodicities of different levels of the music (such as note, beat, measure, phrase, and section). Thirdly, we hypothesize that the greater a section's deviation from the metronomic standard (i.e., the greater the magnitude of the "expression"), the larger the non-musical body movement.

Finally, we will examine the variation of joint kinematics across the common violin bowing techniques of legato, marcato, spiccato, ricochet, and pizzicato. These data will serve both as a tool to understand the general motion of the violinist using these varied techniques and as a point of comparison for a previous, as yet unpublished, study by Berger et al.

Background

It is well known that an important element of musical expression is the manipulation of timing,1-3,8 in terms of continuous changes of tempo (i.e., speeding up and slowing down over time) as well as expressive microtiming of individual notes (i.e., musical events coming before or after the appropriate "clock tick" according to the current tempo).3 One thread of computer music research aims to analyze expressive timing directly from the audio signal.1,7,8 In this study, we will apply a combination of these methods as well as manual analysis of the recorded music to identify the ways in which Mr. Shiffman's performances utilize expressive timing.
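As a concrete illustration of per-note deviation measurement, the following Matlab sketch compares detected onset times against the metrical grid implied by a tempo estimate; the variables onsets and nominal are hypothetical stand-ins for the output of an onset detector and a tempo tracker.

    % Per-note timing deviation from the nominal metrical grid.
    % 'onsets' and 'nominal' are hypothetical inputs: detected onset times
    % and the expected onset times implied by the tracked tempo (seconds).
    onsets  = [0.00 0.52 0.98 1.51 2.03];   % example detected onsets
    nominal = [0.00 0.50 1.00 1.50 2.00];   % example grid positions
    deviation = onsets - nominal;           % positive = late, negative = early
    fprintf('mean |deviation| = %.1f ms\n', 1000*mean(abs(deviation)));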

Many models of expressive musical timing, particularly continuous modulation of tempo in performance of western classical music, are inspired by motion of physical objects (i.e., the parabola that results from throwing an object up into the air).6,10,12 These tempo curves typically follow the music’s phrase structure;2 we expect to see the music’s phrase structure also manifesting in the correlated motion of certain parts of Mr. Shiffman’s body.

Much previous work on the analysis of motion capture data from musical performance focuses on quantifying the kinematics of expert musicianship. Examples include the effect of increased tempo on the timing of pianists’ upward motion in preparation for playing each note,5 determining that movement amplitude contributes more than movement anticipation toward the tendency for pianists to play louder when they play faster,5 and studying the differences between novice and expert ‘cello performance in hopes of finding support for a model of musical skill as dynamic constraint satisfaction.11 We will do similar analysis of the kinematics of the violinist’s right hand by analyzing examples of various bowing styles.

Methods

We will test these hypotheses using technical computing software such as Matlab. Signal-processing techniques such as the FFT (Fast Fourier Transform) applied to the motion and force-plate data will reveal the frequency content of movements in the ranges of 0.1-1 Hertz (phrasing level) and 1-5 Hertz (note level). Note onset detection, tempo tracking, and per-note deviation measurement methods were discussed above.
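As a minimal sketch of this band analysis (assuming x is a column vector holding one motion channel sampled at the 240 Hz motion-capture rate described below), the following Matlab code computes the magnitude spectrum and the energy in the phrasing and note bands:

    fs = 240;                        % motion-capture sampling rate (Hz)
    x  = detrend(x);                 % remove drift so the low bands are meaningful
    N  = length(x);
    X  = abs(fft(x));                % magnitude spectrum
    f  = (0:N-1)' * fs / N;          % frequency axis (Hz)
    phraseBand = f >= 0.1 & f < 1;   % phrasing-level movement
    noteBand   = f >= 1   & f < 5;   % note-level movement
    fprintf('phrase-band energy: %g\n', sum(X(phraseBand).^2));
    fprintf('note-band energy:   %g\n', sum(X(noteBand).^2));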

Data will be compared to find trends, which will then be sonified to show their relation to the piece; previous research has already laid a framework4 for the sonification of movement.

Using Matlab, we have created graphic representations of the X and Y displacements of the force-plate data against time. We have matched the visual plots of these data sets to the bar markings of the pieces and identified several promising directions for further analysis.


Figure 1. X-position vs. time and Y-position vs. time plots of the normal performance

One such area is the consistent occurrence of changes of direction of motion close to the bar markings. We are pursuing three avenues to quantify this temporal patterning of the movement data:

1. To evaluate a hypothesis such as "X position follows a pattern every two bars," we can segment the position curve into 2-bar chunks, normalize the time durations to be equal, and then take the point-wise mean across all chunks. If the mean has a clear shape (e.g., rising and falling) rather than being flat or indistinct, then that tells us something about the pattern. The standard deviation of the difference between the individual segments and the mean shape tells us something about how consistent his motion is. (A sketch of this procedure appears after this list.)

2. We could synthesize some kind of continuous curve based on the time points, e.g., a parabola, one half period of a sine wave, or a simple ramp, with each segment spanning the duration of, e.g., two bars. Then we could see how well this synthesized curve correlates with the actual movement data.

3. Transform the variables of interest into the frequency domain and look for spectral peaks. (See below.)
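A minimal Matlab sketch of the chunk-averaging procedure in avenue 1, assuming x is the X-position signal (a column vector) and barTimes holds the sample indices of the bar lines (both hypothetical names for our force-plate data):

    L = 200;                                 % common resampled chunk length
    starts = barTimes(1:2:end-2);            % chunk boundaries: every two bars
    stops  = barTimes(3:2:end);
    chunks = zeros(numel(starts), L);
    for k = 1:numel(starts)
        seg = x(starts(k):stops(k));         % one 2-bar segment
        % normalize duration by resampling each segment to L points
        chunks(k,:) = interp1(1:numel(seg), seg, linspace(1, numel(seg), L));
    end
    meanShape = mean(chunks, 1);             % point-wise mean across chunks
    spread    = std(chunks, 0, 1);           % consistency of the motion
    plot(1:L, meanShape, 1:L, meanShape+spread, '--', 1:L, meanShape-spread, '--');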

We have completed graphical representations of the position paths of the center of pressure for the eight trials. The visualization of this data has given us a gross qualitative understanding of Mr. Shiffman's motion during each of the trials. These graphs have motivated further analysis of this data with respect to the transitions between positive and negative X and Y motion.


Figure 2. X-position vs. Y-position force-plate data plots of all eight performances

The plot of the X-force versus the Y-force data has not yet yielded any meaningful trends; we are currently seeking meaningful ways to interpret these data. On the other hand, the plot of the moment about the Z-axis with respect to time clearly shows the up and down strokes of the bow. By superimposing the onset times of each note on this plot, we see that the direction of the Z-axis moment tends to change near the beginning of each note. Quantifying this tendency is essentially the same mathematical problem as quantifying the bar-related timing of the sway of the center of pressure.
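The following Matlab sketch quantifies that tendency by measuring the distance from each note onset to the nearest sign change of the moment; Mz (the 2400 Hz moment signal) and onsets (onset times in seconds) are assumed variable names:

    fs = 2400;
    % a real analysis would smooth Mz first to suppress noise-induced flips
    flips = find(diff(sign(Mz)) ~= 0) / fs;  % times (s) of direction changes
    nearest = zeros(size(onsets));
    for k = 1:numel(onsets)
        nearest(k) = min(abs(flips - onsets(k)));  % distance to nearest flip
    end
    fprintf('median onset-to-flip distance: %.0f ms\n', 1000*median(nearest));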

We have also applied the Fast Fourier Transform9 to the X-position data from the force-plate analysis. We are still determining how to interpret these data.


Figure 3a/b. FFT of X-position data from the normal performance: (a) 0-1000 Hz range; (b) zoomed 0-10 Hz range (the range applicable to human motion)

We have identified some peaks in the data 15-20 dB above the noise floor. Many of these are at frequencies well above those of human motion, e.g., 180 and 240 Hertz. As these peaks occur at regular 30 Hz intervals, we suspect they may be due to electrical noise, but we have yet to complete this analysis for all eight of our samples.
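One way to check the electrical-noise hypothesis (a sketch, reusing the magnitude-spectrum variables X and f as computed in the sketch above, here from the 2400 Hz force-plate X-position) is to measure how far the bins nearest each multiple of 30 Hz sit above a crude noise-floor estimate:

    XdB = 20*log10(X + eps);                   % magnitudes in dB
    floorDB = median(XdB(f > 10 & f < 1000));  % crude noise-floor estimate
    for h = 30:30:990                          % candidate 30 Hz harmonics
        [dummy, i] = min(abs(f - h));          % bin nearest the harmonic
        if XdB(i) - floorDB > 15               % the 15-20 dB criterion above
            fprintf('%4d Hz: %.1f dB above floor\n', h, XdB(i) - floorDB);
        end
    end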

We graphed the first-order difference of the X-position data with an increment of one sample, then 10 samples. Because the sampling rate is high compared to the rate of motion, these differences manifested as noise clustered near zero. This prompted us to examine the data with larger increments such as 100, 240, 1000, and 2400 samples, yielding potentially interesting curves. Further analysis must be designed and executed to yield reportable results.
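A sketch of the lagged-difference computation, assuming x is the 2400 Hz X-position signal:

    lags = [1 10 100 240 1000 2400];              % increments in samples
    for k = 1:numel(lags)
        d = x(1+lags(k):end) - x(1:end-lags(k));  % lagged first-order difference
        subplot(numel(lags), 1, k);
        plot(d); ylabel(sprintf('lag %d', lags(k)));
    end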

Data

Data collection used an eight-camera motion capture system to record the motion of Mr. Shiffman's limbs, violin, and bow with respect to time. Data acquisition was conducted in the Lucile Packard Children's Hospital Motion and Gait Analysis Lab at Stanford University on January 30, 2006.


Figure 4. Barry Shiffman performing while marked for three-dimensional motion capture

The force-plate data consist of the X, Y, and Z forces and the position of Mr. Shiffman's center of pressure on the force plate, as well as the moment about the Z-axis, all sampled at 2400 Hz. The motion data captured by the cameras consist of X, Y, and Z position data sampled at 240 Hz. These data are interpreted in conjunction with the ground-reaction forces using modeling software such as the upper-extremity package UEtrack. These programs combine the data collected from the force plate and the three-dimensional motion capture system to find the joint kinematics of Mr. Shiffman's performance.
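Because the two streams are sampled at different rates, a first alignment step (a sketch; forceX and motionX are our hypothetical names for one channel of each stream) is to decimate the force-plate data by the integer rate ratio:

    fsForce = 2400; fsMotion = 240;
    ratio = fsForce / fsMotion;                % = 10
    % simple subsampling; a careful analysis would low-pass filter first
    forceDown = forceX(1:ratio:end);
    n = min(length(forceDown), length(motionX));
    t = (0:n-1)' / fsMotion;                   % common time axis (s)
    plot(t, forceDown(1:n), t, motionX(1:n));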

Results

As yet, we have little conclusive evidence for our three hypotheses. We have created graphs of the X and Y positions of the center of pressure with respect to time, and of the position path of the center of pressure on the force plate. The position-path data have yielded information about Mr. Shiffman's movement during his performance (see Figure 2). It is clear from these plots that in certain interpretations there is little movement, while in others the movement is broad. We will quantify the degree of movement during the different performances, and believe that this will yield reportable results. The transitions in these data (e.g., when Mr. Shiffman shifted his weight) highlight the emotion in the pieces in an interesting fashion (e.g., in the angry interpretation the transitions are jagged, while in the playful interpretation they are smooth and rounded).


Figure 5. Transition regions of the X-position vs. Y-position plots of the angry and playful performances

We are still working to create a meaningful way to represent these findings.

We have plots of the X and Y force data with respect to time that have yet to yield meaningful trends. The plot of the moment about the Z-axis with respect to time has the potential to yield meaningful results once the note markers are complete for all eight performances.

The FFT analysis of the X-position data with respect to time has yielded some meaningful results. There are a few prominent energy peaks in the lower range of human motion that we will interpret shortly.

Lastly, we have plotted the first-order difference in X position over time. With an increment of 10, the data were dominated by noise; with an increment of 2400, the results were more meaningful. We will continue this analysis.

Future Work

We now have a solid understanding of the force-plate data. We are working to make meaningful sense of the relationships within this collection of data and will use these comparisons both to test our hypotheses and to create music of our own. Data analysis will continue through February, and once we have conclusively identified patterns in our FFT data, sonification will begin.

As yet, we have not received the joint kinematics data. However, we have postulated several ways to interpret these data. As with the force-plate data, we will be able to chart joint movement, joint torques, and joint reaction forces as functions of time. This information may yield insight when matched with the note and bar markers.

Sonification will demonstrate the significance of correlation between an aspect of Mr. Shiffman's movement and his music. Motion markers or joint angles exhibiting movement corresponding to hypothesis one will be assigned specific tones, and the timbre of these tones will modulate as movement occurs. The periodicity of movement will be heard as oscillations between arbitrary extremes of timbre. For example, the gradual extension of the elbow from 45 to 90 degrees might control the modulation of a synthetic tone from bright to dull. When the sonification of the movement and the piece are played simultaneously, tones will modulate in time to different time scales of the piece. The video capture will establish the correlation further by showing the actual ranges of movement in time with the piece. In our presentation, we will describe which movements to observe, and watching these movements in the video in time with both the piece and sonification will show the relationship between musical timing and aspects of movement.
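As a sketch of this timbre mapping (the 45-90 degree elbow trajectory here is a synthetic stand-in, not measured data), a joint angle can scale the rolloff of the harmonics of an additive tone so that greater extension sounds brighter:

    fsAudio = 44100; f0 = 220;
    t = (0:1/fsAudio:3)';                      % three seconds of audio
    angle = linspace(45, 90, length(t))';      % stand-in for elbow extension
    bright = (angle - 45) / 45;                % normalize to 0..1
    y = zeros(size(t));
    for h = 1:8                                % eight harmonics
        % larger 'bright' => slower harmonic rolloff => brighter timbre
        amp = (bright + 0.1).^(h-1) ./ h;
        y = y + amp .* sin(2*pi*h*f0*t);
    end
    soundsc(y, fsAudio);                       % normalize and play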

Body movements consistent with hypothesis two will be expressed by volume modulation. For each marker or body angle, two tones will be played simultaneously, representing microtiming and movement velocity. The tone representing microtiming will be louder when a note is timed further from the current tempo, and the velocity tone's volume will relate directly to movement speed. If the movement corresponds to microtiming, the two tones will rise and fall in volume at roughly the same rate. In our presentation, we will play the video recording along with each pair of tones to display range of movement and expressive microtiming.
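A sketch of this volume mapping, with dev and vel as hypothetical control signals (normalized to 0..1 and resampled to audio rate) for timing deviation and movement velocity:

    fsAudio = 44100;
    t = (0:1/fsAudio:3)';
    dev = 0.5 + 0.5*sin(2*pi*0.5*t);           % stand-in: |timing deviation|
    vel = 0.5 + 0.5*sin(2*pi*0.7*t);           % stand-in: movement speed
    y = dev .* sin(2*pi*330*t) + vel .* sin(2*pi*440*t);  % two tones
    soundsc(y, fsAudio);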

By the end of the quarter, we will have musical clips that show our progress and at least one final musical piece that demonstrates each hypothesis. Our final project will include a web page and a multimedia presentation using video and sound that shows how our sonification illustrates our research and explains the movement of the body during violin performance. Once we better understand the movement of the body in musical performance, we can start to explain why seeing a violinist such as Barry Shiffman live is so much more powerful and aesthetically pleasing than listening to a recording.

References

1. Bilmes, J.: Timing is of the Essence: Perceptual and Computational Techniques for Representing, Learning, and Reproducing Timing in Percussive Rhythm. Media Lab, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1993.

2. Clarke, E. F.: Rhythm and Timing in Music. In Deutsch, D. (ed.), The Psychology of Music, pp. 473-500. Academic Press, San Diego, 1999.

3. Iyer, V.; Bilmes, J.; Wessel, D.; and Wright, M.: A Novel Representation for Rhythmic Structure. In Proceedings of the International Computer Music Conference, pp. 97-100. International Computer Music Association, Thessaloniki, Hellas, 1997.

4. Kapur, A.; Tzanetakis, G.; Virji-Babul, N.; Wang, G.; and Cook, P. R.: A Framework for Sonification of Vicon Motion Capture Data. In Proceedings of the 8th International Conference on Digital Audio Effects (DAFX-05), Madrid, Spain, 2005.

5. Palmer, C.: Time Course of Retrieval and Movement Preparation in Music Performance. Annals of the New York Academy of Sciences, 1060: 360-367, 2005.

6. Repp, B.: A constraint on the expressive timing of a melodic gesture: Evidence from performance and aesthetic judgement. Music Perception, 10: 221-243, 1992.

7. Scheirer, E. D.: Extracting Expressive Performance Information from Recorded Music. Program in Media Arts and Sciences, School of Architecture and Planning, Massachusetts Institute of Technology, Cambridge, MA, 1995.

8. Schloss, W. A.: On the Automatic Transcription of Percussive Music: From Acoustic Signal to High-Level Analysis. Program in Hearing and Speech Sciences, Stanford University, Palo Alto, CA, 1985.

9. Smith, J. O., III: Mathematics of the Discrete Fourier Transform. W3K Publishing, 2003.

10. Todd, N. P. M.: The kinematics of musical expression. Journal of the Acoustical Society of America, 97(3): 1940-1949, 1995.

11. Ueno, K.; Furukawa, K.; and Bain, M.: Motor Skill as Dynamic Constraint Satisfaction. Linköping Electronic Articles in Computer and Information Science. Linköping University Electronic Press, Linköping, Sweden, 2000.

12. Widmer, G., and Goebl, W.: Computational Models of Expressive Music Performance: The State of the Art. Journal of New Music Research, 33(3): 203-216, 2004.
