


THE ACCESS GRID IN COLLABORATIVE ARTS AND HUMANITIES RESEARCH

An AHRC e-Science Workshop Series

REPORT ON WORKSHOP 2: SOUND AND MOVING IMAGE

WEDNESDAY 17 JANUARY 2007, 08.00–10.00 GMT

Workshop Leader: Professor Andrew Prescott, Humanities Research Institute, University of Sheffield

Feedback from participants at Bristol

From Ale Fernandez on the live music component:

1) No mixer or microphones had been set up beforehand, so this has to be taken into account in the results. A future test should perhaps include input from university staff with broadcasting experience, but also from artists who use this kind of technology day to day for telematic work, from people in commercial environments, and from technologists working with sound and streaming protocols.

2) It was interesting that Dorothy, when conducting, instinctively asked people to participate in order of perceived sound quality, showing that there is always a hierarchy, in this case a technical one. But it was not the order we would have chosen in Bristol, as we could hear a different level of quality from each site!

3) The latency experiment came from a classical conducting background, and it is interesting how quickly this solved many of the problems we found during the Locating Grid Technologies workshops. All we did then was try to clap or play to a beat, and it was much harder to work around the latency. It would be very interesting to explore conducting over the AG further, perhaps even with electronic cues, as we did in a very rudimentary way in the LGT workshops with PowerPoint.

4) As noted by Neal Farwell, when the musicians were playing the single notes, we heard one note almost a semitone lower than the rest (I think it was Australia; it was the second sound played).

5) It was interesting to hear that Dorothy was looking for a visual metaphor or representation for "the grid". As a programmer, I would relate this to a need to research more appropriate interfaces for performance. Parip Explorer could give clues to this.

From Neal Farwell, Department of Music, University of Bristol (neal.farwell@bristol.ac.uk), on the live music component:

Sound quality via the Access Grid: a quick response

In conversation after this morning's session, Pam noted the trend towards many locally conceived AG and e-Science ventures in the Humanities, and the tendency inadvertently to reinvent the wheel. Sound quality is a case in point, and I think there are some ready improvements we can make by combining AG knowledge with the sound-engineering know-how that most or all of our institutions have. We're planning some local experiments in Bristol in the next few weeks that should help this along. I'll report back.

A quick outline for those who might be interested but don’t work habitually with music:

There is a huge body of knowledge in relation to recording engineering and broadcast. Any system for sound recording/reproduction or relay has multiple elements in the chain. A general principle is that each element can potentially contribute noise or distortion, and these artefacts are very hard to get rid of again further along the chain. Equipment designers and users in professional audio therefore take great care to match the relative configuration of the elements so that each is well suited to the task and is operating at its best. A corollary is that it is worth finding out which is the weakest link and strengthening that first, then repeat iteratively (HiFi enthusiasts know this syndrome!).

A simplified model of the AG audio chain - one to one, and unidirectional:

(1) musician behaviour

(2) microphone type and placement, room acoustics, ambient noise

(3) analogue conditioning, noise gating, CODEC

(4) network transport and clients

(5) CODEC

(6) loudspeaker type and placement, room acoustics, ambient noise

(7) listener / musician behaviour
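The weakest-link principle behind this chain can be sketched numerically: noise powers contributed by independent stages add, so the chain's overall signal-to-noise ratio always sits below that of its poorest element, and improving anything other than the weakest link buys little. A minimal sketch, with purely illustrative per-stage figures rather than AG measurements:

```python
import math

# Hypothetical per-stage signal-to-noise ratios (dB) for the simplified
# AG audio chain above -- illustrative numbers, not measurements.
stages = {
    "microphone / room acoustics": 55.0,
    "analogue conditioning + send CODEC": 60.0,
    "network transport": 70.0,
    "receive CODEC": 60.0,
    "loudspeaker / room acoustics": 50.0,
}

def overall_snr_db(stage_snrs_db):
    """Noise powers from independent stages add, so the chain's SNR
    is dragged below that of its noisiest element."""
    total_noise_power = sum(10 ** (-snr / 10) for snr in stage_snrs_db)
    return -10 * math.log10(total_noise_power)

weakest = min(stages, key=stages.get)
print(f"overall SNR ~ {overall_snr_db(stages.values()):.1f} dB")
print(f"weakest link: {weakest}")
```

With these made-up figures the whole chain comes out a couple of decibels below the loudspeaker stage alone, which is why the iterative strengthen-the-weakest-link strategy mentioned above pays off.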

Item (4) is for network specialists (not me!) but deals with the many-many potential of AG meetings, and with the tradeoff of latency (delay) versus dropouts, bandwidth per data stream, and so on. This has a bearing on choice of CODECs and conditioning, and leads to the new aesthetic positions that Dorothy outlined.
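The latency-versus-dropout tradeoff in (4) can be illustrated with a toy jitter-buffer model: a deeper receive buffer absorbs more network jitter, so fewer packets miss their playout deadline, but every extra millisecond of buffer is heard as added delay by the musicians. A sketch under invented delay statistics, not real AG traffic:

```python
import random

random.seed(1)

# Hypothetical one-way network delays (ms): a 30 ms base plus an
# exponential jitter tail -- illustrative, not measured AG traffic.
delays = [30 + random.expovariate(1 / 15) for _ in range(10_000)]

def dropout_rate(buffer_ms):
    """A packet arriving after its playout deadline (the jitter-buffer
    depth) is discarded and heard as a dropout."""
    late = sum(1 for d in delays if d > buffer_ms)
    return late / len(delays)

for buffer_ms in (40, 60, 90):
    print(f"buffer {buffer_ms:3d} ms -> dropouts {dropout_rate(buffer_ms):.1%}")
```

Deepening the buffer makes the sound smoother but the ensemble feel worse, which is exactly the tradeoff that shapes CODEC and conditioning choices, and that underlies the new aesthetic positions Dorothy outlined.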

What I’m hoping to do is a quick review of info on (4) and of the recent musical work done under PARIP, then do some pragmatic optimisation experiments on (2) and (6) especially, and on their interaction with (3) and (5). We’ll probably take a mobile AG node into our recording studios, where we can readily try out variants on microphones etc.

I’ve heard the topic of echo cancellation raised several times. My hunch is that this is a red herring in relation to music, rather like the old studio fallacy that you can take a poor recording and "fix it in the mix". It is usually much more productive to get the source material right. A topic that might merit further research, regarding (1) and (7): are there differences to observe, or things to learn, from comparing two different kinds of musician? On the one hand, pop/rock/classical "session" players who are comfortable playing to microphones, wearing headphones (yes, why not?), responding to talk-back, and so on; on the other, musicians whose work does not involve the studio.

I hope this is useful; comments and suggestions are welcome, as are pointers to existing smooth-rolling wheels.

From Ale again on the video component:

Speaking of partnerships: an ex-colleague of ours at ILRT, Libby Miller, is now working with a company that is among many trying to become the next platform for "television", i.e. they have very good software for viewing, annotating, and distributing clips with real-time chat and so on, all for use over the internet with desktop computers. I wonder whether partnering with this kind of business at this stage would be important, so as to widen the horizons from the TV metaphor into something similar to what we are starting to explore with the AG? The company was renamed and relaunched yesterday and can be found at

Finally from Pam King:

As I said during the summing-up period, I think we are all learning how to work with the virtual space, but its ‘shape’ is confusing. In particular, we cannot make eye contact, and some people clearly speak to the screens they are watching rather than to camera. The conventional layout of rooms is generally unhelpful. Regarding the streamed video material we watched: when everything is on-screen, it becomes doubly important to distinguish between the previously edited and the directly experienced in real time. The performativity specialists who have been working with the AG have already explored these issues. There are, theoretically, a number of different audience experiences of performance in the AG medium that I can think of, all different, including:

• sharing pre-edited recordings

• watching in real time from fixed web-cams

• watching in real time following the gaze of a participant or physically present audience member with a camera at a ‘live’ performance

These are all legitimate and useful, but they need to be distinguished. I felt that in this session we were focusing on the quality of transmission over the AG ‘for practical purposes’, and so tended to elide our experience of live performance with our viewing of pre-processed material.
