Final programming refinements

17 06 2010

I’ve been working through some final refinements to the Pd patch that I will be using for the installation. My main concern was determining a way to balance the ambient noise against the heartbeat sound in my audio-interactive installation. Basically, I wanted to be sure that the heartbeat sound didn’t dominate the ambient noise (and vice versa) when causing the moving image to pulse. After a couple of intense days with Ed Kelly from the London College of Communication, I now have this final revision of the Pd patch!

Some recent and notable additions to this patch include an ambient sound calibrator and an audio-playing object that allows for tempo and pitch adjustment of the heartbeat. The ‘wiring’ of this patch is now set up a little differently. My previous versions of this program had a direct link between the pulse of the videos and the audio that came in from the microphone. I had attached ‘metronome’ and ‘drunk’ objects to a ‘translateXYZ’ object to create a simple zoom effect, with the videos zooming in depending on the pitch of the sounds the mic picked up. This was effective in a quiet space when all one heard was my looped heartbeat sound, but I knew the installation would be noisy at times, which could cause the video to shake erratically.
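
For anyone curious about the mechanics, the rough logic of that old setup is easy to sketch outside of Pd. Below is a minimal Processing approximation, written purely for illustration (the real thing is a Pd/GEM patch; the tick rate and step size here are assumed values, not the ones from my patch): a fixed ‘metro’ tick takes a bounded ‘drunk’ step and applies it as a zoom.

```
// Sketch of the old [metro] + [drunk] zoom idea, transposed to
// Processing for illustration. A plain scale() stands in for the
// Z-translation of GEM's [translateXYZ]. Values are assumptions.
float zoom = 1.0;
int stepMs = 100;   // 'metro' interval in ms (assumed)
int last = 0;

void setup() {
  size(400, 400);
  rectMode(CENTER);
}

void draw() {
  if (millis() - last > stepMs) {    // [metro]: fire on a clock
    zoom += random(-0.05, 0.05);     // [drunk]: bounded random step
    zoom = constrain(zoom, 0.8, 1.5);
    last = millis();
  }
  background(0);
  translate(width/2, height/2);
  scale(zoom);                       // zoom in/out on...
  rect(0, 0, 200, 150);              // ...a stand-in for the video frame
}
```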

With some key guidance and advice, the patch now wires a direct number stream from the heartbeat wav file so that it doesn’t have to pass through the mic (brilliant idea Ed, and no, I don’t think this is cheating). I’ve kept the mic audio input part of the patch but have lessened its influence on the video through a set of objects that auto-calibrate for ambient sound (thank you for all your help with this one too, Ed). Also, the fade between the videos is now linked directly to the heartbeat sound. In effect, the noisier the ambient sound, the less of video 2 (the text) you will see, at least until the auto-calibration kicks in. If it is silent, or if noise levels hold at a steady hum, the text will pulse and become more legible. If the ambient noise is erratic, all you will see is video 1 (the traffic-light-themed images) pulsing with the heartbeat and the loud sounds in the environment.
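
The auto-calibration idea itself is simple enough to show outside the patch. Here is a minimal Processing sketch of the same logic (not the actual Pd patch; it assumes the Processing Sound library, and the rates and gains are made-up illustration values): a slow-moving average absorbs any steady ambient hum, and only the level above that baseline drives the pulse.

```
// Minimal sketch of the ambient auto-calibration logic, assuming
// the Processing Sound library. The real installation does this
// with Pd objects; the numbers here are illustrative only.
import processing.sound.*;

AudioIn mic;
Amplitude meter;
float ambient = 0;         // slow estimate of the room's noise floor
float calibRate = 0.001;   // how quickly the baseline adapts
float sensitivity = 4.0;   // adjustable gain on the residual

void setup() {
  size(400, 400);
  mic = new AudioIn(this, 0);
  mic.start();
  meter = new Amplitude(this);
  meter.input(mic);
}

void draw() {
  float level = meter.analyze();
  // a steady hum is gradually absorbed into the baseline...
  ambient = lerp(ambient, level, calibRate);
  // ...so only sudden sound above it drives the visuals
  float pulse = constrain((level - ambient) * sensitivity, 0, 1);
  background(0);
  ellipse(width/2, height/2, 50 + pulse * 300, 50 + pulse * 300);
}
```

The sensitivity value here is the same kind of knob I mention below: turn it up and the visuals react to whispers; turn it down and only louder bursts get through.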

Another new addition is adjustable sensitivity, so that when I set up the installation I can digitally fiddle with things until I have the output as I would like it. My previous plan was to fiddle with the position of the mic and speakers, but this way allows for more fine-tuning. In the end, I can still adjust the hardware side if I so choose 😉

I’ve also worked out how to eliminate the choppiness in the playback of the videos. I changed the video compression to the H.263 codec, which suits Macs, and I reduced the file size by opting not to go HD. As a result, the video plays better and is more reactive to sound. My previous videos were using 324% of my CPU (when running the program) and I have now reduced that to 36%!

I have also worked out a simple way to show the video fullscreen. I will set up using two screens: 1. my work screen and 2. the projector screen. I’ll link the two screens in Pd, then simply drag the video window onto the projector screen. After that I can unplug my work screen and all that will be left is the playing video. Voila!

Hope this all makes sense. My head is still reeling from the stuff I picked up from the last two days with Ed!





Experiments with interactivity

2 04 2010

I’ve been trying to figure out the mechanics of how I’m going to make my short video clips interactive. Of course I intend to use the heart rate as a trigger, but I am running into a few road blocks with the technology (please see the bottom of my Project Summary: Curatorial Notes). As a back-up plan, I am thinking of incorporating motion sensors or touch pads (or even mics) in and around the suitcase so that interactivity can be established through motion and sound instead.

In my research into this I’ve been working with Processing. So far I’ve been able to get small video clips to move in relation to cursor control. In theory, if I were to feed an Arduino-generated number string into the code (similar to the experiments I did with the potentiometer), it could be the engine for image generation in relation to the movements of a suitcase. Below is an initial experiment using a clip of me cutting a mango.
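
In code terms the swap from cursor to sensor is small. Here is a hypothetical sketch of the setup (the clip name ‘mango.mov’, the port index and the baud rate are all assumptions for illustration): the Arduino’s 0–1023 number string simply takes over the role mouseX played before.

```
// Hypothetical sketch: an Arduino number string (0-1023, one value
// per line) replaces mouseX to pan a video clip. File name and
// serial settings are assumptions, not the project's actual values.
import processing.video.*;
import processing.serial.*;

Movie clip;
Serial port;
float sensor = 512;   // latest Arduino reading

void setup() {
  size(640, 480);
  clip = new Movie(this, "mango.mov");
  clip.loop();
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) sensor = float(trim(line));
}

void movieEvent(Movie m) {
  m.read();   // pull in each new frame as it becomes available
}

void draw() {
  background(0);
  // map the sensor exactly as mouseX was mapped before
  float x = map(sensor, 0, 1023, 0, width - clip.width);
  image(clip, x, 0);
}
```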

Below is another example using some video I captured of palm trees that are thriving in South London. I like what’s happening here, as one really gets the sense that one can move through the clip. It’s like scrolling through a large picture or document, only it is a running movie with a sense of inner motion that is independent of one’s movements.

In the writing/rewriting of this code, I ran into memory problems, which I was able to fix with a quick adjustment to the cache and a resizing of the video. That said, my project will involve a variety of different video clips and I will probably run into this problem again. I am aware that I will eventually have to shift away from Processing and probably use Max/MSP or Pure Data (weightier programming environments), but I am loath to give up everything that I have learned with Processing. Maybe there is a way that I could incorporate the two, or possibly simplify the concept so that I would not need as many large video files?





Combining techniques

31 03 2010

Lately I’ve been trying to figure out a way to combine some of my recent experiments in film with the moving image technique I first showed in the Mid Point review. I’ve been fiddling with Final Cut and have come up with the short clip below.

I really like the dimensionality of this work, and it has some visual cohesion with the montage aesthetic that I have been developing over the last few years. I like where this is going, but now I just have to figure out how long I want these clips to be and how I can visually tie them together so that they link when called upon randomly. My first step will be to build a timeline/architecture into which I can slot potential clips. I am thinking of using audio as the guide for the time intervals (i.e. heart rate) and creating my clips accordingly (see the rough timing sketch below). That also reminds me… I have to get cracking on learning Pure Data as I want to start creating some of my own audio. The electronic beats that I’ve been using in my experiments have been by Lali Puna and will certainly not be used in the final work. Using this music at this point in the project has been a bit like a suggestion to myself of an audio ‘mood’ that works with the images. I’m thinking that I should connect aspects of ‘Filipinoness’ to the audio as well…
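
The beat-to-clip-length arithmetic is straightforward; here is a quick back-of-envelope Processing snippet (the 70 bpm figure is just an assumed resting rate, not a measured one):

```
// Rough timing math for beat-synced clips, assuming ~70 bpm.
float bpm = 70;                  // assumed resting heart rate
float beatMs = 60000 / bpm;      // one beat ~ 857 ms
println("1 beat   ~ " + int(beatMs) + " ms");
println("8 beats  ~ " + nf(8 * beatMs / 1000.0, 1, 1) + " s");    // ~6.9 s clip
println("16 beats ~ " + nf(16 * beatMs / 1000.0, 1, 1) + " s");   // ~13.7 s clip
```

So clips cut to whole multiples of the beat (roughly 7 or 14 seconds at a resting rate) should slot into the timeline without fighting the audio.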





Arduino to Processing

12 03 2010

This week was spent figuring out how to bridge the analogue to the digital (from Arduino to Processing). In the video below I have hooked up the Arduino board to a potentiometer. The potentiometer could essentially be replaced by any analogue sensor (i.e. a motion sensor, IR sensor or heart rate monitor) to create a number string. The number string going into the computer is then read by a sketch (program) I wrote in Processing and rendered as a moving line graph. In theory I can replace this output with movie playback or random image generation. In this case, the line graph goes up and down as I turn the potentiometer dial clockwise and counter-clockwise.
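
For anyone wanting to try this, the Processing side boils down to something like the sketch below (a reconstruction for illustration, not my exact sketch; the port index and baud rate are assumptions). The Arduino side just prints each analogue reading as a line of text over serial.

```
// Illustrative reconstruction of the graphing sketch: read the
// Arduino's number string from serial and draw a scrolling line
// graph. Serial settings are assumptions.
import processing.serial.*;

Serial port;
float[] vals;
int n = 0;   // count of samples received

void setup() {
  size(600, 200);
  vals = new float[width];
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  float v = float(trim(line));     // Arduino sends 0-1023 per line
  if (!Float.isNaN(v)) {
    vals[n % vals.length] = v;     // ring buffer of recent values
    n++;
  }
}

void draw() {
  background(255);
  stroke(0);
  // walk the ring buffer from oldest to newest, left to right
  for (int i = 1; i < vals.length; i++) {
    float y0 = map(vals[(n + i - 1) % vals.length], 0, 1023, height, 0);
    float y1 = map(vals[(n + i) % vals.length], 0, 1023, height, 0);
    line(i - 1, y0, i, y1);
  }
}
```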





A shift to Arduino

8 01 2010

Admittedly, I haven’t been as diligent at blogging my creative process as I would have liked to be this last month, mainly due to two things: the funeral that I had to fly back ‘home’ to Canada for, and the hectic Christmas/New Year holidays that I returned to in the UK. Things have been less than perfect these last few weeks, but I’m slowly getting back into the groove of things.

lenticular tv experiment

I feel it’s best that I record a few important shifts in the digital aspects of my project that are an outcome of the experimentation and research I have been doing lately. I initially began this project wanting to create a series of works that incorporated animation (moving image) with my current visual aesthetic of digital collage and layering. Themes of dimensionality and ‘multi-layeredness’ became more prominent as I explored aspects of London’s Fil-Brit community, both in the community itself and in my relation to it as a Filipino-Canadian. This prompted me to exploit aspects of 3D lenticular imaging to create vibrating, 3D pieces that held the illusion of visual depth. I aimed to place lenticular screens on top of computer screens to achieve this effect of dimensionality. Typically a lenticular screen is placed over a print that has been specially designed so that two images can be seen at once depending on the angle of the viewer (I am reminded of the Cracker Jack popcorn boxes from my childhood that would include a lenticular decal of the most popular superheroes at the time). I was even able to source a project that involved utilizing this technology on televisions:

http://crave.cnet.co.uk/monitors/0,39029456,39189996,00.htm

But in the end I decided to drop this line of exploration, as I found the two-image limitation of the lenticular screen constricting. Furthermore, on a purely aesthetic level, I felt the visuals created by such a process had a very kitsch look and by its nature inadvertently referred to a time and context (cheap commercial promotional materials from the 1940s up to the 1980s) that were not relevant to my current project.

Duemilanove mini-computer

The time after that was spent trying out different creative possibilities, some of which included the use of Processing to alter images from webcams, but I found myself hitting a eureka moment in late October when I was first introduced to Arduino. I was in the middle of an Arduino workshop where we were trying to figure out the wiring and code to set up a potentiometer on our breadboards (basically, we were trying to make a little LED blink at different speeds according to the turning of a small dial) and I found myself mentally wandering due to the hypnotic nature of the pulsing red light I was so desperately trying to control. My mind went on tangents about how figuring out this little red light was a sort of microcosm of all the frustrations I’d been having with my project so far in trying to pin down the Filipino community and find a way to best represent its fluidity digitally. It became a question of finding patterns, commonalities and themes.

I found myself thinking of home. Even though I was well aware things would be very different in the UK, I was still working through cultural presumptions that I brought with me from Canada, thinking that living in an English-speaking country would be easy compared to my previous years and months in Japan, France and Thailand. The emotions and memories that these thoughts brought up reminded me of how home means something different for everyone. There is a reason why some Fil-Brits still call the Philippines ‘home’, while others think of the UK as ‘home’. Home is Canada for me right now, but the longer I stay here, the UK may one day feel like home too. After three years of living in Tokyo, I began to feel Japan was my home despite myself.

(Please see my Japan blog: http://www.pageshome.com/travel/old%20site/old%20splash.html).

Where is home for the Filipinos who have just immigrated to London, and how does this differ from those who were born here? How does a community of people who are in between ‘homes’ manifest itself visually on the urban landscape? How can I best represent this flux of attachment and detachment to memories, culture and identity?

Polar Heart Rate Monitor Interface

My aim now is to use this technology to incorporate the viewer’s heart rate as a trigger that sets off a moving collage of images that mean home for Filipinos in London. I was able to acquire the needed technology (an Arduino Duemilanove 328 and a Polar heart rate monitor interface) from a specialized electronics shop in Toronto called Creatron (www.creatroninc.com) and from www.sparkfun.com. The process of looking for the hardware has made me aware of a large number of communities, both online and in the physical sense, that are really pushing this Arduino stuff. Hopefully connecting with these communities will prove helpful to this project, as I am certain I am not the only one who has thought of this idea.

That said, much of my time is still spent playing with wires, resistors and code. On the code end, I am currently trying to get the data from the heart rate monitor to talk with an image manipulation program such as Flash, or with a sketch I make in Processing. On the wires-and-resistors end, I am desperately trying not to leave huge amounts of hazardous mess that looks tantalizingly edible to both my cat and my 11-month-old!
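
As a first target on the Processing side, I’m aiming for something like the sketch below. This is a hypothetical sketch only: it assumes the Arduino sends one character per detected heartbeat, and ‘collage.jpg’ is a stand-in for the real imagery.

```
// Hypothetical first step: swell an image on each heartbeat.
// Assumes the Arduino sends the character 'B' once per beat;
// the image file and the protocol are placeholders.
import processing.serial.*;

Serial port;
PImage img;
float pulse = 0;   // jumps to 1 on a beat, then decays

void setup() {
  size(600, 600);
  imageMode(CENTER);
  img = loadImage("collage.jpg");
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (port.available() > 0) {
    if (port.read() == 'B') pulse = 1;   // one 'B' per beat (assumed)
  }
  pulse *= 0.95;                         // relax between beats
  background(0);
  float s = 1 + 0.15 * pulse;            // swell by up to 15%
  image(img, width/2, height/2, img.width * s, img.height * s);
}
```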





Sifting through the flotsam

16 10 2009

“Good ink cannot be the quick kind, ready to pour out of a bottle. You can never be an artist if your work comes without effort. That is the problem with modern ink from a bottle. You do not have to think. You simply write what is swimming on the top of your brain. And the top is nothing but pond scum, dead leaves, and mosquito spawn. But when you push an inkstick along an inkstone, you take the first step to cleansing your mind and your heart. You push and you ask yourself, What are my intentions? What is in my heart that matches my mind?”

Amy Tan, The Bonesetter’s Daughter

I was looking through an old blog of mine from when I lived in Tokyo, a lifetime ago, and came across this quote, which I was so fond of at the time. Back then I was studying shodou, Japanese calligraphy, and much of my time was spent making ink and repeating the same kanji, or characters. In this repetition I found a meditative clarity and a sureness of stroke.

I find it fitting and relevant to my week, which has come to represent a mental step back from the project. I have spent a lot of time developing my intended method and contextualizing the final piece. I’ve spent a lot of time imagining what I want this project to culminate in, but it was only this week that I was reminded that I needed to revisit the ‘inkstone’ and stop worrying about the final piece. It’s time to get out there and get my hands dirty.

But first it is important to record a few of the more seminal events of the week that have helped me along the way. This week provided a multitude of inspiring, refreshing and critical moments, but as a consequence I haven’t had much time to get it all down. So here is a quick summary in the order they happened:

1) TrAIN seminar with Yuko Kikuchi discussing Edward Said’s 2003 preface to Orientalism. I had read parts of this text before, but I had never read this preface, which places Orientalism in the modern context of the global fiasco caused by the US Bush administration. Salient points that I took from this seminar were:

Orientalism is a formulated concept of the East which creates a binary ‘other’ that helps to define the West. In effect, Orientalism is a man-made construct that has been created by force.

Said posits humanism as an alternative where societal priorities shift to the realities of our interconnectedness rather than differences.

At the end of the seminar Yuko turned all our philosophizing back at us and asked how Said’s critique of Orientalism informs us, our practice and our study. I’m still chewing on this one 😉 no simple answers.

At some point in our discussions, models for multiculturalism were talked about and I learned of the ‘British salad bowl’ view that multiculturalism is a mix of leafy greens and sliced tomatoes and carrots (just kidding). The salad bowl actually reminded me a lot of Canada’s mosaic model. I did not go into this in the lecture, but I have critiqued these models of plurality, as well as the US melting pot, in a previous online publication with lepanoptique.com. It was refreshing to hear like-minded opinions on the datedness of these models.

2) MADA Seminar discussing Stephen Boyd Davis’ ‘Interacting with pictures: film, narrative and interaction’. I found the article to be very relevant and enjoyed revisiting this concept of immersion. Davis details the relationships between the aesthetic of film media and ‘new media’. I put ‘new media’ in quotations as we sorted out through discussion that the term ‘new’ is indeed problematic for a media that is already a few decades old. Specifically, Davis uses video games and online interactive documentaries as his case studies for ‘new media’, which obviously does not represent the growing number of examples that could be cited under such a vague term. That said, a lot of very fresh points were made and discussed:

Davis argues that there are two main approaches to image making: the first is self-effacing (transparent in its depiction; a direct point-of-view representation of a scene) and the second is pictorial (using interpretive methods to convey the reality of the scene, but creating images that could never have been experienced by the human eye; e.g. the scene of Vivien Leigh falling down the stairs in Gone with the Wind used six different camera angles in eleven seconds to portray an immersive reality that could not have been achieved with a simple point-of-view shot).

Davis also states that new media borrows techniques from cinematography to portray reality, but that these eventually fall short, as film’s guiding principle for spatial representation is narrative, whereas new media may have a number of motivations outside of narrative.

One hole in Davis’ argument is its complete disregard of audio as an immersive element. As I get caught up in figuring out the visual elements of my project, I am reminded not to forget the audio component. How would I represent the multi-layered realities of the Philippine diaspora if my intended audience could not see? I believe that our current ‘MTV culture’ has developed an acute and discerning eye for visuality as a result of several decades of movies, TV and advertising vying for visual attention and immersion. This has led many to forget about humanity’s other four senses, specifically the immersive qualities of the auditory. Back in the day when I used to teach English as a second language in Asia, I learned of a 1970s psychotherapist, Georgi Lozanov, who created a teaching method called Suggestopedia. Lozanov’s method involved creating an ambience that facilitated language learning. This immersive environment often included dimming the lights to dull one’s visual senses and playing classical music to lower the learner’s ‘affective filters’. In the same way that we set the ‘mood’ for a romantic dinner by putting on Barry Manilow to the dim of candles, immersion is caused by senses other than the visual.

Furthermore, there is an important shift in focus between film and gaming environments. In film, the focus is on characters and incidents, whereas in gaming the focus is always on the one playing the game. In what ways will my audience be engaged in my work? On whom will my intended focus be? In what ways have other artists dealt with this in their work?





A week of seminars, lectures and workshops (part 2)

3 10 2009

I’ve broken this week’s summary into two posts as I simply didn’t have time to get it all down in one session. In yesterday’s entry, I went over the current state of the proposal and the TrAIN Seminar with Oriana Baddeley.

In this entry I will summarize this week’s digital art seminar regarding ‘Two Myths about Immersion in New Storytelling Media’, the peer-reviewed article by Pierre Gander. Gander essentially deconstructs the validity of the myth that new media is by nature more immersive. Specifically, he analyzes the common mis/conceptions that increased audience immersion is directly related to 1) an increase in sensory information and 2) increased interactivity. The mainstay of his argument is that there is no empirical data to evidence these claims, yet a number of academics base much of their work on these assumptions. I admire the scope of his critique and agree that more scientific evidence is needed on the digital factors (if any) that contribute to immersion, but I feel his argument could have been stronger in several ways.

Gander pulls on and deconstructs various definitions of immersion to reinforce his claims. One such example is his deconstruction of Steuer’s 1992 definition of virtual reality, ‘which defines immersion in terms of technological dimensions such as the number of sensory dimensions simultaneously presented’. Gander goes on to say that immersion ‘in the story-telling context is in the feeling… (or) mental state’. Gander is touching on the fact that immersion happens on an emotional level, but he doesn’t explain what causes it or why more sensory input sometimes does cause more immersion. One can’t ignore the success of hyper-realism in first-person gaming. I am certain there is a slew of N.A. males between the ages of 14 and 29, game designers and marketing moguls who have empirical data to substantiate the claim that the more realistic the product, the more immersive it is (more sales).

That said, I do agree with Gander. More does not always mean better (where better is assumed to be more immersive). I appreciate his examples of MUDs and story reading to support his argument against the two myths of immersion in digital media, but I feel the article would have been stronger if he had deconstructed obvious instances where immersion is indeed increased by more stimulus and interactivity.

Gander does attempt to deconstruct this immersion myth with a media/degree-of-immersion table:

(Score is calculated according to the following rules: “No” = 0 points, “Yes/2-D” = 1 point, “3-D” = 2 points)

| Storytelling media | Visual (iconic) | Visual (symbolic) | Auditory | Participatory | Degree of immersion according to myth (score) |
| --- | --- | --- | --- | --- | --- |
| Written text (e.g. a novel) | No | Yes | No | No | Low (1) |
| Oral storytelling (e.g. a bedtime story) | No | No | 3-D | No | Medium (2) |
| Text adventure game (e.g. Deadline) | No | Yes | No | Yes | Medium (2) |
| Film (e.g. Casablanca) | 2-D | Yes | 2-D | No | Medium (3) |
| Play (e.g. Hamlet) | 3-D | No | 3-D | No | Medium (4) |
| IMAX Theater film | 3-D | Yes | 3-D | No | High (5) |
| Multimedia, VR (e.g. Myst) | 3-D | Yes | 3-D | Yes | High (6) |

(The first three value columns are the sense modalities: visual iconic, visual symbolic and auditory.)
Although his table quite clearly shows the flaws in assuming that immersion is related to participatory and sensory factors, the table is too simplistic. One can obviously argue that reading a story (Low, level 1 immersion on the chart) is not always less immersive than playing Myst, a multimedia VR videogame (High, level 6 immersion on the chart), but Gander fails to clearly point out why this is the case. I agree with him, but feel he could have addressed factors that aren’t always scientifically measurable as other causes of immersion.

I believe that the level of artistic quality and relevance to the audience are directly related to audience immersion. These two factors can actually be enhanced by use of the two ‘myths’ of sensory stimulus and interactivity (qualities that are easily achievable through digital means) but are certainly not dependent on them. This is the reason that a well-written book can be just as immersive as a well-designed multimedia video game. It is really about clear communication, and it is up to the author/artist to understand the audience and the context in which the narrative/piece will be delivered in order to create an effective, immersive piece.

Gander could have added another level of depth to his deconstruction by referencing Marshall McLuhan. In Understanding Media, McLuhan defines media as either ‘cold’ or ‘hot’, where cold media, like reading a book, force action from the audience to fill in gaps of information (i.e. one often creates imagined images of the protagonist and scene when reading a novel as part of a common immersive process). Hot media, like watching TV, require the audience to be in a receptive state and provide most, if not all, of the necessary information. No gaps need to be filled in. One knows exactly what the deck of the Starship Enterprise looks like on red alert. McLuhan’s definition of media takes into account a deeper context when referring to the immersive value of media. Through his definition, certain things can be both hot and cold depending on what you compare them with. Books are cold relative to TV, but for a highly literate audience books can often be hot in comparison.

For me, Gander’s article made me further question the immersive nature of my intended project (very timely, as I endeavor to start the ‘Outcomes’ section of my proposal in the next week or so). How important is immersion to the project? In what context and to what level do I want my work to be immersive, hot, cold, participatory and multi-sensory? I find myself thinking about the ‘bells and whistles’ of the project, the extra technological bits that could add to or detract from the concept. I intend to add animation and audio to my previous practice of static images. In what ways could this enhance or hinder my creative process? Will the additional use of 3D imaging distract from or aid the immersive nature of the work? Ultimately, I’ll have to tackle these questions one at a time as part of my artistic process. I worked through similar questions years ago when I added digital collage to my traditional painting practice. It has been a few years since I made a painting (with just paint) and I look forward to the day when I can say the same of static digital collage.