
Week 1

LIVE IMAGE PROCESSING & PERFORMANCE

Assignment #1: shoot 5-10 minutes of cell-phone footage, a non-linear video, considering color, texture, and light, and upload a blog post describing the footage

This is a video of the moon's reflection in a puddle of water on a street corner. The white sparkles around the moon are from the rain pouring onto the puddle. I felt drawn to the moon's reflection, and to how, in camera, the raindrops added an effect around the moon. I recorded this fully zoomed in so the visual almost appears as if it were in the sky.

This is a video of my stovetop burner, shot from close enough that the back of my phone started to heat up. The fire takes a beautiful shape; it almost reminded me of visualizing sound, changing with how high or low the gas knob is turned. The blue color of the flame makes it that much more attractive, and its movement is mesmerizing and satisfying to the eye.

This video is from my camera pressed directly up against a light on my WiFi router. This clip is my favorite, because the red light creates a kind of iris that mimics a human eye. I moved my camera around in rigid gestures to see the light at different angles. It's also interesting because the human eye can't stare into a light, especially not this close up, for this amount of time without some sort of damage, but the camera is able to provide alternative access to seeing.

This video is a super close-up of a flame lit from a stick of incense. There is a point at which the flame loses its color, which is caused by how the camera picks up the light when out of focus. I was also drawn to the movement of the flame, and the form it took.

This is a short clip depicting a time lapse of my commute from Brooklyn to Manhattan. The structures that make up the bridge take an interesting shape. In retrospect, it would be easier to make sense of the structure by watching it pass from a moving train in real time.

Week 2

Assignment: 

- Begin building your own video playback system in anticipation of your solo video performance assignment due in a few weeks.

- Use the examples from the GitHub repo and the YouTube tutorials to get started.

- Start to think about some ideas of what you might want to explore for your performance.

- Write a blog post showing what you built this week. Do a screen recording to show us how it works. Explain what parts of Max you are still struggling to understand. Also mention some of your ideas for the performance.

For this week's assignment I decided to take a screen recording of one of my favorite music videos, directed by Hype Williams and Busta Rhymes. The song is called "What's It Gonna Be?!" from Busta Rhymes's album E.L.E. (Extinction Level Event): The Final World Front. I've always loved the imagery in this video, and also the imperfections of the CGI. It was less hyper-realistic than CGI is today, but it still achieved a beautiful depiction of a futuristic atmosphere. Beyond that, I find it to be a captivating way to tell a story through a Hip-Hop and R&B song. The videos were just so much better back then!

 

Anyway, working with Max this week, I noticed that the workflow became more intuitive over small increments of time. I spent some time working along with the tutorials, applying the jit.brcosa effect and using a slider plus 1 and 0 messages to both adjust the effects and turn them on and off for each individual setting, i.e. brightness, contrast, and saturation. In the video for this week, I had to manually edit the arguments of the jit.matrix 4 char object. Direct question: how do I add an object that lets the pixelation be adjusted with either a float or an integer? I also worked out the use of the cold outlet for storing values -- yay.
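To keep the math straight in my head, here's a rough Python/numpy sketch of what I understand jit.brcosa and a downsampled jit.matrix to be doing per pixel. The function names and the exact order of operations are my own guesses, not the actual Jitter internals.

```python
# Rough numpy approximation (my own sketch, not Jitter code) of the per-pixel
# math I think jit.brcosa performs, plus an integer-controlled pixelation that
# mimics shrinking a jit.matrix dim and scaling back up.
# Frames are float arrays in 0.-1. with shape (height, width, 3).
import numpy as np

def brcosa(frame, brightness=1.0, contrast=1.0, saturation=1.0):
    # Saturation: blend each pixel toward its luminance (grayscale) value.
    luma = (frame @ np.array([0.299, 0.587, 0.114]))[..., None]
    out = luma + saturation * (frame - luma)
    # Contrast: push values away from (or toward) mid gray.
    out = (out - 0.5) * contrast + 0.5
    # Brightness: a simple multiplier, like the slider feeding the attribute.
    return np.clip(out * brightness, 0.0, 1.0)

def pixelate(frame, factor=8):
    # The integer "factor" plays the role of a smaller matrix dim: downsample,
    # then blow the small frame back up to full size, which reads as pixelation.
    h, w = frame.shape[:2]
    small = frame[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[:h, :w]
```

If that's roughly right, then in the patch a number box feeding something like a dim $1 $1 message into jit.matrix might be the integer control I'm after, but I'd like to confirm that in class.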

Video of my patch


Screenshot of my patch

As someone who completed their BFA in photography, I have a deep love for images, both moving and still. This made me want to shoot my own videos using cameras that aren't necessarily DSLRs, maybe a 360 camera or even a GoPro. The distortion capabilities of these cameras (fisheye and super wide angle) already add an exciting visual layer to images and video, which explains my attraction to this idea. It also made me think about building a bank of photographs, both my own and found, to manipulate. In addition, I am interested in using my personal MK3 MIDI controller to trigger both visuals and audio for the purposes of multi-dimensional storytelling.

I'm also really into installations. I remember there being a discussion about being able to transform the space. If this applies to the first performance, I am interested in learning how to set up external projectors for display and mapping, so that different displays can do things independently.

Week 3

This week I worked mostly with jit.op for weaving different video files together. I took two different music videos: the one from last week, seen above, and Missy Elliott's music video for "The Rain (Supa Dupa Fly)". I knew that I wanted to juxtapose these two videos, because although they are by two different artists, a lot of their visual qualities align with one another. Almost like two different stories colliding while remaining visually related. I also tested out some of my own footage shot with a 360 camera.

In this first video I was able to use the jit.op object to fade between the two separate music videos. I used a dropdown menu to control the effects, which made me curious about whether a potentiometer could control this instead. In retrospect, it could be more interesting to use the umenu object for loading in multiple videos.
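Outside of Max, the fade itself is just a weighted sum of the two frames. Here's a quick numpy sketch of the mix I think the patch is computing -- a conceptual stand-in, not the actual jit.op object -- where the 0.-1. mix value is what the dropdown (or, someday, a potentiometer) would send in.

```python
# Quick numpy sketch of the crossfade idea behind my jit.op fade.
import numpy as np

def crossfade(frame_a, frame_b, mix=0.5):
    # mix = 0. shows only clip A, mix = 1. shows only clip B.
    return np.clip((1.0 - mix) * frame_a + mix * frame_b, 0.0, 1.0)
```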

In the second video on the right, I loaded two different clips from testing out the 360 footage. I was able to (almost seamlessly) jump from one clip to the other and back. I noticed a lot of buffering when using these clips, which explains the lagging in the video. I'm wondering whether this could be fixed by switching the video settings to 480p.


Photo of my interface/max patch

This last video shows me combining all four separate clips using the slider. I also used jit.chromakey to key out different elements of each video.
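For my own notes, here's a loose numpy sketch of what I understand keying to mean: wherever a foreground pixel sits close to the key color, show the other video instead. The "tol" parameter here is my own stand-in name, not necessarily how jit.chromakey's attributes work.

```python
# Loose numpy sketch of the keying idea behind jit.chromakey (my approximation,
# not the actual object): replace pixels near the key color with the background.
import numpy as np

def chromakey(fg, bg, key_color, tol=0.2):
    # Per-pixel distance from the key color (all values assumed 0.-1.).
    dist = np.linalg.norm(fg - np.asarray(key_color), axis=-1)
    mask = (dist < tol)[..., None]   # True where the key color matches
    return np.where(mask, bg, fg)    # swap in the background there
```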

Final thoughts on this week's study into Max: I'm unsure about how many different effects to feature in my interface. There seem to be a lot of cool effects, but I want both to explore my options and to be intentional with how I use them. That is, will whatever video effect I place into this interface serve the audio I plan to pair with it?

Week 4
