
Blink Stories

by lia on January 13, 2012

Blink Stories is based on a theory by renowned film editor Walter Murch that I had wanted to test for a very long time. It is an exploration of why we blink, our stream of consciousness, and film editing.

This is from Murch’s book, “In The Blink of an Eye”:

“… the blink is either something that helps an internal separation of thought to take place or it is an involuntary reflex accompanying the mental separation that is taking place anyway… And that blink will occur where a cut could have happened had the conversation been filmed. Not a frame earlier or later.”

The theory is that when you blink, there is a switch in your consciousness. What I like about this is that it speaks of empathy; when you watch a film and you blink at the same time as a character, you empathize with them. When you react in the same way as a theater full of people, you are all connected in this way.

For my Data Representation final with Jer Thorp, I decided to explore this theory on my own at last.

These are the questions that I wanted to answer:

  • Do people blink at similar, precise times while watching a movie? Is this proof of a job well done on the filmmaker’s part, in that the audience empathizes wholly with the film?
  • Does it coincide with any of Murch’s markers — while an actor himself blinks, or during an editor’s cut?
  • Is there a particular genre where people empathize particularly strongly with the character? Comedy, drama, horror?

What I find particularly wonderful about Murch’s theory is that when we blink, we ourselves are making a “cut”. When we change consciousness, when we switch our train of thought, we are making a “cut”, just like a film editor does. So we can apply this theory to real life in a number of ways:

  • Do two people blink at similar times during a conversation?
  • What about when people listen to music? What about different types of music — jazz or classical?

Given the limited amount of time, I scaled down the idea and separated it into manageable components:

Data Collection
  • Technology: blink detection using OpenCV and Processing
  • Write/read to XML
  • Amazon Mechanical Turk to collect a large amount of data
  • A booth in the Tisch Building to collect more data
  • Film selection: clips from which films?

Data Representation
  • A movie poster with every frame in the video clip represented
  • A live representation of the blinks playing back through a video screen




OpenCV is well-loved at ITP because it comes with some neat built-in face-tracking functions. The trick was to find a way to track just the eyes. I finally found a site that carried custom Haar cascades, which come in XML form and basically do the trick of finding the eyes for you. Once you have downloaded the XML, a few modifications to the file itself and a look through the OpenCV Javadoc are all you need.

I was able to find eye Haar cascades here, with instructions on how to use them with Processing here.

Ideally, I would have wanted blink detection technology that was as accurate as possible, so I looked for a while into DanO’s pupil detection code and consulted with Dan Shiffman and Greg Borenstein. I also looked into the EyeWriter, which I will revisit in the future. After a few days of experimenting with pupil detection, I decided that I would need more time to make it work properly, and given the time constraints (a week), I scaled down for the meantime. So for now the code uses a combination of eye tracking and motion detection: if any motion is detected within the eyes, call that a blink.

This means that, essentially, the viewer cannot move at all during the experiment (or at least cannot move his/her head). Even eyeball movement might be counted as a blink, so I made the movie screen small to minimize this.
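The motion-as-blink heuristic described above could be sketched roughly like this, in plain Java rather than the actual Processing/OpenCV sketch. The class and variable names, the per-pixel difference of 30, and the changed-pixel threshold are all my own assumptions, not values from the original code:

```java
// Hypothetical sketch of the blink heuristic: compare the grayscale pixels
// inside the detected eye region across consecutive frames, and if enough
// of them changed, call it a blink. Names and thresholds are made up.
public class BlinkHeuristic {
    private final double threshold;   // fraction of changed pixels that counts as a blink
    private int[] previousEyePixels;  // eye-region pixels from the previous frame

    public BlinkHeuristic(double threshold) {
        this.threshold = threshold;
    }

    /** Returns true if the eye region changed enough since the last frame. */
    public boolean isBlink(int[] eyePixels) {
        boolean blink = false;
        if (previousEyePixels != null && previousEyePixels.length == eyePixels.length) {
            int changed = 0;
            for (int i = 0; i < eyePixels.length; i++) {
                // Count pixels whose brightness moved noticeably between frames.
                if (Math.abs(eyePixels[i] - previousEyePixels[i]) > 30) changed++;
            }
            blink = (double) changed / eyePixels.length > threshold;
        }
        previousEyePixels = eyePixels.clone();
        return blink;
    }
}
```

This also illustrates the weakness noted above: any motion inside the eye box, including an eye movement, trips the same detector.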


The XML tree was simple enough: every user has clips, and every clip has blinks with their corresponding times.
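The tree might look something like this (the element and attribute names here are my guesses at the structure, not the actual schema):

```xml
<users>
  <user id="1">
    <clip name="pulpfiction">
      <blink time="3.21"/>
      <blink time="7.85"/>
    </clip>
    <clip name="jurassicpark">
      <blink time="1.02"/>
    </clip>
  </user>
</users>
```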

The recorder application I wrote has the ability to create a new User, and within that user, new Clips, and within those Clips, blinks.

First, I create a new user every time the program starts up, in setup().

New clips are created whenever a new movie is loaded, on a key press.

New blinks are created with a function addNewBlink() that is called whenever OpenCV detects a blink.

This is called whenever an XML file is loaded. The number of users already in the XML is counted, so that each new user’s ID increments by 1.

The generator applications then read the final XML.
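Since the original Processing sketches are not reproduced here, this is a rough reconstruction of the recorder’s XML logic in plain Java, using the standard DOM API rather than the proXML library the sketch actually used. All method and element names are my assumptions:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Hypothetical reconstruction of the recorder's data logic: each new user
// gets an id one higher than the number of users already in the tree, and
// blinks are appended to the current clip as they are detected.
public class BlinkRecorder {
    private final Document doc;
    private final Element root;
    private Element currentUser, currentClip;

    public BlinkRecorder() {
        try {
            doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        root = doc.createElement("users");
        doc.appendChild(root);
    }

    /** Called in setup(): counts existing users so ids increment by 1. */
    public void newUser() {
        int count = root.getElementsByTagName("user").getLength();
        currentUser = doc.createElement("user");
        currentUser.setAttribute("id", String.valueOf(count + 1));
        root.appendChild(currentUser);
    }

    /** Called on a key press, when a new movie clip is loaded. */
    public void newClip(String name) {
        currentClip = doc.createElement("clip");
        currentClip.setAttribute("name", name);
        currentUser.appendChild(currentClip);
    }

    /** Called whenever OpenCV reports a blink. */
    public void addNewBlink(double seconds) {
        Element blink = doc.createElement("blink");
        blink.setAttribute("time", String.valueOf(seconds));
        currentClip.appendChild(blink);
    }

    public int blinkCount() {
        return doc.getElementsByTagName("blink").getLength();
    }
}
```

The generator applications would then walk the same tree in reverse: for each user, for each clip, read out the blink times.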


I settled on having people watch three or four two-minute scenes. The criteria:

  • Must be something that the viewer will enjoy watching, and be riveted to for the entire two minutes.
  • Must be familiar and non-threatening (especially if it was going on Mechanical Turk).
After a selection of around 20 movies, I narrowed the list down to these four:

Pulp Fiction – Ezekiel 25:17 speech: Samuel L. Jackson carries this scene, and I wanted to see if the audience synced up when watching him.

Jurassic Park – The first brachiosaurus reveal – a scene with a clear high emotional point.

Apocalypse Now – The Helicopter Attack (Ride of the Valkyries) – Walter Murch edited this iconic scene.

The Little Mermaid – Kiss The Girl – a catchy, familiar tune that is hard not to like.

Movies that didn’t make the cut were: Carrie, the Prom Scene (too much?); Taxi Driver, “You Talkin’ To Me?”; Gene Kelly Singin’ In the Rain.


Mechanical Turk did not work out, which was something I found out after I had almost completed a downloadable program, wasting a week.

Now that I think about it though, it may have been difficult to get workers to turn on their camera for a job that would pay them around $0.15 — I would have been paranoid about my privacy too, had I been them.


No matter, I will try again in the Spring.


I watched all 4 clips 4 times to see what I got. Given all the flaws of my software — motion detection instead of true blink detection — there was a pattern! There were certain areas, little clumps, where I would blink more often, and areas where I would not blink at all.


This is a picture of Walter Murch’s editing room. While editing, he lays out pictures of all the scenes on his wall so that he can, at a glance, take in the flow and content of the story.

I wanted to do the same thing to represent the blink data. When the viewer blinks, the corresponding frame is darkened. The more blinks that fall within a frame’s slice of the clip, the darker the frame is.
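The darkening rule might look something like this in plain Java; it is my own formulation of what the paragraph describes, not the actual Poster Generator code, and the class name, darkening step, and tint scale are assumptions:

```java
import java.util.List;

// Hypothetical sketch of the poster shading: each poster frame covers a
// slice of the clip, and its brightness drops with the number of blinks
// that landed inside that slice.
public class PosterShading {
    /** Returns a tint per frame (255 = untouched, 0 = black). */
    public static int[] frameTints(List<Double> blinkTimes, double clipSeconds,
                                   int frameCount, int darkenPerBlink) {
        int[] counts = new int[frameCount];
        for (double t : blinkTimes) {
            // Map the blink timestamp onto the frame that covers it.
            int frame = (int) (t / clipSeconds * frameCount);
            if (frame >= 0 && frame < frameCount) counts[frame]++;
        }
        int[] tints = new int[frameCount];
        for (int i = 0; i < frameCount; i++) {
            tints[i] = Math.max(0, 255 - counts[i] * darkenPerBlink);
        }
        return tints;
    }
}
```

In a Processing sketch, each frame image would then be drawn with something like tint(tints[i]) before being placed on the poster grid.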

From the movie posters, you can see the areas where I have a tendency to blink by the darker areas (frames where I blinked at the same spot every time are black!), and the areas where I keep my eyes open.

[Posters: Apocalypse Now, Pulp Fiction, Jurassic Park, The Little Mermaid]


The results of the poster make me optimistic that this is an idea worth exploring further.


I wanted to show real-time blink data, so that you could see the precise moments the blinks happened, and so that you could compare your own blinks to the recorded ones while watching. I could not think of a better way to represent this than a row of ellipses that appear whenever I blinked (the x-coordinate is dictated by the session). I thought this was a simple and direct way to represent it for the meantime.
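The playback idea can be sketched minimally like this; the names and the column layout are hypothetical, not taken from the actual Clip Generator:

```java
// Hypothetical sketch of the live playback: each recorded session gets its
// own column on screen, and a blink's ellipse becomes visible once the
// playback clock reaches that blink's timestamp.
public class BlinkPlayback {
    /** The x position of a session's column on a screen of the given width. */
    public static float sessionX(int session, int sessionCount, float screenWidth) {
        return (session + 0.5f) * screenWidth / sessionCount;
    }

    /** Whether a blink recorded at blinkTime should be drawn yet. */
    public static boolean visible(double blinkTime, double playbackTime) {
        return playbackTime >= blinkTime;
    }
}
```

In Processing’s draw() loop, you would call something like ellipse(sessionX(s, n, width), y, d, d) for every blink that visible() says has already happened.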

I was pleased with how the ellipses moved along the screen, visualizing the way my body reacted to the film as the film played.

The results themselves are revealing. For one, I noticed that I was blinking quite a lot in the data. I can think of two possible reasons. The first is that, although I kept myself from moving and even kept the movie screen small to avoid this, the application may have logged small things like eye movements across the screen.

The more obvious reason for the noise is that, because I wrote the program and knew I was logging blinks, I was constantly thinking about blinking the whole time I was watching the clips. And thinking about blinking is like thinking about yawning: the moment you think of it, you do it. You yawn; I blinked.

As I continue to work on this, controlling this kind of noise and getting more accurate data will be one of the more important and difficult challenges.


GitHub repository here. Clicking the name of each application will take you to its main class.

Blink Recorder – The application I wrote to detect blinks. Requires OpenCV with custom Haar cascades and the proXML library.

Graph Generator – Simple graph plotter. The blinks are so dense that it does not really help, but it was a place to start.

Frame Saver – Takes a clip and exports frames for the poster generator.

Poster Generator – Generates the poster material (movie titles added in Photoshop).

Clip Generator – Live playback of the clips.



Even though I did not finish this project to the extent that I had planned, I am very pleased with the results. First of all, I learned a lot of programming, no matter how slowly I moved.

Secondly, tentatively speaking, it looks like Walter Murch’s theory is worth exploring further. It makes me excited to think about what the data will look like when there are hundreds of people watching the same clip. The idea of an entire audience syncing up when a film or piece of art is involving seems very beautiful to me.

This is a project that I have been thinking of for a while and would like to explore further, so I think I am going to pursue this for my thesis. There is a lot to work on:

  1. Perfecting blink detection
  2. Testing different situations, not just movies
  3. Representing the data in other ways, perhaps sculpturally

There was a suggestion during my finals presentation about creating a film that was edited by a collective of blinks — crowdsourcing the edit, in other words. This was one of the first ideas I had when I got interested in this topic a few years ago, but the more I thought about it, the less it sat well with me. I think the project should be about glimpsing moments when we as human beings synchronize in our train of thought, and the exploration and representation of that. Furthermore, editing is much more than knowing when to switch from one scene to another, or when to cut: it is about fitting together the correct pieces in order to tell a story. Saying that a good edit is when things are timed right based on the changing train of thought of a crowd would be oversimplifying it. A crowdsourced edit (I think) would be neither a good representation of when people synchronized nor a good edit.

I have a little time to think about this further before the final semester starts. Updates will be forthcoming.

Happy New Year!
