Adobe’s Project Rush: Reading, Writing, and Visual Literacy

If you work professionally in television and contemplate last year’s announcement from Apple that FCP X has over 2 million users, you are left asking yourself post production’s version of the Fermi Paradox: “Where is everybody?”

But I think we need to take an expansive view of post production for a moment, something Philip Hodgetts has been writing about for quite a while:

Pre-printing press and general literacy – being literate (having the skills of literacy) made you a hot commodity. The work you did was appreciated by many although most didn’t understand what was involved. In fact, at that time, if you were literate, then your entire career was probably built around it: copying scripture (and other Holy works); reading it to people; interpreting it.

I think that’s where we are now: there are still those who use their “video skills” as their primary income: they put “Editor” on their tax return and employment questionnaires. For the record there are just slightly more employed as ‘Television, Video, and Motion Picture Camera Operators’ (26,300 in 2008).

His post is seven years old today. Go read the whole thing.


I used to teach nonlinear editing on Final Cut Pro 7. In my experience, most of my students understood editing intuitively; it was the ingesting, media management, and exporting that tripped them up.

After my first semester teaching I made one important modification: before the first class, I would pre-load all of the media onto the computers. My first few classes would focus on what we traditionally think of as editing: insert, overwrite, trimming, etc. After giving my students the opportunity to build confidence, we’d move into the difficult areas. Rightly considered, ingesting and exporting aren’t really editing anyway; they’re file management (and humans are generally very bad at file management).

Adobe recently announced Project Rush. This is a Very Big Deal for the future of the NLE! Project Rush represents a future without traditional ingest, media management, and exporting. As with the new version of Lightroom, users can work on their phone, tablet, or desktop and have their work sync across all of their devices.

Project Rush will be a boon to IGTV creators, but it will probably go unnoticed in traditional motion picture production in the near future. That would be foolhardy: the ETC experimented with cloud-based production workflows two years ago, and the benefits of eliminating file management are going to be too great to ignore. (In a weird way you could argue that the Assistant Editor is the current solution for abstracting the file system away from Editors.)

Sooner or later, what we consider editing, and who we consider an editor, are going to change significantly. Are we in the middle of the motion picture’s evolution from an art form created by a small number of specialists to a medium of mass communication practiced by everyone?

What do the Avengers talk about? An interesting use case for data science and creative writing

Interesting post on Medium about analyzing the word usage of the Avengers. A more in-depth how-to can be found here.

[Image: Avengers word usage chart]
So where are the Guardians?

What I like about this analysis is that you can draw interesting inferences from the data:

Thor’s keywords suggest that in the Avengers movies (not including the films Thor headlines, like Ragnarok and The Dark World), he’s more action-oriented than most other characters. With the exception of his relationship with Loki, he tends to focus on tangible artifacts that drive the plot forward. Like Loki’s scepter, the Tesseract, and the mind stone.

Check out how Vision and Scarlet Witch have some similar words- they’re talking about fear an awful lot. I’m hoping they stay synced in Infinity Wars. Interestingly, I did a sentiment analysis as well, and Vision had the most lines with a negative sentiment. It’s not because he’s a constant downer, but because he calls situations like he sees them and reflects sometimes on the futility of the human heroes he comes to love. He sees the extra, extra big picture, and I get the sense it disturbs him.

This analysis is very different from the less useful one I critiqued last November. As analytic tools become more prevalent, it is important that our understanding of their uses grows with them. I’m glad to see work like this being done.
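The keyword side of an analysis like this can be sketched in a few lines of Python: count each character’s non-stopword tokens and surface the most frequent ones. The dialogue lines and stopword list below are invented placeholders for illustration; the actual how-to presumably parses full scripts and uses a proper NLP toolkit.

```python
from collections import Counter

# Hypothetical sample lines; a real analysis would parse full scripts.
lines = {
    "Thor": ["Bring me the scepter", "The Tesseract is here", "Loki take the stone"],
    "Vision": ["I fear what comes next", "Their fear will consume them"],
}

# Tiny hand-made stopword list, stand-in for a real one.
stopwords = {"the", "me", "is", "i", "what", "their", "will",
             "them", "here", "comes", "take", "next"}

def keywords(character_lines, top_n=3):
    """Count non-stopword tokens across one character's lines."""
    counts = Counter(
        word
        for line in character_lines
        for word in line.lower().split()
        if word not in stopwords
    )
    return counts.most_common(top_n)

for name, spoken in lines.items():
    print(name, keywords(spoken))
```

Even on toy data the shape of the result matches the post’s observations: Thor’s list is dominated by plot artifacts (scepter, Tesseract, stone), while “fear” tops Vision’s.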

 

Computational Video Editing for Dialogue-Driven Scenes

I finally got around to reading the Computational Video Editing for Dialogue-Driven Scenes paper from Stanford. For those who don’t remember this technology from last July, I’ve linked to the video below, followed by my thoughts. Interesting stuff.

  • It’s interesting to see interfaces that expand the NLE beyond the “Source/Record” model. Interacting with your footage through idioms creates a “natural language” way of working with it, and as visual literacy grows it’s fascinating to imagine a world where people edit without a traditional timeline.
  • I think this technology is an example of how the edit suite could change in the future, putting additional downward pressure on the Assistant Editor. I’ve long argued that tools that eliminate technical, quantitative work are going to create huge pressure for producers to operate more efficiently. (That’s why only a very few have dedicated secretaries.) A tool that performs sequence prep and allows fast ideation is compelling indeed.
  • That said, while this is interesting work, I’d guess that 99.9% of all the video edited worldwide is unscripted and therefore outside its scope. That the idioms of unscripted editing are an order of magnitude harder than scripted ones is completely logical when you think about it.
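As I understand it, the paper’s core move is to score candidate takes with idiom-derived costs and then solve for the lowest-cost sequence of takes across the dialogue. That shape can be illustrated with a toy dynamic program. Everything below, including the idioms, the costs, and the take data, is an invented simplification for illustration, not the paper’s actual model.

```python
# Each dialogue line has candidate takes: (shot_type, framed_speaker).
takes = [
    [("wide", None), ("closeup", "A")],    # line 1, spoken by A
    [("closeup", "A"), ("closeup", "B")],  # line 2, spoken by B
    [("wide", None), ("medium", "B")],     # line 3, spoken by B
]
speakers = ["A", "B", "B"]

def take_cost(take, speaker, line_index):
    """Per-take penalties from made-up idioms."""
    shot, framed = take
    cost = 0.0
    if line_index == 0 and shot != "wide":
        cost += 2.0  # idiom: establish the scene with a wide shot
    if framed is not None and framed != speaker:
        cost += 1.0  # idiom: prefer showing the current speaker
    return cost

def transition_cost(prev, cur):
    # idiom: avoid jump cuts between visually similar shots
    return 1.5 if prev[0] == cur[0] else 0.0

def best_sequence(takes, speakers):
    """Dynamic program over per-line take choices."""
    # costs[j] = best total cost ending with take j of the current line
    costs = [take_cost(t, speakers[0], 0) for t in takes[0]]
    back = []
    for i in range(1, len(takes)):
        new_costs, pointers = [], []
        for t in takes[i]:
            options = [
                costs[j] + transition_cost(takes[i - 1][j], t)
                for j in range(len(takes[i - 1]))
            ]
            j = min(range(len(options)), key=options.__getitem__)
            new_costs.append(options[j] + take_cost(t, speakers[i], i))
            pointers.append(j)
        costs, back = new_costs, back + [pointers]
    # Trace back the cheapest path of take indices
    j = min(range(len(costs)), key=costs.__getitem__)
    path = [j]
    for pointers in reversed(back):
        j = pointers[j]
        path.append(j)
    return list(reversed(path)), min(costs)

path, cost = best_sequence(takes, speakers)
print(path, cost)  # e.g. wide establishing shot, then the speaker's closeup
```

On this toy data the optimizer opens wide, cuts to the speaker’s closeup, and avoids a closeup-to-closeup jump cut, which is exactly the kind of film grammar the paper encodes, just vastly simplified.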