BCU: AI, Your Assistant Editor in Video Post-Production

Last edited 8 July, 2024
Industry Insights, Guest Blog
3 min read.

This blog was written by Sandeep Chahil, a lecturer in post-production at Birmingham City University and an Avid Certified Trainer.

As a digital marketing agency, we support graduates in developing their skills to break into the industry. The features discussed in this article by our partner, BCU, include DaVinci Resolve's Neural Engine-based facial recognition, Avid Media Composer's PhraseFind and Google Photos' AI-powered editing features.

Machines were identifying objects in images long before we were shown a terminator from the future scanning phone books for the words "Sarah Connor". Factories employ scanners to spot faulty products on the production line, and post offices use text recognition to sort the mail correctly.

With the introduction of artificial intelligence (AI), and neural networks specifically, machines are not only able to pick out objects in images, but also to improve their recognition process over time. Think of it as training a person: the more images you show them, the better their understanding of what they're looking at.

Image recognition tools have made their way into a whole host of user applications we interact with on a daily basis. Social media uses it to identify our friends, Google uses it to translate other languages, and our phones use it to improve our photographic skills (though Annie Leibovitz is safe in her job for now).

In the world of video editing, there has been a similar revolution through the use of AI. As a video editor, your primary product is imagery. You identify, classify and modify images to create something new. It can be repetitive work, but with image recognition, you just got a new assistant editor, and its name is AI.

A multitude of AI-based features have come to the fore in recent years. One of the standouts is facial recognition – perhaps because an editor's work so often involves cutting footage of people. With facial recognition, you can now search for video clips based on who's in them. The software can identify faces in a variety of positions and locations, under differing lighting conditions, and in some cases even when the face is partially obscured. If you're editing a wedding video and need shots of the groom, the software will show you every clip he's in.
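Under the hood, tools like this typically reduce each detected face to a numeric "embedding" and compare embeddings by similarity. As a loose illustration only – not how any particular editing package implements it – a face-based clip search might look like this, with toy vectors standing in for a real recognition model's output:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_clips_with_face(query_embedding, clip_embeddings, threshold=0.8):
    """Return names of clips whose detected-face embedding is
    close enough to the query face."""
    return [name for name, emb in clip_embeddings.items()
            if cosine_similarity(query_embedding, emb) >= threshold]

# Toy embeddings standing in for a real face-recognition model's output.
groom = np.array([0.9, 0.1, 0.3])
clips = {
    "ceremony_wide.mov":   np.array([0.88, 0.12, 0.31]),  # groom visible
    "guests_arriving.mov": np.array([0.10, 0.95, 0.20]),  # no groom
}
print(find_clips_with_face(groom, clips))  # ['ceremony_wide.mov']
```

In a real system the embeddings would come from a trained neural network, and the similarity threshold would be tuned for the lighting and angle variations described above.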

Soon you'll be able to search not just for people, but for any object in your video. Need a clip that features a chair, or shots that take place by a window? Just tell the software what you're looking for and it'll find it. This feature already exists for dialogue detection: you can type in a word and the software will list every video clip in which that word is spoken. So it's a natural progression to extend the feature to visuals.
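Dialogue search tools of this kind generally run speech-to-text over the footage and then index the resulting transcripts. As a simplified sketch – not the actual implementation of any commercial tool – searching transcripts for a spoken word might look like this, where the clip names, timecodes and segment format are all hypothetical:

```python
def find_clips_with_word(word, transcripts):
    """Return (clip, timecode) pairs where the word is spoken.
    `transcripts` maps clip names to lists of (timecode, text)
    segments, as a speech-to-text pass might produce."""
    word = word.lower()
    hits = []
    for clip, segments in transcripts.items():
        for timecode, text in segments:
            if word in text.lower().split():
                hits.append((clip, timecode))
    return hits

transcripts = {
    "interview_01.mov": [("00:00:04", "Welcome to the studio"),
                         ("00:00:12", "The wedding was beautiful")],
    "broll_02.mov":     [("00:00:02", "Cut to the wedding cake")],
}
print(find_clips_with_word("wedding", transcripts))
```

The same pattern generalises to the visual search described above: swap the transcript segments for per-shot object labels produced by an image-recognition model, and the lookup logic barely changes.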

The key power of AI is that it is a timesaving tool. Many repetitive tasks that used to be the job of an assistant editor can now be undertaken by the software. Not only that, the neural networks usually employed in these scenarios improve through training: the more images they analyse, the more reliable they become.

So if you edit videos, should you be worried? Will AI be getting a promotion from assistant editor to editor?

Unlikely anytime soon. The job of an editor isn't just to cut footage together, but to consider the overall narrative and the dramatic focus of a scene. Even home videos try to convey an emotion to the audience. These are abstract concepts, and something AI struggles with – for now.

That doesn't mean it hasn't been attempted. You can already find AI-powered editing software that will cut footage together automatically. It can produce meaningless montages, but it struggles to create a cohesive narrative.

What programs and algorithms require is structure. If this structure can be better defined, AI will improve at editing. Indeed, consider most wedding or holiday videos: the structure is usually consistent. A wedding may feature the couple reciting vows with the occasional cutaway of a relative crying; holiday videos show us the sights, with shots of the family at these locations.

It's not inconceivable that a structure could be defined for a computer to follow. It may not match a human editor's nuanced ability to select the best shot, but it won't be far off, and a neural network generally improves over time.

Overall, the inclusion of AI in video editing has proven to be a net positive. It's slowly chipping away at the repetitive tasks that have to be done but don't directly translate to quality on screen, allowing editors to focus their time where it matters. However, with AI learning fast, video editors will want to hone their skills to stand out from their AI counterparts.

EDGE Creative

If you want to find out more about the importance of AI or how you can use it in your marketing processes, email us at info@edge-creative.com.

Read our blog for BCU on Why web development and UI is crucial to SEO.


Sandeep Singh Chahil

Vastly experienced in corporate video production, Sandeep Singh Chahil is an expert in every stage of the process, with strong technical knowledge of hardware and software systems. He is an experienced lecturer in post-production at Birmingham City University, working across several undergraduate courses to share his expertise in editing, compositing, motion graphics, and colour correction/grading.