Breaking down "After Light" by directing duo Vallée Duhamel

Sophia Jennings / August 22, 2023

Vallée Duhamel is the directing duo of Julien Vallée and Eve Duhamel, two Montreal-based artists. Together, they’ve directed videos for global brands including Google and Samsung, as well as influential musicians such as Katy Perry. Most recently, the duo collaborated on After Light, a mixed-media experience made with Runway alongside other generative AI tools. We called Julien in Montreal to talk about their experience directing this latest video.

Hi Julien. When After Light was first released, you described the piece as a “surreal journey through the human mind.” I’m curious, one month later, what would you say your video is about today?

Being an artist, and being in the world that we’re in, we hear all of this crazy [news] about AI all of the time. What is AI going to do for us? What is AI going to potentially steal from us? There's a lot of uncertainty and anxiety about it. It’s one thing to read and talk about AI; it’s another to actually use the tools and really understand where we stand at this moment in time.

[After Light] was a way to overcome this anxiety and dive into the tool in a way that we felt more comfortable [doing].

Once more of your community saw After Light, did you find that your conversations about AI changed?

Yes, they have. On a personal level, we haven't had this much fun working on a project in a while. It’s something we didn't expect. There are so many things that we had in mind for so long that we were finally able to inject into this video. Some of this stuff we couldn't even afford on other commercial or artistic projects, and we got to do it here.

A month later, our point of view about the technology, and our conversations with friends and peers about AI, have changed a lot. Of course, these new technologies bring tools that drastically change the landscape and workflow of the industry, and we are well aware of it. But the one thing we didn’t expect was how working with Runway opened up a whole new world of possibilities. It helped us achieve some ideas without the need for in-depth technical skills.



I know that generative tools were an important part of the video, but I’m curious what the original idea was. How did it start?

It really started with an interest in understanding the tools and seeing how we could use the technology. We did a lot of research into what AI tools were available at the time. Runway was the one we connected with, because we felt we could use it to create something a little different from what other filmmakers were doing in the space. We wanted a tool to help us elevate a story, rather than one that mainly slapped a TikTok filter on top of the work.

We started exploring with one archival clip from something we shot around four years ago in Desert Shores, not too far from LA. We went there for a day and shot this guy, Darren “Outrage” King, a very good krump dancer. The footage ended up sitting on a hard drive for so long because life led us somewhere else, and we didn't work on the project. So when we started exploring, we picked it back up and started creating compositions, mixing it with other footage and ingesting it into Runway to test different visual styles.

We wanted the result to feel like a project that follows our corpus of work, so we used a lot of past projects as input images to get a color palette and texture that fit our world.

I love how you mixed filmed performance with generated content. Can you tell me more about how you pieced those two media forms together?

Most of the film started with traditionally filmed footage. We created some scenes using very lo-fi equipment like our iPhone to capture the different elements, and quickly comped them together in After Effects. Some parts also came from music video archives or outtakes from past projects that we combined with additional elements we shot or generated for the scenes.



I’m sure for the dancers, the idea of working with AI might have felt threatening at first. How did you communicate this new process to them?

We did a first pass of three different scenes and shared it with them. That enabled them to visualize where we were going. When some people think about AI, what comes to mind is robots or sci-fi imagery. The first pass helped them get a sense of how we’d turn the footage into something very different, and they were pretty excited to see the result.



You’ve made live action videos, animated videos, and now a mix of the two. How was this process different from the past?

The big difference in the workflow was being able to navigate minute by minute, knowing that when we ran our material through Runway, it might come out completely different from what we expected. Some scenes required a lot of work to get them where we wanted them to go. I think that's a big misconception, that with AI tools you just have to click a button and it’s done. The truth is, we still had to curate a lot of the output, rework it, and try to get to a point where it felt like it was elevating the scene the way we imagined.

Also, since [the output can be] random every time, we might have one treatment for one scene, and then the day after, we can’t get the exact same result for that same scene. If we wanted to apply the same effect to every scene, there was a lot of work to make it happen. That was kind of interesting as well, because the way Runway works sometimes creates results you’re not expecting. And sometimes those results give you another idea for how the scene evolves.

We relied a lot on what Runway would give us as output to keep going. Some of these renderings, like the ones where people caught fire, changed the whole narrative of the piece. It was a totally different process than traditional animation, where once you get started with the style frame, there’s no turning back without big costs and time considerations.



One of the things I love about After Light is you really can’t predict what’s coming next. It brings viewers into a meditative state. Did you know where you wanted to go?

You’re totally right. There’s so much going on in such a short clip: a lot of different versions of the characters and their environment, shifting at such a fast pace.

The entire process unfolded in a much more organic manner compared to a traditional production. We didn't finalize the edit before diving into Runway, mainly because the output was altering the original footage significantly. This aspect brought a fascinating dimension, allowing us to continually reshape the narrative while experimenting with the visual aesthetics of the scenes.

Why the name After Light?

The title is a play on After Life, which is, you know, the opposite of light. We thought of it as opening up new ways of approaching filmmaking. Technically, I'm happy that we still had to use traditional filmmaking to make this project, because it's not something that we want to leave aside in the future.

The decisions that you make on set are very different from the ones you make when you're working with the software. Instincts are still very important, as is working with actual people. Human connection remains incredibly significant in everything we undertake. Our aspiration is that despite the AI-driven visual foundation of this project, it remains relatable on a human level. Our deliberate choice to collaborate with real individuals influenced camera movements, framing, and lensing. All of these elements still reside within the realm of traditional filmmaking.



I’m curious what you think about the current debates surrounding AI and its role in filmmaking.

I think it's similar to any major revolution, right? It always comes with a lot of resistance. There's been much more resistance to the AI revolution than we've encountered in any other aspect of our career, and it's challenging to just sit down and figure out how to adapt, especially since it's still changing so rapidly.

For a few months, we found ourselves expending a great deal of energy in resistance mode. That's why we decided to harness this anxiety as a driving force to explore how we can adapt to this revolution. What we've learned is that these tools offer significant potential, while still being tools (at least for now!). We've also discovered that we can now revisit our sketchbooks and explore ideas that have been stagnant due to their high production costs. There's now a possibility to rework them and bring these narratives to life without the constraints of hefty budgets or the need to wait for a commission.

From gaining spontaneous insights and inspiration to breathing new life into old projects and ideas, Julien and Eve were able to leverage AI-powered creative tools to unlock a new mode of storytelling.

See more from Vallée Duhamel by checking out their website or following them on Instagram.
