Pietro Gagliano tells us about developing Agence, a dynamic VR experience that lets users play God with Artificially Intelligent creatures, tackling issues of responsibility, human-machine empathy, and more.
Created by two-time Emmy award-winner Pietro Gagliano and screened on multiple platforms at the London Film Festival earlier this year, Agence is an interactive, dynamic Virtual Reality experience that is never the same twice. Merging cinematic storytelling, artificial intelligence, and user interactivity, this fascinating project tackles issues of responsibility and morality by placing the lives of artificially intelligent creatures (the “Agents”) in your hands. As you experience Agence, you’ll find yourself facing a planet where intelligent life exists and lives in harmony, until you arrive, and everything changes. You soon realize that you have the power to disrupt the Agents’ world: you can plant magical flowers, move them around and observe their interactions, save them from falling off the planet. Or you can throw them into the Abyss, and lead them to their death. But Transitional Forms’s developers trained the Agents using reinforcement learning, giving them the ability to think and act outside of their programming. This means that they constantly evolve in new and unpredictable ways, learning to react to the changes to their environment brought by us humans – users who have been given the ability to play God with Artificial Intelligence.
Agence is an incredibly captivating, thought-provoking project, and an entirely new form of storytelling that shifts your attention back to yourself, asking you to think about issues of morality and responsibility. Here’s what creator Pietro Gagliano told us about three-way authorship, AI behaviour, user interaction, and more.
Agence: A Dynamic Experience That is Different Every Time
Agence is such a fascinating form of storytelling! It’s interactive, as it lets users make decisions that will affect the “Agents”, but it’s also unpredictable, as the “Agents” can make decisions too. How did this project take shape and what was the idea behind it?
From the beginning, we wanted to experiment with the notion of three-way authorship, where cinematic storytelling, user interaction, and machine reaction would combine to create a dynamic experience that, in theory, would be different every time. This concept, along with a number of other dynamic systems for music, camera work, and Agent animation, came together in such a way that the experience is indeed dynamic. We hope that the emergent narrative that occurs through the user’s interaction will allow for a deeper relationship with, and ultimately empathy for, intelligent machines.
How many outcomes are there in Agence, and why did you decide to have the experience start again after it ends?
In the same way that trees can grow or clouds can form in countless different ways, no two experiences of Agence are the same. Due to the dynamic systems at play, even with little interaction from the user, each simulation will unfold a little bit differently every time. Having said that, there’s a limited number of story frameworks at play that help to craft the narrative on the fly. Think of it as a three-act structure with a limited number of branches (5 to 10, I would say) for each chapter of the film. But depending on user or Agent actions, the length and pathway across these structures are different with each and every experience.
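Abstractly, the branching framework Gagliano describes – a three-act structure with a handful of branches per chapter, selected on the fly from the state of the simulation – might be modeled along these lines. This is a minimal sketch, not the production system; every name, branch, and selection rule here is invented for illustration:

```python
import random

# Hypothetical story framework: three acts, each with a few branches.
STORY = {
    "act1": ["harmony", "curiosity", "first_contact"],
    "act2": ["conflict", "scarcity", "cooperation", "abyss"],
    "act3": ["extinction", "ascension", "uneasy_peace"],
}

def pick_branch(act, world_state, rng=random):
    """Choose a branch for the current act from simulation signals.

    world_state is a dict of signals the simulation might expose
    (e.g. how many Agents have died, how often the user intervened).
    The weighting rules here are purely illustrative.
    """
    branches = STORY[act]
    if act == "act3" and world_state.get("agents_dead", 0) > 2:
        return "extinction"
    if world_state.get("user_interventions", 0) > 5:
        # Heavy user involvement biases toward cooperative branches.
        preferred = [b for b in branches if b in ("cooperation", "ascension")]
        if preferred:
            return rng.choice(preferred)
    return rng.choice(branches)

def run_story(world_state):
    """Assemble one playthrough: one branch per act."""
    return [pick_branch(act, world_state) for act in ("act1", "act2", "act3")]
```

Even with only a few branches per act, the combinations – multiplied by when and how the user acts within each branch – are enough to make the path through the structure differ on every run.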
What were the main challenges of making Agence?
Definitely the AI development and integration… I was warned, going into this project, that “AI is a black box,” and it’s so true. As a director with a strong vision, I found it most challenging to deal with actors that may or may not do what you expect. As a result, I am excited to turn this curse into a feature: brains can now be added to the film after its release, and I look forward to seeing how new types of Agent behaviour will affect the story world in which they exist.
Experiencing Agence: Observing Both AI Behaviour and User Interaction
It’s so interesting to watch the “Agents” learn as they react to changes to their environment! Do the Agents evolve as Agence keeps being “played”?
This has been an interesting aspect of the project to receive feedback on. From a technical standpoint, the Reinforcement Learning Agents do not learn within the film itself, but are rather pre-trained in an environment that mimics the film… living their lives out millions of times over as they learn new abilities. It’s been interesting to hear people claim that they are learning within the film, though! I suppose that’s “emergent narrative” at work.
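The split Gagliano describes – Agents trained offline in an environment that mimics the film, then deployed with their learning frozen – is a standard reinforcement-learning pattern. A minimal sketch of the idea, using tabular Q-learning on a toy "stay away from the edge" task; the environment, reward, and hyperparameters are invented for illustration and are not the actual Agence setup:

```python
import random

class TrainingEnv:
    """Toy stand-in for a training environment that mimics the film:
    an Agent near the planet's edge should learn to step inward."""
    def reset(self):
        self.pos = random.randint(0, 9)  # 0 = edge/abyss, 9 = safe center
        return self.pos

    def step(self, action):  # action: -1 steps outward, +1 steps inward
        self.pos = max(0, min(9, self.pos + action))
        reward = -10 if self.pos == 0 else (10 if self.pos == 9 else 0)
        done = self.pos in (0, 9)  # fell off, or reached safety
        return self.pos, reward, done

def train(episodes=5000, eps=0.1, alpha=0.5, gamma=0.9):
    """Offline phase: the Agent lives many simulated lives and
    accumulates a value table. All learning happens here."""
    q = {(s, a): 0.0 for s in range(10) for a in (-1, 1)}
    env = TrainingEnv()
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration during training only.
            a = random.choice((-1, 1)) if random.random() < eps \
                else max((-1, 1), key=lambda a: q[(s, a)])
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

def act(q, state):
    """Deployed phase, 'inside the film': no learning, just a
    lookup into the pre-trained table."""
    return max((-1, 1), key=lambda a: q[(state, a)])
```

After `train()` runs, `act` is pure lookup, which is why behaviour in the film can look adaptive without any learning actually happening there: the adaptation was baked in during the offline phase.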
Can you share some of your findings on Artificial Intelligence?
One of our favourite findings when it comes to AI has been the fact that researchers have gravitated towards this film, not as a research tool, but as an interesting platform to observe AI behaviour. Like, instead of referring to a graph to understand an Agent’s abilities, now we can observe their behaviour within a story world.
Learning from Agence: “Just because we can do something, it doesn’t mean we should”
How did you work on the “look” of the Agents?
A central deciding factor for the look of the Agents was empathy. We really want people to care for these little creatures, who never asked to be born in the first place. Initially we tried more humanoid characters, but we found the cuter, more lovable characters evoked the sense of compassion and responsibility we were looking for. We also didn’t want them to look entirely like robots, but chose a design that would embrace potential glitchiness and unusual movement that might come from the machine learning side of things. Thus we gave them three legs and no arms so that they would have limited capabilities, but be interesting to observe.
What message would you like Agence to leave viewers with?
As a film, I hope Agence taps into a familiar message from classic science fiction: “Just because we CAN do something, it doesn’t mean we should.” Too many of the science fiction properties that come out of Hollywood show intelligent machines taking the power away from humanity. But the fact is, the power currently lies in human hands and not the other way around. I feel that Agence has an important message about humanity’s role in developing modern-day artificial intelligence: to be compassionate toward and educated about machine intelligence. I don’t advocate halting technological progress or research. In fact, even if we wanted to stop, it would be impossible. And, next to storytelling, I consider artificial intelligence to be humanity’s greatest tool. At the same time, we must consider the importance of human-machine empathy if we want to manifest a safe, prosperous and human-centric technological singularity.
Not only does Agence let users decide whether or not to intervene, but it also lets them be cruel to the Agents, and it even monitors their cruelty, in a way, as we find out at the end how many Agents have “died” in the game. Can you share some of your findings on the users’ decisions?
Even from our earlier user testing, it’s been interesting to see how each user approaches the film. Some users quickly kill all of the Agents in order to discover new pathways. Others are so compelled to interact with them that it takes time for them to discover additional functionality. My mother, for example, spent 40 minutes picking them up, talking to them, and repositioning them so they wouldn’t fight. There were even a few users that tried to shoot the Agents as soon as they saw them.
We took an unconventional approach to both limiting functionality and not hand-holding users with tutorials because we were concerned explicit instructions would imply there is a specific goal to the experience. While this hasn’t been a popular choice for some critics, it’s been really interesting to see how many types of interactions there have been with such a limited palette. Throughout the production we had countless ideas on how to evolve the film toward a “social experiment,” from correlating user behaviour to better understanding human nature, to giving the Agents different personality types in order to simulate how real-life social dilemmas may play out. We see all of these as exciting avenues to explore in future projects, for sure.
What’s Next for Pietro Gagliano
Finally, can you tell us about your future projects?
We certainly hope to work with the NFB on another dynamic film in the near future, and may be in development on the next one soon. In the meantime, we at Transforms.ai are working on refining a number of tools and technologies we discovered uses for during the making of Agence. One of them is a tool that lowers the barrier to entry to train Reinforcement Learning Agents through the application of storytelling. That’s the concept, anyway. We’re also developing more dynamic systems in dialogue, cinematics and even soundtrack generation for future dynamic-film projects.