How Vivek Anchalia's 'Naisha' is Shaping the Future of Film with AI

'Naisha', India's first AI film, is a bold experiment in storytelling. The director reveals how AI's rapid advancement is transforming the way films are made and consumed in the country.

By Prathyush Parasuraman
Last updated: Mar 24, 2025, 16:07 IST | 5 min read
A still from 'Naisha'.

Vivek Anchalia’s Naisha is India’s first AI (artificial intelligence) film, and like any pioneering project, it is both an experiment that gestures towards the future and an object self-contained and complete enough for the present. The film’s trailer dropped in early March, announcing a May 2025 release. As of now, Anchalia is still in talks with distributors to finalise the date.

In a conversation with The Hollywood Reporter India, Anchalia breaks down the process of making Naisha, the isolation of the process, and the first-mover energy with which the film was made. 



When and why did you decide that you wanted to make this film with AI?

A year and a half ago, I was pitching a live-action film and came across AI software like Midjourney, which helped me prepare a deck — it took me around 16 to 20 hours to do. That was my first interaction with AI, and then I left it at that.

Then, one of the music directors on this film, Daniel B. George (Andhadhun, 3 Idiots, War), had made a few songs which I had written, but we didn’t have the distribution or money to make videos.

So, initially it started with an idea for making music videos using AI, but because we have been doing long-format storytelling, we tweaked things and realised we could make this into, say, a seven-part series of ten-minute episodes. This was eight months ago.

Some 45 days ago, we came across something that made upscaling in AI possible — we checked it in the theatres and it played well there. At that point we decided that what we have is something that could be put out as a theatrical film. Before that, we were just thinking of releasing it online or on an OTT platform.

But because the technology is constantly evolving, we are constantly changing the film again and again. What we now have is the fifth or the sixth iteration of the same film.

Director Vivek Anchalia. (Image: Vivek Anchalia)

So you are still not done with the film?

No, I am still working on it. The film is ready, but there are constant developments happening on the AI end of it, with better and newer technology becoming available. So we are not even rendering a large part of the film right now. We are going to render it closer to the release so that we get the best technology possible, because we don’t want the film to be outdated by the time it comes out.

And by best technology, you mean one that looks more realistic. More like a live-action film?

Yes, more live-action in terms of skin texture, movement of characters, how they emote. 


What are the prompts that you give to arrive at the face of your characters?

The prompts are generic prompts — ethnicity, age, what environment you want them to be in. Sometimes you are even putting in camera angles and aperture to get one final image that you would want to lock in.

And once that happens, then you are trying to recreate that consistent character in different environments. I am doing this on Midjourney because I find it to be more cinematic. 

What about prompts like — give me a mix of Deepika Padukone and Alia Bhatt? Are you also experimenting with those prompts?

From the beginning, I was very clear that I didn’t want to use any names or styles of living artists. That would be borderline immoral. I am coming up with an image in my mind and writing a prompt to somehow reach a place where I feel comfortable.

When I started, my success ratio would have been... one out of 200 images. That is how much time it took then. Now, because it is not just me learning the machine, it is machine learning: my software has learned enough about me to know what I am exactly asking for. Hence, the greater accuracy.

And how long does this process take you?

It could be done in anywhere between three and six months. If I did it in live action, on a scale like this — shooting in three different countries — it would take around a year.

Here, because you are not doing a recce, not doing the entire casting process, the budget is a fraction of what a live-action film’s would be. Though we are using human beings for the emoting.

A still from 'Naisha'.

In what sense?

Using the same motion capture technology that was used in Planet of the Apes — I am performing some of it; I have a couple of actors who are also performing.


But the trailer only notes “virtual stars Naisha Bose and Zain Kapoor”. Who are these actors you are working with?

When the film comes out, their names will be there.

Everything from set and costume design is AI-generated. Why did you not use AI for music?

Music plays a big part in our film. I don’t know if it is possible to get that emotional cohesiveness using AI at this point. Having said that, I don’t understand music enough. I have done all these other things — cinematography, production design, costume design, editing — at some level or the other, so I have more knowledge of those than music.

Can you tell me, apart from the budget and the time it takes, what are the advantages of using AI to make films? 

AI is going to open up a new form of entertainment. There are enough storytellers in our country who do not have access to big budgets or studios or platforms. AI is going to democratise the entire system. Just as YouTube and TikTok let actors become influencers putting out their own content, I think AI will be that for filmmakers.

Another thing is that we never had the budgets to tell our story — especially with stories of history and mythology. There are scales we are not able to pull off just because of budgetary reasons, not because we don’t have the vision. So with AI now you can tell not just more stories, but bigger stories, too.

But the thing about the big stories is that they feel big, right? Do you think that bigness comes through in an AI iteration?

We are at a very nascent stage. Naisha is an experimental film to begin with. But in the last year, I have noticed that AI’s growth is faster than anything I have seen in my life. We will reach there.


A still from 'Naisha'.

The thing about being on set is the happy accidents that come from collaboration. With AI, you have turned filmmaking into a one-person job, almost. Are you worried about these happy accidents getting lost in that process?

I am. If you ask me, I would rather be on a set than be on my laptop — cinema is a collaborative art form; a scene is never truly just yours, it is the actor’s, it is the cinematographer’s, it is a production designer’s...

But, at the same time, there are happy accidents in AI, too, because it throws something back at you that is not exactly what you asked for, and that triggers another thought in your brain, taking you somewhere else. See, I would like to do more live-action films, but this also allows me to tell more stories.

There are some artists who have a distinctive stamp — like a Sanjay Leela Bhansali or a Wes Anderson. The thing about AI is it culls from data that already exists. So when filmmakers use AI, do you think they will be able to create a distinctive aesthetic for themselves?

I think it is possible. There are training models that you can train through sketches, colour choices, and prompts about how you see the world. It is going to be a little different from how live-action operates because there is now another brain. I think what you are talking about is how everything on AI looks the same?

Yeah. There is an AI aesthetic I see — a glassy emptiness behind the eyes; smooth skin, even when it is pimpled.

I know. But there’s a constant effort by AI platforms to reach as close to reality as possible. It may not be as good as live-action, but I think it is going to lie somewhere between anime and life, where gaming exists, probably.
