

The backlash over the AI-altered ending of Raanjhanaa has opened up a deeper fault line in the Indian entertainment industry. At the centre of it lies a question that is becoming increasingly urgent. Where does technological innovation end and creative violation begin?
In an exclusive conversation with The Hollywood Reporter India, Vijay Subramaniam, founder and group CEO of Collective Artists Network, does not hedge. Even in a case where the studio legally owns the intellectual property, he believes altering a filmmaker’s work crosses an ethical line.
“I agree. Eros International owns the IP, but creatively, it feels wrong to take something and change it,” he says, referring to the controversy. “Dhanush said that, Aanand L. Rai said that, but it still happened.”
For Subramaniam, the distinction between legal ownership and moral responsibility is stark. When asked whether altering an existing film could ever be justified if the rights holder approves it, his answer is blunt. “That’s theft. There’s nothing right about it.”
Pressed further on whether legality changes that equation, he is unequivocal. “Even then. Legally, they may own it. But ethically, it’s wrong to twist a filmmaker’s vision. The creator may have given away their rights, but that doesn’t make it right.”
The debate, however, is not just about one film. It mirrors anxieties seen in Hollywood, where both SAG-AFTRA and the Writers Guild of America staged industry-wide strikes to secure protections around AI, from consent to compensation. India, by contrast, is still in the early stages of framing such guardrails.
“Guardrails are being created. I just don’t know exactly by whom yet,” Subramaniam admits.
What he does offer is a glimpse into how he is navigating the terrain personally. His approach is to sidestep proprietary material altogether. “The work I’m building is rooted in Indian history and mythology, stories like the Ramayana. These are public works. I’m not infringing on anyone’s personal IP,” he says. “My whole intent is to tell stories that belong to all of us.”
It is an argument that leans on the idea of cultural commons. For Subramaniam, the solution is not to retreat from AI but to deploy it in spaces where ownership is not contested.
That clarity, he insists, leaves little room for ambiguity. “If it’s blatant IP infringement, taking someone else’s work and twisting it, that’s wrong. It should be penalised. There’s no ambiguity there.”
What frustrates him is what he sees as a flattening of the conversation. “People are conflating two very different things. Launching an AI creator is not the same as infringing someone’s rights.”
The distinction matters because, in his view, AI itself is being judged through the lens of a handful of controversial use cases. “AI is being unfairly judged because of a few high-profile cases. People are treating those as representative of the entire ecosystem, which isn’t accurate.”
To illustrate the gap between legality and ethics, he reaches for a familiar analogy from the music industry. “Back in the day, so many songs were lifted. Was it right? No. Did people still dance to them at parties? Yes,” he says. “So morality and legality don’t always align. But that doesn’t mean we shouldn’t call it out.”
If the present moment feels chaotic, Subramaniam suggests it is also transitional. The speed of AI’s evolution has outpaced the industry’s ability to regulate it. “The technology is still new. What we’re seeing now has happened in five years. Something of this scale usually takes much longer,” he says, pointing to the decades-long evolution of social media as a comparison. Regulation, he believes, will follow. “Guidelines will come. We’re already seeing early steps, like disclosure norms around AI-generated content. Over time, there will be clearer frameworks, much like advertising standards today.”
Until then, he places the burden squarely on creators. “As creators, we have to be responsible. I can’t speak for others, but I choose to work with stories that are clean from a rights perspective.”