The advent of artificial intelligence (AI) and related technologies has unlocked possibilities that once belonged to science fiction. One of the most significant advancements of the modern technological era is the emergence of deepfake technology, which uses deep learning algorithms to create synthetic media. Voice cloning, also known as audio deepfaking, is a subset of deepfake technology that uses AI to replicate a person's vocal patterns, including accent, tone, speech rhythm and much more. While AI-enabled video and audio content may appear exciting through the lens of entertainment, accessibility and personalized user experience, it also opens the door to concerns around privacy, consent and ethics.
The repercussions of unregulated AI use have swept the internet in recent years. We are no strangers to the proliferation of AI-generated deepfake content and its transmission across digital media platforms, and news reports of AI being misused to create objectionable and derogatory audio and video content have become common. This issue extends beyond concerns of privacy and reputational harm and has begun to encroach upon the domain of intellectual property law, including potential copyright infringement and violation of personality and publicity rights.
Indian Judiciary on AI Misuse
Fairly recently, the High Court of Bombay, in its landmark ruling in Arijit Singh v. Codible Ventures LLP (IPR SUIT (L) NO. 23443 OF 2024), established a legal precedent for the age of AI, becoming the first Indian decision to address the misuse of AI technology and its overlapping consequences for intellectual property and music.
In this case, the plaintiff alleged that AI tools were being used to synthesize artificial recordings of his voice by way of voice cloning. Additionally, the plaintiff's likeness was used for brand endorsement and other commercial purposes. The court observed that while freedom of speech and expression allows critique and commentary, it does not permit the exploitation of a celebrity's personality rights for commercial gain. Addressing the potent threat of AI technology, the court stated, “what shocks the conscience of this court is the manner in which celebrities, particularly performers such as the present plaintiff are vulnerable to being targeted by unauthorized generative AI content”.
Further, the court observed that such unauthorized use not only hampers the careers of celebrities such as the plaintiff but also “leaves room for opportunities for misutilization of such tools by unscrupulous individuals for nefarious purposes”. The court underscored the importance of drawing a legal boundary between permissible creative expression and the exploitative use of an individual's likeness, including voice, without consent and for material gain.
Legal Developments
- International Perspective
On March 21, 2024, Tennessee passed the Ensuring Likeness Voice and Image Security Act (ELVIS Act), aimed at protecting musicians from unauthorized AI-generated imitation. The ELVIS Act, a first-of-its-kind piece of legislation, broadens the scope of the state's Protection of Personal Rights Law to cover “new, personalized generative AI cloning models and services that enable human impersonation and allow users to make unauthorized fake works in the image and voice of others”. The legislation also imposes civil and criminal liability on infringers.
This Act introduces a novel form of secondary liability against individuals or entities other than those directly engaged in the unauthorized act. It establishes civil liabilities for social media and streaming platforms, as well as developers of generative AI technologies, thereby broadening the scope of accountability within the digital ecosystem and expanding the reach of legal remedies available to rights holders.
Further, in April 2025, the US Congress introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, one of the most recent legal developments aimed at protecting creators in the realm of AI and deepfakes. The legislation protects an individual's voice and likeness from unauthorized reproduction using AI and related technologies, addressing the risk posed to artists and content creators by non-consensual image creation and voice cloning.
In May 2025, U.S. President Donald Trump signed the Take It Down Act, which prohibits publishing, or threatening to publish, intimate images without a person's consent, including AI-generated deepfakes. Though this legislation does not directly address voice cloning, it is a positive step towards regulating the unauthorized reproduction of a person's likeness using AI.
- Indian Perspective
While the Information Technology (IT) Act, 2000 and the Digital Personal Data Protection (DPDP) Act, 2023 do not explicitly regulate AI and related technologies, they do address certain aspects of data protection, cybersecurity and intermediary liability. It is pertinent to note that the provisions on intermediary liability play a pivotal role in administering secondary liability, which may extend to issues such as voice cloning.
In late 2022, talks surfaced of replacing the decades-old IT Act with a new Digital India Act. The government has described this proposed Act as a “future ready legislation” intended to provide a strong legal framework, a comprehensive institutional mechanism and well-defined accountability for issues arising from the misuse of AI technology. If enacted, the new law could be instrumental in formulating a regulatory framework that encompasses modern-day legal solutions to the emerging problems of AI, machine learning, deepfakes, voice cloning and much more.
As regards criminal legislation on AI-specific crimes, it is worth noting that existing criminal jurisprudence has been consistently applied to address the emerging challenges posed by AI. A notable example is the widely publicized deepfake incident involving actress Rashmika Mandanna, which surfaced on social media in late 2023. The legal ramifications of the case extended beyond issues of intellectual property and information technology, as the Delhi Police also invoked provisions of the Indian Penal Code (now the Bharatiya Nyaya Sanhita), particularly those relating to forgery, in the FIR filed against the accused.
Further, provisions relating to criminal defamation may also be invoked in the adjudication of AI-related offences, particularly in cases involving unauthorized deepfake videos or audio clips that harm an individual's reputation. In the context of celebrities, courts have acknowledged that the potential for reputational damage is significantly higher, given their heightened visibility and susceptibility to such targeted attacks.
Apart from legislative developments, India is also seeking to regulate deepfakes through a voluntary code of conduct under the AI Safety Institute to address AI risks and safety challenges. This would include establishing an adequate framework for ethical AI, AI risk assessment and management, and deepfake detection tools.
While India still lacks specific laws on AI deepfakes, it is clearly ramping up its response through judicial decisions, existing and proposed legislative frameworks, and advisory measures, as evidenced above.
Conclusion
It is true that the rise in technological development has unlocked numerous opportunities across sectors. However, challenges emerge when such advancements are misused for personal or commercial gain. The objective is not to halt innovation, but to seek a balanced and responsible approach that safeguards the interests of both the technology sector and the creative industries.
In this context, a robust regulatory framework plays a crucial role in maintaining an equilibrium between technological advancement and protection against its misuse. This balance is being shaped through judicial pronouncements, comprehensive legislative frameworks and government regulation, both worldwide and in India. Courts and lawmakers alike are progressively recognizing the potential risks posed by unchecked technological advancement, particularly with the rise of generative AI, voice cloning and deepfakes. Here, it is pertinent to highlight that, while we await specific legislation governing AI, existing legal frameworks ought to be utilized to the fullest extent. We are not stuck in a legal vacuum: in light of the existing civil and criminal laws, their provisions should be judiciously employed in determining liability and regulating AI-related risks.
