Given the advancement of technology, Artificial Intelligence (AI) generated inventions are becoming more common. Every use of AI, whether a convolutional neural network screening for cancer or a tool such as ‘Deep Text’, which processes user-generated content on Facebook, involves either an AI application or an AI tool. Human ingenuity is evidently less visible in such inventions, while at the same time inventing becomes easier because the mental effort required is minimal. This makes it difficult to identify the “inventive step”, which is a prerequisite for patentability. The issue has triggered an intense debate on the future of patent law and policy at the international level.
On 21st December 2019, the European Patent Office announced its refusal to examine patent applications designating an AI system, DABUS, as the inventor, on the ground that, to acquire the status of an inventor under the European Patent Convention, the inventor must be a human being, not a machine. Shortly before the pronouncement, the World Intellectual Property Organization (WIPO) issued a call for comments, primarily on how patent law and policy should respond to inventions ‘autonomously generated by AI’.
AI is often portrayed as yielding inventions with the wave of a magic wand, a single click, or a simple request, autonomously and independently of humans. However, researchers in the field believe that, for the foreseeable future, automatic programming will still depend on humans supplying high-level program code specifying how tasks are to be accomplished. Experts in AI nonetheless believe that complete automation could be achieved by 2099.
AI-invented or AI-aided?
Surprisingly, the debate on the patentability of autonomous inventions offers no specific definition of such inventions. The WIPO draft mentions only that “inventions are autonomously generated by AI”, leaving the word ‘generated’ ambiguous. The World Economic Forum has likewise stated that “AI is no longer ‘just crunching numbers’ but is generating works that have historically been protected as creative or requiring human ingenuity”.
The USPTO’s request for comments defined ‘AI inventions’ to include both inventions that utilise AI and inventions developed by AI. Evidently, both types are AI-inclusive, since they involve similar algorithms and standards. However, this is the view of a single national patent office and is therefore difficult to impose on the international IP regime. Further, a significant body of legal literature does not differentiate between the two.
Additionally, it must be kept in mind that both kinds of invention (AI-generated and AI-aided) involve some degree of human presence. This makes it difficult to draw a line between patents conferred on human beings and those conferred on machines. It is true that the existing international regime needs amendment, but it is first essential to determine the extent to which AI can be granted patentability.
Theoretical Perspective
It has long been the prevailing view that inventorship must be attached to personhood. On this view, AI cannot be granted inventor rights, because studies remain inconclusive as to whether AI can “really think”. It is therefore essential to understand the theories underlying these rights.
- Incentive theory: The aim of a patent is to incentivize the inventor, which becomes a challenge when monopoly rights are granted to a machine. The theory holds that, absent a patent, the inventor would be discouraged from engaging in inventive activities.
- Natural Rights theory: According to this theory, an individual has natural property rights over the products of their mind. Granting property rights over the fruits of one’s labour is conditioned on the proviso that “there should be enough and as good left in common for others”.
- Personality theory: The personality theory justifies intellectual property rights on the ground that property is an extension of the creator’s personality and serves as a mechanism for self-development and personal expression.
- Lockean theory: Also called the theory of unilateral appropriation or the labour theory of ownership, it postulates that human beings can use their labour to establish rights over natural resources, which in turn creates a moral obligation on others to respect those rights.
The Way Ahead
The nature and complexity of modern algorithms has shifted the focus to transparency, on the view that if algorithms cannot be scrutinised, risks to human rights cannot be identified and rectified. Transparency is essential for trust and for ensuring that appropriate bounds and limitations are in place; understanding how an algorithm works is essential to monitoring and controlling its consequences. Some authors have therefore contended that transparency must be the policy response to any automated decision-making.
Despite the need for transparency, many commentators acknowledge that it may be difficult to achieve in practice. Algorithms are dynamic by nature, which makes one hundred percent transparency a challenge. Some authors have identified blockchain as a potential solution: it has been used as a tool to track the details of financial transactions, and the same process could be applied to decision-making, improving accountability by verifying how and when data was used. The researcher also believes that intellectual property rights have compounded the transparency challenge, because they allow businesses to withhold the algorithms they use. It is true that where an algorithm relates to sensitive data, it can be protected and restricted by law. Even where a framework to ensure transparency exists, it must be assessed whether the requisite resources and expertise are available.
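To make the accountability point concrete, the following is a minimal sketch, not a description of any deployed system, of how a tamper-evident, hash-chained log (the basic building block of a blockchain) could record how and when data was used in an automated decision. The class name `DecisionAuditLog` and the example model and applicant identifiers are hypothetical; a real blockchain deployment would additionally replicate such records across multiple nodes.

```python
import hashlib
import json
import time

# Minimal sketch of a tamper-evident, append-only log for automated decisions.
# Each record is chained to the previous one by its hash, so any later edit to
# "how and when the data was used" breaks the chain and is detectable.

class DecisionAuditLog:
    def __init__(self):
        self.entries = []            # list of (record, record_hash) tuples
        self._prev_hash = "0" * 64   # genesis value for the first record

    def record_decision(self, model_id, input_summary, decision):
        record = {
            "timestamp": time.time(),        # when the data was used
            "model_id": model_id,            # which algorithm made the decision
            "input_summary": input_summary,  # what data it was used on
            "decision": decision,            # what was decided
            "prev_hash": self._prev_hash,    # link to the previous record
        }
        record_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((record, record_hash))
        self._prev_hash = record_hash
        return record_hash

    def verify(self):
        """Recompute the chain; returns False if any record was altered."""
        prev = "0" * 64
        for record, stored_hash in self.entries:
            if record["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != stored_hash:
                return False
            prev = stored_hash
        return True

# Hypothetical usage: log two decisions, then confirm the chain is intact.
log = DecisionAuditLog()
log.record_decision("credit-model-v2", "applicant 1042, income band C", "declined")
log.record_decision("credit-model-v2", "applicant 1043, income band A", "approved")
print(log.verify())  # True while no record has been tampered with
```

Because each record embeds the hash of its predecessor, altering any earlier entry invalidates every later one, which is what would allow an oversight body to verify the history of decisions after the fact.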
Conclusion
To date, no comparable oversight model has been implemented for AI and algorithms, yet such a model may be central to ensuring, on the basis of expert opinion, that states and businesses fulfil their human rights obligations. The newly established UK Centre for Data Ethics and Innovation has been set up as an advisory body to strengthen the existing algorithmic governance landscape, but it does not qualify as an effective oversight body and is conceived by the government as an interim measure to test the utility and value of the Centre’s functions. Lessons learned from such systems can provide useful insights for the future and for other jurisdictions. Another suggested model involves dedicated ombuds to address these issues, alongside industry regulatory bodies. Where violations are found, the framework could impose a range of measures to prevent recurrence and provide sufficient remedies to those affected. This is only one part of the process; states also need to establish justice-facilitating bodies so that no person is wronged in the future. Hence, determining which entity or entities are responsible for the harm is essential to providing the requisite remedy.