Concerns about the use of GenAI
The emergence of GenAI will have a significant impact on various industries, providing companies, organizations and individuals with unprecedented capabilities to create, synthesize and manipulate data. However, there are also concerns about the increasing use of GenAI models and tools, ranging from copyright infringement and the potential for misuse to the risk of job displacement.
The impact of GenAI on the labor market
In terms of the impact on employment, a common fear is that GenAI will lead to significant job losses in many industries as machines become capable of performing tasks previously done by humans. A recent study by Goldman Sachs predicts that GenAI will significantly disrupt the labor market, with around 300 million workers in major global economies exposed to some degree of GenAI automation (Financial Times 2023). However, the impact will vary significantly from job to job. Many professions will benefit from GenAI tools that complement their work and enhance their capabilities, allowing professionals to focus on higher-level, more strategic aspects of their work. As a result, GenAI, like other forms of automation before it, should boost GDP growth and raise overall income levels. Some jobs, however, are clearly at risk of becoming obsolete, and workers in AI-exposed occupations will need targeted support to transition to new opportunities and retrain for emerging roles. Moreover, unlike previous waves of automation, which mainly affected middle-skilled workers, the risk of AI displacement extends to certain higher-paid positions, such as some types of data analysts, market research analysts, bookkeepers or paralegals.
The role of GenAI regarding copyright and the protection of intellectual property
Concerns have also been raised about potential copyright infringement by GenAI-generated art, text and code, as well as about the copyrighted material used as training data. AI models may generate text, images and audio that closely resemble existing works, potentially infringing copyright. As a result, copyright issues are already fueling debate in a number of jurisdictions. In the US, for example, artists, writers and others have filed lawsuits accusing major AI companies such as OpenAI of using their copyrighted work to train AI systems without permission (Brittain 2023). Another much debated issue is whether AI inventions can be patented, as AI models and tools play an increasingly important role in innovation activities. For example, a recent decision by the US Federal Circuit stated that inventions developed purely by an AI machine are not patentable, whereas inventions made by humans with the assistance of AI are (Nemec and Rann 2023). These examples are non-exhaustive, and full consideration of the various legal issues surrounding IP and GenAI is beyond the scope of this study.
Other concerns include deepfakes and training biases
Other areas of concern include deepfakes, synthetic images or videos that insert a person’s likeness into existing footage without their consent. Because GenAI can create highly realistic and convincing content, deepfakes can be used for malicious purposes, such as spreading misinformation during election campaigns.
In addition, while the capabilities of GenAI models have improved significantly in recent years, these models still sometimes produce incorrect results. Chatbot answers, for example, may sound convincing yet be factually wrong, a phenomenon known as AI hallucination.
Will GenAI evolve into a general AI?
Although the latest GenAI models use human-like language and appear creative and intelligent in their output, they are still far from human intelligence: GenAI models do not truly understand their subject matter but make very good guesses based on the data they were trained on. Whether GenAI models can be improved to the point of genuine reasoning is the subject of much debate. Some AI advocates believe that GenAI is an essential step towards general AI and even consciousness. Others worry that the pace of GenAI development demands urgent action to ensure that humanity can still contain and manage these models; some experts have even called for a pause in AI development (Narayan et al. 2023). Still others argue that even faster advances in AI would provide tools to better understand the technology and make it safer. One example is OpenAI’s use of reinforcement learning from human feedback (RLHF) to create guardrails that make ChatGPT’s responses more accurate and appropriate (Goldman Sachs 2023). It is also important to note that there is significant disagreement among experts about when general AI will be achieved, with many believing that it is still a long way off (World Economic Forum 2023).
Regulations are being developed to address these concerns
In light of the above concerns about GenAI, new GenAI regulations are being developed and introduced around the world with the aim of harnessing the benefits of GenAI while mitigating the risks. The goals of regulation vary from country to country, but typically include protecting consumers, preventing misuse and ensuring responsible development.
China was one of the first countries to introduce legislation on GenAI, just months after ChatGPT was launched (Yang 2024). The country initially focused on individual pieces of legislation for different types of AI products; as a result, in 2023 China had separate rules for algorithmic recommendation services, deepfakes and GenAI. In January 2024, however, China’s Ministry of Industry and Information Technology issued draft guidelines for standardization of the AI industry, with more than 50 national and industry-wide AI standards to be established by 2026 (Ye 2024).
The EU has also been working on regulating AI. In March 2024, the EU Council and Parliament adopted the EU AI Act, which is expected to come into force in the coming months. The AI Act will regulate providers of artificial intelligence systems by categorizing applications into different risk levels, and includes dedicated provisions for general-purpose AI models. Companies developing GenAI models and tools deemed to pose a “high risk” to fundamental rights, such as those intended for use in sectors like education, healthcare and policing, will have to meet new EU standards. The law will require companies to be more transparent about how their models are trained and to disclose any copyrighted material used for training. It will also require AI systems deemed high risk to be trained and tested on sufficiently representative datasets, for example to minimize bias. Other uses of AI, such as the creation of facial recognition databases or the use of emotion recognition technology in the workplace or schools, will be banned outright in the EU.
The US has not yet regulated GenAI, but some steps have been taken, for example by the Federal Trade Commission (FTC). In addition, the Biden administration issued an executive order in October 2023 directing federal agencies to develop a comprehensive national strategy for regulating artificial intelligence, with GenAI as a specific area of concern. The focus in the US is on creating best practices, relying on individual agencies to craft their own rules.
Responsible AI best practices and approaches can also help
Another approach to addressing concerns about GenAI and AI in general is the development of so-called Responsible AI. Responsible AI is an umbrella term for making appropriate and ethical decisions related to AI. It includes best practices and responsibilities for companies and other organizations to ensure the responsible and ethical development and use of AI. Some steps to achieve this include (Gillis 2023):
Transparency and explanations: there should be documentation of the training data used and the algorithms employed to avoid potential copyright infringement and to allow users to understand how the GenAI models work. Users should also be educated about the risks and benefits of GenAI.
Accountability: mechanisms (for example regulatory frameworks) should be put in place to hold developers and users accountable for the ethical use of GenAI.
Avoid bias: responsible AI must ensure that biases are identified and addressed so that GenAI algorithms do not unfairly discriminate against certain groups of people (a minimal illustrative check is sketched after this list).
Human-in-the-loop approach: companies should integrate GenAI models with human oversight and judgment.
Monitoring: ongoing monitoring of GenAI models and tools ensures that real-world performance and user feedback are taken into account to address potential issues.
Resilience: GenAI and AI systems in general should be resilient to potential threats such as adversarial attacks.
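As a simple illustration of the bias point above, the following sketch audits a hypothetical log of GenAI outputs for disparities between user groups. The data, the flagging criterion and the threshold are all illustrative assumptions, not a prescribed method.

```python
# Minimal bias-audit sketch (hypothetical data): compare how often a GenAI
# system produces a flagged (e.g. refused or negative) output per user group
# and warn when one group's rate diverges strongly from the overall rate.
from collections import defaultdict

# Hypothetical log of generations: (user_group, output_was_flagged)
generation_log = [
    ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in generation_log:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

overall_rate = sum(f for f, _ in counts.values()) / sum(t for _, t in counts.values())
for group, (flagged, total) in counts.items():
    rate = flagged / total
    print(f"{group}: flagged rate {rate:.2f} (overall {overall_rate:.2f})")
    if overall_rate > 0 and rate / overall_rate > 1.5:  # illustrative threshold
        print(f"  -> potential bias: review outputs and training data for {group}")
```

In practice, checks of this kind would feed into the ongoing monitoring described above, with flagged disparities triggering a review of training data and guardrails.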
Limitations and future of patent analysis in relation to GenAI
Methods of patent analysis are changing
Patent analysis is an established field that has been widely used for years, and while the tools have become more powerful, the methodology has not changed dramatically: analysts use patent classes and keywords to build searches, then count, measure and visualize the results.
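To make this traditional workflow concrete, the following sketch runs a class-and-keyword query over a small, hypothetical in-memory patent collection and counts hits per year. Real analyses would issue equivalent queries against commercial or public patent databases; the CPC codes and keywords shown are illustrative only.

```python
# Simplified sketch of a traditional patent analysis workflow: filter a
# hypothetical in-memory patent collection by CPC class and keywords,
# then count matching filings per year.
PATENTS = [
    {"id": "P1", "year": 2021, "cpc": ["G06N3/08"], "abstract": "training a generative adversarial network"},
    {"id": "P2", "year": 2023, "cpc": ["G06N3/0475"], "abstract": "a large language model for text generation"},
    {"id": "P3", "year": 2022, "cpc": ["H04L9/32"], "abstract": "a blockchain authentication scheme"},
]

CPC_PREFIXES = ("G06N",)                     # AI-related CPC classes (illustrative)
KEYWORDS = ("generative", "language model")  # query terms (illustrative)

def matches(patent):
    in_class = any(code.startswith(CPC_PREFIXES) for code in patent["cpc"])
    has_keyword = any(kw in patent["abstract"].lower() for kw in KEYWORDS)
    return in_class and has_keyword

hits = [p for p in PATENTS if matches(p)]
counts_per_year = {}
for p in hits:
    counts_per_year[p["year"]] = counts_per_year.get(p["year"], 0) + 1
print(counts_per_year)  # e.g. {2021: 1, 2023: 1}
```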
However, the world of patent search and analysis may see its biggest changes in the coming years, and the patent landscape for GenAI shows perfectly why. Patent classes are created only after a concept has reached a certain popularity, and keywords are sharpened and defined only after they first appear. While traditional patent analysis of emerging topics has always been difficult, it is now close to impossible to assemble an accurate collection of patents using traditional tools alone. Digital fields now develop according to different dynamics than classical technical fields: new concepts or technological improvements can spread rapidly through the extensive reach of global networks. GenAI, and ChatGPT in particular, is a case in point. Within just five days the tool had a million users, whereas streaming services like Netflix took 3.5 years to reach the same number. Before the patent system can respond with new classification subclasses and groups or well-defined keywords, technology development has already advanced several steps further. At the same time, the terminology used is often generic or descriptive, and therefore nonspecific or semantically ambiguous. GenAI also exemplifies this issue: there is a semantic difference between “an image generated by AI” (from data that did not previously exist) and “AI was used in the image generation process” (as in image classification or object detection).
AI-based tools were already useful for the WIPO Technology Trends Report on AI (WIPO 2019), but modern classification tools and advanced LLMs are even more relevant today for creating an accurate patent collection for GenAI, as done in this report. We have already seen the introduction of patent-trained LLMs, for example from Google with BERT for Patents (Google 2020), from the EPO for automatic classification (EPO 2023) and for patent search with Searchformer (Vowinckel and Hähnke 2023), or from academia (Freunek and Bodmer 2021).
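A minimal sketch of how model-based classification can refine a collection is shown below. It uses a general-purpose sentence-embedding model purely as a stand-in (an assumption for illustration; a patent-trained model such as BERT for Patents would be the more natural choice in practice, and this is not the pipeline used for this report), scoring abstracts against a short description of GenAI.

```python
# Sketch: score whether patent abstracts relate to GenAI using text embeddings.
# Model choice and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

genai_description = (
    "Generative artificial intelligence: models such as GANs, VAEs, "
    "diffusion models and large language models that generate new text, "
    "images, audio, code or molecules."
)
abstracts = [
    "A diffusion model for synthesizing photorealistic images from text prompts.",
    "A brake assembly for reducing vibration in heavy-duty vehicles.",
]

query_emb = model.encode(genai_description, convert_to_tensor=True)
abstract_embs = model.encode(abstracts, convert_to_tensor=True)
scores = util.cos_sim(query_emb, abstract_embs)[0]

THRESHOLD = 0.3  # illustrative cut-off; would be tuned on labelled examples
for abstract, score in zip(abstracts, scores):
    label = "GenAI candidate" if score.item() > THRESHOLD else "out of scope"
    print(f"{score.item():.2f}  {label}: {abstract[:60]}")
```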
However, the way people will generate text, visuals, insights, statistics and even new innovations using GenAI tools has the potential to turn the patent analytics space, like many others, upside down. Instead of relying on patent classes that first need to be discussed, created and agreed, it will be possible to ask GenAI tools to show areas of development or to collect the appropriate patents for new cutting-edge concepts directly from a prompt or with fine-tuned models. The patent analysis of GenAI conducted in this report should therefore be understood as an approach using the most advanced methods and tools available at the time of the analysis. These tools can be expected to develop faster than the traditional patent information system and may therefore challenge established patent analysis methods very soon.
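The following sketch illustrates the prompt-based variant: asking a general-purpose LLM whether an abstract belongs in a GenAI collection. The client library, model name and prompt are illustrative assumptions, not the tooling used for this report; any instruction-following model could be substituted.

```python
# Sketch of prompting an LLM to decide whether a patent abstract concerns GenAI.
# Model name ("gpt-4o-mini") and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def is_genai_patent(abstract: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You classify patent abstracts. Answer only YES or NO."},
            {"role": "user",
             "content": "Does this patent concern generative AI (models that "
                        "generate text, images, audio, code or molecules)?\n\n" + abstract},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

print(is_genai_patent("A transformer-based model that generates source code from natural language."))
```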
We are only seeing the beginning of the GenAI patent wave
Collecting the patents for GenAI has been a challenge, but the growth is clearly visible and measurable. In contrast to scientific publications, however, we have very limited visibility of recent patent trends, as patents are generally published only 18 months after filing in most jurisdictions. It is likely that most current patent applications in GenAI have not yet been published, and we can expect a wave of related patents very soon, especially as the success of ChatGPT has driven innovation in a wide range of applications. For now, we can only estimate what is to come by counting patents from offices that publish earlier than the 18-month average, such as China. A future update of this study should be able to visualize this development, perhaps by using GenAI itself to do the work.
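One rough way to approximate this is sketched below: for each office, measure the publication lag of recently filed patents and count how many are already visible. The offices, dates and lag calculation are hypothetical and serve only to illustrate the idea.

```python
# Sketch: gauge near-term filing activity via offices that publish earlier than
# the usual 18 months. All records below are hypothetical.
from datetime import date

filings = [  # office, filing date, publication date (None if not yet published)
    {"office": "CN", "filed": date(2023, 9, 1),  "published": date(2024, 1, 15)},
    {"office": "US", "filed": date(2023, 10, 5), "published": None},
    {"office": "CN", "filed": date(2023, 11, 20), "published": date(2024, 4, 2)},
]

visible = {}
for f in filings:
    if f["published"] is not None:
        lag_months = (f["published"] - f["filed"]).days / 30.4
        visible.setdefault(f["office"], []).append(lag_months)

for office, lags in visible.items():
    avg = sum(lags) / len(lags)
    print(f"{office}: {len(lags)} recent filings already visible, "
          f"average publication lag {avg:.1f} months")
```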