In a reversal of roles, technological progress can undermine creators’ rights and thereby hinder the very progress it drives. Recent developments in AI have shaken up creators’ rights, and we expect further transformations in the years to come. The anticipated evolution towards increasingly sophisticated AI agents capable of perceiving the environment, making decisions and taking actions autonomously to achieve specific goals will probably redefine the boundaries of what counts as creative work generated by humans. Ensuring that human creativity is fully recognised (in law and otherwise) as AI transforms the landscape for innovation will be vital to scientific and technological progress. To this end, we examine the impacts of the AI revolution on two types of creators, scientific authors and AI users who deploy their creativity to generate AI outputs, to better anticipate the future of creativity and innovation.
First, scientific authors. The published works of scientists (along with literary and artistic productions) are routinely used to train large language models (LLMs) and other generative-AI models. Attribution is technically possible, and it is typical of agentic AI (systems that can make decisions and act without human intervention) carrying out “deep research” or “reasoning” tasks: such outputs tend to cite the scientific sources used to produce them. In many other circumstances, however, outputs include no attribution by default, even when scientific sources were used to train the model. Where the source remains invisible and no attribution is made, citations vanish. This has a double significance: citations are both an engine of the scientific enterprise and a currency for scientists.
Attribution is an engine of science as a cumulative endeavour. Knowledge builds on previous work. Proper attribution allows researchers to position their work within the existing scientific landscape, demonstrating how their findings build upon, challenge or synthesise previous discoveries. This intellectual lineage is vital for the progress and validation of scientific knowledge. Attribution is also a currency in science. A strong record of both giving and receiving appropriate attribution enhances a scientist's standing among peers, and it is a pathway to career advancement, funding and collaboration opportunities. If this incentive is removed, science as a profession will be transformed.
Looking into the future, this trajectory seems unsustainable without a change in how AI platforms digest scientific creativity and in the ways science produces knowledge. One legal proposal is to place a duty on AI developers to build systems capable of tracing scientific sources: attribution and integrity functionalities must be developed or activated, and the use of scientific sources to train LLMs must be made conditional on the model’s capacity for attribution. Note that the basis for this duty is not respect for copyright (which is disputed and currently litigated), but ensuring credit for authorship and respecting the moral rights of creators.
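To make the proposal concrete, the sketch below shows one way an attribution functionality could travel with generated text. It is a minimal illustration, not an existing platform API: the SourceRecord and AttributedPassage structures and the render_with_citations helper are hypothetical names introduced here.

```python
from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    """A scientific source that contributed to a generated passage."""
    authors: str
    title: str
    doi: str

@dataclass
class AttributedPassage:
    """Generated text bundled with the sources it draws on."""
    text: str
    sources: list = field(default_factory=list)

def render_with_citations(passage: AttributedPassage) -> str:
    """Append a numbered citation list so attribution travels with the output."""
    if not passage.sources:
        return passage.text  # no traced sources: nothing to attribute
    citations = "\n".join(
        f'[{i}] {s.authors}, "{s.title}", doi:{s.doi}'
        for i, s in enumerate(passage.sources, start=1)
    )
    return f"{passage.text}\n\nSources:\n{citations}"

# Hypothetical example: a passage whose provenance was traced during generation.
passage = AttributedPassage(
    text="Prior cohort studies report a similar dose-response pattern.",
    sources=[SourceRecord("Smith et al.", "A cohort study of X", "10.1000/example")],
)
print(render_with_citations(passage))
```

Under the proposal, the substantive work lies in the tracing itself; once sources are traced, attaching them to outputs, as above, is straightforward.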
Second, we come to AI users, who also have moral rights whose forms are not yet obvious. AI has already changed how human creativity is understood, and the act of using AI to generate an output is in itself creative. But to what extent do AI users create content if the creative process is partly outsourced to AI? A complete assessment of AI’s impact on scientific and technical creativity is not yet possible; we are only at the beginning of this trajectory. LLMs are a first-generation technology, and the field is anticipated to move towards symbolic and neuro-inspired AI, probably triggering new forms of human creativity.
Anticipating these new forms requires making them legally visible so that the moral rights of authors are recognised. As we enter an age of AI-embedded creativity, two aspects of human creativity become more relevant in analysing the moral rights of creators. The first relates to authorship and the power of the prompt; the second, to potential novel ways to construct (build, assemble and present) scholarship.
AI outputs depend not only on the AI model deployed but also on the human input interrogating the model. Prompts are undoubtedly critical to activating the creative tasks through which LLMs generate novel outputs. To be effective, however, prompting in the AI sphere requires new forms of “intellectual labour”. The problem is well explored by Sebastian Porsdam Mann and colleagues:
Although the use of an LLM should not fundamentally change the criteria for what constitutes a substantial contribution, we argue that it introduces new forms of intellectual labour, such as prompt engineering, knowledge embeddings, model fine-tuning, and data and output curation, that need to be evaluated alongside more traditional forms of contribution.
The anticipatory challenge is recognising these new forms of intellectual labour in a way that entitles creators to claim authorship.
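As a small illustration of why prompting can amount to intellectual labour rather than a throwaway query, consider the sketch below: each field of a structured prompt (framing, curated sources, methodological constraints) records a deliberate authorial choice. The build_prompt helper, its field names and the example content are assumptions made for illustration only.

```python
def build_prompt(role, curated_context, task, constraints):
    """Assemble a structured prompt; each layer encodes a deliberate
    choice by the author (framing, source curation, constraints)."""
    context_block = "\n".join(f"- {c}" for c in curated_context)
    constraint_block = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Curated background (selected and vetted by the author):\n{context_block}\n\n"
        f"Task: {task}\n\n"
        f"Constraints:\n{constraint_block}"
    )

prompt = build_prompt(
    role="a reviewer of observational epidemiology studies",
    curated_context=[
        "Cohort design of the study under review",
        "Known confounder: smoking status",
    ],
    task="Draft a limitations section for the manuscript.",
    constraints=[
        "Cite only the sources listed above",
        "Flag uncertain claims explicitly",
    ],
)
print(prompt)
```

The choices of role, curated context and constraints shape the output as surely as a study design shapes a result, which is precisely why such labour is a candidate for authorship credit.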
The power of prompts can also be used to craft scientific outputs tailored to the user. Publications can be made available by an AI platform in a way that is customised to the reader’s preferences and expertise. Stephen Witt, author of The Thinking Machine, suggests that the (non-fiction) book may evolve into “something more like a knowledge database” that a reader can retrieve in different formats by prompt. This could also apply to forms of scholarship other than books. In this scenario, the author has the power to creatively assemble scholarship as a knowledge database. Here, too, recognition of this “intellectual labour” must be anticipated.
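A toy sketch of what such a “knowledge database” could look like, assuming a hypothetical store in which the same scholarship is kept once and rendered on demand for different audiences; the schema, the retrieve helper and the entries are invented for illustration.

```python
# One body of scholarship, stored once, rendered to match the reader.
KNOWLEDGE_BASE = {
    "attention mechanisms": {
        "expert": "Scaled dot-product attention weights values by query-key similarity.",
        "novice": "The model learns which earlier words matter most for predicting the next one.",
    },
}

def retrieve(topic: str, audience: str = "novice") -> str:
    """Return the stored entry rendered for the requested audience."""
    entry = KNOWLEDGE_BASE.get(topic)
    if entry is None:
        return f"No entry for '{topic}'."
    # Fall back to any available rendering if the audience is unknown.
    return entry.get(audience, next(iter(entry.values())))

print(retrieve("attention mechanisms", audience="expert"))
```

The authorial labour here lies in curating, structuring and writing the alternative renderings, not merely in the final text a given reader happens to see.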
These new and potentially empowering possibilities come with downsides. Gains in creativity might come at the price of disengagement and dissatisfaction with a tool that hides some of the intellectual labour that goes into its workings. AI also risks rewarding or empowering only a particular kind of creativity. While the training materials for LLMs form a vast and complex ecosystem of text and increasingly multimodal data sourced from across the internet, books, academic publications and other digital repositories, biases remain. An obvious example is the preference for AI inputs in English. Heavy reliance on English data (more than 90% of the training tokens of GPT-3 are English) can lead to an “English-centrism” in AI models, thus supporting or “rewarding” English-speaking creators. This raises human rights issues of equal participation and access in the development of AI. From an anticipation perspective, privileging English in AI development may result in a narrower appreciation of expertise and knowledge. Cultivating diverse viewpoints seems a preferable path for addressing the growing complexity of the world.
Disempowerment may also result from AI substituting for or replacing existing forms of creativity, as is already evident with AI tools that generate images, logos and stories. The effect is to devalue forms of creativity that were valued in the past but are susceptible to being supplanted by AI. The concept of cultural heritage can help preserve creators’ human rights. One exciting avenue for tackling these problems lies in blockchain technologies, which can act as a tamper-proof record of ownership for digital assets. As Sarah Kenderdine wrote in the 2022 GESDA Breakthrough Radar:
Distributed ledger technologies (DLTs) and non-fungible tokens (NFTs), in particular, have emerged as a way to assert rights over digital art and culture. These technologies could open the door to new, more inclusive models of ownership, in which the rights and control of creative artefacts can be shared among many people simultaneously, or entire communities. Smart contracts embedded in the blockchain could encode how these digital assets can be used and by whom.
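The tamper-proofing invoked above rests on cryptographic hash chaining. The self-contained Python sketch below is not a real blockchain, NFT standard or smart-contract language; it only illustrates the core idea: each ownership record commits to everything recorded before it, so editing any past entry breaks verification of all later ones. The OwnershipLedger class and its example entries are hypothetical.

```python
import hashlib
import json

def _chain_hash(record: dict, prev_hash: str) -> str:
    """Hash the record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class OwnershipLedger:
    """Append-only record of who holds which rights over a digital asset."""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, creator: str, asset_id: str, rights: str) -> str:
        record = {"creator": creator, "asset": asset_id, "rights": rights}
        prev = self.entries[-1][1] if self.entries else "genesis"
        digest = _chain_hash(record, prev)
        self.entries.append((record, digest))
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampering invalidates later hashes."""
        prev = "genesis"
        for record, digest in self.entries:
            if _chain_hash(record, prev) != digest:
                return False
            prev = digest
        return True

ledger = OwnershipLedger()
ledger.append("community-archive", "artefact-001", "shared display rights")
ledger.append("artist-collective", "artefact-002", "resale royalties")
assert ledger.verify()
ledger.entries[0][0]["creator"] = "someone-else"  # tamper with history
assert not ledger.verify()  # the chain no longer checks out
```

In a distributed setting, many parties hold copies of such a ledger and can each run the verification independently, which is what removes the need for a central registry and makes shared, community-held ownership records plausible.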
Human creativity is at the core of scientific progress. AI tools have already cracked some of the basic operations of knowledge production in science. While the genie is out of the bottle, the path to the future can yet be written in a way that aligns with the moral rights of scientific authors and AI users. The development of more powerful, better-trained AI tools can yield benefits for society. However, the moral and material rights of creators, and everyone’s right to scientific and technological advancements, must be integrated and aligned. To anticipate means to examine how this technology should evolve, as well as how human (particularly scientific) creativity can be re-imagined. There is an opportunity to develop a clearer and more cohesive understanding of human rights standards for innovation. It is time to form a new “Pact for Science” that realises the promise of AI while fully recognising human creativity and dignity.