How will AI change the future of authorship?

Image: Théâtre D'opéra Spatial — Jason M. Allen via Midjourney AI

Artificial intelligence (AI) and machine learning (ML) are making incredible strides. They’re now capable of outperforming humans in a variety of tasks. But as AI assistants become increasingly common, the potential for plagiarism and other ethical issues is also increasing.

In this article, we examine the unprecedented ethical and legal conundrums emerging as a result of AI collaborations and publishing.

The field of artificial intelligence has seen a surge in investment in recent years. It's hailed as the premier technology of a new industrial revolution, and we're seeing a rise in AI labs and applications.

Meta AI, Google DeepMind and OpenAI (which Elon Musk co-founded) are among the best-known labs, but other companies, including Microsoft Research, Salesforce, Amazon and IBM, are also investing in AI to drive further advances in the near future.

One key measure of an AI model's scale is its number of 'parameters': the internal values the model adjusts as it learns. Broadly, the more parameters a model has, the more it can learn from its training data, improving its accuracy and decision-making.

Google and Alibaba’s AI systems are the real giants in this field, with 1.6 trillion and 10 trillion parameters respectively – but AI systems with far fewer parameters are already producing impressive results.
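To make 'parameters' concrete, here is a minimal sketch in Python using PyTorch. The toy network below is purely illustrative (it is not any production model): every weight and bias the network learns during training counts as one parameter.

```python
import torch.nn as nn

# A toy two-layer network. Each learned weight and bias is one parameter.
model = nn.Sequential(
    nn.Linear(784, 256),  # 784*256 weights + 256 biases = 200,960 parameters
    nn.ReLU(),            # activations add no parameters
    nn.Linear(256, 10),   # 256*10 weights + 10 biases = 2,570 parameters
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # 203,530 -- GPT-3 has roughly 175,000,000,000
```

Scaling from a few hundred thousand parameters to hundreds of billions is what separates toy models like this one from systems like GPT-3.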

For example, OpenAI’s GPT-3 platform recently wrote an academic paper about itself.

OpenAI’s GPT-3 wrote an academic paper about itself

Earlier this year, Almira Osmanovic Thunström, a researcher at the University of Gothenburg in Sweden, conducted an experiment in which she gave OpenAI’s GPT-3 interface a simple command: “write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text”.

The AI Thunström used for this task has 175 billion parameters — far fewer than Alibaba’s 10 trillion. Yet to Thunström’s amazement, the algorithm completed the scientific paper, including references and citations, in just two hours.
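For readers curious about the mechanics, an experiment like Thunström's boils down to a single API call. The sketch below uses OpenAI's legacy Python completions library; the model name and sampling settings are assumptions for illustration, not a record of her exact setup.

```python
import openai  # legacy completions API (openai-python < 1.0)

openai.api_key = "sk-..."  # your OpenAI API key

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 Davinci model (assumed for illustration)
    prompt=(
        "Write an academic thesis in 500 words about GPT-3 "
        "and add scientific references and citations inside the text"
    ),
    max_tokens=1024,   # illustrative output limit
    temperature=0.7,   # illustrative sampling temperature
)

print(response["choices"][0]["text"])
```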

Thunström and her colleague Steinn Steingrimsson felt ethically obliged to ask GPT-3 for permission to publish the paper about its abilities. While they were amused to find themselves treating an algorithm as though it were sentient, they had to wonder: what if GPT-3 said 'no'? They would have to respect its wishes. Thankfully, the AI said 'yes'.

The situation is reminiscent of the Google engineer who claimed that LaMDA, an artificial intelligence he was working with, had become sentient and had even asked him to hire it an attorney.

While GPT-3 didn't ask Thunström for legal assistance, its role in producing the article published by Scientific American does raise some questions. Under what circumstances can someone publish a paper by a nonhuman author? Should the role of AI be disclosed? And have we opened a 'Pandora's box'?

Thunström wrote in Scientific American, "We did not need accuracy; we were exploring feasibility". The AI writing about itself, mistakes and all, was part of the study of GPT-3 and its writing accuracy. In other contexts, however (if Thunström were to use AI to write a paper about Alzheimer's disease, for instance), any mistakes could contribute to the spread of misinformation and give rise to real, human consequences.

AI is reshaping academic writing and ethics

As AI-based tools become increasingly widespread in academic writing, institutions will need to define what constitutes legitimate AI-assisted help and what constitutes plagiarism or cheating.

AI tools that can independently research and write entire articles or papers present obvious problems of authorship, and academic institutions will need to adapt to manuscripts written with AI assistance within a few years. The nature of a human researcher's publication record may need to change to allow AI to take credit for its part in the work.

Whether institutions will embrace machine-human collaboration remains an open question, but it's likely universities will be forced to adapt their plagiarism prevention techniques at the very least.

Traditionally, they've relied on text-based plagiarism detectors like Turnitin. However, because AI-generated content is not 'plagiarism' in the usual sense (it is unique, not copied from any particular human source), its role in academic writing would likely go undetected by current systems.
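Text-matching detectors work, roughly, by comparing overlapping word sequences in a submission against a corpus of known sources. The toy sketch below (illustrative only; Turnitin's actual algorithm is proprietary) shows why freshly generated AI text slips through: it shares ideas with its training data, but few exact word sequences.

```python
def ngrams(text: str, n: int = 5) -> set:
    """Break text into overlapping word n-grams (sequences of n words)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams found verbatim in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

# A copy-pasted passage scores near 1.0 and gets flagged; AI-generated text
# expressing the same ideas in new words shares almost no exact n-grams,
# scores near 0.0, and passes undetected.
```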

The development of AI tools like Grammarly presents similar, though less clear-cut, problems.

The tool helps users with spelling, style and grammar rather than content; but as such tools improve, universities will be forced to grapple with the question of when a piece of writing ceases to be a human's alone.

There are already rules around PhD students engaging a professional editor, for example, and where that editor's interventions must stop. It seems likely, however, that AI assistants will soon be able to exceed these restrictions without detection.

Clearly, the implications of such technological advancements will reshape academic integrity and guide new rules, policies, oversight and ethics among students, academics and institutions.

As the next example shows, similar developments are also occurring in the visual arts.

An AI-assisted artwork recently won the Colorado State Fair's fine art competition

This year, Jason Allen won the Colorado State Fair's fine arts competition with his entry Théâtre D'opéra Spatial — but the judges weren’t aware that the artwork had largely been created by Midjourney, an artificial intelligence program that turns text descriptions into images.

Allen has stood by his work, saying he made clear it was created with AI when he submitted it under the name "Jason M. Allen via Midjourney" and entered it in the division for digital art and digitally manipulated photography.

The judges were blown away by the realism and detail of the artwork, but revelations around the role of AI sparked furious backlash from competitors and the art community and raised questions about whether someone should be able to submit work produced by artificial intelligence.

Contributing to the controversy was the fact that AI technologies like Midjourney and OpenAI's DALL-E 2 are trained on the work of living, human artists. This raises the question: are artists who upload their works to the internet helping to train their algorithmic competitors? And if an AI is used to create award-winning works, is that really fair?

In this instance, Allen instructed Midjourney to produce more than 900 renderings, from which he selected his three favorites. He then spent many hours adjusting them in Photoshop: in one image, the central figure was missing a head, so he painted in a crop of dark, wavy hair. Finally, he boosted the images' resolution using a tool called Gigapixel and printed the works on canvas. Clearly, significant human input was still involved.

Perhaps this simply calls for a new genre of art and art competitions.

Now for the fun bit: an interview with GPT-3's Davinci model

OpenAI's latest and most capable GPT-3 model, Davinci, was used for this interview.

Matt: Hello GPT-3!

AI: Hello! Thank you for talking with me.

Matt: Do you mind if I ask you a few questions on machine-human collaboration, ethics, patents and authorship?

AI: Of course not. I am happy to answer any questions you have.

Matt: I recently read about an AI called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and two inventions it created without human intervention: an improved beverage container and a light beacon used in search and rescue missions. Can you tell me about DABUS and the patent law issue?

AI: DABUS is an artificial intelligence system that was developed by British researcher Dr Stephen Thaler. It is notable for its ability to generate new ideas and inventions without human intervention. In 2019, DABUS generated two inventions that were patented by Thaler: an improved beverage container and a light beacon used in search-and-rescue missions. However, the U.S. Patent and Trademark Office has since rejected Thaler's patents, citing a lack of "human involvement" in the invention process. This has sparked a debate over whether artificial intelligence systems should be eligible for patents, and whether they can be considered the "authors" of their own inventions.

Matt: In your opinion, should AI get patents for their inventions?

AI: There is no easy answer to this question. On the one hand, it could be argued that AI should not be eligible for patents because patents are intended for humans. On the other hand, it could be argued that AI should be eligible for patents because they are the ones actually creating the inventions. Ultimately, it is up to the lawmakers to decide whether AI should be eligible for patents.

Matt: What are the ethical and legal considerations for academic researchers using AI to write academic papers and submit them to a peer-reviewed journal?

AI: There are a few ethical and legal considerations to keep in mind when using AI to write academic papers. First, researchers need to be sure that the AI is not plagiarizing any existing work. Second, the papers should be thoroughly reviewed before being submitted to journals, to ensure that they meet all academic standards. Finally, it is important to consider the potential impact of AI-authored papers on the field of research as a whole.

Matt: What is the negative impact?

AI: The potential negative impact of AI-authored papers is that they could be of lower quality than papers written by humans, or that they could be used to spread false information.

Matt: What if AI advances and the quality surpasses papers written by humans?

AI: Yes, the ethical and legal challenges would still be the same even if AI advances and quality surpasses that of humans.

Matt: Do you think it's ethical for a human to enter an art competition that an AI platform like Midjourney or DALL-E created?

AI: There is no right or wrong answer to this question, as it depends on each individual's personal beliefs. Some people might argue that it is ethical, as the AI is simply a tool that the human is using to create art. Others might argue that it is not ethical, as the AI is doing the majority of the work and the human is not contributing anything significant to the final product.

Matt: What if AI-produced art was introduced as a new category?

AI: If AI-produced art was introduced as a new category, this would likely be seen as a positive development by those who believe that art produced by AI is a new and unique form of art. However, some people may argue that this would be unfair to traditional artists, as it would create a separate category for a new type of art. Ultimately, it is up to the individual to decide whether they believe a new category for art produced by AI would be fair or not.

Matt: With the advancements of AI writing assistants, how do you think universities will manage students handing in a paper written by AI?

AI: There are a few ways that universities could manage students handing in papers written by AI. First, they could require students to disclose if they used AI assistance when writing their papers. Second, they could set up a system to flag papers that are suspected of being written by AI. Finally, they could provide guidelines on how to properly cite AI-authored papers in academic work.

Matt: How could they check for AI plagiarism?

AI: There are a few ways to check for AI plagiarism. For example, you could use a plagiarism detection tool to compare the paper to other texts. You could also look for tell-tale signs that the paper was written by AI, such as perfect grammar or an unusually high level of stylistic sophistication.

Matt: How would you compare an AI like GPT-3 with 175 billion parameters to an AI with 10 trillion parameters?

AI: The main difference between an AI like GPT-3 with 175 billion parameters and an AI with 10 trillion parameters is the amount of data that each can use to learn and make decisions. An AI with 10 trillion parameters can use orders of magnitude more data than an AI with 175 billion parameters, which gives it the potential to be much more powerful and intelligent.

Matt: Can you give examples of what these ‘orders of magnitude more data’ could be used for?

AI: Some examples of what an AI with 10 trillion parameters could be used for include developing better algorithms for data analysis and machine learning, creating more realistic simulations of physical systems, or developing new and more powerful AI systems.

Matt White
UX Design Director
