The pros and cons of AI have been heavily debated, with some people going so far as to argue that the age of AI is also the end of originality. It has been suggested that an AI's output is not subject to any accountability, and that the user of that output can easily escape allegations of plagiarism. The problem seems simple at first. AI platforms such as Google Bard, ChatGPT and other OpenAI products are trained on pools of data gathered online. This data comes from our online conversations, material posted in the public domain, and images and paintings available on the internet. The same goes for music available online. The AI can mimic and alter any form of this data to produce output at the request of the user. Experiments have shown how convincingly deepfake videos can be produced for propaganda. The line between the original and the machine-produced seems to be getting blurrier with each passing day.
This brings us to the question of whether we can use intellectual property laws to take AI to court. This is going to become a very important question in the coming days, and it leads us to the difficulty of proving plagiarism, i.e. the black box problem.
The black box problem is a term used to describe the difficulty of understanding how artificial intelligence (AI) systems make decisions. As mentioned above, this is because AI systems are often trained on large amounts of data, and the process of how they learn from this data is not always transparent. As a result, it can be difficult to explain why an AI system made a particular decision, or to identify potential biases in the system. This difficulty makes accusations of plagiarism weak. To put it simply, if we do not and cannot know how an AI app came to the conclusion it did, we cannot accuse it of plagiarism. What complicates matters further is putting a finger on what exactly has been plagiarised. Has the writing style been plagiarised, or the words, or the tone of the writer, or, even more importantly, the voice of a singer? What happens when you 'see it but can't put your finger on it'? This would make the entire case for plagiarism or theft weak.
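To make this concrete, consider a toy sketch in Python. It is purely illustrative, not a real detector: it shows that verbatim copying leaves a measurable trace (shared word n-grams), while a mimicked style or tone shares no such fingerprint, which is exactly why the "what was plagiarised?" question is so hard.

```python
# Toy illustration: word-trigram Jaccard overlap between two texts.
# Verbatim copying scores high; a stylistic imitation with different
# words scores zero, even though a human might "see" the resemblance.

def ngrams(text, n=3):
    """Set of word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_overlap(a, b, n=3):
    """Jaccard similarity of word n-grams: 1.0 means identical phrasing."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the quick brown fox jumps over the lazy dog"
verbatim = "the quick brown fox jumps over the lazy dog today"
paraphrase = "a fast auburn fox leaps above a sleepy hound"

print(jaccard_overlap(original, verbatim))    # high: copying is visible
print(jaccard_overlap(original, paraphrase))  # 0.0: no shared trigrams
```

The second comparison is the crux: the paraphrase imitates the original's structure and imagery, yet surface-level overlap metrics see nothing, mirroring the evidentiary gap the article describes.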
The black box problem is a challenge for a number of reasons. First, it can make it difficult to trust AI systems, which is becoming a problem for AI adoption across sectors. One forward-looking approach is to develop AI systems that are more transparent, using techniques such as visualization and explanation. Another approach is to develop AI systems that are more accountable. This can be done by building systems that can explain their decisions and that can be audited for bias. For example, in a plagiarism lawsuit, the AI system could be asked which sources it drew upon to model its output, whether text, image or voice.
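What such an audit trail might look like is still an open question; the sketch below is a hypothetical provenance log in Python. All of the names and fields here are assumptions for illustration, not any real platform's API. The idea is simply that a system which records which sources influenced an output could later answer the court's question from the previous paragraph.

```python
from dataclasses import dataclass, field

# Hypothetical provenance record (illustrative only): logs which training
# sources influenced a given output and how strongly, so an auditor or
# court could later query it in a plagiarism dispute.
@dataclass
class ProvenanceRecord:
    output_id: str
    sources: list = field(default_factory=list)  # (source_id, weight) pairs

    def attribute(self, source_id, weight):
        """Record how strongly a training source influenced this output."""
        self.sources.append((source_id, weight))

    def audit(self, threshold=0.3):
        """Return sources influential enough to matter in a legal claim."""
        return [s for s, w in self.sources if w >= threshold]

record = ProvenanceRecord(output_id="img_0042")
record.attribute("artist_A_painting", 0.55)
record.attribute("stock_photo_123", 0.10)
print(record.audit())  # ['artist_A_painting']
```

Of course, the hard part in practice is computing those influence weights at all, which is precisely what the black box problem makes difficult today.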
The black box problem is a complex challenge, but it is an important one to address. As AI systems become more widely used, it is essential that we are able to understand how they work, lest we lose the battle for human uniqueness and originality to machines.