Human vs. AI: Who is responsible for AI mistakes?

27/05/2024 | OneAdvanced PR

In an era of daily technological advances, Artificial Intelligence (AI) stands out as a double-edged sword. While society admires self-driving cars navigating highways and virtual assistants simplifying our daily tasks, incidents such as the fatal collision involving an automated Uber vehicle and wrongful arrests stemming from flawed facial recognition serve as cautionary tales, highlighting the limitations and imperfections of AI.

This raises an important question: who is responsible when AI systems make mistakes – the humans who create and use them, or the AI itself? The answer is not as simple as one might think. Responsibility for AI mistakes lies at the junction of humans and machines, so exploring both perspectives is essential to understand each party's role in causing – and preventing – those mistakes.

Humans’ involvement in AI mistakes

Humans – be they programmers, data scientists, designers, or engineers – sit at the core of AI development. These experts bring their own values, convictions, and biases into the building and training of AI systems, and AI models can mirror those imperfections, producing errors or biased decisions. Consider Amazon's facial recognition software, Rekognition, which exhibited notably higher error rates for women and people of colour – a clear illustration of the ethical dilemmas that stem from biased training datasets.
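
To make the idea of dataset bias concrete, below is a minimal Python sketch of the kind of audit that can reveal such disparities: it compares a model's error rate across demographic groups. The records and group labels are illustrative placeholders, not real Rekognition results.

    from collections import defaultdict

    # Hypothetical evaluation records for a face-matching model:
    # (predicted_match, actual_match, demographic_group).
    # Placeholder data for illustration only.
    records = [
        (True,  True,  "group_a"),
        (True,  False, "group_b"),
        (False, True,  "group_b"),
        (True,  True,  "group_a"),
        (False, False, "group_a"),
        (True,  False, "group_b"),
    ]

    errors = defaultdict(int)
    totals = defaultdict(int)
    for predicted, actual, group in records:
        totals[group] += 1
        if predicted != actual:  # any mismatch counts as an error here
            errors[group] += 1

    # A markedly higher rate for one group is the statistical
    # signature of the bias described above.
    for group in sorted(totals):
        print(f"{group}: error rate {errors[group] / totals[group]:.0%}")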

Furthermore, humans are responsible for supervising the deployment of AI systems and ensuring they align with ethical standards and regulations. Yet AI systems have repeatedly gone wrong because of insufficient testing and a lack of proper oversight – the Uber collision with a pedestrian and facial recognition unfairly targeting specific demographics, both mentioned above, are cases in point.

And finally, there are the users. Those who use AI tools should do so responsibly: understand the tools' limitations and never depend on them blindly for critical decisions. Relying solely on AI for medical diagnoses, legal judgments, or content creation, for example, can have severe repercussions.

These instances show that human error in AI mishaps often stems from a lack of ethical consideration during the development and implementation of AI software. Yet it would be unfair to blame humans alone: in several cases, the fundamental cause of an AI breakdown lies in inherent flaws within the technology itself, which we discuss next.

AI’s involvement in AI mistakes

Despite the significant human element in the creation and supervision of Artificial Intelligence, the autonomous nature of AI systems presents unique challenges in attributing mistakes. To examine AI's involvement in errors, let's delve into the following aspects:

  • Autonomy of AI decisions: AI systems, particularly those powered by machine learning, develop their own understanding of tasks over time, which can lead to unpredictable outcomes. Google's photo-tagging algorithm, for instance, once erroneously labelled two African Americans as gorillas – a profoundly offensive and racist error for which the company promptly apologised.
  • Speed and scale of impact: AI operates and affects decisions at a speed and scale no human can match. Facebook's (Meta's) news feed algorithm has been criticised for amplifying fake news and polarising content, influencing public opinion and even election outcomes through its automated content curation.
  • Adaptability leading to errors: AI's capability to adapt can itself produce mistakes. Tesla's Autopilot system, for example, came under scrutiny after several accidents, raising questions about how well AI adapts to the complexities of real-world driving conditions.
  • Inherent flaws in design: Some of AI's interpretative errors trace back to flaws in its very design. A particularly stark example is the Flash Crash of 2010, in which algorithmic trading contributed to a sudden, drastic stock market fall. Although swiftly corrected, the episode showed how AI can amplify mistakes through inherent design vulnerabilities.

The need for shared responsibility

As the above arguments show, both humans and AI play a crucial role when AI makes mistakes. Rather than blaming one party alone, a shared-responsibility approach is pivotal to preventing such errors and mitigating their impact. This approach encompasses ethical consideration at every stage of an AI system's life – from development to deployment to continuous learning – together with oversight and the responsible use of AI tools by individuals. Robust safety measures and AI regulations are equally important to ensure all parties remain liable for any AI failures that occur.

What perspective does OneAdvanced hold?

As newcomers to the AI domain, we at OneAdvanced recognise that responsibility for AI mistakes cannot be assigned solely to humans or machines; it requires a shared commitment to the ethical and transparent use of AI. We understand that the potential for AI mistakes is a genuine risk, and we are committed to mitigating it by adhering to ethical practices throughout the development and implementation of our AI systems.

To achieve this, we continually educate ourselves, our stakeholders, and our customers on the responsible use of AI. Our objective is to develop ethical, unbiased, and secure AI-integrated systems that enhance human efficiency without compromising accountability or responsibility for their actions.

To further explore the foundational strategies that can assist in cultivating a responsible AI ecosystem, read our in-depth article on "How can organisations build a responsible AI framework?"