
Making AI Better Than Us – What Could Possibly Go Wrong?

March 15, 2024


It was not until the end of 2022 that ChatGPT was released to the public, meaning we all had to learn what generative AI was about. Its overnight success was astonishing: it took just five days to reach a million users. The model behind ChatGPT 3.5 has roughly 175 billion parameters, and ChatGPT 4 is reported to have been trained on roughly ten trillion words. No matter how one looks at such incomprehensible numbers, they represent a staggering share of the information humans have ever produced. Structured and unstructured data are processed and stored in what may be likened to a giant data vacuum cleaner sucking up everything it can.

Generative AI continues to learn from us and our collected data every moment of every day. It learns our languages, our biases (conscious and unconscious), our nuances, our way of writing, our way of speaking, and everything there is to know about humans. But knowing and understanding are not necessarily the same.

As interest in generative AI and all its offerings increases, so does concern over bias and ethics, not to mention false and misleading information. Without proper filters, it is no wonder AI sometimes spits out some very scary stuff. Nor should it surprise us that much of AI's output is a true reflection of us, with all our faults and biases. Sometimes AI can make things worse by "speaking" with authority.

Since so much of generative AI is trained on human documentation, it is only natural for AI systems to learn from "us," inheriting our conscious and unconscious biases and ethical challenges.

Ethics and bias in AI are serious issues with profound implications for society. And in case further proof is needed, here is why:

  • Impact on Individuals and Society: Biases in AI systems can perpetuate and amplify societal inequalities, leading to discriminatory outcomes in hiring, lending, criminal justice, and healthcare. These biases can result in unfair treatment, marginalization, and exacerbation of social disparities.
  • Trust and Acceptance: Ethical concerns surrounding AI, including bias, transparency, accountability, and privacy, can erode trust in AI technologies. Lack of trust may hinder the adoption and acceptance of AI systems, limiting their potential benefits to society.
  • Legal and Regulatory Implications: As AI technologies become more pervasive, there is a growing need for regulations and legal frameworks to address ethical considerations and mitigate potential harm. Failure to address ethical issues in AI could lead to legal challenges, regulatory fines, and reputational damage for organizations deploying AI systems.
  • Human Rights and Dignity: Ethical AI development should prioritize protecting human rights and dignity. Biased AI systems can potentially infringe upon individuals' rights to fairness, non-discrimination, privacy, and autonomy.
  • Long-Term Implications: The decisions made by AI systems can have long-lasting and far-reaching consequences. Ethical considerations must be integrated into the entire lifecycle of AI development, from data collection and model training to deployment and impact assessment, to ensure that AI technologies serve the greater good and align with societal values.

Addressing ethics and bias in AI requires a multi-stakeholder approach involving collaboration between policymakers, technologists, ethicists, researchers, civil society organizations, and the public. It involves implementing measures such as diversity in dataset collection, transparent and accountable AI algorithms, ethical guidelines and standards, bias detection and mitigation techniques, and ongoing monitoring and evaluation of AI systems' impacts on society. By prioritizing ethics and bias mitigation in AI development, we can strive to create more equitable, trustworthy, and beneficial AI technologies for all.
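To make "bias detection" less abstract, here is a minimal sketch of one common audit measure, the demographic parity gap, which asks whether an automated decision favors one group over another. The data, group labels, and function name are hypothetical, for illustration only; real audits use richer metrics and statistical testing.

```python
# Minimal sketch of one bias-detection measure: the demographic parity gap.
# All data and labels below are hypothetical, for illustration only.

def demographic_parity_gap(decisions, groups):
    """Difference in favorable-outcome rates between the best- and
    worst-treated groups.

    decisions: list of 0/1 outcomes (1 = favorable, e.g., loan approved)
    groups:    list of group labels, one per decision
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())  # 0.0 = equal treatment

# Toy example: group A is approved 80% of the time, group B only 20%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # ~0.6, a gap worth auditing
```

A large gap does not prove discrimination by itself, but it flags exactly the kind of disparity that ongoing monitoring is meant to surface.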

While bias is considered one of the leading concerns in AI, we rarely hear about another term that can be just as significant: prejudice. Some believe they are one and the same, but while bias and prejudice in AI are closely related, they carry distinct nuances:

Bias refers to an inherent slant or unfairness in the data used to train an AI system. It can arise from factors such as:

  • Selection bias: occurs when the training data does not represent the real world and is skewed toward a certain group.
  • Data bias: occurs when the data reflects existing societal prejudices, such as criminal records that disproportionately affect certain demographics.

Prejudice is a stronger term implying a preconceived negative attitude towards a particular group. AI doesn't inherently hold prejudices, but biased training data can lead it to make discriminatory decisions that reflect those prejudices.

Both bias and prejudice in AI can lead to unfair outcomes, highlighting the importance of using diverse and unbiased datasets to train AI systems.
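To see how selection bias turns into discriminatory output, consider a deliberately simplified sketch, with all numbers hypothetical: a toy "model" that merely memorizes each group's historical approval rate. Because group B is barely sampled, and its few records happen to be denials, the model concludes that group B never qualifies.

```python
# Minimal sketch of selection bias; all numbers are hypothetical.
# The toy "model" simply memorizes each group's approval rate from training.
from collections import defaultdict

def fit_group_rates(samples):
    """samples: list of (group, outcome) pairs; returns approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, outcome in samples:
        totals[group] += 1
        approvals[group] += outcome
    return {g: approvals[g] / totals[g] for g in totals}

# Skewed training set: group B is badly underrepresented, and its few
# sampled cases are all denials.
training = [("A", 1)] * 70 + [("A", 0)] * 25 + [("B", 0)] * 5
print(fit_group_rates(training))  # {'A': ~0.74, 'B': 0.0}
```

The model holds no attitude toward group B; it simply generalizes from a skewed sample, and the result is indistinguishable from prejudice in its effect.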

As AI architects struggle to develop algorithms that directly address bias and prejudice, we must be mindful that they may never reach collective agreement. This is not as easy to address as some suggest. After all, ethics are derived from societal norms, which can change over time. So, as technologists struggle to find fixes aligned with what might be considered "commonly accepted," others may view such alterations as discriminatory. Unfortunately, many individuals are unaware of their biases and, perhaps, their prejudices. Can we make AI better than us? And if such a plan or vision exists, might it make things worse? Those are questions for humans to decide before AI decides for us.

Dr. Alan R. Shark is the Executive Director of the Public Technology Institute (PTI), a division of the nonprofit Fusion Learning Partners, and Associate Professor at the Schar School of Policy and Government, George Mason University, where he is also an affiliate faculty member at the Center for Advancing Human-Machine Partnership (CAHMP). Shark is a Fellow of the National Academy of Public Administration and Co-Chair of the Standing Panel on Technology Leadership. He also hosts the bi-monthly podcast Sharkbytes.net. Dr. Shark acknowledges collaboration with generative AI in developing certain materials.