It was not until the end of 2022 that ChatGPT was released to the public, and suddenly we all had to learn what generative AI was about. Its overnight success was remarkable: it took just five days to reach a million users. GPT-3.5 has some 175 billion parameters, and GPT-4 was reportedly trained on trillions of words. However one looks at such incomprehensible numbers, they reflect how much information humans have produced. Structured and unstructured data are processed and stored in what may be likened to a giant data vacuum cleaner sucking up everything it can.
Generative AI continues to learn from us and our collected data every moment of every day. It learns our languages, our biases (conscious and unconscious), our nuances, our way of writing, our way of speaking, and everything there is to know about humans. But knowing and understanding are not necessarily the same.
As interest in generative AI and all its offerings increases, so does concern over bias and ethics, to say nothing of false and misleading information. Without proper filters, it is no wonder that AI sometimes produces some very scary output. Nor should it surprise us that much of that output is a true reflection of us, with all our faults and biases. Sometimes AI can make things worse by “speaking” with authority.
Since so much of generative AI is trained on human documentation, it is only natural for AI systems to learn from “us,” inheriting our conscious and unconscious biases and ethical challenges.
Ethics and bias in AI are incredibly serious concerns with profound implications for society.
Addressing ethics and bias in AI requires a multi-stakeholder approach: collaboration among policymakers, technologists, ethicists, researchers, civil society organizations, and the public. It means implementing measures such as diversity in dataset collection, transparent and accountable AI algorithms, ethical guidelines and standards, bias detection and mitigation techniques, and ongoing monitoring and evaluation of AI systems' impact on society. By prioritizing ethics and bias mitigation in AI development, we can strive to create more equitable, trustworthy, and beneficial AI technologies for all.
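To make one of those measures concrete, bias detection often begins with a simple audit of a model's outcomes across groups. The sketch below, written in Python with entirely hypothetical data and function names, computes a "demographic parity" gap: the difference in approval rates between groups. A large gap does not by itself prove prejudice, but it flags a system that deserves scrutiny.

```python
# Minimal sketch of one bias-detection measure: a demographic parity audit.
# All names and data here are hypothetical, for illustration only.

from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the gap between the highest and lowest approval rates
    across groups, plus the per-group rates.
    `decisions` is a list of (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions produced by some model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

gap, rates = demographic_parity_gap(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(gap)    # 0.5 -- a large disparity that would warrant investigation
```

Audits like this are only a starting point; demographic parity is one of several competing fairness definitions, and which one applies is ultimately a policy judgment, not a technical one.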
While bias is considered one of the leading concerns with AI, we rarely hear about another term that can be just as significant: prejudice. Some believe they are one and the same, but while bias and prejudice in AI are closely related, they have distinct nuances:
Bias refers to the inherent slant or unfairness in the data used to train an AI system. It can arise from various factors, such as unrepresentative samples, historical inequities embedded in records, or the choices made when labeling data.
Prejudice is a stronger term implying a preconceived negative attitude towards a particular group. AI doesn't inherently hold prejudices, but biased training data can lead it to make discriminatory decisions that reflect those prejudices.
Both bias and prejudice in AI can lead to unfair outcomes, highlighting the importance of using diverse and unbiased datasets to train AI systems.
As AI architects work to develop algorithms that directly address bias and prejudice, we must be mindful that they may never reach collective agreement on what fairness requires. This is not as easy to address as some suggest. After all, ethics derive from societal norms, and norms change over time. So as technologists devise fixes for what might be considered “commonly accepted,” others may view those very alterations as discriminatory. Unfortunately, many individuals are unaware of their own biases and, perhaps, prejudices. Can we make AI better than us? And if such a plan or vision exists, might it make things worse? These are questions for humans to decide before AI decides them for us.
Dr. Alan R. Shark is the Executive Director of the Public Technology Institute (PTI), a division of the nonprofit Fusion Learning Partners, and Associate Professor at the Schar School of Policy and Government, George Mason University, where he is also an affiliate faculty member at the Center for Advancing Human-Machine Partnership (CAHMP). Shark is a Fellow of the National Academy of Public Administration and Co-Chair of the Standing Panel on Technology Leadership. He also hosts the bi-monthly podcast Sharkbytes.net. Dr. Shark acknowledges collaboration with generative AI in developing certain materials.