Artificial intelligence research dates back to the 1950s, but only recently has it taken the world by storm. Chatbots such as ChatGPT, Google Bard and Snapchat My AI are increasingly being geared toward the general public. However, we’re failing to recognize the drawbacks these systems create. AI is highly susceptible to bias, and all who use these platforms must recognize that and actively work to counter it.
AI bias is real, and it’s dangerous. This can be seen in Google’s online advertising system, which showed ads for high-paying jobs to men more often than to women. Similarly, in healthcare, computer-aided diagnostic systems have produced less accurate results for Black patients than for white patients.
Not only is this damaging on a physical level, it’s also a dangerous omen for the future. AI adoption is rapidly increasing: according to a Forbes survey, 73% of businesses use or plan to use these programs. In the classroom, more than half of students expect their use of AI to increase in the coming months, as reported by an Inside Higher Ed study.
Perhaps most concerning is data showing that 42% of students and teachers believe AI creates a more equitable system, according to PR Newswire. This reflects our ignorance of the biases AI perpetuates, and in that ignorance we’re dooming ourselves to further marginalization.
Now, it’s important to acknowledge that AI is not the sole culprit. In a world where minority groups are sorely underrepresented, software that pulls information from its surrounding environment will naturally reproduce that preexisting lack of representation. True change must come from a deeper level, by examining human and systemic biases. However, given technology’s growing influence on our lives, it’s imperative that we also focus on AI. That starts with critical media literacy, through source analysis and thorough discussion.
The term media literacy is used so often that it has lost much of its meaning. But it takes on a new role with AI, where the student is not conducting research themselves; rather, a chatbot is doing the work. It’s important to add a human touch to these activities in order to counter bias. Whether you’re pasting a prompt into a chatbot or using it to sort data, read through what was created and fact-check it. If there’s an inaccuracy, report it. Students can email the Federal Trade Commission, a government agency dedicated to protecting Americans from deceptive and unfair practices.
The world we live in is rich in diversity and beauty. It’s time we actively and appropriately represent that, rather than allowing biased AI to further a system of discrimination.