Can AI Replicate Human Biases? 

  • May 21, 2018

    Artificial intelligence promises to increase efficiency, free us from mundane tasks and improve our lives overall. Some argue that moving decisions from humans to computers will also remove biases and create a fairer workplace. While this argument seems compelling on the surface, we first need to understand how AI actually works.

    AI serves as a catch-all term for computers that can make decisions autonomously. However, the technology we refer to as AI is expansive and varies widely in capability. In previous years, people tended to describe AI in four major categories: reactive, limited memory, theory of mind and self-awareness. Now, developers more often refer to AI in terms of machine learning or deep learning. A machine’s true ability to learn is still a nascent concept.

    Machine Learning

    In machine learning, computers use algorithms to make predictions based on patterns in data they have previously been exposed to. Deep learning, a subset of machine learning, allows machines to “think” more like humans: deep learning algorithms can make predictions despite noise, missing information and ambiguity, drawing inferences and handling novel situations. True deep learning looks much more like the AI we fantasize about in science fiction.

    The AI we use and interact with on a daily basis leverages machine learning to make decisions based on historical data and previous observations. By design, machine learning can perpetuate biases already embedded in our society. A story aired by NPR in 2016 provides examples of where biases in AI can become dangerous.
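
    To make the point concrete, here is a minimal sketch of how a model trained on biased historical decisions simply learns to repeat them. It uses Python and scikit-learn, and the “historical hires” are invented purely for illustration; nothing here comes from the NPR story.

    ```python
    # A minimal sketch: a model trained on past decisions reproduces
    # the pattern in those decisions. All data below is invented.
    from sklearn.linear_model import LogisticRegression

    # Each row: [years_of_experience, attended_favored_school (1/0)]
    X = [
        [5, 1], [6, 1], [4, 1], [7, 1],   # past candidates from the favored school
        [5, 0], [6, 0], [4, 0], [7, 0],   # equally experienced candidates who were not
    ]
    # Past outcomes: 1 = hired, 0 = rejected. The school, not experience,
    # drives the historical decisions.
    y = [1, 1, 1, 1, 0, 0, 0, 1]

    model = LogisticRegression().fit(X, y)

    # Two new candidates with identical experience, differing only in school:
    print(model.predict_proba([[6, 1]])[0][1])  # high probability of "hire"
    print(model.predict_proba([[6, 0]])[0][1])  # noticeably lower probability
    ```

    The model was never told to prefer one group; it simply learned the pattern present in its training data, which is exactly how historical bias carries forward.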

    AI & Hiring Processes

    Lately, HR professionals and their tech counterparts are paying close attention to how AI can affect the hiring process. AI can assist hiring managers by filtering through applications and identifying the best candidates for interviews. However, biases in the programming logic, whether conscious or not, can dismiss excellent candidates without the hiring manager even realizing it. Relying on algorithms takes away human intuition and the ability to take risks that can pay dividends.
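
    As a hypothetical illustration of how this can happen, the sketch below shows a simple screening rule; every threshold and keyword is invented, not drawn from any real screening product.

    ```python
    # A hypothetical resume-screening rule of the kind the article warns
    # about. All thresholds and keywords here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        name: str
        years_experience: int
        employment_gap_months: int
        skills: set

    def passes_screen(a: Applicant) -> bool:
        # Seemingly neutral rules encode assumptions: a long employment gap
        # may reflect caregiving or illness, and a rigid skill list can
        # screen out strong candidates with unconventional backgrounds.
        return (
            a.years_experience >= 5
            and a.employment_gap_months <= 6
            and {"python", "sql"} <= a.skills
        )

    applicants = [
        Applicant("A", 8, 18, {"python", "sql", "ml"}),  # rejected by the gap rule
        Applicant("B", 5, 0, {"python", "sql"}),         # passes
    ]
    print([a.name for a in applicants if passes_screen(a)])  # ['B']
    ```

    Candidate A, the more experienced applicant, never reaches the hiring manager, and no one downstream sees why.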

    An NBC News article highlights the fact that most AI programming professionals in Silicon Valley are highly educated white males. While they may program AI that works for them, they often don’t account for the unintended consequences that arise when real customers use these technologies in the market.

    While some of these consequences may be benign, others may prove more severe given the gravity of the decisions AI algorithms carry out. When creating AI programs, consider carefully who your end users will be and who the outcomes will affect. As we delve further into the digital age, we need to think beyond the short-term conveniences that tech can provide and think critically about the bigger picture.
