Artificial intelligence is beginning to make its way into legal systems throughout the world. At the highest level, it can be used to help judges and others within the judicial system deliver judgments and sentences free from human bias, or so its proponents claim. The problem with assuming an AI-powered system or platform is free from the biases we all carry is that the assumption simply isn't true, at least not at this point in the development of artificial intelligence.
The problem lies in how different forms and applications of artificial intelligence are created. AI-powered devices and systems are built by humans: we supply the data, teach the system how to learn from and apply that information, and the end user is, of course, also human. The issue with human developers supplying the information that forms the core of an AI-powered system is that humans naturally carry their own biases and are likely to select and interpret data in ways that reflect them.
This may not even be intentional, as many of us are completely unaware of the biases we carry with us every day. So it is easy to see how a problem can arise when artificial intelligence is used in a justice or legal system. For one, if the data is collected from a sample that is not diverse, this can create a massive problem. Consider a country with a very diverse and dense population, such as the United States or the United Kingdom: how realistic is it to collect a sample that represents every combination of gender, race, religion, socioeconomic status, and level of education?
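To see how a skewed sample propagates into a system's outputs, consider a minimal sketch using entirely hypothetical case records. A naive "risk model" that simply learns each group's historical conviction rate will faithfully reproduce whatever imbalance the sample contains:

```python
from collections import Counter

# Hypothetical historical case records: (group, outcome).
# Group B is over-represented among convictions in this sample,
# so the skew lives in the data before any model is trained.
historical_cases = (
    [("A", "acquitted")] * 70 + [("A", "convicted")] * 30 +
    [("B", "acquitted")] * 40 + [("B", "convicted")] * 60
)

def learn_rates(cases):
    """Learn each group's historical conviction rate and
    use it as a 'risk score' for future defendants."""
    totals, convictions = Counter(), Counter()
    for group, outcome in cases:
        totals[group] += 1
        if outcome == "convicted":
            convictions[group] += 1
    return {g: convictions[g] / totals[g] for g in totals}

risk = learn_rates(historical_cases)
print(risk)  # {'A': 0.3, 'B': 0.6}
# Two otherwise identical defendants receive different scores
# purely because of the group label in the skewed sample.
```

No malice is required anywhere in this pipeline; the model is "accurate" with respect to its training data, and that is precisely the problem when the training data reflects an unequal system.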
Some of these factors, such as the level of intelligence a defendant can demonstrate, play a great role in how they are judged by the legal system. If an AI-powered system has not been properly taught to recognize this kind of discrepancy, could our judicial systems wrongfully convict or sentence an individual who was not aware of the severity of their crime? Furthermore, while it is certainly possible, and quite easy, to create a biased system by accident, what about governments or corporations that intentionally try to build bias into artificial intelligence?
It is no secret that the law favors those who do not belong to marginalized and ostracized groups. People of color and members of certain religious groups are convicted and incarcerated at higher rates than those who are white and of higher socioeconomic standing. Of course, the other side of this conversation is the use of artificial intelligence to help those who lack the access and resources to hire a lawyer. Without such help, an individual is usually at the mercy of a public defender or is poorly represented at trial.
Judicial systems that use artificial intelligence to assist those who cannot secure proficient legal guidance can help ensure that their cases are handled with the same care, detail, and respect as those of people who can afford better representation.