
Fully Realized Project

When I first sat down to brainstorm this story, I had a few dates in mind for when it would take place: the year 2100...maybe 2050...or I could go crazy and set it in the 30th century. I was sure that this story was going to be a futuristic thought experiment. Computers deciding the law? That was very far-off, I thought.

Then I did some research, and I saw that it was practically already here.

 

Prisoners of color denied parole because of racist algorithms. Families of color denied housing because of racist algorithms. Teenagers of color, disproportionately Black teenagers, classified by algorithms as more dangerous than domestic terrorists.

 

Why was this happening? There are a few reasons an algorithm might discriminate the way a human does, but the common theme is that the people who built these algorithms had too little oversight to keep their own biases from being baked in. So why are these systems trusted to be more precise, more correct, more moral than we are on matters of human nature?
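To make the "baked in" point concrete, here is a minimal, hypothetical sketch; the group names, records, and risk rule are all invented for illustration and don't come from any real system. A "model" that simply learns from biased historical decisions will reproduce that bias in its predictions.

# A toy illustration (all names and numbers are made up): the "model" below
# is nothing but the historical denial rate for each group, so whatever
# prejudice shaped those past decisions becomes its prediction.

historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def predicted_risk(group):
    # "Learn" risk purely from how often this group was denied in the past.
    outcomes = [denied for g, denied in historical_decisions if g == group]
    return sum(outcomes) / len(outcomes)

print(predicted_risk("group_a"))  # 0.75 -> flagged as "high risk"
print(predicted_risk("group_b"))  # 0.25 -> flagged as "low risk"

Nothing in that arithmetic is hateful; the harm comes entirely from what it was trained on and from nobody questioning it.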

 

I wanted to put this story in a similar life-and-death context to really drive home that discrimination in an algorithm IS the difference between life and death for many people in our current world. Ethics can no longer be an afterthought.

 

The answer to "Can we make robots less biased than we are?" is, at the moment: yes. It's possible.

Technology is not universally bad or universally good; it's clear that almost nothing is that simple. It is a tool, and like any tool, in the hands of a person with misguided intentions (or no intention at all), it has a great capacity for harm.

 

Ask more questions about the algorithms you interact with: your YouTube search results, your top news feeds. Ask who likely wrote them, and who they were likely written for.

 

There is a path forward, but first we need enough people to notice the problem.

For further reading on this topic, check out the Research page.
