The Plural Network Initiative aims to explore the root causes of gaps and biases in machine learning datasets, to educate the public about their consequences, and to include everyone in improving AI-driven technologies.
Every day we produce and consume gigabytes' worth of data while shopping, working, driving and even sleeping. Our society relies increasingly on technology and AI-driven processes to optimize how we interact with digital services and with each other. Now that the pace of development in artificial neural networks is picking up, we may stand on the brink of a machine learning revolution.
But algorithms are not perfect, and the data they work with is far from diverse enough to fully reflect the vibrant, colorful world we live in. We humans are visual creatures, and how we see the world deeply affects how we interact with it. Now we are teaching machines to do the same: to look at the world, to try to understand it, and to make increasingly important decisions based on their observations. Not only are machines very good at amplifying pre-existing biases, they are also often unable to recognize critical gaps in the data we feed them. And if they do not behave as expected, it is extremely difficult to look into the black boxes of their minds.
Shortcomings in machine learning applications mostly cause annoyance today, but they may have disastrous consequences in the future.
Today, your photo app may tag your wedding pictures incorrectly simply because your attire did not conform to Western expectations. Tomorrow, a law enforcement drone might detain you based on the color of your skin or facial features it deemed alien or threatening. Biases in how machines see our world will inevitably lead to discrimination and marginalization. They will cause health and safety threats. And they will further solidify existing divisions in our communities.
We at the Plural Network Initiative strongly believe in a healthy and fair digital society. The opportunities provided by artificial intelligence must be equally accessible to everyone, so that a person's demographic characteristics do not determine the quality of service and experience they receive. We are committed to uncovering biases and gaps in AI-driven systems. We want to develop tools that catalyze collaboration among diverse communities and show machines the world as we see and experience it.