Words matter. These are the best Kate Crawford quotes, well worth sharing with your friends.
It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science.
We should have due-process protections for algorithmic decisions equivalent to those for human decisions.
Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters – from who designs it to who sits on the company boards and which ethical perspectives are included.
With big data comes big responsibility.
Many of us now expect our online activities to be recorded and analyzed, but we assume the physical spaces we inhabit are different. The data broker industry doesn’t see it that way. To them, even the act of walking down the street is a legitimate data set to be captured, catalogued, and exploited.
Error-prone or biased artificial-intelligence systems have the potential to taint our social ecosystem in ways that are initially hard to detect, harmful in the long term, and expensive – or even impossible – to reverse.
Books about technology start-ups have a pattern. First, there’s the grand vision of the founders, then the heroic journey of producing new worlds from all-night coding and caffeine abuse, and finally, the grand finale: immense wealth and secular sainthood. Let’s call it the Jobs Narrative.
Biases and blind spots exist in big data as much as they do in individual perceptions and experiences. Yet there is a problematic belief that bigger data is always better data and that correlation is as good as causation.
Hidden biases in both the collection and analysis stages present considerable risks and are as important to the big-data equation as the numbers themselves.
There is no quick technical fix for a social problem.
Surveillant anxiety is always a conjoined twin: The anxiety of those surveilled is deeply connected to the anxiety of the surveillers. But the anxiety of the surveillers is generally hard to see; it’s hidden in classified documents and delivered in highly coded languages in front of Senate committees.
We should always be suspicious when machine-learning systems are described as free from bias if they’ve been trained on human-generated data. Our biases are built into that training data.
There’s been the emergence of a philosophy that big data is all you need. We would suggest that, actually, numbers don’t speak for themselves.
As AI becomes the new infrastructure, flowing invisibly through our daily lives like the water in our faucets, we must understand its short- and long-term effects and know that it is safe for all to use.
While massive datasets may feel very abstract, they are intricately linked to physical place and human culture. And places, like people, have their own individual character and grain.
Data will always bear the marks of its history. That is human history held in those data sets.
We need a sweeping debate about ethics, boundaries, and regulation for location data technologies.
Self-tracking using a wearable device can be fascinating.
If you’re not thinking about the way systemic bias can be propagated through the criminal justice system or predictive policing, then it’s very likely that, if you’re designing a system based on historical data, you’re going to be perpetuating those biases.
Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations.
Numbers can’t speak for themselves, and data sets – no matter their scale – are still objects of human design.
Data is something we create, but it’s also something we imagine.
We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future.
Big Data is neither color-blind nor gender-blind. We can see how it is used in marketing to segment people.
Facebook is not the world.