Forget Killer Robots—Bias Is the Real AI Danger

Google’s AI chief isn’t fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea said before a recent Google conference on the relationship between humans and AI systems.
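That mechanism is easy to demonstrate. The sketch below uses entirely hypothetical loan data and a trivial majority-vote "model" to show how a system trained on skewed historical decisions reproduces the skew rather than correcting it:

```python
# Minimal sketch (hypothetical data) of a model inheriting bias from its
# training labels: identical applicants from group "B" were historically
# approved less often, and the learned rule simply repeats that pattern.
from collections import defaultdict

# (group, credit_score_band, decision) with 1 = approved. The labels are
# biased: same score band, different approval rates by group.
train = [
    ("A", "high", 1), ("A", "high", 1), ("A", "high", 1),
    ("B", "high", 1), ("B", "high", 0), ("B", "high", 0),
    ("A", "low", 0),  ("B", "low", 0),
]

# A trivial "model": predict the majority label seen for each (group, band).
counts = defaultdict(lambda: [0, 0])
for group, band, label in train:
    counts[(group, band)][label] += 1

def predict(group, band):
    denied, approved = counts[(group, band)]
    return 1 if approved > denied else 0

# Two applicants with identical credit scores get different predictions;
# the bias in the historical data is learned, not removed.
print(predict("A", "high"))  # → 1 (approved)
print(predict("B", "high"))  # → 0 (denied)
```

No real system is this simple, but more sophisticated models trained on the same skewed labels face the same problem: the algorithm optimizes for agreement with the data it was given, including its biases.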

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it (see “Biased Algorithms Are Everywhere, and No One Seems to Care”).

“It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems,” Giannandrea added. “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”

The various biases we hold (which can be quite subtle) influence the decisions we make. This is what's called bias, and we know that humans carry a surprising number of cognitive biases. The same holds for the AI systems whose decisions we're tempted to trust blindly. Long before super-intelligent killer robots arrive, AI systems swayed by bias while making decisions on our behalf may pose the greater threat.