Monday 12 December 2016

Demerits Of Handing Over Our Decision Making To AI


The lack of transparency around the processes of Google’s search engine has preoccupied scholars since the company began. Long before Google expanded into self-driving cars, smartphones and ubiquitous email, it was being asked to explain the principles and ideologies that determine how it presents information to us. And now, nearly two decades later, the impact of reckless, subjective and inflammatory misinformation served up on the web is being felt like never before in the digital era.

Google responded to negative coverage this week by reluctantly acknowledging the problem and removing offensive autosuggest results for certain queries. Type “jews are” into Google, for example, and until now the site would autofill “jews are evil” before surfacing links to several rightwing antisemitic hate sites.
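
Google has never published how its suggestion ranking works, but the basic mechanic of prefix completion is easy to sketch. The toy below is purely an illustration – it assumes completions are drawn from a log of past queries and ranked by raw frequency, which is an assumption rather than Google’s confirmed method – but it shows the feedback loop that lets popular phrasings, however ugly, surface for the next user.

```python
from collections import Counter

class Autosuggest:
    """Toy prefix-completion index.

    Ranks past queries by raw frequency -- an assumption made purely
    for illustration; Google's actual ranking signals are not public.
    """

    def __init__(self):
        self.query_counts = Counter()

    def record(self, query):
        # Every submitted query feeds back into the suggestion pool.
        self.query_counts[query.lower()] += 1

    def suggest(self, prefix, k=3):
        # Return the k most frequent past queries starting with the prefix.
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.query_counts.items()
                   if q.startswith(prefix)]
        matches.sort(key=lambda item: item[1], reverse=True)
        return [q for q, _ in matches[:k]]

engine = Autosuggest()
for q in ["jazz standards", "jazz standards", "jazz guitar"]:
    engine.record(q)
print(engine.suggest("jazz"))  # ['jazz standards', 'jazz guitar']
```

Nothing in that loop knows or cares whether a completion is true, fair or hateful; it only knows what was typed before.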

That follows the misinformation debacle of the US presidential election. When Facebook CEO Mark Zuckerberg addressed the issue, he admitted that structural issues lie at the heart of the problem: the site financially rewards the kind of sensationalism and fake news likely to spread rapidly through the social network, regardless of its veracity or its impact. The site does not identify bad reporting, or even distinguish fake news from satire.

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or to encourage users to think critically and identify potential problems with what they read and share, Facebook is relying on algorithmic solutions that rate the trustworthiness of content.
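
Facebook has said little about how such ratings would work in practice. As a hedged sketch – every signal, weight and domain list below is invented for illustration, not Facebook’s method – a trust-scoring system might look something like this.

```python
# All signals, weights and domain lists here are invented for
# illustration; Facebook has not published how its system works.

KNOWN_RELIABLE = {"nytimes.com", "reuters.com"}    # hypothetical whitelist
KNOWN_HOAX = {"example-fake-news.biz"}             # hypothetical blacklist

def trust_score(domain: str, headline: str, has_named_sources: bool) -> float:
    """Return a score in [0, 1]; higher means 'more trustworthy'."""
    score = 0.5                                    # neutral prior
    if domain in KNOWN_RELIABLE:
        score += 0.3
    if domain in KNOWN_HOAX:
        score -= 0.4
    if headline.isupper() or headline.endswith("!!!"):
        score -= 0.2                               # crude sensationalism cue
    if has_named_sources:
        score += 0.1
    return max(0.0, min(1.0, score))

# The judgment happens inside the system, not in the reader's head:
print(trust_score("example-fake-news.biz", "SHOCKING TRUTH!!!", False))  # 0.0
```

Whoever picks those lists and thresholds is making editorial judgments on everyone’s behalf – exactly the outsourcing of judgment at issue here.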

This approach could have detrimental long-term social consequences. The scale and power with which Facebook operates mean the site would effectively be training users to outsource their judgment to a computerised alternative. And it leaves even less room to cultivate the kind of 21st-century digital skills – such as reflective judgment about how technology shapes our beliefs and relationships – that are already perilously lacking.

The engineered environments of Facebook, Google and the rest increasingly discourage us from engaging with information in an intellectually meaningful way. We, the masses, aren’t stupid or lazy when we believe fake news; we’re primed to keep believing what we’re led to believe.

The networked info-media environment that has emerged in the past decade – of which Facebook is an important part – encourages people to accept what’s presented to them without reflection or deliberation, especially when it appears alongside credible information or is passed on by someone they trust. There’s a powerful, implicit value in information shared between friends, which Facebook exploits – but that same value accelerates the spread of misinformation as much as it does good content.

Every piece of information appears to be presented and assessed with equal weight: a New York Times article followed by some fake news about the pope; a funny dog video shared by a close friend next to a distressing, unsourced and unverified video of an injured child in some Middle East conflict. We have more information at our disposal than ever before, yet we’re paralyzed into passive complacency. We’re being engineered into passive, programmable people.
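
That flattening is easy to see in miniature. In the toy ranking below, the engagement weights and the friend-share boost are pure assumptions – the real News Feed signals are proprietary – but the point stands however they are tuned: truth never enters the score.

```python
# Weights are invented for illustration; Facebook's real News Feed
# signals are proprietary. Note that veracity never enters the score.

def feed_score(likes: int, shares: int, shared_by_friend: bool) -> float:
    score = likes + 3 * shares        # assumed engagement weighting
    if shared_by_friend:
        score *= 2                    # assumed boost for friend shares
    return score

stories = [
    ("New York Times investigation",
     dict(likes=120, shares=10, shared_by_friend=False)),
    ("Fabricated story about the pope",
     dict(likes=90, shares=60, shared_by_friend=True)),
]
ranked = sorted(stories, key=lambda s: feed_score(**s[1]), reverse=True)
print([title for title, _ in ranked])
# ['Fabricated story about the pope', 'New York Times investigation']
```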

In the never-ending stream of comfortable, unchallenging personalized info-tainment there’s little incentive to break off, to triangulate and fact-check against reliable and contrary sources. Actively choosing what might need investigating feels like too much effort, and even then a quick Google search of a questionable news story seen on Facebook may simply turn up a rehashed version of the same fake story.

The “transaction costs” of leaving the site are high: switching gears is fiddly and takes time, and it’s also far easier to passively accept what you see than to challenge it. Platforms overload us with information and encourage us to feed the machine with easy, speedy clicks. The media feeds our susceptibility to filter bubbles and capitalizes on contagious emotions such as anger.

It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave. Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.

This capacity is a mental muscle; only repeated use makes it strong. And when we spend a long time in places that deliberately discourage critical thinking, we lose the opportunity to keep building that skill.

