Cannes Lions 2017: Google’s machine learning solutions to the very human problem of online bullying

We know these numbers to be true because we have seen and experienced them for ourselves: 72% of internet users worldwide have witnessed harassment, and 47% have been targets of abuse themselves. Even more troubling is the consequence: “Roughly a third of people online say that they self-censor out of fear of what people might do in retaliation for something that they say,” revealed Jared Cohen, speaking at the Cannes Lions International Festival of Creativity on June 20, 2017, in a talk titled “Machine Learning for Everyone.”

“There’s this awkward irony that it’s in the democratic societies where you want to allow the free flow of information but also protect people from harassment that many publishers and platforms, out of just an inability to figure out how to manage all this, end up shutting down comments and discussion altogether or partially shutting down comments and discussion,” warns Cohen, president of Jigsaw, the incubator within Google’s holding company Alphabet tasked, as its website proclaims, “to tackle some of the toughest global security challenges facing the world today—from thwarting online censorship to mitigating the threats from digital attacks to countering violent extremism to protecting people from online harassment.”

A father to two young children, Cohen notes, “There’s also a normative issue…You look at this sort of spread of toxicity and you can’t help but to wonder if this is the frame of reference that they’re going to have for how people talk to each other.”


True enough, in the virtual world where people can log in anonymously, hide behind avatars, or simply escape accountability through their remoteness, a culture of impunity, disinformation, and intimidation has been fostered. A world without consequence is a world without morality. Completely lost to today’s digital generation is the “Blogger’s Code of Conduct” proposed by publisher, investor, and open-source evangelist Tim O’Reilly in 2007, which admonishes, “We won’t say anything online that we wouldn’t say in person.”

Automated solutions

Faced with huge amounts of comments on news articles and online forums—too many for any human to monitor judiciously—Jigsaw, with its Perspective artificial intelligence technology that uses machine learning to spot harassment on the web, plans to rank messages and comments by a single quantifiable property—toxicity—defined as how likely a message is to drive people away from a conversation. Perspective can alert moderators of online discussions to toxic comments, and can warn users themselves when they are about to send toxic messages that could damage relationships and incriminate them.
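For developers, Perspective is exposed as a web API that returns a probability-like toxicity score for a piece of text. The following is a minimal sketch of how a moderation script might query it, assuming the publicly documented v1alpha1 comments:analyze endpoint; the API key, sample comments, and 0.8 review threshold are placeholders for illustration, not anything prescribed by Jigsaw.

```python
import requests

# Placeholder key for illustration; a real project obtains one from the Google Cloud console.
API_KEY = "YOUR_API_KEY"
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            "comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text: str) -> float:
    """Ask Perspective to score a single comment for TOXICITY (0.0 to 1.0)."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # A moderator's queue could flag anything above a chosen threshold for human review.
    for comment in ["Thanks for sharing this.", "You are an idiot."]:
        score = toxicity_score(comment)
        flag = "REVIEW" if score > 0.8 else "ok"
        print(f"{score:.2f}  {flag}  {comment}")
```

In practice, a publisher would tune the threshold to its own community and keep a human moderator in the loop for borderline scores.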

However, with some foresight, it seems predictable how trolls and bigots would find loopholes around such a “toxicity” ranking: with politely worded yet nonetheless damaging lies, with racist double entendres that evade artificial intelligence and machine learning, and with bigots feigning offended sensitivities and even leaving conversations to shut down critics whom artificial intelligence would erroneously identify as the culprits. The possible unintended consequences of such a “toxicity” ranking also seem predictable: the censoring of righteous indignation over truly horrific atrocities, hateful messages, and outright lies. It may drive people away from a conversation, but to call lies, bigotry, or hate anything but what they truly are is to trivialize, normalize, and deodorize them.

Such possible unintended consequences are not without precedent:

  • Faced with huge amounts of content—too much for any human to curate for users of social media—data scientists reduced the problem to a single quantifiable property—popularity—and with it ranked what appeared in people’s timelines. In hindsight, the consequences should have been predictable: The most extreme views got the most rabid likes and shares; the most outrageous fake news with clickbait headlines got the most attention; and “click farms” of hundreds of cell phones controlled by a few people, “bots” that automated and mimicked human liking and sharing, and paid trolls writing fake news and posting intimidating comments against critics all gamed the system, creating a false impression of popularity for anyone willing to pay for it, which in turn fooled real people into joining the bandwagon.
  • Faced with a social network of billions—a myriad of divergent cultures, languages, preferences, ideologies, values, and beliefs—data scientists simplified the problem to a single quantifiable property—similarity—and with it filtered what appeared in people’s timelines. In hindsight, the consequences should have been predictable: like-minded people were enclosed and insulated within their own bubbles, creating echo chambers with few if any divergent opinions to challenge deeply entrenched presumptions and biases, and producing highly fragmented partisan factions oblivious to the sentiments of others. As the sketch after this list illustrates, neither ranking recipe contains any notion of truth, civility, or diversity.
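A toy sketch of those two recipes reduced to code makes the blind spot plain: nothing in either step asks whether a post is true, hateful, or bought from a click farm. The posts, scores, and 0.5 similarity cutoff below are invented for illustration and stand in for no platform’s real internals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int      # raw engagement, however it was obtained
    topic: tuple    # toy stand-in for a learned interest embedding

def similarity(a: tuple, b: tuple) -> float:
    """Dot product as a crude similarity between interest vectors."""
    return sum(x * y for x, y in zip(a, b))

def build_timeline(posts, user_interests, top_n=3):
    """Rank by popularity, then keep only what resembles the user's interests.

    Neither step checks whether a post is accurate or whether its likes are genuine.
    """
    by_popularity = sorted(posts, key=lambda p: p.likes, reverse=True)
    echo_chamber = [p for p in by_popularity
                    if similarity(p.topic, user_interests) > 0.5]
    return echo_chamber[:top_n]

if __name__ == "__main__":
    feed = [
        Post("Outrageous clickbait hoax", likes=90_000, topic=(1.0, 0.0)),
        Post("Sober fact-checked report", likes=1_200, topic=(0.0, 1.0)),
        Post("Partisan rant", likes=45_000, topic=(0.9, 0.1)),
    ]
    # The sober report never surfaces for this user; the hoax and the rant do.
    for post in build_timeline(feed, user_interests=(1.0, 0.0)):
        print(post.likes, post.text)
```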

Tools at their disposal 

At the same Lions Live talk on “Machine Learning for Everyone,” David Singleton, the engineering vice president of Google leading the Android Wear and Google Fit teams, championed machine learning that allowed devices to quickly deduce what images users meant to draw with a few squiggles of their fingers on the touch screens of smart watches and phones, helped farmers pick the best cucumbers and musicians create the sounds of hybrid instruments, and made the virtual Google Assistant ever more intuitive.

With ever more accurate drawing and handwriting recognition, facial recognition, voice recognition, predictive text, auto-translation, and spell checking now a reality thanks to machine learning, the question arises: Why aren’t fact checking, spotting fake news, and debunking hoaxes automated with machine learning and artificial intelligence as well?

Current efforts at stemming disinformation seem doomed to fail: Social networks expect users to take the time and effort to report fake news—a process that requires several clicks and questions—and expect volunteer nonpartisan news organizations to manually and painstakingly fact-check each of these flagged stories. With some foresight, it seems predictable how trolls and bigots could hijack this system by reporting valid news reports that are critical of their side, or sabotage it by overloading it with erroneous reports. How can volunteer netizens and volunteer fact-checking journalists compete against armies of well-financed, for-profit trolls and fake news writers?

Cohen himself asks, “What happens when cyberbullying becomes better organized, better funded and state sponsored?”

If current technology allows the facial recognition of a person among billions of others and spelling corrections in over a hundred languages, why can’t machines learn to spot fake news and hoaxes that run counter to verified media reports, to natural and human history, and to the known body of scientific understanding?

The answer may lie beyond the grasp of programmers and data scientists: perhaps it is a business decision. Automated fact-checking would, logically, flag many deeply cherished beliefs.

  • The Earth created in just seven days, all the world’s species contained in one ark, and a great flood submerging the entire world? That would never pass fact checking by artificial intelligence.
  • Abraham hearing a voice inside his head telling him to sacrifice his only son and mutilate his genitals? Today such a person would be locked in an asylum for insanity or in a prison for child endangerment and abuse.
  • Commanding the death of people for wearing clothes that mix fabrics, working on the Sabbath, or having relations with the same sex? That is hate speech and incitement to violence by any standard.

All three are found in the sacred scriptures of the Bible.

Logically, a fact-checked digital world would be a secular one free from religion and superstition. That would offend billions and may cost billions. It may not be toxic to conversations, but an internet free of lies may be toxic to profits.
