
    “Once you open the door, you might wonder what other kinds of things we would be looking for,” Calo said. | Photo: Reuters

Published 27 November 2017

Facebook, which has been linked to reduced mental health and life satisfaction in studies, will begin scanning user data to detect suicidal intent.

Facebook plans to expand its pattern recognition software to other countries after its successful tests to detect users with suicidal intent in the United States.


The new suicide prevention campaign by the social network giant makes sense as a basic palliative, considering the release of several studies linking time spent on Facebook with corrosive effects on one's mental health, sense of well-being and self-esteem.

The technical details of the program remain unclear, but the company noted that the software detects certain phrases that act as red flags, such as the questions “Are you OK?” and “Can I help?”

If the software detects a potential suicide, it alerts a team of Facebook workers who specialize in handling such reports. The system suggests resources, such as a telephone helpline, to the user or to the person's friends, and Facebook workers may even call local authorities to intervene.
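Facebook has not published how its classifier actually works, but the workflow described above, red-flag phrases escalating a post to human reviewers, can be sketched in a few lines. The phrase list, matching rule, and review queue below are illustrative assumptions only, not the company's implementation.

```python
# Minimal sketch of phrase-based flagging for human review.
# The phrase list, matching rule, and queue are illustrative assumptions;
# Facebook has not disclosed the details of its classifier.

RED_FLAG_PHRASES = ["are you ok", "can i help"]  # examples cited in the article

def flag_for_review(post_text: str, comments: list[str]) -> bool:
    """Return True if the post or any comment contains a red-flag phrase."""
    texts = [post_text, *comments]
    return any(
        phrase in text.lower()
        for text in texts
        for phrase in RED_FLAG_PHRASES
    )

review_queue: list[str] = []  # in practice, reports go to trained human reviewers

def process_post(post_id: str, post_text: str, comments: list[str]) -> None:
    """Queue a post for human review; no automatic action is taken."""
    if flag_for_review(post_text, comments):
        review_queue.append(post_id)

# Example: a concerned comment from a friend triggers a review.
process_post("post-1", "Feeling really low tonight.", ["Are you OK? Can I help?"])
assert review_queue == ["post-1"]
```

The point of the design, as the article describes it, is that the software only surfaces candidates; the decision to offer resources or contact authorities rests with human reviewers.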

During the past month, first responders checked on people more than 100 times after Facebook software detected suicidal intent, boasted Guy Rosen, Facebook’s vice president for product management. The success of the software tests has led the company to roll the software out beyond the United States.


The initiative, some argue, represents a form of “empathy-washing” – a term introduced by tech analyst Evgeny Morozov in an article last year for The Guardian about large corporations taking up humanitarian causes as a means to tout their “compassionate” pretenses.

“Empathy-washing initiatives create the false impression that the crisis is under control, with individual ingenuity, finally unlocked by privatized technologies, compensating for the rapidly deteriorating situation on the ground,” Morozov wrote.

“And even if some of them do temporarily relieve the effects of the crises – against their causes, privatized technological solutions are impotent – they also entrench the power of technology platforms as indispensable intermediaries” that are essential in managing a post-crisis landscape.

Last year, when Facebook launched live video broadcasting, videos proliferated of violent acts including suicides and murders, presenting a threat to the company’s image. In May, Facebook said it would hire 3,000 more people to monitor videos and other content.

In April, a report released by Harvard Business Review provided data supporting the argument that excessive use of Facebook leads to negative self-comparisons with other online users, often detracting from more meaningful experiences enjoyed in offline, real-life social networks.


“Exposure to the carefully curated images from others’ lives leads to negative self-comparison, and the sheer quantity of social media interaction may detract from more meaningful real-life experiences,” the report says.

“Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being,” added researchers Holly Shakya with the University of California, San Diego, and Nicholas Christakis with the Human Nature Lab at Yale University.

“These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.”


Facebook enjoys access to a vast amount of personal data extracted from its 2.1 billion users, which it claims to use mainly for targeted advertising. However, this is the first time the company has acknowledged that it will systematically scan conversations for patterns of harmful behavior.

One exception is its efforts to spot suspicious conversations between children and adult sexual predators. Facebook sometimes contacts authorities when its automated screens detect salacious or inappropriate language.

But it may be more difficult for tech firms to justify scanning conversations in other situations, said Ryan Calo, a University of Washington law professor who writes about tech.

“Once you open the door, you might wonder what other kinds of things we would be looking for,” Calo said.

Rosen declined to say whether Facebook was considering pattern recognition software in other areas, such as crimes other than sexual abuse.

