Every social media network has its underbelly, and the one on
Periscope, Twitter’s live-video app, might be uglier than most: On any given
day, users appear to flock to broadcasts from minors and encourage them to
engage in sexual and inappropriate behavior. Worried Periscope users have been sounding the
alarm for more than a year, and Twitter has reaffirmed its zero-tolerance policy
against child exploitation each time reporters have followed up. But if the company
has been working any harder to enforce that policy, its efforts don’t appear to
have scrubbed out the grime.
Some Periscope users were routinely pursuing children who had logged
on to the platform to play games like truth or dare with others. It took
pseudonym-cloaked commenters less than six minutes to persuade a girl, who was recently
broadcasting with a friend and playing truth or dare on a public stream, to lift her
shirt and show her breast. “Fully out,” typed one user, right before the girl
revealed herself. “But with shirt up…” instructed another, before the girl did
it again. The girls, both of whom had braces and appeared to be younger than
18, said they loved to roller-skate, mentioned their homeroom class, and said
they didn’t know what an “underboob” was after being asked to show some. It’s
not clear whether the users directing the girls were also minors or were
adults. But whatever the age of the commenters, their behavior was in violation
of Periscope’s standards, which bar users from engaging in sexual acts and
“directing inappropriate comments to minors in a broadcast.”
In another alarming video, a
pair of girls who described themselves as sisters (one said she was 14, and the
other appeared to be several years younger) were asked to show their bras and
their underwear and pressured by multiple commenters to continue to strip.
“Dare y’all to play rock, paper, scissors, and loser has to flash,” said one
viewer, after both girls had already shown their underwear.
Launched in 2015, Periscope
makes it easy for anyone to start a broadcast that others can watch live and
send comments to the broadcaster while he or she is filming. Commenters can
also send broadcasters hearts to show that they’re enjoying the live content.
As you Periscope, you can see the comments and hearts in response to your
stream. There is also a private stream function, which is only available to
users who follow each other. In incidents like the ones described above,
commenters routinely ask the young broadcaster to follow them, perhaps hoping
to engage in a private video stream.
Although concerned Periscope
users have been alerting the company that some people were using its app to
coax children into inappropriate behavior for more than a year—and in July,
the BBC
even aired an investigation into how users on Periscope were
pressuring children with sexually explicit messages—children and teenagers can
still be swamped with requests from viewers to do things like take off their
shirts and pants, show their underwear, show their feet, kiss other kids, do
handstands, and answer lewd questions. In other words, it’s clear the company
hasn’t figured out how to solve the problem. Periscope officials said, “We have a strong content moderation policy and
encourage viewers to report comments they feel are abusive. We have zero
tolerance for any form of child sexual exploitation.”
It’s not that Periscope hasn’t
done anything. On Nov. 27, about five months after the BBC report,
Periscope rolled out an update to
its reporting tool that allows users to flag potentially inappropriate content.
The updated tool includes a category for “child safety,” as well as a way to
flag “sexually inappropriate” comments by users talking to broadcasters on
live streams. In that announcement,
Periscope said that since “the beginning of 2017, we have banned more than
36,000 accounts for engaging or attempting to engage inappropriately with
minors.” This announcement, however, came in the form of a post on Medium
(where Periscope has only 116 followers), which the company tweeted out
five days after publishing it, after updating it to add details on the new
reporting tools. In the app itself, I’ve been unable to find any announcement or indication
that the new feature exists, suggesting that many Periscope users might be unaware of the
updated reporting tool.
Periscope was contacted on Nov. 30, 2017, to ask about explicit interactions with minors on the platform and what the
company is doing to solve the problem. In response, Periscope encouraged people to
report any problematic videos found in the future and said that it has “a team
that reviews each and every report and works as quickly as possible to remove
content that violates our Community
Guidelines.” When asked about the size of that team, which Periscope said in its recent
Medium post is expanding, and for more information about what else the company is doing
about this kind of content, Periscope officials did not respond. The Department of Justice was asked whether it was aware of this activity on Periscope and had taken any action
regarding it. A spokeswoman said, “As a matter of policy, the
U.S. Department of Justice generally neither confirms nor denies the existence
of an investigation.”
A Periscope official did
say that Periscope was “working to implement new technology” that is supposed to help
detect accounts that are potentially violating the company’s policy and improve
the reporting process—though at the moment, it’s not clear whether that
software is running or the company is relying on user reporting alone. (When
pressed on that question, Periscope did not respond.) Due to the live nature of
the videos, it’s probably hard for Periscope to know exactly when a new one
pops up that features a minor and attracts predatory commenters, though the
platform has in the past removed live broadcasts while they were still happening. “Unless
they’ve got keywords down really tightly to know what constitutes a grooming
message, … automated detection may be a little harder to do just via existing
algorithmic tools,” Thomas Holt, a criminal justice professor at Michigan State
University who specializes in computer crimes, told me. That means that having
a reporting feature to help target accounts for removal is critically
important, as is having staff to review the user reports. But, according to
Holt, the efficacy of those reporting tools depends on whether users are even
aware they exist. Kids might not recognize when a pedophile is attempting to
lure them into sexual acts, or know that such behavior is wrong and should be reported. And
again, even a strong reporting regime clearly isn’t enough.
Videos of children being lured
into sexual or inappropriate behavior on Periscope can rack up more than 1,000
views. The videos tend to follow a pattern: Once the stream starts, dozens of
Periscope users flock into the comments, as if they had been alerted either on
Periscope or via a separate forum outside of Periscope, suggesting some level
of coordination. This type of swarming is common, according to Holt: “Multiple
people will often start to send sexual requests, questions, or content in an
attempt to exert a degree of social pressure on the person to respond to a
request.” This makes the request seem more normal, Holt says, and can
manipulate a child into responding to a sexual request in order to please the group.
One place within Periscope that
had become a hive for this kind of misbehavior was the “First Scope” channel,
which curated streams from people using the platform for the first time, according to
Geoff Golberg, a former active Periscope user who has been vocal in calling
attention to the problem of inappropriate behavior directed toward minors on
the app. That channel was removed in November, months after Golberg sent emails
to the company (emails he also tweeted out) about the potential for minors to be
sexually exploited in the channel.
While
it’s good that Periscope is taking some degree of action, Holt says that the
risk posed by virtually every social media platform—particularly ones that are
more reliant on images than text, since text is easier to patrol with
software—means it’s critically important for parents to understand what their
kids are doing when they’re online, and to have conversations with them about
what apps they use, what constitutes bad behavior, and how to report it.
Periscope isn’t the only popular social media platform struggling to moderate how
kids use its app. Last month, the New
York Times reported how
the YouTube Kids app hosted and recommended videos with disturbing animations
of characters killing themselves and committing other violent acts. On
Periscope, though, the dangers are heightened because of the live, instant
nature of the broadcasts, which can put a mob of predators in conversation with
children before there’s time to intervene.
In
many ways Periscope is a remarkable service, allowing anyone to share what
they’re doing in real time with viewers around the world, whether it’s a
confrontation with law enforcement or a hot-air balloon ride. But it also
facilitates behavior that calls into question the utility of the entire
enterprise—and how capable the company is of curbing that behavior effectively,
either through moderation or software. Over at
Alphabet, YouTube is attempting to fix the problems on YouTube Kids by hiring
more moderators. Twitter and Periscope should do even more than that. The
safety of some of their most vulnerable users is at stake.