Managing the Dark Side of Social

9 July 2013

Facebook faced a massive wave of protest in recent days over postings on its social network that many claimed (rightly) degraded women, including content on groups such as “this is why Indian girls get raped.” In response, over 40 women’s groups and individuals launched the FBrape campaign, which called on Facebook to remove content that portrays violence against women positively. The initiative has generated thousands of tweets and online petition signatures, including emails targeting brands that were unknowingly running advertising on some of these pages.

Details

Facebook has an existing policy in place that enables users to report any content that may be perceived as vulgar or distasteful. In fact, Facebook has regularly removed content from the network, most notably material perceived as hateful toward various ethnic groups. However, with the network now reaching over a billion people, Facebook will need to reassure both users and advertisers that it can accurately and swiftly review and manage the massive amount of content produced by the social graph – last reported to be around 2.5 billion pieces of content and 500+ terabytes of data each day. Furthermore, it will need to find a logical and objective way to evaluate what content is ultimately offensive. While nobody would defend the disgraceful pages highlighted by FBrape, other content may prove more contentious and ambiguous.

Implications

The Internet has had a chaotic, “wild west” element since its early days. There are still many dark corners and ill-intentioned people online, and some periodically find their way into mainstream destinations like Facebook. Publishers and agencies have worked over the years to develop and/or embrace technology that gives advertisers a much higher degree of brand security, ensuring their ads will not appear on pages with inappropriate content – anything from sexually explicit images to sensitive news events (e.g., an airline ad next to an airplane-crash news report). Specialist suppliers like Evidon and DoubleVerify are expanding and getting vastly better at protecting brands at scale, i.e., across massive amounts of media inventory in real time. In simple terms, every media page is scanned for offensive content – which can be defined and dialed up or down by the advertiser – and a rules engine then decides in real time whether the page is acceptable for the brand’s advertising.

While this offers great comfort to advertisers, a few words of caution are in order. First, no technology works 100% of the time, and the dark side of the Internet continues to use technology and other means to bypass security systems, so this is a continuous, never-ending battle and an unfortunate but persistent risk of online advertising. Second, on Facebook these technologies are constrained by the network’s “closed” system, as well as by its targeting model built around demographics, categories, and interests in a content ecosystem that is largely consumer-generated rather than professionally published.
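To make the scan-and-decide model concrete, here is a minimal sketch of how such a brand-safety check might work in principle. The category names, keyword lists, and threshold values are illustrative assumptions for this article, not the actual method used by Evidon, DoubleVerify, or any other vendor – real systems use far more sophisticated classification than keyword counting.

```python
# Hypothetical brand-safety sketch: score a page per risk category,
# then apply advertiser-defined thresholds ("dialed up or down").
# Keywords and categories are invented for illustration only.

def classify_page(page_text, keyword_map):
    """Score a page per risk category by counting flagged keywords."""
    text = page_text.lower()
    return {
        category: sum(text.count(kw) for kw in keywords)
        for category, keywords in keyword_map.items()
    }

def is_brand_safe(scores, thresholds):
    """A page is acceptable only if every category stays within the
    advertiser's tolerance; any breach blocks the ad placement."""
    return all(scores.get(cat, 0) <= limit for cat, limit in thresholds.items())

# Illustrative risk categories and trigger words (assumptions).
KEYWORDS = {
    "violence": ["assault", "rape"],
    "explicit": ["xxx"],
    "crisis":   ["crash", "disaster"],
}

# An airline brand dials "crisis" to zero tolerance, matching the
# airline-ad-next-to-crash-story example above.
airline_rules = {"violence": 0, "explicit": 0, "crisis": 0}

page = "Breaking: airplane crash reported off the coast"
scores = classify_page(page, KEYWORDS)
print(is_brand_safe(scores, airline_rules))  # False: the placement is blocked
```

The key design point the text describes is that the sensitivity lives with the advertiser, not the scanner: the same page score can be acceptable for one brand and blocked for another, depending on how each brand sets its thresholds.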

Summary

The FBrape campaign has brought the issue to the public forefront, and may be the catalyst Facebook needs both to improve brand security for advertisers and to further eliminate offensive content. Facebook has already responded with an admission that its systems need to be updated. In the meantime, Mindshare and GroupM will continue to collaborate with the Facebook Client Council, which is due to meet imminently, as well as with the industry’s leading technologies, to strengthen brand protection. In the end, however, it will be up to Facebook to deliver, or at least facilitate, this protection layer in its ecosystem. If it doesn’t, it may end up losing the “likes” of both advertisers and people.