Why a human touch is needed in Facebook moderation

May 30, 2011
Posted by: Matthew Hammer, VP, Marketing

Written by former LiveWorld employee, @BryanPerson.
Should you depend on filtering tools to protect your brand against Facebook spam? In our view: no. In a comment on an outstanding post published earlier today by Intel’s Ekaterina Walter, “How to Respond to Facebook Attacks,” I explain why.
Here’s the crux of my comment:

I would suggest that businesses and brands use filters as *a* tool in Facebook moderation. But they shouldn’t expect that filters will adequately replace the need for human eyes to review those comments in context, and then take the necessary subsequent action, either on the Page (delete the post, respond to the comment) or internally (escalate to the brand manager, community manager, or appropriate team for discussion and potential action).
In our testing, for example, we found that Facebook’s own filters often mischaracterized legitimate comments as spam, while letting actual spam comments through.

Why Facebook filters alone don’t equal good moderation

There’s a real danger when brands — including those with high-volume Pages — assume that filters are tantamount to effective Facebook moderation. They’re not. Here’s what filters miss:

  • Nuance and context. Within the culture of some brands’ Facebook Pages or even as a response to a particular status update, bad language may be acceptable, or at least permissible. In other instances, a comment with multiple links to an external site — something which typically triggers a red flag with auto-filters — may not be spam at all, but rather a detailed explanation or note from a concerned fan.
  • Inappropriate images and videos. Browse through the Facebook Walls of brands that allow fans to upload their own photos and videos to the Wall, and it won’t take long to come across offensive photos and video content that Facebook filters aren’t catching.
  • Crafty spammers and trolls. These up-to-no-good “fans” quickly learn when specific words or phrases are being auto-deleted, and come up with a new syntax to bypass the filters.
  • Brand response. Just because a comment isn’t spam doesn’t mean it should languish unanswered on the Wall for several hours, or over a night or weekend. Particularly for customer-service issues, a prompt response may be expected or warranted.
  • Escalation. Whether in the case of an evolving crisis/attack or the general need for a decision/response from someone other than the frontline moderator or community manager, procedures should be in place to quickly route content to the appropriate subject-matter expert or decision maker.
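To make the “crafty spammers” point concrete, here is a minimal sketch of how a keyword blacklist works — and how trivially it is beaten. This is purely illustrative (the banned phrases and function are invented for this example; it is not Facebook’s actual filter or any real moderation API):

```python
# Illustrative only: a naive keyword-blacklist filter of the kind
# spammers quickly learn to route around with altered spellings.

BANNED_PHRASES = {"free iphone", "click here"}  # hypothetical blacklist

def naive_filter(comment: str) -> bool:
    """Return True if the comment should be blocked as spam."""
    text = comment.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

# The obvious spam is caught...
print(naive_filter("FREE iPhone!!! Click here"))   # True
# ...but a trivial respelling sails straight through.
print(naive_filter("FR33 iPh0ne!!! Cl1ck h3re"))   # False
```

Every time the moderator adds “fr33” to the blacklist, the spammer switches to another variant — which is exactly why a human reviewer, not a growing word list, has to be the backstop.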

Filters for Facebook work best when they support, rather than replace, the efforts of trained and skilled human moderators, who understand the context of the Wall comments; how they fit, or don’t, with the tone, culture, and accepted norms of the brand and the Page; and when/whether/from whom a public response is needed.
This post is part of an ongoing “31 Days of Facebook Marketing” series from LiveWorld, a social media agency that offers Facebook moderation, insight, and community programming services for Fortune 1000 brands.