How brands can solve their Facebook XXX content problems

April 17, 2011
Posted by: Peter Friedman, Founder, Chairman & CEO

Bad content online is not unusual and not a show-stopper for brands.
Spam and problematic content such as bad language, hate threats, and harassing behavior are not unusual online, nor are they problems to pin on Facebook in particular. They're just the nature of online community, and have been since the early days, from AOL in the mid-'80s and the '90s through to now. With more people and more activity, it's natural that Facebook would have more such content in both absolute and percentage terms. Similarly, in the offline world, large crowds and interactive venues can easily generate bad content and behavior.
The online solution is similar to the offline one: proactively set story, tone, and context with rules of the road (guidelines) and appropriate levels of management (moderation), supported by, but not dependent on, the limited impact of technology (such as moderation tools and spam filters).
(Our credentials and associated disclosure: LiveWorld is a leader in moderation, management, and creation of online communities for Fortune 500 brands. Our experience stretches back 27 years, and we’ve delivered over 1.5 million hours of moderation. We know spam, and we know how to deal with it.)

The filter myth

First, let's put the filter story to rest. It's great that Facebook has filters; they help, but only a little. We've run our own tests and found that, like all filters, Facebook's filters miss a lot of bad content and block a lot of good content: at least 20% of each. That's actually pretty good as filters go, but it means a well-managed Facebook Page still needs humans to keep the good and get rid of the bad.
Our experience simply does not support anyone's claims of filters dealing with 95% of spam. Filters can't be expected to accomplish that because a) they cannot determine the subtle nature of context (only human eyes can do that), and b) users who are determined to post spam, bad language, hate words, or other problematic content will figure out ways around the filters. And even when posts are clearly bad, getting rid of them through automation without addressing the users or their behavior can create an even bigger problem. In this respect, automated technology misapplied is a worst practice.
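To make the point concrete, here is a rough sketch of the kind of keyword matching that filters fundamentally rely on. This is an illustration only, not Facebook's actual filter, and the blocked phrases are hypothetical; but it shows both failure modes at once: obfuscated spam slips through, and an innocent customer post gets blocked, because the filter has no sense of context.

```python
# Hypothetical sketch of a keyword-based content filter (not Facebook's actual filter).
BLOCKLIST = {"free pills", "click here", "xxx"}  # example blocked phrases

def keyword_filter(post: str) -> bool:
    """Return True if the post should be blocked."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

# A determined spammer gets past the filter with trivial obfuscation:
print(keyword_filter("fr3e p1lls, cl1ck h3re!"))
# -> False: the bad content stays up

# A legitimate customer post gets caught by the same rule:
print(keyword_filter("Click here to see the recipe I made with your product"))
# -> True: the good content gets removed
```

Only a human reviewer can tell these two posts apart reliably, which is why filters should support moderation rather than replace it.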

Culture first

The most important best practice is to proactively establish a culture, one that meets your customers' expectations while fitting your brand's values. You can't expect people to behave a certain way at your party if you don't let them know what kind of party it is: edgy, formal, country, rave-like. You have to tell them by proactively stimulating, guiding, and participating in conversations; setting the design, content, and activities of the page; and choosing what user content to feature. This work should be done by trained and skilled community engagement specialists (sometimes called community managers) in the context of a thought-out, written-out community programming plan. This plan isn't just a conversation calendar, but an architecture that defines the cultural model, created by strategists (often a different role than the community manager).
Many brands today are struggling with where to locate this role: outsource it to a social media agency or keep it in-house (and if so, where). Either approach can work, but it's a specialized skill that requires a specific personality type, training, dedicated focus, and a company culture that supports the role well.

Rules of the road, traffic cops, and round-the-clock moderation

The second best practice is to establish rules of the road (or moderation guidelines), publish them, and communicate to people why you are removing content.
Third is to have an organized, professional team of moderators supported by moderation-specific workflows and escalation paths (especially important for pharma, financial, and CPG companies), as sketched below. These people should be trained and managed in a specialized process flow, ideally using moderation tools optimized for speed, quality, escalation, and tracking.
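As a rough illustration of what an escalation path means in practice, think of it as a routing table that tells the moderator what happens after review. The categories, team names, and response times below are hypothetical, not any particular brand's workflow or tool.

```python
# Simplified, hypothetical sketch of a moderation workflow with escalation paths.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    category: str  # assigned by a trained moderator, e.g. "spam", "adverse_event", "question"

# Hypothetical escalation routing: who needs to see what, and how fast.
ESCALATION_PATHS = {
    "adverse_event": ("pharma_safety_team", "1 hour"),   # e.g. regulatory reporting
    "legal_threat":  ("legal_team", "4 hours"),
    "crisis":        ("brand_pr_team", "1 hour"),
}

def moderate(post: Post) -> str:
    """Decide the next step for a post a moderator has reviewed."""
    if post.category == "spam":
        return "remove, and point the user to the posted guidelines"
    if post.category in ESCALATION_PATHS:
        team, sla = ESCALATION_PATHS[post.category]
        return f"keep visible, escalate to {team} within {sla}"
    return "keep, consider featuring or responding"

print(moderate(Post("jane", "This medication gave me a rash", "adverse_event")))
# -> keep visible, escalate to pharma_safety_team within 1 hour
```

The point isn't the code; it's that every category of content should have a defined owner and a defined response time before the post ever appears.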
While moderation can be done in-house, we find that most brands benefit more from the scale and focus of specialty providers, who are staffed to moderate your Facebook Page or online community at all hours of the day, including 3:00am (see video).

The good and the bad of Facebook (and any social network) are just the nature of community — online and off. And so is the solution.