10 Actual Ways To Fix Social Media Right Now

October 18, 2021
Posted by: Peter Friedman, Founder, Chairman & CEO

Lots of talk goes on these days about what's wrong with Facebook and social media. Much of the conversation focuses on teen self-esteem, misinformation, and privacy. People make generalized fix-it statements, such as "break it up" or "get rid of engagement algorithms," that realistically won't solve the key issues. Some suggestions would likely have very negative unintended consequences. For example, simply changing Section 230 to make the social networks liable for damaging user content would likely cause a shutdown of user content and/or an extensive vetting process. These effects would wreak havoc on millions of businesses and hundreds of millions of users, stirring unending complaints, lawsuits, and chaos over what content is allowed and what isn't.
But little has been suggested in the way of actionable steps that can be implemented to fix the problems.
The challenge is how we give people the social media experiences they want and provide society with the positive benefits, while eliminating, or at least mitigating and managing, the serious and significant quantity of negative effects. This challenge faces Facebook and the networks it owns (Facebook, Instagram, Messenger, WhatsApp, Oculus), as well as all other social media networks, including Google, Amazon, and any other online networks, communities, and groups. The big issues of the day are teen self-esteem, misinformation, and privacy. Such issues will not be addressed by breaking up Facebook. There may be market-power reasons to do that, but a broken-into-parts Facebook, or multiple newly empowered competitors, will all have the same drivers, behaviors, challenges, and issues. Most importantly, the motivations, drivers, and tribalism of the audience around self-esteem, misinformation, and privacy will not change. Advertiser motivation for reaching audiences will not change. However, a series of actions that increase transparency, empower users with choice, and proactively educate and program with positive content can directly impact the core drivers of these issues.

Ten actions that can fix or at least mitigate and manage the problems with social media

Hint: #10 of the actions listed is the most important, and it has more impact than all the rest put together. It involves no technology, and the simplest of legislation. But it requires the most parental, academic, business, and political will.
Three actions can impact teen girls’ self-esteem (and that of women of all ages and some men too).

Action 1: Label retouched and filtered photos as “Retouched; May Not Be Real”

Why: A top social media destroyer of teen girl self-esteem is the ubiquitous use of retouching apps and filters to change appearance to an alleged perfect beauty. Today the touch of a button will smooth skin, then re-form lips, eyebrows, eyes, facial structure, and body shape. Societal pressure for this false ideal of beauty is not new or caused by social media. Indeed, the landmark 2004 Dove Campaign for Real Beauty (for which I was privileged to lead the online community/social media element) took on these issues and resonated with women around the world.
However, today apps and social media have democratized and spread the false beauty ideal from Madison Avenue to smartphones around the world. Every way a girl turns on social media, there are friends, strangers, influencers, and celebrities, all retouched to a false beauty, with ever-building followers and likes creating the pressure to be like them. To be false, that is; not to be who she really is. A friend of mine in her early 30s is attractive inside and out by any measure. I pushed back on her use of these filters, telling her she was attractive without them. She responded, "You must see something I don't see." I told her I see exactly what she really looks like. She pulled out her auto-retouched, non-real self on the phone and said, "This is how I see myself." No wonder recently reported research shows that about a third of teen girls with body image issues say social media makes them feel worse about those issues. Note that a detailed review of that same research shows there are more aspects of social media that make more teen girls feel better about themselves. The real challenge and opportunity is to leverage the dynamics of social media that produce positives in order to minimize the negatives.
How: Make the retouching and false beauty ideal transparent by slapping a label on all these photos, noting that they've been retouched and are not real. Via legislation and/or rules supported by the app stores, major software companies, and social media companies, require that the instant these apps modify a photo, a digital watermark is embedded. When such photos are uploaded to Instagram or any social network, the watermark is detected, and the label applied. Many, perhaps most, photos on social media will end up with this label. But it will be a banner before the girls that says this is not real beauty; it's altered. Combine this practice with proactive education on social media to establish status based on unretouched images and intrinsic value.
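To make the mechanics concrete, here is a minimal, hypothetical sketch of detection-on-upload: the editing app embeds a marker the moment it alters a photo, and the network checks for it at upload and attaches the label. The watermark format, the Photo fields, and the function names are illustrative assumptions, not any platform's actual implementation; real watermarking uses robust, imperceptible signal-level techniques rather than a simple byte marker.

```python
# Hypothetical sketch: detect an embedded "retouched" watermark at upload time
# and attach a disclosure label before the photo is published.
# The watermark value, Photo fields, and function names are assumptions.

from dataclasses import dataclass, field

RETOUCH_WATERMARK = b"RETOUCHED-V1"  # assumed marker embedded by editing apps


@dataclass
class Photo:
    data: bytes
    labels: list = field(default_factory=list)


def has_retouch_watermark(photo: Photo) -> bool:
    # A real system would use a robust, imperceptible watermark decoder;
    # a byte-string scan stands in for that here.
    return RETOUCH_WATERMARK in photo.data


def process_upload(photo: Photo) -> Photo:
    if has_retouch_watermark(photo):
        photo.labels.append("Retouched; May Not Be Real")
    return photo


if __name__ == "__main__":
    edited = Photo(data=b"...image bytes..." + RETOUCH_WATERMARK)
    print(process_upload(edited).labels)  # ['Retouched; May Not Be Real']
```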
Precedent: Such technology exists today and is used to detect online copyright infringement of films and music. CVS, one of the largest US distributors of cosmetics, is resetting the cultural norm on this same subject. All model photos on CVS store brand cosmetics are unretouched and marked as such with a symbol and the words “Unaltered Beauty.” Some major cosmetics brands such as Revlon have joined in, marking their unretouched model photos on products in CVS stores with that same symbol and “Unaltered Beauty.”
CVS Unaltered Beauty

Action 2: Remove the ability for users to see how many likes and followers others' posts have

Why: Social networks are a society with cultural status measures. A major currency of status and value in social networks is the number of followers and likes. All those retouched photos, along with the always happy, great lives our teens see on everyone else's page, set a desire for similar acknowledgement. (Most people only post the positive parts of their daily lives.) When these high numbers of followers and likes can't be reached, self-esteem is further broken. When the numbers are reached and a person discovers they're of little real emotional value, again self-esteem is broken. Removing this false currency of value breaks the cycle. Perhaps making followers and likes invisible seems counter to transparency. But we are merely removing a false value measure to allow the true value of people and content to shine through. It is false because such metrics do not represent the true worth of people, and false because a great many of these followers and likes are not real.
Indeed, for people with very large follower numbers, there is a good chance that as many as half are fake or might as well be: generated by bots and click farms or bought with ads, but not genuine. When a social network scrubs a public official's followers down, the politico screams bloody censorship and bias. But the reality is that only a few of their fake followers have been removed. That's right, politicians. All those millions of social media followers you claim as proof of your support and power are in large part not real. How does learning that affect your self-esteem? If politicians focused more on the content of their ideas than their comparative quantity of followers and likes, we'd all be better off.
In this model, users and businesses will still see these metrics in their back-end administrative tools. This is important to the millions of businesses and creators that depend on social media for their communications and sales. It's also important to the many influencers and digital creatives who depend on sponsorships. But these metrics don't serve the consumer audience well, other than to perpetuate a false and esteem-defeating sense of value.
How: This is very straightforward. Some social networks can literally flip a switch to make this change. Others will have to do a little, but very doable, coding.
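As a rough illustration of how small this change can be, the sketch below models the "switch": a single feature flag that hides the public like count while the account owner's back-end view keeps the metric. The flag name and post fields are assumptions, not any network's real API.

```python
# Illustrative sketch: a feature flag suppresses public like counts while the
# underlying metrics stay intact for the account owner's admin tools.
# HIDE_PUBLIC_COUNTS and the post fields are assumptions for illustration.

HIDE_PUBLIC_COUNTS = True  # the "switch"


def render_post(post: dict, viewer_is_owner: bool) -> dict:
    view = {"author": post["author"], "text": post["text"]}
    # Owners (and advertisers via back-end tools) still see the metrics.
    if viewer_is_owner or not HIDE_PUBLIC_COUNTS:
        view["likes"] = post["likes"]
    return view


post = {"author": "ana", "text": "sunset", "likes": 1204}
print(render_post(post, viewer_is_owner=False))  # no like count shown publicly
print(render_post(post, viewer_is_owner=True))   # owner still sees 1204
```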
Precedent: Right now, today, Facebook has piloted this model in other countries.

Action 3:  Establish parental approval and connection to a parent page for pages/feeds for anyone under 18

Why: Here we give parents transparency into their children's social media use so they can guide it toward positive experiences.
How: The social networks can require all accounts for those under 18 to have parental permission, with a specialized link back to an authenticated parental page with some controls over the content and settings used. Surely a great many teens will try to circumvent this, claiming they are over 18. But many won't. Indeed, when the Children's Online Privacy Protection Act (COPPA) was passed, with its "nobody under 13" rule, the ten- to thirteen-year-olds of the online world vanished, accompanied by a surge in 14-year-olds. But that change wasn't implemented with a mechanism for participation with parental approval. In this model, most teens are not prevented from using the services at all; they just require parental permission, which most will not risk skipping. Teens who do misrepresent their age can be found out and removed from the services via monitoring and moderation. Additionally, Apple and Google can add smartphone features that don't allow access to social media apps without parental permission.
Precedent: Parental controls on cable TV boxes and smartphones.
Parental Controls Screen
Actions 4 to 7 in our list can impact misinformation, especially in politics.

Action 4: Provide algorithm choices

The social networks can create and offer whatever algorithms they want, as long as the options include: a) only people and pages I follow, with all their content, delivered in chronological order; b) random; and c) optionally, no ads, in combination with either a or b. A user's algorithm choice must be displayed prominently on the screen, so they are ever conscious of it.
Why: Much has been made about getting rid of the engagement algorithms. This won't have much, if any, impact on teen self-esteem. But it can impact misinformation and polarized tribalism. Even there, just dropping the algorithm alone and out of context will have limited impact and may have unintended consequences. First, we must understand that the core of the problem isn't the algorithms themselves but how they amplify what people are already doing and already want. The engagement algorithms are effective because for the most part they give people what they want. People like to have their existing beliefs reinforced. They like forming tribes, whether those tribes are positive, negative, or just organized around subject matter.
Many people seek and are drawn to outrage. Certainly, the polarized, misinformation politics we have today existed before social media. It was, and continues to be, dramatically built up by cable TV opinion heads as well. Just dropping the engagement algorithm could drive more people into more private groups on social media, where the intensity of polarized tribalism can be greater, more insulated, and more prone to negative rhetoric and violence. Further, these algorithms enable creators, publishers, and advertisers to reach people with sets of information the users want and that have more relevance to them, such as ethnic and cultural context, medical information, and professional or hobby interests. For tens of millions of businesses, this is a critical lifeline, especially for smaller businesses that depend on social media to level the playing field with their behemoth corporate competitors.
How: We can address these dynamics by having the social networks a) provide users a choice of algorithms for their account and b) require that each account have a mix of at least two algorithms applied, with the user able to tune the mix. The lesser-applied algorithm must be applied at least 20% of the time. It should not be a surprise that most people will choose the algorithm that gives them exactly what they see now. Therefore, a minimum of two algorithms must be applied. One of the available choices must be content only from friends and accounts a user follows, with 100% of the content from those, provided in chronological order. The user tool to make these choices must be easy to use and access and well promoted. Whatever choices the user makes must be clearly labeled on their newsfeed. For example: 60% engagement reinforcing what you already like and believe, 20% the opposite of what you already like and believe, 20% followed friends and accounts. This algorithm choice model puts users in control of their experience, but it also prevents the user from having a singular experience. It further ensures the user is constantly aware of what they chose. All this still allows creators, publishers, and advertisers to reach users and those users to get what they want. (See more below on opt-in and opt-out.) Finally, give users a no-ads choice, which can be in exchange for a monthly fee that replaces the social networks' ad revenue.
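Here is a minimal sketch, under assumed algorithm names and a deliberately simple sampling scheme, of how a blended and labeled feed could be assembled from the user's chosen weights, with the minimum 20% share per algorithm enforced:

```python
# Sketch of the blended-feed idea: the user picks weights for at least two
# ranking algorithms (each 20% or more), the feed is sampled in those
# proportions, and the mix is labeled on screen. Names and sampling are
# illustrative assumptions, not any network's actual ranking system.

import random


def blended_feed(sources: dict, weights: dict, n: int, seed: int = 0) -> list:
    assert len(weights) >= 2, "at least two algorithms must be applied"
    assert all(w >= 0.20 for w in weights.values()), "each algorithm gets >= 20%"
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    rng = random.Random(seed)
    names = list(weights)
    feed = []
    for _ in range(n):
        name = rng.choices(names, weights=[weights[k] for k in names])[0]
        if sources[name]:                      # take the next item from that algorithm
            feed.append((name, sources[name].pop(0)))
    return feed


def mix_label(weights: dict) -> str:
    # The label shown prominently on the newsfeed so the user stays aware of the mix.
    return ", ".join(f"{int(w * 100)}% {name}" for name, w in weights.items())


sources = {
    "engagement": ["e1", "e2", "e3", "e4"],
    "opposing_views": ["o1", "o2"],
    "followed_chronological": ["f1", "f2", "f3"],
}
weights = {"engagement": 0.6, "opposing_views": 0.2, "followed_chronological": 0.2}
print(mix_label(weights))
print(blended_feed(sources, weights, n=6))
```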
Precedent: Right now, today, Facebook provides settings that allow users to change the nature of their newsfeed. The solution is not comprehensive enough, nor easy enough, and not many people know about it. But the basic technology and approach already exist. Assorted websites, Kindle, and streaming channels today give users a choice of ads or paying a subscription fee.

Action 5: Make advertising transparent

Why: Facebook is the most effective marketing medium in history. It can micro-target ads based on user profiles, content, interests, and actions (views, likes, shares, comments, search). It does this not only inside Facebook's mega-population networks, but across the web, due to advertising and registration agreements supported by distributed technology. Using AI, the system can establish look-alike profiles, meaning audiences determined from similarity of profiles and behavior that suggests they are likely good targets for a particular ad. Advertisers can even upload customer (or voter) lists, which the system will then use to further refine the targeting to meet marketing or political objectives.
Once reached, a user is easily drawn along an emotion-driven path to other content, pages, and groups, driving viral spread of the message and resulting beliefs. Tens of millions of advertisers, large and small, use this system to drive their businesses. When it’s ethically used, generally everyone wins—advertiser, customer, and Facebook. Still, even when all this is well intentioned, a user may find it to be manipulative and problematic.
The big problem is bad actors who use the system's power for bad ends by deliberately pushing false product information, political disinformation, and/or fear and hate. Ads can be dark posts, meaning only those targeted see them. With ads hidden from public view, a political group can target completely opposite false negative ads to different groups. Once seen, the negative ads lead people to pages or groups that build on the story, creating that much more of a self-reinforcing polarized effect. None of this is dependent on the engagement algorithm, although it does help and enhance at points. The solution to this issue is to make the ads totally transparent, so people understand what they are seeing and where it came from, and can discern much of the misinformation and manipulation when it happens.
How: Every advertiser (not just election or political campaigns) and every news/information publisher, of any size, on a platform of any size, needs to prove who they are, with a government ID and a bank account or credit card information. Each must have a registered page on the platform. There must be just one master page for each entity, though they can have additional linked pages for specific products and themes, as long as those are all visibly and obviously linked to the master page. With all this in place, we can tell who's behind these ads and what they are up to.
Every ad and every piece of published content has to carry an obvious icon and link back to the registered page of its producer. This makes it possible for a user to drill through with one click to see where an ad comes from. In addition to the current ad, the entire history of ads and content from that advertiser or publisher has to be listed on the registered page or a linked page on the platform. This way a user can also see what else a given advertiser is saying to other people and the history behind it.
This information will help curtail the abuse in which a political candidate takes one position or line of attack with one set of voters and an opposite, conflicting position with another set. Last, there should be links to see what third-party fact checkers have to say about the content: their articles, fact-check ratings, and also any user ratings of the accuracy of the content. All of this information should be immediately available to all users and researchers in real time via updated, searchable databases. With this model, just as with television ads, everything can be seen by the opposition, the press, and the government agencies responsible for managing and ensuring compliance with election laws, commercial advertising laws, and regulations relating to fair and balanced news and information.
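To show roughly what these registration and drill-through requirements imply in data terms, here is a hypothetical sketch; the record fields and verification checks are assumptions for illustration, not an existing ad-platform API.

```python
# Sketch of the transparency records described above: every advertiser has one
# verified master page, every ad links back to it, and the full ad history is
# publicly listed and queryable. All field names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AdvertiserPage:
    entity_name: str
    verified_government_id: bool
    verified_payment_method: bool
    linked_pages: List[str] = field(default_factory=list)
    ad_history: List["Ad"] = field(default_factory=list)


@dataclass
class Ad:
    advertiser: AdvertiserPage
    creative_text: str
    audience_description: str                       # who the ad was targeted to
    fact_check_links: List[str] = field(default_factory=list)

    def publish(self):
        # An ad is only publishable if its advertiser is verified, and it is
        # always appended to that advertiser's public ad history.
        assert self.advertiser.verified_government_id
        assert self.advertiser.verified_payment_method
        self.advertiser.ad_history.append(self)


acme = AdvertiserPage("Acme PAC", verified_government_id=True, verified_payment_method=True)
Ad(acme, "Vote yes on measure X", "homeowners, ages 35-65").publish()
print(len(acme.ad_history), "ad(s) publicly listed for", acme.entity_name)
```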
Precedent: Facebook and Google already have some of this ad transparency, though much more is needed per above.

Action 6: Authenticate political ads

Why: Bad political actors, foreign and domestic, are the worst abusers of social media advertising power—manipulating and polarizing the electorate with false and distorted information, fear, and hate-baiting to the point of dividing the country and even families against themselves.
How: First, political ads and content must conform to the above transparency rules. Second, for the US, advertisers must be authenticated with banking and other information as being US citizens, US companies, or US-registered political groups. The latter two must have US citizens behind them. Third, even though the US election system currently allows dark-money campaign funding, in this model the originating source of funding (from companies and people) for social media advertising must be listed.
Precedent: Political ads in other media, such as TV, must state their source.

Action 7: Deploy extensive and accessible fact checking

Why: False, often sensationalized information is rampant on social media. Whether Yellow Journalism circa 1900, modern-day cable opinion shows, or algorithm-enhanced social media, such sensationalized content draws more people and spreads like wildfire. Whether political, health-related, or academic, this false information is the fuel for many of the problems at hand. Empowering users to find and know the true facts can dismantle the problems from the inside out.
How: The social networks must dramatically step up funding for independent fact checkers, and label not just posts, but also the people and entities that post the content and ads, with fact-versus-falsehood ratings. The ratings must be visible on posts, ads, and author profile pages. Plus, they should include easy drill-down to the fact-checking sources and the verified facts.
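A small, hypothetical sketch of how such ratings could roll up from individual fact checks to a post label and an author-level accuracy score; the verdict values, URLs, and function names are invented for illustration.

```python
# Sketch: independent fact-check verdicts attach to posts, an author-level
# accuracy rating is rolled up from them for the profile page, and each label
# links back to the fact-check source. All data here is invented.

fact_checks = [
    # (author, post_id, verdict, source_url)
    ("newsfeed_daily", 101, "false", "https://example.org/check/101"),
    ("newsfeed_daily", 102, "true",  "https://example.org/check/102"),
    ("newsfeed_daily", 103, "true",  "https://example.org/check/103"),
]


def post_label(checks, post_id):
    for _, pid, verdict, url in checks:
        if pid == post_id:
            return f"Fact check: {verdict} (see {url})"
    return "Not yet fact checked"


def author_accuracy(checks, author):
    verdicts = [v for a, _, v, _ in checks if a == author]
    return sum(v == "true" for v in verdicts) / len(verdicts) if verdicts else None


print(post_label(fact_checks, 101))
print(f"newsfeed_daily accuracy rating: {author_accuracy(fact_checks, 'newsfeed_daily'):.0%}")
```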
Precedent: News organizations regularly fact check their own stories, those from other groups, and statements of politicians. Major web sites such as eBay rate users for trustworthiness, value of content, and other factors.
Action 8 impacts privacy

Action 8: Empower users with line-item opt-in and always-on opt-out

Why: Some social networks provide users with a limited choice of the kinds of content and ads they will see and the types of personal data that will be collected. But it's an obscure process and too categorical. Data collection itself isn't bad. Indeed, most users are willing to trade data about themselves to get the benefits of these services for free. Many want everything tailored to them based on that data. The challenge is how to allow these user benefits, and enable viable business models for the social media companies and the tens of millions of advertisers, while avoiding or managing privacy issues. Simply locking down data collection deprives users of much-wanted and valuable content such as healthcare information, ethnic and cultural context, or just professional or hobby interests.
How: By providing explicit opt-in check boxes for each type of user data, content interest, and tracked behavior, we can empower users to protect their privacy as they wish while getting what they want, and still enable businesses to market effectively. Social networks can provide a separate opt-in for each of name, email address, age, gender, race, religion, medical conditions, interests, conversations, and so on. This way, users consciously decide what data they are willing to turn over. As an example, the line-item opt-in empowers the diabetes patient to allow medical content to be collected and in turn get relevant coaching, treatment, and medication content.
Line Item Opt-In
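As a sketch of what line-item consent could look like in data terms, the example below (the field list and function names are assumptions) stores each data type as a separate, default-off opt-in and checks consent before any targeting use:

```python
# Sketch of line-item opt-in: each data type is an explicit, separate consent,
# defaulting to off, and targeting code checks consent before using a field.
# The field list and function names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict

DATA_TYPES = ["name", "email", "age", "gender", "race", "religion",
              "medical_conditions", "interests", "conversations"]


@dataclass
class ConsentProfile:
    opted_in: Dict[str, bool] = field(default_factory=lambda: {t: False for t in DATA_TYPES})

    def allow(self, data_type: str):
        self.opted_in[data_type] = True            # explicit, per-item opt-in

    def can_use(self, data_type: str) -> bool:
        return self.opted_in.get(data_type, False)


def eligible_for_targeting(consent: ConsentProfile, required_fields: list) -> bool:
    return all(consent.can_use(f) for f in required_fields)


user = ConsentProfile()
user.allow("medical_conditions")                   # e.g., a diabetes patient opting in
user.allow("interests")
print(eligible_for_targeting(user, ["medical_conditions"]))  # True: coaching content OK
print(eligible_for_targeting(user, ["religion"]))            # False: never opted in
```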
While GDPR is not a regulation in all countries, it includes a great opt-out rule. It requires online companies to provide the user all the information they have about that user if the user asks for it. GDPR also requires a service provider to offer an opt-out, sometimes referred to as delete everything you have on me, or a right to be forgotten. All these are good requirements. But again, just having such features is not explicit enough. People simply forget what they have signed on for; as such, there should be not just a mechanism to opt-out, but one that is explicitly always available. To ensure this, the rules should be that the service provider has to provide an annual opt-in renewal to users, unless the user explicitly agrees to automatic renewal and gets an annual reminder that they’ve done so. By taking these approaches, not only do we ensure the users know about their opt-in, but we avoid government or the services mandating a time limit on keeping data, such as a 2-year limit. Such restrictions have the unintended negative consequence of diminishing the value the services and companies can bring that users very much want.
Precedent: Facebook today offers a range of choices on the kind of content one can get. This range has to be made much more comprehensive, more line-item, easier, and more accessible. Email marketing and website alerts, plus app alerts on smartphones, all have opt-out features. Europe's GDPR requires that users be allowed to opt out completely at any time.
Critical to affecting all the issues are actions 9 and 10.

Action 9:  Hold the social networks accountable

Why: For these solutions to work, the social networks must be accountable for implementing and managing them, and then for monitoring compliance and reporting abuse and issues to government authorities for prosecution. To avoid unintended and negative consequences, it's not optimal for government to specifically regulate what a company can collect, how long it can keep it, or whom it can target. Nor should the government arbitrarily make the social media companies liable for the bad actions of their users. That would lead to negative unintended consequences such as suppressing wanted free speech, creativity, and people connecting.
Here's where rules, policies, laws, and government come into play. The social networks and platforms must create the tools and set up the policies as we've described. They must also empower their users to help monitor for abuse and provide them ways of easily reporting it. Then the social networks have to be accountable for reviewing such reports and escalating illegal activity to government agencies, just as the pharmaceutical industry has to watch for drug adverse events and report them to the FDA. Pharma companies do this every day in social media to comply with those regulations. The social networks can also step up, monitor, and report abuses on their own systems.
How: The government should require the social networks to have the proper transparency, tools, and policies in place, including monitoring and reporting illegal actions by third parties and users, and empowering users to easily report abuse.
As an example, let's take a scenario where two companies are targeting advertising based on race or ethnicity. The first is providing culturally relevant information to African Americans that they very much want. The second is using the micro-targeting to practice housing discrimination, essentially keeping its ads and offers away from a minority group. The answer is not to take the targeting capability or engagement algorithm away, and thus lose the benefits of the first case, but to better monitor abuses such as the second case. Algorithms can easily flag any use of race or religion for ad targeting and serve those ads up for human agents to review. Human agents can determine whether the ads are essentially supportive of the interests of the users, or discriminatory or hateful. In abuse cases, the advertisers can be reported to legal authorities for prosecution.
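The flagging step described above could be as simple as the hypothetical sketch below, where any targeting criteria touching race, ethnicity, or religion routes the ad into a human review queue; the attribute names and ad records are invented for illustration.

```python
# Sketch of the monitoring idea: ads whose targeting touches sensitive
# attributes are automatically queued for human review, and reviewers decide
# whether the use is supportive or discriminatory. All data here is invented.

SENSITIVE_ATTRIBUTES = {"race", "ethnicity", "religion"}


def needs_human_review(ad: dict) -> bool:
    return bool(SENSITIVE_ATTRIBUTES & set(ad["targeting"]))


ads = [
    {"id": 1, "targeting": {"ethnicity": "African American", "interest": "culture"}},
    {"id": 2, "targeting": {"interest": "gardening"}},
    {"id": 3, "targeting": {"race": "excluded-minority", "category": "housing"}},
]

review_queue = [ad["id"] for ad in ads if needs_human_review(ad)]
# A human agent then decides per ad: supportive use vs. abuse to report.
print("Queued for human review:", review_queue)  # [1, 3]
```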
At its core, this is no different than reporting and prosecuting any such discrimination in advertising and business practices, whether in magazines, direct mail, or even in-person sales. We just have to understand that on social media, it is easier to target improperly, but it is also easier to catch and prosecute if we have the right rules and tools to do so.
Precedent: The pharmaceutical industry has to watch for drug adverse events and report them to the FDA. Pharma companies do this every day in social media to comply with those regulations.

Action 10: Proactively educate and program positive content

Why: All these actions can help, but the ultimate cause of the issues, and the ultimate solution, rests in ourselves, the users. Social media is not the root cause of the problems. We are — across social, TV, print, and all media. The social networks alone cannot solve these problems. We must all step up. That we even allow our daughters to measure themselves based on looks and superficial ratings is on us. So is that we have for many decades allowed them to be inundated with and driven by advertising that celebrates falsely constructed beauty. That we look at manipulative advertising and disinformation and accept it at face value. That we seek out and polarize ourselves into self-reinforcing ecosystems and turn away from seeing and hearing other views and verifying facts.
How: Starting in kindergarten, there must be classes in body image and self-esteem, civics, critical thinking, and navigating information in the digital-social age. Mental health counselors must be available for kids and teens, without any stigmatization. The social networks must proactively provide positive programming that establishes positive cultural status for each of these dimensions instead of the superficial norms in place now. Substantial increases in human moderation (supported by, but not abdicating to, technology) for guideline violations, and in proactive human engagement, must be put in place. We have to let go of the false idea that we can simply tech our way out of these issues.
Value of positive content on social media graph
Precedent: The US used to have civics class at multiple grade levels. The Dove Campaign For Real Beauty turned the cosmetics industry on its head by celebrating women as being beautiful just as they are and empowering women, mothers, and daughters to share their body image issues, with status conferred on those who let go of superficial measures and supported each other. That program and many others in social media have used extensive moderation and engagement to create positive cultural models and reasoned dialogue.
Dove Campaign For Real Beauty
Social media is extraordinarily powerful. The majority of social media impact is neutral to positive. We benefit from friends and families being connected, forming deeper relationships, enhancing self-esteem, and overcoming the ill effects of being in a marginalized group. We appreciate health education (such as doctors sharing Covid information), disasters managed, lives saved, unheard voices heard, and positive political movements energized. But that power is also used for bad. With billions of users every day, it only takes a small percentage to have a very big absolute impact. Even 1% of 3 billion daily users is 30 million people, who with bad intentions can do a lot of damage in a day. Bad actors can leverage the power of social media to drive misinformation, hate, harassment, negative political movements, and violence. The viral effects and brain chemistry of social interaction often make people feel positive about themselves even as they break the self-esteem of others.
Much has recently been made of the dynamic that people flock to, engage with, and spread negative content without regard for substance and accuracy. The current media and political dialogue about social media demonstrates a real-time example of this same effect. People are often focusing on the negatives of social media, disregarding the positives, and missing the substance that can lead us to solutions. The solution to the problems with Facebook and social media is first to understand the real underlying causes, and then to use the power of the medium to overcome them.

###

Peter Friedman, Founder and CEO of LiveWorld, is a social media expert and industry pioneer with over thirty-five years in the space, having founded and still leading LiveWorld, the longest-standing social media business in the world. Prior to LiveWorld he was the Vice President & General Manager of Apple's online services division. He has extensive experience in content moderation; social media experiences for children, teen girls, and women; CPG; retail; healthcare; and regulated industries. He has overseen the creation and management of hundreds of social media programs across the globe, impacting hundreds of millions of people. His work includes award winners such as the social media part of the original Dove Campaign For Real Beauty and social media programs for AbbVie, American Express, AOL, Apple, AT&T, Campbells Soup, eBay, Disney, HBO, Marriott, MINI Cooper, Mount Sinai Medical System, Pfizer, Procter & Gamble, Scholastic, Talk City, The Gap, and Walmart, among many others. He is the author of the books "The CMO's Social Media Handbook: A Step By Step Guide To Marketing In The Social Media Age" and "Is Privacy Dead In The Digital Age And What To Do About It."