Episode 14

The Social Media Clarity Podcast

15 minutes of concentrated analysis and advice about social media in platform and product design


LinkedIn’s Scarlet Letter - Episode 14


Marc, Scott, and Randy discuss LinkedIn’s so-called SWAM (Site Wide Automatic Moderation) policy, and Scott provides some tips on moderation system design…

[There is no news segment this week, so we can dig a little deeper into the nature of moderating the commons (aka groups).]

Transcript

John Mark Troyer: Hi, this is John Mark Troyer from VMware, and I’m listening to the Social Media Clarity podcast.

Randy: Welcome to episode 14 of the Social Media Clarity podcast. I’m Randy Farmer.

Scott: I’m Scott Moore.

Marc: I’m Marc Smith.

Marc: Increasingly, we’re living our lives on social media platforms in the cloud, and in order to protect themselves, these services are deploying moderation systems, regulations, and tools to control spammers and abusive language. These tools are important, but sometimes the design of these tools has unintended consequences. Today we’re going to explore some choices made by the people at LinkedIn in their Site Wide Automatic Moderation system, known as SWAM. The details of this service are interesting, and they have remarkable consequences, so we’re going to dig into it as an example of the kinds of choices and services that are already cropping up on all sorts of sites. This one is particularly interesting because the consequence of losing access to LinkedIn could be quite serious; it’s a very professional site.


Scott: SWAM is the unofficial acronym for Site Wide Automatic Moderation, and it’s been active on LinkedIn for about a year now. Its intent is to reduce spam and other kinds of harassment in LinkedIn groups. It’s triggered by a group owner or a group moderator removing or blocking a member from the group, and its impact is site-wide: if somebody is blocked in one group, they are put into what’s called moderation in all groups. That means your posts do not automatically show up when you post; they go into a moderation queue and have to be approved before the rest of the group can see them.
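To make the mechanism concrete, here is a minimal sketch of the behavior Scott describes: a single block in one group sets a site-wide flag, and every later post by that member is held for approval in whatever group they post to. The names (`block_member`, `submit_post`, the queues) are hypothetical illustrations, not LinkedIn’s actual implementation.

```python
# Hypothetical sketch of the SWAM behavior described above -- not LinkedIn's code.
from collections import defaultdict

globally_moderated = set()             # members flagged by any single group
moderation_queues = defaultdict(list)  # group_id -> posts awaiting approval


def block_member(group_id, member_id):
    """A group owner or moderator blocks/removes a member: the flag goes site-wide."""
    globally_moderated.add(member_id)


def submit_post(group_id, member_id, post):
    """Posts from a flagged member are held in *every* group, not just the one that blocked them."""
    if member_id in globally_moderated:
        moderation_queues[group_id].append((member_id, post))
        return "held for review"
    return "published"
```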

Randy: Just so I’m clear, being flagged in one group means that none of your posts will appear in any other LinkedIn group without explicit approval from the moderator. Is that correct?

Scott: That’s true. Without the explicit approval of the group that you’re posting to, your posts will not be seen.

Randy: That’s interesting. This reminds me of the Scarlet Letter from American Puritan history. When someone was accused of a crime, specifically adultery, they were branded so that everyone could tell. Regardless of whether they were a willing party to the adultery or a victim, they were cast out. SWAM creates a similar cast-out mechanism, but unlike then, when it was an explicit action that the whole community knew about, a moderator on a LinkedIn group can do this by accident.

Scott: In a Forbes article from February, someone related the story of joining a LinkedIn group that was intended for women, even though it had a male group owner and nothing explicitly stated that the group was for women only. The practice was that if men joined the group and posted, the owner would simply flag their posts as a way of keeping it a women-only group. Because the rules were not clear and the norm was never made explicit, this person was put into moderation for making a pretty honest mistake.

Randy: And this person was a member of multiple groups and now their posts would no longer automatically appear. In fact, there’s no way to globally turn this off, to undo the damage that was done, so now we have a Scarlet Letter and a non-existent appeals process, and this is all presumably to prevent spam.

Scott: Yeah, supposedly.

Randy: So it has been a year. Has there been any response to the outcry? Have there been any changes?

Scott: Yes. It seems that LinkedIn is reviewing the policy, and they’ve made a few minor changes. The first notable one is that moderation is now temporary: it can last an undetermined amount of time, up to a few weeks. The second is that they seem to have actually expanded how you can get flagged, to include any posts, contributions, or comments that are marked as spam or flagged as not being relevant to the group.

Randy: That’s pretty amazing. First of all, shortening the time frame doesn’t really do anything. You’re still stuck with a Scarlet Letter, only it fades over months.

Marc: So there’s a tension here. System administrators want to create code that is essentially a form of law. They want to legislate a certain kind of behavior, and they want to reduce the cost of people who violate that behavior, and that seems sensible. I think what we’re exploring here is unintended consequences, and the fact that the design of these systems seems to lack some of the features that earlier physical-world or legal regimes have had: you get to know something about your accuser, you get to see some of the evidence against you, you get to appeal. All of these are expensive, and I note that LinkedIn will not tell you who or which group caused you to fall into the moderation status. They feel that there are privacy considerations there. It is a very different legal regime, and it’s being imposed in code.
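As an illustration only, here is one way a moderation action could carry the due-process features Marc lists (the source of the flag, the evidence, and an appeal path). The field names and statuses are hypothetical assumptions, not LinkedIn’s data model.

```python
# Illustrative sketch of a moderation record that preserves accuser, evidence,
# and an appeal path -- hypothetical names, not LinkedIn's actual data model.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ModerationAction:
    member_id: str
    group_id: str                 # which group triggered the flag (hidden in SWAM today)
    reason: str                   # e.g. "marked as spam", "off-topic"
    evidence_post_ids: list = field(default_factory=list)
    created_at: datetime = field(default_factory=datetime.utcnow)
    appeal_status: str = "none"   # "none" | "pending" | "upheld" | "overturned"

    def appeal(self):
        """Open an appeal so a human reviews the flag instead of it standing by default."""
        self.appeal_status = "pending"
```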

Randy: Yes. What’s really a shame is they are trying to innovate here, where in fact there are best practices that avoid these problems. The first order of best practice is to evaluate content, not users. What they should be focusing on is spam detection and behavior modification. Banning or placing into moderation, what they’re doing, does neither. It certainly catches a certain class of spammer, but, in fact, the spam itself gets caught by the reporting. Suspending someone automatically from the group they’re in or putting them into auto-moderation for that group if they’re a spammer should work fine.

Also, doing traffic analysis on this happening in multiple groups in a short period of time is a great way to identify a spammer and deal with them, but what you don’t need to do is involve volunteer moderators in cleaning up the exceptions. They can still get rid of the spammers without making moderators handle the appeals, because, in effect, there already is an appeals process: you appeal to every single other group you’re in, which is really absurd, because you’ve not done anything wrong there; you may even be a heavy contributor there. We’ve done this in numerous places: I’ve mentioned before on the podcast my book Building Web Reputation Systems, and Chapter 10 describes how we eliminated spam from Yahoo Groups without banning anyone.
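Here is a rough sketch of the alternative Randy describes, under assumed thresholds: act on reported content, keep any penalty local to the group where it happened, and only escalate when several distinct groups report the same member within a short window. The function name, threshold, and time window are illustrative assumptions, not a description of Yahoo Groups’ or LinkedIn’s systems.

```python
# Sketch of content-scoped moderation plus cross-group traffic analysis.
# Thresholds and names are illustrative assumptions.
import time
from collections import defaultdict

SPAM_GROUP_THRESHOLD = 3      # distinct groups reporting the same member
SPAM_WINDOW_SECONDS = 3600    # within one hour

reports = defaultdict(list)   # member_id -> [(group_id, timestamp), ...]
group_moderated = set()       # (group_id, member_id) pairs in per-group moderation


def report_spam(group_id, member_id, now=None):
    """Handle a spam report: the penalty stays local; escalation requires a cross-group pattern."""
    now = now or time.time()
    reports[member_id].append((group_id, now))

    # Consequence stays local: auto-moderate the member in this group only.
    group_moderated.add((group_id, member_id))

    # Escalate only if many *different* groups report this member in a short window.
    recent_groups = {g for g, t in reports[member_id] if now - t < SPAM_WINDOW_SECONDS}
    if len(recent_groups) >= SPAM_GROUP_THRESHOLD:
        return "likely spammer: escalate to staff, no volunteer moderators involved"
    return "held in this group's queue only"
```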

Marc: I would point us to the work of Elinor Ostrom, an economist and social theorist, who explored the ways that groups of people can manage each other’s behavior without necessarily imposing draconian rules. Interestingly, she came up with eight basic rules for managing the commons, which I think is a good metaphor for what these LinkedIn discussion groups are.

  1. One is that there is a need to “Define clear group boundaries.” You have to know who’s in the group and who’s not in the group. In this regard, services like LinkedIn work very well. It’s very clear that you are either a member or not a member.
  2. Rule number two, “Match rules governing use of common goods to local needs and conditions.” Well, we’ve just violated that one. What didn’t get customized to each group is how they use the ban hammer. What I think is happening, and what comes up in the stories where you realize somebody has been caught in the gears of this mechanism, is that people have different understandings of the meaning of the ban hammer. Some of them are just trying to sweep out what they think of as old content, and what they’ve actually done is smear a dozen people with a tar that will follow them around LinkedIn.
  3. Three is that people should “Ensure that those affected by the rules can participate in modifying the rules.” I agree that people have a culture in these groups, and they can modify the rules of that culture, but they aren’t being given the options to tune how the mechanisms are applied and what the consequences of those mechanisms are. What if I want to apply the ban hammer and not have it ripple out to all the other groups you’re a member of?