Updated: 2023. Originally published in OSNews.
Why do people troll? Can we prevent trolling or limit the damage trolls do? Here are some thoughts on trollology derived from academic studies and web research.
Trolls divert online discussions into non-productive, off-topic venues. They pose as part of a community only to disrupt it. Trolling is anti-social behavior.
Trolls use a variety of techniques to accomplish their objectives.
The traditional definition of trolling includes intent. That is, trolls purposely disrupt online forums and social media groups. This definition is too narrow. Whether someone intends to disrupt a thread or not, the results are the same if they do. Thoughtful discussions degenerate into insults.
The distinction between intentional trolls and unintentional trolls is mainly useful in deciding how to defeat them, since their motivations differ.
Let’s talk about intentional trolls.
Some are motivated by political, financial, or ideological gain. For example, political trolls participate in online communities run by opponents to disrupt them. Sometimes this takes the form of a concern troll, a person who appears sympathetic to the cause being discussed but who is actually trying to sow doubt among the believers. An example is the Congressional staffer who was forced to resign after he posted to the opposing political party's website under an assumed name.
How about financial and ideological trolling? Years ago, trolls posted falsely about a corporate buy-out at Yahoo Finance that caused an immediate 31% gain in the stock of telephone equipment company PairGain. The hoax was quickly exposed and the stock deflated. Wired claims that anti-Scientology protests sometimes take the form of trolling. We’re all familiar with Linux trolls who disrupt Windows threads, and Windows trolls who disrupt Linux discussions.
Then there are the cases of astroturfing, also called astrotrolling. Former Whole Foods CEO John Mackey was caught doing this. His anonymous self “quickly became an outspoken regular on the board, praising and defending Whole Foods with the equally enthusiastic virulence used to attack and shame the company’s competitors and nay-sayers.”
Trolls sometimes defame individuals. One victim was the late 60 Minutes commentator Andy Rooney, whose name was signed to a racist rant he didn’t write. Another was John Seigenthaler, eminent journalist and former Kennedy aide, who was falsely implicated in the Kennedy assassinations by a false Wikipedia post. The perpetrator was caught. Few of us non-famous folks would have the resources to counteract such character assassination. Some trolls even mock the dead and deface online memorials.
Claire Hardaker explores the psychological motivations of trolls in her Ph.D. thesis Trolling in Asynchronous Computer-Mediated Communication. She concludes that “trolls intention(s) is/are to cause disruption and/or to trigger or exacerbate conflict for the purposes of their own amusement.”
Dr. Tom Postmes, Dutch professor of social psychology and book editor of Individuality and the Group, has a contrarian take. He argues that instead of contravening social standards, trolls conform to them. It’s just that the social standards to which they’re attuned are specific to a certain web subculture.
Another way to consider trolling is from Dr. Phil’s favored viewpoint: people only engage in repeated behavior if it pays off for them. What is the pay-off for trolling?
Intentional trolls brag that they do it for the lulz (for fun). Their braggadocio usually masks deeper motivations.
Most of us have unintentionally trolled at one time or other. Perhaps we posted while in a bad mood or under stress. Or we posted hastily or without editing. We’ve all written something at 3 am that we might not have upon reflection.
Unintentional trolling becomes a problem when a person engages in such behavior repeatedly because he doesn’t recognize that he’s trolling. Some people think it’s cool to post snappy put-downs. Or they casually question the intelligence or sincerity of others.
Some of these people would be surprised to be called trolls. Yet when they post like this they are trolling just as surely as the intentional troll. Why? Because their posts have the same effect. They sidetrack useful discussion into offensive, heated exchanges.
Some people who repeatedly troll without meaning to simply lack social sensitivity. Discussion requires give-and-take. Some posters aren’t socially mature. Some can’t accept or handle disagreement.
While most participants consider online interaction to be for the positive interchange of ideas, some people don’t. They see it as a vehicle to meet their personal needs. They place their needs above concern for others. Their motto is “I’ll post whatever pleases me,” and too bad about everyone else. This is a selfish understanding of social interaction.
If this isn’t obvious, try treating people like this in real life. You won’t have many friends or much success in dealing with people. Acting this way online has the same effects. It’s a form of trolling.
Unintentional trolling can be as destructive as the purposeful kind. “By their fruits ye shall know them.”
The problem with trolling is that a small minority can destroy a web site’s usefulness for the majority of well-intentioned, well-behaved participants.
Some web sites eliminate trolls by not allowing comments. For certain kinds of blogs or online magazines this can be a good solution. But for most sites this is unacceptable because it prevents the growth of online community.
A few web sites defeat trolls by posting only selected comments. Print newspapers followed this model for years. Advice columns come to mind: the columnist selects a few reader comments to reply to, and no others make print. Some websites, zines, and online journals still follow this model.
If you want to allow all comments but eliminate trolling, one solution is to pre-moderate: comments are posted only after a moderator approves them. This is very effective with competent moderators, but it requires lots of time, and it hampers discussion if it delays postings. Post-moderating comments eliminates the time lag but still incurs the labor costs, and inappropriate comments may get brief airplay.
Software can eliminate the labor requirement for moderators while still imposing some order. The software has to integrate with the comment system and be intelligent enough to be accurate. I’ve found that software does an excellent job of eliminating profanity, ads, and the like, but is often less successful at countering skilled trolls.
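As a minimal sketch of how such automated filtering typically works, here is a keyword-based comment screen. The pattern list and function names are hypothetical illustrations, not any particular moderation product’s API; real systems use far larger word lists and, increasingly, machine-learning classifiers.

```python
import re

# Hypothetical block list -- a real deployment would be much larger
# and tuned to the community's language.
BLOCKED_PATTERNS = [
    r"\bbuy now\b",      # crude ad/spam detection
    r"\bfree money\b",   # common spam phrase
    r"\bidiot\b",        # mild insult standing in for a profanity list
]

def auto_moderate(comment: str) -> bool:
    """Return True if the comment passes the automated filter."""
    lowered = comment.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

print(auto_moderate("Great article, thanks!"))   # True
print(auto_moderate("FREE MONEY, buy now!!!"))   # False
```

Pattern matching like this catches obvious profanity and ads, which is exactly why it fails against skilled trolls: a practiced troll writes superficially polite text that no word list can flag.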
Many communities informally police themselves to curtail trolls. The common maxim “Please don’t feed the trolls” argues that if troll comments are ignored intentional trolls will leave and go where they provoke results. “Don’t take the troll bait” works best when the bait is obvious and the discussion participants are more sophisticated than the trolls.
Some communities successfully drive trolls out through the opposite approach: providing negative feedback. Some trolls will leave if an entire community discounts their input in a way that they feel diminishes them.
Social media and forum participants can complain about trolls to board administrators. Even sites lacking hands-on moderation will often respond if they get feedback indicating that trolls threaten the group interest. Admins can warn trolls and/or drop their user ids. IP addresses can help identify intentional trolls who post under multiple ids, or who create new ids after their original one is terminated. How effective these techniques are often depends on the respective skills and persistence of the administrators versus the trolls.
Some forums offer tools that allow readers to filter out troll comments. The Ignore function in some discussion groups comes to mind. Many communities allow up or down voting of comments, or some other community-driven value indicator. Posters who are consistently down-voted often leave the group.
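The vote-based filtering described above can be sketched in a few lines. The threshold value and the `Comment` structure here are illustrative assumptions; every community tunes its own cutoff.

```python
from dataclasses import dataclass

# Illustrative cutoff: comments voted below this score are hidden.
HIDE_THRESHOLD = -3

@dataclass
class Comment:
    author: str
    text: str
    score: int = 0   # up-votes minus down-votes

def visible_comments(comments, threshold=HIDE_THRESHOLD):
    """Keep only comments the community has not voted below the threshold."""
    return [c for c in comments if c.score > threshold]

thread = [
    Comment("alice", "Interesting point about moderation.", score=5),
    Comment("troll42", "You're all idiots.", score=-7),
]

for c in visible_comments(thread):
    print(c.author, "-", c.text)   # only alice's comment is shown
```

The design point is that no moderator intervenes: the community’s aggregate votes do the filtering, and each reader could even choose a personal threshold.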
With unintentional trolls, often just bringing inappropriate behavior to their attention will solve the problem. After all, they are not purposely being disruptive. Where I’ve moderated as admin, I’ve found that polite but direct communication works best: “We value your contributions but you need to be more respectful of others in how you express them.” If someone won’t respond to polite entreaties, they are trolls (of whatever kind) and should be stopped from posting or have their access terminated.
Intentional trolls won’t stop if you ask them. They hide behind anonymity. It's a shield for their misbehavior. Most would not post the way they do if they were not anonymous. Thus mechanisms that eliminate anonymity and enforce personal responsibility often deter them.
Amazon deters trolling through a qualification system: one must qualify to post by providing personal information, a verifiable email address, and a verifiable credit card. Other web sites qualify commenters through paid memberships, technical quizzes, or requiring real names in posts.
The WELL is one of the oldest online forum communities. It maintains a high level of discourse by requiring a paid subscription and the use of one’s real name in postings. Most WELL comments can only be read by fellow members.
Executives from companies that make money from advertising, like Facebook and Google, often argue that we should eliminate anonymity from the web entirely. They cite trolling as the reason, but it’s easy to see that their real motives are commercial. After all, authenticated personal advertising represents their holy grail.
The problem with eliminating anonymity from the web is that its benefits outweigh the damage trolls do. Most people do not want their real name on every comment they ever post, which would then be available to every person, corporation, or government entity for the rest of their lives. How many people would freely post under such conditions?
The public has become astute enough about privacy to realize that even innocuous comments could have unanticipated consequences. Online prices could be set by what you reveal about yourself in discussions. Whether you get a job or can rent an apartment might be decided by comments you made years ago online. Whistleblowers and dissidents could be exposed and penalized.
Eliminating anonymity online means eliminating privacy. Ultimately, it produces many new, more serious problems, even if it reduces trolling.
Some countries have legislated against trolling. In the U.K., section 127 of the Communications Act 2003 makes it an offence to send messages that are “grossly offensive or of an indecent, obscene or menacing character.” Several people have been jailed under its provisions. In the U.S., First Amendment rights make prosecution for troll speech rare. But trolls take heed: every state has laws against cyberharassment, cyberbullying, and cyberstalking.
Trolling isn’t going away. Yet there are some good techniques to reduce trolling and its impact. As an online participant, your ultimate recourse is to leave a trolled group or forum and join a community more to your liking.
Unintentional trolling is an essential but overlooked part of the problem. It is rarely discussed or even acknowledged, which is why I’ve specifically identified it here. Sometimes people troll and don’t realize it. Unless an administrator can get them to understand that their behaviors are inappropriate, those who unintentionally troll can damage online discussions every bit as much as those who troll with malicious intent.