Content Moderation Technology: What Publishers Actually Need

Content moderation is expensive, necessary, and mostly thankless work. Publishers want technology to solve it. Technology can help, but it can’t replace human judgment.

The Moderation Challenge

Even modest publishing sites can generate hundreds of comments daily. Reply-enabled newsletters add a second stream of inbound messages. Social media amplifies reach but also attracts trolls and spam.

Unmoderated spaces quickly become unusable. But comprehensive human moderation is expensive and doesn’t scale.

What Automated Moderation Can Do

Spam filtering is quite good. Modern tools catch 95%+ of obvious spam, bot comments, and promotional garbage.
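
To make "catches obvious spam" concrete, here's a minimal sketch of the kind of heuristic scoring that sits underneath these tools. Everything in it (the phrases, weights, and threshold) is an illustrative assumption, not any vendor's actual ruleset:

```python
import re

# Illustrative assumptions: real tools use far richer, constantly updated signals.
SPAM_PHRASES = {"buy now": 2.0, "free crypto": 3.0, "click here": 1.5}
LINK_PATTERN = re.compile(r"https?://\S+")

def spam_score(comment: str) -> float:
    """Crude additive score: higher means more spam-like."""
    text = comment.lower()
    score = sum(w for phrase, w in SPAM_PHRASES.items() if phrase in text)
    links = LINK_PATTERN.findall(text)
    if len(links) >= 2:  # link-stuffed comments are a classic tell
        score += 1.5 * len(links)
    letters = [c for c in comment if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.7:
        score += 1.0  # mostly shouting
    return score

def is_probably_spam(comment: str, threshold: float = 3.0) -> bool:
    return spam_score(comment) >= threshold
```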

Profanity filtering works, but it's blunt. You can block offensive words, but context matters, and automated systems struggle with nuance.
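
A quick sketch of why "blunt" is the right word. The single blocklist entry is an illustrative assumption; real filters ship much longer lists with the same failure mode:

```python
BLOCKLIST = {"ass"}  # one illustrative entry

def naive_flag(comment: str) -> bool:
    """Substring matching: fast, and exactly as blunt as the text warns."""
    text = comment.lower()
    return any(word in text for word in BLOCKLIST)

print(naive_flag("Don't be an ass."))        # True: probably the intended catch
print(naive_flag("I'll pass on this one."))  # True: false positive ("pass")
print(naive_flag("Great assessment!"))       # True: false positive ("assessment")
```

Switching to word-boundary matching fixes those false positives but then misses "a$$"-style obfuscation; the bluntness moves around rather than disappearing.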

Link and image filtering catches many bad actors. Lots of spam includes suspicious links or inappropriate images that automated systems can identify.
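
On the link side, one common approach is to flag any comment that links outside an allowlist of trusted domains. A sketch, with a hypothetical allowlist:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist for illustration.
TRUSTED_DOMAINS = {"yourpublication.example", "wikipedia.org"}
LINK_PATTERN = re.compile(r"https?://[^\s<>\"]+")

def suspicious_links(comment: str) -> list[str]:
    """Return every link whose domain isn't on the allowlist."""
    flagged = []
    for url in LINK_PATTERN.findall(comment):
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        if domain not in TRUSTED_DOMAINS:
            flagged.append(url)
    return flagged
```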

Pattern recognition can flag potential problems: new accounts posting aggressive content, similar comments from multiple accounts, suspicious engagement patterns.
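
Here's a sketch of two of those patterns: near-identical comments arriving from multiple accounts, and brand-new accounts posting in bursts. The thresholds (three accounts, 24 hours, five posts) are illustrative assumptions:

```python
import hashlib
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Normalise hard so trivially edited copies still collide."""
    return hashlib.sha256("".join(text.lower().split()).encode()).hexdigest()

_seen: defaultdict[str, set[str]] = defaultdict(set)  # fingerprint -> account ids

def looks_coordinated(account_id: str, text: str, min_accounts: int = 3) -> bool:
    """Flag when the same (normalised) comment arrives from several accounts."""
    fp = fingerprint(text)
    _seen[fp].add(account_id)
    return len(_seen[fp]) >= min_accounts

def new_account_burst(account_age_hours: float, posts_last_hour: int) -> bool:
    """Flag young accounts posting at an unusual rate."""
    return account_age_hours < 24 and posts_last_hour > 5
```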

What It Can’t Do

Understand context and intent. A word or phrase that’s offensive in one context might be perfectly fine in another. Automated systems don’t get this.

Handle cultural and linguistic nuance. What’s acceptable varies by community, topic, and culture. Rules-based systems are too rigid.

Deal with sophisticated bad actors. People who want to abuse your platform will learn to game automated filters.

Major Platforms and Tools

Disqus is common but has a mixed reputation. It's easy to implement and its spam filtering is reasonable, but users often dislike it and it can hurt site performance.

Coral, which began as the Mozilla-backed Coral Project and is now maintained by Vox Media, is open-source and more privacy-focused. It requires technical capability to host and maintain.

Facebook Comments was popular but fewer publishers use it now due to Facebook’s declining relevance and privacy concerns.

Native comment systems built into WordPress, Ghost, or other CMS platforms work but typically need additional moderation plugins.

The Hybrid Approach

Most successful publishers use a combination (a sketch of how the layers fit together follows this list):

Automated pre-filtering to catch obvious spam and abuse.

Human review of flagged content.

Community moderation where trusted users help identify problems.

Clear community guidelines that set expectations.
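
As a sketch of how these layers compose, here's a minimal triage function: automation makes the easy calls at both ends, and humans get the ambiguous middle. The thresholds are illustrative assumptions:

```python
from enum import Enum

class Decision(Enum):
    REJECT = "reject"    # obvious spam: drop it before anyone sees it
    REVIEW = "review"    # uncertain: queue for a human moderator
    PUBLISH = "publish"  # clean: let it through immediately

def triage(spam_score: float, has_untrusted_links: bool,
           author_is_trusted: bool) -> Decision:
    """Route a comment using scores from upstream filters (thresholds assumed)."""
    if spam_score >= 5.0:
        return Decision.REJECT
    if spam_score >= 2.0 or (has_untrusted_links and not author_is_trusted):
        return Decision.REVIEW
    return Decision.PUBLISH
```

Trusted community members skipping straight to publish is what makes community moderation pay off: the review queue stays small enough for humans to handle.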

Moderation Staffing

Small publishers might manage with one person spending a few hours weekly on moderation. Medium-sized publishers need dedicated moderation staff or a clear rotation among the editorial team.

Large publishers need proper moderation teams with shifts covering different time zones and clear escalation procedures.

Outsourcing moderation is possible but tricky. Moderators need to understand your publication’s voice, standards, and community. Generic moderation services often make poor decisions.

The Closing Comments Trend

Many publishers have simply turned off comments. This eliminates moderation burden but also eliminates community engagement and discussion value.

Some are taking a middle path: comments on selected articles, disabled elsewhere. This focuses moderation resources where engagement is most valuable.

Others are moving discussion to email newsletters or social platforms where engagement is higher and moderation is easier.

The Legal Dimension

Australian defamation law makes publishers potentially liable for defamatory comments on their platforms. This is more stringent than many other jurisdictions.

You can’t just claim “we’re a platform, not a publisher” anymore. If defamatory content appears in your comments, you may be liable if you don’t remove it promptly after being notified.

This makes moderation not just a community management issue but a legal necessity.

Newsletter Reply Moderation

Reply-enabled newsletters create moderation challenges. You’ll get genuine reader responses mixed with spam, abuse, and bizarre messages.

Email filtering helps but isn’t perfect. Someone needs to actually read replies, respond to legitimate ones, and filter out garbage.
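
Some of that filtering can lean on standard email headers rather than guesswork. RFC 3834's Auto-Submitted header and the common Precedence convention identify out-of-office notices and other machine mail. A sketch of a first pass (a human still reads everything it lets through):

```python
from email.message import Message

def is_automated_reply(msg: Message) -> bool:
    """Skim off machine-generated mail using standard headers."""
    auto = msg.get("Auto-Submitted", "no").lower()
    if auto != "no":  # "auto-replied", "auto-generated", etc. (RFC 3834)
        return True
    precedence = msg.get("Precedence", "").lower()
    return precedence in {"bulk", "junk", "list"}
```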

Many publishers discover that reply-enabled newsletters create more work than they anticipated. But the engagement value can be worth it.

User Registration and Verification

Requiring registration before commenting reduces spam and abuse. It also reduces engagement, sometimes significantly.

Social login (sign in with Google, Facebook, etc.) is a middle ground. Lower barrier than custom registration, but still adds friction.

Email verification catches many bots, but determined bad actors will happily verify an email address.
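
The mechanism itself is simple, which is also why it's a low bar. Here's a sketch using only the standard library, with the secret and the 24-hour expiry as illustrative assumptions:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-real-secret"  # assumption: stored outside the code

def make_token(email: str) -> str:
    """Sign the address and a timestamp; embed the result in a confirmation link."""
    ts = str(int(time.time()))
    sig = hmac.new(SECRET_KEY, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def verify_token(email: str, token: str, max_age_seconds: int = 86400) -> bool:
    """Accept only an unexpired token whose signature matches this address."""
    try:
        ts, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return time.time() - int(ts) < max_age_seconds
```

A bot that never opens the inbox never completes the loop, but anyone with a real mailbox passes, which is exactly the limit described above.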

What Actually Works

Publishers with successful comment communities typically have:

Clear, enforced community guidelines.

Active moderation that’s visible to the community.

Engaged editorial staff who participate in discussions.

Technology that handles obvious spam so humans can focus on judgment calls.

Willingness to ban repeat offenders and deal with blowback.

The Real Question

Do you actually want comments? Be honest about the value they provide versus the cost to maintain them.

If comments aren’t driving engagement, subscriptions, or community value, turning them off might be the right call.

But if you’re committed to building community, you need proper moderation tools and processes. There’s no shortcut.