Moderation Workflow

Review, moderate, and manage community content

Effective moderation ensures your feedback portal remains constructive and professional. This guide covers the complete moderation workflow for ideas, comments, and user content.

Moderation Overview

Why Moderation Matters:

  • Maintains community standards
  • Removes inappropriate content
  • Prevents spam and abuse
  • Protects brand reputation
  • Ensures productive discussions

What Can Be Moderated:

Content Type | Actions Available
Ideas        | Hide, edit, delete
Comments     | Hide, delete, edit
Users        | Warn, suspend, ban
Reports      | Review, act, dismiss

Moderation Access:

  • Employees and above can moderate
  • Some actions require admin role
  • All actions are logged

Moderation Queue

Accessing the Queue:

Admin → Moderation → Queue

Queue Contents:

  • Reported content (user reports)
  • Flagged content (automatic detection)
  • Pending review items
  • Escalated issues

Queue Filters:

  • Type: Ideas, comments, users
  • Status: Pending, reviewed, escalated
  • Product: Filter by product
  • Priority: High, medium, low
  • Age: Time since report

Working the Queue:

  1. Review oldest items first
  2. Check all context
  3. Take appropriate action
  4. Document decision
  5. Move to next item
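The ordering in step 1 can be sketched as a simple sort on report time. This is a minimal illustration, not the product's implementation; the item fields are assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical queue items; in the real queue these would come from reports.
now = datetime(2024, 1, 10, 12, 0)
queue = [
    {"id": 3, "reported_at": now - timedelta(hours=2)},
    {"id": 1, "reported_at": now - timedelta(days=1)},
    {"id": 2, "reported_at": now - timedelta(hours=6)},
]

# Step 1: review the oldest items first -> sort ascending by report time.
review_order = [item["id"] for item in sorted(queue, key=lambda i: i["reported_at"])]
# The day-old report (id 1) comes up first, then id 2, then id 3.
```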

Handling Reports

Report Types:

Type          | Reported For
Spam          | Promotional, irrelevant
Offensive     | Hate speech, harassment
Inappropriate | Off-topic, misleading
Duplicate     | Already submitted
Other         | Custom reason

Reviewing a Report:

  1. Click report in queue
  2. Read reporter's reason
  3. View the reported content
  4. Check reporter history
  5. View author history
  6. Make informed decision

Actions:

  • Dismiss: False report, no action
  • Warn: Inform author of violation
  • Remove: Hide or delete content
  • Escalate: Send to senior moderator
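The four actions above can be thought of as a mapping from the moderator's verdict. This sketch is illustrative only; the verdict names are assumptions, not product terminology.

```python
# Hedged sketch mapping a moderator's verdict on a report to one of the
# four actions listed above; verdict names are illustrative.
def resolve_report(verdict: str) -> str:
    actions = {
        "false_report": "dismiss",    # false report, no action
        "minor_violation": "warn",    # inform the author of the violation
        "clear_violation": "remove",  # hide or delete the content
    }
    return actions.get(verdict, "escalate")  # unknown or hard cases go up
```

Defaulting to escalation keeps uncertain cases in front of a senior moderator rather than silently dismissed.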

Moderating Ideas

Idea Moderation Actions:

Hide Idea:

  • Removes from public view
  • Author can still see it
  • Use for: Minor violations, under review
  • Reversible action

Edit Idea:

  • Modify title or description
  • Remove offensive language
  • Fix formatting issues
  • Author notified of changes

Delete Idea:

  • Permanent removal
  • Use for: Serious violations, spam
  • Cannot be undone
  • Author notified

Merge as Duplicate:

  • Combine with existing idea
  • Preserves votes
  • Author notified
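Vote preservation during a merge can be sketched as a set union, assuming each idea tracks a set of voter IDs. The field names here are assumptions, not the product's data model.

```python
# Minimal sketch of merging a duplicate idea while preserving votes.
def merge_as_duplicate(target: dict, duplicate: dict) -> dict:
    # Union of voter sets preserves votes without double counting shared voters.
    target["voters"] = target["voters"] | duplicate["voters"]
    duplicate["merged_into"] = target["id"]  # duplicate points at the kept idea
    duplicate["hidden"] = True               # duplicate leaves public view
    return target

kept = {"id": 1, "voters": {"u1", "u2"}}
dupe = {"id": 2, "voters": {"u2", "u3"}}
merge_as_duplicate(kept, dupe)
# kept now has three voters; u2's vote is counted once.
```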

Best Practices:

  • Document the reason for each action
  • Give the author an opportunity to fix the issue
  • Be consistent in enforcement
  • Consider cultural context

Moderating Comments

Comment Actions:

Hide Comment:

  • Makes invisible to public
  • Author still sees it
  • Quick action for review

Delete Comment:

  • Permanent removal
  • Use for clear violations
  • Notify author if appropriate

Edit Comment:

  • Redact specific content
  • Fix privacy issues
  • Mark as edited

Comment Moderation Triggers:

  • Offensive language detection
  • Spam patterns
  • User reports
  • Keyword filters

Handling Disputes:

If an author disputes a moderation decision:

  1. Review the original content
  2. Check context of discussion
  3. Consult guidelines
  4. Escalate if uncertain
  5. Communicate decision

User Moderation

User Actions:

Action     | Effect          | Duration
Warning    | Notifies user   | One-time
Suspension | Temporary block | Days/weeks
Ban        | Permanent block | Forever
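The three actions and their durations can be sketched as state changes on a user record. The field names ("warnings", "blocked_until") are assumptions for illustration, not the product's schema.

```python
from datetime import datetime, timedelta

# Illustrative sketch of warning, suspension, and ban.
def apply_user_action(user: dict, action: str, now: datetime, days: int = 0) -> dict:
    if action == "warning":
        user["warnings"] = user.get("warnings", 0) + 1      # one-time notice
    elif action == "suspension":
        user["blocked_until"] = now + timedelta(days=days)  # temporary block
    elif action == "ban":
        user["blocked_until"] = datetime.max                # permanent block
    return user

suspended = apply_user_action({}, "suspension", datetime(2024, 1, 1), days=7)
# A 7-day suspension starting Jan 1 lifts on Jan 8.
```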

Issuing a Warning:

  1. Go to user profile
  2. Click Moderate → Warn
  3. Select reason
  4. Add custom message
  5. Send warning

Suspending a User:

  1. Access user profile
  2. Moderate → Suspend
  3. Set duration
  4. Add reason
  5. Apply suspension

While suspended, the user cannot:

  • Submit ideas
  • Vote
  • Comment

The user can still view the portal.

Banning a User:

  • Permanent exclusion
  • Requires admin approval
  • Reserved for severe cases
  • Can be appealed

Moderation Guidelines

Standard Guidelines:

Always Remove:

  • Hate speech or discrimination
  • Personal attacks or harassment
  • Illegal content
  • Personal information exposure
  • Spam or advertising

Review Case-by-Case:

  • Competitor mentions
  • Strong language
  • Off-topic discussions
  • Cultural references
  • Satire or jokes

Generally Allow:

  • Constructive criticism
  • Feature comparisons
  • Frustration expression
  • Questions about policy
  • Positive feedback

Documentation:

For each action, record:

  • What was the violation
  • What action was taken
  • Why this action was chosen
  • Any follow-up needed
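The four fields above map naturally to a structured log record. This is a hedged sketch; the class and field names are assumptions, not the product's audit-log schema.

```python
from dataclasses import dataclass

# Illustrative moderation log entry covering the four documented fields.
@dataclass
class ModerationRecord:
    violation: str       # what was the violation
    action: str          # what action was taken
    rationale: str       # why this action was chosen
    follow_up: str = ""  # any follow-up needed (often none)

record = ModerationRecord(
    violation="spam link in comment",
    action="delete",
    rationale="matches the always-remove guideline for spam",
)
```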

Escalation Process

When to Escalate:

  • It is unclear which guideline applies
  • Legal concerns
  • PR risk
  • User disputes decision
  • Pattern of abuse

Escalation Path:

  1. Moderator: Initial review
  2. Senior Moderator: Complex cases
  3. Admin: Policy questions
  4. Legal/PR: Sensitive issues
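The four-level path above behaves like a simple ordered chain: each escalation moves a case one level up, and the top level absorbs anything further. A minimal sketch, with level names taken from the list and the function itself an assumption:

```python
# The escalation path, lowest to highest.
ESCALATION_PATH = ["moderator", "senior_moderator", "admin", "legal_pr"]

def escalate(current_level: str) -> str:
    # Move one level up; cases already at the top stay there.
    i = ESCALATION_PATH.index(current_level)
    return ESCALATION_PATH[min(i + 1, len(ESCALATION_PATH) - 1)]
```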

How to Escalate:

  1. Select the content
  2. Click Escalate
  3. Choose escalation level
  4. Add detailed notes
  5. Submit for review

Tracking Escalations:

  • View in Moderation → Escalated
  • Filter by status
  • See resolution times
  • Track outcomes

Automation & Filters

Automatic Detection:

Spam Filters:

  • Link patterns
  • Repetitive content
  • Known spam phrases
  • New user behavior
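The spam signals above combine into a heuristic score. This sketch is illustrative; the thresholds and phrase list are assumptions, not the product's built-in filters.

```python
import re

# Placeholder phrase list; real deployments maintain their own.
SPAM_PHRASES = {"buy now", "limited offer", "click here"}

def looks_like_spam(text: str, author_age_days: int) -> bool:
    lowered = text.lower()
    too_many_links = len(re.findall(r"https?://", lowered)) > 2   # link patterns
    known_phrase = any(p in lowered for p in SPAM_PHRASES)        # known spam phrases
    new_account = author_age_days < 1                             # new user behavior
    # A known phrase alone is enough; heavy linking only flags new accounts.
    return known_phrase or (too_many_links and new_account)
```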

Language Filters:

  • Profanity list
  • Offensive terms
  • Custom keywords
  • Regex patterns
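A keyword filter combines a word list with custom regex patterns, as in this minimal sketch. The blocked words and the example pattern are placeholders, not the product's built-in lists.

```python
import re

# Placeholder word list; a real deployment would load its own profanity list.
BLOCKED_WORDS = {"badword1", "badword2"}
# Example custom pattern: SSN-like digit strings (a privacy filter assumption).
CUSTOM_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]

def matches_filter(text: str) -> bool:
    # Tokenize so "BadWord1!" still matches the word list.
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    if words & BLOCKED_WORDS:
        return True
    return any(p.search(text) for p in CUSTOM_PATTERNS)
```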

Configuring Filters:

  1. Go to Settings → Moderation
  2. Enable desired filters
  3. Add custom words/patterns
  4. Set sensitivity level
  5. Choose action (flag, hide, delete)

Filter Actions:

Action | Effect
Flag   | Add to queue
Hold   | Await approval
Hide   | Publish but hidden
Block  | Prevent posting
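The difference between the four actions, especially hold versus hide, can be made concrete as state changes on a new post. The state fields here are illustrative, not the product's data model.

```python
# Sketch of applying a configured filter action to a new post.
def apply_filter_action(post: dict, action: str) -> dict:
    if action == "flag":
        post["in_queue"] = True     # stays visible, but added to the queue
    elif action == "hold":
        post["published"] = False   # withheld until a moderator approves it
        post["in_queue"] = True
    elif action == "hide":
        post["published"] = True    # accepted, but hidden from the public
        post["visible"] = False
    elif action == "block":
        post["rejected"] = True     # posting is prevented outright
    return post

held = apply_filter_action({}, "hold")
```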

Review Flagged Content:

  • Regularly check flagged items
  • Adjust filters to reduce false positives
  • Add exceptions as needed