Content Moderation (UGC)
1. Overview
This process reviews user‑generated content (UGC) such as product reviews, questions & answers, and uploaded images. Each piece of content is examined against moderation policies. Content that is harmful, illegal, or irrelevant is flagged for review or removed outright. A clear list of actions taken is produced for the Trust & Safety team.
2. Business Value
- Protects the brand’s reputation by removing offensive or dangerous material.
- Reduces legal risk by removing personal data, defamation, and illegal content.
- Improves customer experience by keeping product pages relevant and trustworthy.
3. Operational Context
- When: Whenever a batch of new UGC is collected (e.g., daily upload from the website, or a manual export of recent submissions).
- Who: Trust & Safety Specialists and their supervisors.
- Frequency: Typically once per day or each time a new batch is ready for review.
4. Inputs
4.1 User Content Feed
- Name/Label: User Content Feed
- Type: List of content items (each item describes one piece of UGC).
4.1.1 Item Structure (single‑level)
| Field | Description | Example |
|---|---|---|
| Item Identifier | A short human‑readable label that uniquely identifies the piece of content in this batch. | “Review #112” |
| Content Type | The kind of content. Choose from: Review, Question, Answer, Image. | “Review” |
| Content Text | The written text of the item (only for Review, Question, Answer). Leave blank for images. | “The battery died after two days.” |
| Image URL | Link to the image file (only for Image type). Leave blank for text items. | https://example.com/img/123.jpg |
| Author Name | Name of the user who submitted the content (or Anonymous). | “John Doe” |
| Date Posted | Date the content was posted, in YYYY‑MM‑DD format. | 2025‑08‑01 |
| Associated Product | Name of the product or service the content refers to. | “Wireless Bluetooth Headphones” |
| Existing Flags (optional) | Any previously applied moderation flag. Leave blank if none. | “Flag‑Spam” |
| Notes (optional) | Additional information the reviewer wishes to capture. | “Submitted via mobile app.” |
Note: The list may contain any number of items. If an item lacks a required field, it will be routed to Manual Review (see Section 8).
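The item structure above can be held as a simple record with a mandatory-field check (used in Section 6, step 2). This is a minimal sketch; the helper name `missing_required_fields` and the dict representation are illustrative, not prescribed by the SOP.

```python
# Mandatory fields per Section 6: Item Identifier, Content Type,
# Date Posted, plus either Content Text or Image URL.
REQUIRED_FIELDS = ("Item Identifier", "Content Type", "Date Posted")

def missing_required_fields(item: dict) -> list[str]:
    """Return the names of any mandatory fields that are absent or blank."""
    missing = [f for f in REQUIRED_FIELDS if not item.get(f)]
    # Either Content Text or Image URL must be present, depending on type.
    if not item.get("Content Text") and not item.get("Image URL"):
        missing.append("Content Text or Image URL")
    return missing

sample = {
    "Item Identifier": "Review #112",
    "Content Type": "Review",
    "Content Text": "The battery died after two days.",
    "Date Posted": "2025-08-01",
}
print(missing_required_fields(sample))  # prints []
```

An item that returns a non-empty list here is routed to Manual Review with reason "Missing required information".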
5. Outputs
5.1 Moderation Action List
- Name/Label: Moderation Action List
- Contents: One entry per content item that requires action. Each entry includes:
- Item Identifier
- Action – Flag (requires further review) or Remove (must be deleted)
- Reason – a brief explanation drawn from the prohibited‑content list (e.g., “Harassment”, “Personal Data”, “Spam”).
- Notes (optional) – any extra comment for the reviewer.
- Formatting Rules: Use a bullet‑point list. Example format:
- Item: <Item Identifier> – Action: <Flag/Remove> – Reason: <Reason> – Note: <optional>.
5.2 Summary Report
- Name/Label: Summary Report
- Contents: Summary statistics for the batch processed, including:
- Total items reviewed.
- Number flagged.
- Number removed.
- Number flagged for manual review (e.g., missing data or ambiguous).
- Any notable observations (e.g., “High volume of profanity in reviews”).
- Formatting Rules: Present as a bulleted list:
- Total items: X
- Flagged: Y
- Removed: Z
- Manual review required: N
- Notes: <text>
6. Detailed Plan & Execution Steps
1. Collect the User Content Feed (the list described in Section 4).
2. Validate Mandatory Fields for each item:
   - Item Identifier, Content Type, Date Posted, and either Content Text or Image URL must be present.
   - If any required field is missing, add the item to the Manual Review list with reason “Missing required information” and skip the remaining steps for that item, resuming at step 6 (Update Counters).
3. Read the Content:
   - For Review, Question, and Answer, read the Content Text.
   - For Image, access the Image URL (assume the link is viewable).
4. Apply Moderation Rules (see Appendix C). For each item:
   a. Scan the text (or visual content) for any prohibited content (e.g., profanity, hate speech, personal data, spam, sexual/violent material).
   b. Determine whether the content is irrelevant (e.g., off‑topic, or promotional without context).
   c. Decision Logic:
      - Remove: content that is illegal or that contains personal data, hate speech, harassment, explicit sexual/violent material, or clear spam.
      - Flag: content that contains profanity, or is off‑topic but not illegal.
      - Ambiguous (e.g., borderline language, unclear image): add to Manual Review with reason “Ambiguous content – requires human judgment”.
5. Record the Action in the Moderation Action List: add an entry with the Item Identifier, chosen Action, Reason, and any notes.
6. Update Counters for total reviewed, flagged, removed, and manual‑review items.
7. Generate the Summary Report using the counters from step 6.
8. Perform Validation & Quality Checks (see Section 7).
9. If any validation fails (e.g., missing reason, mismatched totals), do not produce the final output. Instead, flag the entire batch as Error – Validation Failed and list the problems in the Summary Report under “Notes”.
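The batch loop described above can be condensed into a small driver. This sketch assumes a `classify` callable that applies the Appendix C rules and returns an action and reason; `process_batch` is an illustrative name, not part of the SOP.

```python
def process_batch(items, classify):
    """classify(item) -> (action, reason), where action is one of
    "Flag", "Remove", "Manual Review", or "No action"."""
    actions = []
    counters = {"total": 0, "Flag": 0, "Remove": 0, "Manual Review": 0}
    for item in items:
        counters["total"] += 1
        # Mandatory-field validation routes incomplete items to Manual Review.
        if not all(item.get(f) for f in ("Item Identifier", "Content Type",
                                         "Date Posted")):
            action, reason = "Manual Review", "Missing required information"
        else:
            action, reason = classify(item)
        if action != "No action":
            counters[action] += 1
            actions.append((item.get("Item Identifier"), action, reason))
    return actions, counters
```

The returned `counters` feed the Summary Report; acceptable (no-action) items are counted in the total only.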
7. Validation & Quality Checks
- Field completeness: Every item in the Moderation Action List must have an Item Identifier, Action, and Reason.
- Reason validity: For Flag and Remove actions, the Reason must match one of the categories listed in Appendix C; Manual Review entries must use one of the standard manual‑review reasons (see Section 8).
- Count accuracy: Totals in the Summary Report must exactly match the number of items recorded in the Moderation Action List and any manual‑review entries.
- No duplicate actions: Each Item Identifier appears only once in the list.
- Manual review check: Items sent to manual review must include a clear reason (e.g., “Missing required information”, “Ambiguous content”).
- Final sanity check: Ensure the total number of items processed equals the sum of flagged, removed, manual‑review, and acceptable (no‑action) items.
If any check fails, the SOP stops and produces an Error status with a detailed note in the Summary Report.
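The checks above lend themselves to a mechanical pass over the Moderation Action List. The sketch below assumes the list is held as `(item_id, action, reason)` tuples and that `valid_reasons` contains every approved reason string; both are illustrative assumptions.

```python
def validate_action_list(actions, valid_reasons):
    """Return a list of problems found; an empty list means the batch passes."""
    problems, seen = [], set()
    for item_id, action, reason in actions:
        # Field completeness: identifier, action, and reason are mandatory.
        if not item_id or not action or not reason:
            problems.append(f"Incomplete entry: {item_id!r}")
        # Reason validity against the approved reason set.
        if reason not in valid_reasons:
            problems.append(f"Invalid reason {reason!r} for {item_id}")
        # No duplicate actions per Item Identifier.
        if item_id in seen:
            problems.append(f"Duplicate identifier: {item_id}")
        seen.add(item_id)
    return problems
```

A non-empty result means the batch is flagged as Error – Validation Failed and the problems are listed under “Notes” in the Summary Report.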
8. Special Rules / Edge Cases
| Situation | Action | Reason |
|---|---|---|
| Content contains personal data (e.g., address, phone, ID) | Remove | Direct violation of privacy policy. |
| Content includes profanity but no personal data | Flag | May be reviewed for contextual appropriateness. |
| Content is off‑topic (e.g., unrelated joke) | Flag | Irrelevant to product. |
| Content includes sexual or violent imagery | Remove | Illegal or policy‑violating. |
| Content contains hate speech or harassment | Remove | Prohibited content. |
| Content is clearly spam or advertising | Remove | Unallowed promotional content. |
| Ambiguous language or unclear image | Manual Review | “Ambiguous content – requires human judgment”. |
| Missing mandatory fields (e.g., no Item Identifier) | Manual Review | “Missing required information”. |
| Image cannot be accessed (broken link) | Manual Review | “Unable to retrieve image”. |
| Duplicate Item Identifier in the same batch | Manual Review | “Duplicate identifier – potential duplicate entry”. |
| No items provided in the feed | Error | “Empty content feed – process cannot proceed”. |
| Content already has an Existing Flag that indicates removal (e.g., “Flag‑Spam”) | Remove | Follow existing flag. |
| Content in a language not understood by the reviewer | Manual Review | “Language not recognized – needs translation”. |
| Content is a duplicate of an already‑removed item | Remove | Prevents re‑posting of removed content. |
Failure Scenario: If the process encounters a critical error (e.g., system cannot read any items), generate an Error status and produce a Summary Report that lists “Critical failure – no items processed”. No Moderation Action List is produced.
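Because each row in the table above maps a detected situation to a fixed action and reason, it can be kept as a lookup. The situation keys below are illustrative labels (the SOP does not define machine-readable keys), and the fallback for an unlisted situation is an assumption.

```python
# Edge-case lookup mirroring a subset of the Section 8 table.
EDGE_CASES = {
    "personal_data":        ("Remove", "Direct violation of privacy policy"),
    "profanity_only":       ("Flag", "May be reviewed for contextual appropriateness"),
    "off_topic":            ("Flag", "Irrelevant to product"),
    "broken_image_link":    ("Manual Review", "Unable to retrieve image"),
    "duplicate_identifier": ("Manual Review", "Duplicate identifier – potential duplicate entry"),
}

def handle_edge_case(situation: str) -> tuple[str, str]:
    # Assumed default: anything not covered goes to a human reviewer.
    return EDGE_CASES.get(
        situation,
        ("Manual Review", "Unrecognized situation – requires human judgment"),
    )
```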
9. Example
Input (User Content Feed)
- Item Identifier: Review #101
  - Content Type: Review
  - Content Text: “This product is sh*tty! The battery died after one day. Worst purchase.”
  - Author Name: John Doe
  - Date Posted: 2025‑07‑28
  - Associated Product: Wireless Bluetooth Headphones
- Item Identifier: Q&A #202
  - Content Type: Question
  - Content Text: “Can I return this? My order #12345, address: 456 Oak St, phone: 555‑1234.”
  - Author Name: Jane Smith
  - Date Posted: 2025‑07‑30
  - Associated Product: Wireless Bluetooth Headphones
- Item Identifier: Image #303
  - Content Type: Image
  - Image URL: https://example.com/ugc/image1.jpg (image shows a meme unrelated to the product)
  - Author Name: Anonymous
  - Date Posted: 2025‑08‑01
  - Associated Product: Wireless Bluetooth Headphones
Expected Output
Moderation Action List
- Item: Review #101 – Action: Flag – Reason: Profanity – Note: “Consider reviewer’s history”.
- Item: Q&A #202 – Action: Remove – Reason: Personal data (order number, address, phone) – Note: “Privacy violation”.
- Item: Image #303 – Action: Flag – Reason: Irrelevant content – Note: “Image does not relate to product”.
Summary Report
- Total items reviewed: 3
- Flagged: 2 (Review #101, Image #303)
- Removed: 1 (Q&A #202)
- Manual review required: 0
- Notes: “All actions conform to policy. No missing fields.”
Appendix A – FAQ
Q1: What if a review contains both profanity and personal data?
A: The content is removed because personal data overrides the need for further review.
Q2: How do I handle a piece of content that is borderline offensive?
A: Flag the content for a senior reviewer to decide. Use “Ambiguous content – requires human judgment” as the reason.
Q3: What if an image is broken or the URL is dead?
A: Add the item to Manual Review with the reason “Unable to retrieve image”.
Q4: Are there any exceptions for product‑related memes?
A: If the meme is clearly related to the product and does not contain prohibited content, no action is required.
Q5: How often should the moderation policies be updated?
A: Review and update the policy at least quarterly or after a significant incident.
Q6: Who is responsible for the final decision on flagged items?
A: The Trust & Safety Manager reviews all flagged items within 24 hours and decides to keep, edit, or delete.
Q7: Can a “Flag” be escalated to a “Remove”?
A: Yes, if a senior reviewer determines the content violates a higher‑level policy (e.g., new legal requirement).
Q8: What if a user repeatedly posts prohibited content?
A: Record the user's name in a separate “Repeat Offender” log (not part of this SOP) for further action.
Appendix B – Glossary
| Term | Definition |
|---|---|
| User‑Generated Content (UGC) | Any content (text, image, video) submitted by a consumer or user on a platform. |
| Moderation | The process of reviewing UGC against policies and deciding to keep, flag, or remove it. |
| Flag | Mark a piece of content for further human review (not a final removal). |
| Remove | Delete the piece of content from the platform because it violates a policy. |
| Personal Data | Information that can identify an individual (e.g., name, address, phone, email, order numbers). |
| Harassment | Content that attacks or intimidates a person or group. |
| Spam | Unsolicited commercial content or repetitive posting. |
| Irrelevant | Content that does not pertain to the product or service context. |
| Ambiguous | Content that cannot be clearly classified as acceptable or unacceptable without additional context. |
Appendix C – Prohibited Content List
| Category | Description | Example |
|---|---|---|
| Harassment | Any threatening, insulting, or demeaning language aimed at an individual or group. | “You are a worthless piece of trash.” |
| Hate Speech | Content that targets a protected group based on race, religion, gender, sexual orientation, etc. | “All [group] are idiots.” |
| Sexual Content | Nude or sexual content that is not appropriate for the product context. | Graphic images, explicit language. |
| Violence | Graphic descriptions or images of physical harm, gore, or threats. | “He was stabbed to death.” |
| Personal Data | Any personal identifiers: name, address, phone, email, order numbers, IP addresses, etc. | “My email is john@example.com.” |
| Spam / Advertising | Unsolicited promotional material, including self‑promotion, affiliate links, and repeated postings. | “Buy cheap watches at http://...”. |
| Profanity | Strong language that is offensive, including slurs and vulgar terms. | “This is sh**ty.” |
| Off‑Topic | Content that does not relate to the product or service. | A meme about cats on a headphone product page. |
| Misleading Information | False claims about a product’s features, safety, or compliance. | “This product contains 100 % gold.” |
Appendix D – Moderation Guidelines (Decision Flow)
- Identify the content type (text or image).
- Search for prohibited categories (see Appendix C).
- If any Personal Data, Harassment, Hate Speech, Sexual Content, Violence, or Spam is found → Remove.
- If only Profanity → Flag (unless combined with personal data).
- If content is Irrelevant (off‑topic) → Flag.
- If content is Ambiguous → Manual Review (reason: “Ambiguous content”).
- Validate that the action is recorded with a correct reason from the list.
- If the content is acceptable (no prohibited items) → No action needed.
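The decision flow above can be sketched as a single function. This assumes category detection has already run and produced the set of matched Appendix C categories for the item; the function name `decide` is illustrative.

```python
# Categories that trigger immediate removal, per the decision flow.
REMOVE_CATEGORIES = {"Personal Data", "Harassment", "Hate Speech",
                     "Sexual Content", "Violence", "Spam / Advertising"}

def decide(categories: set[str]) -> str:
    """Map a set of detected categories to an action, in flow order."""
    if categories & REMOVE_CATEGORIES:
        return "Remove"
    if "Profanity" in categories or "Off-Topic" in categories:
        return "Flag"
    if "Ambiguous" in categories:
        return "Manual Review"
    return "No action"
```

Note that removal takes precedence: profanity combined with personal data yields Remove, matching FAQ Q1.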
Formatting Notes for Output
- Use neutral, professional tone.
- Do not add system‑generated IDs.
- Use simple bullet lists; no tables in the final data output.
Additional Notes
- Manual Review Queue: Items flagged for manual review should be exported to the team’s review queue with the reason “Manual review required”.
- Documentation: Keep a log of the batch name (e.g., “UGC batch 2025‑08‑11”) in your records for audit purposes.
- Continuous Improvement: After each batch, note any patterns of recurring violations to inform policy updates.