13 cases.
Guideline 1.2 - Safety - User-Generated Content. Your app includes user-generated content features but lacks the ability to filter objectionable material and block abusive users.
Fix: Added real-time profanity filter for chat messages, user blocking and muting, report button on every message and stream, and a CSAM detection system for images. Also added a visible content policy use...
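A minimal sketch of the kind of real-time filter plus block/mute lists this fix describes. The class name, blocklist entries, and ID scheme are illustrative assumptions, not the app's actual code:

```python
# Illustrative chat moderator: masks blocklisted words before display and
# suppresses delivery from blocked or muted senders. All names are placeholders.
WORD_BLOCKLIST = {"badword", "slur"}  # placeholder entries

class ChatModerator:
    def __init__(self):
        self.blocked = {}   # viewer_id -> set of blocked sender IDs
        self.muted = {}     # viewer_id -> set of muted sender IDs

    def filter_message(self, text: str) -> str:
        # Replace each blocklisted word with asterisks of the same length.
        words = []
        for word in text.split():
            clean = word.lower().strip(".,!?")
            words.append("*" * len(word) if clean in WORD_BLOCKLIST else word)
        return " ".join(words)

    def block(self, viewer_id: str, target_id: str) -> None:
        self.blocked.setdefault(viewer_id, set()).add(target_id)

    def mute(self, viewer_id: str, target_id: str) -> None:
        self.muted.setdefault(viewer_id, set()).add(target_id)

    def should_deliver(self, viewer_id: str, sender_id: str) -> bool:
        # Messages from a blocked or muted sender are never shown to the viewer.
        return (sender_id not in self.blocked.get(viewer_id, set())
                and sender_id not in self.muted.get(viewer_id, set()))
```

Masking runs per message at send time, so the filter stays real-time; blocking is per viewer rather than global, matching how user-level blocking usually works.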
Guideline 1.2 - Safety - User-Generated Content. Your app allows users to upload photos but does not scan for CSAM or inappropriate content.
Fix: Implemented PhotosPickerItem with automatic CSAM scanning via Apple frameworks. Added server-side content classification using ML for nudity and violence detection. Implemented user reporting system a...
Guideline 1.2 - Safety - User-Generated Content. Your recipe sharing app does not moderate user-uploaded food photos for inappropriate content.
Fix: Added image classification to verify uploaded photos contain food. Non-food images flagged for review. Implemented photo reporting, user blocking, and community guidelines. NSFW detection runs on all...
Guideline 1.2 - Safety - User-Generated Content. Your AI tutoring app allows unmoderated conversations between the AI and minors without content safety measures.
Fix: Added age-appropriate content filtering specific to educational contexts. Restricted the AI to only discuss academic subjects with topic detection. Added parental controls and conversation history vis...
Guideline 1.2 - Safety - User-Generated Content. Your AI image generation app can create photorealistic images of real people without consent safeguards.
Fix: Added face detection to generated images and blocked generation of recognizable real people. Implemented prompt filtering for celebrity and public figure names. Added watermarks to AI-generated images...
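The prompt-filtering part of this fix can be sketched as a name screen run before any generation request. The name list entries are placeholders, not an actual denylist:

```python
# Illustrative pre-generation prompt screen: requests naming listed public
# figures are refused before reaching the image model. Names are placeholders.
import re

BLOCKED_NAMES = ["taylor swift", "elon musk"]  # placeholder entries

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to image generation."""
    # Collapse whitespace so "Taylor   Swift" still matches the list entry.
    normalized = re.sub(r"\s+", " ", prompt.lower())
    return not any(name in normalized for name in BLOCKED_NAMES)
```

Name screening is only the first layer; the fix pairs it with face detection on the generated output, since users can evade text filters with misspellings or descriptions.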
Guideline 1.2 - Safety - User-Generated Content. Your app allows users to create and share AR effects but does not moderate them for appropriateness.
Fix: Implemented AR effect review pipeline: user-created effects go through automated ML screening and manual review before being publicly available. Added report button on every shared effect. Offensive e...
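The two-stage pipeline described above can be modeled as a small state machine: automated screening first, then manual review before anything goes public. State names and the risk-score stub are assumptions for illustration:

```python
# Illustrative review pipeline: every submitted effect is ML-screened, and only
# items that pass both the automated screen and manual review are published.
from enum import Enum

class State(Enum):
    SUBMITTED = "submitted"
    AUTO_FLAGGED = "auto_flagged"      # removed by the automated screen
    PENDING_MANUAL = "pending_manual"  # awaiting a human reviewer
    APPROVED = "approved"              # publicly available
    REJECTED = "rejected"

def auto_screen(effect: dict) -> State:
    # Stub ML screen: flag when the model's risk score crosses a threshold.
    return State.AUTO_FLAGGED if effect.get("risk", 0.0) >= 0.8 else State.PENDING_MANUAL

def manual_review(effect: dict, reviewer_approves: bool) -> State:
    return State.APPROVED if reviewer_approves else State.REJECTED
```

Nothing reaches APPROVED without passing through PENDING_MANUAL, which is what "manual review before being publicly available" requires.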
Guideline 1.2 - Safety - User-Generated Content. Your AI-generated content feature produces content that could be objectionable without moderation.
Fix: Added prompt filtering using a blocklist and ML classifier. Output images scanned for inappropriate content before display. Added watermark to all AI-generated images. Implemented user reporting for o...
Guideline 1.2 - Safety - User-Generated Content. Your AI chatbot app can generate harmful, explicit, or misleading content without adequate content filtering.
Fix: Implemented multi-layer content filtering: pre-prompt moderation, OpenAI content policy flags, and post-response scanning. Added a report button for outputs. Created a comprehensive blocklist for harm...
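The multi-layer structure is the key idea: check the prompt before it reaches the model and scan the response before display, with either layer able to veto. The blocklist terms and the stubbed model call are illustrative only:

```python
# Illustrative layered chatbot moderation: pre-prompt check, model call, then
# post-response scan. The blocklist entries are placeholders.
HARM_BLOCKLIST = {"make a weapon", "self-harm"}  # placeholder entries

def pre_check(prompt: str) -> bool:
    p = prompt.lower()
    return not any(term in p for term in HARM_BLOCKLIST)

def post_check(response: str) -> bool:
    r = response.lower()
    return not any(term in r for term in HARM_BLOCKLIST)

def moderated_chat(prompt: str, model) -> str:
    # Layer 1: refuse disallowed prompts before they reach the model.
    if not pre_check(prompt):
        return "[blocked: prompt violates content policy]"
    response = model(prompt)
    # Layer 2: scan the model's output before showing it to the user.
    if not post_check(response):
        return "[blocked: response withheld by safety filter]"
    return response
```

The post-response scan matters because a clean prompt can still yield a harmful completion; a production version would back each layer with a moderation model rather than substring matching.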
Guideline 1.2 - Safety - User-Generated Content. Your app allows users to post content but does not include sufficient content moderation and reporting features.
Fix: Implemented content reporting with multiple report categories, added AI-powered content moderation for text and images, created a moderation queue for admins, added user blocking functionality, and in...
Guideline 1.2 - Safety - User-Generated Content. Your flashcard app allows users to create and share decks but some shared decks contain inappropriate content.
Fix: Implemented content review for shared decks: text scanning for inappropriate language, image classification ML model, and manual review queue for flagged content. Private decks remain unmoderated but...
Guideline 1.2 - Safety - User-Generated Content. Your app does not have a mechanism to report offensive content or block abusive users.
Fix: Added three-dot menu on every message with Report and Block options. Implemented report categories (spam, harassment, inappropriate content). Added admin dashboard for moderating reports. Users get fe...
Guideline 1.1 - Safety - Objectionable Content. Your app contains user-generated content that could include offensive material without adequate moderation.
Fix: Implemented robust content moderation: AI-based text filtering, image classification for inappropriate content, user reporting system, and 24/7 moderation queue. Cleaned up all existing test content b...
User-generated content moderation requirements for iOS apps
Fix: Discord implemented NSFW server restrictions on iOS, required age verification for certain content, and enhanced content filtering.