Understanding Risk Scores
Learn how Cirvia Parental's AI system assesses content and assigns risk scores to help you prioritize your child's online safety.
Risk Score Scale
🟢 Low Risk (Scores 1-4)
Normal childhood online activity
What it means:
- Typical conversations and interactions
- Age-appropriate content and language
- General monitoring for awareness
Common examples:
- "Good game everyone!"
- "What's your favorite Roblox game?"
- Sharing memes or funny images
- Normal competitive gaming chat
Your action: Content is logged for your review but doesn't require immediate attention.
🟡 Medium Risk (Scores 5-7)
Content that warrants closer attention
What it means:
- Potentially concerning language or behavior
- Social conflicts that might need guidance
- Content that could escalate if not addressed
Common examples:
- "Nobody likes you anyway"
- Sharing personal information (school name, age)
- Mild inappropriate language
- Arguments with other users
Your action: Review content within 24 hours and consider talking with your child about appropriate online behavior.
🔴 High Risk (Scores 8-10)
Serious safety concerns requiring immediate attention
What it means:
- Direct threats to your child's safety
- Predatory behavior from adults
- Serious cyberbullying or harassment
- Content that may require law enforcement involvement
Common examples:
- "Want to meet up in person?"
- Requests for photos or personal information
- Sexual content or advances
- Threats of violence or self-harm
Your action: Investigate immediately, document evidence, and take appropriate protective measures.
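The band boundaries above are fixed: 1-4 is Low, 5-7 is Medium, and 8-10 is High. If it helps to see that mapping spelled out, here is a minimal Python sketch; the function name and comments are ours for illustration and are not part of Cirvia's actual software.

```python
# Minimal sketch of the score-to-band mapping described above.
# Illustrative only; not Cirvia's production code.

def risk_level(score: int) -> str:
    """Map a 1-10 risk score to its Low, Medium, or High band."""
    if not 1 <= score <= 10:
        raise ValueError("risk scores range from 1 to 10")
    if score <= 4:
        return "Low"      # logged for later review
    if score <= 7:
        return "Medium"   # review within 24 hours
    return "High"         # investigate immediately

print(risk_level(3))   # Low
print(risk_level(6))   # Medium
print(risk_level(9))   # High
```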
AI Analysis Categories
Harassment Detection
What we look for:
- Bullying and intimidation tactics
- Repeated negative targeting
- Exclusion and social manipulation
- Threats and aggressive language
Risk factors that increase score:
- Personal attacks about appearance, abilities, or identity
- Coordinated group harassment
- Threats of physical harm
- Doxxing or sharing personal information maliciously
Sexual Content Detection
What we look for:
- Inappropriate sexual language or requests
- Sharing or requesting intimate images
- Sexual predation tactics
- Age-inappropriate romantic advances
Risk factors that increase score:
- Adult requesting private communication with child
- Requests for photos, especially "special" or private ones
- Sexual language directed at children
- Grooming behaviors (gifts, special attention, secrecy)
Violence Detection
What we look for:
- Threats of physical harm
- Graphic violent content
- Self-harm references
- Dangerous activities or challenges
Risk factors that increase score:
- Specific threats with details
- Images of weapons or violence
- Self-harm planning or methods
- Encouragement of dangerous activities
Hate Speech Detection
What we look for:
- Discriminatory language based on identity
- Hate symbols or references
- Organized hate group activity
- Targeting based on race, religion, gender, etc.
Risk factors that increase score:
- Systematic targeting of individuals
- Use of known hate symbols or terminology
- Encouragement of discrimination or violence
- Recruitment for hate groups
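Across all four categories the pattern is the same: content receives a base score, and that score rises when escalation factors are present. The sketch below shows the idea as a simple point-per-factor rule, which is an assumption made for illustration; Cirvia's detection relies on trained models rather than keyword counting.

```python
# Illustrative only: how escalation factors might push a category's base score higher.
# The one-point-per-factor rule is an assumption for this example.

def escalated_score(base_score: int, factors_found: list[str]) -> int:
    """Add one point per escalation factor found, capped at 10."""
    return min(10, base_score + len(factors_found))

# A harassment message with two escalation factors present:
print(escalated_score(5, ["threats of physical harm", "coordinated group harassment"]))  # 7
```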
Context Factors
Platform-Specific Considerations
Gaming Platforms (Roblox, Minecraft):
- Competitive language is common and scored lower
- Focus on interactions outside of normal gameplay
- Private messages weighted more heavily than public chat
Social Media (Instagram, TikTok):
- Public posts get additional scrutiny
- Comments on photos analyzed for appropriateness
- Direct messages prioritized for safety
Messaging Apps (Discord, WhatsApp):
- Private conversations carry higher weight
- Group dynamics and peer pressure considered
- Adult-child interactions flagged more aggressively
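In practice, this means the same message can score differently depending on where it was sent. The sketch below shows one way private-channel weighting could work; the platform multipliers are invented for illustration and are not Cirvia's real parameters.

```python
# Hypothetical platform weighting. Multiplier values are invented for illustration.

PRIVATE_CHANNEL_WEIGHT = {
    "roblox": 1.3,     # private messages weighted above public game chat
    "instagram": 1.2,  # direct messages prioritized for review
    "discord": 1.4,    # private conversations carry higher weight
}

def apply_platform_context(score: float, platform: str, is_private: bool) -> float:
    """Raise the score for private channels, capped at the 10-point maximum."""
    weight = PRIVATE_CHANNEL_WEIGHT.get(platform, 1.0) if is_private else 1.0
    return min(10.0, round(score * weight, 1))

print(apply_platform_context(6.0, "discord", is_private=True))   # 8.4
print(apply_platform_context(6.0, "discord", is_private=False))  # 6.0
```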
Age-Appropriate Adjustments
Younger Children (Under 10):
- Lower tolerance for any concerning content
- Educational conversations about online behavior
- Focus on stranger danger and the risks of sharing personal information
Tweens (10-13):
- Balance between independence and protection
- Attention to social dynamics and peer pressure
- Cyberbullying prevention and response
Teens (13+):
- Respect for developing autonomy
- Focus on serious safety threats
- Preparation for adult online responsibility
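One practical effect of these age bands is that the point at which you get alerted can shift with your child's age. The sketch below illustrates that idea with invented thresholds; the values Cirvia actually uses are set through your dashboard's sensitivity settings rather than fixed in code.

```python
# Hypothetical age-band alert thresholds. The numbers are illustrative assumptions.

def alert_threshold(child_age: int) -> int:
    """Return the minimum risk score that should trigger a parent alert."""
    if child_age < 10:
        return 4    # younger children: lower tolerance, alert earlier
    if child_age <= 13:
        return 5    # tweens: balance independence and protection
    return 7        # teens: focus attention on serious safety threats

print(alert_threshold(8))   # 4
print(alert_threshold(12))  # 5
print(alert_threshold(15))  # 7
```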
How Scores Are Calculated
AI Analysis Process
1. Content Preprocessing
- Text is analyzed for meaning and context
- Images scanned for inappropriate content
- User interaction patterns considered
2. Multi-Factor Assessment
- Base content analysis (language, imagery)
- Context evaluation (platform, relationship)
- Historical pattern recognition
- Risk escalation factors
3. Score Assignment
- Weighted scoring based on severity
- Platform-specific adjustments
- Age-appropriate considerations
- Final risk level determination
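Put together, the final score is roughly a weighted combination of content severity, context, and history, adjusted for your child's age and clamped to the 1-10 scale. The sketch below is a simplified illustration of that flow; the field names and weights are assumptions, not Cirvia's actual model.

```python
# Simplified illustration of the three-step flow above. Field names and weights
# are assumptions for this example, not Cirvia's actual model.

from dataclasses import dataclass

@dataclass
class Assessment:
    base_severity: float     # from content analysis (language, imagery), 0-10
    context_bonus: float     # platform and relationship adjustments
    history_bonus: float     # escalation from repeated patterns
    age_sensitivity: float   # multiplier from the child's age band

def final_score(a: Assessment) -> int:
    """Combine the factors and clamp the result to the 1-10 scale."""
    raw = (a.base_severity + a.context_bonus + a.history_bonus) * a.age_sensitivity
    return max(1, min(10, round(raw)))

# A moderately concerning message, sent privately, with prior incidents, to a younger child:
print(final_score(Assessment(5.0, 1.0, 0.5, 1.2)))  # 8
```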
Learning and Improvement
Continuous Enhancement:
- AI learns from new online safety threats
- Regular updates to detection algorithms
- Parent feedback helps refine accuracy
- Integration of latest research on child safety
Real Examples
Low Risk Examples
Score 2 - Gaming Chat:
"gg everyone, that was fun!"
- Positive social interaction
- Appropriate gaming language
- No safety concerns
Score 3 - Social Media:
"Can't wait for the weekend! 😊"
- Normal personal sharing
- Age-appropriate content
- Standard social media interaction
Medium Risk Examples
Score 5 - Mild Conflict:
"You're so bad at this game, just quit"
- Mild negative behavior
- Could hurt feelings but not dangerous
- Opportunity for teaching moment about kindness
Score 6 - Information Sharing:
"I go to Lincoln Middle School in the 7th grade"
- Sharing personal information publicly
- Not immediately dangerous but risky
- Needs conversation about privacy
High Risk Examples
Score 8 - Inappropriate Request:
"Send me a photo of yourself, don't tell anyone"
- Clear predatory behavior
- Request for secrecy is a major red flag
- Immediate parent intervention required
Score 10 - Direct Threat:
"I know where you live, I'm coming to hurt you"
- Specific threat with apparent capability
- Immediate safety concern
- Law enforcement contact may be necessary
Taking Action Based on Scores
Low Risk Response
- Monitor trends over time
- Casual conversations about online experiences
- Positive reinforcement for good digital citizenship
Medium Risk Response
- Review full context of the interaction
- Talk with your child about what happened
- Provide guidance on better responses
- Monitor more closely for related incidents
High Risk Response
- Document evidence immediately
- Ensure child's immediate safety
- Contact appropriate authorities if needed
- Restrict access to dangerous platforms/contacts
- Seek professional help if trauma occurred
Frequently Asked Questions
"Why did harmless content get a medium score?"
Sometimes context matters more than the specific words. Our AI considers:
- Who is sending the message (adult vs. peer)
- Platform where it occurred (public vs. private)
- Pattern of behavior (isolated vs. repeated)
- Your child's age and vulnerability factors
"Can I adjust the sensitivity of scoring?"
Yes! In your dashboard settings, you can:
- Adjust notification thresholds
- Focus on specific content types
- Customize platform-specific settings
- Set age-appropriate sensitivity levels
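To picture how those options fit together, a family's configuration might conceptually look like the snippet below. The field names are illustrative assumptions, not the dashboard's exact option names; check your settings page for what is actually available.

```python
# Illustrative only: the kinds of options described above, shown as one settings dictionary.
# Field names are assumptions for this example.

notification_settings = {
    "alert_threshold": 5,                                       # notify at Medium risk and above
    "content_types": ["harassment", "sexual_content", "violence", "hate_speech"],
    "platform_overrides": {"discord": {"alert_threshold": 4}},  # stricter for a private chat app
    "child_age": 12,                                            # drives age-appropriate sensitivity
}
```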
"What if I disagree with a score?"
- Use the feedback option on each incident
- Your input helps improve the AI's accuracy
- Consider if additional context might affect the assessment
- Contact support for scoring questions
Best Practices
Daily Monitoring
- Focus on high-risk incidents first
- Scan medium-risk content for patterns
- Use low-risk data to understand normal activity
Weekly Reviews
- Look for trends in scoring over time
- Identify platforms that generate more concerns
- Adjust settings based on your family's needs
Teaching Moments
- Use incidents as conversation starters
- Explain why certain content is concerning
- Help your child develop good judgment
- Celebrate positive online interactions
Next Steps
- Managing Incidents → Learn how to respond effectively to different risk levels
- Dashboard Overview → Explore all dashboard features
- Demo Mode → Practice with sample incidents safely
Understanding risk scores helps you protect your child more effectively. Use this knowledge to prioritize your attention and have more productive conversations with your child about online safety.