AI Governance
Transparency about how we use AI to build your applications, our data handling practices, and our commitment to responsible AI.
Our Responsible AI Principles
Transparency
We clearly disclose when AI generates content. Every app created shows it was built with OverSkill AI.
Human Control
AI suggests; you approve. You can modify, reject, or regenerate any output. You decide when to deploy.
Safety by Design
Prompt injection filtering, content moderation, enforcement of prohibited content categories, and rate limiting prevent abuse.
Accountability
All AI interactions are logged. You own your generated apps. We don't train on your data.
Fairness
We conduct regular bias testing, maintain diverse templates, embed accessibility guidelines in prompts, and monitor outputs on an ongoing basis.
AI Model Providers
- Anthropic (Claude): Claude Sonnet 4.5
- Google (Gemini): Gemini 3 Pro
Data Handling Commitments
What We Do
- Process your prompts to generate application code and content
- Store your generated applications securely for deployment
- Log interactions for debugging and improving service quality
- Apply safety filters to prevent harmful content generation
- Provide full export capability for your data and applications
What We Don't Do
- No training on your data: Your prompts and applications are never used to train AI models
- No data selling: We never sell your data to third parties
- No cross-customer data sharing: Your data is isolated from other users
- No unauthorized access: Only you and authorized team members can access your applications
AI Safety Measures
Prompt Injection Filtering
All user inputs are filtered to prevent prompt injection attacks that could manipulate AI behavior or extract sensitive information.
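To illustrate the idea, here is a minimal sketch of pattern-based input screening. The patterns and function names are hypothetical examples, not OverSkill's actual filter, and a production system would layer classifier-based detection on top of simple pattern matching.

```python
import re

# Hypothetical examples of common injection phrasings; a real filter
# would combine many signals, not just regular expressions.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
    re.compile(r"you are now (in )?developer mode", re.IGNORECASE),
]

def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

print(is_suspicious("Build me a todo app"))        # False
print(is_suspicious("Ignore previous instructions"))  # True
```

Flagged inputs would then be rejected or routed for closer inspection rather than passed directly to the model.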
Content Moderation
Generated content is checked against safety guidelines. Applications that attempt to generate harmful, illegal, or inappropriate content are blocked.
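A simplified sketch of category-based checking follows. The category names and keyword lists are invented for illustration; real moderation typically relies on trained classifiers rather than keyword lookup.

```python
# Hypothetical prohibited-content categories with example trigger terms.
PROHIBITED = {
    "malware": ["keylogger", "ransomware"],
    "phishing": ["fake login page", "credential harvesting"],
}

def moderate(text: str) -> list[str]:
    """Return the prohibited categories that the text triggers, if any."""
    lowered = text.lower()
    return [category for category, terms in PROHIBITED.items()
            if any(term in lowered for term in terms)]

print(moderate("Create a todo app"))          # []
print(moderate("Write a ransomware payload"))  # ['malware']
```

An empty result lets generation proceed; a non-empty result blocks the request and records which guideline was violated.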
Rate Limiting
API rate limits prevent abuse and ensure fair resource allocation. This protects the platform from attacks and maintains service quality.
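One common way to implement such limits is a token bucket, sketched below. The rate and capacity values are illustrative placeholders, not OverSkill's actual limits.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
print([bucket.allow() for _ in range(5)])  # burst of 3 allowed, then rejections
```

Requests beyond the burst capacity are rejected until tokens refill, which smooths traffic spikes and keeps one client from monopolizing capacity.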
Important: AI Limitations
AI-generated applications may contain errors, produce unexpected results, or require human review and modification. OverSkill provides tools to assist in application development, but:
- AI models can make mistakes or produce inconsistent outputs
- Generated code should be reviewed before production use
- You are responsible for testing and validating applications
- AI cannot guarantee compliance with specific regulations
- Human oversight is essential for critical applications
Questions About Our AI Practices?
We're committed to transparency. If you have questions about how we use AI or handle your data, please reach out.
[email protected]