AI Practice Sprint Guidelines
Rules and best practices for safe, productive AI experimentation during the sprint.
The AI Practice Sprint is a structured two-week experiment. Everything produced with AI follows the principle: AI-drafted, human-approved. No AI output should be shipped, shared externally, or treated as final without human review.
- All AI-generated content must be reviewed before use.
- Iterate on your prompts -- the first output is rarely the best.
- Document what works and share your findings with the team.
Review every piece of AI-generated output before using it in your workflow.
Share your learnings, useful prompts, and workflows with the team through the feed.
Ask mentors for help when you're stuck or unsure about an approach.
Experiment safely in sandbox environments. Try different tools and techniques.
Log your daily check-ins and token usage to help us understand sprint progress.
Give feedback on mentor sessions to help improve the experience for everyone.
Don't paste PII (personally identifiable information) or company secrets into AI tools.
Don't deploy untested AI-generated code to production environments.
Don't send AI-drafted documents externally without thorough human review and approval.
Don't share API keys, credentials, or access tokens in prompts or posts.
Don't assume AI output is factually correct -- always verify claims and data.
Operations
HR, Legal, Finance, IT, InfoSec
All externally-facing documents generated with AI must go through legal review before distribution.
Financial models produced by AI should be validated against known benchmarks before use.
Product
Engineering, Product, Design, Marketing, Growth
No AI-generated code should be pushed to production without code review and testing.
AI-generated marketing copy must be reviewed for brand voice and accuracy before publishing.
Daily Ops
Calendar, Email, Personal Productivity
Keep personal data out of AI prompts. Use anonymized or generic examples instead.
Review AI-drafted emails carefully before sending, especially to external recipients.
OK to paste into AI tools:
- Public documentation and publicly available information
- Generic code snippets and templates (no secrets embedded)
- Anonymized or synthetic data sets
- Internal process descriptions (non-confidential)
NOT OK to paste into AI tools:
- Customer PII (names, emails, phone numbers, addresses)
- API keys, passwords, tokens, or credentials
- Financial data, salary information, or revenue figures
- Proprietary algorithms or trade secrets
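Before pasting anything into an AI tool, it can help to run a quick automated check for the riskiest patterns. The sketch below is a hypothetical helper, not an official sprint tool: the regex patterns are illustrative examples and will not catch everything on the "not OK" list, so a human still makes the final call.

```python
import re

# Illustrative patterns only -- a real secret scanner would use a
# maintained ruleset. These cover a few common, easy-to-spot cases.
SUSPECT_PATTERNS = {
    "email address": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
    "AWS access key": r"\bAKIA[0-9A-Z]{16}\b",
    "bearer token": r"(?i)\bbearer\s+[a-z0-9._-]{20,}\b",
    "private key header": r"-----BEGIN [A-Z ]*PRIVATE KEY-----",
}

def find_suspect_content(text):
    """Return the names of any patterns that match, so the author
    can redact the text before pasting it into an AI tool."""
    return [name for name, pattern in SUSPECT_PATTERNS.items()
            if re.search(pattern, text)]
```

A non-empty result means "stop and redact first"; an empty result is not a guarantee the text is safe.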
Anonymization Tips:
- Replace real names with generic labels (e.g., "User A", "Client B")
- Use placeholder domains (@example.com) instead of real email addresses
- Round or fuzz numerical values when exact figures are not needed
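The anonymization tips above can be sketched as a small helper. This is a minimal illustration, not a vetted PII scrubber: the function names and the fuzzing step are assumptions made for the example, and it only handles names you explicitly list.

```python
import re

def anonymize(text, names):
    """Replace listed real names with generic labels ("User A", "User B", ...)
    and swap real email addresses for a placeholder at example.com."""
    for i, name in enumerate(names):
        label = f"User {chr(ord('A') + i)}"
        text = text.replace(name, label)
    # Replace anything that looks like an email address.
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "user@example.com", text)

def fuzz(value, step=1000):
    """Round a figure to the nearest step when the exact value isn't needed."""
    return round(value / step) * step
```

For example, `anonymize("Contact Jane Doe at jane@acme.com", ["Jane Doe"])` yields a string with the name and address replaced, and `fuzz(48350)` rounds to the nearest thousand.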
If something goes wrong during the sprint -- whether it is a data leak, an unintended publication, or a tool behaving unexpectedly -- follow this escalation path:
1. Stop immediately. Do not try to fix it on your own if data may be compromised.
2. Contact your stream owner. They can assess the severity and coordinate next steps.
3. Escalate to leadership if the stream owner is unavailable or if the issue affects multiple teams.
4. Document the incident so the team can learn from it and improve safeguards.