🌳 ThreatForest
AI-powered threat modeling and attack tree generation
Get comprehensive threat models for your application from autonomous AI agents that analyze your project, then generate and visualize attack trees mapped to MITRE ATT&CK.

✨ What is ThreatForest?
ThreatForest is an AI-powered threat modeling tool: autonomous agents scan your application, identify threats, and generate attack trees mapped to MITRE ATT&CK.
🚀 Quick Example
Generate comprehensive attack trees in minutes:
Prerequisites
Before starting, ensure you have Python 3.11+ installed and an LLM provider configured. AWS Bedrock is fully supported and recommended.
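As a pre-flight sanity check, something like the following could verify those prerequisites. This is a hedged sketch, not part of ThreatForest itself: the function name and the choice of environment variables are illustrative assumptions.

```python
# Hypothetical pre-flight check -- ThreatForest may ship its own checks.
import os
import sys

def check_prerequisites() -> list[str]:
    """Return a list of problems that would block a run."""
    problems = []
    if sys.version_info < (3, 11):
        problems.append(f"Python 3.11+ required, found {sys.version.split()[0]}")
    # AWS Bedrock is the recommended provider; boto3 resolves credentials
    # from the environment or ~/.aws. This only checks common env vars.
    if not any(os.environ.get(v) for v in ("AWS_PROFILE", "AWS_ACCESS_KEY_ID")):
        problems.append("No AWS credentials detected for Bedrock")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("MISSING:", problem)
```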

🎯 Key Features
Intelligent Analysis
Repository Scanning
The Scanner Agent autonomously navigates your project using Strands tools to discover:
- Architecture diagrams and documentation
- Technology stack and cloud provider
- Data flows and trust boundaries
- Authentication mechanisms and entry points
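To make the discovery step concrete, here is a minimal sketch of what a scanner pass might collect from a project tree. The real Scanner Agent uses Strands tools and an LLM rather than fixed rules, and the marker-file-to-technology mapping below is an illustrative assumption.

```python
# Illustrative repository scan -- not ThreatForest's actual scanner logic.
from pathlib import Path

# Hypothetical marker files mapped to the technology they imply.
TECH_MARKERS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "go.mod": "Go",
    "Dockerfile": "Docker",
    "template.yaml": "AWS SAM / CloudFormation",
}

def scan_repository(root: str) -> dict:
    """Collect docs, diagrams, and tech-stack hints from a project tree."""
    root_path = Path(root)
    context = {"docs": [], "diagrams": [], "stack": set()}
    for path in root_path.rglob("*"):
        if path.name in TECH_MARKERS:
            context["stack"].add(TECH_MARKERS[path.name])
        elif path.suffix.lower() in {".md", ".rst"}:
            context["docs"].append(str(path.relative_to(root_path)))
        elif path.suffix.lower() in {".png", ".svg", ".drawio"}:
            context["diagrams"].append(str(path.relative_to(root_path)))
    return context
```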
Threat Identification
The Threat Agent reads the scanner's context and produces a structured threat list from:
- ThreatComposer workspaces (`.tc.json`)
- JSON, YAML, and Markdown threat files
- AI-generated threats when no threat file exists
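A minimal sketch of the workspace-loading path might look like this. The field names (`threats`, `statement`) reflect my understanding of the ThreatComposer export format and should be treated as an assumption; the fallback comment marks where AI generation would take over.

```python
# Hedged sketch of reading a ThreatComposer workspace export.
import json
from pathlib import Path

def load_threats(workspace: str) -> list[str]:
    """Return threat statements from a .tc.json export, if one exists."""
    path = Path(workspace)
    if not path.exists():
        # ThreatForest would fall back to AI-generated threats here.
        return []
    data = json.loads(path.read_text())
    # Assumed schema: top-level "threats" list with "statement" strings.
    return [t.get("statement", "") for t in data.get("threats", [])]
```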
Parallel Analysis
A per-threat pipeline runs concurrently for every identified threat:
- Attack tree generation
- MITRE ATT&CK TTP mapping (ATTACK-BERT embeddings)
- Mitigation recommendations
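The fan-out described above can be sketched with a thread pool, since each stage is I/O-bound (LLM calls). The `analyze_threat` body below is a placeholder for the real stages: attack tree generation, ATT&CK mapping via ATTACK-BERT embeddings, and mitigation recommendations.

```python
# Sketch of concurrent per-threat analysis, assuming each stage is an
# ordinary function; the real agents call an LLM per stage.
from concurrent.futures import ThreadPoolExecutor

def analyze_threat(threat: str) -> dict:
    """Placeholder pipeline: tree generation, TTP mapping, mitigations."""
    return {
        "threat": threat,
        "attack_tree": f"tree for: {threat}",  # stand-in for LLM output
        "ttps": [],          # stand-in for ATT&CK technique mapping
        "mitigations": [],   # stand-in for recommendations
    }

def analyze_all(threats: list[str]) -> list[dict]:
    # Threads suffice because each stage waits on network I/O, not CPU.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(analyze_threat, threats))
```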
💼 Use Cases
🛡️ Security Teams
Automate threat modeling, generate attack trees, map to MITRE ATT&CK for compliance
🔄 DevSecOps
Integrate into CI/CD, analyze changes, generate security documentation
🏗️ Architects & Developers
Understand security implications, identify vulnerabilities early, learn attack patterns
📋 Compliance & Auditors
Document threats, demonstrate due diligence, generate compliance reports
📊 What You Get
⭐ Interactive Dashboard
Explore generated attack trees through an interactive graph visualization.
Features:
- Visual network graph with pan/zoom
- Interactive node exploration
- Real-time filtering and search
- MITRE ATT&CK technique details
- Expandable mitigation strategies
- Export and sharing capabilities
🔒 Privacy & Security
Data Privacy
ThreatForest sends application context to your configured LLM provider for analysis. AWS Bedrock provides enterprise-grade data handling. For other providers, review their data policies.
Best Practices:
- Use AWS Bedrock for production workloads (officially supported)
- Remove secrets and credentials from project files before analysis
- Review generated output for any sensitive information
- Store outputs in secure, access-controlled locations
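A quick sweep for obvious credentials before handing files to an LLM provider could follow the shape below. The patterns are illustrative and deliberately incomplete; a dedicated secret scanner is the better choice for real use.

```python
# Hedged sketch: regex sweep for secret-like strings before analysis.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|token)\s*[:=]\s*\S+"),
]

def find_secrets(text: str) -> list[str]:
    """Return the secret-like strings matched in text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```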
🆘 Need Help?
📚 Documentation
Browse comprehensive guides and API references
🐛 Report Issues
Found a bug? Have a feature request?
❓ FAQ
Frequently asked questions