A collection of defensive and offensive security tools, research projects, and internal R&D maintained by Red Specter.
Built for security teams, incident responders, and authorized researchers.
Detect → Block → Contain → Prove.
After 30+ years in technology—from MS-DOS to AI security—I've built AI Shield: an 18-module framework for AI threat detection, forensics, and incident response.
Compliant with:
→ EU AI Act
→ ISO/IEC 42001 (AI Management System)
→ NIST AI Risk Management Framework
→ ISO/IEC 23894 (AI Risk Management)
The decision: Partnership with a compliance platform vs. open source release.
The pattern is clear:
- Kubernetes commoditized containers
- Terraform forced AWS to adapt
- Prometheus disrupted monitoring
Open source wins in security tooling because security teams trust code they can audit.
If AI Shield goes open source:
- Enterprises deploy it for free (global AI compliance covered)
- They ask: "Why am I paying £15K-£40K/year for compliance platforms when operational security is free?"
- Compliance platforms become "documentation tools"
The alternative: Strategic partnership with one platform. Exclusive integration. Competitive moat.
Decision timeline: Under evaluation through February 2026.
CISOs and platform executives—take note.
- Strategic Update
- Overview
- At a Glance
- Featured: Red Specter AI Shield
- Public Tools
- Private R&D
- Why This Matters Now
- Usage & Access
- Responsible Use & Legal
- Contact & Collaboration
Red Specter focuses on practical visibility and response across:
- Botnet activity and early-stage DDoS signals
- C2-style outbound behaviour and beaconing
- Sudden service exposure and brute-force patterns
- AI-era risks: shadow AI usage, prompt injection, data leakage, and model integrity
- Fast containment and evidence-first reporting
This profile README is a high-level inventory with links to each repo.
| Category | Count / Format | Status |
|---|---|---|
| Public Tools | 13 | Open Source |
| Private R&D Modules | 20+ | Internal |
| Integrated Platform (AI Shield) | 18 modules | Production Ready |
| Operational Playbooks | 18 | Production |
| Case Pack Exports | CSOAI-ready | Tamper-Evident |
A fully integrated, production-ready platform for AI security governance and incident response.
Red Specter AI Shield unifies 18 security modules—from prevention to forensic response—into a single deployable suite.
It provides the operational security layer that compliance platforms cannot deliver.
Current compliance platforms provide:
- Policy templates and questionnaires
- Documentation frameworks
- Audit trail management
What they don't provide:
- Tamper-evident evidence chains
- Forensic artifact collection
- Real-time incident response capabilities
- Demonstrable operational controls
Regulations demand operational proof:
- EU AI Act Article 12: Logging and traceability requirements
- ISO/IEC 42001: Demonstrable AI management controls
- NIST AI RMF: Continuous monitoring and risk management
- ISO/IEC 23894: Operational risk assessment
AI Shield delivers what regulations actually require—not just documentation.
✅ Integrated Platform: 18 modules on a unified event schema (RS Event v1)
✅ Forensic Evidence: Automated, tamper-evident case packaging (timeline + IOCs + hashes)
✅ Operational Coverage: 18 playbooks mapped to modules + sanity coverage checker
✅ CSOAI-ready Export: Submission summary + export bundle + checksums (Watchdog-style evidence packaging)
✅ Local GUI: 4-button web interface for golden-path workflows (Preflight → Init → Build → Verify)
✅ Cross-Platform: Full Linux/WSL2 support, automated Windows installer available
✅ Compliance Ready: EU AI Act, ISO/IEC 42001, NIST AI RMF, ISO/IEC 23894 alignment
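RS Event v1 and the case-pack format are internal, so the sketch below is only a rough illustration of the pattern described in the features above: normalize an event, write it into a case pack, and record SHA-256 checksums in a manifest so later tampering is detectable. Every field name, path, and module identifier in it is an assumption, not the published schema.

```python
# Illustrative sketch only: RS Event v1 is an internal schema, so every field
# name below is an assumption, not the published format.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_case_pack(case_dir: Path, events: list[dict]) -> Path:
    """Write events plus a checksum manifest so later edits are detectable."""
    case_dir.mkdir(parents=True, exist_ok=True)
    timeline = case_dir / "timeline.jsonl"
    with timeline.open("w") as f:
        for event in events:
            f.write(json.dumps(event, sort_keys=True) + "\n")
    manifest = {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "files": {timeline.name: sha256_file(timeline)},
    }
    manifest_path = case_dir / "manifest.json"
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path

# Hypothetical normalized event, loosely in the spirit of "RS Event v1".
example_event = {
    "schema": "rs-event-v1",          # assumed identifier
    "timestamp": "2025-06-01T12:00:00Z",
    "module": "ai-jailbreak-ids",     # assumed module name
    "severity": "high",
    "summary": "Prompt-injection pattern detected",
    "iocs": ["203.0.113.7"],
}

if __name__ == "__main__":
    print(build_case_pack(Path("case_pack_demo"), [example_event]))
```

Verification is then just re-hashing the files and comparing against manifest.json; any mismatch indicates the pack was altered after it was sealed.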
For compliance platform executives: Partnership discussions
For CISOs: Technical briefings on operational capabilities
For security teams: Architecture and integration details
- Botnet Radar — Host-level botnet/DDoS early warning and scoring
- DDoS Flood Sentinel — UDP flood / carpet detection heuristics and alerts
- Port Surge Guardian — Sudden listening-port exposure change alerts
- C2 Hunter — Outbound monitoring for C2-like behaviour (see the beaconing sketch after this list)
- Threat Recon Watcher — Brute-force / high-volume IP detection from logs
- Offensive Framework — Ethical lab toolkit for recon → reporting (authorized testing only)
- ScriptMap — Script inventory and supply-chain visibility
- Email OSINT — Passive domain-based email intelligence
- AI Breach Monitor — Detects likely sensitive data leaks in AI prompt logs
- AI Endpoint Guard — Endpoint visibility into AI tool usage
- AI Usage Watchdog — Privacy-first Linux telemetry for AI/LLM usage signals
- AI Firewall Proxy — Policy-enforcing proxy to control and log AI model access
- Evidence Collector — DFIR/pentest evidence ledger into structured case files
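As a rough illustration of the outbound-monitoring idea behind C2 Hunter (and the Beacon Detector module listed under Private R&D), the sketch below flags a destination whose connection intervals are unusually regular. The thresholds and logic are illustrative assumptions only, not the tools' actual code.

```python
# Illustrative heuristic only; thresholds and logic are assumptions, not the
# actual C2 Hunter / Beacon Detector implementation.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         min_events: int = 6,
                         max_jitter_ratio: float = 0.1) -> bool:
    """Flag a destination whose connection times are suspiciously regular.

    timestamps: epoch seconds of outbound connections to one destination.
    A low standard deviation relative to the mean interval (low "jitter")
    is a common indicator of automated C2 check-ins.
    """
    if len(timestamps) < min_events:
        return False
    ordered = sorted(timestamps)
    intervals = [b - a for a, b in zip(ordered, ordered[1:])]
    avg = mean(intervals)
    if avg <= 0:
        return False
    return pstdev(intervals) / avg <= max_jitter_ratio

# Example: connections every ~60s with tiny jitter -> likely a beacon.
times = [0, 60.2, 119.8, 180.1, 240.0, 299.9, 360.3]
print(looks_like_beaconing(times))  # True
```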
(Internal and restricted. Not for public distribution without authorization.)
- Breach Containment Switch — One-command web containment + evidence snapshot
- AI ShadowOps Detector — Covert AI usage detection with evidence logs
- AI Jailbreak IDS — Prompt-injection / jailbreak intent detection with logging
- Agentic Action Gatekeeper — Policy enforcement + circuit breaker for agent actions (framework-agnostic gateway with auditable decisions; see the sketch after this list)
- Red Specter Scrambler — Reverse-proxy chokepoint + tripwire scoring to disrupt agentic/automated intrusion workflows (traps, RS Event v1 alerts, evidence packs)
- Cognitive Drift Sentinel — Model behaviour drift monitoring over time
- Ransomware Canary Sentinel — Canary-based alerts on mass file changes before ransomware encryption begins
- LLM Memory Forensics Kit — Scans AI memory/log dumps for risky indicators + tamper-evident reports
- Log Anomaly Sentinel — Rare command and log pattern detection
- Beacon Detector — Timed C2 beaconing detection
- Companion Sentinel — Manipulation/dependency pattern detection in AI companion chats
- Botnet Radar Pro — Enterprise-tier botnet scoring and enrichment
- Kernel Trust Sentinel — Kernel trust posture + module/tracing cross-checks (rootkit-deception indicators) → RS Event v1 evidence
- PoisonWatch — Defensive poisoning/backdoor scanner for datasets & RAG corpora (prompt-injection + obfuscation heuristics) → RS Event v1
- Phish Interceptor (Defensive) — .eml phishing/BEC triage → IOCs + RS Event v1 + tamper-evident case pack
- AI Decision Provenance — Cryptographic decision logging for AI accountability
- Takedown Dossier Generator — Converts telemetry into evidence-ready takedown packs (IOCs, timeline, templates, tamper-evident hashes)
- Deepfake Verification Guard — Liveness + out-of-band verification packs for voice/video fraud (includes Ticket/QR Verification Pack)
- Red Defender — Autonomous multi-agent defensive AI prototype
- Red Specter Lab — Internal lab scripts, SOPs, and tooling backbone
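To make the Agentic Action Gatekeeper pattern concrete (a policy gate plus circuit breaker in front of agent actions, with every decision logged), here is a minimal sketch under assumed rules, action shape, and thresholds; it is not the module's actual implementation.

```python
# Minimal sketch of a policy gate + circuit breaker for agent actions.
# Rules, action shape, and thresholds are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    denied_tools: set[str] = field(default_factory=lambda: {"shell", "payments"})
    max_denials: int = 3          # trip the breaker after this many denials
    denials: int = 0
    tripped: bool = False

    def allow(self, tool: str, target: str) -> bool:
        """Return True if the agent action may proceed; log every decision."""
        if self.tripped:
            decision = False
        elif tool in self.denied_tools or target.endswith(".internal"):
            decision = False
        else:
            decision = True
        if not decision:
            self.denials += 1
            if self.denials >= self.max_denials:
                self.tripped = True   # circuit breaker: block all further actions
        # Auditable decision record (stdout here; a real gateway would persist it).
        print({"tool": tool, "target": target, "allowed": decision,
               "breaker_tripped": self.tripped})
        return decision

gate = ActionGate()
gate.allow("http_get", "api.example.com")   # allowed
gate.allow("shell", "rm -rf /tmp/x")        # denied
gate.allow("payments", "refund:123")        # denied
gate.allow("http_get", "billing.internal")  # denied -> breaker trips
gate.allow("http_get", "api.example.com")   # now blocked by the breaker
```

The point of the breaker is that a run of denied actions is itself a signal: once tripped, the gateway fails closed until a human resets it.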
The threat landscape has fundamentally shifted with AI-powered attacks reaching industrial scale.
Red Specter's stance is simple:
- Detect early — Spot threats before they cause damage
- Block where you can — Prevent unauthorized actions at policy chokepoints
- Contain fast — Rapid response with one-command containment
- Prove everything — Evidence-first: hashes, timelines, tamper-evident case packs
Key AI-era threats driving this shift:
- AI-powered phishing reaching unprecedented sophistication
- Shadow AI usage creating compliance and data leakage risks
- Prompt injection attacks bypassing traditional security controls
- Model poisoning targeting RAG systems and fine-tuned models
- Deepfake fraud in voice/video authentication
- Regulatory pressure demanding demonstrable operational controls
AI Shield addresses these threats with integrated prevention, detection, and forensic response—delivering the operational layer that compliance frameworks require but platforms cannot provide.
Public tools:
- Follow each repo's README, licensing, and usage notes
- Open source and available for authorized security testing
- Contributions welcome via issues and pull requests
Private R&D:
- Restricted to internal staff and vetted partners
- Do not attempt to run or distribute without approval
- Contact for technical briefings or strategic discussions
AI Shield:
- Status: Strategic evaluation phase
- Access: Currently private—partnership discussions or technical briefings by request
- Timeline: Decision under evaluation through February 2026
Important Notice:
Some tooling and research can be misused. You must:
✅ Follow all applicable laws and regulations
✅ Obtain written authorization for offensive testing
✅ Follow employer/client policies and procedures
✅ Get explicit permission before testing systems you do not own
✅ Use tools only for legitimate security research and authorized assessments
Red Specter tools are provided for:
- Authorized security testing and research
- Incident response and forensic investigation
- Defense capability development
- Educational purposes in controlled environments
Not for:
- Unauthorized access or testing
- Malicious purposes or illegal activities
- Violation of computer fraud and abuse laws
- Bypassing security controls without permission
By using these tools, you agree to use them responsibly and legally.
Strategic Inquiries:
- 🛡️ Compliance Platform Executives: Partnership discussions
- 🔬 CISOs: Technical briefings on AI Shield capabilities
- 🔧 Security Teams: Architecture and integration details
- 💼 Enterprise Consulting: AI security governance and compliance
- LinkedIn: Richard Barron
- Email: Contact via LinkedIn
- GitHub: @RichardBarron27
- Location: London, UK 🇬🇧
We welcome contributions!
- 🐛 Bug Reports: Open an issue with reproduction steps
- 💡 Feature Requests: Describe your use case and proposal
- 🔧 Pull Requests: Fork, branch, PR with tests + docs
- 📖 Documentation: Help improve guides and examples
Contribution Guidelines:
- Check existing issues before opening new ones
- Follow the code style of the existing project
- Include tests for new functionality
- Update documentation as needed
- Keep PRs focused on a single feature/fix
Built by Red Specter Security Research | London, UK | 2024-2026
⚡ Innovation Beyond Belief 🔥


