Thank you for your interest in contributing to DeepCamera! This project is evolving into an open-source AI skill platform for SharpAI Aegis.
The best way to contribute is by building a new skill. Each skill is a self-contained folder under skills/ with:
- `SKILL.md`: declares parameters (rendered as UI in Aegis) and capabilities
- `requirements.txt`: Python dependencies
- `scripts/`: entry point using a JSON-lines stdin/stdout protocol
See `skills/detection/yolo-detection-2026/` for a complete reference implementation.
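The JSON-lines protocol above means the host writes one JSON object per line to the skill's stdin and reads one JSON object per line from its stdout. A minimal entry-point sketch is below; the exact message schema is defined by the reference skill, so the field names here (`id`, `detections`) are illustrative assumptions, not the repo's actual contract:

```python
import json
import sys


def handle(request: dict) -> dict:
    """Process one request object. Field names here are hypothetical."""
    # A real skill would run inference here; we echo back an empty result.
    return {"id": request.get("id"), "detections": []}


def main() -> None:
    # One JSON object per line on stdin; one JSON reply per line on stdout.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        reply = handle(json.loads(line))
        sys.stdout.write(json.dumps(reply) + "\n")
        sys.stdout.flush()  # flush each line so the host sees replies immediately


if __name__ == "__main__":
    main()
```

Flushing after every line matters: without it, replies can sit in the stdout buffer and the host will appear to hang waiting for the skill.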
- Camera providers: Eufy, Reolink, Tapo, Ring
- Messaging channels: Matrix, LINE, Signal
- Automation triggers: MQTT, webhooks
- AI models: VLM scene analysis, SAM2 segmentation, depth estimation
- Use GitHub Issues
- Include your platform, Python version, and steps to reproduce
- Fix typos, improve clarity, add examples
- Add platform-specific setup guides under `docs/`
```bash
git clone https://github.com/SharpAI/DeepCamera.git
cd DeepCamera

# Work on a skill
cd skills/detection/yolo-detection-2026
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```

- Python: follow PEP 8
- Use type hints where practical
- Add docstrings to public functions
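As a small illustration of the style points above (PEP 8 naming, type hints, a docstring on a public function), here is a hypothetical helper of the kind a detection skill might include; `iou` is not an existing function in the repo:

```python
Box = tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(box_a: Box, box_b: Box) -> float:
    """Return the intersection-over-union of two axis-aligned boxes."""
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```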
By contributing, you agree that your contributions will be licensed under the MIT License.