The goal of this project is to implement the following paper: https://www-sop.inria.fr/reves/Basilic/2016/FLB16/fidelity_simplicity.pdf
We also implemented several new ideas, such as a convolutional neural network with synthetic data augmentation for preprocessing.
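To give a feel for what synthetic data augmentation for a denoising CNN can look like, here is a rough, illustrative sketch: a clean rasterized stroke image is corrupted with Gaussian noise and random pixel dropout, yielding (noisy, clean) training pairs. The function name and parameters are placeholders; the actual pipeline lives in notebooks/cnn.py and may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(clean, noise_std=0.1, dropout_p=0.05):
    """Synthetically corrupt a clean stroke image: additive Gaussian
    noise plus random pixel dropout. Illustrative only."""
    noisy = clean + rng.normal(0.0, noise_std, clean.shape)
    mask = rng.random(clean.shape) > dropout_p  # drop ~5% of pixels
    return np.clip(noisy * mask, 0.0, 1.0).astype(np.float32)

# A toy 64x64 "clean" drawing: one horizontal stroke.
clean = np.zeros((64, 64), dtype=np.float32)
clean[32, 10:54] = 1.0
noisy = augment(clean)  # (noisy, clean) is one training pair
print(noisy.shape)
```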
Guess what: the entire library can run in your browser!
If you want a guided tour of how the library works, go here:
If you just want to test it for yourself, with your own drawings, go here:
The library is also available as a pip package:
pip install "sketchy-svg[onnx]"
# or use uv: uv add "sketchy-svg[onnx]"
There is no command line interface, but you can easily build your own; for inspiration, look at the Demo class in src/sketchy_svg/viz.
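A minimal CLI wrapper could look like the sketch below. Everything here is hypothetical scaffolding (the argument names and the `process` stub are placeholders, not part of the library's API); the real entry points are in the Demo class mentioned above.

```python
import argparse

def build_parser():
    # Hypothetical CLI skeleton; wire the parsed paths into the
    # library's actual calls (see src/sketchy_svg/viz, class Demo).
    parser = argparse.ArgumentParser(prog="sketchy-svg")
    parser.add_argument("input", help="path to an input drawing (SVG)")
    parser.add_argument("-o", "--output", default="out.svg",
                        help="where to write the simplified result")
    return parser

args = build_parser().parse_args(["drawing.svg", "-o", "clean.svg"])
print(args.input, "->", args.output)
```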
To install the dependencies, install uv and run:
uv sync --extra onnx

To launch the notebooks, run uv run marimo edit . and the notebooks should open in your browser.
The CNN denoiser can be trained from notebooks/cnn.py.
1. Install training dependencies
On a machine with a GPU (Linux, default CUDA torch):
uv sync --group train

On a CPU-only machine (e.g. your local machine):

uv sync --group train --extra cpu

2. Run the notebook

uv run marimo edit notebooks/cnn.py

Set USE_RAY = False to train locally, or USE_RAY = True to offload to a remote Ray cluster (set RAY_ADDRESS accordingly). The trained model is exported to src/sketchy_cnn/model.onnx at the end of the notebook.
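The USE_RAY switch can be pictured as the sketch below. This is an assumed shape, not the actual notebook cell; the real code in notebooks/cnn.py may differ.

```python
import os

# Sketch of the training-backend switch described above.
USE_RAY = False  # set True to offload training to a remote Ray cluster

if USE_RAY:
    import ray
    # RAY_ADDRESS selects the cluster, e.g. "ray://<head-node>:10001"
    ray.init(address=os.environ["RAY_ADDRESS"])
    backend = "ray"
else:
    backend = "local"

print("training backend:", backend)
```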
How the SVG dataset is downloaded
data/svg_dataset.csv is tracked in git, so no download is needed in most cases.
If you need to refresh it, the dataset comes from OmniSVG/MMSVG-Icon on HuggingFace. Create a token at https://huggingface.co/settings/tokens and add it to a .env file at the project root:
HF_TOKEN=hf_...
Then use the download button in notebooks/svg_dataset.py.
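For reference, a minimal .env loader looks like the sketch below (the project may well use python-dotenv instead; this is just to show the expected file format). The token value here is an obvious placeholder.

```python
import os
import pathlib
import tempfile

def load_env(path):
    # Minimal KEY=VALUE parser for a .env file; skips blanks and comments.
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Example: write a .env containing a placeholder token and load it.
with tempfile.TemporaryDirectory() as d:
    env_path = pathlib.Path(d) / ".env"
    env_path.write_text("HF_TOKEN=hf_placeholder\n")
    load_env(env_path)

print(os.environ["HF_TOKEN"])
```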
If you want more background, you can read the Presentation.
