This is a vibe-coded app built from a functional specification document (FSD).
GraphHopper-based prototype for visualizing motorway distance to eligible HPC chargers.
- Motorways: Overpass API (single query, cached locally)
- Distance computation: configurable (`graphhopper_exact` default, `exit_based` optional)
- Preprocessing: Python pipeline with restartable artifacts
- Map delivery: MBTiles + tile server + MapLibre GL JS
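The motorway fetch mentioned above (single Overpass query, cached locally) can be sketched as follows. The endpoint URL, query text, and cache layout are illustrative assumptions, not the pipeline's actual code:

```python
import json
import time
from pathlib import Path
from urllib.parse import urlencode
from urllib.request import urlopen

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public endpoint

# Hypothetical single query for all German motorways.
MOTORWAY_QUERY = """
[out:json][timeout:600];
area["ISO3166-1"="DE"][admin_level=2]->.de;
way["highway"="motorway"](area.de);
out geom;
"""

def fetch_overpass(query: str) -> dict:
    """POST the query to the Overpass endpoint and parse the JSON reply."""
    data = urlencode({"data": query}).encode()
    with urlopen(OVERPASS_URL, data=data) as resp:
        return json.load(resp)

def cached_motorways(cache_path: Path, max_age_days: int = 90,
                     fetch=fetch_overpass) -> dict:
    """Return the cached Overpass result, refetching only when the cache
    file is missing or older than max_age_days."""
    if cache_path.exists():
        age_days = (time.time() - cache_path.stat().st_mtime) / 86400
        if age_days <= max_age_days:
            return json.loads(cache_path.read_text())
    result = fetch(MOTORWAY_QUERY)
    cache_path.parent.mkdir(parents=True, exist_ok=True)
    cache_path.write_text(json.dumps(result))
    return result
```

The real pipeline additionally asks for confirmation before refreshing a stale cache; this sketch just refetches silently.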
- Python 3.9+
- Docker Desktop with Compose (Mac/Linux pipeline + local dev)
- Internet access for Overpass API
BNetzA charging station register (data/raw/bnetza/)
- Download the latest CSV from the Bundesnetzagentur Ladesäulenkarte
- Place in `data/raw/bnetza/`
- Update the filename in `config/default.yaml` → `paths.charger_csv` if your date differs
Germany OSM PBF (data/raw/osm/germany-latest.osm.pbf) — only needed for GraphHopper import
- Download from Geofabrik (~4.4 GB)
GraphHopper JAR (tools/graphhopper/graphhopper-web-*.jar) — only needed when running GraphHopper natively (without Docker)
- Download the latest `graphhopper-web-*.jar` from the GraphHopper releases page
- Place it in `tools/graphhopper/` with its original filename
```sh
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
python3 -m pip install --upgrade pip
python3 -m pip install -e .
```

Only for the local frontend (optional):

```sh
cd frontend && npm install && cd ..
```

Requires Docker Desktop. GraphHopper runs inside the compose network.
```sh
docker compose up graphhopper --remove-orphans   # first run imports graph (~10 min), then serves on :8989
docker compose run --rm pipeline                 # runs all stages; skips already-completed artifacts
```

Import completion is persisted at `tools/graphhopper/.import-complete` — restarts do not re-import.
For Germany import set Docker Desktop memory to at least 10 GB (16 GB recommended).
Reset and re-import if needed:
```sh
rm -f tools/graphhopper/.import-complete
rm -rf tools/graphhopper/graph-cache
docker compose up graphhopper --remove-orphans
```

Pipeline outputs:

- `data/intermediate/` — stage artifacts (cached between runs)
- `data/processed/hpc_distance_segments.geojson`
- `data/processed/hpc_sites.geojson`
- `data/processed/hpc_distance.mbtiles`
- `data/processed/hpc_sites.mbtiles`
- `data/processed/run_metadata.json`
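The restartable-artifact behaviour behind the stage cache can be illustrated with a minimal sketch. The function name and artifact path are illustrative, not the pipeline's actual API:

```python
import json
from pathlib import Path
from typing import Callable

def run_stage(artifact: Path, build: Callable[[], object]):
    """Run a pipeline stage only if its artifact is missing.

    An existing artifact is reused verbatim, so an interrupted run
    restarts from the first stage whose output file does not exist."""
    if artifact.exists():
        return json.loads(artifact.read_text())
    result = build()
    artifact.parent.mkdir(parents=True, exist_ok=True)
    artifact.write_text(json.dumps(result))
    return result

# Hypothetical usage, mirroring the artifact filenames above:
# points = run_stage(Path("data/intermediate/02_directional_sample_points.json"),
#                    build=sample_points)
```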
Requires Java 17+ and the GraphHopper JAR + Germany PBF (see Prerequisites above).
One-time graph import + start server:
```powershell
powershell -ExecutionPolicy Bypass -File .\scripts\start_graphhopper.ps1
```

First run imports the graph (~10 min for Germany). Subsequent runs start the server immediately.
Uses config/graphhopper.yml and writes graph cache to tools/graphhopper/graph-cache/.
Run the pipeline:
```powershell
powershell -ExecutionPolicy Bypass -File .\scripts\run_pipeline.ps1
```

Uses config/local.yaml (points to http://127.0.0.1:8989). Tippecanoe is skipped on Windows — copy the GeoJSON outputs to a Mac and run the Mac pipeline step to produce mbtiles.
Reset graph cache:
```powershell
Remove-Item -Recurse -Force tools\graphhopper\graph-cache
Remove-Item tools\graphhopper\.import-complete
```

Run the API and tile server:

```sh
docker compose up --build api tileserver
```

- API: http://localhost:8000
- Tile server: http://localhost:8080
Docker (recommended):
```sh
docker compose up --build frontend api tileserver
```

Local:

```sh
cd frontend
npm run dev
```

Open http://localhost:5173.
- Route stage is parallelized (`routing.max_workers`) and logs progress with ETA every 30 s.
- GraphHopper exact mode uses BallTree + adaptive lower-bound pruning (provably optimal result, no fixed top-K approximation). Requires Contraction Hierarchies enabled in `config/graphhopper.yml`.
- Motorways are fetched from Overpass in a single query and clipped to the Germany polygon before sampling.
- Overpass cache refresh requires confirmation when older than 90 days (config-driven).
- An HPC stations layer (`hpc_sites`) is generated from the BNetzA CSV.
- Frontend uses vector tiles for distance lines and clustered GeoJSON for HPC stations.
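The lower-bound pruning mentioned above can be sketched in Python. Straight-line (haversine) distance never exceeds routed road distance, so it is a valid lower bound: once candidates are visited in ascending lower-bound order, the search can stop as soon as the next lower bound exceeds the best routed distance found so far. The pipeline reportedly uses a BallTree; this sketch simply sorts the candidates, and `route_km` is a hypothetical stand-in for a GraphHopper routing call:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km; a lower bound on any routed distance."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def nearest_routed(point, chargers, route_km):
    """Return (distance_km, charger) for the charger with the shortest
    routed distance, routing only to candidates whose straight-line
    lower bound could still beat the current best."""
    best = (math.inf, None)
    for charger in sorted(chargers, key=lambda c: haversine_km(point, c)):
        if haversine_km(point, charger) >= best[0]:
            break  # no later candidate can beat the best routed distance
        d = route_km(point, charger)  # e.g. a GraphHopper /route request
        if d < best[0]:
            best = (d, charger)
    return best
```

Because the early exit relies only on the lower-bound property, the result is exact without a fixed top-K cutoff, matching the "provably optimal" claim above.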
After the Windows pipeline finishes, copy these intermediate artifacts to the Mac
(same paths under data/intermediate/):
- data/intermediate/01_motorways_clipped.geojson
- data/intermediate/02_directional_sample_points.json
- data/intermediate/03_charger_checksum.json
- data/intermediate/03_eligible_chargers.json
- data/intermediate/05_route_distances.json
These files form a coupled cache set. Copying only 05_route_distances.json is not
safe because stages 2 and 3 must match the point ids and charger set used when routing.
With the full set present, the pipeline reuses stages 1-5 and regenerates only the
processed GeoJSON, run metadata, and mbtiles outputs.
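The coupling described above can be sanity-checked before reusing the cache: every point id referenced by the routing results must exist in the sample-point artifact. The field names (`id`, `point_id`) below are assumptions about the JSON schemas, not the pipeline's actual format:

```python
import json
from pathlib import Path

def cache_set_consistent(inter: Path) -> bool:
    """Cross-check the copied cache set: routing results must only
    reference point ids present in the sample-point artifact.
    Field names are assumed schemas for illustration."""
    points = json.loads((inter / "02_directional_sample_points.json").read_text())
    routes = json.loads((inter / "05_route_distances.json").read_text())
    return {r["point_id"] for r in routes} <= {p["id"] for p in points}
```

A similar check against 03_charger_checksum.json would confirm the charger set matches; its schema is not shown here.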
Then on Mac, run tippecanoe only (no GH needed, no PBF needed):
```sh
docker compose run --rm --no-deps pipeline
```

`--no-deps` skips starting the graphhopper container. With the intermediate cache set
in place, the pipeline reuses stages 1-5 and only rebuilds downstream processed
outputs and mbtiles.
Run the full pipeline on Mac first (produces mbtiles). Then:
```sh
./deploy/prepare_render_assets.sh
```

This stages mbtiles into deploy/render/tileserver/data/ and hpc_sites.geojson into deploy/render/api-data/.
Commit and push:

```sh
git add deploy/render
git commit -m "chore: update pipeline outputs"
git push
```

Render builds Docker images from your repo on every push (configured via render.yaml Blueprint).
If service names differ from defaults, adjust VITE_TILESERVER_BASE in render.yaml.