Summary
The `_load_model` function in `wrapper.py` fails when a client requests `model_id: 'stabilityai/sd-turbo'` (the non-SDXL variant). All three loading strategies are attempted and all fail, and the pipeline falls back to the previously loaded model. 22 occurrences across 3 test stream sessions in the last 12 hours (staging).
Error Details
```
level=ERROR location=wrapper.py:1169:_load_model
message=_load_model: All loading methods failed for model 'stabilityai/sd-turbo'.
Last error: Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.
```
Followed by:
```
level=ERROR location=pipeline.py:142:_reload_pipeline
message=[update_params] Error reloading pipeline, falling back to previous params:
_load_model: All loading methods failed for model 'stabilityai/sd-turbo'.
Last error: Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.
```
Loading Sequence (from logs)
```
_load_model: Attempting to load with AutoPipeline from_pretrained...
⚠ WARNING: stabilityai/sd-turbo does not appear to have a file named model_index.json.
_load_model: Attempting to load with SD from_single_file...
⚠ WARNING: SD from_single_file failed: Invalid pretrained_model_name_or_path. Please set it to a valid URL.
_load_model: Attempting to load with SDXL from_single_file...
⚠ WARNING: SDXL from_single_file failed: Invalid pretrained_model_name_or_path. Please set it to a valid URL.
ERROR: All loading methods failed
```
Stack Trace
```
File "/app/live/streamdiffusion/.venv/lib/python3.11/site-packages/streamdiffusion/wrapper.py", line 1174, in _load_model
RuntimeError: _load_model: All loading methods failed for model 'stabilityai/sd-turbo'.
Last error: Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.
```
Context
- Affected streams: `aiJobTesterStream-1774367307005720297`, `aiJobTesterStream-1774365236335977691`, `aiJobTesterStream-1774357901550169619` (AI job tester, not real users)
- Node: `ai.nightnode.net`
- Pipeline: `streamdiffusion-sdxl`
- First seen: ~13:11 UTC 2026-03-24; last seen: ~15:48 UTC 2026-03-24
- Frequency: 22 errors across 3 stream sessions
Root Cause
`stabilityai/sd-turbo` (the non-SDXL variant) is being passed as `model_id`. The repo lacks a `model_index.json` file on the HuggingFace Hub, so `AutoPipeline.from_pretrained()` fails. The `from_single_file()` fallbacks also fail because that loader expects a checkpoint file path or URL, not a bare Hub repo ID.
The system loads `stabilityai/sdxl-turbo` correctly via AutoPipeline. This is most likely a client/tester misconfiguration.
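For clarity, the failure mode above follows the shape of a sequential loader-fallback chain. This is a hedged sketch, not the actual `wrapper.py` code: `try_loaders` and the stand-in loader functions are hypothetical names, and the stand-ins simply raise the errors observed in the logs (the real code calls `AutoPipeline.from_pretrained` and the SD/SDXL `from_single_file` loaders).

```python
# Sketch of a three-strategy loading chain (illustrative, not wrapper.py itself).
# Each loader is tried in order; only the last error is surfaced, which is why
# the final RuntimeError mentions the from_single_file failure rather than the
# missing model_index.json.

def try_loaders(model_id, loaders):
    """Attempt each (name, loader) pair in order; raise with the last error if all fail."""
    last_error = None
    for name, loader in loaders:
        try:
            return loader(model_id)
        except Exception as e:
            print(f"WARNING: {name} failed: {e}")
            last_error = e
    raise RuntimeError(
        f"_load_model: All loading methods failed for model '{model_id}'. "
        f"Last error: {last_error}"
    )

# Stand-ins that reproduce the observed failures for 'stabilityai/sd-turbo':
def auto_pipeline_stub(model_id):
    # AutoPipeline requires model_index.json in the Hub repo.
    raise OSError(f"{model_id} does not appear to have a file named model_index.json.")

def single_file_stub(model_id):
    # from_single_file rejects bare Hub repo IDs.
    raise ValueError("Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.")

loaders = [
    ("AutoPipeline from_pretrained", auto_pipeline_stub),
    ("SD from_single_file", single_file_stub),
    ("SDXL from_single_file", single_file_stub),
]
```

This also explains why the reported "Last error" is about an invalid URL rather than the more diagnostic missing `model_index.json`: the chain overwrites earlier errors with later ones.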
Suggested Fix
- Add validation in the `_reload_pipeline` / param-update path: check whether `model_id == 'stabilityai/sd-turbo'` and either remap it to `stabilityai/sdxl-turbo` or return a clear validation error before attempting the load.
- Or: add `sd-turbo` to a blocklist/allowlist at the API layer, with a helpful error message pointing to the correct model ID.
- Improve the error message emitted to the monitoring event so it is more actionable.
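The first suggested fix could be sketched as below. This is a minimal illustration under stated assumptions: `KNOWN_REMAPS` and `validate_model_id` are hypothetical names, not existing symbols in `wrapper.py` or `pipeline.py`.

```python
# Hedged sketch: reject (or remap) the known-bad model ID before any load attempt,
# so the client gets an immediate, actionable error instead of a silent fallback.
# Names here (KNOWN_REMAPS, validate_model_id) are illustrative assumptions.

KNOWN_REMAPS = {
    # Non-SDXL variant; not loadable by the streamdiffusion-sdxl pipeline.
    "stabilityai/sd-turbo": "stabilityai/sdxl-turbo",
}

def validate_model_id(model_id: str, remap: bool = False) -> str:
    """Return a loadable model ID, or raise a clear validation error."""
    if model_id in KNOWN_REMAPS:
        if remap:
            return KNOWN_REMAPS[model_id]
        raise ValueError(
            f"Model '{model_id}' is not supported by this pipeline; "
            f"did you mean '{KNOWN_REMAPS[model_id]}'?"
        )
    return model_id
```

Called at the top of the param-update path, this would turn the current silent fallback into either an immediate validation error or a transparent remap, depending on the chosen policy.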
Observed Impact
- Non-fatal: the pipeline falls back to the previous model. The user-visible effect is that the requested model change silently fails.
- A monitoring event of type `error` is emitted to the stream, so the client can detect the failure.
Grafana Reference
fal.ai Logs Dashboard — 2026-03-24, 13:00–16:00 UTC