-
Dev environments
+
Run development environments
- Before training or deploying models, ML engineers need interactive GPU access to experiment and debug.
+ If you or your agent needs a GPU-backed development environment, let dstack provision one for you.
-
dstack's dev environments let you connect desktop IDEs such as VS
+ If you plan to work in it yourself, you can open it from your desktop IDE, such as VS
Code, Cursor, and
- Windsurf directly to cloud or on-prem GPUs.
+ Windsurf. dstack apply prints both the IDE URL and the SSH command.
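Dev environments are declared the same way as the task and service examples on this page. A minimal sketch (the name and GPU spec here are illustrative, not required values):

```yaml
type: dev-environment
name: vscode-gpu

python: "3.12"
# The IDE to expose; dstack prints its URL once the environment is up
ide: vscode

resources:
  gpu: 24GB
```

Running `dstack apply` against a file like this provisions the environment and prints the IDE URL and SSH command mentioned above.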
@@ -267,10 +564,10 @@
Dev environments
-
Single-node & distributed tasks
+
Run training or batch jobs at any scale
- Run training or batch workloads on a single GPU, or scale to multi-GPU and multi-node clusters using simple task configurations.
+ Run training or batch workloads on a single GPU, or scale to multi-GPU and multi-node clusters using simple task configurations.
dstack automates cluster provisioning, resource allocation, and job scheduling.
@@ -288,8 +585,42 @@
Single-node & distributed tasks
-

+
+
+
+
type: task
+name: train-distrib
+
+nodes: 2
+
+python: 3.12
+env:
+ - NCCL_DEBUG=INFO
+files: .
+commands:
+ - uv pip install -r requirements.txt
+ - |
+ torchrun \
+ --nproc-per-node=$DSTACK_GPUS_PER_NODE \
+ --node-rank=$DSTACK_NODE_RANK \
+ --nnodes=$DSTACK_NODES_NUM \
+ --master-addr=$DSTACK_MASTER_NODE_IP \
+ --master-port=12345 \
+ multinode.py 50 10
+
+resources:
+ gpu: H100:8
+ shm_size: 24GB
+
+
+
+
+
+
+
+
+
+
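Assuming the task configuration above is saved as `.dstack.yml`, it can be submitted with the CLI (a sketch of the common invocation):

```shell
# Submit the task and stream its logs; dstack provisions the
# two-node H100 cluster before torchrun starts
dstack apply -f .dstack.yml
```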
@@ -297,20 +628,54 @@ Single-node & distributed tasks
-

+
+
+
+
type: service
+name: qwen3-235b
+
+image: lmsysorg/sglang
+env:
+ - HF_TOKEN
+ - MODEL_ID=Qwen/Qwen3-235B-A22B-FP8
+commands:
+ - |
+ python3 -m sglang.launch_server \
+ --model-path $MODEL_ID \
+ --tp $DSTACK_GPUS_NUM \
+ --reasoning-parser qwen3 \
+ --port 8000
+
+port: 8000
+model: Qwen/Qwen3-235B-A22B-FP8
+
+resources:
+ gpu: H100:8
+ disk: 500GB..
+
+
+
+
+
+
+
+
+
+
-
Scalable model inference
+
Run high-performance model inference
With dstack, you can deploy models as secure,
- auto-scaling, OpenAI-compatible endpoints, integrating with top open-source serving frameworks.
+ auto-scaling, OpenAI-compatible endpoints, integrating with top open-source serving frameworks
+ such as SGLang, vLLM,
+ TensorRT-LLM, and others.
- dstack supports disaggregated prefill/decode and cache-aware routing, providing
+ dstack enables disaggregated prefill/decode and cache-aware routing, providing
production-grade, optimized inference.
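Once a service like the one above is running, it can be queried like any OpenAI-compatible API. A sketch in which the gateway URL and token are placeholders, not real values:

```shell
curl https://<your-gateway-domain>/v1/chat/completions \
  -H "Authorization: Bearer <your-dstack-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-235B-A22B-FP8",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```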
@@ -331,107 +696,6 @@
Scalable model inference
-
-
-
Loved by world-class AI teams
-
-
-
-
-

-
-
Wah Loon Keng
-
-
Sr. AI Engineer @Electronic Arts
-
-
- With dstack, AI researchers at EA can spin up and scale experiments without touching
- infrastructure. It supports everything from quick prototyping to multi-node training on any cloud.
-
-
-
-
-
-

-
-
Aleksandr Movchan
-
-
ML Engineer @Mobius Labs
-
-
- Thanks to dstack, my team can quickly tap into affordable
- GPUs and streamline our workflows
- from testing and development to full-scale application deployment.
-
-
-
-
-
-

-
-
Alvaro Bartolome
-
-
ML Engineer @Argilla
-
-
- With dstack it's incredibly easy to define a configuration
- within a
- repository
- and run it without worrying about GPU availability. It lets you focus on
- data and your research.
-
-
-
-
-
-

-
-
Park Chansung
-
-
ML Researcher @ETRI
-
-
- Thanks to dstack, I can effortlessly access the top GPU
- options across
- different clouds,
- saving me time and money while pushing my AI work forward.
-
-
-
-
-
-

-
-
Eckart Burgwedel
-
-
CEO @Uberchord
-
-
- With dstack, running LLMs on a cloud GPU is as
- easy as running a local Docker container.
- It combines the ease of Docker with the auto-scaling capabilities of K8S.
-
-
-
-
-
-

-
-
Peter Hill
-
-
Co-Founder @CUDO Compute
-
-
- dstack simplifies infrastructure provisioning and AI
- development.
- If your team is on the lookout for an AI platform, I
- wholeheartedly recommend dstack.
-
-
-
-
-
-
FAQ
@@ -564,6 +828,107 @@ FAQ
+
+
+
Trusted by thousands of engineers across 100+ AI-first companies
+
+
+
+
+

+
+
Wah Loon Keng
+
+
Sr. AI Engineer @Electronic Arts
+
+
+ With dstack, AI researchers at EA can spin up and scale experiments without touching
+ infrastructure. It supports everything from quick prototyping to multi-node training on any cloud.
+
+
+
+
+
+

+
+
Aleksandr Movchan
+
+
ML Engineer @Mobius Labs
+
+
+ Thanks to dstack, my team can quickly tap into affordable
+ GPUs and streamline our workflows
+ from testing and development to full-scale application deployment.
+
+
+
+
+
+

+
+
Alvaro Bartolome
+
+
ML Engineer @Argilla
+
+
+ With dstack it's incredibly easy to define a configuration
+ within a
+ repository
+ and run it without worrying about GPU availability. It lets you focus on
+ data and your research.
+
+
+
+
+
+

+
+
Park Chansung
+
+
ML Researcher @ETRI
+
+
+ Thanks to dstack, I can effortlessly access the top GPU
+ options across
+ different clouds,
+ saving me time and money while pushing my AI work forward.
+
+
+
+
+
+

+
+
Eckart Burgwedel
+
+
CEO @Uberchord
+
+
+ With dstack, running LLMs on a cloud GPU is as
+ easy as running a local Docker container.
+ It combines the ease of Docker with the auto-scaling capabilities of K8S.
+
+
+
+
+
+

+
+
Peter Hill
+
+
Co-Founder @CUDO Compute
+
+
+ dstack simplifies infrastructure provisioning and AI
+ development.
+ If your team is on the lookout for an AI platform, I
+ wholeheartedly recommend dstack.
+
+
+
+
+
+
@@ -575,10 +940,12 @@
Get started in minutes
-
Install dstack on your laptop via uv
- and start it using the CLI, or deploy it anywhere with the dstackai/dstack Docker image.
+
+ Install dstack on your laptop with uv,
+ or deploy it anywhere using the dstackai/dstack Docker image.
+
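The uv route above can be sketched as follows (the `[all]` extra is an assumption; check the installation docs for the exact package spec):

```shell
# Install the dstack CLI and server, then start the server locally
uv tool install "dstack[all]"
dstack server
```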
-
Set up backends or SSH fleets, then add your team.
+
Bring your compute via backends or SSH fleets, then bring your team.
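An SSH fleet pointing dstack at existing machines is itself just a configuration. A minimal sketch, where the user, key path, and host IPs are placeholders:

```yaml
type: fleet
name: on-prem-fleet

ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```

Applying this registers the hosts so tasks, services, and dev environments can be scheduled onto them.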
Get started in minutes
@@ -611,9 +987,7 @@
dstack Sky
-
Don't want to host the dstack server or want to get the
- cheapest GPUs
- from the marketplace?
+
Hosted by us. Bring your own cloud, or tap into the marketplace.
@@ -636,19 +1010,18 @@
dstack Enterprise
-
Looking for a self-hosted dstack with SSO, governance
- controls, and enterprise support?
+
Self-hosted with SSO, air-gapped setup, and dedicated support.
diff --git a/docs/overrides/main.html b/docs/overrides/main.html
index 9597dd06f..f35bbf540 100644
--- a/docs/overrides/main.html
+++ b/docs/overrides/main.html
@@ -191,7 +191,7 @@
{% endblock %}
{% block announce %}
-Infrastructure orchestration is becoming an
agent skill
+
Let agents manage infra and orchestrate workloads
{% endblock %}
{% block footer %}