Merged
61 changes: 20 additions & 41 deletions README.md
@@ -38,43 +38,21 @@ It streamlines development, training, and inference, and is compatible with any
<img src="https://dstack.ai/static-assets/static-assets/images/dstack-architecture-diagram-v11.svg" width="750" />
</picture>

### Installation
### Launch the server

> Before using `dstack` through CLI or API, set up a `dstack` server. If you already have a running `dstack` server, you only need to [set up the CLI](#set-up-the-cli).
> Before using `dstack` through CLI or API, set up a `dstack` server. If you already have a running `dstack` server, you only need to [install the CLI](#install-the-cli).

#### Set up the server

##### Configure backends

To orchestrate compute across GPU clouds or Kubernetes clusters, you need to configure backends.

Backends can be set up in `~/.dstack/server/config.yml` or through the [project settings page](https://dstack.ai/docs/concepts/projects#backends) in the UI.

For more details, see [Backends](https://dstack.ai/docs/concepts/backends).
To orchestrate compute across GPU clouds or Kubernetes clusters, you need to [configure backends](https://dstack.ai/docs/concepts/backends).

> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets) once the server is up.

##### Start the server

You can install the server on Linux, macOS, and Windows (via WSL 2). It requires Git and
The server can be installed on Linux, macOS, and Windows (via WSL 2). It requires Git and
OpenSSH.

##### uv

```shell
$ uv tool install "dstack[all]" -U
```

##### pip

```shell
$ pip install "dstack[all]" -U
```

Once it's installed, go ahead and start the server.

```shell
$ dstack server

Applying ~/.dstack/server/config.yml...

The admin token is "bbae0f28-d3dd-4820-bf61-8f4bb40815da"
@@ -84,27 +62,18 @@ The server is running at http://127.0.0.1:3000/
> For more details on server configuration options, see the
[Server deployment](https://dstack.ai/docs/guides/server-deployment) guide.

### Install the CLI

<details><summary>Set up the CLI</summary>

#### Set up the CLI
<details><summary>If the CLI is not installed with the server</summary>

Once the server is up, you can access it via the `dstack` CLI.
Once the server is up, you can access it via the `dstack` CLI.

The CLI can be installed on Linux, macOS, and Windows. It requires Git and OpenSSH.

##### uv

```shell
$ uv tool install dstack -U
```

##### pip

```shell
$ pip install dstack -U
```

To point the CLI to the `dstack` server, configure it
with the server address, user token, and project name:

@@ -113,12 +82,22 @@ $ dstack project add \
--name main \
--url http://127.0.0.1:3000 \
--token bbae0f28-d3dd-4820-bf61-8f4bb40815da

Configuration is updated at ~/.dstack/config.yml
```

</details>

### Install AI agent skills

Install [skills](https://skills.sh/dstackai/dstack/dstack) to help AI agents use the `dstack` CLI and edit configuration files.

```shell
$ npx skills add dstackai/dstack
```

AI agents like Claude, Codex, and Cursor can now create and manage fleets and submit workloads on your behalf.

### Define configurations

`dstack` supports the following configurations:
@@ -133,7 +112,7 @@ Configuration can be defined as YAML files within your repo.
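
As an illustration, a minimal task configuration might look like the following sketch, based on the task schema in dstack's documentation (the file name, commands, and GPU size here are hypothetical, not taken from this PR):

```yaml
# .dstack.yml — a hypothetical minimal task configuration
type: task
name: train
# Commands run inside the provisioned environment
commands:
  - pip install -r requirements.txt
  - python train.py
# Request a GPU with at least 24GB of memory
resources:
  gpu: 24GB
```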

### Apply configurations

Apply the configuration either via the `dstack apply` CLI command or through a programmatic API.
Apply the configuration via the `dstack apply` CLI command, a programmatic API, or through [AI agent skills](#install-ai-agent-skills).

`dstack` automatically manages provisioning, job queuing, auto-scaling, networking, volumes, run failures,
out-of-capacity errors, port-forwarding, and more &mdash; across clouds and on-prem clusters.
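
For example, assuming the repo contains a task configuration saved as `.dstack.yml` (a hypothetical name), it could be applied from the CLI like this:

```shell
$ dstack apply -f .dstack.yml
```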
15 changes: 15 additions & 0 deletions docs/assets/javascripts/termynal.js
@@ -119,6 +119,21 @@ class Termynal {

line.removeAttribute(`${this.pfx}-cursor`);
}
// Keep cursor visible if the last input has no output after it
let lastInputIdx = -1;
let hasOutputAfter = false;
for (let i = this.lines.length - 1; i >= 0; i--) {
if (this.lines[i].getAttribute(this.pfx) === 'input') {
lastInputIdx = i;
break;
}
if (this.lines[i].textContent.trim()) {
hasOutputAfter = true;
}
}
if (lastInputIdx >= 0 && !hasOutputAfter) {
this.lines[lastInputIdx].setAttribute(`${this.pfx}-cursor`, this.cursor);
}
this.addRestart()
this.finishElement.style.visibility = 'hidden'
this.lineDelay = this.originalLineDelay
41 changes: 34 additions & 7 deletions docs/docs/installation.md
@@ -9,7 +9,7 @@ description: How to install the dstack server and CLI
If you don't want to host the `dstack` server (or want to access GPU marketplace),
skip installation and proceed to [dstack Sky](https://sky.dstack.ai). -->

## Set up the server
## Launch the server

The server can run on your laptop or any environment with access to the cloud and on-prem clusters you plan to use.

@@ -70,11 +70,11 @@ The server can run on your laptop or any environment with access to the cloud and on-prem clusters you plan to use.

For more details on server deployment options, see the [Server deployment](guides/server-deployment.md) guide.

### Configure backends
!!! info "Configure backends"

> To orchestrate compute across GPU clouds or Kubernetes clusters, you need to configure [backends](concepts/backends.md).
To orchestrate compute across GPU clouds or Kubernetes clusters, you need to configure [backends](concepts/backends.md).

## Set up the CLI
## Install the CLI

Once the server is up, you can access it via the `dstack` CLI.

@@ -175,7 +175,7 @@ Once the server is up, you can access it via the `dstack` CLI.
> If you get an error similar to `2: command not found: compdef`, then add the following line to the beginning of your `~/.zshrc` file:
> `autoload -Uz compinit && compinit`.

### Configure the default project
### Configure the project

To point the CLI to the `dstack` server, configure it
with the server address, user token, and project name:
@@ -195,7 +195,7 @@ Configuration is updated at ~/.dstack/config.yml

This configuration is stored in `~/.dstack/config.yml`.

### Install agent skills
## Install AI agent skills

Install [skills](https://skills.sh/dstackai/dstack/dstack) to help AI agents use the `dstack` CLI and edit configuration files.

@@ -207,7 +207,34 @@ $ npx skills add dstackai/dstack

</div>

Agent skills are experimental. Share your feedback via [GitHub issues](https://github.com/dstackai/dstack/issues).
AI agents like Claude, Codex, and Cursor can now create and manage fleets and submit workloads on your behalf.

<div class="termy">

```shell
▐▛███▜▌ Claude Code v2.1.83
▝▜█████▛▘ Opus 4.6 (1M context) · Claude Team
▘▘ ▝▝ ~/skills-demo

$ /dstack

dstack skill loaded. How can I help? For example:

- Apply a configuration (*.dstack.yml)
- Check run status (dstack ps)
- Manage fleets, volumes, or services
- Create or edit a dstack configuration
- Troubleshoot provisioning or connectivity issues

What would you like to do?

$
```

</div>

!!! info "Feedback"
We're actively improving Skills and would love your feedback in [GitHub issues](https://github.com/dstackai/dstack/issues).

!!! info "What's next?"
1. See [Backends](concepts/backends.md)