Commit 6fc77eb

release: 0.7.0-alpha.1 (#326)

Automated Release PR

---

## 0.7.0-alpha.1 (2026-03-28)

Full Changelog: [v0.6.1-alpha.1...v0.7.0-alpha.1](v0.6.1-alpha.1...v0.7.0-alpha.1)

### ⚠ BREAKING CHANGES

* eliminate GET /chat/completions/{completion_id} conformance issues
* rename agents API to responses API
* eliminate /files/{file_id} GET differences

### Features

* Add stream_options parameter support ([b4c2f15](b4c2f15))
* eliminate /files/{file_id} GET differences ([1f28d73](1f28d73))
* eliminate GET /chat/completions/{completion_id} conformance issues ([dad9f54](dad9f54))
* **internal:** implement indices array format for query and form serialization ([6694121](6694121))
* **responses:** add cancel endpoint for background responses ([d9bc91a](d9bc91a))

### Bug Fixes

* **deps:** bump minimum typing-extensions version ([50ea4d7](50ea4d7))
* **inference:** improve chat completions OpenAI conformance ([147b88b](147b88b))
* **pydantic:** do not pass `by_alias` unless set ([f6836f9](f6836f9))
* remove duplicate dataset_id parameter in append-rows endpoint ([d6a79d0](d6a79d0))
* sanitize endpoint path params ([9b288d5](9b288d5))

### Chores

* **ci:** skip lint on metadata-only changes ([b096c2c](b096c2c))
* **internal:** tweak CI branches ([1df7e26](1df7e26))
* **internal:** update gitignore ([0e98cfd](0e98cfd))
* **internal:** version bump ([f468096](f468096))
* **tests:** bump steady to v0.19.4 ([f5ad8f8](f5ad8f8))
* **tests:** bump steady to v0.19.5 ([55689e1](55689e1))
* **tests:** bump steady to v0.19.6 ([87cb87e](87cb87e))
* **tests:** bump steady to v0.19.7 ([10f6ed7](10f6ed7))

### Refactors

* remove fine_tuning API ([021bd5e](021bd5e))
* remove tool_groups from public API and auto-register from provider specs ([c0df2dc](c0df2dc))
* rename agents API to responses API ([f5c27db](f5c27db))
* rename rag-runtime provider to file-search ([94a14da](94a14da))
* **tests:** switch from prism to steady ([23d591c](23d591c))

---

This pull request is managed by Stainless's [GitHub App](https://github.com/apps/stainless-app). The [semver version number](https://semver.org/#semantic-versioning-specification-semver) is based on the included [commit messages](https://www.conventionalcommits.org/en/v1.0.0/). Alternatively, you can manually set the version number in the title of this pull request. For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless [website](https://www.stainlessapi.com)
📚 Read the [docs](https://app.stainlessapi.com/docs)
🙋 [Reach out](mailto:support@stainlessapi.com) for help or questions

---------

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
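As the note above says, the semver bump is derived from the included Conventional Commit messages. A rough sketch of that mapping (illustrative only; release-please's real logic also parses `BREAKING CHANGE:` footers and, for pre-1.0 versions, maps breaking changes to a minor bump, which is why this release goes 0.6.1-alpha.1 → 0.7.0-alpha.1):

```python
# Illustrative sketch of mapping Conventional Commit subjects to a semver
# bump level. Not the actual release-please implementation.

def bump_level(messages):
    """Return 'major', 'minor', or 'patch' for a list of commit subjects."""
    level = "patch"
    for msg in messages:
        head = msg.split(":", 1)[0]  # e.g. 'feat(responses)' or 'refactor!'
        if "!" in head or "BREAKING CHANGE" in msg:
            return "major"
        if head.startswith("feat"):
            level = "minor"
    return level

print(bump_level([
    "fix(pydantic): do not pass `by_alias` unless set",
    "feat(responses): add cancel endpoint for background responses",
    "refactor!: rename agents API to responses API",
]))  # major: the refactor! commit marks a breaking change
```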
1 parent 3eb32ab commit 6fc77eb

File tree: 105 files changed (+1152, −5040 lines)


.github/workflows/ci.yml

Lines changed: 10 additions & 8 deletions

```diff
@@ -1,12 +1,14 @@
 name: CI
 on:
   push:
-    branches-ignore:
-      - 'generated'
-      - 'codegen/**'
-      - 'integrated/**'
-      - 'stl-preview-head/**'
-      - 'stl-preview-base/**'
+    branches:
+      - '**'
+      - '!integrated/**'
+      - '!stl-preview-head/**'
+      - '!stl-preview-base/**'
+      - '!generated'
+      - '!codegen/**'
+      - 'codegen/stl/**'
   pull_request:
     branches-ignore:
       - 'stl-preview-head/**'
@@ -18,7 +20,7 @@ jobs:
     timeout-minutes: 10
     name: lint
     runs-on: ${{ github.repository == 'stainless-sdks/llama-stack-client-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
-    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
+    if: (github.event_name == 'push' || github.event.pull_request.head.repo.fork) && (github.event_name != 'push' || github.event.head_commit.message != 'codegen metadata')
     steps:
       - uses: actions/checkout@v6
 
@@ -34,7 +36,7 @@ jobs:
       run: ./scripts/lint
 
   build:
-    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
+    if: (github.event_name == 'push' || github.event.pull_request.head.repo.fork) && (github.event_name != 'push' || github.event.head_commit.message != 'codegen metadata')
     timeout-minutes: 10
    name: build
    permissions:
```
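The new `branches` filter relies on GitHub Actions' rule that when a branch matches several patterns, the last matching pattern decides, so `codegen/stl/**` re-includes branches that `!codegen/**` excluded. A rough model of that ordering (assumes `fnmatch`-style globbing, which only approximates GitHub's matcher — in Actions `*` does not cross `/`):

```python
from fnmatch import fnmatch

# Rough model of GitHub Actions include/exclude branch filters: iterate
# the patterns in order and let the last match win. Illustrative only.
PATTERNS = [
    "**",
    "!integrated/**",
    "!stl-preview-head/**",
    "!stl-preview-base/**",
    "!generated",
    "!codegen/**",
    "codegen/stl/**",
]

def runs_for(branch):
    decision = False
    for pat in PATTERNS:
        negate = pat.startswith("!")
        if fnmatch(branch, pat.lstrip("!")):
            decision = not negate  # last matching pattern wins
    return decision

print(runs_for("main"))           # True: only '**' matches
print(runs_for("codegen/foo"))    # False: excluded by '!codegen/**'
print(runs_for("codegen/stl/x"))  # True: re-included by the final pattern
```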

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.6.1-alpha.1"
+  ".": "0.7.0-alpha.1"
 }
```

.stats.yml

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,4 +1,4 @@
-configured_endpoints: 108
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-1b387ba7b0e0d1aa931032ac2101e5a473b9fa42975e6575cf889feace342b80.yml
-openapi_spec_hash: a144868005520bd3f8f9dc3d8cac1c22
-config_hash: ef1f9b33e203c71cfc10d91890c1ed2d
+configured_endpoints: 94
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-7b856674124b79094ac28a6ac451d7a67b5ddd74aebecd5e468a1f8ccfd13bd1.yml
+openapi_spec_hash: a5ca7c4dac274c534338a9b3f5d388c0
+config_hash: 7d5765272a641656f8231509937663a7
```

CHANGELOG.md

Lines changed: 48 additions & 0 deletions

```diff
@@ -1,5 +1,53 @@
 # Changelog
 
+## 0.7.0-alpha.1 (2026-03-28)
+
+Full Changelog: [v0.6.1-alpha.1...v0.7.0-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.6.1-alpha.1...v0.7.0-alpha.1)
+
+### ⚠ BREAKING CHANGES
+
+* eliminate GET /chat/completions/{completion_id} conformance issues
+* rename agents API to responses API
+* eliminate /files/{file_id} GET differences
+
+### Features
+
+* Add stream_options parameter support ([b4c2f15](https://github.com/llamastack/llama-stack-client-python/commit/b4c2f15b16872730a9c254b1b2dfc02aba223a71))
+* eliminate /files/{file_id} GET differences ([1f28d73](https://github.com/llamastack/llama-stack-client-python/commit/1f28d730824b6cb721415985194c5f4567e42ea7))
+* eliminate GET /chat/completions/{completion_id} conformance issues ([dad9f54](https://github.com/llamastack/llama-stack-client-python/commit/dad9f546400133d34a0cd650a227800be78b0d1f))
+* **internal:** implement indices array format for query and form serialization ([6694121](https://github.com/llamastack/llama-stack-client-python/commit/6694121eee689fb7033704bad2b698a4640e2431))
+* **responses:** add cancel endpoint for background responses ([d9bc91a](https://github.com/llamastack/llama-stack-client-python/commit/d9bc91afecb64ec27b97d37699d5ff6c1222d369))
+
+
+### Bug Fixes
+
+* **deps:** bump minimum typing-extensions version ([50ea4d7](https://github.com/llamastack/llama-stack-client-python/commit/50ea4d7fd98a86726f6825d911507b7fc96e2e60))
+* **inference:** improve chat completions OpenAI conformance ([147b88b](https://github.com/llamastack/llama-stack-client-python/commit/147b88b44eb83bceb7cd6204cd79d8dafe8f8e7a))
+* **pydantic:** do not pass `by_alias` unless set ([f6836f9](https://github.com/llamastack/llama-stack-client-python/commit/f6836f9dacef1b9b26774fcfaf82689ae00f374a))
+* remove duplicate dataset_id parameter in append-rows endpoint ([d6a79d0](https://github.com/llamastack/llama-stack-client-python/commit/d6a79d0a830bad4e82b70d7ab9e007ebc16e0f05))
+* sanitize endpoint path params ([9b288d5](https://github.com/llamastack/llama-stack-client-python/commit/9b288d553ae83860fbe1d8ee9352532ed04ddd9b))
+
+
+### Chores
+
+* **ci:** skip lint on metadata-only changes ([b096c2c](https://github.com/llamastack/llama-stack-client-python/commit/b096c2ce513a5d2de9a17e7841609feb30d1b0b2))
+* **internal:** tweak CI branches ([1df7e26](https://github.com/llamastack/llama-stack-client-python/commit/1df7e2605e78572eccc53aa8db1e44d987106a9b))
+* **internal:** update gitignore ([0e98cfd](https://github.com/llamastack/llama-stack-client-python/commit/0e98cfdcf7779ca24ef4dbd7e9e8d9c75fa2a751))
+* **internal:** version bump ([f468096](https://github.com/llamastack/llama-stack-client-python/commit/f46809696ddf1f179cc26984facfcbb7f9264730))
+* **tests:** bump steady to v0.19.4 ([f5ad8f8](https://github.com/llamastack/llama-stack-client-python/commit/f5ad8f801078d79c03ec7723cd64b1c9895def2d))
+* **tests:** bump steady to v0.19.5 ([55689e1](https://github.com/llamastack/llama-stack-client-python/commit/55689e1ddee55d81efff681dbb3523b0ed09d658))
+* **tests:** bump steady to v0.19.6 ([87cb87e](https://github.com/llamastack/llama-stack-client-python/commit/87cb87e8ecd52d95b5a375f8b4c00f5837e4feeb))
+* **tests:** bump steady to v0.19.7 ([10f6ed7](https://github.com/llamastack/llama-stack-client-python/commit/10f6ed745b38d89be2d6a5eb007427b015e84e23))
+
+
+### Refactors
+
+* remove fine_tuning API ([021bd5e](https://github.com/llamastack/llama-stack-client-python/commit/021bd5e6138574884befe6f20ba86ceeefee1767))
+* remove tool_groups from public API and auto-register from provider specs ([c0df2dc](https://github.com/llamastack/llama-stack-client-python/commit/c0df2dcf9bb38600f73db746dc38d3277e74e7b9))
+* rename agents API to responses API ([f5c27db](https://github.com/llamastack/llama-stack-client-python/commit/f5c27db9d2716098a116d516cc5ad673ee621988))
+* rename rag-runtime provider to file-search ([94a14da](https://github.com/llamastack/llama-stack-client-python/commit/94a14dad88ed55d3f2baf1de8eb30ba529fb9818))
+* **tests:** switch from prism to steady ([23d591c](https://github.com/llamastack/llama-stack-client-python/commit/23d591c70549c7f00b7be136a19893dbdd65f43c))
+
 ## 0.6.1-alpha.1 (2026-03-13)
 
 Full Changelog: [v0.5.0-alpha.2...v0.6.1-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.5.0-alpha.2...v0.6.1-alpha.1)
```
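Among the features above, the "indices array format" for query and form serialization means encoding list parameters as `key[0]=a&key[1]=b` rather than as repeated keys. A minimal sketch of the idea (illustrative; the SDK's internal serializer also handles nested objects and form bodies):

```python
from urllib.parse import urlencode

# Minimal sketch of indices-style array serialization for query strings:
# key[0]=a&key[1]=b instead of the repeated-key form key=a&key=b.
def to_indices_params(params):
    pairs = []
    for key, value in params.items():
        if isinstance(value, (list, tuple)):
            for i, item in enumerate(value):
                pairs.append((f"{key}[{i}]", item))
        else:
            pairs.append((key, value))
    return pairs

query = urlencode(to_indices_params({"tag": ["a", "b"], "limit": 10}))
print(query)  # tag%5B0%5D=a&tag%5B1%5D=b&limit=10
```

The brackets are percent-encoded (`%5B`/`%5D`) by `urlencode`; servers that accept the indices convention decode them back.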

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -85,7 +85,7 @@ $ pip install ./path-to-wheel-file.whl
 
 ## Running tests
 
-Most tests require you to [set up a mock server](https://github.com/stoplightio/prism) against the OpenAPI spec to run the tests.
+Most tests require you to [set up a mock server](https://github.com/dgellow/steady) against the OpenAPI spec to run the tests.
 
 ```sh
 $ ./scripts/mock
````

README.md

Lines changed: 5 additions & 4 deletions

````diff
@@ -253,11 +253,12 @@ from llama_stack_client import LlamaStackClient
 
 client = LlamaStackClient()
 
-client.toolgroups.register(
-    provider_id="provider_id",
-    toolgroup_id="toolgroup_id",
-    mcp_endpoint={"uri": "uri"},
+response_object = client.responses.create(
+    input="string",
+    model="model",
+    prompt={"id": "id"},
 )
+print(response_object.prompt)
 ```
 
 ## File uploads
````
api.md

Lines changed: 8 additions & 80 deletions

````diff
@@ -18,47 +18,6 @@ from llama_stack_client.types import (
 )
 ```
 
-# Toolgroups
-
-Types:
-
-```python
-from llama_stack_client.types import ListToolGroupsResponse, ToolGroup, ToolgroupListResponse
-```
-
-Methods:
-
-- <code title="get /v1/toolgroups">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">list</a>() -> <a href="./src/llama_stack_client/types/toolgroup_list_response.py">ToolgroupListResponse</a></code>
-- <code title="get /v1/toolgroups/{toolgroup_id}">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">get</a>(toolgroup_id) -> <a href="./src/llama_stack_client/types/tool_group.py">ToolGroup</a></code>
-- <code title="post /v1/toolgroups">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">register</a>(\*\*<a href="src/llama_stack_client/types/toolgroup_register_params.py">params</a>) -> None</code>
-- <code title="delete /v1/toolgroups/{toolgroup_id}">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">unregister</a>(toolgroup_id) -> None</code>
-
-# Tools
-
-Types:
-
-```python
-from llama_stack_client.types import ToolListResponse
-```
-
-Methods:
-
-- <code title="get /v1/tools">client.tools.<a href="./src/llama_stack_client/resources/tools.py">list</a>(\*\*<a href="src/llama_stack_client/types/tool_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/tool_list_response.py">ToolListResponse</a></code>
-- <code title="get /v1/tools/{tool_name}">client.tools.<a href="./src/llama_stack_client/resources/tools.py">get</a>(tool_name) -> <a href="./src/llama_stack_client/types/tool_def.py">ToolDef</a></code>
-
-# ToolRuntime
-
-Types:
-
-```python
-from llama_stack_client.types import ToolDef, ToolInvocationResult, ToolRuntimeListToolsResponse
-```
-
-Methods:
-
-- <code title="post /v1/tool-runtime/invoke">client.tool_runtime.<a href="./src/llama_stack_client/resources/tool_runtime.py">invoke_tool</a>(\*\*<a href="src/llama_stack_client/types/tool_runtime_invoke_tool_params.py">params</a>) -> <a href="./src/llama_stack_client/types/tool_invocation_result.py">ToolInvocationResult</a></code>
-- <code title="get /v1/tool-runtime/list-tools">client.tool_runtime.<a href="./src/llama_stack_client/resources/tool_runtime.py">list_tools</a>(\*\*<a href="src/llama_stack_client/types/tool_runtime_list_tools_params.py">params</a>) -> <a href="./src/llama_stack_client/types/tool_runtime_list_tools_response.py">ToolRuntimeListToolsResponse</a></code>
-
 # Responses
 
 Types:
@@ -409,7 +368,12 @@ Methods:
 Types:
 
 ```python
-from llama_stack_client.types import DeleteFileResponse, File, ListFilesResponse
+from llama_stack_client.types import (
+    DeleteFileResponse,
+    File,
+    ListFilesResponse,
+    FileContentResponse,
+)
 ```
 
 Methods:
@@ -418,7 +382,7 @@ Methods:
 - <code title="get /v1/files/{file_id}">client.files.<a href="./src/llama_stack_client/resources/files.py">retrieve</a>(file_id) -> <a href="./src/llama_stack_client/types/file.py">File</a></code>
 - <code title="get /v1/files">client.files.<a href="./src/llama_stack_client/resources/files.py">list</a>(\*\*<a href="src/llama_stack_client/types/file_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/file.py">SyncOpenAICursorPage[File]</a></code>
 - <code title="delete /v1/files/{file_id}">client.files.<a href="./src/llama_stack_client/resources/files.py">delete</a>(file_id) -> <a href="./src/llama_stack_client/types/delete_file_response.py">DeleteFileResponse</a></code>
-- <code title="get /v1/files/{file_id}/content">client.files.<a href="./src/llama_stack_client/resources/files.py">content</a>(file_id) -> object</code>
+- <code title="get /v1/files/{file_id}/content">client.files.<a href="./src/llama_stack_client/resources/files.py">content</a>(file_id) -> str</code>
 
 # Batches
 
@@ -442,42 +406,6 @@ Methods:
 
 # Alpha
 
-## PostTraining
-
-Types:
-
-```python
-from llama_stack_client.types.alpha import (
-    AlgorithmConfig,
-    ListPostTrainingJobsResponse,
-    PostTrainingJob,
-)
-```
-
-Methods:
-
-- <code title="post /v1alpha/post-training/preference-optimize">client.alpha.post_training.<a href="./src/llama_stack_client/resources/alpha/post_training/post_training.py">preference_optimize</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training_preference_optimize_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training_job.py">PostTrainingJob</a></code>
-- <code title="post /v1alpha/post-training/supervised-fine-tune">client.alpha.post_training.<a href="./src/llama_stack_client/resources/alpha/post_training/post_training.py">supervised_fine_tune</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training_supervised_fine_tune_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training_job.py">PostTrainingJob</a></code>
-
-### Job
-
-Types:
-
-```python
-from llama_stack_client.types.alpha.post_training import (
-    JobListResponse,
-    JobArtifactsResponse,
-    JobStatusResponse,
-)
-```
-
-Methods:
-
-- <code title="get /v1alpha/post-training/jobs">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">list</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_list_response.py">JobListResponse</a></code>
-- <code title="get /v1alpha/post-training/jobs/{job_uuid}/artifacts">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">artifacts</a>(job_uuid) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_artifacts_response.py">JobArtifactsResponse</a></code>
-- <code title="post /v1alpha/post-training/jobs/{job_uuid}/cancel">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">cancel</a>(job_uuid) -> None</code>
-- <code title="get /v1alpha/post-training/jobs/{job_uuid}/status">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">status</a>(job_uuid) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_status_response.py">JobStatusResponse</a></code>
-
 ## Benchmarks
 
 Types:
@@ -558,7 +486,7 @@ Methods:
 
 - <code title="get /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">retrieve</a>(dataset_id) -> <a href="./src/llama_stack_client/types/beta/dataset_retrieve_response.py">DatasetRetrieveResponse</a></code>
 - <code title="get /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">list</a>() -> <a href="./src/llama_stack_client/types/beta/dataset_list_response.py">DatasetListResponse</a></code>
-- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(path_dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
+- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
 - <code title="get /v1beta/datasetio/iterrows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">iterrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_iterrows_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_iterrows_response.py">DatasetIterrowsResponse</a></code>
 - <code title="post /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">register</a>(\*\*<a href="src/llama_stack_client/types/beta/dataset_register_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_register_response.py">DatasetRegisterResponse</a></code>
 - <code title="delete /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">unregister</a>(dataset_id) -> None</code>
````
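Many of these endpoints interpolate a path parameter such as `file_id` or `dataset_id` into the URL, which is what the `sanitize endpoint path params` fix hardens. A sketch of the idea using a hypothetical `build_path` helper (the SDK's actual implementation may differ):

```python
from urllib.parse import quote

# Sketch of path-parameter sanitization: percent-encode the value,
# including '/', so a malicious id cannot rewrite the request path.
# build_path is a hypothetical helper, not part of the SDK.
def build_path(template, **params):
    return template.format(**{k: quote(str(v), safe="") for k, v in params.items()})

print(build_path("/v1/files/{file_id}", file_id="file_123"))
# /v1/files/file_123
print(build_path("/v1/files/{file_id}", file_id="../admin"))
# /v1/files/..%2Fadmin  (the traversal attempt stays a single path segment)
```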

pyproject.toml

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
 [project]
 name = "llama_stack_client"
-version = "0.6.1-alpha.1"
+version = "0.7.0-alpha.1"
 description = "The official Python library for the llama-stack-client API"
 dynamic = ["readme"]
 license = "MIT"
@@ -9,7 +9,7 @@ authors = [{ name = "Meta Llama", email = "llama-oss@meta.com" }]
 dependencies = [
     "httpx>=0.23.0, <1",
     "pydantic>=1.9.0, <3",
-    "typing-extensions>=4.7, <5",
+    "typing-extensions>=4.14, <5",
     "anyio>=3.5.0, <5",
     "distro>=1.7.0, <2",
     "sniffio",
```

requirements-dev.lock

Lines changed: 15 additions & 10 deletions

```diff
@@ -7,15 +7,15 @@ anyio==4.12.1
     # via
     #   httpx
     #   llama-stack-client
-black==26.1.0
+black==26.3.1
 certifi==2026.1.4
     # via
     #   httpcore
     #   httpx
     #   requests
 cfgv==3.5.0
     # via pre-commit
-charset-normalizer==3.4.4
+charset-normalizer==3.4.6
     # via requests
 click==8.3.1
     # via
@@ -33,8 +36,10 @@ distro==1.9.0
     # via llama-stack-client
 execnet==2.1.2
     # via pytest-xdist
-filelock==3.20.3
-    # via virtualenv
+filelock==3.25.2
+    # via
+    #   python-discovery
+    #   virtualenv
 fire==0.7.1
     # via llama-stack-client
 h11==0.16.0
@@ -45,7 +47,7 @@ httpx==0.28.1
     # via
     #   llama-stack-client
     #   respx
-identify==2.6.16
+identify==2.6.18
     # via pre-commit
 idna==3.11
     # via
@@ -68,21 +70,22 @@ nodeenv==1.10.0
     # via
     #   pre-commit
     #   pyright
-numpy==2.4.2
+numpy==2.4.3
     # via pandas
 packaging==25.0
     # via
     #   black
     #   pytest
-pandas==3.0.0
+pandas==3.0.1
     # via llama-stack-client
 pathspec==1.0.3
     # via
     #   black
     #   mypy
-platformdirs==4.5.1
+platformdirs==4.9.4
     # via
     #   black
+    #   python-discovery
     #   virtualenv
 pluggy==1.6.0
     # via pytest
@@ -108,13 +111,15 @@ pytest-asyncio==1.3.0
 pytest-xdist==3.8.0
 python-dateutil==2.9.0.post0
     # via pandas
+python-discovery==1.2.1
+    # via virtualenv
 pytokens==0.4.1
     # via black
 pyyaml==6.0.3
     # via
     #   pre-commit
     #   pyaml
-requests==2.32.5
+requests==2.33.0
     # via llama-stack-client
 respx==0.22.0
 rich==14.2.0
@@ -147,7 +152,7 @@ tzdata==2025.3 ; sys_platform == 'emscripten' or sys_platform == 'win32'
     # via pandas
 urllib3==2.6.3
     # via requests
-virtualenv==20.36.1
+virtualenv==21.2.0
     # via pre-commit
 wcwidth==0.6.0
     # via prompt-toolkit
```

scripts/mock

Lines changed: 2 additions & 2 deletions

```diff
@@ -26,11 +26,11 @@ echo "==> Modifying SSE schemas for the mock server"
 yq -i '(.. | select(has("text/event-stream")).["text/event-stream"].schema) = {"type": "string"}' "$SPEC_PATH"
 echo "==> Starting mock server with file ${SPEC_PATH}"
 
-# Run prism mock on the given spec
+# Run steady mock on the given spec
 if [ "$1" == "--daemon" ]; then
   npm exec --package=@mockoon/cli@9.3.0 -- mockoon-cli start --data "$SPEC_PATH" --port 4010 &>.mockoon.log &
 
-  # Wait for server to come online (max 30s)
+  # Wait for server to come online via health endpoint (max 30s)
   echo -n "Waiting for server"
   while ! grep -q "Error: \|Server started on port 4010" ".mockoon.log"; do
     echo -n "."
```
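The shell loop above polls `.mockoon.log` until a readiness (or error) line appears. The same idea expressed in Python, for illustration (a hypothetical helper, not part of the repo):

```python
import time

# Poll a log file until one of the given marker lines appears, up to a
# timeout; returns the marker that matched. Mirrors the grep loop in
# scripts/mock, which watches for "Error: " or "Server started on port 4010".
def wait_for_line(path, needles, timeout=30.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(path) as f:
                text = f.read()
        except FileNotFoundError:
            text = ""  # log not created yet; keep polling
        for needle in needles:
            if needle in text:
                return needle
        time.sleep(interval)
    raise TimeoutError(f"none of {needles!r} appeared in {path}")
```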
