diff --git a/.github/workflows/test.yaml b/.github/workflows/test.yaml
index 95f6e5a27..1d2b1983c 100644
--- a/.github/workflows/test.yaml
+++ b/.github/workflows/test.yaml
@@ -22,7 +22,7 @@ jobs:
       - name: Install huggingface-hub
         run: pip install huggingface-hub
       - name: Download model
-        run: huggingface-cli download ${{ env.REPO_ID }} ${{ env.MODEL_FILE }}
+        run: hf download ${{ env.REPO_ID }} ${{ env.MODEL_FILE }}
       - name: Cache model
         uses: actions/cache@v4
         with:
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 16954eb88..1f577c1a4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+- fix(ci): Use the `hf` CLI instead of the deprecated `huggingface-cli` name in GitHub Actions and docs by @abetlen in #2149
+
 ## [0.3.16]
 
 - feat: Update llama.cpp to ggerganov/llama.cpp@4227c9be4268ac844921b90f31595f81236bd317
diff --git a/README.md b/README.md
index 382f7cbed..d2ba297ca 100644
--- a/README.md
+++ b/README.md
@@ -328,7 +328,7 @@ llm = Llama.from_pretrained(
 )
 ```
 
-By default [`from_pretrained`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.from_pretrained) will download the model to the huggingface cache directory, you can then manage installed model files with the [`huggingface-cli`](https://huggingface.co/docs/huggingface_hub/en/guides/cli) tool.
+By default [`from_pretrained`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.from_pretrained) will download the model to the huggingface cache directory, you can then manage installed model files with the [`hf`](https://huggingface.co/docs/huggingface_hub/en/guides/cli) tool.
 
 ### Chat Completion