Releases: huggingface/huggingface_hub
[v0.32.3]: Handle env variables in `tiny-agents`, better CLI exit and handling of MCP tool calls arguments
Full Changelog: v0.32.2...v0.32.3
This release introduces some improvements and bug fixes to `tiny-agents`.
[v0.32.2]: Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos
Full Changelog: v0.32.1...v0.32.2
[v0.32.1]: hot-fix: Fix tiny agents on Windows
Patch release to fix #3116
Full Changelog: v0.32.0...v0.32.1
[v0.32.0]: MCP Client, Tiny Agents CLI and more!
🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI
✨ The `huggingface_hub` library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the `InferenceClient` and provides a seamless way to connect LLMs to both local and remote tool servers!
pip install -U huggingface_hub[mcp]
In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:
```python
import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient


async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")

        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]

        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")
            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
For even simpler development, we now also offer a higher-level `Agent` class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around `MCPClient`. It's designed to be a simple while loop built right on top of an `MCPClient`.
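In Python, a Tiny Agent can be instantiated directly. Below is a minimal sketch, assuming the `Agent` class is exported at the top level (with the `mcp` extra installed), exposes an async `run()` generator, and accepts server specs mirroring `add_mcp_server()`; check the MCP docs for the exact interface:

```python
import asyncio
import os

from huggingface_hub import Agent  # assumption: exported when the `mcp` extra is installed


async def main():
    agent = Agent(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
        # assumption: each entry mirrors the kwargs of MCPClient.add_mcp_server()
        servers=[{"type": "sse", "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}],
    )
    await agent.load_tools()
    async for chunk in agent.run("Generate a picture of a cat on the moon"):
        print(chunk)


if __name__ == "__main__":
    asyncio.run(main())
```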
You can run these Agents directly from the command line:
> tiny-agents run --help
Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...
Run the Agent in the CLI
╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ path [PATH] Path to a local folder containing an agent.json file or a built-in agent stored in the 'tiny-agents/tiny-agents' Hugging Face dataset │
│ (https://huggingface.co/datasets/tiny-agents/tiny-agents) │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help Show this message and exit. │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.
This is an early version of the `MCPClient`, and community contributions are welcome 🤗
- [MCP] Add documentation by @hanouticelina in #3102
- [MCP] add support for SSE + HTTP by @Wauplin in #3099
- [MCP] Tiny Agents in Python by @hanouticelina in #3098
- PoC: `InferenceClient` is also a `MCPClient` by @julien-c in #2986
⚡ Inference Providers
Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!
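As an illustration, here is a sketch of computing embeddings through Nebius (the model name below is illustrative; use any embedding model deployed by the provider):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius")

# Returns a numpy array of embeddings for the input text
embeddings = client.feature_extraction(
    "Today is a sunny day and I will get some ice cream.",
    model="BAAI/bge-multilingual-gemma2",
)
print(embeddings.shape)
```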
We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥
We also fixed compatibility issues with structured outputs across providers by ensuring the `InferenceClient` follows the OpenAI API specification for structured outputs (see the sketch below).
- [Inference Providers] Fix structured output schema in chat completion by @hanouticelina in #3082
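For reference, a sketch of an OpenAI-style structured output request (the schema and model are illustrative, and the exact `response_format` payload accepted by each provider should be checked against its documentation):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius")

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Describe today's weather in Paris as JSON."}],
    # OpenAI-style structured output: ask the provider to return JSON matching this schema
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "weather",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "temperature_c": {"type": "number"},
                },
                "required": ["city", "temperature_c"],
            },
        },
    },
)
print(completion.choices[0].message.content)
```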
💾 Serialization
We've introduced a new `@strict` decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:
```python
from dataclasses import dataclass

from huggingface_hub.dataclasses import strict, as_validated_field


# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")


@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")


config = Config(model_type="bert", hidden_size=24)  # Valid
config = Config(model_type="bert", hidden_size=-1)  # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)  # Raises StrictDataclassClassValidationError
```
This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.
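As one example of the extra capabilities, the decorator can be told to tolerate unknown keyword arguments instead of raising (a small sketch, assuming the `accept_kwargs` flag documented for strict dataclasses):

```python
from dataclasses import dataclass

from huggingface_hub.dataclasses import strict


@strict(accept_kwargs=True)
@dataclass
class FlexibleConfig:
    model_type: str
    hidden_size: int = 16


# Unknown kwargs are accepted instead of raising (assumption: they are kept on the instance)
config = FlexibleConfig(model_type="bert", hidden_size=24, new_experimental_field=True)
```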
This release also brings support for `DTensor` in the `_get_unique_id` / `get_torch_storage_size` helpers, allowing `transformers` to seamlessly use `save_pretrained` with `DTensor`.
✨ HF API
When creating an Endpoint, the default for `scale_to_zero_timeout` is now `None`, meaning endpoints will no longer scale to zero by default unless explicitly configured (see the example below).
- Dont set scale to zero as default when creating an Endpoint by @tomaarsen in #3062
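To opt back in, pass the timeout explicitly when creating an Endpoint (hardware values below are illustrative; `scale_to_zero_timeout` is the point of the example):

```python
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    scale_to_zero_timeout=15,  # minutes of inactivity before scaling to zero; leave unset to keep the endpoint running
)
```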
We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.
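A rough sketch of how the helpers are meant to be wired into a FastAPI app (these helpers are experimental, so names and return types may still change):

```python
from fastapi import FastAPI, Request

from huggingface_hub import attach_huggingface_oauth, parse_huggingface_oauth

app = FastAPI()
attach_huggingface_oauth(app)  # registers the OAuth login/callback routes on the app


@app.get("/")
def greet(request: Request):
    oauth_info = parse_huggingface_oauth(request)  # None if the user is not logged in
    if oauth_info is None:
        return {"msg": "Not logged in!"}
    return {"msg": f"Hello, {oauth_info.user_info.preferred_username}!"}
```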
📚 Documentation
We now have much more detailed documentation for Inference! This includes more detailed explanations and examples to clarify that the `InferenceClient` can also be effectively used with local endpoints (llama.cpp, vLLM, MLX, etc.); see the example below.
- [Inference] Mention local endpoints inference + remove separate HF Inference API mentions by @hanouticelina in #3085
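For example, the same client can point at a locally running OpenAI-compatible server such as llama.cpp or vLLM (the URL and model name are placeholders for your own setup):

```python
from huggingface_hub import InferenceClient

# Point the client at a local OpenAI-compatible server (llama.cpp, vLLM, TGI, ...)
client = InferenceClient(base_url="http://localhost:8080/v1")

completion = client.chat.completions.create(
    model="my-local-model",  # the name your local server exposes; some servers ignore it
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```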
🛠️ Small fixes and maintenance
😌 QoL improvements
- bump hf-xet min version by @hanouticelina in #3078
- Add `api.endpoint` to arguments for `_get_upload_mode` by @matthewgrossman in #3077
- surface 401 unauthorized errors more directly in snapshot_download by @hanouticelina in #3092
🐛 Bug and typo fixes
- [HfFileSystem] Fix end-of-file `read()` by @lhoestq in #3080
- [Inference Endpoints] fix inference endpoint creation with custom image by @hanouticelina in #3076
- Expand file lock scope to resolve concurrency issues during downloads by @humengyu2012 in #3063
- Documentation Issue by @thanosKivertzikidis in #3091
- Do not fetch /preupload if already done in upload-large-folder by @Wauplin in #3100
🏗️ internal
[v0.31.4]: strict dataclasses, support `DTensor` saving & some bug fixes
This release includes some new features and bug fixes:
- New `strict` decorators for runtime dataclass validation with custom and type-based checks by @Wauplin in #2895.
- Added `DTensor` support to `_get_unique_id` / `get_torch_storage_size` helpers, enabling `transformers` to use `save_pretrained` with `DTensor` by @S1ro1 in #3042.
- Some bug fixes: #3080 & #3076.
Full Changelog: v0.31.2...v0.31.4
[v0.31.2] Hot-fix: make `hf-xet` optional again and bump the min version of the package
Patch release to make `hf-xet` optional. More context in #3079 and #3078.
Full Changelog: v0.31.1...v0.31.2
[v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more
🧑🎨 Introducing LoRAs with fal.ai and Replicate providers
We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")  # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)
```
- [Inference Providers] LoRAs with Replicate by @hanouticelina in #3054
- [Inference Providers] Support for LoRAs with fal by @hanouticelina in #3005
⚙️ `auto` mode for provider selection
You can now automatically select a provider for a model using `auto` mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.
```python
from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
Note: `"auto"` is now the default value for the `provider` argument. Previously, the default was `hf-inference`, so this change may be a breaking one if you're not specifying the provider name when initializing `InferenceClient` or `AsyncInferenceClient`.
🧠 Embeddings support with Sambanova (feature-extraction)
We added support for feature extraction (embeddings) inference with the Sambanova provider.
- [Inference Providers] sambanova supports feature extraction by @hanouticelina in #3037
⚡ Other Inference features
The HF Inference API provider is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity.
Cold-starting arbitrary models from the Hub is no longer supported — if a model isn't already deployed, it won’t be available via HF Inference API.
Miscellaneous improvements and some bug fixes:
- Fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction' by @Wauplin in #2968
- fix text generation by @hanouticelina in #2982
- Fix HfInference conversational by @Wauplin in #2985
- Fix 'sentence_similarity' on InferenceClient by @tomaarsen in #3004
- Update inference types (automated commit) by @HuggingFaceInfra in #3015
- update text to speech input by @hanouticelina in #3025
- [Inference Providers] fix inference with URL endpoints by @hanouticelina in #3041
- Update inference types (automated commit) by @HuggingFaceInfra in #3051
✅ Of course, all of those inference changes are available in the `AsyncInferenceClient` async equivalent 🤗
🚀 Xet
Thanks to @bpronan's PR, Xet now supports uploading byte arrays:
```python
from huggingface_hub import upload_file

file_content = b"my-file-content"
repo_id = "username/model-name"  # `hf-xet` should be installed and Xet should be enabled for this repo

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="file.txt",  # destination path in the repo (required)
    repo_id=repo_id,
)
```
Additionally, we’ve added documentation for environment variables used by `hf-xet` to optimize file download/upload performance — including options for caching (`HF_XET_CHUNK_CACHE_SIZE_BYTES`), concurrency (`HF_XET_NUM_CONCURRENT_RANGE_GETS`), high-performance mode (`HF_XET_HIGH_PERFORMANCE`), and sequential writes (`HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY`).
- Docs for xet env variables by @rajatarya in #3024
- Minor xet changes: HF_HUB_DISABLE_XET flag, suppress logger.info by @rajatarya in #3039
Miscellaneous improvements:
✨ HF API
We added HTTP download support for files larger than 50GB — enabling more reliable handling of large file downloads.
- Add HTTP Download support for files > 50GB by @rajatarya in #2991
We also added dynamic batching to `upload_large_folder`, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration — improving performance and reducing the risk of hitting the commits rate limit on large repositories (see the sketch below).
- Fix dynamic commit size by @maximizemaxwell in #3016
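Nothing changes on the caller side; the batching is handled internally by the same API (a minimal sketch):

```python
from huggingface_hub import HfApi

api = HfApi()

# Files are hashed, pre-uploaded and committed in adaptively sized batches
api.upload_large_folder(
    repo_id="username/my-large-model",
    repo_type="model",
    folder_path="path/to/local/folder",
)
```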
We added support for new arguments when creating or updating Hugging Face Inference Endpoints.
- add route payload to deploy Inference Endpoints by @Vaibhavs10 in #3013
- Add the 'env' parameter to creating/updating Inference Endpoints by @tomaarsen in #3045
💔 Breaking changes
- The default value of the `provider` argument in `InferenceClient` and `AsyncInferenceClient` is now "auto" instead of "hf-inference" (HF Inference API). This means provider selection will now follow your preferred order set in your inference provider settings. If your code relied on the previous default ("hf-inference"), you may need to update it explicitly to avoid unexpected behavior.
- HF Inference API Routing Update: The inference URL path for `feature-extraction` and `sentence-similarity` tasks has changed from `https://router.huggingface.co/hf-inference/pipeline/{task}/{model}` to `https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}`.
- [inference] Necessary breaking change: nest task-specific route inside of model route by @julien-c in #3044
🛠️ Small fixes and maintenance
😌 QoL improvements
- Unlist TPUs from SpaceHardware by @Wauplin in #2973
- dev(narugo): disable hf_transfer when custom 'Range' header is assigned by @narugo1992 in #2979
- Improve error handling for invalid eval results in model cards by @hanouticelina in #3000
- Handle Rate Limits in Pagination with Automatic Retries by @Weyaxi in #2970
- Add example for downloading files in subdirectories, related to #3014 by @mixer3d in #3023
- Super-micro-tiny-PR to allow for direct copy-paste :) by @fracapuano in #3030
- Migrate to logger.warning usage by @emmanuel-ferdman in #3056
🐛 Bug and typo fixes
- Retry on transient error in download workflow by @Wauplin in #2976
- fix snapshot download behavior in offline mode when downloading to a local dir by @hanouticelina in #3009
- fix docstring by @hanouticelina in #3040
- fix default CACHE_DIR by @albertcthomas in #3050
🏗️ internal
- fix: fix test_get_hf_file_metadata_from_a_lfs_file as since xet migration by @XciD in #2972
- A better security-wise style bot GH Action by @hanouticelina in #2914
- prepare for next release by @hanouticelina in #2983
- Bump `hf_xet` min version to 1.0.0 + make it required dep on 64 bits by @hanouticelina in #2971
- fix permissions for style bot by @hanouticelina in #3012
- remove (inference only) VCR tests by @hanouticelina in #3021
- remove test by @hanouticelina in #3028
Community contributions
The following contributors have made significant changes to the library over the last release:
- @bpronan
- @tomaarsen
- @Weyaxi
- Handle Rate Limits in Pagination with Automatic Retries (#2970)
- @rajatarya
- @Vaibhavs10
- add route payload to deploy Inference Endpoints (#3013)
- @maximizemaxwell
- Fix dynamic commit size (#3016)
- @emmanuel-ferdman
- Migrate to logger.warning usage (#3056)
v0.30.2: Fix text-generation task in InferenceClient
Fixing some `InferenceClient`-related bugs:
- [Inference Providers] Fix text-generation when using an external provider #2982 by @hanouticelina
- Fix HfInference conversational #2985 by @Wauplin
Full Changelog: v0.30.1...v0.30.2
v0.30.1: fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction'
Patch release to fix #2967.
Full Changelog: v0.30.0...v0.30.1
Xet is here! (+ many cool Inference-related things!)
🚀 Ready. Xet. Go!
This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates at the file level, Xet operates at the chunk level, making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by xet-core, a Rust-based package that handles all the low-level details.
You can start using Xet today by installing the optional dependency:
pip install -U huggingface_hub[hf_xet]
With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.
Blog post: Xet on the Hub
Docs: Storage backends → Xet
Tip
Want to store your own files with Xet? We're gradually rolling out support on the Hugging Face Hub, so `hf_xet` uploads may need to be enabled for your repo. Join the waitlist to get onboarded soon!
This is the result of collaborative work by @bpronan, @hanouticelina, @rajatarya, @jsulz, @assafvayner, @Wauplin, + many others on the infra/Hub side!
- Xet download workflow by @hanouticelina in #2875
- Add ability to enable/disable xet storage on a repo by @hanouticelina in #2893
- Xet upload workflow by @hanouticelina in #2887
- Xet Docs for huggingface_hub by @rajatarya in #2899
- Adding Token Refresh Xet Tests by @rajatarya in #2932
- Using a two stage download path for xet files. by @bpronan in #2920
- add `xetEnabled` as an expand property by @hanouticelina in #2907
- Xet integration by @Wauplin in #2958
⚡ Enhanced InferenceClient
The `InferenceClient` has received significant updates and improvements in this release, making it more robust and easy to work with.
We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.
- Add Cohere as an Inference Provider by @alexrs-cohere in #2888
- Add Cerebras provider by @Wauplin in #2901
- remove cohere from testing and fix quality by @hanouticelina in #2902
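Using Cerebras or Cohere follows the same pattern as any other provider (the model below is illustrative):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras")  # or provider="cohere"

completion = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    model="meta-llama/Llama-3.3-70B-Instruct",
)
print(completion.choices[0].message.content)
```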
Novita is now our 3rd provider to support the text-to-video task, after Fal.ai and Replicate:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)
```
- [Inference Providers] Add text-to-video support for Novita by @hanouticelina in #2922
It is now possible to centralize billing on your organization rather than individual accounts! This helps companies manage their budget and set limits at a team level. The organization must be subscribed to Enterprise Hub.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", bill_to="openai")

image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
```
Handling long-running inference tasks just got easier! To prevent request timeouts, we've introduced asynchronous calls for text-to-video inference. We expect more providers to adopt the same structure soon, ensuring better robustness and developer experience.
- [Inference Providers] Async calls for fal.ai by @hanouticelina in #2927
- update polling interval by @hanouticelina in #2937
- [Inference Providers] Fix status and response URLs when polling text-to-video results with fal-ai by @hanouticelina in #2943
Miscellaneous improvements:
- [Bot] Update inference types by @HuggingFaceInfra in #2832
- Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted by @abidlabs in #2853
- [Inference providers] Root-only base URLs by @Wauplin in #2918
- Add prompt in image_to_image type by @Wauplin in #2956
- [Inference Providers] fold OpenAI support into `provider` parameter by @hanouticelina in #2949
- clean up some inference stuff by @Wauplin in #2941
- regenerate cassettes by @hanouticelina in #2925
- Fix payload model name when model id is a URL by @hanouticelina in #2911
- [InferenceClient] Fix token initialization and add more tests by @hanouticelina in #2921
- [Inference Providers] check inference provider mapping for HF Inference API by @hanouticelina in #2948
✨ New Features and Improvements
This release also includes several other notable features and improvements.
It's now possible to pass a path with a wildcard to the upload command instead of passing the `--include=...` option:
huggingface-cli upload my-cool-model *.safetensors
- Added support for Wildcards in huggingface-cli upload by @devesh-2002 in #2868
Deploying an Inference Endpoint from the Model Catalog just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.
```python
from huggingface_hub import create_inference_endpoint_from_catalog

endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()

endpoint.client.chat_completion(...)
```
The `ModelHubMixin` got two small updates (a sketch follows the PR list below):
- authors can provide a paper URL that will be added to all model cards pushed by the library.
- dataclasses are now supported for any init arg (was only the case of `config` until now).
- Add paper URL to hub mixin by @NielsRogge in #2917
- [HubMixin] handle dataclasses in all args, not only 'config' by @Wauplin in #2928
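A rough sketch of what this can look like for a library author (the `paper_url` class argument and the dataclass-typed init argument are the new bits; treat the exact argument names as assumptions and check the mixin docs):

```python
from dataclasses import dataclass

import torch.nn as nn

from huggingface_hub import PyTorchModelHubMixin


@dataclass
class TrainingArgs:
    learning_rate: float = 1e-3  # dataclass init args are now handled, not only `config`


class MyModel(
    nn.Module,
    PyTorchModelHubMixin,
    paper_url="https://arxiv.org/abs/xxxx.xxxxx",  # assumption: added to model cards pushed by the library
):
    def __init__(self, args: TrainingArgs):
        super().__init__()
        self.layer = nn.Linear(4, 4)
        self.args = args
```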
You can now sort by name, size, last updated and last used when using the `delete-cache` command:
huggingface-cli delete-cache --sort=size
- feat: add `--sort` arg to `delete-cache` to sort by size by @AlpinDale in #2815
Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see docs). This release makes it possible to do the same programmatically. The goal is to enable users to free up some storage space in their private repositories.
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```
Warning
This is a power-user tool to use carefully. Deleting LFS files from a repo is a non-revertible action.
💔 Breaking Changes
- `labels` has been removed from the `InferenceClient.zero_shot_classification` and `InferenceClient.zero_shot_image_classification` tasks in favor of `candidate_labels`. There has been a proper deprecation warning for that.
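In practice, calls should now look like the sketch below (with no model specified, the client falls back to a recommended model for the task):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()

result = client.zero_shot_classification(
    "I really enjoyed this movie, the plot was fantastic!",
    candidate_labels=["positive", "negative", "neutral"],  # previously passed as `labels=...`
)
print(result)
```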
🛠️ Small Fixes and Maintenance
🐛 Bug and Typo Fixes
- Fix revision bug in _upload_large_folder.py by @yuantuo666 in #2879
- bug fix in inference_endpoint wait function for proper waiting on update by @Ajinkya-25 in #2867
- Update SpaceHardware enum by @Wauplin in #2891
- Fix: Restore sys.stdout in notebook_login after error by @LEEMINJOO in #2896
- Remove link to unmaintained model card app Space by @davanstrien in #2897
- Fixing a typo in chat_completion example by @Wauplin in #2910
- chore: Link to Authentication by @FL33TW00D in #2905
- Handle file-like objects in curlify by @hanouticelina in #2912
- Fix typos by @omahs in #2951
- Add expanduser and expandvars to path envvars by @FredHaa in #2945
🏗️ Internal
Thanks to the work previously introduced by the `diffusers` team, we've published a GitHub Action that runs code style tooling on demand on Pull Requests, making the life of contributors and reviewers easier.
- add style bot GitHub action by @hanouticelina in #2898
- fix style bot GH action by @hanouticelina in #2906
- Fix bot style GH action (again) by @hanouticelina in #2909
Other minor updates:
- Fix prerelease CI by @Wauplin in #2877
- Update update-inference-types.yaml by @Wauplin in #2926
- [Internal] Fix check parameters script by @hanouticelina in #2957
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @Ajinkya-25
- bug fix in inference_endpoint wait function for proper waiting on update (#2867)
- @abidlabs
- Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted (#2853)
- @devesh-2002
- Added support for Wildcards in huggingface-cli upload (#2868)
- @alexrs-cohere
- Add Cohere as an Inference Provider (#2888)
- @NielsRogge
- Add paper URL to hub mixin (#2917)
- @AlpinDale
- feat: add `--sort` arg to `delete-cache` to sort by size (#2815)
- @FredHaa
- Add expanduser and expandvars to path envvars (#2945)
- @omahs
- Fix typos (#2951)