Releases: huggingface/huggingface_hub
v0.21.0: dataclasses everywhere, file-system, PyTorchModelHubMixin, serialization and more.
Discuss the release in our Community Tab. Feedback welcome!! 🤗
🖇️ Dataclasses everywhere!
All objects returned by the HfApi client are now dataclasses!
In the past, objects were a mix of dataclasses, typed dictionaries, untyped dictionaries, and even plain classes. This is now all harmonized, with the goal of improving the developer experience.
Kudos to the community for implementing and testing the harmonization process. Thanks again for the contributions!
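For example, attributes are now accessed as typed dataclass fields instead of dictionary keys (a quick sketch; the repo id is just for illustration):

```python
>>> import dataclasses
>>> from huggingface_hub import HfApi
>>> info = HfApi().model_info("gpt2")
>>> dataclasses.is_dataclass(info)
True
>>> info.downloads  # attribute access instead of info["downloads"]
```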
- Use dataclasses for all objects returned by HfApi #1911 by @Ahmedniz1 in #1974
- Updating HfApi objects to use dataclass by @Ahmedniz1 in #1988
- Dataclasses for objects returned hf api by @NouamaneELGueddarii in #1993
💾 FileSystem
The HfFileSystem class implements the fsspec interface to allow loading and writing files with a filesystem-like interface. The interface is heavily used by the datasets library, and this release further improves the efficiency and robustness of the integration.
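As a short sketch (the repo path is a placeholder), reading a file works with the usual fsspec-style calls:

```python
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()
>>> fs.ls("datasets/my-username/my-dataset", detail=False)  # placeholder repo
['datasets/my-username/my-dataset/data.csv']
>>> with fs.open("datasets/my-username/my-dataset/data.csv", "r") as f:
...     data = f.read()
```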
- Pass revision in path to AbstractBufferedFile init by @albertvillanova in #1948
- [HfFileSystem] Fix `rm` on branch by @lhoestq in #1957
- Retry fetching data on 502 error in `HfFileSystem` by @mariosasko in #1981
- Add HfFileSystemStreamFile by @lhoestq in #1967
- [HfFileSystem] Copy non lfs files by @lhoestq in #1996
- Add `HfFileSystem.url` method by @mariosasko in #2027
🧩 Pytorch Hub Mixin
The PyTorchModelHubMixin class lets you upload ANY pytorch model to the Hub in a few lines of code. More precisely, it is a class that can be inherited by any nn.Module subclass to add the from_pretrained, save_pretrained and push_to_hub helpers to your class. It handles serialization and deserialization of weights and configs for you, and enables download counts on the Hub.
With this release, we've fixed two pain points that were holding users back from using this mixin (see the sketch after the list below):
- Configs are now better handled. The mixin automatically detects if the base class defines a config, saves it on the Hub and then injects it at load time, either as a dictionary or a dataclass depending on the base class's expectations.
- Weights are now saved as `.safetensors` files instead of pytorch pickles, for safety reasons. Loading from previous pytorch pickles is still supported, but we are moving toward deprecating them entirely (in the mid to long term).
- Better config support in ModelHubMixin by @Wauplin in #2001
- Use safetensors by default for `PyTorchModelHubMixin` by @bmuskalla in #2033
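As a minimal sketch (class name, sizes and repo id are placeholders), inheriting from the mixin is all it takes:

```python
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin

>>> class MyModel(nn.Module, PyTorchModelHubMixin):
...     def __init__(self, hidden_size: int = 16):
...         super().__init__()
...         self.linear = nn.Linear(hidden_size, hidden_size)
...     def forward(self, x):
...         return self.linear(x)

>>> model = MyModel()
>>> model.save_pretrained("my-model")  # weights saved as safetensors
>>> model.push_to_hub("username/my-model")  # placeholder repo_id
>>> reloaded = MyModel.from_pretrained("username/my-model")
```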
✨ InferenceClient improvements
The audio-to-audio task is now supported by both the InferenceClient and the AsyncInferenceClient!
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])
- Added audio to audio in inference client by @Ahmedniz1 in #2020
Also fixed a few things:
- Fix intolerance for new field in TGI stream response: 'index' by @danielpcox in #2006
- Fix optional model in tabular tasks by @Wauplin in #2018
- Added best_of to non-TGI ignored parameters by @dopc in #1949
📤 Model serialization
With the aim of harmonizing repo structures and file serialization on the Hub, we added a new serialization module with a first helper, split_state_dict_into_shards, that takes a state dict and splits it into shards. The implementation is mostly taken from transformers and aims to be reused by other libraries in the ecosystem. It seamlessly supports torch, tensorflow and numpy weights, and can be easily extended to other frameworks.
This is a first step in the harmonization process and more loading/saving helpers will be added soon.
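A rough sketch of the intended usage (assuming the torch-specific variant `split_torch_state_dict_into_shards` and a `filename_to_tensors` mapping on the returned object; check the serialization reference for the exact names):

```python
import torch
from huggingface_hub.serialization import split_torch_state_dict_into_shards

state_dict = {f"layer_{i}.weight": torch.randn(256, 256) for i in range(10)}
split = split_torch_state_dict_into_shards(state_dict, max_shard_size="1MB")
for filename, tensor_names in split.filename_to_tensors.items():
    shard = {name: state_dict[name] for name in tensor_names}
    # persist each shard, e.g. with safetensors.torch.save_file(shard, filename)
```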
📚 Documentation
🌐 Translations
The community is actively working on translating the huggingface_hub documentation into other languages. We now have docs available in Simplified Chinese (here) and in French (here) to help democratize good machine learning!
- [i18n-CN] Translated some files to simplified Chinese #1915 by @2404589803 in #1916
- Update .github workflow to build cn docs on PRs by @Wauplin in #1931
- [i18n-FR] Translated files in french and reviewed them by @JibrilEl in #2024
Docs misc
- Document `base_model` in modelcard metadata by @Wauplin in #1936
- Update the documentation of add_collection_item by @FremyCompany in #1958
- Docs[i18n-en]: added pkgx as an installation method to the docs by @michaelessiet in #1955
- Added `hf_transfer` extra into `setup.py` and `docs/` by @jamesbraza in #1970
- Documenting CLI default for `download --repo-type` by @jamesbraza in #1986
- Update repository.md by @xmichaelmason in #2010
Docs fixes
- Fix URL in `get_safetensors_metadata` docstring by @Wauplin in #1951
- Fix grammar by @Anthonyg5005 in #2003
- Fix doc by @jordane95 in #2013
- typo fix by @Decryptu in #2035
🛠️ Misc improvements
Creating a commit with an invalid README will fail early instead of uploading all LFS files before failing to commit.
Added a revision_exists helper, working similarly to repo_exists and file_exists:
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
InferenceEndpoint.wait(...) now raises an error if the endpoint is in a failed state.
Improved progress bar when downloading a file
Other stuff:
- added will not echo message to the login token message by @vtrenton in #1925
- Raise if repo is disabled by @Wauplin in #1965
- Fix timezone in datetime parsing by @Wauplin in #1982
- retry on any 5xx on upload by @Wauplin in #2026
💔 Breaking changes
- Classes `ModelFilter` and `DatasetFilter` are deprecated when listing models and datasets, in favor of a simpler API that lets you pass the parameters directly to `list_models` and `list_datasets`.
>>> from huggingface_hub import list_models, ModelFilter
# use
>>> list_models(language="zh")
# instead of
>>> list_models(filter=ModelFilter(language="zh"))
Cleaner, right? ModelFilter and DatasetFilter will still be supported until the v0.24 release.
- In the inference client, `ModelStatus.compute_type` is no longer a string, but a dictionary with more detailed information.
0.20.3 hot-fix: Fix HfFolder login when env variable not set
This patch release fixes an issue when retrieving the locally saved token with huggingface_hub.HfFolder.get_token. For the record, this is a "planned to be deprecated" method, in favor of huggingface_hub.get_token, which is more robust and versatile. The issue came from a breaking change introduced in #1895, meaning only 0.20.x is affected.
For more details, please refer to #1966.
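For reference, the recommended helper reads the token from the environment or the local machine:

```python
>>> from huggingface_hub import get_token
>>> token = get_token()  # returns None if no token is found
```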
Full Changelog: v0.20.2...v0.20.3
0.20.2 hot-fix: Fix concurrency issues in google colab login
A concurrency issue when using userdata.get to retrieve the HF_TOKEN secret led to deadlocks when downloading files in parallel. This hot-fix release fixes the issue by using a global lock before trying to get the token from the secrets vault. More details in #1953.
Full Changelog: v0.20.1...v0.20.2
0.20.1: hot-fix Fix circular import
This hot-fix release fixes a circular import error that happened when importing the login or logout helpers from huggingface_hub.
Related PR: #1930
Full Changelog: v0.20.0...v0.20.1
v0.20.0: Authentication, speed, safetensors metadata, access requests and more.
(Discuss the release in our Community Tab. Feedback welcome!! 🤗)
🔐 Authentication
Authentication has been greatly improved in Google Colab. The best way to authenticate in a Colab notebook is to define a HF_TOKEN secret in your personal secrets. When a notebook tries to reach the Hub, a pop-up asks you if you want to share the HF_TOKEN secret with this notebook, as an opt-in mechanism. This way, there is no need to call huggingface_hub.login and copy-paste your token anymore! 🔥🔥🔥
In addition to the Google Colab integration, the login guide has been revisited to focus on security. It is recommended to authenticate either using huggingface_hub.login or the HF_TOKEN environment variable, rather than passing a hardcoded token in your scripts. Check out the new guide here.
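In practice, the recommended flow in a script looks like this (interactive prompt, or the HF_TOKEN environment variable; never a hardcoded token):

```python
>>> from huggingface_hub import login
>>> login()  # prompts for a token; skip entirely if HF_TOKEN is set
```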
- Login/authentication enhancements by @Wauplin in #1895
- Catch `SecretNotFoundError` in google colab login by @Wauplin in #1912
🏎️ Faster HfFileSystem
HfFileSystem is a pythonic, fsspec-compatible file interface to the Hugging Face Hub. The implementation has been greatly improved to optimize fs.find performance.
Here is a quick benchmark with the bigcode/the-stack-dedup dataset:
| | v0.19.4 | v0.20.0 |
|---|---|---|
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=False)` | 46.2s | 1.63s |
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=True)` | 47.3s | 24.2s |
- Faster `HfFileSystem.find` by @mariosasko in #1809
- Faster `HfFileSystem.glob` by @lhoestq in #1815
- Fix common path in `_ls_tree` by @lhoestq in #1850
- Remove `maxdepth` param from `HfFileSystem.glob` by @mariosasko in #1875
- [HfFileSystem] Support quoted revisions in path by @lhoestq in #1888
- Deprecate `HfApi.list_files_info` by @mariosasko in #1910
🚪 Access requests API (gated repos)
Models and datasets can be gated to monitor who's accessing the data you are sharing. You can also require manual approval of access requests. Access requests can now be managed programmatically using HfApi. This can be useful, for example, if you have specific user-screening requirements (compliance, etc.) or if you want to condition access to a model on completing a payment flow.
Check out this guide to learn more about gated repos.
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> requests
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='pending',
fields=None,
),
...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
🔍 Parse Safetensors metadata
Safetensors is a simple, fast and secure format to save tensors in a file. Its advantages make it the preferred format to host weights on the Hub. Thanks to its specification, it is possible to parse the file metadata on the fly. HfApi now provides get_safetensors_metadata, a helper to get safetensors metadata from a repo.
# Parse repo with single weights file
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
metadata=None,
sharded=False,
weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
Other improvements
List and filter collections
You can now list collections on the Hub. You can filter them to return only collections containing a given item, or created by a given author.
>>> from huggingface_hub import list_collections
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
... print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
- add list_collections endpoint, solves #1835 by @ceferisbarov in #1856
- fix list collections sort values by @Wauplin in #1867
- Warn about truncation when listing collections by @Wauplin in #1873
Respect .gitignore
upload_folder now respects .gitignore files!
Previously, you could filter which files to upload from a folder using the allow_patterns and ignore_patterns parameters. This can now be done automatically by simply creating a .gitignore file in your repo (see the sketch below).
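A minimal sketch (paths and repo id are placeholders); no extra parameter is needed:

```python
>>> from huggingface_hub import upload_folder
>>> # With a .gitignore file at the root of ./checkpoints, matching files are skipped
>>> upload_folder(folder_path="./checkpoints", repo_id="username/my-model")
```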
- Respect `.gitignore` file in commits by @Wauplin in #1868
- Remove respect_gitignore parameter by @Wauplin in #1876
Robust uploads
Uploading LFS files has also gotten more robust, with a retry mechanism if a transient error happens while uploading to S3.
Target language in InferenceClient.translation
InferenceClient.translation now supports src_lang/tgt_lang for applicable models.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="es_XX")
'Mi nombre es Sarah Jessica Parker pero puedes llamarme Jessica'
- add language support to translation client, solves #1763 by @ceferisbarov in #1869
Support source in reported EvalResult
EvalResult now supports source_name and source_link to provide a custom source for a reported result.
🛠️ Misc
Fetch all pull request refs with list_repo_refs.
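A quick sketch (assuming an `include_pull_requests` flag and a `pull_requests` attribute on the returned object):

```python
>>> from huggingface_hub import list_repo_refs
>>> refs = list_repo_refs("openai/whisper-large-v3", include_pull_requests=True)
>>> refs.pull_requests  # assumed attribute, alongside refs.branches and refs.tags
```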
Filter discussions when listing them with get_repo_discussions.
# List opened PR from "sanchit-gandhi" on model repo "openai/whisper-large-v3"
>>> from huggingface_hub import get_repo_discussions
>>> discussions = get_repo_discussions(
... repo_id="openai/whisper-large-v3",
... author="sanchit-gandhi",
... discussion_type="pull_request",
... discussion_status="open",
... )
- ✨ Add filters to HfApi.get_repo_discussions by @SBrandeis in #1845
New field createdAt for ModelInfo, DatasetInfo and SpaceInfo.
It's now possible to create an inference endpoint running on a custom docker image (typically: a TGI container).
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
Upload CLI: create branch when revision does not exist
🖥️ Environment variables
huggingface_hub.constants.HF_HOME has been made a public constant (see reference).
Offline mode has gotten more consistent. If HF_HUB_OFFLINE is set, any HTTP call to the Hub will fail. The fallback mechanism in snapshot_download has been refactored to be aligned with the hf_hub_download workflow. If offline mode is activated (or a connection error happens) and the files are already in the cache, snapshot_download returns the corresponding snapshot directory.
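A sketch of the offline workflow, assuming the files have been downloaded previously:

```python
>>> import os
>>> os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before importing huggingface_hub
>>> from huggingface_hub import snapshot_download
>>> snapshot_download("gpt2")  # returns the cached snapshot path without any HTTP call
```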
- Respect HF_HUB_OFFLINE for every http call by @Wauplin in #1899
- Improve `snapshot_download` offline mode by @Wauplin in #1913
The DO_NOT_TRACK environment variable is now respected to deactivate telemetry calls. This is similar to HF_HUB_DISABLE_TELEMETRY, but not specific to Hugging Face.
📚 Documentation
- Document more list repos behavior by @Wauplin in #1823
- [i18n-KO] 🌐 Translated `git_vs_http.md` to Korean by @heuristicwave in #1862
Doc fixes
v0.19.4 - Hot-fix: do not fail if pydantic install is corrupted
On Python 3.8, it is fairly easy to get a corrupted install of pydantic (more specifically, pydantic 2.x cannot run if tensorflow is installed, because of an incompatible requirement on typing_extensions). Since pydantic is an optional dependency of huggingface_hub, we do not want to crash at huggingface_hub import time if the pydantic install is corrupted. However, this was the case because of how imports are made in huggingface_hub. This hot-fix release fixes the bug: if pydantic is not correctly installed, we only raise a warning and continue as if it was not installed at all.
Related PR: #1829
Full Changelog: v0.19.3...v0.19.4
v0.19.3 - Hot-fix: pin `pydantic<2.0` on Python3.8
Hot-fix release after #1828.
In 0.19.0 we loosened the pydantic requirements to accept both 1.x and 2.x, since huggingface_hub is compatible with both. However, this started to cause issues when installing both huggingface_hub[inference] and tensorflow in a Python 3.8 environment. The problem comes from the fact that on Python 3.8, pydantic>=2.x and tensorflow are not compatible: tensorflow depends on typing_extensions<=4.5.0 while pydantic 2.x requires typing_extensions>=4.6. This causes an ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' when importing huggingface_hub.
As a side note, tensorflow support for Python 3.8 has been dropped since 2.14.0, so this issue should affect fewer and fewer users over time.
Full Changelog: v0.19.2...v0.19.3
v0.19.2 - Patch: expose HF_HOME in constants
Not a hot-fix.
In #1786 (already released in 0.19.0), we harmonized the environment variables in the HF ecosystem, with the goal of propagating this harmonization to other HF libraries. In that work, we forgot to expose HF_HOME as a constant value that can be reused, especially by transformers or datasets. This release fixes this (see #1825).
Full Changelog: v0.19.1...v0.19.2
v0.19.1 - Hot-fix: ignore TypeError when listing models with corrupted ModelCard
Full Changelog: v0.19.0...v0.19.1.
Fixes a regression bug (PR #1821) introduced in 0.19.0 that made looping over models with list_models fail. The problem came from the fact that we now parse the data returned by the server into Python objects. However, for some models the metadata in the model card is not valid. This is usually checked by the server, but some models created before we started enforcing correct metadata are invalid. This hot-fix fixes the issue by ignoring the corrupted data, if any.
v0.19.0: Inference Endpoints and robustness!
(Discuss the release in our Community Tab. Feedback welcome!! 🤗)
🚀 Inference Endpoints API
Inference Endpoints provides a secure solution to easily deploy models hosted on the Hub in a production-ready infrastructure managed by Hugging Face. With the huggingface_hub>=0.19.0 integration, you can now manage your Inference Endpoints programmatically. Combined with the InferenceClient, this becomes the go-to solution to deploy models and run jobs in production, either sequentially or in batch!
Here is an example of how to get an inference endpoint, wake it up, wait for initialization, run jobs in batch, and pause the endpoint again. All of this in a few lines of code! For more details, please check out our dedicated guide.
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint
# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()
# Run inference
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])
# Pause endpoint
>>> endpoint.pause()
- Implement API for Inference Endpoints by @Wauplin in #1779
- Fix inference endpoints docs by @Wauplin in #1785
⏬ Improved download experience
huggingface_hub is a library primarily used to transfer (huge!) files with the Hugging Face Hub. Our goal is to keep improving the experience around this core part of the library. In this release, we introduce a more robust download mechanism for slow/limited connections, while improving the UX for users with high bandwidth available!
More robust downloads
Getting a connection error in the middle of a download is frustrating. That's why we've implemented a retry mechanism that automatically reconnects if a connection gets closed or a ReadTimeout error is raised. The download restarts exactly where it stopped, without having to redownload any bytes.
- Retry on ConnectionError/ReadTimeout when streaming file from server by @Wauplin in #1766
- Reset nb_retries if data has been received from the server by @Wauplin in #1784
In addition to this, it is possible to configure huggingface_hub with higher timeouts, thanks to @Shahafgo. This should help in getting around some issues on slower connections (see the sketch after the list below).
- Adding the ability to configure the timeout of get request by @Shahafgo in #1720
- Fix a bug to respect the HF_HUB_ETAG_TIMEOUT. by @Shahafgo in #1728
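A sketch of the idea using environment variables (values are illustrative; HF_HUB_ETAG_TIMEOUT comes from the PRs above, HF_HUB_DOWNLOAD_TIMEOUT is an assumed companion variable):

```python
>>> import os
>>> os.environ["HF_HUB_ETAG_TIMEOUT"] = "30"      # seconds
>>> os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "30"  # assumed name, check the reference
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download("gpt2", "config.json")
```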
Progress bars while using hf_transfer
hf_transfer is a Rust-based library focused on improving upload and download speed on machines with high bandwidth available. Once installed (pip install -U hf_transfer), it can transparently be used with huggingface_hub simply by setting HF_HUB_ENABLE_HF_TRANSFER=1 as an environment variable. The counterpart of higher performance is the lack of some user-friendly features such as better error handling and a retry mechanism, meaning it is recommended only for power users. In this release we still ship a new feature to improve UX: progress bars. No need to update any existing code; a simple library upgrade is enough.
- `hf-transfer` progress bar by @cbensimon in #1792
- Add support for progress bars in hf_transfer uploads by @Wauplin in #1804
📚 Documentation
huggingface-cli guide
huggingface-cli is the CLI tool shipped with huggingface_hub. It recently got some nice improvements, especially with commands to download and upload files directly from the terminal. All of this needed a guide, so here it is!
Environment variables
Environment variables are useful to configure how huggingface_hub should work. Historically, we had some inconsistencies in how those variables were named. This is now improved, with a backward-compatible approach. Please check the package reference for more details. The goal is to propagate those changes to the whole HF ecosystem, making configuration easier for everyone.
- Harmonize environment variables by @Wauplin in #1786
- Ensure backward compatibility for HUGGING_FACE_HUB_TOKEN env variable by @Wauplin in #1795
- Do not promote `HF_ENDPOINT` environment variable by @Wauplin in #1799
Hindi translation
Hindi documentation landed on the Hub thanks to @aneeshd27! Check out the Hindi version of the quickstart guide here.
- Added translation of 3 files as mentioned in issue by @aneeshd27 in #1772
Minor docs fixes
- Added `[[autodoc]]` for `ModelStatus` by @jamesbraza in #1758
- Expanded docstrings on `post` and `ModelStatus` by @jamesbraza in #1740
- Fix document link for manage-cache by @liuxueyang in #1774
- Minor doc fixes by @pcuenca in #1775
💔 Breaking changes
Legacy ModelSearchArguments and DatasetSearchArguments have been completely removed from huggingface_hub. This shouldn't cause problems as they were already not in use (and unusable in practice).
- Removed GeneralTags, ModelTags and DatasetTags by @VictorHugoPilled in #1761
Classes containing details about a repo (ModelInfo, DatasetInfo and SpaceInfo) have been refactored by @mariosasko to be more Pythonic and aligned with the other classes in huggingface_hub. In particular, those objects are now based on the dataclasses module instead of a custom ReprMixin class. Every change is meant to be backward compatible, meaning no breaking changes are expected. However, if you detect any inconsistency, please let us know and we will fix it asap.
- Replace `ReprMixin` with dataclasses by @mariosasko in #1788
- Fix SpaceInfo initialization + add test by @Wauplin in #1802
The legacy Repository and InferenceAPI classes are now deprecated, but will not be removed before the next major release (v1.0).
Instead of the git-based Repository, we advise using the HTTP-based HfApi. Check out this guide explaining the reasons behind it. For InferenceAPI, we recommend switching to InferenceClient, which is much more feature-complete and will keep getting improved.
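As a rough sketch of the migration (file name and repo id are placeholders), a git add/commit/push with Repository becomes a single HTTP call:

```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="weights.bin",
...     path_in_repo="weights.bin",
...     repo_id="username/my-model",
... )
```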
⚙️ Miscellaneous improvements, fixes and maintenance
InferenceClient
- Adding `InferenceClient.get_recommended_model` by @jamesbraza in #1770
- Fix InferenceClient.text_generation when pydantic is not installed by @Wauplin in #1793
- Supporting `pydantic<3` by @jamesbraza in #1727
HfFileSystem
- [hffs] Raise `NotImplementedError` on transaction commits by @Wauplin in #1736
- Fix huggingface filesystem repo_type not forwarded by @Wauplin in #1791
- Fix `HfFileSystemFile` when init fails + improve error message by @Wauplin in #1805
FIPS compliance
Misc fixes
- Fix UnboundLocalError when using commit context manager by @hahunavth in #1722
- Fixed improperly configured 'every' leading to test_sync_and_squash_history failure by @jamesbraza in #1731
- Testing `WEBHOOK_PAYLOAD_EXAMPLE` deserialization by @jamesbraza in #1732
- Keep lock files in a `/locks` folder to prevent rare concurrency issue by @beeender in #1659
- Fix Space runtime on static Space by @Wauplin in #1754
- Clearer error message on unprocessable entity. by @Wauplin in #1755
- Do not warn in ModelHubMixin on missing config file by @Wauplin in #1776
- Update SpaceHardware enum by @Wauplin in #1798
- change prop name by @julien-c in #1803
Internal
- Bump version to 0.19 by @Wauplin in #1723
- Make `@retry_endpoint` a default for all test by @Wauplin in #1725
- Retry test on 502 Bad Gateway by @Wauplin in #1737
- Consolidated mypy type ignores in `InferenceClient.post` by @jamesbraza in #1742
- fix: remove useless token by @rtrompier in #1765
- Fix CI (typing-extensions minimal requirement) by @Wauplin in #1781
- remove black formatter to use only ruff by @Wauplin in #1783
- Separate test and prod cache (+ ruff formatter) by @Wauplin in #1789
- fix 3.8 tensorflow in ci by @Wauplin (direct commit on main)
🤗 Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @VictorHugoPilled
- Removed GeneralTags, ModelTags and DatasetTags (#1761)
- @aneeshd27
- Added translation of 3 files as mentioned in issue (#1772)