Conversation

@dependabot dependabot bot commented on behalf of github Aug 6, 2025

Bumps transformers from 4.49.0 to 4.53.0.

Release notes

Sourced from transformers' releases.

Release v4.53.0

Gemma3n

Gemma 3n models are designed for efficient execution on low-resource devices. They accept multimodal input (text, image, video, and audio) and generate text outputs, with open weights for pre-trained and instruction-tuned variants. These models were trained with data in over 140 spoken languages.

Gemma 3n models use selective parameter activation technology to reduce resource requirements. This technique allows the models to operate at an effective size of 2B and 4B parameters, which is lower than the total number of parameters they contain. For more information on Gemma 3n's efficient parameter management technology, see the Gemma 3n page.

from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    torch_dtype=torch.bfloat16,
    model="google/gemma-3n-e4b",
    device="cuda",
)
output = pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
    text="<image_soft_token> in this image, there is",
)
print(output)

Dia

Dia is an open-source text-to-speech (TTS) model (1.6B parameters) developed by Nari Labs. It can generate highly realistic dialogue from a transcript, including nonverbal cues such as laughter and coughing. Emotion and tone can also be controlled via audio conditioning (voice cloning).

Model Architecture: Dia is an encoder-decoder transformer based on the original transformer architecture, with some more modern features such as rotary positional embeddings (RoPE). For the text portion (encoder), a byte tokenizer is used, while the audio portion (decoder) relies on DAC, a pretrained codec model that encodes speech into discrete codebook tokens and decodes them back into audio.
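A hedged sketch of driving Dia in the same pipeline style as the Gemma example above. The checkpoint id "nari-labs/Dia-1.6B" and the [S1]/[S2] speaker tags are assumptions based on Nari Labs' public materials, not on this PR; check the official model card before use.

```python
# Hypothetical sketch: speaker-tagged transcripts for Dia TTS.
# Checkpoint id and tag format are assumptions, not verified API.

def dia_transcript(turns):
    """Format (speaker_number, line) pairs into the tagged transcript Dia expects."""
    return " ".join(f"[S{speaker}] {line}" for speaker, line in turns)

if __name__ == "__main__":
    # Heavyweight import kept inside the guard so the helper stays importable offline.
    from transformers import pipeline

    tts = pipeline("text-to-speech", model="nari-labs/Dia-1.6B", device="cuda")
    text = dia_transcript([(1, "Welcome back! (laughs)"), (2, "Good to be here.")])
    audio = tts(text)  # a dict with the waveform and its sampling rate is assumed
```

Nonverbal cues like "(laughs)" are written inline in the transcript, matching how the release notes describe Dia's dialogue generation.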

Kyutai Speech-to-Text

Kyutai STT is a speech-to-text model architecture based on the Mimi codec, which encodes audio into discrete tokens in a streaming fashion, and a Moshi-like autoregressive decoder. The Kyutai lab has released two model checkpoints:

... (truncated)
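Since Mimi encodes audio in a streaming fashion, a caller can feed fixed-length chunks rather than a whole file. A minimal sketch, assuming a hypothetical checkpoint id "kyutai/stt-1b-en_fr" and the standard automatic-speech-recognition pipeline task; neither is taken from this PR.

```python
# Hypothetical sketch: chunking audio to mimic streaming input for Kyutai STT.
# The model id and sampling rate below are assumptions, not verified values.

def chunk_audio(samples, sampling_rate, chunk_seconds=2.0):
    """Split a raw 1-D sample sequence into fixed-length chunks for streaming-style input."""
    step = max(1, int(sampling_rate * chunk_seconds))
    return [samples[i:i + step] for i in range(0, len(samples), step)]

if __name__ == "__main__":
    from transformers import pipeline  # heavyweight import, kept out of the pure helper

    asr = pipeline("automatic-speech-recognition", model="kyutai/stt-1b-en_fr")
    # `waveform` would be a 1-D float array; 24 kHz is an assumed rate for Mimi.
    # for chunk in chunk_audio(waveform, 24_000):
    #     print(asr({"raw": chunk, "sampling_rate": 24_000})["text"])
```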

Commits

Dependabot compatibility score

You can trigger a rebase of this PR by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the Security Alerts page.

Summary by cubic

Upgraded the transformers library from 4.49.0 to 4.53.0 to pick up new model support and bug fixes. The release adds features such as Gemma 3n, Dia TTS, and Kyutai speech-to-text.

  • Dependencies
    • Updated transformers in backend requirements files.

Note
Automatic rebases have been disabled on this pull request as it has been open for over 30 days.

@dependabot dependabot bot added dependencies Pull requests that update a dependency file python Pull requests that update Python code labels Aug 6, 2025
@dependabot dependabot bot requested a review from a team as a code owner August 6, 2025 20:02

vercel bot commented Aug 6, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project         | Deployment | Preview | Comments | Updated (UTC)
internal-search | Error      | Error   |          | Sep 1, 2025 5:04pm


@greptile-apps greptile-apps bot left a comment


Greptile Summary

This PR is an automated dependency update by Dependabot that bumps the transformers library from version 4.49.0 to 4.53.0 in the backend requirements files (backend/requirements/default.txt). The transformers library is a critical dependency for Onyx's model server component, providing state-of-the-art NLP models and tools for the AI functionality.

The update introduces several significant new model architectures and capabilities:

  • Gemma 3n: New multimodal models capable of handling text, image, video, and audio input while generating text outputs. These models use selective parameter activation for efficient execution on low-resource devices.
  • Dia: A 1.6B parameter text-to-speech (TTS) model that can generate realistic dialogue including nonverbal communications like laughter and coughing, with emotion and tone control via audio conditioning.
  • Kyutai Speech-to-Text: A new speech-to-text model architecture based on the Mimi codec for streaming audio transcription.

Additionally, the update includes important bug fixes such as Whisper encoder CPU offloading improvements, Qwen2-VL vision attention scaling fixes, and enhanced long-form generation handling for Whisper pipelines. The transformers library integrates deeply with Onyx's model server architecture, which handles AI model inference and is containerized separately from the main backend API server.

Confidence score: 4/5

  • This PR is safe to merge with minimal risk as it's a minor version bump of a well-maintained library
  • Score reflects the routine nature of dependency updates and the backward compatibility maintained in minor releases
  • Pay attention to the model server component and any integration tests that validate model functionality

1 file reviewed, no comments


Bumps [transformers](https://github.com/huggingface/transformers) from 4.49.0 to 4.53.0.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.49.0...v4.53.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 4.53.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot force-pushed the dependabot/pip/backend/requirements/transformers-4.53.0 branch from 1697461 to 4e5d43c Compare September 1, 2025 16:59
@wenxi-onyx wenxi-onyx closed this Oct 8, 2025

dependabot bot commented on behalf of github Oct 8, 2025

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

@dependabot dependabot bot deleted the dependabot/pip/backend/requirements/transformers-4.53.0 branch October 8, 2025 17:14