[MODEL] New model support for naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B #20931
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add the `ready` label to the PR. 🚀
Summary of Changes
Hello @bigshanedogg, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces full support for the `HyperCLOVAX-SEED-Vision-Instruct-3B` multimodal model. It includes the necessary architectural components for processing both image and video inputs, integrating them with the language model, and handling various image resolutions. An example script is also provided to facilitate easy testing and usage of the new model.
Highlights
- New Model Support: Added comprehensive support for the `naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B` multimodal model, enabling it to be used within the system.
- Multimodal Processing Implementation: Implemented custom processing logic for images and videos, including `HCXVisionProcessingInfo`, `HCXVisionMultiModalProcessor`, and `HCXVisionDummyInputsBuilder`, to seamlessly integrate visual inputs into the language model's pipeline.
- Vision Encoder and Projector Integration: Integrated and initialized a vision tower (supporting CLIP or Siglip architectures) and a multimodal projector (`HCXVisionMlp` or `HCXVisionCAbstractor`) responsible for converting visual features into a format consumable by the language model.
- Any-Resolution Image Handling: Included utility functions such as `unpad_image`, `select_best_resolution`, and `anyres_postprocessing` to support flexible image input handling across various resolutions and aspect ratios (see the sketch after this list).
- Example Usage Script: Provided a new offline inference example script (`hyperclovax_seed_vision_3b_instruct.py`) to demonstrate how to load and use the newly supported `HyperCLOVAX-SEED-Vision-Instruct-3B` model with multimodal inputs.
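These helpers follow the familiar LLaVA-NeXT-style "anyres" approach: choose the candidate grid resolution that preserves the most image content with the least padding, then strip the letterbox padding from the resulting feature map. The sketch below is illustrative only; the function names mirror those listed above, but the exact signatures and behavior in this PR may differ.

```python
# Illustrative sketch of typical "anyres" helpers; the actual implementations
# in this PR may differ in signature and detail.
import math
import torch


def select_best_resolution(original_size: tuple[int, int],
                           possible_resolutions: list[tuple[int, int]]) -> tuple[int, int]:
    """Pick the candidate (width, height) that keeps the most image content
    while wasting the least padding area."""
    orig_w, orig_h = original_size
    best, best_effective, best_waste = None, -1, math.inf
    for w, h in possible_resolutions:
        scale = min(w / orig_w, h / orig_h)
        scaled_w, scaled_h = int(orig_w * scale), int(orig_h * scale)
        effective = min(scaled_w * scaled_h, orig_w * orig_h)
        waste = w * h - effective
        if effective > best_effective or (effective == best_effective and waste < best_waste):
            best, best_effective, best_waste = (w, h), effective, waste
    return best


def unpad_image(tensor: torch.Tensor, original_size: tuple[int, int]) -> torch.Tensor:
    """Remove letterbox padding from a (C, H, W) feature map so it matches the
    original image's aspect ratio."""
    orig_w, orig_h = original_size
    cur_h, cur_w = tensor.shape[1], tensor.shape[2]
    if orig_w / orig_h > cur_w / cur_h:
        # Image was padded on top/bottom: crop rows.
        new_h = round(orig_h * cur_w / orig_w)
        pad = (cur_h - new_h) // 2
        return tensor[:, pad:cur_h - pad, :]
    else:
        # Image was padded on left/right: crop columns.
        new_w = round(orig_w * cur_h / orig_h)
        pad = (cur_w - new_w) // 2
        return tensor[:, :, pad:cur_w - pad]
```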
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request adds support for the `naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B` model. The changes include the model implementation, an example script, and documentation updates. The implementation looks solid, but I've found a critical issue that would cause a runtime error, along with several high- and medium-severity issues related to correctness and maintainability in the new model file. Addressing these will ensure the new model is robust and well integrated.
Force-pushed from `12aba7d` to `d64c8d3`.
Review comment on `examples/offline_inference/hyperclovax_seed/hyperclovax_seed_vision_3b_instruct.py` (outdated; resolved).
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from `56284e3` to `6cf535c`.
I've addressed the feedback you left in the comments and updated the PR description accordingly.
(cc @Isotr0py, @DarkLight1337)
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from `0560c88` to `a6b1127`.
Force-pushed from `0898a0e` to `db1775a`.
Can you push a new commit to this PR to trigger CI?
Force-pushed from `0fee73a` to `f1664b3`.
Signed-off-by: bigshanedogg <bigshane319@gmail.com>
Force-pushed from `f1664b3` to `c6490ff`.
Essential Elements of an Effective PR Description Checklist
- Update `supported_models.md` and `examples` for a new model.

Purpose
Test Plan
offline inference
```
PYTHONPATH=${PWD} python examples/offline_inference/vision_language.py \
    --model-type hyperclovax_seed_vision \
    --modality image \
    --num-prompts 4
```
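For reference, the example script exercises roughly the same path as calling the model directly through vLLM's Python API. The sketch below is an illustration under assumptions: the chat-template string and image path are placeholders, so consult the example script in this PR for the exact prompt format.

```python
# Minimal offline-inference sketch; the prompt template here is hypothetical --
# see the example script added in this PR for the real chat template.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    trust_remote_code=True,
    max_model_len=4096,
)

image = Image.open("example.jpg").convert("RGB")  # placeholder image
prompt = "<|im_start|>user <image>\nDescribe this image.<|im_end|>\n<|im_start|>assistant\n"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```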
online serving
```
python -m vllm.entrypoints.openai.api_server \
    --model naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B \
    --trust_remote_code \
    --chat-template-content-format "openai" \
    --max-model-len 4096 \
    --max-num-seqs 8
```
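Once the server is up, the endpoint can be queried with any OpenAI-compatible client; the snippet below is a sketch in which the image URL and question are placeholders.

```python
# Sketch of querying the OpenAI-compatible server started above; the image URL
# and prompt are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
    max_tokens=128,
)
print(response.choices[0].message.content)
```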
Test Result
offline inference
online serving
(Optional) Documentation Update
- `supported_models.md`
- `examples/offline_inference/vision_language.py`
- `examples/offline_inference/vision_language_multi_image.py`
- `tests/models/multimodal/processing/test_common.py`