Python demos requirement incompatibility #3205
Comments
Fixed here: #3211
Yes, just looked. But the transformers line now allows installing versions that have the CVEs (and the CVEs are rated high: most are 8.8 CVSS). The CVEs were fixed in 4.48.0. I'll give this a try and see if it selects the newest or picks something that might still have the CVEs.
OK, ran the installation. It now downloads transformers from 4.49 all the way back to transformers-4.26.0 while deciding which to pick, and pip ultimately selected transformers-4.39.3. Previously, the requirements were pinned to 4.40.0, so this looks like a regression from that perspective. I'd suggest making it transformers>=4.48.0,<=4.49 so that it's past all the CVEs.
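Expressed as a requirements line, the suggestion above would look something like the following sketch (untested here; the next comment shows it conflicts with optimum's own constraints):

```
# Suggested pin from the comment above; keeps transformers past the CVE fixes.
transformers>=4.48.0,<=4.49
```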
OK, testing my last suggestion leads to a big conflict. Pip is saying this: The conflict is caused by: Looking at optimum, its dependencies are kind of messed up. The 1.18.1 release notes even say "Enable transformers v4.42.0": https://github.com/huggingface/optimum-intel/releases/tag/v1.18.1 But they did not update setup.py to reflect this. They have everything right in the v1.22 setup.py. However, the package that comes from PyPI says optimum 1.22.0 depends on transformers<4.45.0 and >=4.29. I don't know how that can be. It is possible to fix the original problem (1.17.0) with sed: sed -i 's/self._supports_cache_class/False/' Seems like upstream optimum needs to sort out its dependencies.
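The comment doesn't show which file the sed command targets. A hypothetical full invocation might look like the sketch below; the module path is my assumption (in optimum-intel, OVModelForCausalLM lives in the modeling_decoder module), not something stated in the thread:

```sh
# Hypothetical: locate optimum-intel's installed decoder module and patch the
# attribute reference in place. The module path is an assumption.
TARGET="$(python -c 'import optimum.intel.openvino.modeling_decoder as m; print(m.__file__)')"
sed -i 's/self._supports_cache_class/False/' "$TARGET"
```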
@RH-steve-grubb we will enforce using a newer version of transformers without the vulnerability. With that, we will also drop the old Python node demo with the seq2seq use case. It is based on an old optimum-intel fork which is not in sync with the latest transformers. LLM models are now supported via the OpenAI API, so that old demo is obsolete. The remaining Python demos will be stable diffusion and CLIP. Stable diffusion will be replaced by the image generation endpoint in the next release, 2025.2.
Describe the bug
There's still one more issue caused by the transformers upgrade aimed at the 2025.1 release. If you run a test program designed to confirm compatibility between the transformers library and the Intel-optimized optimum.intel.openvino, you get a traceback:
The _supports_cache_class attribute was introduced recently (transformers 4.42.x), and the Optimum-Intel (OVModelForCausalLM) class hasn't implemented support for the latest caching API introduced by transformers. Upstream noticed this and added support in the optimum 1.18.1 release.
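For a quick check of whether the installed optimum-intel class defines the attribute at all, a minimal probe (my addition, not from the issue) would be:

```python
# Probe whether OVModelForCausalLM carries the flag that newer transformers
# generation code reads. On optimum-intel 1.17.0 this is expected to print False.
from optimum.intel.openvino import OVModelForCausalLM

print(hasattr(OVModelForCausalLM, "_supports_cache_class"))
```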
So, the requirements should be optimum[diffusers]==1.18.1. Would upgrading optimum cause any other problems?
To Reproduce
Run the following program in the image after installing the Python modules from demos/python_demos/requirements.txt.
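The issue as extracted does not include the program itself. The following is a minimal sketch of the kind of compatibility check described, with a placeholder model id (not necessarily the one the reporter used):

```python
# Minimal compatibility check between transformers and optimum.intel.openvino.
# The model id is a placeholder; any causal LM that exports to OpenVINO works.
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# generate() is where newer transformers touches _supports_cache_class,
# which is what triggers the traceback described above.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```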
Expected behavior
The program should generate output, and it does with optimum==1.18.1.
Configuration
OVMS 2025.1