Core dump when loading TFLite models with quantization in TensorFlow
Moderate severity · GitHub Reviewed
Published May 17, 2022 in tensorflow/tensorflow · Updated Jan 30, 2023

Published by the National Vulnerability Database: May 21, 2022
Published to the GitHub Advisory Database: May 24, 2022
Reviewed: May 24, 2022
Last updated: Jan 30, 2023

Description
Impact
Certain TFLite models created with the TFLite model converter would crash when loaded in the TFLite interpreter. The culprit is that during quantization the scale of values could be greater than 1, but the code always assumed sub-unit scaling. Thus, since the code called QuantizeMultiplierSmallerThanOneExp, the TFLITE_CHECK_LT assertion would trigger and abort the process.
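To illustrate the failure mode, below is a minimal, self-contained C++ sketch of a "smaller than one" multiplier quantization guarded by a CHECK_LT-style assertion. This is not the actual TFLite source; the names CHECK_LT and QuantizeMultiplierSmallerThanOneExpSketch are illustrative stand-ins. A sub-unit scale such as 0.5 passes, while a scale of 1.5, which the converter could produce, trips the check and aborts the process.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Simplified stand-in for a TFLITE_CHECK_LT-style assertion: it aborts the
// process when the condition does not hold.
#define CHECK_LT(a, b)                                         \
  do {                                                         \
    if (!((a) < (b))) {                                        \
      std::fprintf(stderr, "CHECK failed: %s < %s\n", #a, #b); \
      std::abort();                                            \
    }                                                          \
  } while (0)

// Sketch of a "smaller than one" multiplier quantization: it only accepts
// real multipliers strictly below 1, so any scale >= 1 aborts the process.
void QuantizeMultiplierSmallerThanOneExpSketch(double multiplier,
                                               int32_t* quantized,
                                               int* left_shift) {
  CHECK_LT(multiplier, 1.0);  // a quantization scale >= 1 aborts here
  int exponent = 0;
  const double fraction = std::frexp(multiplier, &exponent);
  // Represent the multiplier as a Q31 fixed-point fraction plus an exponent.
  *quantized = static_cast<int32_t>(std::round(fraction * (1ll << 31)));
  *left_shift = exponent;  // always <= 0 for sub-unit multipliers
}

int main() {
  int32_t q = 0;
  int shift = 0;
  QuantizeMultiplierSmallerThanOneExpSketch(0.5, &q, &shift);  // fine
  std::printf("q=%d shift=%d\n", q, shift);
  QuantizeMultiplierSmallerThanOneExpSketch(1.5, &q, &shift);  // aborts
  return 0;
}
```

Because the assertion aborts rather than returning an error, loading a single such model is enough to bring down the process hosting the interpreter.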
Patches
We have patched the issue in GitHub commit a989426ee1346693cc015792f11d715f6944f2b8.
The fix will be included in TensorFlow 2.9.0. We will also cherry-pick this commit onto TensorFlow 2.8.1, TensorFlow 2.7.2, and TensorFlow 2.6.4, as these versions are also affected and still within the supported range.
For more information
Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.
Attribution
This vulnerability has been reported externally via a GitHub issue.