Commit b3f0b0e — Fixed some package errors
1 parent 95ae6af

File tree: 2 files changed, +3 −1 lines changed

locallab/model_manager.py (2 additions, 0 deletions)

@@ -304,6 +304,8 @@ async def load_model(self, model_id: str) -> bool:
             trust_remote_code=True,
             token=hf_token
         )
+        # Get quantization configuration
+        config = self._get_quantization_config()

         # Determine if we should use CPU offloading
         use_cpu_offload = not torch.cuda.is_available() or torch.cuda.get_device_properties(0).total_memory < 4 * 1024 * 1024 * 1024  # Less than 4GB VRAM
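The CPU-offload heuristic in the diff can be restated as a small standalone function. This is an illustrative sketch, not code from the commit: the `torch.cuda` calls are replaced by plain parameters (`cuda_available`, `total_vram_bytes`) so the logic can be read and exercised without a GPU, and the function name `should_use_cpu_offload` is hypothetical.

```python
def should_use_cpu_offload(
    cuda_available: bool,
    total_vram_bytes: int,
    min_vram_bytes: int = 4 * 1024 * 1024 * 1024,  # 4 GB threshold, as in the diff
) -> bool:
    """Offload to CPU when no CUDA device exists or its VRAM is below the threshold."""
    return (not cuda_available) or total_vram_bytes < min_vram_bytes
```

In the actual code, `cuda_available` corresponds to `torch.cuda.is_available()` and `total_vram_bytes` to `torch.cuda.get_device_properties(0).total_memory`.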

setup.py (1 addition, 1 deletion)

@@ -5,7 +5,7 @@

 setup(
     name="locallab",
-    version="0.4.26",
+    version="0.4.27",
     packages=find_packages(include=["locallab", "locallab.*"]),
     install_requires=[
         "fastapi>=0.95.0,<1.0.0",
