
Conversation

@Nidhogg-lyz
Contributor

Fix the TypeError "got multiple values for keyword argument 'past_key_values'" raised when past_key_values is explicitly provided in the base model's forward call under prefix tuning. Resolves #1938.
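
For context, the failure mode boils down to Python's duplicate-keyword rule; a minimal sketch of the collision (names are illustrative, not PEFT's actual internals):

def base_forward(input_ids, past_key_values=None, **kwargs):
    return input_ids

def peft_forward(input_ids, **kwargs):
    prefix_kv = "virtual-token cache computed by prefix tuning"
    # If the caller already supplied past_key_values (even as None), it is
    # still sitting in kwargs, so passing it explicitly here raises:
    # TypeError: base_forward() got multiple values for keyword argument 'past_key_values'
    return base_forward(input_ids, past_key_values=prefix_kv, **kwargs)

peft_forward("ids", past_key_values=None)  # reproduces the TypeError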

@Nidhogg-lyz
Contributor Author

I'm not sure whether I added the unit test file test_past_kv.py in the correct place, so if there's any problem please let me know.
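
Roughly, such a test needs a forward call where past_key_values is supplied explicitly; a minimal sketch of that shape (the tiny test model and the exact call are placeholders, not necessarily what the PR's test file contains):

import torch
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, get_peft_model

def test_forward_with_explicit_past_key_values():
    base = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gpt2")
    config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=10)
    peft_model = get_peft_model(base, config)
    # Before the fix, this raised "got multiple values for keyword argument
    # 'past_key_values'"; with the fix it should simply run.
    peft_model(input_ids=torch.tensor([[1, 2, 3]]), past_key_values=None)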

@Nidhogg-lyz marked this pull request as ready for review on July 23, 2024, 03:33.
@BenjaminBossan
Member

left a comment:

Thanks for providing this fix so quickly. I have a few comments, please take a look.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@BenjaminBossan
Member

Thanks for the updates @Nidhogg-lyz, could you please run make style?

@Nidhogg-lyz
Contributor Author

@BenjaminBossan I've already run make style and make quality. The errors raised don't seem related to the code I modified; should I be editing other people's code?

~nidhogg% make style
ruff check --fix src tests examples docs scripts docker
src/peft/tuners/boft/layer.py:94:20: F811 Redefinition of unused `fbd_cuda` from line 87
   |
92 |             )
93 |             # extra_cuda_cflags = ['-std=c++14', '-ccbin=$$(which gcc-7)']) # cuda10.2 is not compatible with gcc9. Specify gcc 7
94 |             import fbd_cuda
   |                    ^^^^^^^^ F811
95 |     except Exception as e:
96 |         warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")
   |
   = help: Remove definition: `fbd_cuda`

src/peft/tuners/lora/model.py:556:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
554 |             )
555 | 
556 |         if target_module_types[0] == str:
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
557 |             new_target_modules = "|".join(f"({self.peft_config[adapter].target_modules})" for adapter in adapters)
558 |         elif target_module_types[0] == set:
    |

src/peft/tuners/lora/model.py:558:14: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
556 |         if target_module_types[0] == str:
557 |             new_target_modules = "|".join(f"({self.peft_config[adapter].target_modules})" for adapter in adapters)
558 |         elif target_module_types[0] == set:
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
559 |             new_target_modules = reduce(
560 |                 operator.or_, (self.peft_config[adapter].target_modules for adapter in adapters)
    |

src/peft/tuners/tuners_utils.py:574:9: F811 Redefinition of unused `active_adapter` from line 509
    |
573 |     @property
574 |     def active_adapter(self) -> str | list[str]:
    |         ^^^^^^^^^^^^^^ F811
575 |         # use a property to ensure that active_adapter is not set directly, instead use the set_adapter method
576 |         return self._active_adapter
    |
    = help: Remove definition: `active_adapter`

src/peft/tuners/xlora/model.py:49:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
   |
47 |     for module in base.modules():
48 |         # Check the exact type because classes like OPTLearnedPositionalEmbedding inherit from nn.Embedding
49 |         if type(module) == lora.Linear:
   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
50 |             device = module.lora_A[next(iter(module.lora_A))].weight.device
51 |             new_layer = XLoraLinearLayer(
   |

src/peft/tuners/xlora/model.py:61:14: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
   |
59 |             module.forward = new_layer.forward  # type: ignore[method-assign]
60 |             total_swapped += 1
61 |         elif type(module) == lora.Embedding:
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
62 |             device = module.lora_embedding_A[next(iter(module.lora_embedding_A))].device
63 |             new_layer = XLoraEmbeddingLayer(
   |

src/peft/tuners/xlora/model.py:73:14: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
   |
71 |             module.forward = new_layer.forward  # type: ignore[method-assign]
72 |             total_swapped += 1
73 |         elif type(module) == lora.Conv2d:
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
74 |             device = module.lora_A[next(iter(module.lora_A))].weight.device
75 |             new_layer = XLoraConv2dLayer(
   |

tests/test_mixed.py:125:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
123 |         tuner_layers = [mod for mod in peft_model_01.modules() if isinstance(mod, BaseTunerLayer)]
124 |         tuner_types = {type(tuner_layer) for tuner_layer in tuner_layers}
125 |         if type(config0) == type(config1):
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
126 |             assert len(tuner_types) == 1
127 |         else:
    |

tests/test_mixed.py:150:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
148 |         tuner_layers = [mod for mod in peft_model_10.modules() if isinstance(mod, BaseTunerLayer)]
149 |         tuner_types = {type(tuner_layer) for tuner_layer in tuner_layers}
150 |         if type(config0) == type(config1):
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
151 |             assert len(tuner_types) == 1
152 |         else:
    |

tests/test_mixed.py:169:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
167 |         tuner_layers = [mod for mod in peft_model_10.modules() if isinstance(mod, BaseTunerLayer)]
168 |         tuner_types = {type(tuner_layer) for tuner_layer in tuner_layers}
169 |         if type(config0) == type(config1):
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
170 |             assert len(tuner_types) == 1
171 |         else:
    |

tests/test_tuners_utils.py:294:20: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
    |
292 |         for name, actual_module in actual_model.named_modules():
293 |             expected_module = expected_model_module_dict[name]
294 |             assert type(actual_module) == type(expected_module)
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E721
295 | 
296 |     def test_maybe_include_all_linear_layers_ia3_loha(self):
    |

Found 11 errors.
make: *** [style] Error 1

Running make quality (ruff check src tests examples docs scripts docker) gives the identical output: the same 11 errors, ending in make: *** [quality] Error 1.
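
For readers unfamiliar with E721: ruff flags exact-type comparisons written with ==, because == can be overridden and either `is` (exact class identity) or isinstance() (subclass-aware) states the intent more precisely. A toy illustration:

class Base: ...
class Child(Base): ...

obj = Child()
print(type(obj) == Child)     # True, but flagged by E721
print(type(obj) is Child)     # identity check on the class object; E721-clean
print(isinstance(obj, Base))  # True as well, since subclasses also match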

@BenjaminBossan
Member

What is your ruff version? Try 0.4.10.
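
(If you need to match versions locally, pinning is a one-liner: pip install ruff==0.4.10.)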

@Nidhogg-lyz
Contributor Author

Done! My original ruff version was 0.5.4, and that was indeed the cause.

@BenjaminBossan
Member

BenjaminBossan commented Jul 26, 2024

The errors for MacOS on CI seem to be unrelated to this PR:

RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

I will investigate further and come back to this PR afterwards.

Edit: Sorry, I was mistaken, it's not unrelated. @Nidhogg-lyz could you please remove the usage of float16 from the test?
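
The error means PyTorch's CPU LayerNorm kernel does not support float16, and the MacOS runners are CPU-only. The requested change presumably amounts to something like this (illustrative, not the exact diff):

# Before: loading the test model in half precision crashes on CPU-only runners.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
# After: keep the default float32, which works on every backend.
model = AutoModelForCausalLM.from_pretrained(model_id)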

@Nidhogg-lyz
Contributor Author

Done!

@BenjaminBossan
Member

left a comment:

Thanks a lot for the fix.

@BenjaminBossan merged commit 296fbcd into huggingface:main on Jul 29, 2024; 14 checks passed.
Guy-Bilitski pushed a commit to Guy-Bilitski/peft that referenced this pull request May 13, 2025
There was an error with prefix tuning when some models, like Llava, passed
past_key_values explicitly, even if it was None, because that argument then
ended up being passed twice (once explicitly, once via kwargs). This is now
fixed.
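
The conventional remedy, sketched below, is to drop the caller's copy from kwargs before injecting the prefix cache explicitly (this shows the shape of the fix, not necessarily the merged diff; get_prompt is PEFT's helper that builds the prefix key-value cache):

# Discard any caller-supplied past_key_values (often just None for these
# models) so the keyword is only passed once.
kwargs.pop("past_key_values", None)
past_key_values = self.get_prompt(batch_size)
outputs = self.base_model(input_ids=input_ids, past_key_values=past_key_values, **kwargs)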

Successfully merging this pull request may close these issues.

Check if past_key_values is provided when using prefix_tuning in peft_model