Reminder
- I have read the README and searched the existing issues.
Reproduction
It appears the `additional_target` parameter is not applied when unsloth is used: the number of trainable parameters only includes the enabled LoRA layers, not the embedding layer. As far as I can tell, unsloth does support optimizing the embedding layers too.
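For anyone who wants to check this locally, here is a minimal sketch (plain PyTorch, not LLaMA-Factory code; the helper name is mine) that lists which parameters end up trainable after the adapter is attached, so you can see whether the embedding layers are included:

```python
# Minimal check: count trainable parameters of the PEFT-wrapped model and flag
# the embedding-related ones. `model` is whatever init_adapter returned.
def report_trainable(model) -> None:
    trainable, total = 0, 0
    for name, param in model.named_parameters():
        total += param.numel()
        if param.requires_grad:
            trainable += param.numel()
            if "embed_tokens" in name or "lm_head" in name:
                print(f"embedding-related trainable param: {name}")
    print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.4f}%)")
```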
The relevant code is in LLaMA-Factory/src/llmtuner/model/adapter.py, lines 144 to 157 at commit 7f6c248.
Maybe this is sufficient?
```diff
--- a/src/llmtuner/model/adapter.py
+++ b/src/llmtuner/model/adapter.py
@@ -145,6 +145,8 @@ def init_adapter(
             from unsloth import FastLanguageModel  # type: ignore
             unsloth_peft_kwargs = {"model": model, "max_seq_length": model_args.model_max_length}
+            if finetuning_args.additional_target:
+                unsloth_peft_kwargs["modules_to_save"] = finetuning_args.additional_target
             model = FastLanguageModel.get_peft_model(**peft_kwargs, **unsloth_peft_kwargs)
         else:
             lora_config = LoraConfig(
```
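For context, this mirrors how unsloth's own continued-pretraining examples make the embedding layers trainable via `modules_to_save`. A standalone sketch of what the patched branch would do (checkpoint name, LoRA rank, and target modules are illustrative, not LLaMA-Factory defaults):

```python
from unsloth import FastLanguageModel

# Illustrative values; in LLaMA-Factory these would come from model_args /
# finetuning_args (e.g. --additional_target embed_tokens,lm_head).
max_seq_length = 4096
additional_target = ["embed_tokens", "lm_head"]

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b-bnb-4bit",  # illustrative checkpoint
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

peft_kwargs = {"r": 16, "lora_alpha": 16, "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]}
unsloth_peft_kwargs = {"model": model, "max_seq_length": max_seq_length}
if additional_target:
    # This is the line the patch adds: forward additional_target as modules_to_save.
    unsloth_peft_kwargs["modules_to_save"] = additional_target

model = FastLanguageModel.get_peft_model(**peft_kwargs, **unsloth_peft_kwargs)
```

Since `modules_to_save` is the same keyword the plain `LoraConfig` path already uses for `additional_target`, forwarding it here seems like the right hook even if unsloth handles the embedding layers specially internally.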
Expected behavior
No response
System Info
No response
Others
No response