
--additional_target not supported with unsloth? #3200

Closed
@kno10

Description

Reminder

  • I have read the README and searched the existing issues.

Reproduction

It appears the additional_target parameter is not applied when unsloth is used: the reported number of trainable parameters includes only the enabled LoRA layers, not the embedding layer.
As far as I can tell, unsloth does support training the embedding layers as well.

if model_args.use_unsloth:
    from unsloth import FastLanguageModel  # type: ignore

    unsloth_peft_kwargs = {"model": model, "max_seq_length": model_args.model_max_length}
    model = FastLanguageModel.get_peft_model(**peft_kwargs, **unsloth_peft_kwargs)
else:
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        inference_mode=False,
        modules_to_save=finetuning_args.additional_target,
        use_dora=finetuning_args.use_dora,
        **peft_kwargs,
    )
    model = get_peft_model(model, lora_config)

Maybe this patch is sufficient:

--- a/src/llmtuner/model/adapter.py
+++ b/src/llmtuner/model/adapter.py
@@ -145,6 +145,8 @@ def init_adapter(
                 from unsloth import FastLanguageModel  # type: ignore
 
                 unsloth_peft_kwargs = {"model": model, "max_seq_length": model_args.model_max_length}
+                if finetuning_args.additional_target:
+                    unsloth_peft_kwargs["modules_to_save"] = finetuning_args.additional_target
                 model = FastLanguageModel.get_peft_model(**peft_kwargs, **unsloth_peft_kwargs)
             else:
                 lora_config = LoraConfig(

Expected behavior

No response

System Info

No response

Others

No response



Labels

solved: This problem has been already solved
