
[Fix] lm_head lora save #4106

Open
Datta0 wants to merge 1 commit into unslothai:main from Datta0:lora_save

Conversation

@Datta0 (Collaborator) commented Feb 25, 2026

Fixes: #4098
lm_head should ideally be in modules_to_save, not target_modules.
This is confirmed to work by @marcandrelarochelle, the OP of the issue.

Needs: unslothai/unsloth-zoo#515
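For context, the distinction the fix relies on can be sketched as follows. This is an illustrative configuration only, not the actual unsloth code: the field names follow peft's `LoraConfig`, and the module lists are placeholders.

```python
# Illustrative LoRA configuration sketch (not the actual unsloth defaults).
# target_modules receive low-rank adapter matrices (the A @ B deltas), while
# modules_to_save are trained in full and written out whole on save. Because
# lm_head is often tied to the embeddings or resized, training and saving the
# full module is the safer choice.
lora_config_kwargs = dict(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # low-rank adapters
    modules_to_save=["lm_head"],  # full copy, trained and saved entirely
)

print("lm_head" in lora_config_kwargs["modules_to_save"])  # True
```

Saving then serializes the complete lm_head weights alongside the adapters, which is what the linked issue reports as broken when lm_head is treated as a LoRA target instead.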

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue where the lm_head module was not being correctly handled during LoRA model saving. By explicitly defining lm_head as an accepted module and removing the previous workaround, the change ensures that lm_head is properly treated as a module to be saved, leading to more robust and correct LoRA model configurations.

Highlights

  • LM Head Module Handling: The lm_head module has been explicitly added to the accepted_modules set, ensuring it is correctly recognized for LoRA configurations.
  • Simplified LoRA Configuration Logic: Removed the conditional logic within get_peft_model that previously handled lm_head by moving it from target_modules to modules_to_save, streamlining the configuration process.
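The removed workaround can be sketched as below. This is a hypothetical reconstruction for illustration: `split_lm_head` and its signature are made up here, not the actual `get_peft_model` internals.

```python
# Hypothetical sketch of the conditional logic this PR removes. Previously,
# if a caller listed "lm_head" under target_modules, it was silently moved
# into modules_to_save before the PEFT config was built.
def split_lm_head(target_modules, modules_to_save=None):
    """Move 'lm_head' from the LoRA adapter targets to the fully-saved modules."""
    modules_to_save = list(modules_to_save or [])
    if "lm_head" in target_modules:
        target_modules = [m for m in target_modules if m != "lm_head"]
        if "lm_head" not in modules_to_save:
            modules_to_save.append("lm_head")
    return target_modules, modules_to_save

targets, to_save = split_lm_head(["q_proj", "v_proj", "lm_head"])
print(targets, to_save)  # ['q_proj', 'v_proj'] ['lm_head']
```

After this PR the silent rewrite is gone; per the PR description, lm_head is instead meant to be passed as a modules_to_save entry directly.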


@gemini-code-assist (bot) left a comment
Code Review

This pull request aims to fix the saving mechanism for lm_head when using LoRA. The changes involve adding lm_head to the accepted_modules list, allowing it to be directly specified as a target_module, and removing the previous logic that would automatically move lm_head from target_modules to modules_to_save. However, this appears to contradict the pull request description, which states that lm_head should ideally be a module_to_save and not a target_module. Clarification on the intended design for lm_head's role in PEFT configuration is needed to ensure consistency between the code's behavior and its documentation.



Development

Successfully merging this pull request may close these issues.

[Bug] lm_head is not trained using LoRA and merging is broken
