fix: use env-only SM100 workaround for vLLM PDL/MMA path #4035

Open
danielhanchen wants to merge 1 commit into main from fix/sm100-vllm-env-only

Conversation

@danielhanchen
Contributor

Summary

On SM100 (B200/B100), switch the vLLM workaround in fix_vllm_pdl_blackwell() to env vars only and remove runtime monkey-patching of vLLM internals.

This keeps vLLM enabled while avoiding intrusive patching behavior.

What changed

  • File: unsloth/import_fixes.py
  • In fix_vllm_pdl_blackwell():
    • Removed dynamic patching of:
      • vllm.lora.ops.triton_ops.utils.supports_pdl
      • vllm.lora.ops.triton_ops.lora_expand_op.supports_pdl
      • vllm.lora.ops.triton_ops.lora_shrink_op.supports_pdl
      • vllm.lora.ops.triton_ops.fused_moe_lora_op.supports_pdl
    • Added env-only mitigation via setdefault:
      • VLLM_LORA_DISABLE_PDL=1
      • TRITON_DISABLE_PDL=1
      • VLLM_USE_FBGEMM=0
    • Kept behavior scoped to Blackwell (SM100) detection.
    • Added inline comment documenting the observed MMA failure string on this path:
      • Arch conditional MMA instruction used without targeting appropriate compute capability
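
The resulting shape of the env-only mitigation can be sketched as follows. This is an illustrative sketch, not the actual implementation: the helper name apply_sm100_env_workaround and the explicit compute-capability argument are assumptions (in practice detection would come from something like torch.cuda.get_device_capability()).

```python
import os

def apply_sm100_env_workaround(compute_capability, env=os.environ):
    """Illustrative sketch: set the env-only mitigation iff the GPU is SM100."""
    major, _minor = compute_capability  # e.g. from torch.cuda.get_device_capability()
    if major != 10:  # SM100 (Blackwell B100/B200) reports compute capability 10.x
        return False
    # setdefault: a user-provided value always wins over these defaults.
    env.setdefault("VLLM_LORA_DISABLE_PDL", "1")
    env.setdefault("TRITON_DISABLE_PDL", "1")
    env.setdefault("VLLM_USE_FBGEMM", "0")
    return True
```

Using setdefault (rather than assignment) is what makes the mitigation user-overridable: any value exported before import is left untouched.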

Why

  • We need vLLM to remain available.
  • We want a less intrusive mitigation than monkey-patching internal vLLM functions.
  • Env vars are the lowest-risk control surface and can be user-overridden.

Validation

1) Import-time env probe on B200

Log: temp/envpr_clean/import_probe.log

  • Before import unsloth:
    • VLLM_LORA_DISABLE_PDL=None
    • TRITON_DISABLE_PDL=None
    • VLLM_USE_FBGEMM=None
  • After import unsloth:
    • VLLM_LORA_DISABLE_PDL='1'
    • TRITON_DISABLE_PDL='1'
    • VLLM_USE_FBGEMM='0'

Re-check after final comment cleanup:

  • Log: temp/envpr_clean/import_probe_post_comment_fix.log
  • Same env results.
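
The before/after probe can be reproduced with a small harness along these lines. probe_env is an illustrative helper (not part of the patch); in the actual probe the importer is `import unsloth`.

```python
import os

PROBE_VARS = ("VLLM_LORA_DISABLE_PDL", "TRITON_DISABLE_PDL", "VLLM_USE_FBGEMM")

def probe_env(importer, names=PROBE_VARS):
    """Snapshot the named env vars before and after running `importer`."""
    before = {n: os.environ.get(n) for n in names}
    importer()  # in practice: lambda: __import__("unsloth")
    after = {n: os.environ.get(n) for n in names}
    return before, after
```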

2) Actual training runs with the patch

Script: temp/trunc_call_training_probe_forced.py

  • transformers==5.0.0

    • Log: temp/envpr_clean/train_tf500.log
    • RESULT_JSON: train_runtime=6.1545, train_loss=1.863374924659729
  • transformers==4.57.6

    • Log: temp/envpr_clean/train_tf4576.log
    • RESULT_JSON: train_runtime=5.9139, train_loss=1.863374924659729

3) Error-string scan

Searched in temp/envpr_clean/*.log:

  • Arch conditional MMA
  • CUTE_INVALID_CONTROL_PATH
  • Trying to use tma

Result: no matches.
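
The scan can be reproduced with a few lines of Python; scan_logs is an illustrative helper, not part of the patch.

```python
from pathlib import Path

ERROR_STRINGS = (
    "Arch conditional MMA",
    "CUTE_INVALID_CONTROL_PATH",
    "Trying to use tma",
)

def scan_logs(log_dir, needles=ERROR_STRINGS):
    """Return (filename, needle) pairs for every error string found in *.log files."""
    hits = []
    for log in sorted(Path(log_dir).glob("*.log")):
        text = log.read_text(errors="replace")
        hits.extend((log.name, n) for n in needles if n in text)
    return hits
```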

4) Transformers initialization audit (other inits)

Requested check: whether other inits should be upcast to float32.

  • transformers==5.0.0

    • File: transformers/initialization.py
    • Observation: init functions are thin wrappers around torch init primitives, guarded by _is_hf_initialized.
    • trunc_normal_ delegates to torch.nn.init.trunc_normal_ directly.
    • No extra float32-cast path in this file.
  • transformers==4.57.6

    • No centralized transformers.initialization module.
    • Relevant model-local init helpers found in:
      • transformers/models/phi4_multimodal/modeling_phi4_multimodal.py
      • transformers/models/siglip/modeling_siglip.py
      • transformers/models/siglip2/modeling_siglip2.py
    • These use local _trunc_normal_ and variance_scaling_ helpers that operate in the tensor's dtype.
    • transformers/models/vjepa2/modeling_vjepa2.py already includes an explicit float32 upcast helper (trunc_normal_f32_) before cast back.

Conclusion: no additional global overload needed for other inits in this change.
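
For reference, the float32 upcast pattern used by vjepa2 looks roughly like this. This is a sketch assuming PyTorch; trunc_normal_f32_sketch is a hypothetical name, not the actual helper.

```python
import torch

def trunc_normal_f32_sketch(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0):
    """Sample a truncated normal in float32, then cast back to the tensor's dtype.

    Mirrors the upcast-then-downcast idea of vjepa2's trunc_normal_f32_, so that
    low-precision params (e.g. bfloat16) are initialized with float32 fidelity.
    """
    with torch.no_grad():
        f32 = torch.empty_like(tensor, dtype=torch.float32)
        torch.nn.init.trunc_normal_(f32, mean=mean, std=std, a=a, b=b)
        tensor.copy_(f32.to(tensor.dtype))
    return tensor
```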

LoRA impact

  • This change only affects SM100 env defaults that control vLLM LoRA PDL/FBGEMM paths.
  • Core Unsloth LoRA training path is unchanged.
  • Users can still override env values before import if needed.
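
Because the fix uses os.environ.setdefault, an override simply needs to be set before import, for example:

```python
import os

# Must run before `import unsloth`: the workaround uses os.environ.setdefault,
# so any pre-existing value takes precedence over Unsloth's defaults.
os.environ["VLLM_LORA_DISABLE_PDL"] = "0"  # e.g. opt back into PDL on SM100

# What the import-time fix then effectively does (a no-op here):
os.environ.setdefault("VLLM_LORA_DISABLE_PDL", "1")
assert os.environ["VLLM_LORA_DISABLE_PDL"] == "0"  # user value survives
```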


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: bf31695347


Comment on lines +1035 to +1037
os.environ.setdefault("VLLM_LORA_DISABLE_PDL", "1")
os.environ.setdefault("TRITON_DISABLE_PDL", "1")
os.environ.setdefault("VLLM_USE_FBGEMM", "0")


P1 Badge Apply SM100 workaround before importing vLLM

This change relies only on environment variables (VLLM_LORA_DISABLE_PDL, TRITON_DISABLE_PDL, VLLM_USE_FBGEMM) but unsloth currently imports vLLM earlier via fix_vllm_guided_decoding_params() in the normal init flow (unsloth/__init__.py calls it before fix_vllm_pdl_blackwell), so these setdefault calls can run too late to affect vLLM’s initialization path on SM100. In that common import order, the previous post-import monkey-patch safety net is gone and users can still hit the Blackwell PDL/MMA crash the workaround is meant to prevent.
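
If this import-order concern holds, one way to address it is to reorder the init flow so the env-only fix runs before anything that can import vllm. A self-contained sketch of that ordering (the function bodies are stand-ins for illustration, not the real unsloth code):

```python
import os

def fix_vllm_pdl_blackwell():
    # Env-only workaround: must execute before the first `import vllm`.
    os.environ.setdefault("VLLM_LORA_DISABLE_PDL", "1")
    os.environ.setdefault("TRITON_DISABLE_PDL", "1")
    os.environ.setdefault("VLLM_USE_FBGEMM", "0")

def fix_vllm_guided_decoding_params():
    # Stand-in for the earlier fix that actually imports vLLM; by the time
    # this runs, vLLM initialization sees the env defaults set above.
    pass  # import vllm ...

# Reordered init flow per the review comment: env vars first, vLLM import second.
fix_vllm_pdl_blackwell()
fix_vllm_guided_decoding_params()
```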


@gemini-code-assist
Contributor

Summary of Changes


This pull request refines the vLLM workaround for SM100 GPUs by transitioning from an intrusive monkey-patching approach to a more robust and less disruptive method utilizing environment variables. This change ensures continued compatibility and stability for vLLM on Blackwell architectures while minimizing modifications to vLLM's internal code, providing a cleaner and more maintainable solution.

Highlights

  • Refactored vLLM Workaround: The SM100 (Blackwell) workaround for vLLM's PDL/MMA path was refactored to use environment variables instead of intrusive runtime monkey-patching of vLLM internals.
  • Environment Variable Configuration: The fix_vllm_pdl_blackwell() function now sets VLLM_LORA_DISABLE_PDL=1, TRITON_DISABLE_PDL=1, and VLLM_USE_FBGEMM=0 using os.environ.setdefault when an SM100 GPU is detected.
  • Removed Dynamic Patching: Removed the dynamic patching logic that previously modified supports_pdl functions within vllm.lora.ops.triton_ops.utils, lora_expand_op, lora_shrink_op, and fused_moe_lora_op.


Changelog
  • unsloth/import_fixes.py
    • Removed helper functions and logic for dynamically patching vllm.lora.ops.triton_ops.utils.supports_pdl and related consumer modules.
    • Introduced os.environ.setdefault calls to set VLLM_LORA_DISABLE_PDL, TRITON_DISABLE_PDL, and VLLM_USE_FBGEMM to disable problematic vLLM features on SM100.
    • Updated comments to reflect the change from monkey-patching to environment variable usage.
    • Modified logging for the vLLM version check to debug level and adjusted the message.
    • Added an inline comment explaining the observed MMA failure string.
Activity
  • An import-time environment probe on B200 confirmed that the specified environment variables were correctly set after import unsloth.
  • Successful training runs were conducted with transformers==5.0.0 and transformers==4.57.6, demonstrating the patch's effectiveness.
  • A scan for specific error strings (Arch conditional MMA, CUTE_INVALID_CONTROL_PATH, Trying to use tma) in logs yielded no matches, indicating the workaround prevented the expected issues.
  • An audit of Transformers initialization functions concluded that no additional global float32 upcasting is required for other initializers.

@gemini-code-assist bot left a comment

Code Review

This pull request refactors the vLLM workaround for SM100 GPUs by replacing intrusive monkey-patching with setting environment variables. This is a significant improvement for maintainability and robustness, as it uses official toggles and allows users to override the settings. The changes are well-documented and the logic is sound. I have one minor suggestion to improve the readability of a log message.

Comment on lines +1039 to +1042
logger.info(
    f"Unsloth: Applied SM100 ({sm100_gpu_name}) vLLM workaround via env vars: "
    f"VLLM_LORA_DISABLE_PDL=1, TRITON_DISABLE_PDL=1, VLLM_USE_FBGEMM=0"
)

medium

For better readability in logs, consider formatting this long log message across multiple lines. A single, very long line can be hard to read in terminal outputs and may get truncated.

Suggested change

Before:

logger.info(
    f"Unsloth: Applied SM100 ({sm100_gpu_name}) vLLM workaround via env vars: "
    f"VLLM_LORA_DISABLE_PDL=1, TRITON_DISABLE_PDL=1, VLLM_USE_FBGEMM=0"
)

After:

logger.info(
    f"Unsloth: Applied SM100 ({sm100_gpu_name}) vLLM workaround via env vars:"
    f"\n - VLLM_LORA_DISABLE_PDL=1\n - TRITON_DISABLE_PDL=1\n - VLLM_USE_FBGEMM=0"
)
