
Conversation

@pathfinder-pf
Collaborator

No description provided.

@gemini-code-assist

Summary of Changes

Hello @pathfinder-pf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates tunix v1 by performing a series of foundational updates across the project. It modernizes dependency management, significantly refactors the scheduler's threading model for improved robustness, and adopts flax.nnx to enhance the structure and state management of various neural network models. Additionally, it improves asynchronous operations, refines sampling parameter handling, and expands test coverage for cache behavior.

Highlights

  • Dependency Updates: The pyproject.toml file now requires flax>=0.12.0 (previously ~=0.10.7) and relaxes the pinned jax[tpu]~=0.6.2 requirement to an unpinned jax[tpu].
  • Scheduler Refactoring: The scheduler initialization and management logic has been significantly refactored. The run_scheduler_thread function has been replaced by run_scheduler_loop_thread_after_create and scheduler_loop_after_create, streamlining how scheduler threads are launched and their information is retrieved, moving away from pipe-based communication.
  • Flax NNX Integration: The codebase now integrates flax.nnx for managing model layers and attention backend metadata. This involves importing flax.nnx, using nnx.data() for dynamic state in FlashAttention, and wrapping model layers in nnx.List() within Llama, QWen, Qwen2, Qwen3, and QWen3Moe models, enhancing module composition and state management.
  • Asynchronous Event Loop Handling: The engine.py file now includes robust handling for asynchronous event loops, specifically addressing RuntimeError when asyncio.get_event_loop() is called without an active loop, by creating a new event loop if necessary.
  • Sampling Parameter Enhancements: The sampling_batch_info.py file has been updated to correctly handle sampling_seeds during precompilation, including padding with a default seed and incorporating them into the SamplingMetadata.
  • New Cache Miss Test Cases: Two new test cases, test_cache_miss_prefill and test_cache_miss_decode, have been added to test_features.py to verify the cache_miss_count in the response metadata during both prefill and decode phases of generation.
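Based on the versions quoted in the highlights, the dependency change might look like the following pyproject.toml fragment (a sketch; the actual file will contain many other entries and possibly different extras):

```toml
# pyproject.toml (sketch of the changed entries only)
[project]
dependencies = [
    "flax>=0.12.0",   # previously flax~=0.10.7
    "jax[tpu]",       # previously pinned as jax[tpu]~=0.6.2
]
```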
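The scheduler refactoring described above replaces pipe-based communication with a direct handoff when the scheduler thread is launched. A minimal sketch of that pattern, assuming a queue-based handoff: the function names follow the PR description, but the bodies and the shape of `scheduler_info` are illustrative, not the project's actual implementation.

```python
import queue
import threading

def scheduler_loop_after_create(scheduler_info, ready_queue):
    # Hand the startup info back to the launcher via an in-process queue
    # instead of a pipe, then (in the real code) enter the scheduling loop.
    ready_queue.put(scheduler_info)
    # ... the scheduler event loop would run here ...

def run_scheduler_loop_thread_after_create(scheduler_info):
    # Launch the scheduler loop on a daemon thread and block until it
    # reports that it is ready, returning both the thread and the info.
    ready_queue = queue.Queue()
    thread = threading.Thread(
        target=scheduler_loop_after_create,
        args=(scheduler_info, ready_queue),
        daemon=True,
    )
    thread.start()
    return thread, ready_queue.get(timeout=5)

thread, info = run_scheduler_loop_thread_after_create({"status": "ready"})
```

Compared with a pipe, an in-process queue avoids serialization and file-descriptor management when the scheduler runs in a thread of the same process.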
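The event-loop fallback in engine.py follows a common asyncio pattern: catch the RuntimeError raised when no loop exists in the current thread and create one. A sketch of that pattern (the helper name `get_or_create_event_loop` is hypothetical; the PR applies the pattern inline):

```python
import asyncio

def get_or_create_event_loop() -> asyncio.AbstractEventLoop:
    # Return the current thread's event loop, creating and registering
    # a new one if asyncio.get_event_loop() raises RuntimeError
    # (e.g. in a worker thread, or on newer Python versions).
    try:
        return asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

loop = get_or_create_event_loop()
```

This matters because `asyncio.get_event_loop()` no longer implicitly creates a loop outside a running coroutine on recent Python versions, so library code that may be called from arbitrary threads needs an explicit fallback.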
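The seed-padding change for precompilation can be sketched as follows. The function name and the `DEFAULT_SAMPLING_SEED` value are assumptions for illustration; the real logic lives in sampling_batch_info.py. The idea is that precompiled (traced) computations need fixed-shape inputs, so per-request seeds are padded out to the precompiled batch size with a default seed.

```python
DEFAULT_SAMPLING_SEED = 42  # assumed default; the actual value is defined by the project

def pad_sampling_seeds(seeds, padded_batch_size, default=DEFAULT_SAMPLING_SEED):
    # Pad the list of per-request seeds to the precompiled batch size so
    # the traced sampling computation always sees a fixed-shape array.
    return list(seeds) + [default] * (padded_batch_size - len(seeds))
```

The padded list would then be packed into the SamplingMetadata alongside the other per-batch sampling parameters.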

@pathfinder-pf changed the title from Feat/feat/integrate tunix v1 to feat/integrate tunix v1 on Oct 16, 2025
@pathfinder-pf force-pushed the feat/feat/integrate-tunix-v1 branch 2 times, most recently from ba2207f to 79db117 on October 17, 2025 08:32
@pathfinder-pf force-pushed the feat/feat/integrate-tunix-v1 branch from 79db117 to 7612226 on October 17, 2025 08:36
@pathfinder-pf merged commit 28da36b into sgl-project:main on Oct 20, 2025
4 checks passed
pathfinder-pf added a commit to primatrix/sglang-jax that referenced this pull request Nov 4, 2025
* add get_default_sampling_params definition

* Merge pull request #6 from primatrix/feat/align-sampling-for-tunix

align sampling param ability according to rfc

* add multinomial_with_seed for sampler and test_sampler.py (#12)

* update flax

fix duplicate register pytree and use nnx.data to wrap FlashAttentionMetadata

* extract scheduler thread

* add event loop

* fix duplicate params

* use server parameters

* add tree_flatten & tree_unflatten

* with mesh

---------

Co-authored-by: aolemila <aolemilaluo@gmail.com>
Co-authored-by: pathfinder-fp <aaaabbbbbb@163.com>
Co-authored-by: aolemila <aolemila@primatrix.ai>
Co-authored-by: pathfinder-fp <slackexplorer@gmail.com>