Conversation

cantonios (Collaborator)
Multihost JAX programs require the same JAX program to run across all hosts. To enforce this, we need to synchronize all input statistics, and repeat preprocessing if the stats have changed.

Also simplified the synchronization step for better efficiency by collecting all input statistics into a single JAX array.
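
The fix described above could be sketched as follows, with NumPy's elementwise max standing in for the cross-host `jax.lax.pmax` collective. All function and variable names here are illustrative, not the actual implementation:

```python
# Sketch of the synchronization idea: flatten all input statistics into one
# array, take the elementwise max across hosts, and re-run preprocessing if
# the local statistics changed. NumPy stands in for jax.lax.pmax here.
import numpy as np

def sync_and_maybe_repreprocess(local_stats, all_host_stats, preprocess):
    """Return (re-preprocessed result or None, synchronized stats)."""
    # A single reduction over one stacked array covers every statistic at once.
    synced = np.max(np.stack(all_host_stats), axis=0)
    if not np.array_equal(synced, local_stats):
        # Stats changed on this host: repeat preprocessing with synced values.
        return preprocess(synced), synced
    # Stats already consistent across hosts: nothing to redo.
    return None, synced

# Host 0 saw smaller max_ids than host 1, so host 0 must re-run preprocessing.
host0 = np.array([16, 8, 256])   # e.g. [max_ids, max_unique_ids, buffer_size]
host1 = np.array([32, 8, 256])
result, synced = sync_and_maybe_repreprocess(host0, [host0, host1], lambda s: s)
```

The point of the single array is that one collective call synchronizes everything, and a single comparison decides whether preprocessing must be repeated.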

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @cantonios, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the preprocessing logic for multihost JAX programs to ensure consistent input statistics across all hosts. It simplifies the synchronization mechanism by consolidating various input statistics into a single JAX array, enabling more efficient aggregation across devices. The system now dynamically re-runs preprocessing if the synchronized statistics indicate a change, ensuring all hosts operate with the same data parameters.

Highlights

  • Unified Input Statistics: Introduced InputStatsPerTable to encapsulate max_ids_per_partition, max_unique_ids_per_partition, and required_buffer_size_per_device, streamlining the management of preprocessing parameters.
  • Simplified Synchronization: Replaced individual variable tracking and manual aggregation with a more efficient JAX-based synchronization. Input statistics are now flattened into a single JAX array and aggregated across all hosts using jax.lax.pmax, reducing complexity and improving performance.
  • Dynamic Preprocessing Re-execution: Implemented logic to detect changes in synchronized input statistics and automatically trigger a re-execution of the preprocessing step. This ensures that all multihost JAX programs consistently use the most up-to-date and globally synchronized input parameters.
  • Code Cleanup: Removed deprecated CPU distribution creation and associated variable management, leading to a cleaner and more maintainable codebase.
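
Based on the first highlight, a container like `InputStatsPerTable` might look roughly like this sketch. The field names come from the description above; everything else, including the `to_array` helper, is assumed:

```python
# Hypothetical shape of the InputStatsPerTable container named in the
# highlights. Flattening each table's stats to a fixed-order array lets all
# tables be concatenated into the single array that gets synchronized.
import dataclasses
import numpy as np

@dataclasses.dataclass
class InputStatsPerTable:
    max_ids_per_partition: int
    max_unique_ids_per_partition: int
    required_buffer_size_per_device: int

    def to_array(self) -> np.ndarray:
        # Fixed field order so every host flattens stats identically.
        return np.array([self.max_ids_per_partition,
                         self.max_unique_ids_per_partition,
                         self.required_buffer_size_per_device])

stats = {"table_a": InputStatsPerTable(16, 8, 256),
         "table_b": InputStatsPerTable(64, 32, 1024)}
flat = np.concatenate([s.to_array() for s in stats.values()])
```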

@gemini-code-assist (bot) left a comment

Code Review

This pull request refactors the multi-host preprocessing logic for JAX to improve correctness and efficiency. The key change is to synchronize input statistics across all hosts using a single JAX array and a pmax collective, and then re-run the preprocessing step if the statistics have changed. This is a much cleaner and more robust approach than the previous implementation. My review has identified a critical bug where the synchronized statistics were not being used correctly after aggregation, which would prevent the fix from working as intended. I've also pointed out some redundant code and a potential robustness issue with dictionary comparison. After addressing these points, the changes will be a significant improvement.

@hertschuh (Collaborator) left a comment

So the jax.pmap(jax.lax.pmax(...)) was already in place before. So the core of the fix is the re-running of the preprocessing if the stats don't match?

Were you able to test this? If so, how?

@cantonios (Collaborator, Author)

> So the jax.pmap(jax.lax.pmax(...)) was already in place before. So the core of the fix is the re-running of the preprocessing if the stats don't match?

Yes, that's the intention. I also merged the pmax calls into a single call rather than separate ones for each statistic, and removed a bunch of extra keras variables that it turns out we never actually needed.
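
Merging the separate pmax calls into a single one could look like the following sketch, where NumPy's elementwise max stands in for `jax.lax.pmax` across hosts and the numbers are made up:

```python
# Illustrative only: one reduction over a single stacked array replaces a
# separate collective call for each statistic.
import numpy as np

hosts = [np.array([16, 8, 256]), np.array([32, 8, 128])]

# Before: one reduction per statistic (one collective call each).
max_ids        = max(h[0] for h in hosts)
max_unique_ids = max(h[1] for h in hosts)
buffer_size    = max(h[2] for h in hosts)

# After: a single reduction over one array covers all statistics at once.
synced = np.max(np.stack(hosts), axis=0)
```

Beyond the reduced collective-call overhead, the single-array form also makes the "did anything change?" check a single array comparison.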

> Were you able to test this? If so, how?

I'm working on a Google-internal multiprocess test. It still fails when the input size differs across hosts, though, so I'm converting this PR to a draft.

@cantonios cantonios marked this pull request as draft August 25, 2025 12:20
@cantonios (Collaborator, Author)

Unmarking as draft. This PR should fix the issues with preprocessing, and simplifies the code a bit. My local multiprocess test fails even without preprocessing, so there's some other deeper issue that needs to be addressed.

@cantonios cantonios requested a review from hertschuh August 25, 2025 17:53
@cantonios cantonios marked this pull request as ready for review August 25, 2025 17:53