
Conversation

@kashif
Collaborator

@kashif kashif commented Oct 27, 2025

What does this PR do?

Proposal to refactor the rollout_func to take the trainer as an argument. With this change, we should be able to use the trainer's generation helpers when generating rollouts.
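As a rough sketch of what the proposed signature enables, the snippet below passes a trainer into a rollout function so both the config and generation helpers are reachable from one object. All names here (`DummyTrainer`, `generate`) are illustrative stand-ins, not the final TRL API:

```python
class DummyTrainer:
    """Stand-in for GRPOTrainer, exposing only what the sketch needs."""

    class args:
        temperature = 0.7  # config reachable as trainer.args

    def generate(self, prompts):
        # A real trainer would call its vLLM client here.
        return [[101, 102] for _ in prompts]


def rollout_func(prompts, trainer):
    """Return the dict shape described in this PR: prompt_ids,
    completion_ids, logprobs, and env_reward."""
    args = trainer.args  # no need to redefine the config separately
    completion_ids = trainer.generate(prompts)
    return {
        "prompt_ids": [[1] for _ in prompts],
        "completion_ids": completion_ids,
        "logprobs": [[0.0, 0.0] for _ in prompts],
        "env_reward": [0.0 for _ in prompts],
    }


out = rollout_func(["hello"], DummyTrainer())
```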

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

```python
    Dict containing prompt_ids, completion_ids, logprobs, and env_reward
    """
    # 1. Generate completions via vLLM inference server (running on port 8000)
    args = trainer.args
```
Collaborator

When using the current rollout_func, I found the definition of the vLLM payload and the direct use of requests somewhat messy and low level, mainly because I have to unpack and redefine the config (now the trainer).

Here's an idea: could we define a generate function within the trainer that uses closures over the trainer scope to set configuration defaults? The user could then pass this generate function as a parameter of rollout_func without having to deal with the underlying vLLM logic, while still being able to override any of its config params.
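The closure idea above can be sketched as follows; this is a minimal illustration under assumed names (`make_generate` and the default keys are hypothetical), where the factory captures the trainer's defaults and callers override per call:

```python
def make_generate(trainer_defaults):
    """Build a generate() closure over trainer-scoped defaults.

    A real version would live on the trainer and close over self.args;
    here a plain dict stands in for that config.
    """
    defaults = {
        "temperature": trainer_defaults.get("temperature", 1.0),
        "max_tokens": trainer_defaults.get("max_tokens", 128),
    }

    def generate(prompts, **overrides):
        # User-supplied kwargs override the captured defaults.
        params = {**defaults, **overrides}
        # A real implementation would POST to the vLLM server here.
        return {"params": params, "num_prompts": len(prompts)}

    return generate


gen = make_generate({"temperature": 0.7})
result = gen(["hi"], max_tokens=16)
```

The rollout_func would then receive `gen` as a parameter and never touch the vLLM payload format directly.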

```python
from envs.openspiel_env.models import OpenSpielAction

from trl import GRPOConfig, GRPOTrainer, RichProgressCallback, apply_chat_template
from trl.extras.vllm_colocate_async import AsyncVLLMColocateWrapper
```
Member

I think you forgot to push this new file

@qgallouedec
Copy link
Member

I haven't played with OpenEnv much, so I'll leave it to @lewtun or @burtenshaw to review this one. Note that for now this rollout function is considered very experimental, and it's likely that we'll have to change or remove it in the future (depending on how OpenEnv is adopted by the community), so I wouldn't put too much effort into propagating/consolidating it.

Member

@sergiopaniego sergiopaniego left a comment

In case we go ahead with the proposal, we'd need to also update the docs/source/openenv.md file

