Conversation

@jaredAMD
Current run script methods do not aggregate figure of merit performance data as a final summary. The following changes are intended as an example for how this could be achieved.

This PR should be marked as WIP.

Final run methods should review the applicability of these approaches across other architectures, and make use of "standardized" language to ensure that FOM values appear consistently across implementations.

@Xiaoming-AMD
Collaborator

Xiaoming-AMD commented Sep 23, 2025

Thanks for the comments!

The log output already includes both per-iteration values and running averages (computed from log_avg_skip_iterations up to the current iteration, and reset every log_avg_reset_interval iterations). For example:

```
throughput per GPU (TFLOP/s/GPU): 520.9/520.3
tokens per GPU (tokens/s/GPU): 8994.2/8983.3
```

Here the first number is the value for the current iteration, and the second is the running average so far. The values in the last iteration already represent the metrics for the entire training run, so that line can serve as the final summary.
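The skip-then-average bookkeeping described above can be sketched in Python. This is a minimal illustration, not the repo's actual logging code: the parameter names `skip` and `reset_interval` mirror `log_avg_skip_iterations` and `log_avg_reset_interval` from the discussion, while the class and everything else is hypothetical.

```python
class RunningAvg:
    """Tracks a per-iteration metric and its running average.

    Within each window of `reset_interval` iterations, the first `skip`
    iterations are excluded from the average (e.g. to ignore warmup).
    """

    def __init__(self, skip=2, reset_interval=100):
        self.skip = skip
        self.reset_interval = reset_interval
        self.total = 0.0
        self.count = 0
        self.iteration = 0

    def update(self, value):
        self.iteration += 1
        # Start a fresh averaging window every `reset_interval` iterations.
        if (self.iteration - 1) % self.reset_interval == 0:
            self.total, self.count = 0.0, 0
        # 1-based position within the current window.
        pos = (self.iteration - 1) % self.reset_interval + 1
        if pos > self.skip:
            self.total += value
            self.count += 1

    @property
    def average(self):
        return self.total / self.count if self.count else float("nan")


# Hypothetical usage: log "current/average", matching the format above.
avg = RunningAvg(skip=1, reset_interval=10)
for tflops in [500.0, 520.0, 521.0, 519.0]:
    avg.update(tflops)
print(f"throughput per GPU (TFLOP/s/GPU): {tflops:.1f}/{avg.average:.1f}")
# → throughput per GPU (TFLOP/s/GPU): 519.0/520.0
```

Because each window's average accumulates to the end of the run, reading this field off the final iteration's log line gives the aggregate FOM without a separate summary pass.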


@Xiaoming-AMD Xiaoming-AMD changed the title Methods to aggregate figures of merit [WIP] Methods to aggregate figures of merit Sep 23, 2025