
Conversation

@fengju0213
Collaborator

@fengju0213 fengju0213 commented Sep 30, 2025

Description

  • Context creator now preserves the system message and returns the remaining history strictly by
    timestamp, removing all per-message token bookkeeping.
  • Token-limit handling captures the full context, rolls back recent tool-call chains when
    necessary, swaps in an assistant summary, and records summary state (depth, new-record counts,
    last user input). This state blocks back-to-back summaries with negligible progress and caps
    retries at three, preventing summarize→retry loops, even when the immediate overflow comes from a
    tool call.
  • Passing token_limit into ChatAgent now logs a deprecation warning and ignores the value; callers
    should control limits through model backend configuration.

  Pending work:
  1. Clean up the modules related to token_limit and the token counter.
  2. Add unit tests that cover input strings capable of triggering the token-limit path.
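The summary-state gating described above could be sketched roughly like this. This is a minimal illustration of the depth cap and the negligible-progress check only; `SummaryState`, `MAX_SUMMARY_DEPTH`, and the function names are hypothetical stand-ins, not CAMEL's actual implementation:

```python
from dataclasses import dataclass

# Illustrative constant; the PR caps summary retries at three.
MAX_SUMMARY_DEPTH = 3


@dataclass
class SummaryState:
    """Summary bookkeeping used to block summarize->retry loops."""

    depth: int = 0             # consecutive summaries so far
    new_records: int = 0       # records added since the last summary
    last_user_input: str = ""  # user input seen at the last summary


def should_summarize(state: SummaryState, new_records: int,
                     user_input: str) -> bool:
    """Allow another summary only when real progress was made."""
    if state.depth >= MAX_SUMMARY_DEPTH:
        return False  # retry cap reached; give up instead of looping
    negligible = new_records <= 1 and user_input == state.last_user_input
    if state.depth > 0 and negligible:
        return False  # back-to-back summary with negligible progress
    return True


def record_summary(state: SummaryState, new_records: int,
                   user_input: str) -> None:
    """Update the state after a summary is swapped into the context."""
    state.depth += 1
    state.new_records = new_records
    state.last_user_input = user_input
```

With this shape, a second overflow right after a summary, with no new records and the same user input, is rejected instead of triggering another summarize→retry round.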

Checklist

Go over all the following points, and put an x in all the boxes that apply.

  • I have read the CONTRIBUTION guide (required)
  • I have linked this PR to an issue using the Development section on the right sidebar or by adding Fixes #issue-number in the PR description (required)
  • I have checked if any dependencies need to be added or updated in pyproject.toml and uv lock
  • I have updated the tests accordingly (required for a bug fix or a new feature)
  • I have updated the documentation if needed:
  • I have added examples if this is a new feature

If you are unsure about any of these, don't hesitate to ask. We are here to help!

@coderabbitai
Contributor

coderabbitai bot commented Sep 30, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@Wendong-Fan Wendong-Fan marked this pull request as draft October 5, 2025 09:07
@Wendong-Fan
Member

Converted to draft as this PR hasn't been updated with the approach we discussed.

@fengju0213 fengju0213 changed the title from "feat: retry when tokenlimit and enhance chunking logic" to "feat: summarize when tokenlimit" Oct 14, 2025
@Wendong-Fan Wendong-Fan marked this pull request as ready for review October 14, 2025 08:32
@hesamsheikh
Collaborator

Thanks for the thorough PR @fengju0213. I wonder if the new implementation would cause inconsistencies and fragmentation, given that summarization logic is already implemented in two places.

  1. ChatAgent.summarize(), which the user can call explicitly to save the summarization:
summary_result = agent.summarize(filename="meeting_notes")
# Returns: {"summary": str, "file_path": str, "status": str}

This is mostly used to create workflow.md files of a session.

  2. ContextSummarizerToolkit uses "USER" as the role name, whereas here "ASSISTANT" is used.

@fengju0213
Collaborator Author

Thanks for reviewing, @hesamsheikh! We can probably unify the role naming later. For now, could you help review the current logic for handling the token limit?

Collaborator

@MuggleJinx MuggleJinx left a comment


Thanks @fengju0213!! Left some comments. Maybe we should also consider adding a test case for when SUMMARY_MAX_DEPTH is reached.
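The depth-cap test suggested here could be sketched along these lines. The retry loop and the `SUMMARY_MAX_DEPTH` constant below are simplified stand-ins, since the real behavior lives inside ChatAgent's token-limit handling and would need to be exercised through the actual API:

```python
# Hypothetical sketch of a SUMMARY_MAX_DEPTH test; not CAMEL's real API.
SUMMARY_MAX_DEPTH = 3


def run_with_summary_retries(token_count: int, limit: int,
                             shrink_per_summary: int) -> tuple[bool, int]:
    """Retry after summarizing, giving up once SUMMARY_MAX_DEPTH is hit."""
    attempts = 0
    while token_count > limit and attempts < SUMMARY_MAX_DEPTH:
        token_count -= shrink_per_summary  # summary replaces old history
        attempts += 1
    return token_count <= limit, attempts


def test_summary_max_depth_reached():
    # A summary that frees no tokens must not loop forever.
    ok, attempts = run_with_summary_retries(1000, 100, shrink_per_summary=0)
    assert not ok and attempts == SUMMARY_MAX_DEPTH


def test_summary_succeeds_before_cap():
    # Two effective summaries bring the context back under the limit.
    ok, attempts = run_with_summary_retries(1000, 100, shrink_per_summary=500)
    assert ok and attempts == 2
```

The key property the real test should pin down is the first one: when summarization makes no progress, the loop terminates after the cap instead of retrying forever.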

@fengju0213 fengju0213 force-pushed the enhance_update_memory branch from b1885f3 to bc69be7 on October 21, 2025 09:53
@hesamsheikh
Collaborator

Hey @fengju0213 just as a reference that might be helpful, I ran Claude Code until it summarized the full context, and here is how it looks:

This session is being continued from a previous conversation that ran out of context. The conversation is summarized below:
Analysis:
Let me chronologically analyze this conversation about improving the xxx:

Summary:
1. **Primary Request and Intent**:
   [list]

2. **Key Technical Concepts**:
   [list]

3. **Files and Code Sections**:
[list]

4. **Errors and Fixes**:
[list]

5. **Problem Solving**:
   [list]

6. **All User Messages**:
   - "in the landing page put the brain more to the right, and space out the two bands a bit more to make room for the left card"
   - "the bands and the left card are not symmetric, the bottom band is too close to it"
   - "as the mouse moves, the position of bands changes too. don't do that."
   - etc

7. **Pending Tasks**:
   - None explicitly stated

8. **Current Work**:
   

9. **Optional Next Step**:
   xxxx
Please continue the conversation from where we left it off without asking the user any further questions. Continue with the last task that you were asked to work on.

I think there are helpful patterns here we can learn from. Moreover, if you notice the tone of the summary (like "Continue with the last task..."), the role cannot be Assistant here. I think the best approach would be to append the summary to the system message and wipe the memory.
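For reference, the "append the summary to the system message and wipe the memory" idea could be sketched like this. OpenAI-style message dicts are assumed, and `compact_context` is a hypothetical helper, not an existing CAMEL API:

```python
# Sketch of folding the summary into the system message and clearing
# the rest of the history. Message shape and wording are assumptions.
def compact_context(messages: list[dict], summary: str) -> list[dict]:
    # Find the existing system message, or start from an empty one.
    system = next(
        (m for m in messages if m["role"] == "system"),
        {"role": "system", "content": ""},
    )
    merged = {
        "role": "system",
        "content": system["content"]
        + "\n\nThis session is being continued from a previous "
        "conversation that ran out of context. Summary:\n" + summary,
    }
    # Wipe everything else; the agent restarts from the merged prompt.
    return [merged]
```

One design consequence: because the summary lives in the system message rather than in a user or assistant turn, the role-naming question raised earlier mostly goes away.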

@fengju0213
Collaborator Author

@hesamsheikh thanks for the helpful suggestion! I already made some improvements based on it.
I've decided to use assistant as the role for our summary messages for now, mainly for the following reasons:

It helps clearly distinguish summaries from actual user messages.
Later, when generating summaries, we can explicitly guide the agent to focus more on user-provided information.

From reviewing Claude Code's summaries, it seems that they also include aggregated user details within assistant-role messages, so their summaries might follow a similar approach.

Since this is a new and experimental feature, it would be best to evaluate its effectiveness later through real usage channels like Eigent.

@fengju0213
Collaborator Author

@hesamsheikh @MuggleJinx already updated, could you help review again?

@fengju0213 fengju0213 added the Review Required label and removed the Waiting for Update label Oct 28, 2025
@hesamsheikh
Collaborator

I will come back with the review ASAP @fengju0213

@fengju0213
Collaborator Author

Thank you so much!

@fengju0213
Collaborator Author

@hesamsheikh @MuggleJinx already updated, could you help review again?

Collaborator

@hesamsheikh hesamsheikh left a comment


Great work on the PR. I added one comment, and just a few questions.

  1. I noticed you changed the summary format a bit. Don't you think it's better to keep a list of the user messages in the summary as well (maybe a truncated one)? This could be added programmatically without relying on the summarizer agent, which helps reduce reliance on the LLM-provided summary.

  2. The summary may happen in the middle of executing a task. Shouldn't we keep the last unfinished task in the summary? Something like:
    "- Current Task: What is currently being worked on (if any)\n"
    "- Pending Tasks: What still needs to be done\n"
    I tested this and it may be able to continue from the task it left off, but it relies on the LLM's understanding rather than explicit prompt instructions and guidelines.

  3. You add a user message to continue from the last task. I thought of an alternative approach which might be more reliable for task continuity and avoid confusing the agent: we can summarize all the messages before the last user message and add that last message after the context summary. This way the agent knows what the current task is, and it is much easier to debug.

@fengju0213
Collaborator Author

Thanks for the suggestion! I created a new prompt for the summary; we can adjust it in the future as we test. I also changed the role of the "continue message" for better distinction.

@hesamsheikh
Collaborator

Amazing @fengju0213. Everything looks good to me. After the PR is merged, I will open an issue on some ideas and potential areas of improvement that can be tested by the devs.

Collaborator

@MuggleJinx MuggleJinx left a comment


Looking good! Thanks @fengju0213

@fengju0213
Collaborator Author

@hesamsheikh @MuggleJinx thanks for the review!

@fengju0213 fengju0213 changed the title from "feat: summarize when tokenlimit" to "feat: summarize when tokenlimit exceeded or threshold reached" Oct 30, 2025
@fengju0213 fengju0213 merged commit 1e816ed into master Oct 30, 2025
13 of 14 checks passed
@fengju0213 fengju0213 deleted the enhance_update_memory branch October 30, 2025 06:29

Labels

Review Required PR need to be reviewed
