# Revert "fix: bracket stripping in gemini responses" #8332
## Conversation
Issue reopened: CON-4266 Gemini output swallows square bracket
✅ **Review Complete**

**Review of PR #8332: Revert "fix: bracket stripping in gemini responses"**

**Overview**

This PR reverts #8280, which attempted to use SSE streaming for Gemini responses. The revert restores the custom JSON array parsing logic because Gemini apparently broke in main with the SSE approach.

**Critical Issues**

**1. Root Cause Analysis Missing (Blocker)**
**2. Manual JSON Parsing is Fragile (High Priority)**

```js
if (buffer.startsWith("[")) {
  buffer = buffer.slice(1);
}
if (buffer.endsWith("]")) {
  buffer = buffer.slice(0, -1);
}
if (buffer.startsWith(",")) {
  buffer = buffer.slice(1);
}
const parts = buffer.split("\n,");
```

Issues:
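The fragility is easy to demonstrate. Below is a hedged reproduction sketch of the quoted stripping logic (the wrapper function name is invented, not the repo's code): when a streamed chunk happens to end right after a `]` that belongs to the model's own text, the `endsWith("]")` strip removes a content character, which matches the "swallows square bracket" symptom in CON-4266; and the `split("\n,")` framing silently stops matching if the upstream separator formatting changes.

```typescript
// Reproduction sketch of the quoted stripping logic.
// The function name is hypothetical, not from the repo.
function stripAndSplit(buffer: string): string[] {
  if (buffer.startsWith("[")) buffer = buffer.slice(1);
  if (buffer.endsWith("]")) buffer = buffer.slice(0, -1);
  if (buffer.startsWith(",")) buffer = buffer.slice(1);
  return buffer.split("\n,");
}

// Case 1: a chunk boundary lands right after a "]" that is part of the
// model's own text. The trailing strip removes a content character.
const chunk = '[{"text":"arr[0]'; // incomplete element, ends in "]"
console.log(stripAndSplit(chunk)[0]); // '{"text":"arr[0' — bracket swallowed

// Case 2: a formatting change (",\n" instead of "\n,") defeats the split,
// leaving one unparseable blob instead of per-element JSON.
const reformatted = '[{"a":1},\n{"a":2}]';
console.log(stripAndSplit(reformatted).length); // 1 — the split never fires
```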
**3. Testing Gap (High Priority)**
**4. Incomplete Buffer Handling**

```js
if (foundIncomplete) {
  buffer = parts[parts.length - 1];
} else {
  buffer = "";
}
```

If the incomplete part at
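The carry-over could also be made self-checking rather than relying on a separate `foundIncomplete` flag. A sketch (illustrative names, under the assumption that each element is a complete JSON object): attempt `JSON.parse` on every split part and carry only an unparseable final part forward as the next buffer, failing loudly if a middle part does not parse.

```typescript
// Illustrative sketch, not the repo's code: classify split parts by
// attempting to parse them, carrying an unparseable tail forward.
function drainParts(parts: string[]): { events: unknown[]; buffer: string } {
  const events: unknown[] = [];
  let buffer = "";
  parts.forEach((part, i) => {
    try {
      events.push(JSON.parse(part));
    } catch {
      if (i === parts.length - 1) {
        buffer = part; // incomplete tail: wait for more bytes
      } else {
        throw new Error(`unparseable mid-stream part: ${part}`);
      }
    }
  });
  return { events, buffer };
}

// One complete element is emitted; the truncated tail becomes the new buffer.
const result = drainParts(['{"a":1}', '{"a":2']);
console.log(result.events.length, result.buffer); // 1 '{"a":2'
```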
**Questions**

**5. Was the SSE Implementation Wrong?**
**6. VertexAI Impact**

**Suggestions**

Alternative Approach:
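One shape a more robust alternative could take (purely a sketch; the reviewer's actual suggestion is not shown above, and the class name is invented): a stateful splitter that tracks string, escape, and nesting state while scanning the stream, so brackets and commas inside model text can never be mistaken for array framing, regardless of how the upstream pretty-prints its separators.

```typescript
// Sketch of a streaming JSON-array splitter that tracks string and
// nesting state. Assumes the stream is a JSON array of objects, as
// Gemini emits; top-level primitives are not handled.
class JsonArrayStream {
  private depth = 0;
  private inString = false;
  private escaped = false;
  private buf = "";
  private started = false;

  // Feed one chunk; returns every element completed by this chunk.
  push(chunk: string): unknown[] {
    const out: unknown[] = [];
    for (const ch of chunk) {
      if (!this.started) {
        if (ch === "[") this.started = true; // skip the array opener
        continue;
      }
      if (this.inString) {
        this.buf += ch;
        if (this.escaped) this.escaped = false;
        else if (ch === "\\") this.escaped = true;
        else if (ch === '"') this.inString = false;
        continue;
      }
      if (ch === '"') { this.inString = true; this.buf += ch; continue; }
      if (ch === "{" || ch === "[") { this.depth++; this.buf += ch; continue; }
      if (ch === "}" || ch === "]") {
        if (this.depth === 0 && ch === "]") continue; // array closer
        this.depth--;
        this.buf += ch;
        if (this.depth === 0) { out.push(JSON.parse(this.buf)); this.buf = ""; }
        continue;
      }
      if (ch === "," && this.depth === 0) continue; // element separator
      if (this.depth > 0) this.buf += ch; // drop whitespace between elements
    }
    return out;
  }
}

// A chunk boundary inside a string no longer corrupts content:
const s = new JsonArrayStream();
const first = s.push('[{"text":"x[1]');   // incomplete: nothing emitted yet
const second = s.push('"}\n,{"text":"y"}]');
console.log(first.length, second.length); // 0 2
```

The design point is that framing decisions depend only on parser state, never on separator patterns like `"\n,"`, so a content bracket at a chunk boundary is passed through intact.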
Monitoring:
**Verdict**

The manual JSON parsing is concerning from a maintainability perspective, but if it's the only way Gemini works, at least make it robust.
No issues found across 3 files
🎉 This PR is included in version 1.27.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
Reverts #8280; Gemini is broken in main for both the OpenAI adapters and the normal LLM class.