Fix #274: Add warnings for OpenAI news/fundamentals hallucination risk #277
base: main
Conversation
Fix #274: Add warnings for OpenAI news/fundamentals hallucination risk

- Add UserWarning to get_global_news_openai, get_stock_news_openai, and get_fundamentals_openai
- Warnings inform users that the OpenAI vendor may hallucinate or provide outdated data
- Recommend alternative vendors: alpha_vantage, google, yfinance, or local
- Add comprehensive test suite to verify warnings are emitted
- Refactor to use shared _warn_hallucination_risk helper function

The issue reported that OpenAI was hallucinating and providing outdated news (e.g., fake news from November 2025). This is because the OpenAI vendor relies on the LLM's training data rather than real-time web search. Users should use alternative vendors for reliable, up-to-date news and fundamental data.

All tests passing (4/4).
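For context, a minimal sketch of what the shared helper and its use in one of the vendor functions might look like. The signature, message text, and stacklevel below are assumptions for illustration; the actual implementation lives in tradingagents/dataflows/openai.py.

import warnings

_DEFAULT_ALTERNATIVES = ("alpha_vantage", "google", "yfinance", "local")

def _warn_hallucination_risk(data_type, category, alternatives=_DEFAULT_ALTERNATIVES):
    """Emit a UserWarning that the OpenAI vendor may hallucinate or return stale data."""
    warnings.warn(
        f"The OpenAI {category} vendor generates {data_type} from the LLM's training data, "
        f"not real-time search, so results may be hallucinated or outdated. "
        f"Consider alternative vendors: {', '.join(alternatives)}.",
        UserWarning,
        stacklevel=3,
    )

def get_global_news_openai(curr_date):
    # Warn before doing any work so callers see it even if the API call fails.
    _warn_hallucination_risk(data_type="news", category="news_data")
    ...  # existing OpenAI call goes here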
Summary of Changes

Hello @godnight10061, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses issue #274 by implementing explicit hallucination-risk warnings in the OpenAI data vendor functions.
Code Review
This pull request is a valuable addition, addressing the critical issue of potential hallucinations from the OpenAI vendor by adding user warnings. The introduction of a shared _warn_hallucination_risk helper function is a good refactoring choice, and the accompanying test suite is comprehensive. I've identified one high-severity issue where a warning suggests incorrect alternative vendors, and a medium-severity issue to improve the robustness and consistency of a test case. Addressing these points will make the feature even more solid.
    Returns:
        str: News content (may be hallucinated or outdated)
    """
    _warn_hallucination_risk(data_type="news", category="news_data")
The warning for get_global_news_openai uses the default list of alternative vendors, which includes 'alpha_vantage' and 'google'. However, according to tradingagents/dataflows/interface.py, the only configured alternative vendor for get_global_news is 'local'. The warning message should be accurate to avoid confusing users with incorrect suggestions.
Suggested change:

-    _warn_hallucination_risk(data_type="news", category="news_data")
+    _warn_hallucination_risk(data_type="news", category="news_data", alternatives=["local"])
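One way to keep these suggestions from drifting out of sync with the vendor routing is to derive them from a single mapping. The sketch below is illustrative only: the VENDOR_ALTERNATIVES name and the lists for the other functions are assumptions, not the actual structure of tradingagents/dataflows/interface.py; per the review comment, only the 'local' entry for get_global_news is confirmed.

# Hypothetical mapping of each OpenAI-backed function to the alternative
# vendors configured for it (values other than get_global_news are assumed).
VENDOR_ALTERNATIVES = {
    "get_global_news": ["local"],
    "get_stock_news": ["alpha_vantage", "google", "local"],
    "get_fundamentals": ["alpha_vantage", "yfinance", "local"],
}

def get_global_news_openai(curr_date):
    # Pull the suggested alternatives from the shared mapping so the warning
    # text always matches what is actually configured.
    _warn_hallucination_risk(
        data_type="news",
        category="news_data",
        alternatives=VENDOR_ALTERNATIVES["get_global_news"],
    )
    ...  # existing OpenAI call follows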
def test_warning_message_content(self):
    """Test that warning messages contain helpful information about alternatives."""
    # This test verifies the warning message suggests using alternative vendors
    with patch("tradingagents.dataflows.openai.OpenAI"), \
         patch("tradingagents.dataflows.openai.get_config") as mock_get_config:

        mock_get_config.return_value = {
            "backend_url": "https://api.openai.com/v1",
            "quick_think_llm": "gpt-4o-mini",
        }

        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("always")

            try:
                get_global_news_openai("2024-11-14")
            except Exception:
                pass  # We're only testing the warning, not the full execution

            # Check that at least one warning was issued
            assert len(w) > 0

            # Check that the warning mentions alternatives
            warning_text = str(w[0].message).lower()
            assert any(keyword in warning_text for keyword in [
                "alpha_vantage", "google", "local", "alternative", "vendor"
            ])
This test uses a with patch(...) statement and a try...except Exception block, which is inconsistent with the decorator-based patching used in other tests in this class. The broad exception handling can hide underlying issues and makes the test fragile. To improve robustness and consistency, this test should be refactored to use decorators for patching and proper mocking, which eliminates the need for the try...except block. The assertions can also be made more specific to check for the presence of each recommended vendor.
Note: If you address my other comment about get_global_news_openai using incorrect alternatives, you will need to update this test's assertions to only check for 'local'.
@patch("tradingagents.dataflows.openai.OpenAI")
@patch("tradingagents.dataflows.openai.get_config")
def test_warning_message_content(self, mock_get_config, mock_openai_class):
    """Test that warning messages contain helpful information about alternatives."""
    # This test verifies the warning message suggests using alternative vendors
    mock_get_config.return_value = {
        "backend_url": "https://api.openai.com/v1",
        "quick_think_llm": "gpt-4o-mini",
    }
    mock_client = Mock()
    mock_openai_class.return_value = mock_client
    mock_response = Mock()
    mock_response.output = [None, Mock(content=[Mock(text="Fake news content")])]
    mock_client.responses.create.return_value = mock_response

    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")
        get_global_news_openai("2024-11-14")

        # Check that at least one warning was issued
        assert len(w) > 0

        # Check that the warning mentions alternatives
        warning_text = str(w[0].message).lower()
        assert "alpha_vantage" in warning_text
        assert "google" in warning_text
        assert "local" in warning_text