bugfix: Write JSON formatted string #3994
Conversation
Summary of Changes

Hello @ftnext, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request resolves a bug related to writing evaluation results to a file. Previously, data that was already a JSON formatted string was being passed through `json.dumps()` a second time, double-encoding it before it was written.
Response from ADK Triaging Agent

Hello @ftnext, thank you for creating this PR! Could you please provide a testing plan for your changes? This should include details on the unit tests you've added or updated, as well as instructions for manual end-to-end testing. This information will help reviewers review your PR more efficiently. Thanks!
Code Review
This pull request aims to fix a bug where a JSON string was being double-encoded before being written to a file. The change correctly removes the double encoding. However, this is an incomplete fix that introduces breaking changes in other parts of the codebase that read this file, such as `get_eval_set_result` and related unit tests. My review includes a critical comment explaining the issue and outlining the necessary changes to make this a complete and safe fix.
```diff
  logger.info("Writing eval result to file: %s", eval_set_result_file_path)
  with open(eval_set_result_file_path, "w", encoding="utf-8") as f:
-   f.write(json.dumps(eval_set_result_json, indent=2))
+   f.write(eval_set_result_json)
```
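To see why the removed line was a bug: if the value is already a JSON string, a second `json.dumps()` wraps it in quotes and escapes it, so readers of the file get a `str` back instead of a `dict`. A minimal stdlib sketch (the payload keys are illustrative, not the real `EvalSetResult` fields):

```python
import json

# A dict serialized once (analogous to eval_set_result.model_dump_json()).
payload = json.dumps({"eval_id": "case_1", "final_eval_status": 1})

# Double encoding: dumping the string again wraps it in quotes and escapes it.
double_encoded = json.dumps(payload)

# Loading the double-encoded text yields a str, not a dict.
assert isinstance(json.loads(double_encoded), str)

# Writing the already-encoded string directly round-trips to a dict.
assert isinstance(json.loads(payload), dict)
```

This is why the fix writes `eval_set_result_json` directly instead of passing it through `json.dumps()` again.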
This change correctly fixes the double JSON encoding. However, this is an incomplete fix that breaks other parts of the code that consume this file.

- Breaks `get_eval_set_result`: This method will now fail with a `TypeError`. It uses `json.load()`, which now returns a `dict`, but then passes it to `EvalSetResult.model_validate_json()`, which expects a string.
- Breaks unit tests: The test `test_save_eval_set_result` will fail, as it now incorrectly compares a string with a dictionary.

To complete this fix, `get_eval_set_result` and its tests must be updated. For example, `get_eval_set_result` should use `EvalSetResult.model_validate(json.load(file))`.

Additionally, this change removes the `indent=2` pretty-printing. To restore this for readability, the recommended approach is to modify line 57 to `eval_set_result.model_dump_json(indent=2)`, which is outside the current diff.
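A hedged sketch of the reading-side fix the reviewer outlines, using a simplified stand-in model (the real `EvalSetResult` has more fields): `json.load()` yields a `dict`, which should go to `model_validate()` rather than `model_validate_json()` (which expects a string).

```python
import json
from enum import Enum
from io import StringIO

from pydantic import BaseModel


# Hypothetical stand-ins for the ADK types discussed in this PR.
class EvalStatus(Enum):
  PASSED = 1


class EvalSetResult(BaseModel):
  eval_set_id: str
  status: EvalStatus


# The file now contains a plain JSON object (written once, not double-encoded).
fake_file = StringIO('{"eval_set_id": "set_1", "status": 1}')

# json.load() returns a dict, so validate it with model_validate();
# model_validate_json() would raise here because it expects a str.
loaded = EvalSetResult.model_validate(json.load(fake_file))
assert loaded.status is EvalStatus.PASSED
```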
Updated the JSON serialization method for eval set results. The `EvalStatus` enum is serialized to `int`.

https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_dump

> If mode is 'json', the output will only contain JSON serializable types.
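The quoted Pydantic behavior can be checked with a toy model (the class and field names below are illustrative, not the actual ADK classes): with `mode="json"`, enum members collapse to their values, while the default mode preserves the Python objects.

```python
from enum import Enum

from pydantic import BaseModel


class EvalStatus(Enum):
  PASSED = 1
  FAILED = 2


class EvalSetResult(BaseModel):
  eval_set_id: str
  status: EvalStatus


result = EvalSetResult(eval_set_id="set_1", status=EvalStatus.PASSED)

# Default model_dump() keeps Python objects: the enum member survives.
assert result.model_dump()["status"] is EvalStatus.PASSED

# model_dump(mode="json") coerces to JSON-serializable types: enum -> int value.
assert result.model_dump(mode="json")["status"] == 1
```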
/gemini review
Code Review
This pull request correctly fixes a bug related to JSON serialization and deserialization of Pydantic models. The changes address an issue where JSON data was being double-encoded upon saving and an incorrect validation method was used upon loading. The switch to `model_dump(mode="json")` and `model_validate()` is appropriate and resolves the problem. My review includes a couple of suggestions to improve variable naming for better code clarity and maintainability.
```diff
  os.makedirs(app_eval_history_dir)
  # Convert to json and write to file.
- eval_set_result_json = eval_set_result.model_dump_json()
+ eval_set_result_json = eval_set_result.model_dump(mode="json")
```
While this change correctly fixes the serialization bug, the variable `eval_set_result_json` now holds a Python dictionary rather than a JSON string, making the name misleading. For improved code clarity, I recommend renaming it to something like `eval_set_result_data` or `eval_set_result_dict`. Note that this would also require updating the variable name on line 64.
```diff
  # need to convert eval_set_result to json
- expected_eval_set_result_json = self.eval_set_result.model_dump_json()
+ expected_eval_set_result_json = self.eval_set_result.model_dump(mode="json")
```
This change correctly aligns the test with the fix in the application code. However, similar to the main code, the variable `expected_eval_set_result_json` now contains a dictionary. To improve readability and consistency, consider renaming it to `expected_eval_set_result_data`. This naming suggestion also applies to `actual_eval_set_result_json` on line 88.
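After this change, the test comparison happens between two dictionaries rather than a string and a dictionary. A sketch with a stand-in model (class and field names are assumptions, not the real test fixture):

```python
import json
from enum import Enum

from pydantic import BaseModel


class EvalStatus(Enum):
  PASSED = 1


class EvalSetResult(BaseModel):
  eval_set_id: str
  status: EvalStatus


eval_set_result = EvalSetResult(eval_set_id="set_1", status=EvalStatus.PASSED)

# Expected value is now a dict, matching what json.load() returns
# when reading the saved (singly encoded) file back.
expected = eval_set_result.model_dump(mode="json")
actual = json.loads('{"eval_set_id": "set_1", "status": 1}')
assert actual == expected
```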
Hi @ftnext, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.
Hi @wyf7107, can you please review this?
Please ensure you have read the contribution guide before creating a pull request.
Link to Issue or Description of Change
1. Link to an existing issue (if applicable):
2. Or, if no issue exists, describe the change:
If applicable, please follow the issue templates to provide as much detail as
possible.
Problem:
A clear and concise description of what the problem is.
Solution:
A clear and concise description of what you want to happen and why you choose
this solution.
Testing Plan
Please describe the tests that you ran to verify your changes. This is required
for all PRs that are not small documentation or typo fixes.
Unit Tests:
Please include a summary of passed `pytest` results.

Manual End-to-End (E2E) Tests:
Please provide instructions on how to manually test your changes, including any
necessary setup or configuration. Please provide logs or screenshots to help
reviewers better understand the fix.
Checklist
Additional context
`EvalStatus` enum is serialized to `int`: https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_dump