This repository was archived by the owner on Apr 23, 2025. It is now read-only.

Conversation


@ipv1337 ipv1337 commented Apr 15, 2025

  • Handle Gemini quota errors more gracefully, logging details at DEBUG.

  • Remove redundant "(Model stopped with no text)" message.

  • Add rich status spinner for thinking and tool execution phases (see the sketch below).

  • Update tests to reflect logging changes and mock console status.

  • Restore missing add_to_history/clear_history methods.

  • Fix resulting test failures.

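For context, a minimal, self-contained sketch of the kind of rich status spinner described above; the time.sleep calls stand in for the model call and tool work, and the tool names are illustrative, not this repository's actual agent internals:

import time

from rich.console import Console

console = Console()

def run_agent_turn(tool_names: list[str]) -> None:
    # One spinner spans the whole turn; update() swaps the label when the
    # agent moves between the thinking and tool execution phases.
    with console.status("[bold green]Thinking...", spinner="dots") as status:
        time.sleep(1)  # stand-in for the model call
        for name in tool_names:
            status.update(f"[bold yellow]Executing tool: {name}...")
            time.sleep(1)  # stand-in for running the tool
        status.update("[bold green]Thinking...")
        time.sleep(1)  # stand-in for producing the final response

run_agent_turn(["view", "edit"])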

@gemini-code-assist gemini-code-assist bot left a comment


Hello @ipv1337, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request focuses on improving the error handling and user experience of the Gemini agent. It introduces a rich status spinner to indicate thinking and tool execution phases, handles Gemini quota errors more gracefully by logging details and switching to a fallback model, and removes a redundant message. Additionally, it restores missing add_to_history and clear_history methods, and updates tests to reflect the changes.

Highlights

  • Error Handling: Improved Gemini quota error handling with detailed DEBUG-level logging and a fallback-model mechanism (a sketch follows this list).
  • User Experience: Added a rich status spinner for the thinking and tool execution phases to give the user better feedback.
  • History Management: Restored the missing add_to_history and clear_history methods for managing conversation history.
  • Model Updates: Updated the fallback model to gemini-2.0-flash and increased OLLAMA_MAX_CONTEXT_TOKENS to 80000.
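
As a rough illustration of the error-handling highlight, here is a sketch of quota handling with a fallback model. It assumes the client surfaces quota exhaustion as google.api_core.exceptions.ResourceExhausted; how this repository actually detects quota errors may differ:

import logging

import google.generativeai as genai
from google.api_core.exceptions import ResourceExhausted

log = logging.getLogger(__name__)
FALLBACK_MODEL = "gemini-2.0-flash"

def generate_with_fallback(model, prompt: str):
    try:
        return model.generate_content(prompt)
    except ResourceExhausted as exc:
        # Full quota details stay at DEBUG so normal output remains quiet.
        log.debug("Gemini quota error details: %s", exc)
        log.info("Quota exceeded; switching to fallback model %s", FALLBACK_MODEL)
        fallback = genai.GenerativeModel(FALLBACK_MODEL)
        return fallback.generate_content(prompt)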

Changelog

  • TODO_gemini_loop.md
    • Removed outdated TODO items related to sequential tool calls, indicating progress on that front.
  • src/cli_code/models/gemini.py
    • Updated the fallback model to gemini-2.0-flash.
    • Implemented a status spinner using rich to indicate agent thinking and tool execution.
    • Improved error handling for Gemini quota errors, including switching to a fallback model and logging details.
    • Removed the redundant '(Model stopped with no text)' message.
    • Refactored the agent loop to prioritize STOP reason checking and handle cases where the model stops without providing text content.
    • Restored the add_to_history and clear_history methods.
    • Enhanced tool execution confirmation by providing more detailed error messages and ensuring proper history management.
    • Added detailed logging of quota errors at the debug level.
  • src/cli_code/models/ollama.py
    • Increased OLLAMA_MAX_CONTEXT_TOKENS from 8000 to 80000.
  • tests/models/test_gemini.py
    • Updated the FALLBACK_MODEL_NAME_FROM_CODE to match the new fallback model name in src/cli_code/models/gemini.py.
    • Modified the mock_console fixture to provide a functional status context manager with an update method (a sketch of such a fixture follows this changelog).
    • Refactored mocking strategy to use monkeypatch instead of mocker for better control and clarity.
    • Adjusted assertions in test_generate_simple_tool_call to reflect the updated agent loop logic and history management.
    • Improved assertions in test_generate_user_rejects_edit to verify the presence of the rejection message in the history.
    • Fixed issues with mocking and assertion in multiple tests to ensure correct behavior and prevent false positives.
  • tests/models/test_model_error_handling_additional.py
    • Updated the mock_count_tokens return value to 90000 to be higher than the updated OLLAMA_MAX_CONTEXT_TOKENS value.
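
As referenced in the test_gemini.py items above, a sketch of what a mock_console fixture with a working status context manager might look like; the real fixture in this repository may differ:

from unittest.mock import MagicMock

import pytest

@pytest.fixture
def mock_console():
    """Console mock whose .status() works as a context manager with .update()."""
    console = MagicMock()
    status = MagicMock()
    status.__enter__.return_value = status  # supports `with console.status(...) as s:`
    status.__exit__.return_value = False    # do not swallow exceptions from the body
    console.status.return_value = status
    return console
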
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


The agent thinks, a spinner bright,
Tools are called, in digital night.
Errors loom, but code stands tall,
A fallback waits, to heed the call.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request introduces several improvements to the Gemini model integration, including enhanced error handling, status indicators, and test updates. The changes appear well-structured and address the issues outlined in the pull request description. However, some modifications require careful review to ensure correctness and maintainability.

Summary of Findings

  • Model Name Update: The fallback model name has been updated from gemini-1.5-flash-latest to gemini-2.0-flash. Ensure this change is intentional and aligns with the desired model behavior and capabilities.
  • STOP Reason Handling: The logic for handling the STOP finish reason has been modified to check for final text content before setting task_completed. Verify that this change correctly handles cases where the model stops without providing text.
  • Tool Execution Status Update: The status spinner is now updated before and after tool execution. Confirm that these updates provide useful feedback to the user and do not introduce any performance issues.
  • Quota Error Logging: Detailed quota error information is now logged at the DEBUG level. This is beneficial for debugging, but ensure that sensitive information is not inadvertently logged (one possible redaction pattern is sketched after this list).
  • Help Text Generator: The _get_help_text method has been replaced with a pass statement. This removes the detailed help text. Determine if this is intentional and if an alternative help mechanism is in place.
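
On the quota-logging point, one possible redaction pattern, purely illustrative; the markers below are hypothetical and not fields the Gemini client is known to emit:

import logging

log = logging.getLogger(__name__)

def log_quota_error(exc: Exception) -> None:
    details = str(exc)
    # Hypothetical markers: scrub anything that looks like a credential
    # before the error text reaches the debug log.
    if any(marker in details for marker in ("key=", "Authorization:")):
        details = "<redacted: possible credential in error message>"
    log.debug("Gemini quota error: %s", details)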

Merge Readiness

The pull request is not quite ready for merging. The changes related to the STOP reason handling and the removal of the help text generator should be carefully reviewed and verified. Additionally, the potential logging of sensitive information in quota errors should be addressed. Once these issues are resolved, the pull request can be considered for merging. I am unable to approve this pull request, and users should have others review and approve this code before merging.

Comment on lines 845 to +847
    def _get_help_text(self) -> str:
        """Generates comprehensive help text for the command line interface."""
        # Get tool descriptions for the help text
        tool_descriptions = []
        for tool_name, tool_instance in AVAILABLE_TOOLS.items():
            desc = getattr(tool_instance, "description", "No description")
            # Keep only the first line or a short summary
            if "\n" in desc:
                desc = desc.split("\n")[0].strip()
            # Format as bullet point with tool name and description
            tool_descriptions.append(f" • {tool_name}: {desc}")

        # Sort the tools alphabetically
        tool_descriptions.sort()

        # Format the help text to be comprehensive but without Rich markup
        help_text = f"""
Help
Interactive Commands:
/exit - Exit the CLI tool
/help - Display this help message
CLI Commands:
gemini setup KEY - Configure the Gemini API key
gemini list-models - List available Gemini models
gemini set-default-model NAME - Set the default Gemini model
gemini --model NAME - Use a specific Gemini model
Workflow: Analyze → Plan → Execute → Verify → Summarize
Available Tools:
{chr(10).join(tool_descriptions)}
Tips:
• You can use Ctrl+C to cancel any operation
• Tools like 'edit' and 'create_file' will request confirmation before modifying files
• Use 'view' to examine file contents before modifying them
• Use 'task_complete' to signal completion of a multi-step operation
For more information, visit: https://github.com/BlueCentre/cli-code
"""

        return help_text
        # Implementation of _get_help_text method
        pass


high

The implementation of _get_help_text has been replaced with a pass statement. This removes the detailed help text. Is this intentional? If so, ensure that an alternative help mechanism is in place, or consider restoring the help text generation logic.

        # Implementation of _get_help_text method
        pass

Comment on lines 269 to 279
                            if final_text.strip():  # If there WAS text content with the STOP
                                log.info("Model stopped with final text content.")
                                final_summary = final_text.strip()
                                task_completed = True
                                break  # Exit loop immediately on STOP with text
                            else:
                                # log.warning("Model stopped (finish_reason=STOP) but provided no text content. Letting loop continue or finish naturally.") # Removed warning
                                # Do NOT set final_summary here
                                # Do NOT set task_completed = True here
                                # Do NOT break here - let the loop potentially timeout or handle unexpected exit later
                                pass  # Continue processing other parts or finish loop iteration


medium

Consider adding a log message at the warning level to indicate that the model stopped without providing text content. This could help in debugging and understanding model behavior in such cases.

                            else:
                                # log.warning("Model stopped (finish_reason=STOP) but provided no text content. Letting loop continue or finish naturally.") # Removed warning
                                log.warning("Model stopped (finish_reason=STOP) but provided no text content.")
                                # Do NOT set final_summary here
                                # Do NOT set task_completed = True here
                                # Do NOT break here - let the loop potentially timeout or handle unexpected exit later

Comment on lines 34 to +36
MAX_OLLAMA_ITERATIONS = 5 # Limit tool call loops for Ollama initially
SENSITIVE_TOOLS = ["edit", "create_file"] # Define sensitive tools requiring confirmation
OLLAMA_MAX_CONTEXT_TOKENS = 8000 # Example token limit for Ollama models, adjust as needed
OLLAMA_MAX_CONTEXT_TOKENS = 80000 # Example token limit for Ollama models, adjust as needed


medium

The OLLAMA_MAX_CONTEXT_TOKENS has been increased from 8000 to 80000. Please confirm that this value is appropriate for the Ollama models being used and that it does not lead to excessive memory consumption or performance issues.

Suggested change
MAX_OLLAMA_ITERATIONS = 5 # Limit tool call loops for Ollama initially
SENSITIVE_TOOLS = ["edit", "create_file"] # Define sensitive tools requiring confirmation
OLLAMA_MAX_CONTEXT_TOKENS = 8000 # Example token limit for Ollama models, adjust as needed
OLLAMA_MAX_CONTEXT_TOKENS = 80000 # Example token limit for Ollama models, adjust as needed
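
For context, a rough sketch of how a context-token ceiling like this is typically enforced; the count_tokens callable and the keep-the-system-prompt convention are assumptions, not necessarily this repository's actual logic:

OLLAMA_MAX_CONTEXT_TOKENS = 80000

def trim_history(history: list[dict], count_tokens) -> list[dict]:
    # Drop the oldest non-system messages until the conversation fits
    # under the ceiling; index 0 is assumed to hold the system prompt.
    while len(history) > 1 and count_tokens(history) > OLLAMA_MAX_CONTEXT_TOKENS:
        history.pop(1)
    return history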

@github-actions

Code Coverage Report

📊 Current Coverage: 90.10%

Detailed coverage analysis is available in SonarCloud

Coverage Change Details

This shows code coverage for changes in this PR. To improve coverage, consider adding tests for new or modified code.



@ipv1337 ipv1337 merged commit 9c37fdf into main Apr 15, 2025
3 checks passed
@ipv1337 ipv1337 deleted the fix/gemini branch April 15, 2025 07:40