common/tasks: New dir, add AI-driven forge review workflow #64
base: main
Conversation
Summary of Changes
Hello @cgwalters, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request establishes a foundational framework for integrating AI agents into the development workflow, particularly for code reviews. It introduces a dedicated directory for AI tasks and a detailed, human-centric AI-assisted code review process. This process leverages a local JSONL file for AI-generated comments, ensuring human inspection and approval before submission, and is complemented by a new comprehensive set of code review guidelines.
This builds on #63.
Summary of Changes
This pull request establishes a structured approach for AI-driven development by introducing a dedicated tasks directory for AI agents to execute.
Summary of Changes
This pull request establishes a foundational framework for integrating AI agents into the development workflow, primarily focusing on code review. It introduces a dedicated directory for AI tasks and a detailed, human-supervised AI-assisted code review process. This process ensures that AI contributions are inspectable and editable by humans before being published as draft reviews across various forge platforms. Additionally, it provides a comprehensive set of code review guidelines to standardize quality and practices within the project.
Code Review
This pull request introduces a new tasks directory for AI agents, starting with a task for performing AI-assisted code reviews. It also adds a comprehensive REVIEW.md file with code review guidelines. The new workflow for code reviews is well-defined, using a local JSONL file for human inspection before submission. My review focuses on improving the clarity of instructions for the AI, and enhancing the robustness, portability, and consistency of the shell scripts provided in the perform-forge-review.md task.
Code Review
This pull request introduces a new tasks directory for AI agents, along with a comprehensive perform-forge-review task and general code review guidelines. The documentation is well-structured and detailed. My review focuses on improving the robustness of the shell script examples provided in the task definition. I've identified a few areas where the scripts could fail or behave unexpectedly, particularly with empty input files or comment bodies containing special characters. I've provided suggestions to make these scripts more resilient. I also have a minor suggestion to improve clarity in AGENTS.md.
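To illustrate the kind of hardening being suggested, here is a minimal sketch (the file name, variable names, and guard are assumptions for illustration; the PR's actual scripts may differ):

```sh
#!/bin/sh
# Sketch only: bail out early on an empty review file, and let jq handle all
# quoting so comment bodies containing quotes, backticks, or newlines survive
# intact rather than being interpolated by the shell.
set -eu

review_file=${REVIEW_FILE:-review.jsonl}

if [ ! -s "$review_file" ]; then
    echo "error: no review comments found in $review_file" >&2
    exit 1
fi

# Collect the JSONL lines into a single JSON array without shell interpolation.
jq -s '.' "$review_file" > comments.json
```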
OK so I did bootc-dev/bcvk#167 (review), which was a test case for this - but on the 4th try the agent incorrectly submitted it as not-draft! The instructions are clear... but it messed up. In the end this obviously calls out for a dedicated tool (CLI binary, or MCP server, or both). One thing I think would likely help a lot with a dedicated tool is having a check where "add review note" requires submitting the matching line text, much like how the "file edit" MCP tools tend to operate to increase reliability. Anyway, as far as PoC "code" goes, I'd say this was a success and merits further work!
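As a rough sketch of that check (the script name, arguments, and JSONL field names are hypothetical and not defined by this PR):

```sh
#!/bin/sh
# Hypothetical "add review note" helper: refuse to record a comment unless the
# caller supplies the exact text currently on the target line, much like how
# file-edit MCP tools anchor edits to known content.
# Usage: add-review-note.sh FILE LINE "expected line text" "comment body"
set -eu
file=$1 line=$2 expected=$3 body=$4

actual=$(sed -n "${line}p" "$file")
if [ "$actual" != "$expected" ]; then
    echo "error: line $line of $file does not match the supplied text" >&2
    exit 1
fi

# Append the note to the local JSONL review file (field names are illustrative).
jq -nc --arg path "$file" --argjson line "$line" --arg body "$body" \
    '{path: $path, line: $line, body: $body}' >> review.jsonl
```

Anchoring the note to the current line text means a stale or misremembered location fails loudly instead of landing on the wrong line.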
Force-pushed from 3e5c77a to 7e59a96
Cool, that's done. Now obviously I really dislike shell scripts, but this is currently easy to copy-pasta for folks who want to. If this grows beyond the PoC phase I'll probably investigate making it something more like a Real Tool in Rust or so.
/gemini review
Code Review
This pull request introduces a well-structured framework for AI-driven tasks, starting with a comprehensive code review workflow. The documentation is thorough, and the scripts for interacting with different forges are a great addition. My review focuses on improving the consistency and reusability of the newly added shell scripts. I've suggested changes to align default file paths and make the review attribution more flexible.
Force-pushed from 7e59a96 to 4842fdf
OK I spent way too long on this markdown and shell script today! But I am definitely liking the result so far!
Force-pushed from 4842fdf to d2a83de
And perhaps arguably this should actually live in our devcontainer. OK, after some research there are some promising projects out there like https://github.com/google/git-appraise - and in theory maybe we could store our draft review using git notes by default, or at least in a compatible format? But we can't hard-require pushing as notes, of course; most people will want the forge-native review model.
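For reference, a draft review kept as a local JSONL file can be attached to the commit under review with plain git notes; a small sketch, assuming an arbitrary notes ref and file name:

```sh
# Attach the local JSONL draft review to the commit under review.
git notes --ref=reviews add -F review.jsonl HEAD

# Read it back, or share it with anyone who opts in to fetching the ref.
git notes --ref=reviews show HEAD
git push origin refs/notes/reviews
```

Nothing here forces anyone to fetch or push the notes ref, so the forge-native review flow stays the default.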
OK, I think this should use https://agentskills.io/home instead.
…format

Add a tasks directory designed primarily for AI agents to execute, using the Agent Skills specification (https://agentskills.io/). The first skill is perform-forge-review, which defines an AI-augmented, human-approved code review workflow.

The key design principle is that the AI builds a review in a local JSONL file, which the human can inspect and edit before submission. The review is submitted as a pending/draft review, allowing the human to make final edits in the forge UI before publishing.

Structure follows the Agent Skills format:
- Each skill is a directory containing SKILL.md
- Supporting scripts live in a scripts/ subdirectory
- A symlink from .opencode/skill enables native OpenCode discovery

Assisted-by: OpenCode (Claude Sonnet 4)
Signed-off-by: Colin Walters <walters@verbum.org>
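For readers skimming the thread, here is a hedged sketch of what the GitHub side of that flow can look like (the field names, file name, and repo placeholders are illustrative; the actual scripts in this PR may differ):

```sh
# The agent appends one JSON object per proposed comment to a local file the
# human can open, edit, or delete lines from before anything is submitted.
cat > review.jsonl <<'EOF'
{"path": "src/main.rs", "line": 42, "body": "Consider handling the error case here."}
{"path": "docs/README.md", "line": 7, "body": "Typo: s/teh/the/"}
EOF

# Submitting without an "event" field creates a PENDING (draft) review, so the
# human gets a final chance to edit in the forge UI before publishing.
jq -s '{comments: [.[] | {path, line, body, side: "RIGHT"}]}' review.jsonl \
  | gh api repos/OWNER/REPO/pulls/PR_NUMBER/reviews --method POST --input -
```

Because the review is created without an event, it stays pending until the human explicitly submits it from the forge UI.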
Hummm... maybe this should really be in a new bootc-dev/skills repo? We could just instruct in our AGENTS.md to fetch that on demand if "bootc-dev skills" are requested. I'm not totally happy with auto-syncing content like this across all the repos; it will add even more noise.
After some discussion there was a weak consensus to create a separate repo for this.
Add a tasks directory designed primarily for AI agents to execute.
These are called "skills" in Claude Code and "commands" in OpenCode,
but they're simply structured markdown files.
The first task is perform-forge-review.md, which defines an AI-augmented
human-approved code review workflow. The key design principle is that
the AI builds a review in a local JSONL file, which the human can
inspect and edit before submission. The review is submitted as a
pending/draft review, allowing the human to make final edits in the
forge UI before publishing.
Assisted-by: OpenCode (Claude Sonnet 4)
Signed-off-by: Colin Walters <walters@verbum.org>