Add contribution guidelines for pull requests to copilot review#1053

Open
mbaluda wants to merge 5 commits into main from mbaluda/copilot-review

Conversation

@mbaluda (Collaborator) commented Feb 23, 2026

Added guidelines for reviewing PRs with copilot.

Description

Adds guidelines for Copilot code review, based on the existing checklist

Change request type

  • Release or process automation (GitHub workflows, internal scripts)
  • Internal documentation
  • External documentation
  • Query files (.ql, .qll, .qls or unit tests)
  • External scripts (analysis report or other code shipped as part of a release)

Rules with added or modified queries

  • No rules added
  • Queries have been added for the following rules:
    • rule number here
  • Queries have been modified for the following rules:
    • rule number here

Release change checklist

A change note (development_handbook.md#change-notes) is required for any pull request which modifies:

  • The structure or layout of the release artifacts.
  • The evaluation performance (memory, execution time) of an existing query.
  • The results of an existing query in any circumstance.

If you are only adding new rule queries, a change note is not required.

Author: Is a change note required?

  • Yes
  • No

🚨🚨🚨
Reviewer: Confirm that format of shared queries (not the .qll file, the
.ql file that imports it) is valid by running them within VS Code.

  • Confirmed

Reviewer: Confirm that either a change note is not required or the change note is required and has been added.

  • Confirmed

Query development review checklist

For PRs that add new queries or modify existing queries, the following checklist should be completed by both the author and reviewer:

Author

  • Have all the relevant rule package description files been checked in?
  • Have you verified that the metadata properties of each new query are set appropriately?
  • Do all the unit tests contain both "COMPLIANT" and "NON_COMPLIANT" cases?
  • Are the alert messages properly formatted and consistent with the style guide?
  • Have you run the queries on OpenPilot and verified that the performance and results are acceptable?
    As a rule of thumb, predicates specific to the query should take no more than 1 minute, and for simple queries be under 10 seconds. If this is not the case, this should be highlighted and agreed in the code review process.
  • Does the query have an appropriate level of in-query comments/documentation?
  • Have you considered/identified possible edge cases?
  • Does the query not reinvent features in the standard library?
  • Can the query be simplified further (not golfed!)?

Reviewer

  • Have all the relevant rule package description files been checked in?
  • Have you verified that the metadata properties of each new query are set appropriately?
  • Do all the unit tests contain both "COMPLIANT" and "NON_COMPLIANT" cases?
  • Are the alert messages properly formatted and consistent with the style guide?
  • Have you run the queries on OpenPilot and verified that the performance and results are acceptable?
    As a rule of thumb, predicates specific to the query should take no more than 1 minute, and for simple queries be under 10 seconds. If this is not the case, this should be highlighted and agreed in the code review process.
  • Does the query have an appropriate level of in-query comments/documentation?
  • Have you considered/identified possible edge cases?
  • Does the query not reinvent features in the standard library?
  • Can the query be simplified further (not golfed!)?
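The "COMPLIANT"/"NON_COMPLIANT" checklist item above lends itself to a quick automated pre-check before review. The sketch below is illustrative only: the helper names and the exact comment-matching regex are assumptions, not project tooling.

```python
import re
from pathlib import Path


def annotation_counts(test_file: Path) -> dict:
    """Count // COMPLIANT and // NON_COMPLIANT markers in a unit test file."""
    text = test_file.read_text()
    return {
        # Anchoring at "//" keeps "COMPLIANT" from also matching inside
        # "NON_COMPLIANT" comments.
        "COMPLIANT": len(re.findall(r"//\s*COMPLIANT\b", text)),
        "NON_COMPLIANT": len(re.findall(r"//\s*NON_COMPLIANT\b", text)),
    }


def has_both_cases(test_file: Path) -> bool:
    """True if the test file exercises both compliant and non-compliant code."""
    counts = annotation_counts(test_file)
    return counts["COMPLIANT"] > 0 and counts["NON_COMPLIANT"] > 0
```

Running `has_both_cases` over each changed test file flags tests that only demonstrate violations (or only clean code) before a human reviewer ever looks at them.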

Added guidelines for reviewing PRs with copilot.
Copilot AI review requested due to automatic review settings February 23, 2026 12:25
Copilot AI (Contributor) left a comment

Pull request overview

This PR adds contribution guidelines for automated PR reviews by GitHub Copilot. The new .github/copilot-instructions.md file provides a streamlined checklist based on the existing pull request template, focusing on the essential review criteria for change notes and query development.

Changes:

  • Added .github/copilot-instructions.md containing guidelines for Copilot to review pull requests, including change note requirements and query development checklist items
Comments suppressed due to low confidence (1)

.github/copilot-instructions.md:20

  • The original PR template includes two important checklist items that are missing from this file: "Have you considered/identified possible edge cases?" and "Have you run the queries on OpenPilot and verified that the performance and results are acceptable?" (with the 1 minute/10 seconds guideline). These items are important quality checks that should be retained in the Copilot review guidelines.
For PRs that add new queries or modify existing queries, also consider the following review checklist:

- Confirm that the output format of shared queries is valid.
- Have all the relevant rule package description files been checked in?
- Have you verified that the metadata properties of each new query are set appropriately?
- Do all the unit tests contain both "COMPLIANT" and "NON_COMPLIANT" cases?
- Are all the alerts in the expected file annotated as NON_COMPLIANT in the test source file?
- Are the alert messages properly formatted and consistent with the style guide?
- Does the query have an appropriate level of in-query comments/documentation?
- Does the query not reinvent features in the standard library?
- Can the query be simplified further (not golfed!)?


Clarified the requirement for change notes in pull requests and updated wording for reporting checks.
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 3 comments.



mbaluda and others added 2 commits February 23, 2026 13:48
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@MichaelRFairhurst (Collaborator) left a comment

This is fantastic Mauro, thanks for thinking of and taking the initiative on this!!

@@ -0,0 +1,20 @@
A change note is required for any pull request which modifies:
Collaborator

Suggested change
A change note is required for any pull request which modifies:
---
description: 'Code review guidelines for GitHub copilot in this project'
applyTo: '**'
excludeAgent: ["coding-agent"]
---
# Code Review Instructions
A change note is required for any pull request which modifies:

And perhaps move to .github/instructions/code-review.instructions.md

- Does the query not reinvent features in the standard library?
- Can the query be simplified further (not golfed!)?

In your review output, list only those checklist items that are not satisfied or are uncertain, but also report any other problems you find outside this checklist; do not mention checklist items that clearly pass.
Collaborator

I took the liberty of expanding this a good amount. Your latest review shows the value of describing this process to Copilot in detail, I think!

Suggested change
In your review output, list only those checklist items that are not satisfied or are uncertain, but also report any other problems you find outside this checklist; do not mention checklist items that clearly pass.
In your review output, list only those checklist items that are not satisfied or are uncertain, but also report any other problems you find outside this checklist; do not mention checklist items that clearly pass.
## Validating tests and .expected files
The test infrastructure for CodeQL that we use in this project involves the creation of a test directory with the following structure:
- Test root is `some/path/test/path/to/feature` (mirrors `some/path/src/path/to/query`)
- At least one test `c` or `c++` file, typically named `test.c`/`test.cpp`, with lines annotated `// COMPLIANT` or `// NON_COMPLIANT`
- A `.ql` file with test query logic, or a `.qlref` file referring to the production query logic
- A matching `FOO.expected` file to go with each `FOO.ql` or `FOO.qlref`, containing the test query results for the test `c` or `c++` files
- Note that some test directories simply have a `testref` file, to document that a certain query is tested in a different directory.
As a code reviewer, it is critical to ensure that the results in the `.expected` file match the comments in the test file.
The `.expected` file uses a columnar format:
- For example, a basic row may look like `| test.cpp:8:22:8:37 | element | message |`.
- For a query with `select x, "test"`, the columns are `| x.getLocation() | x.toString() | "test" |`
- An alert with placeholders will use `$@` in the message, and have additional `element`/`string` columns for placeholder, e.g. `| test.cpp:8:22:8:37 | ... + ... | Invalid add of $@. | test.cpp:7:5:7:12 | my_var | deprecated variable my_var |`.
- Remember, there is one `.expected` file for each `.ql` or `.qlref` file.
- Each `.expected` file will contain the results for all test c/cpp files.
- The `toString()` format of QL objects is deliberately terse for performance reasons.
- For certain queries such as "path problems", the results may be grouped into categories via text lines with the category name, e.g. `nodes` and `edges` and `problems`.
Reviewing tests in this style can be tedious and error prone, but fundamental to the effectiveness of our TDD requirements in this project.
When reviewing tests, it is critical to:
- Check that each `NON_COMPLIANT` case in the test file has a row in the correct `.expected` file referring to the correct location.
- Check that each row in each `.expected` file has a `NON_COMPLIANT` case in the test file at the correct location.
- Check that there are no `.expected` rows that refer to test code cases marked as `COMPLIANT`, or with no comment.
- Note that it is OK if the locations of the comment are not precisely aligned with the alert.
- Check that the alert message and placeholders are accurate and understandable.
- Consider the "test coverage" of the query: are each of its logical statements effectively exercised, both individually and in combination? The test should be neither overly bloated nor underspecified.
- Consider the edge cases of the language itself: will the analysis work in non-trivial cases, and are all relevant language concepts tested here? This doesn't need to be exhaustive, but it should be thoughtfully thorough.
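The row-by-annotation cross-check described in the suggestion above can be sketched as a small script. This is a simplified illustration under stated assumptions: it only handles the basic `| file:line:col:line:col | element | message |` row shape (extra placeholder columns still start with the same location column, so they pass through, but `nodes`/`edges` sections of path-problem results would need dedicated handling), and all helper names are hypothetical.

```python
import re
from pathlib import Path

# Leading column of an .expected row: "| file:startline:startcol:endline:endcol |"
ROW = re.compile(r"^\|\s*(?P<file>[^:|]+):(?P<line>\d+):\d+:\d+:\d+\s*\|")


def noncompliant_lines(test_file: Path) -> set:
    """Line numbers annotated // NON_COMPLIANT in a test source file."""
    return {
        i
        for i, line in enumerate(test_file.read_text().splitlines(), start=1)
        if "NON_COMPLIANT" in line
    }


def expected_alert_lines(expected_file: Path, source_name: str) -> set:
    """Alert start lines recorded in a .expected file for one source file."""
    lines = set()
    for row in expected_file.read_text().splitlines():
        m = ROW.match(row)
        if m and m.group("file") == source_name:
            lines.add(int(m.group("line")))
    return lines


def mismatches(test_file: Path, expected_file: Path) -> dict:
    """Annotations without alerts, and alerts without annotations."""
    annotated = noncompliant_lines(test_file)
    alerted = expected_alert_lines(expected_file, test_file.name)
    return {
        "annotated_but_no_alert": annotated - alerted,
        "alert_but_not_annotated": alerted - annotated,
    }
```

Both result sets being empty is the expected state; anything else is exactly the kind of test/`.expected` drift the checklist asks reviewers to catch by eye.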

@MichaelRFairhurst (Collaborator)

I had too much fun with this idea 😂

Feel free to just plain ship it!! 🚀
