15 changes: 14 additions & 1 deletion CONTRIBUTING.md
@@ -36,14 +36,17 @@ On [GitHub Codespaces](https://github.com/features/codespaces) it's even simpler
> If your pull request introduces a large change that materially impacts the work of the CLI or the rest of the repository (e.g., you're introducing new templates, arguments, or otherwise major changes), make sure that it was **discussed and agreed upon** by the project maintainers. Pull requests with large changes that did not have a prior conversation and agreement will be closed.

1. Fork and clone the repository
1. Configure and install the dependencies: `uv sync`
1. Configure and install the dependencies: `uv sync --extra test`
1. Make sure the CLI works on your machine: `uv run specify --help`
1. Create a new branch: `git checkout -b my-branch-name`
1. Make your change, add tests, and make sure everything still works
1. Test the CLI functionality with a sample project if relevant
1. Push to your fork and submit a pull request
1. Wait for your pull request to be reviewed and merged.
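The steps above can be sketched as a shell session (the fork URL is a placeholder — substitute your own GitHub username and branch name):

```bash
# Placeholder URL: replace <your-username> with your GitHub user
git clone https://github.com/<your-username>/spec-kit.git
cd spec-kit
uv sync --extra test            # install project and test dependencies
uv run specify --help           # confirm the CLI works on your machine
git checkout -b my-branch-name  # branch for your change
```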

For the detailed test workflow, command-selection prompt, and PR reporting template, see [`TESTING.md`](./TESTING.md).
Before running the manual slash-command tests described below, activate the project virtual environment (see the Setup block in [`TESTING.md`](./TESTING.md)) and install the CLI from your working tree (`uv sync --extra test`, then `uv pip install -e .`), or otherwise ensure your shell resolves the local `specify` binary.

Here are a few things you can do that will increase the likelihood of your pull request being accepted:

- Follow the project's coding conventions.
@@ -62,6 +65,14 @@ When working on spec-kit:
3. Test script functionality in the `scripts/` directory
4. Ensure memory files (`memory/constitution.md`) are updated if major process changes are made

### Recommended validation flow

For the smoothest review experience, validate changes in this order:

1. **Run focused automated checks first** — use the quick verification commands in [`TESTING.md`](./TESTING.md) to catch packaging, scaffolding, and configuration regressions early.
2. **Run manual workflow tests second** — if your change affects slash commands or the developer workflow, follow [`TESTING.md`](./TESTING.md) to choose the right commands, run them in an agent, and capture results for your PR.
3. **Use local release packages when debugging packaged output** — if you need to inspect the exact files CI-style packaging produces, generate local release packages as described below.
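Concretely, that order corresponds to command sequences like the following, run from the repository root (both pytest invocations and the packaging script are documented in [`TESTING.md`](./TESTING.md)):

```bash
# 1. Focused automated checks (packaging, scaffolding, agent config)
uv run python -m pytest tests/test_core_pack_scaffold.py -q
uv run python -m pytest tests/test_agent_config_consistency.py -q

# 2. Manual workflow tests: scaffold a test project and exercise the
#    affected slash commands in your agent (see TESTING.md)

# 3. Only when debugging packaged output: build a local release package
AGENTS=copilot SCRIPTS=sh ./.github/workflows/scripts/create-release-packages.sh v1.0.0
```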

### Testing template and command changes locally

Running `uv run specify init` pulls released packages, which won’t include your local changes.
@@ -85,6 +96,8 @@ To test your templates, commands, and other changes locally, follow these steps:

Navigate to your test project folder and open the agent to verify your implementation.

If you only need to validate generated file structure and content before doing manual agent testing, start with the focused automated checks in [`TESTING.md`](./TESTING.md). Keep this section for the cases where you need to inspect the exact packaged output locally.

## AI contributions in Spec Kit

> [!IMPORTANT]
64 changes: 59 additions & 5 deletions TESTING.md
@@ -1,8 +1,59 @@
# Manual Testing Guide
# Testing Guide

This document is the detailed testing companion to [`CONTRIBUTING.md`](./CONTRIBUTING.md).

Use it for three things:

1. running quick automated checks before manual testing,
2. manually testing affected slash commands through an AI agent, and
3. capturing the results in a PR-friendly format.

Any change that affects a slash command's behavior requires manually testing that command through an AI agent and submitting results with the PR.

## Process
## Recommended order

1. **Sync your environment** — install the project and test dependencies.
2. **Run focused automated checks** — especially for packaging, scaffolding, agent config, and generated-file changes.
3. **Run manual agent tests** — for any affected slash commands.
4. **Paste results into your PR** — include both command-selection reasoning and manual test results.

## Quick automated checks

Run these before manual testing when your change affects packaging, scaffolding, templates, release artifacts, or agent wiring.

### Environment setup

```bash
cd <spec-kit-repo>
uv sync --extra test
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```

### Generated package structure and content

```bash
uv run python -m pytest tests/test_core_pack_scaffold.py -q
```

This validates the generated files that CI-style packaging depends on, including directory layout, file names, frontmatter/TOML validity, placeholder replacement, `.specify/` path rewrites, and parity with `create-release-packages.sh`.

### Agent configuration and release wiring consistency

```bash
uv run python -m pytest tests/test_agent_config_consistency.py -q
```

Run this when you change agent metadata, release scripts, context update scripts, or artifact naming.

### Optional single-agent packaging spot check

```bash
AGENTS=copilot SCRIPTS=sh ./.github/workflows/scripts/create-release-packages.sh v1.0.0
```

Inspect `.genreleases/sdd-copilot-package-sh/` and the matching ZIP in `.genreleases/` when you want to review the exact packaged output for one agent/script combination.
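For example, to look inside the generated output without extracting anything (the ZIP filename pattern is an assumption — check what `.genreleases/` actually contains):

```bash
# List the unpacked package tree for the copilot/sh combination
find .genreleases/sdd-copilot-package-sh -maxdepth 2 -type f

# Peek at the matching ZIP's contents (filename assumed; verify in .genreleases/)
unzip -l .genreleases/*.zip
```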

## Manual testing process

1. **Identify affected commands** — use the [prompt below](#determining-which-tests-to-run) to have your agent analyze your changed files and determine which commands need testing.
2. **Set up a test project** — scaffold from your local branch (see [Setup](#setup)).
@@ -13,19 +64,22 @@ Any change that affects a slash command's behavior requires manually testing tha
## Setup

```bash
# Install the CLI from your local branch
# Install the project and test dependencies from your local branch
cd <spec-kit-repo>
uv venv .venv
uv sync --extra test
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install -e .
# Ensure the `specify` binary in this environment points at your working tree so the agent runs the branch you're testing.

# Initialize a test project using your local changes
specify init /tmp/speckit-test --ai <agent> --offline
uv run specify init /tmp/speckit-test --ai <agent> --offline
cd /tmp/speckit-test

# Open in your agent
```
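To confirm the environment really resolves `specify` to your working tree rather than a released package, a quick check:

```bash
# Should resolve inside <spec-kit-repo>/.venv
which specify

# Should run the CLI from your editable install
uv run specify --help
```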

If you are testing the packaged output rather than the live source tree, create a local release package first as described in [`CONTRIBUTING.md`](./CONTRIBUTING.md).

## Reporting results

Paste this into your PR: