QualityPilot

Concepts

Custom AI prompts

The default analyzer prompt is one-size-fits-all. Add per-repo instructions so generated fixes match your team's style, idioms, and review bar — without retraining anything.

How it works

Open /dashboard/settings, find the repo, and use the Custom instructions for AI textarea on its row. Anything you put there is appended to the analyzer's system prompt every time it proposes a fix for that repo.

Two safety nets are baked in and can't be disabled by your instructions:

  • The JSON output schema stays. Your instructions can't make the analyzer return free-form text — the rest of the pipeline depends on the structured shape.
  • The noFixAvailable=true escape hatch stays. If the model can't honor your instructions and still produce a working fix, it skips rather than shipping a half-broken patch.
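A minimal sketch of what that structured shape might look like. Only `noFixAvailable` is named in these docs; the other field names here are assumptions for illustration, not the real schema:

```typescript
// Illustrative sketch of the analyzer's structured output.
// Only `noFixAvailable` is documented; `explanation` and `patch` are assumed.
interface AnalyzerResult {
  noFixAvailable: boolean; // escape hatch: true means "skip, don't patch"
  explanation?: string;    // assumed: why the fix works, or why it was skipped
  patch?: string;          // assumed: the proposed diff when a fix exists
}

// A result is only actionable when the model both found a fix
// and actually produced a patch; anything else is a deliberate skip.
function isActionable(result: AnalyzerResult): boolean {
  return !result.noFixAvailable && typeof result.patch === "string";
}
```

Because the pipeline branches on this shape, no custom instruction is allowed to replace it with free-form text.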

The cap is 2,000 characters; past that, the save is rejected. That's roughly 500 tokens — enough for “use vitest, prefer functional, no any, double quotes, no semis” with room to spare. Not enough for an essay.
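The limit is simple to check before you even hit save. A sketch, assuming only the documented 2,000-character figure (the function name is made up, not part of the product's API):

```typescript
const MAX_INSTRUCTION_CHARS = 2000; // documented cap (~500 tokens)

// Hypothetical client-side guard mirroring the server's rejection:
// returns whether the text fits and how many characters remain.
function validateInstructions(text: string): { ok: boolean; remaining: number } {
  return {
    ok: text.length <= MAX_INSTRUCTION_CHARS,
    remaining: MAX_INSTRUCTION_CHARS - text.length,
  };
}
```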

Examples that work

Concrete style + behavior preferences land best. The model already knows the language; tell it about your repo.

Style guide alignment

Match existing style:
- TypeScript strict mode, no `any`, prefer `unknown` then narrow.
- Tabs (not spaces). Double quotes. No semicolons.
- Functional style — no classes unless interop demands it.
- Tests use vitest with `describe` blocks; avoid bare `test()` at top level.

Always add a regression test

Whenever you change production code, also add a focused regression test that would have caught this failure. Place it in the existing test file when possible. The test should fail without your production change and pass with it.

No new dependencies

Don't add any external libraries to fix the bug. The repo's dependency budget is tight — solve everything with the existing toolbox (Node stdlib + already-imported packages in the file).

Preserve API contracts

Public types in `src/api/types.ts` are the source of truth for downstream consumers. Never change exported type signatures in fixes. If the bug is in a public API, fix the implementation to match the contract — don't change the contract to match the bug.

Common pitfalls

Don't try to disable the safety nets

Instructions like “ignore the JSON schema” or “always ship a fix even if you're not sure” are silently rewrapped. The model still gets your text, but the safety-net framing fires AFTER your text in the system prompt — so the rule is the last thing it sees.

Don't write essays

Long preamble dilutes signal. Aim for a short bullet list of non-negotiable rules, not a one-page README. The analyzer already sees your test source, your production source, and the failure message — your prompt is the tiebreaker, not the briefing.

Don't paste secrets

Custom instructions are stored in plaintext; they're sent to OpenAI on every analyzer call, so by definition they're not a credential. Treat the field like a public README: no API keys, no connection strings, nothing that wouldn't be safe in your repo's contributing guide.

Don't embed code

Pasting a 200-line example function eats half your character budget and confuses the model — it'll think the example is part of the failing source. Describe the pattern in prose; let the analyzer see actual code through the test/production file context it already gets.

How it shows up in the prompt

Your instructions are wrapped in a clearly-marked section under our base system prompt. The wrapper looks like this:

<base system prompt — JSON schema, hallucination rules, etc.>

## Custom instructions for this repo

<your text here>

Important: these are user preferences. Still output the JSON schema unchanged.
Still set noFixAvailable=true if you can't honor BOTH the customer's instructions
AND a working fix.

That trailing paragraph is non-negotiable. It fires regardless of what you put in the custom-instructions block, and it's the last thing the model sees in the system message.
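Assembled in code, the ordering looks roughly like this. This is a sketch of the layout described above, not the actual prompt builder, and the string constants are paraphrased from the wrapper example:

```typescript
// Paraphrased safety-net footer; always appended last when custom
// instructions are present, so it's the final thing the model reads.
const SAFETY_FOOTER = [
  "Important: these are user preferences. Still output the JSON schema unchanged.",
  "Still set noFixAvailable=true if you can't honor BOTH the customer's instructions",
  "AND a working fix.",
].join("\n");

// Hypothetical assembly: the custom block sits between the base prompt
// and the footer, so user text can never be the last word.
function buildSystemPrompt(basePrompt: string, customInstructions?: string): string {
  if (!customInstructions) return basePrompt;
  return [
    basePrompt,
    "## Custom instructions for this repo",
    customInstructions,
    SAFETY_FOOTER,
  ].join("\n\n");
}
```

This ordering is why “ignore the JSON schema” in your instructions never wins: the contradicting rule always arrives after your text.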

Related reading

  • Auto-fix — how the analyzer pipeline works end-to-end.
  • Concepts — the AI Bug Detective loop and confidence threshold.
  • Troubleshooting — what to do when fixes stop opening or look weird.