Add VectorLint documentation set with Mintlify configuration #75

taz-mon wants to merge 2 commits into TRocket-Labs:main
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration — Configuration used: defaults · Review profile: CHILL · Plan: Pro · Run ID:
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (2)

📝 Walkthrough: Added extensive documentation.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 10
🧹 Nitpick comments (3)
docs/introduction.mdx (1)
45-52: Consider clarifying the "PAT (Pay A Tax)" terminology. The acronym "PAT (Pay A Tax)" is introduced without explanation and doesn't appear in the codebase. If this is established terminology in the LLM evaluation space, consider adding a brief explanation or citation. If it's a coined term for this project, consider whether a more self-explanatory phrase would serve readers better.
💡 Example alternative phrasing

```diff
-VectorLint uses a PAT (Pay A Tax) evaluation pipeline to keep output precise:
+VectorLint uses a two-stage evaluation pipeline to keep output precise:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/introduction.mdx` around lines 45-52: The "PAT (Pay A Tax)" acronym in the "How false positives are reduced" section is unclear and not in the codebase; either add one short explanatory sentence after the first mention (e.g., define PAT as the pipeline's requirement that each candidate "pays" by providing required gate-check fields) or replace the acronym with a more descriptive phrase like "gate-check evaluation pipeline" and, if PAT is a coined term, add a parenthetical note that it is project-specific and a brief citation or link to further documentation (update the heading/first paragraph in the same section where "PAT (Pay A Tax)" appears).

docs/llm-providers.mdx (1)
10-16: Add Amazon Bedrock to "Supported Providers" for parity with implementation. The runtime supports `amazon-bedrock`, but it's missing from the provider table. This makes the reference incomplete for users choosing Bedrock.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/llm-providers.mdx` around lines 10-16: Add a new table row for Amazon Bedrock in the providers table: set Provider to "Amazon Bedrock", `LLM_PROVIDER` value to `amazon-bedrock`, and the Key variable to the exact environment variable name used by the runtime implementation (search code for the Bedrock env var, e.g., names containing "BEDROCK" or "AMAZON_BEDROCK") and insert that exact identifier into the Key variable column so the docs match the implementation.

docs/troubleshooting.mdx (1)
143: Consider more precise phrasing. The phrase "Very large context" could be strengthened by replacing the intensifier with more specific language, such as "Excessively large context" or "Context exceeding recommended limits."

✍️ Optional style improvement

```diff
-**Cause:** The file is too large to serve as reliable global context. Very large context degrades LLM precision and increases API costs.
+**Cause:** The file is too large to serve as reliable global context. Excessively large context degrades LLM precision and increases API costs.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/troubleshooting.mdx` at line 143: Replace the vague intensifier "Very large context" in the "Cause:" sentence with a more precise phrase such as "Excessively large context" or "Context exceeding recommended limits"; locate the sentence containing the exact phrase "Very large context" and update it to the chosen wording so the guidance is clearer and more specific.
In `@docs/VECTORLINT.md`:
- Around line 3-8: The header in VECTORLINT.md uses JSX-style comments `{/* ...
*/}`; update it to use HTML comments `<!-- ... -->` to match the project's
template and Markdown conventions — replace the `{/* ... */}` blocks in
VECTORLINT.md with `<!-- ... -->` and ensure the content and spacing remain
identical; also check the init-command.ts generated template (init-command.ts)
to confirm it already uses HTML comments and align any differences so both
sources use the same HTML comment format.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fc0953cc-02da-4285-9d7d-3205d0e44b14
⛔ Files ignored due to path filters (5)

- `docs/favicon.svg` is excluded by `!**/*.svg`
- `docs/logo/dark.svg` is excluded by `!**/*.svg`
- `docs/logo/light.svg` is excluded by `!**/*.svg`
- `docs/logo/logo.svg` is excluded by `!**/*.svg`
- `docs/logo/vectorlint.png` is excluded by `!**/*.png`
📒 Files selected for processing (27)

- .gitignore
- docs/VECTORLINT.md
- docs/best-practices.mdx
- docs/ci-integration.mdx
- docs/cli-reference.mdx
- docs/configuration-schema.mdx
- docs/configuration.mdx
- docs/customize-style-rules.mdx
- docs/density-scoring.mdx
- docs/docs.json
- docs/env-variables.mdx
- docs/false-positive-tuning.mdx
- docs/frontmatter-fields.mdx
- docs/how-it-works.mdx
- docs/initial-style-check.mdx
- docs/installation.mdx
- docs/introduction.mdx
- docs/llm-providers.mdx
- docs/presets.mdx
- docs/project-config.mdx
- docs/quality-scoring.mdx
- docs/quickstart.mdx
- docs/rubric-scoring.mdx
- docs/style-guide.mdx
- docs/team-rule-packs.mdx
- docs/troubleshooting.mdx
- docs/use-cases.mdx
💤 Files with no reviewable changes (1)
- .gitignore
```md
| Variable | Default | Description |
|----------|---------|-------------|
| `LLM_PROVIDER` | — | LLM provider to use: `openai`, `anthropic`, `gemini`, `azure` |
```
Use `azure-openai` (not `azure`) for `LLM_PROVIDER`.
Line 87 documents an unsupported discriminator value; users who copy it will get a failed provider selection.
Suggested doc fix

```diff
-| `LLM_PROVIDER` | — | LLM provider to use: `openai`, `anthropic`, `gemini`, `azure` |
+| `LLM_PROVIDER` | — | LLM provider to use: `openai`, `anthropic`, `gemini`, `azure-openai`, `amazon-bedrock` |
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/cli-reference.mdx` at line 87, The docs list an incorrect provider
value: the LLM_PROVIDER discriminator shown as "azure" is unsupported; update
the documentation to use "azure-openai" instead of "azure" where LLM_PROVIDER is
described (search for the LLM_PROVIDER entry in docs/cli-reference.mdx or the
table row containing `LLM_PROVIDER`) so users copy a valid provider identifier
and provider selection works as expected.
```md
VectorLint is configured through four files. Running `vectorlint init` creates the first two automatically.

| File | Location | Purpose | Required |
|------|----------|---------|----------|
| `VECTORLINT.md` | Project root | Global style instructions in plain language — prepended to every evaluation | Required for zero-config mode |
| `.vectorlint.ini` | Project root | File patterns, rule packs, strictness overrides | Required for rule packs and file-specific configuration |
| `config.toml` | `~/.vectorlint/` | LLM provider API keys — applies globally across projects | Always required |
| Rule pack files | `RulesPath` directory | Targeted LLM prompts for specific checks — for advanced content quality workflows | Optional |
```
Clarify init behavior and `config.toml` requiredness.
Lines 6 and 12 overstate the requirements: `vectorlint init --quick` does not create the same file set, and users can run with project-level env configuration without a global `~/.vectorlint/config.toml`.
Suggested doc fix

```diff
-VectorLint is configured through four files. Running `vectorlint init` creates the first two automatically.
+VectorLint can be configured through up to four files, depending on your setup mode.
@@
-| `config.toml` | `~/.vectorlint/` | LLM provider API keys — applies globally across projects | Always required |
+| `config.toml` | `~/.vectorlint/` | Global LLM provider defaults for all projects | Optional (recommended for global defaults) |
```

🤖 Prompt for AI Agents
+| `config.toml` | `~/.vectorlint/` | Global LLM provider defaults for all projects | Optional (recommended for global defaults) |🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/configuration.mdx` around lines 6 - 13, Update the docs to accurately
describe init behavior and config.toml requiredness: change the statement that
"Running `vectorlint init` creates the first two automatically" to note that
`vectorlint init` creates VECTORLINT.md and .vectorlint.ini by default but
`vectorlint init --quick` may only scaffold minimal files, and clarify that a
global `~/.vectorlint/config.toml` is not strictly required—users can supply LLM
API keys via project-level environment variables or other project config, so
reword the table row for `config.toml` to indicate it is "Recommended for global
API keys" or "Optional if API keys are provided per-project" and add a short
note about alternative ways to provide provider credentials.
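To make the suggested wording concrete, a global config along these lines could accompany the doc note. This is a sketch only: the exact file format and key names (`LLM_PROVIDER`, `OPENAI_API_KEY`) are assumed from the variable tables quoted elsewhere in this PR, not verified against the implementation.

```toml
# Hypothetical ~/.vectorlint/config.toml — optional if keys are provided per-project.
# Key names mirror the env variables documented in this PR; verify before use.
LLM_PROVIDER = "openai"
OPENAI_API_KEY = "sk-..."
```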
```md
Required only for rules that use the `technical-accuracy` evaluator. If not set, rules that depend on external lookup return reduced-confidence results.

| Variable | Required | Description |
|----------|----------|-------------|
| `SEARCH_PROVIDER` | If using technical-accuracy | Search provider to use. Accepted value: `perplexity`. |
| `PERPLEXITY_API_KEY` | If using Perplexity | API key for Perplexity search. |
```
Search-provider behavior is documented incorrectly.
Line 31 says rules return reduced-confidence results when no search provider is set, but `technical-accuracy` currently fails with a missing dependency error instead.
Suggested edit

```diff
-Required only for rules that use the `technical-accuracy` evaluator. If not set, rules that depend on external lookup return reduced-confidence results.
+Required only for rules that use the `technical-accuracy` evaluator. If not set, VectorLint raises a missing dependency error for those rules.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/env-variables.mdx` around lines 31 - 36, The docs incorrectly state that
rules using the technical-accuracy evaluator "return reduced-confidence results"
when no search provider is set; update the wording to reflect the actual
behavior: if `SEARCH_PROVIDER` (and required keys like `PERPLEXITY_API_KEY` for
`perplexity`) is not configured, the `technical-accuracy` evaluator currently
raises a missing-dependency error instead of degrading confidence. Edit the
table rows for `SEARCH_PROVIDER` and `PERPLEXITY_API_KEY` and the surrounding
sentence to mention that the evaluator will fail with a missing dependency error
(and list required provider values like `perplexity`) rather than returning
reduced-confidence results.
```md
When the same variable is set in multiple places, VectorLint resolves it in this order — highest precedence first:

1. Project `.env` file
2. Global `~/.vectorlint/config.toml`
3. System environment variables
```
Fix the precedence order to match runtime behavior.
The listed precedence is reversed for system environment variables: existing `process.env` values are loaded first and are never overwritten by `.env` or the global config.
Suggested edit

```diff
-When the same variable is set in multiple places, VectorLint resolves it in this order — highest precedence first:
-
-1. Project `.env` file
-2. Global `~/.vectorlint/config.toml`
-3. System environment variables
+When the same variable is set in multiple places, VectorLint resolves it in this order — highest precedence first:
+
+1. System environment variables (including CLI/shell-exported values)
+2. Project `.env` file
+3. Global `~/.vectorlint/config.toml`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/env-variables.mdx` around lines 83 - 87, The docs currently list
precedence as Project `.env` → Global `~/.vectorlint/config.toml` → System
environment variables, which is reversed; update the text to state that
VectorLint resolves variables with highest precedence first as: System
environment variables (existing process.env) → Global
`~/.vectorlint/config.toml` → Project `.env` file, and note that `.env` and
global config do not overwrite existing process.env values. Reference the exact
terms "Project `.env` file", "Global `~/.vectorlint/config.toml`", "System
environment variables", and "process.env" when making the change.
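A minimal sketch of the corrected precedence may help reviewers check the wording. The function name and shape below are illustrative only, not VectorLint's actual loader API; it assumes the dotenv-style rule that lower-precedence sources never overwrite keys that are already set.

```typescript
// Illustrative only — not VectorLint's real loader.
// Highest precedence first: process.env, then project .env, then global config.
type EnvMap = Record<string, string>;

function resolveEnv(processEnv: EnvMap, projectDotEnv: EnvMap, globalConfig: EnvMap): EnvMap {
  // Spread lowest-precedence source first so higher-precedence sources overwrite it,
  // which is equivalent to "existing process.env values are never overwritten".
  return { ...globalConfig, ...projectDotEnv, ...processEnv };
}

const resolved = resolveEnv(
  { LLM_PROVIDER: "anthropic" },                            // system environment
  { LLM_PROVIDER: "openai", OPENAI_API_KEY: "sk-project" }, // project .env
  { GEMINI_API_KEY: "g-global" },                           // global config.toml
);
// resolved.LLM_PROVIDER is "anthropic": the system environment wins.
```

Walking through the example: `.env` fills `OPENAI_API_KEY` because the system environment left it unset, and the global config only contributes keys neither higher source defined.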
```md
## Before you begin

VectorLint requires Node.js 18 or later. Node.js 22 LTS is recommended. If you need to install or update Node.js, use [nvm](https://github.com/nvm-sh/nvm) (recommended) or download directly from [nodejs.org](https://nodejs.org).
```
🧩 Analysis chain
🌐 Web query:
What is the current LTS status of Node.js 22 as of March 2026? Is it in active LTS?
💡 Result:
As of March 30, 2026, Node.js 22 is in Maintenance LTS status, not Active LTS. It receives security updates but not critical bug fixes for non-security issues.
Citations:
- 1: https://nodejs.org/en/about/previous-releases
- 2: https://github.com/nodejs/Release
- 3: https://github.com/nodejs/release
Clarify Node.js 22 LTS status in documentation.
Node.js 22 is currently in Maintenance LTS status (as of March 2026), not Active LTS. While technically in LTS, it receives only security updates without critical bug fixes for non-security issues. The documentation should either clarify this limited support phase or recommend a newer LTS version with a longer support window.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/installation.mdx` around lines 6 - 8, Update the "## Before you begin"
paragraph to clarify Node.js 22 is in Maintenance LTS (security-only updates)
and recommend using a currently Active LTS with a longer support window (e.g.,
Node.js 24 or the latest Active LTS); change the sentence that currently
recommends "Node.js 22 LTS is recommended" to explicitly note the maintenance
status and suggest the Active LTS alternative so readers know the difference and
get a longer supported release.
```md
| Provider | `LLM_PROVIDER` value | Key variable |
|----------|----------------------|--------------|
| OpenAI | `openai` | `OPENAI_API_KEY` |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` |
| Google Gemini | `gemini` | `GEMINI_API_KEY` |
| Azure OpenAI | `azure` | `AZURE_OPENAI_API_KEY` |
```
Fix Azure provider/env names to match runtime config.
Lines 15 and 76 should use `azure-openai` (not `azure`), and line 79 should use `AZURE_OPENAI_DEPLOYMENT_NAME`. The current values will produce an invalid configuration.
Suggested doc fix

```diff
-| Azure OpenAI | `azure` | `AZURE_OPENAI_API_KEY` |
+| Azure OpenAI | `azure-openai` | `AZURE_OPENAI_API_KEY` |
@@
-LLM_PROVIDER = "azure"
+LLM_PROVIDER = "azure-openai"
 AZURE_OPENAI_API_KEY = "..."
 AZURE_OPENAI_ENDPOINT = "https://your-resource.openai.azure.com/"
-AZURE_OPENAI_DEPLOYMENT = "your-deployment-name"
+AZURE_OPENAI_DEPLOYMENT_NAME = "your-deployment-name"
```

Also applies to: lines 72-80
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/llm-providers.mdx` around lines 10 - 16, Update the Azure provider
entries so runtime config matches: change the `LLM_PROVIDER` value from `azure`
to `azure-openai` wherever it appears (the provider table and the doc block
around lines 72–80) and replace the Azure env var
`AZURE_OPENAI_API_KEY`/incorrect name with `AZURE_OPENAI_DEPLOYMENT_NAME`
(ensure both the "Key variable" column and any explanatory text use
`AZURE_OPENAI_DEPLOYMENT_NAME`), so the provider string and env var names align
with the runtime configuration.
Tune how harshly a preset rule scores content using its rule ID:

```ini
[content/docs/**/*.md]
RunRules=VectorLint
AIPatterns.strictness=strict
```
Fix the preset rule ID in the strictness override example.
Line 58 uses `AIPatterns.strictness=strict`, but the preset rule ID is `AIPattern` (singular). As written, the override won't target the rule.
Suggested fix

```diff
 [content/docs/**/*.md]
 RunRules=VectorLint
-AIPatterns.strictness=strict
+AIPattern.strictness=strict
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/presets.mdx` around lines 53 - 59, The strictness override example uses
the wrong preset rule ID: it sets AIPatterns.strictness but the rule ID is
AIPattern (singular); update the example to target the correct rule ID by
replacing AIPatterns.strictness with AIPattern.strictness so the override
applies to the AIPattern rule ID.
```md
VectorLint evaluates the file against your `VECTORLINT.md` instructions and returns findings in your terminal.

{/* TBD: Add representative terminal output here once confirmed — scored (1–10) or violations list */}
```
Remove the unresolved TODO placeholder before publishing.
Line 79 still contains a TBD note for terminal output; this will leak unfinished content into public docs.
Suggested edit

````diff
-{/* TBD: Add representative terminal output here once confirmed — scored (1–10) or violations list */}
+```text
+✔ No violations found in path/to/doc.md
+```
+
+or
+
+```text
+✖ 2 violations found in path/to/doc.md
+- [Style Guide Compliance] Passive voice in sentence 3
+- [Style Guide Compliance] Unexplained acronym: "SDK"
+```
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/quickstart.mdx` at line 79, Replace the unresolved placeholder comment
"{/* TBD: Add representative terminal output here once confirmed — scored (1–10)
or violations list */}" with a concrete, final example of terminal output;
remove the TODO comment entirely and insert one of the suggested representative
outputs (e.g., a "No violations found" success block or an "✖ N violations
found" block with a short list of violations) so the published docs contain real
sample output instead of unfinished text.
```md
## Verify technical accuracy

The `technical-accuracy` evaluator uses a search provider to check factual claims in your content against current external sources. This catches outdated version numbers, deprecated API references, and incorrect command syntax — the category of errors that's hardest to catch in manual review.
```
Correct technical-accuracy behavior when no search provider is configured.
Line 26 implies reduced-confidence results, but the evaluator currently throws a missing dependency error if no search provider is available.
Suggested edit

```diff
-The `technical-accuracy` evaluator uses a search provider to check factual claims in your content against current external sources. This catches outdated version numbers, deprecated API references, and incorrect command syntax — the category of errors that's hardest to catch in manual review.
+The `technical-accuracy` evaluator uses a search provider to check factual claims in your content against current external sources. This catches outdated version numbers, deprecated API references, and incorrect command syntax — the category of errors that's hardest to catch in manual review.
+If no search provider is configured, VectorLint raises a dependency error for rules using this evaluator.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/use-cases.mdx` at line 26, The documentation claims the
technical-accuracy evaluator should produce reduced-confidence results when no
search provider is configured, but the implementation currently throws a
missing-dependency error; update the technical-accuracy evaluator (e.g., the
TechnicalAccuracyEvaluator class or evaluateTechnicalAccuracy function) to
detect absence of a search provider, avoid throwing, and instead return a result
object marked with reduced confidence and an explanatory message indicating
checks were skipped due to no search provider; ensure any callers that expect
thrown errors instead handle the reduced-confidence response consistently.
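As a sanity check on the behavior change this prompt requests, here is one possible shape for the fallback. Every name below (`SearchProvider`, `EvalResult`, `TechnicalAccuracyEvaluator`) is hypothetical; VectorLint's real interfaces will differ.

```typescript
// Hypothetical sketch — illustrates "return reduced confidence instead of throwing".
interface SearchProvider {
  search(query: string): Promise<string[]>;
}

interface EvalResult {
  confidence: "normal" | "reduced";
  findings: string[];
  note?: string;
}

class TechnicalAccuracyEvaluator {
  constructor(private provider?: SearchProvider) {}

  async evaluate(claims: string[]): Promise<EvalResult> {
    if (!this.provider) {
      // Instead of raising a missing-dependency error, degrade gracefully.
      return {
        confidence: "reduced",
        findings: [],
        note: "External checks skipped: no search provider configured.",
      };
    }
    const findings: string[] = [];
    for (const claim of claims) {
      const sources = await this.provider.search(claim);
      if (sources.length === 0) findings.push(`Unverified claim: ${claim}`);
    }
    return { confidence: "normal", findings };
  }
}
```

Callers that previously caught the thrown error would instead branch on `confidence === "reduced"` and surface the note to the user.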
```md
{/* This file is prepended to the system prompt for every VectorLint evaluation
This file defines global style instructions for all VectorLint evaluations.
VectorLint prepends its contents to the system prompt for every rule it runs.
Keep this file under ~800 tokens to avoid performance and cost issues.
Adapt the rules below to match your team's style guide.
*/}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify JSX-style comments inside .md files (likely rendering issues in markdown parsers)
rg -nP '\{/\*|\*/\}' --type-add 'md:*.md' --type mdRepository: TRocket-Labs/vectorlint
Length of output: 193
🏁 Script executed:
cat -n docs/VECTORLINT.mdRepository: TRocket-Labs/vectorlint
Length of output: 4302
🏁 Script executed:
# Search for references to VECTORLINT.md in the codebase
rg 'VECTORLINT\.md|VECTORLINT' --type-list | head -20Repository: TRocket-Labs/vectorlint
Length of output: 456
🏁 Script executed:
# Look for where this file is read/used
rg -i 'vectorlint|prepend' --type md --type ts --type js -A 3 | head -40Repository: TRocket-Labs/vectorlint
Length of output: 2112
🏁 Script executed:
# Search for code that reads/loads the VECTORLINT.md file
rg 'VECTORLINT' --type ts --type js --type py -B 2 -A 5Repository: TRocket-Labs/vectorlint
Length of output: 5107
🏁 Script executed:
# Search for any file reading logic that might handle this file
rg 'readFile|read_file|fs\.read|open' --type ts --type js | rg -i 'vector'
Repository: TRocket-Labs/vectorlint
Length of output: 49
🏁 Script executed:
# Look at the user-instruction-loader.ts to see how VECTORLINT.md is processed
cat -n src/boundaries/user-instruction-loader.ts
Repository: TRocket-Labs/vectorlint
Length of output: 2550
🏁 Script executed:
# Look at the init-command template to see what format is suggested
rg 'USER_INSTRUCTION_TEMPLATE' -A 20 src/cli/init-command.ts
Repository: TRocket-Labs/vectorlint
Length of output: 1625
Replace JSX comment syntax with HTML comments for consistency.
Lines 3–8 use {/* ... */}, which differs from the project's own template format. The generated VECTORLINT.md template in init-command.ts uses standard HTML comments <!-- ... --> instead. Use HTML comments to maintain consistency and align with Markdown conventions for text files.
Suggested fix
-{/* This file is prepended to the system prompt for every VectorLint evaluation
- This file defines global style instructions for all VectorLint evaluations.
- VectorLint prepends its contents to the system prompt for every rule it runs.
- Keep this file under ~800 tokens to avoid performance and cost issues.
- Adapt the rules below to match your team's style guide.
-*/}
+<!--
+This file is prepended to the system prompt for every VectorLint evaluation.
+This file defines global style instructions for all VectorLint evaluations.
+VectorLint prepends its contents to the system prompt for every rule it runs.
+Keep this file under ~800 tokens to avoid performance and cost issues.
+Adapt the rules below to match your team's style guide.
+-->
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 3-3: Spaces inside emphasis markers
(MD037, no-space-in-emphasis)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/VECTORLINT.md` around lines 3 - 8, the header uses JSX-style
comments `{/* ... */}`; replace them with HTML comments `<!-- ... -->` to
match the project's template and Markdown conventions, keeping the content
and spacing identical. Also confirm that the template generated by
`src/cli/init-command.ts` already uses HTML comments, and align any
differences so both sources use the same comment format.
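For context, the prepend step the comment block describes can be sketched as below; `buildSystemPrompt` is a hypothetical helper, not the real `user-instruction-loader.ts` implementation:

```typescript
// Illustrative only: how a VECTORLINT.md-style instruction file could be
// prepended to the system prompt for every rule evaluation.
import { existsSync, readFileSync } from "node:fs";

function buildSystemPrompt(basePrompt: string, instructionPath: string): string {
  if (!existsSync(instructionPath)) {
    return basePrompt; // no user instructions; use the base prompt as-is
  }
  const instructions = readFileSync(instructionPath, "utf8").trim();
  // Non-empty instructions go first, separated from the base prompt.
  return instructions.length > 0
    ? `${instructions}\n\n${basePrompt}`
    : basePrompt;
}
```

Because the file's contents travel inside the prompt, comment syntax matters only to human readers and Markdown tooling, which is why plain HTML comments are the safer choice for a `.md` file.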
…eshooting phrasing
Thanks for this @taz-mon. Can we delete the Mintlify logo? I don't believe we need that.
Adds a complete /docs folder containing a Mintlify documentation site for VectorLint.
What's included
use-cases, configuration, project-config, llm-providers, style-guide,
customize-style-rules, presets, ci-integration, team-rule-packs,
false-positive-tuning, best-practices, cli-reference, env-variables,
troubleshooting, and more
Mintlify setup needed
To deploy from this repo, install the Mintlify GitHub app on
TRocket-Labs/vectorlint and set /docs as the monorepo path in
Git Settings at dashboard.mintlify.com.
Notes
`docs` and `VECTORLINT.md` exclusions
Summary by CodeRabbit