MCP (Model Context Protocol) server providing AI coding agents with universal, language-agnostic development rules.
codeops-mcp bundles 11 curated rule documents that teach AI agents how to code, test, plan, commit, gather requirements, reverse-engineer codebases, create technical documentation, upgrade outdated artifacts, disambiguate designs, and behave – across any programming language and project type. It exposes these rules via 5 MCP tools.
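For orientation, MCP tools are invoked over JSON-RPC. A `tools/call` request fetching a rule might look like this (the request shape follows the MCP specification; the argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_rule",
    "arguments": { "name": "git" }
  }
}
```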
## Bundled rules

| Rule | Description |
|---|---|
| code | 30 coding standards: DRY, testing, documentation, architecture, type safety |
| testing | Test commands, workflows, coverage requirements, debugging strategies |
| git-commands | Git commit protocols (gitcm/gitcmp), message format, push workflow |
| make_plan | Complete protocol for creating and executing multi-document implementation plans |
| requirements | Requirements gathering & documentation protocol (make_requirements) |
| retro_requirements | Reverse-engineer an existing codebase into structured requirements |
| techdocs | Technical architecture documentation protocol (make_techdocs) |
| upgrade_plan | Upgrade outdated plans and requirements to current standards |
| grill_me | Deep disambiguation protocol – relentless interview before planning or requirements |
| agents | Mandatory AI agent behavior: compliance, context management, multi-session execution |
| project-template | Template for .clinerules/project.md – project-specific toolchain configuration |
## Tools

| Tool | Description |
|---|---|
| get_rule | Get any rule document by name (supports aliases like "git", "test", "retro") |
| list_rules | List all available rules grouped by category |
| search_rules | Full-text search across all rules with TF-IDF ranking |
| analyze_project | Killer feature – scan a project directory and auto-generate project.md |
| get_setup_guide | Step-by-step guide for setting up CodeOps in a project |
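The alias handling in get_rule can be pictured with a short sketch (the alias map and rule texts below are illustrative assumptions, not the package's actual tables):

```typescript
// Illustrative alias-aware lookup in the spirit of get_rule.
// ALIASES and RULES are hypothetical; the real package defines its own.
const ALIASES: Record<string, string> = {
  git: "git-commands",
  test: "testing",
  retro: "retro_requirements",
};

const RULES = new Map<string, string>([
  ["git-commands", "Git commit protocols (gitcm/gitcmp), message format, push workflow"],
  ["testing", "Test commands, workflows, coverage requirements, debugging strategies"],
  ["retro_requirements", "Reverse-engineer an existing codebase into structured requirements"],
]);

function getRule(name: string): string | undefined {
  const canonical = ALIASES[name] ?? name; // resolve alias to canonical rule name
  return RULES.get(canonical);
}
```

With this shape, `getRule("git")` and `getRule("git-commands")` return the same document.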
## Installation

```shell
# Global install
npm install -g codeops-mcp

# Or with yarn
yarn global add codeops-mcp
```

Add to your MCP client configuration (e.g., Cline, Claude Desktop):
```json
{
  "mcpServers": {
    "codeops": {
      "command": "codeops-mcp"
    }
  }
}
```

To serve rules from a custom docs directory, pass the path as an argument:

```json
{
  "mcpServers": {
    "codeops": {
      "command": "codeops-mcp",
      "args": ["/path/to/custom/docs"]
    }
  }
}
```

Or via environment variable:
```json
{
  "mcpServers": {
    "codeops": {
      "command": "codeops-mcp",
      "env": {
        "CODEOPS_DOCS_PATH": "/path/to/custom/docs"
      }
    }
  }
}
```

## How it works

The two-layer architecture:

- Layer 1: Universal rules (bundled in this package) – language-agnostic standards for coding, testing, git, planning, and requirements
- Layer 2: Project-specific config (`.clinerules/project.md` in your project) – toolchain, commands, conventions

All generic rules reference project.md for project-specific settings like build commands, test commands, package manager, etc.
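To make that concrete, a `.clinerules/project.md` could contain entries along these lines (the section names and commands here are hypothetical examples, not a required schema):

```markdown
## Toolchain

- Package manager: yarn
- Language: TypeScript (strict mode)

## Commands

- Build: `yarn build`
- Test: `yarn test`
- Verify: `yarn build && yarn test`
```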
To get started:

1. Run `analyze_project("/path/to/your/project")` to auto-detect your toolchain
2. Save the output to `.clinerules/project.md` in your project
3. The AI agent automatically applies universal rules using your project's settings
## Trigger keywords

codeops-mcp defines trigger keywords – when you type these phrases, the AI agent executes sophisticated multi-step protocols:
| Keyword | What It Does |
|---|---|
| make_plan | Creates a detailed multi-document implementation plan for a feature |
| exec_plan [name] | Executes an existing plan step by step |
| make_requirements | Discovers, structures, and documents project requirements |
| add_requirement | Adds a new requirement to an existing requirements set |
| review_requirements | Health-checks existing requirements for gaps and inconsistencies |
| retro_requirements | Reverse-engineers an existing codebase into structured requirements |
| make_techdocs | Creates VitePress-compatible technical architecture documentation |
| review_techdocs | Reviews and updates existing technical documentation |
| upgrade_plan [name] | Upgrades an outdated plan to current CodeOps standards |
| upgrade_requirements | Upgrades outdated requirements to current CodeOps standards |
| grill_me | Relentless interview to eliminate ambiguity before planning or requirements |
| gitcm | Stages all changes and commits with a detailed conventional commit message |
| gitcmp | Same as gitcm plus rebase and push |
The protocols form a complete development pipeline:
```
┌────────────────────────────────────────────────────────────────────┐
│ REVERSE PATH (existing codebase → requirements → rebuild)          │
│                                                                    │
│ retro_requirements → make_requirements → make_plan → exec_plan     │
└────────────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────────────┐
│ FORWARD PATH (new project → requirements → implementation)         │
│                                                                    │
│ make_requirements → make_plan → exec_plan                          │
└────────────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────────────┐
│ QUICK PATH (add a feature to existing codebase)                    │
│                                                                    │
│ make_plan → exec_plan                                              │
└────────────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────────────┐
│ UPGRADE PATH (bring outdated artifacts to current standards)       │
│                                                                    │
│ upgrade_plan [feature] / upgrade_requirements                      │
└────────────────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────────────────┐
│ DISAMBIGUATION PATH (eliminate ambiguity before any work)          │
│                                                                    │
│ grill_me → make_plan → exec_plan                                   │
│ grill_me → make_requirements → make_plan → exec_plan               │
│ grill_me (standalone deep-dive)                                    │
└────────────────────────────────────────────────────────────────────┘
```
You can use any part of the pipeline independently – they're designed to work together, but none requires the others.
The agent automatically loads coding standards and testing rules at the start of every task. These enforce:
- 30 coding rules: DRY, single responsibility, documentation, type safety, 500-line file limit
- Testing workflow: Write tests first, run verification before every commit
- Test coverage: Unit, integration, and end-to-end tests required
You don't need to do anything β just have codeops-mcp installed and the agent follows these rules automatically.
Create and execute structured implementation plans for features of any size.
Creating a plan:
```
User: make_plan

Agent: What feature would you like to plan?

User: Add JWT authentication to our API

Agent: [Asks clarifying questions, analyzes codebase, then creates:]

plans/jwt-auth/
├── 00-index.md
├── 01-requirements.md
├── 02-current-state.md
├── 03-auth-middleware.md
├── 04-token-service.md
├── 07-testing-strategy.md
└── 99-execution-plan.md
```
Executing a plan:
```
User: exec_plan jwt-auth

Agent: [Reads the execution plan, implements tasks one by one,
        runs verification after each task, updates progress,
        asks about commits after each verified task]
```
Commit modes for exec_plan:
| Flag | Behavior |
|---|---|
| (default) | Ask before each commit |
| `--no-commit` | Never commit – you handle git yourself |
| `--auto-commit` | Automatically commit and push after each task |
Transform a rough project idea into formal requirement documents through guided discovery.
Example:
```
User: I want to build a university lab management SaaS. Researchers book lab rooms,
      ethics committee approves studies, participants sign up on a public page.
      Built with Node, TypeScript, PostgreSQL.

      make_requirements

Agent: [Conducts multi-turn discovery interview]
       - Maps stakeholders and user types
       - Analyzes comparable systems (suggests features you haven't thought of)
       - Walks through user journeys to find hidden requirements
       - Explores "what happens when..." edge cases
       - Produces formal requirement documents:

requirements/
├── README.md              # Index, glossary, dependency graph
├── RD-01-scaffolding.md   # Project setup
├── RD-02-data-model.md    # Database schema
├── RD-03-auth.md          # Authentication & RBAC
├── RD-04-lab-booking.md   # Core booking functionality
├── ...
└── RD-12-deployment.md    # Production deployment
```
Each RD document can then be fed into make_plan for implementation:
```
User: make_plan

Agent: I found requirement documents. Which RD would you like to implement?

User: RD-04-lab-booking.md

Agent: [Creates implementation plan based on the requirement document]
```
Additional keywords:
- `add_requirement` – Add a new RD to an existing set
- `review_requirements` – Run a health check on all requirements (gaps, inconsistencies, scope creep)
Analyze an existing codebase and produce a reconstruction brief – detailed enough to rebuild the entire application.
Example:
```
User: retro_requirements

Agent: [Systematically analyzes the codebase in 10 phases:]

Phase 0: Reconnaissance – manifests, directory structure, tech stack
Phase 1: Structural Analysis – layers, modules, entry points, patterns
Phase 2: Data Model – entities, relationships, constraints
Phase 3: API Surface – endpoints, CLI commands, public interfaces
Phase 4: Behavior Catalog – features translated to requirement statements
Phase 5: Business Rules – validation, authorization, domain logic
Phase 6: Cross-Cutting – auth, errors, logging, caching
Phase 7: Integrations – external APIs, databases, services
Phase 8: Gaps & Debt – TODOs, missing tests, security gaps
Phase 9: Synthesis – produces the reconstruction brief
```
Output:
```
requirements/_retro/
├── 00-project-profile.md
├── 01-architecture-analysis.md
├── ...
├── 08-gaps-and-debt.md
└── 09-reconstruction-brief.md   ← Feed this to make_requirements
```
Scope control for large codebases:
```shell
retro_requirements --scope src/auth   # Analyze only the auth module
retro_requirements --continue         # Resume an interrupted session
```
The reconstruction brief is designed as input for make_requirements, completing the full reverse → forward pipeline.
Plans and requirements created with codeops-mcp are automatically stamped with the CodeOps version. When rules evolve, previously created plans may become outdated. The upgrade protocol brings them up to current standards.
How it works:
- Plans created with `make_plan` include a `> **CodeOps Version**: X.Y.Z` stamp
- When you run `exec_plan`, the agent detects outdated or pre-versioning plans and suggests upgrading
- The upgrade is non-destructive – all user-authored content (technical specs, scope decisions, task states) is preserved
Upgrading a plan:
```
User: upgrade_plan jwt-auth

Agent: [Reads all plan documents, compares against current templates]

Upgrade Report: jwt-auth
  Current Version: 1.5.0 (or "none – pre-versioning")
  Target Version:  1.7.0

  Will Be Added:     commit mode flags, security checklist, techdocs step
  Will Be Updated:   session protocol, success criteria
  Will Be Preserved: all technical specs, task states, scope decisions

Proceed with upgrade?
```
Upgrading requirements:
```
User: upgrade_requirements

Agent: [Reads all RD documents, compares against current templates,
        adds missing sections like security considerations]
```
Generate and maintain VitePress-compatible technical architecture documentation from your codebase.
```
User: make_techdocs

Agent: [Analyzes codebase architecture and produces:]

docs/
├── index.md               # Home page with techdocs: true frontmatter
├── architecture/
│   ├── overview.md        # System overview and diagrams
│   ├── data-model.md      # Entity relationships
│   └── api-design.md      # API surface documentation
└── decisions/
    └── ADR-001-*.md       # Architecture Decision Records
```
Documentation is automatically maintained during plan execution – the agent checks for architectural changes after each phase and updates docs accordingly.
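For reference, the `techdocs: true` marker is ordinary YAML frontmatter; a generated `docs/index.md` might begin like this (the title below is a placeholder):

```markdown
---
techdocs: true
---

# Architecture Overview
```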
Safe, structured git commits with detailed conventional commit messages.
```
User: gitcm

Agent: [Stages all changes, writes a detailed commit message to a temp file,
        commits using git commit -F, cleans up]
```
Result:
```
feat(auth): add JWT token refresh endpoint

- Add POST /api/auth/refresh endpoint
- Implement token rotation with refresh token family tracking
- Add rate limiting (5 refreshes per minute per user)
- Tests added for all edge cases
```
```
User: gitcmp

Agent: [Same as gitcm, plus rebase and push. Reports conflicts if any.]
```
Key safety rules:
- Commit messages are ALWAYS written to a file (never the inline `-m` flag)
- Verification (build + test) runs before every commit
- Conflicts are reported to the user – never auto-resolved
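The file-based flow can be sketched in shell (illustrative only; the actual gitcm protocol is defined in the git-commands rule, and this demo sets up a throwaway repo so it is self-contained):

```shell
# Throwaway repo for the demo
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > file.txt

# The safety rule: write the message to a file and commit with -F, never -m
MSG_FILE="$(mktemp)"
cat > "$MSG_FILE" <<'EOF'
feat(demo): add file.txt

- Initial content for the demo
EOF
git add -A
git commit -q -F "$MSG_FILE"
rm -f "$MSG_FILE"
```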
Eliminate ambiguity before planning or requirements gathering through a structured, relentless interview.
Example β standalone:
```
User: grill_me
      I want to add a caching layer to my API

Agent: [Identifies the design tree – major decision branches:]
       1. What are you caching? (responses, queries, computed values)
       2. Cache backend? (Redis, in-memory, CDN)
       3. Invalidation strategy? (TTL, event-driven, manual)
       4. Cache key design? (naming convention, namespacing)

Agent: [Walks each branch one decision at a time:]
       "For Branch 1, we need to decide: what exactly are you caching?"
       → User answers
       "You said database queries. That implies [consequence]. Is that OK?"
       → Drills deeper into sub-decisions
       → Surfaces assumptions: "I'm now assuming X, Y, Z. Correct?"
       → Moves to next branch only when current one is fully resolved

Output: A shared understanding summary with all decisions, assumptions,
        constraints, and deferrals – ready for make_plan or make_requirements
```
Example β as a prefix to planning:
```
User: grill_me
      I want to add webhook support to our notification system.
      Once we're aligned, let's make_plan.

Agent: [Runs full grill-me protocol on webhooks – retry strategy,
        payload format, authentication, rate limiting, failure handling,
        deduplication – resolving every ambiguity]

Agent: [Transitions to make_plan with Phase 1.1 already complete]
```
Aliases: `grill-me`, `grill`, `disambiguate`, `deep-dive`, `interview`
Auto-detect your project's toolchain and generate a configuration file:
```
User: analyze_project /path/to/my/project

Agent: [Reads package.json/Cargo.toml/go.mod/pyproject.toml, scans directory
        structure, detects language, framework, test runner, build tools]
```
Output: A complete .clinerules/project.md with:
- Build, test, and verify commands
- Directory layout
- Coding conventions
- Git conventions
- Cross-references to all rule documents
Incremental updates: If .clinerules/project.md already exists, analyze_project merges the fresh scan with your existing file – auto-detectable sections are refreshed, while user-customized sections (coding conventions, special rules) are preserved verbatim.
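The merge behavior can be pictured with a small sketch (the section names and the AUTO set are hypothetical; the real tool has its own logic):

```typescript
// Sketch: merge a fresh scan into an existing project.md by top-level
// "## " section. Sections listed in AUTO are refreshed from the scan;
// everything else is kept verbatim from the existing file.
const AUTO = new Set(["Commands", "Directory Layout"]); // hypothetical names

function splitSections(md: string): Map<string, string> {
  const sections = new Map<string, string>();
  let current = "";
  for (const line of md.split("\n")) {
    const m = line.match(/^## (.+)$/);
    if (m) {
      current = m[1];
      sections.set(current, line);
    } else if (current) {
      sections.set(current, sections.get(current) + "\n" + line);
    }
  }
  return sections;
}

function merge(existing: string, fresh: string): string {
  const oldSections = splitSections(existing);
  const newSections = splitSections(fresh);
  const out: string[] = [];
  for (const [name, body] of oldSections) {
    // Refresh auto-detectable sections, keep user-authored ones verbatim
    out.push(AUTO.has(name) && newSections.has(name) ? newSections.get(name)! : body);
  }
  return out.join("\n");
}
```

Note the simplification: sections present only in the fresh scan are dropped here, whereas the real tool presumably appends them.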
## Development

```shell
# Install dependencies
yarn install

# Build
yarn build

# Run tests (107 tests across 4 test files)
yarn test

# Watch mode
yarn test:watch
```

Project structure:

```
src/
├── index.ts                 # MCP server entry point
├── config.ts                # Configuration resolution
├── types/
│   └── index.ts             # Type definitions & constants
├── store/
│   ├── rule-store.ts        # In-memory document store
│   └── search-engine.ts     # TF-IDF search engine
├── tools/
│   ├── get-rule.ts          # Get rule by name
│   ├── list-rules.ts        # List all rules
│   ├── search-rules.ts      # Full-text search
│   ├── analyze-project.ts   # Project analysis & project.md generation
│   └── get-setup-guide.ts   # Setup instructions
└── __tests__/
    ├── store/               # Store & search engine tests
    └── tools/               # Tool integration tests
docs/                        # 11 bundled rule markdown files
```
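For intuition, the TF-IDF ranking behind the search engine can be sketched roughly as follows (a simplified sketch with a smoothed idf, not the actual search-engine.ts implementation):

```typescript
// Simplified TF-IDF ranking sketch (illustrative only).
type Doc = { name: string; text: string };

function tokenize(s: string): string[] {
  return s.toLowerCase().split(/[^a-z0-9_-]+/).filter(Boolean);
}

function rank(docs: Doc[], query: string): { name: string; score: number }[] {
  const tokens = docs.map((d) => tokenize(d.text));
  // Document frequency: in how many docs does each term appear?
  const df = new Map<string, number>();
  for (const toks of tokens) {
    for (const t of new Set(toks)) df.set(t, (df.get(t) ?? 0) + 1);
  }
  const n = docs.length;
  return docs
    .map((d, i) => {
      let score = 0;
      for (const q of tokenize(query)) {
        const tf = tokens[i].filter((t) => t === q).length / tokens[i].length;
        const idf = Math.log((n + 1) / ((df.get(q) ?? 0) + 1)) + 1; // smoothed idf
        score += tf * idf;
      }
      return { name: d.name, score };
    })
    .sort((a, b) => b.score - a.score);
}
```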
## License

MIT