Thanks for your interest in contributing! This guide explains how to propose changes, open PRs, and keep the project healthy. The project is currently at v0.1.0 and aims to remain fully local-first using LM Studio.
Branding note: public-facing name is DédalosStudio — API’s QA Solutions; repository and code identifiers use Daedalus Studio.
- Code of Conduct
- Project layout
- Prerequisites
- Environment setup
- Run project (dev)
- Branching & workflow
- Commit convention
- Pull Requests
- Coding guidelines
- Adding features safely
- UI contributions
- Prompt & schema changes
- Versioning & releases
- Issue reporting
## Code of Conduct

Be kind, constructive, and respectful. Assume good intent. We welcome learners and professionals alike. Harassment or discrimination is not tolerated.
## Project layout

```
.
├─ src/
│  ├─ ai/              # provider.ts (LM Studio/OpenAI client)
│  ├─ io/              # filesystem helpers & writers
│  ├─ prompt/          # system/tasks prompt content (playbook)
│  ├─ schemas/         # response.schema.json (AI output validation)
│  ├─ cli.ts           # main generator (CLI)
│  └─ server.ts        # backend for UI (HTTP + SSE)
├─ ui/
│  ├─ public/logo.svg
│  └─ src/App.tsx      # React + Vite web UI
├─ out/                # generated artifacts (gitignored)
├─ .env                # environment variables (local only)
├─ package.json        # root scripts (CLI + server)
└─ README.md           # product docs
```
## Prerequisites

- Node.js 18+, pnpm
- LM Studio (Desktop) with a local model (e.g. `Qwen/Qwen2.5-Coder-3B-Instruct-GGUF`)
- Git & GitHub account (for PRs)
## Environment setup

- Install deps:

  ```bash
  pnpm install
  cd ui && pnpm install && cd ..
  ```

- Create `.env` at the repo root:

  ```bash
  OPENAI_API_KEY=lm-studio
  OPENAI_BASE_URL=http://127.0.0.1:1234/v1
  OPENAI_MODEL=qwen2.5-coder-3b-instruct
  LLM_MODEL=qwen2.5-coder-3b-instruct
  PORT=3030
  ```

- Start LM Studio:
  - Load the model (e.g., `qwen2.5-coder-3b-instruct-q4_k_m.gguf`).
  - Start the local server on port 1234.
  - Test: `http://127.0.0.1:1234/v1/models` returns JSON.

If using OpenAI/other providers later, adapt the `OPENAI_*` entries accordingly.
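As a sketch of how these variables might be consumed, assuming LM Studio defaults (`resolveConfig` is illustrative, not the actual API of `src/ai/provider.ts`):

```typescript
// Hedged sketch: resolve provider settings from the environment, falling
// back to the LM Studio defaults documented above.
export type ProviderConfig = { apiKey: string; baseUrl: string; model: string };

export function resolveConfig(
  env: Record<string, string | undefined> = process.env
): ProviderConfig {
  return {
    // LM Studio ignores the key, but OpenAI-compatible clients require one.
    apiKey: env.OPENAI_API_KEY ?? "lm-studio",
    baseUrl: env.OPENAI_BASE_URL ?? "http://127.0.0.1:1234/v1",
    // OPENAI_MODEL takes priority; LLM_MODEL is kept as a fallback alias.
    model: env.OPENAI_MODEL ?? env.LLM_MODEL ?? "qwen2.5-coder-3b-instruct",
  };
}
```

Keeping the fallbacks in one place makes switching to another OpenAI-compatible provider a matter of changing `.env` only.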
## Run project (dev)

Backend:

```bash
pnpm serve
# http://127.0.0.1:3030
```

UI:

```bash
cd ui
pnpm dev
# http://127.0.0.1:5173
```

CLI (optional):

```bash
pnpm dev --useAI \
  --apiName demo-api \
  --mode new \
  --swagger ./spec.json \
  --feature ./tests.feature
```

AI is mandatory: make sure LM Studio/OpenAI is available or the command will fail.
## Branching & workflow

- Base branch: `main` (always stable/green).
- Create feature branches from `main`:
  - `feat/<short-purpose>`
  - `fix/<short-purpose>`
  - `docs/<short-purpose>`
- Keep PRs small and focused. Link issues where possible.

Suggested flow:

```bash
git checkout -b feat/query-headers-mapping
# code, test locally
git add -A
git commit -m "feat(cli): map query params and headers from feature tables"
git push -u origin feat/query-headers-mapping
# open PR -> review -> merge squash
```
## Commit convention

Use Conventional Commits:

```
feat(ui): add live logs stream
fix(server): recursive artifact listing for subfolders
chore(brand): logo + header gradient
docs(readme): setup with LM Studio
```

Rules:

- Lowercase type: `feat|fix|docs|chore|refactor|perf|test|build|ci`.
- Optional scope in parentheses.
- Concise subject in the imperative mood.
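The rules above can be checked mechanically. A minimal sketch (not a hook this repo ships; wire it into a `commit-msg` hook yourself if desired):

```typescript
// Hedged sketch: validate a commit subject against the convention above.
// Matches: lowercase type, optional lowercase scope, ": ", non-empty subject.
const COMMIT_RE =
  /^(feat|fix|docs|chore|refactor|perf|test|build|ci)(\([a-z0-9-]+\))?: \S.+$/;

export function isConventional(subject: string): boolean {
  return COMMIT_RE.test(subject);
}
```

Note this checks only the subject line's shape, not that the scope names a real module.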
## Pull Requests

- Keep diffs minimal, relevant, and incremental.
- Ensure builds run locally (backend/UI/CLI as applicable).
- Include before/after notes or screenshots for UI changes.
- Checklist:
  - No console errors in the UI/server
  - `pnpm serve` runs and `/health` is OK
  - UI `pnpm dev` starts and generation works end-to-end
  - Artifacts appear under `out/<apiName>/`
  - README/CONTRIBUTING updated if necessary
## Coding guidelines

- TypeScript/ESM modules.
- Avoid breaking changes to CLI flags and server endpoints unless versioned.
- Keep functions small and testable. Prefer pure helpers for parsing/mapping.
- Never remove existing fixes without a replacement (e.g., preserve `[CONTEXT:*]`).
- Schema validation with AJV must pass before writing files.
- When enriching collections:
  - Ensure the x-correlator pre-request script exists.
  - Ensure Dynamic v0.1.0 minimal tests are present.
  - Only auto-inject `Content-Type: application/json` for non-GET requests when it is missing.
- Keep `/generate`, `/generate/stream`, and `/zip-current` stable.
- Artifact listing must read subfolders under `out/`.
- SSE events: `event: log` and `event: result` only.
- Keep API_BASE pointing to the backend (`127.0.0.1:3030`).
- Don't couple the UI to the provider; the UI talks only to the backend.
- Keep brand colors consistent: Postman Orange `#FF6C37`, Sky `#0EA5E9`, Navy `#0B3C5D`.
## Adding features safely

When adding or altering generators:

- Update helpers (e.g., for parsing new Gherkin tables).
- Extend mapping functions (e.g., path params) without breaking existing ones.
- Keep `[CONTEXT:*]` placeholders verbatim.
- Run an end-to-end generation once with a small `spec.json` + `tests.feature`.
- Verify artifacts in the UI/CLI and that `analysis*.md` is generated.

Example plan for path-params mapping:

- Parse the `And the path parameters are set to` table into `{ name, value }[]`.
- Replace `{name}` segments in `request.url.path` and `raw`.
- Never percent-encode placeholders; let Postman handle them at runtime if needed.
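The plan above could be sketched as pure helpers (names are hypothetical, not existing project functions):

```typescript
type PathParam = { name: string; value: string };

// Parse a Gherkin data table (first row = header) into { name, value } pairs.
// e.g. [["name", "value"], ["id", "42"]] -> [{ name: "id", value: "42" }]
export function parsePathParams(rows: string[][]): PathParam[] {
  return rows.slice(1).map(([name, value]) => ({ name, value }));
}

// Replace {name} segments in a raw URL string. Values are inserted
// verbatim — no percent-encoding, per the rule above.
export function applyPathParams(raw: string, params: PathParam[]): string {
  return params.reduce(
    (url, p) => url.split(`{${p.name}}`).join(p.value),
    raw
  );
}
```

Keeping these as pure functions matches the "prefer pure helpers for parsing/mapping" guideline and makes them trivially unit-testable.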
## UI contributions

- Keep the form layout simple. Avoid library bloat.
- For visual changes, update `ui/public/logo.svg` and the header gradient in `App.tsx`.
- Expose a Download All (.zip) option when `collectionsCount > 1`.
- Don't change endpoints. The UI is a thin client over the server.
## Prompt & schema changes

- Prompt updates: edit `src/prompt/system.ts` and `src/prompt/tasks.ts`.
- Schema updates: edit `src/schemas/response.schema.json` and bump the minor version.
- Any schema change must be reflected in AJV validation and the file writers.
## Versioning & releases

- Use SemVer: `MAJOR.MINOR.PATCH`.
- Tag releases:

  ```bash
  git tag v0.1.0
  git push --tags
  ```

- Keep `README.md` updated with the current version and major changes.
## Issue reporting

- Include your OS, Node.js version, and whether you use LM Studio.
- Attach a minimal `spec.json` and `tests.feature` to reproduce.
- Paste backend logs (`/generate/stream`) and console output if relevant.
Thanks for helping Daedalus Studio grow! 🙌