Merged
39 commits
d8ecaf4
feat(persistence): add unified persistence layer with event store, to…
rayhpeng Apr 7, 2026
94eee95
feat(auth): release-validation pass for 2.0-rc — 12 blockers + simpli…
greatmengqi Apr 9, 2026
848ace9
feat: replace auto-admin creation with secure interactive first-boot …
Copilot Apr 11, 2026
7ff9077
feat(dependencies): add langchain-ollama and ollama packages with opt…
forx11 Apr 11, 2026
716cae2
docs: fix review feedback - source-map paths, memory API routes, supp…
Copilot Apr 11, 2026
814a488
docs: complete all English and Chinese documentation pages
Copilot Apr 11, 2026
88f822a
docs: fill all TBD documentation pages and add new harness module pages
Copilot Apr 11, 2026
56d5fa3
feat(persistence):Unified persistence layer with event store, feedbac…
rayhpeng Apr 12, 2026
44d9953
feat: Add metadata and descriptions to various documentation pages in…
foreleven Apr 12, 2026
00a90bb
refactor: Remove init_token handling from admin initialization logic …
foreleven Apr 12, 2026
2e05f38
feat(persistence): per-user filesystem isolation, run-scoped APIs, an…
rayhpeng Apr 12, 2026
db5ad86
feat: enhance chat history loading with new hooks and UI components (…
foreleven Apr 19, 2026
98a5b34
fix: resolve merge conflict in pnpm-lock.yaml and clean up better-aut…
foreleven Apr 26, 2026
3b71e2d
feat: add request parameter to generate_suggestions endpoint for enha…
foreleven Apr 26, 2026
829e82a
fix the lint error in backend
WillemJiang Apr 26, 2026
28381e1
fix the lint errors in frontend
WillemJiang Apr 26, 2026
9eca429
fix the lint errors in the frontend
WillemJiang Apr 26, 2026
3f88045
try to fix the frontend e2e test errors
WillemJiang Apr 26, 2026
64a43bc
fix the lint error by updating the .prettierignore
WillemJiang Apr 26, 2026
e4ff444
Fixed the warning message of uv
WillemJiang Apr 26, 2026
c5d57b4
fix: resolve make dev and test-e2e errors (#2570)
yangzheli Apr 26, 2026
16aedf4
Potential fix for pull request finding 'Unused import'
WillemJiang Apr 26, 2026
653b7ae
Apply the code reviewer suggestion of abstractmethod
WillemJiang Apr 26, 2026
7bf618d
Refactor DeerFlow to use Gateway's LangGraph-compatible API
foreleven Apr 26, 2026
35ef8b7
feat: add default database configuration for AppConfig and update exa…
foreleven Apr 26, 2026
ac18b9c
Apply the code reviewer suggestion of abstractmethod
WillemJiang Apr 26, 2026
60754f0
fix the unit tests error of agent provider
WillemJiang Apr 26, 2026
eba6c0e
fix unit tests of test_upload_files and test_shutdown
WillemJiang Apr 26, 2026
897dae5
fix the lint error of backend
WillemJiang Apr 26, 2026
da174df
feat: implement process-local internal authentication for Gateway and…
foreleven Apr 26, 2026
ed9ebfa
fix: enforce 'request' parameter requirement in require_auth decorator
foreleven Apr 26, 2026
748429e
fix(frontend): add missing mock routes for runs-list, models, and sug…
yangzheli Apr 26, 2026
b8bc482
refactor: root release config in gateway runtime (#2611)
greatmengqi Apr 27, 2026
4e4e4f9
fix(security): harden auth system and fix run journal logic bug (#2593)
WillemJiang Apr 28, 2026
69649d8
Fix the issues when reviewing 2566 persistant part (#2604)
WillemJiang Apr 28, 2026
e82940c
refactor: thread release config through lead path (#2612)
greatmengqi Apr 28, 2026
844ad8e
Merge branch 'main' into release/2.0-rc
WillemJiang Apr 28, 2026
64f4dc1
fixed the CI build errors
WillemJiang Apr 28, 2026
11afd32
Fix the log Injection error of skills.py
WillemJiang Apr 28, 2026
4 changes: 4 additions & 0 deletions .env.example
@@ -34,5 +34,9 @@ INFOQUEST_API_KEY=your-infoquest-api-key

# GitHub API Token
# GITHUB_TOKEN=your-github-token

# Database (only needed when config.yaml has database.backend: postgres)
# DATABASE_URL=postgresql://deerflow:password@localhost:5432/deerflow
#
# WECOM_BOT_ID=your-wecom-bot-id
# WECOM_BOT_SECRET=your-wecom-bot-secret
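The commented `DATABASE_URL` above uses the standard PostgreSQL connection URI format. As a quick sanity check, its components can be pulled apart with Python's stdlib (a sketch — the credentials and database name are just the placeholder values from `.env.example`):

```python
from urllib.parse import urlsplit

# Placeholder URI from .env.example; real deployments substitute their own credentials.
url = "postgresql://deerflow:password@localhost:5432/deerflow"

parts = urlsplit(url)
print(parts.scheme)            # postgresql
print(parts.username)          # deerflow
print(parts.hostname)          # localhost
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # deerflow (database name)
```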
36 changes: 1 addition & 35 deletions Makefile
@@ -1,6 +1,6 @@
# DeerFlow - Unified Development Environment

.PHONY: help config config-upgrade check install setup doctor dev dev-pro dev-daemon dev-daemon-pro start start-pro start-daemon start-daemon-pro stop up up-pro down clean docker-init docker-start docker-start-pro docker-stop docker-logs docker-logs-frontend docker-logs-gateway
.PHONY: help config config-upgrade check install setup doctor dev dev-daemon start start-daemon stop up down clean docker-init docker-start docker-stop docker-logs docker-logs-frontend docker-logs-gateway

BASH ?= bash
BACKEND_UV_RUN = cd backend && uv run
@@ -26,25 +26,19 @@ help:
@echo " make install - Install all dependencies (frontend + backend + pre-commit hooks)"
@echo " make setup-sandbox - Pre-pull sandbox container image (recommended)"
@echo " make dev - Start all services in development mode (with hot-reloading)"
@echo " make dev-pro - Start in dev + Gateway mode (experimental, no LangGraph server)"
@echo " make dev-daemon - Start dev services in background (daemon mode)"
@echo " make dev-daemon-pro - Start dev daemon + Gateway mode (experimental)"
@echo " make start - Start all services in production mode (optimized, no hot-reloading)"
@echo " make start-pro - Start in prod + Gateway mode (experimental)"
@echo " make start-daemon - Start prod services in background (daemon mode)"
@echo " make start-daemon-pro - Start prod daemon + Gateway mode (experimental)"
@echo " make stop - Stop all running services"
@echo " make clean - Clean up processes and temporary files"
@echo ""
@echo "Docker Production Commands:"
@echo " make up - Build and start production Docker services (localhost:2026)"
@echo " make up-pro - Build and start production Docker in Gateway mode (experimental)"
@echo " make down - Stop and remove production Docker containers"
@echo ""
@echo "Docker Development Commands:"
@echo " make docker-init - Pull the sandbox image"
@echo " make docker-start - Start Docker services (mode-aware from config.yaml, localhost:2026)"
@echo " make docker-start-pro - Start Docker in Gateway mode (experimental, no LangGraph container)"
@echo " make docker-stop - Stop Docker development services"
@echo " make docker-logs - View Docker development logs"
@echo " make docker-logs-frontend - View Docker frontend logs"
@@ -123,41 +117,21 @@ dev:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev

# Start all services in dev + Gateway mode (experimental: agent runtime embedded in Gateway)
dev-pro:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --gateway

# Start all services in production mode (with optimizations)
start:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod

# Start all services in prod + Gateway mode (experimental)
start-pro:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --gateway

# Start all services in daemon mode (background)
dev-daemon:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --daemon

# Start daemon + Gateway mode (experimental)
dev-daemon-pro:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --gateway --daemon

# Start prod services in daemon mode (background)
start-daemon:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --daemon

# Start prod daemon + Gateway mode (experimental)
start-daemon-pro:
@$(PYTHON) ./scripts/check.py
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --gateway --daemon

# Stop all services
stop:
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --stop
@@ -182,10 +156,6 @@ docker-init:
docker-start:
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh start

# Start Docker in Gateway mode (experimental)
docker-start-pro:
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh start --gateway

# Stop Docker development environment
docker-stop:
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh stop
@@ -208,10 +178,6 @@ docker-logs-gateway:
up:
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh

# Build and start production services in Gateway mode
up-pro:
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh --gateway

# Stop and remove production containers
down:
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh down
44 changes: 10 additions & 34 deletions README.md
Expand Up @@ -243,9 +243,6 @@ make up # Build images and start all production services
make down # Stop and remove containers
```

> [!NOTE]
> The LangGraph agent server currently runs via `langgraph dev` (the open-source CLI server).

Access: http://localhost:2026

See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed Docker development guide.
@@ -289,53 +286,31 @@ On Windows, run the local development flow from Git Bash. Native `cmd.exe` and P

#### Startup Modes

DeerFlow supports multiple startup modes across two dimensions:

- **Dev / Prod** — dev enables hot-reload; prod uses pre-built frontend
- **Standard / Gateway** — standard uses a separate LangGraph server (4 processes); Gateway mode (experimental) embeds the agent runtime in the Gateway API (3 processes)
DeerFlow runs the agent runtime inside the Gateway API. Development mode enables hot-reload; production mode uses a pre-built frontend.

| | **Local Foreground** | **Local Daemon** | **Docker Dev** | **Docker Prod** |
|---|---|---|---|---|
| **Dev** | `./scripts/serve.sh --dev`<br/>`make dev` | `./scripts/serve.sh --dev --daemon`<br/>`make dev-daemon` | `./scripts/docker.sh start`<br/>`make docker-start` | — |
| **Dev + Gateway** | `./scripts/serve.sh --dev --gateway`<br/>`make dev-pro` | `./scripts/serve.sh --dev --gateway --daemon`<br/>`make dev-daemon-pro` | `./scripts/docker.sh start --gateway`<br/>`make docker-start-pro` | — |
| **Prod** | `./scripts/serve.sh --prod`<br/>`make start` | `./scripts/serve.sh --prod --daemon`<br/>`make start-daemon` | — | `./scripts/deploy.sh`<br/>`make up` |
| **Prod + Gateway** | `./scripts/serve.sh --prod --gateway`<br/>`make start-pro` | `./scripts/serve.sh --prod --gateway --daemon`<br/>`make start-daemon-pro` | — | `./scripts/deploy.sh --gateway`<br/>`make up-pro` |

| Action | Local | Docker Dev | Docker Prod |
|---|---|---|---|
| **Stop** | `./scripts/serve.sh --stop`<br/>`make stop` | `./scripts/docker.sh stop`<br/>`make docker-stop` | `./scripts/deploy.sh down`<br/>`make down` |
| **Restart** | `./scripts/serve.sh --restart [flags]` | `./scripts/docker.sh restart` | — |

> **Gateway mode** eliminates the LangGraph server process — the Gateway API handles agent execution directly via async tasks, managing its own concurrency.

#### Why Gateway Mode?

In standard mode, DeerFlow runs a dedicated [LangGraph Platform](https://langchain-ai.github.io/langgraph/) server alongside the Gateway API. This architecture works well but has trade-offs:

| | Standard Mode | Gateway Mode |
|---|---|---|
| **Architecture** | Gateway (REST API) + LangGraph (agent runtime) | Gateway embeds agent runtime |
| **Concurrency** | `--n-jobs-per-worker` per worker (requires license) | `--workers` × async tasks (no per-worker cap) |
| **Containers / Processes** | 4 (frontend, gateway, langgraph, nginx) | 3 (frontend, gateway, nginx) |
| **Resource usage** | Higher (two Python runtimes) | Lower (single Python runtime) |
| **LangGraph Platform license** | Required for production images | Not required |
| **Cold start** | Slower (two services to initialize) | Faster |

Both modes are functionally equivalent — the same agents, tools, and skills work in either mode.
Gateway owns `/api/langgraph/*` and translates those public LangGraph-compatible paths to its native `/api/*` routers behind nginx.
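The path translation described above amounts to a prefix rewrite. A trivial sketch (hypothetical helper — the real router mapping lives inside the Gateway and is not part of this diff):

```python
LANGGRAPH_PREFIX = "/api/langgraph"

def to_native_path(public_path: str) -> str:
    """Rewrite a public LangGraph-compatible path to the Gateway's native /api route."""
    if public_path.startswith(LANGGRAPH_PREFIX):
        return "/api" + public_path[len(LANGGRAPH_PREFIX):]
    # Non-LangGraph paths pass through unchanged.
    return public_path

print(to_native_path("/api/langgraph/threads"))  # /api/threads
```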

#### Docker Production Deployment

`deploy.sh` supports building and starting separately. Images are mode-agnostic — runtime mode is selected at start time:
`deploy.sh` supports building and starting separately:

```bash
# One-step (build + start)
deploy.sh # standard mode (default)
deploy.sh --gateway # gateway mode
deploy.sh

# Two-step (build once, start with any mode)
# Two-step (build once, start later)
deploy.sh build # build all images
deploy.sh start # start in standard mode
deploy.sh start --gateway # start in gateway mode
deploy.sh start # start pre-built images

# Stop
deploy.sh down
```
@@ -375,8 +350,8 @@ DeerFlow supports receiving tasks from messaging apps. Channels auto-start when

```yaml
channels:
# LangGraph Server URL (default: http://localhost:2024)
langgraph_url: http://localhost:2024
# LangGraph-compatible Gateway API base URL (default: http://localhost:8001/api)
langgraph_url: http://localhost:8001/api
# Gateway API URL (default: http://localhost:8001)
gateway_url: http://localhost:8001
```

@@ -444,6 +419,7 @@ channels:
Notes:
- `assistant_id: lead_agent` calls the default LangGraph assistant directly.
- If `assistant_id` is set to a custom agent name, DeerFlow still routes through `lead_agent` and injects that value as `agent_name`, so the custom agent's SOUL/config takes effect for IM channels.
- IM channel workers call Gateway's LangGraph-compatible API internally and automatically attach process-local internal auth plus the CSRF cookie/header pair required for thread and run creation.
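A minimal sketch of the CSRF cookie/header pairing mentioned above — the cookie and header names (`csrf_token`, `X-CSRF-Token`) are illustrative assumptions, not taken from this diff:

```python
import secrets

def csrf_pair(token: str = "") -> dict:
    # Double-submit pattern: the same token travels both as a cookie and as a
    # header, so the server can compare the two on thread/run creation requests.
    token = token or secrets.token_urlsafe(32)
    return {
        "Cookie": f"csrf_token={token}",  # cookie half (name assumed)
        "X-CSRF-Token": token,            # header half (name assumed)
    }

headers = csrf_pair("abc123")
print(headers["X-CSRF-Token"])  # abc123
```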

Set the corresponding API keys in your `.env` file:

@@ -504,7 +480,7 @@ WECOM_BOT_SECRET=your_bot_secret
4. Make sure backend dependencies include `wecom-aibot-python-sdk`. The channel uses a WebSocket long connection and does not require a public callback URL.
5. The current integration supports inbound text, image, and file messages. Final images/files generated by the agent are also sent back to the WeCom conversation.

When DeerFlow runs in Docker Compose, IM channels execute inside the `gateway` container. In that case, do not point `channels.langgraph_url` or `channels.gateway_url` at `localhost`; use container service names such as `http://langgraph:2024` and `http://gateway:8001`, or set `DEER_FLOW_CHANNELS_LANGGRAPH_URL` and `DEER_FLOW_CHANNELS_GATEWAY_URL`.
When DeerFlow runs in Docker Compose, IM channels execute inside the `gateway` container. In that case, do not point `channels.langgraph_url` or `channels.gateway_url` at `localhost`; use container service names such as `http://gateway:8001/api` and `http://gateway:8001`, or set `DEER_FLOW_CHANNELS_LANGGRAPH_URL` and `DEER_FLOW_CHANNELS_GATEWAY_URL`.
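In a Compose setup, the environment-variable override mentioned above could look like this on the `gateway` service (a sketch; the URLs follow the container service names given above):

```yaml
services:
  gateway:
    environment:
      DEER_FLOW_CHANNELS_LANGGRAPH_URL: http://gateway:8001/api
      DEER_FLOW_CHANNELS_GATEWAY_URL: http://gateway:8001
```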

**Commands**
