- CortexPilot already has three public first-run cases. The goal here is
- to make them easier to discover, explain, and reuse as proof-first
- assets.
-
+
+
+
+
- | Case | Current proof state | What to inspect today |
+ | Case | Current proof state | What to inspect today |
- | news_digest | Official release-proven first public baseline | healthy proof summary · single-run benchmark summary |
+ | news_digest | Official release-proven first public baseline | healthy proof summary · single-run benchmark summary |
- | topic_brief | Public showcase only; not yet equally release-proven | Treat as discovery/story surface until a dedicated healthy proof and benchmark bundle exists. |
+ | topic_brief | Public showcase only; not yet equally release-proven | Treat as discovery/story surface until a dedicated healthy proof and benchmark bundle exists. |
- | page_brief | Public showcase only; browser-backed path, not yet equally release-proven | Treat as discovery/story surface until a dedicated healthy proof and benchmark bundle exists. |
+ | page_brief | Public showcase only; browser-backed path, not yet equally release-proven | Treat as discovery/story surface until a dedicated healthy proof and benchmark bundle exists. |
- First run to proof to share
-
- - Start one public pack
- - Confirm it in Command Tower, Workflow Cases, and Proof & Replay
- - Reuse the Workflow Case as a share-ready recap asset
-
-
- For starter-kit users, the quickest truthful check today is still:
- prove the repo-local path first, keep the host-tool integration
- read-only, then confirm the same news_digest
- proof/baseline/recap trio still makes sense for your setup.
-
+ The proof pack for news_digest
+
+ This is the smallest reusable trust bundle on the public surface today:
+ one healthy proof summary, one single-run benchmark, one Workflow Case
+ recap, and one storefront status ledger that says what is still missing.
+
+ | Artifact | Why it matters | Open now |
+ | Healthy proof summary | Shows the tracked healthy result path for the official public baseline. | Open proof summary |
+ | Single-run benchmark summary | Shows the current public performance baseline without pretending it is a broad benchmark campaign. | Open benchmark summary |
+ | Workflow Case recap | Shows the strongest share-ready artifact for why Workflow Cases matter outside the operator UI. | Open recap asset |
+ | Proof-pack manifest | Shows the machine-readable directory of what is in the public trust bundle today and what is still missing. | Open proof-pack manifest |
+ | Global proof-pack index | Shows the storefront-wide index of proven bundles, showcase bundles, and current public proof gaps. | Open proof-pack index |
+ | Demo-status ledger | Shows which storefront captures are healthy, degraded, missing, or still only explanation assets. | Open demo-status ledger |
+
Why this helps discovery
@@ -119,6 +358,9 @@
Proof files you can inspect today
Tracked healthy proof summary for news_digest
Tracked single-run benchmark baseline for news_digest
Tracked Workflow Case recap asset for news_digest
+ Machine-readable proof-pack manifest for news_digest
+ Global proof-pack index across public proven and showcase bundles
+ Machine-readable live-capture contract for the tracked healthy GIF and English-first public capture set
Demo-status ledger for healthy, degraded, and missing storefront assets
Benchmark methodology and wording boundary
@@ -127,7 +369,14 @@ Current gaps that still matter
- The public healthy path is strongest for news_digest; the other two cases still need their own healthy proof and benchmark bundles.
- The current benchmark story is a tracked single-run baseline, not a broad release average.
- - The current recap story now has one tracked news_digest Workflow Case asset, but the storefront still lacks a healthy live-capture GIF and a broader benchmark artifact.
+ - The current recap story now has one tracked news_digest Workflow Case asset, tracked healthy local captures and proof assets, and one remaining broader benchmark gap.
+
+
+ What we still do not claim
+
+ - We do not claim that topic_brief or page_brief are equally release-proven yet.
+ - We do not claim that a storyboard or degraded local capture counts as healthy end-to-end proof.
+ - We do not claim that the current benchmark file is a broad multi-run release average.
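The proof-pack manifest added above is machine-readable, so downstream tooling can gate on it rather than on prose. The sketch below shows one way a consumer might do that; the field names are assumptions chosen to match the wording of this page, not the repo's actual manifest schema.

```python
import json

# Hypothetical manifest shape; the field names below are assumptions for
# illustration, not the repo's actual schema.
manifest = json.loads("""
{
  "artifact_type": "cortexpilot_public_proof_pack",
  "bundle_id": "news_digest",
  "proof_state": "release_proven",
  "missing_expected_artifacts": ["broader_multi_round_benchmark"]
}
""")


def is_release_proven(payload: dict) -> bool:
    # A bundle only counts as proven when it says so explicitly *and*
    # still discloses what is missing, mirroring the "what we still do
    # not claim" rule above.
    return (
        payload.get("proof_state") == "release_proven"
        and isinstance(payload.get("missing_expected_artifacts"), list)
    )
```

Under this reading, a showcase-only bundle such as `topic_brief` would simply fail the gate until it earns its own proof pack.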
diff --git a/package.json b/package.json
index 38e97ec..5dd7995 100644
--- a/package.json
+++ b/package.json
@@ -19,7 +19,7 @@
"truth:triage": "bash scripts/truth_triage.sh",
"precommit:quality": "CORTEXPILOT_HOST_COMPAT=1 bash scripts/pre_commit_quality_gate.sh",
"docs:render": "bash scripts/run_governance_py.sh scripts/render_docs.py",
- "docs:check": "bash scripts/run_governance_py.sh scripts/check_docs_navigation_registry.py && bash scripts/run_governance_py.sh scripts/check_docs_manual_fact_boundary.py && bash scripts/run_governance_py.sh scripts/check_docs_render_freshness.py",
+ "docs:check": "bash scripts/run_governance_py.sh scripts/check_docs_navigation_registry.py && bash scripts/run_governance_py.sh scripts/check_docs_manual_fact_boundary.py && bash scripts/run_governance_py.sh scripts/check_frontdoor_contract.py && bash scripts/run_governance_py.sh scripts/check_storefront_proof_assets.py && bash scripts/run_governance_py.sh scripts/check_docs_render_freshness.py",
"scan:secrets": "bash scripts/security_scan.sh",
"scan:workflow-security": "bash scripts/check_workflow_static_security.sh",
"scan:trivy": "bash scripts/check_trivy_repo_scan.sh",
@@ -101,8 +101,10 @@
"bench:e2e:speed:report-only": "bash scripts/bench_e2e_speed.sh --report-only",
"e2e:pm-chat": "bash scripts/e2e_pm_chat_command_tower_success.sh",
"e2e:pm-chat:real": "CORTEXPILOT_E2E_RUN_MODE=real CORTEXPILOT_E2E_RUNNER=agents CORTEXPILOT_E2E_REEXEC_STRICT=true bash scripts/e2e_pm_chat_command_tower_success.sh",
- "ci": "bash scripts/docker_ci.sh ci",
- "ci:host": "CORTEXPILOT_HOST_COMPAT=1 bash scripts/ci.sh",
+ "ci": "bash scripts/ci_local_fast.sh",
+ "ci:host": "bash scripts/ci_local_fast.sh",
+ "ci:strict": "bash scripts/docker_ci.sh ci",
+ "ci:strict:host": "CORTEXPILOT_HOST_COMPAT=1 bash scripts/ci.sh",
"ci:nightly:full": "bash scripts/ci_nightly_full.sh",
"dashboard:install": "bash scripts/run_workspace_app.sh dashboard typecheck",
"dashboard:dev": "bash scripts/run_workspace_app.sh dashboard dev",
diff --git a/packages/frontend-shared/uiCopy.js b/packages/frontend-shared/uiCopy.js
index 449f5c9..ff0c81f 100644
--- a/packages/frontend-shared/uiCopy.js
+++ b/packages/frontend-shared/uiCopy.js
@@ -8,7 +8,7 @@ const UI_COPY = {
homePhase2: {
heroTitle: "Command Tower for Codex and Claude Code workflows",
heroSubtitle:
- "Start one workflow case, watch Command Tower, then inspect Proof & Replay. CortexPilot keeps Codex and Claude Code work, MCP tools, evidence, and replay on one governed operator path.",
+ "See one proven workflow first, then choose the right adoption path. CortexPilot keeps Codex and Claude Code work, evidence, and replay on one governed operator path instead of scattered local scripts and logs.",
startFirstTaskLabel: "Start first task",
startNewTaskLabel: "Start new task",
viewLatestRunsLabel: "View latest runs",
@@ -34,11 +34,11 @@ const UI_COPY = {
desc: "Inspect evidence bundles, compare reruns, and replay failures before you trust the result.",
},
],
- publicTemplatesTitle: "Three public first-run cases",
+ publicTemplatesTitle: "One proven workflow, two showcase expansions",
publicTemplatesDescription:
- "Start with one public, read-only workflow case. `news_digest` is the official first public baseline; `topic_brief` and `page_brief` are showcase paths from the same front door.",
- publicTemplatesActionLabel: "Open task creation",
- publicTemplatesActionHref: "/pm",
+ "Start with `news_digest` first. It is the official public baseline. `topic_brief` and `page_brief` stay useful, but they are still showcase paths until they earn their own healthy proof bundles.",
+ publicTemplatesActionLabel: "Open proof pack",
+ publicTemplatesActionHref: "/use-cases/",
publicTemplateCards: [
{
href: "/pm?template=news_digest",
@@ -127,8 +127,8 @@ const UI_COPY = {
],
integrationTitle: "Choose the right adoption path",
integrationDescription:
- "Use the compatibility matrix as the front-door router, keep the use-case guide as the lighter proof-first side door, then open protocol, playbooks, packages, or AI surfaces only after the real job is clear.",
- proofFirstActionLabel: "Open use-case guide",
+ "Use the compatibility matrix as the main router, keep the proof-first guide as the fastest way to believe the product story, then open protocol, playbooks, packages, or AI surfaces only after the real job is clear.",
+ proofFirstActionLabel: "See first proven workflow",
proofFirstActionHref: "/use-cases/",
integrationCards: [
{
@@ -722,7 +722,7 @@ const UI_COPY = {
homePhase2: {
heroTitle: "面向 Codex 和 Claude Code 工作流的指挥塔",
heroSubtitle:
- "先启动一个工作流案例,再观察指挥塔,最后核对证明与回放。CortexPilot 把 Codex / Claude Code 工作、MCP 工具、证据和回放放进同一条受治理的操作路径。",
+ "先看一个已证明的工作流,再决定采用路径。CortexPilot 把 Codex / Claude Code 工作、证据和回放收进同一条受治理的操作路径,而不是散落在本地脚本和日志里。",
startFirstTaskLabel: "启动首个任务",
startNewTaskLabel: "启动新任务",
viewLatestRunsLabel: "查看最近 runs",
@@ -748,11 +748,11 @@ const UI_COPY = {
desc: "在真正信任结果前,先检查证据包、对比重跑和失败回放。",
},
],
- publicTemplatesTitle: "三个公开首跑 use case",
+ publicTemplatesTitle: "一个已证明工作流,两个展示扩展",
publicTemplatesDescription:
- "从一个公开、只读的 workflow case 开始。`news_digest` 是官方首个公开基线;`topic_brief` 和 `page_brief` 是同一前门下的展示路径。",
- publicTemplatesActionLabel: "打开任务创建",
- publicTemplatesActionHref: "/pm",
+ "先从 `news_digest` 开始。它是官方公开基线。`topic_brief` 和 `page_brief` 仍然有用,但在拿到各自健康证明包之前,它们仍属于展示路径。",
+ publicTemplatesActionLabel: "打开证明包",
+ publicTemplatesActionHref: "/use-cases/",
publicTemplateCards: [
{
href: "/pm?template=news_digest",
@@ -841,8 +841,8 @@ const UI_COPY = {
],
integrationTitle: "选择正确的采用路径",
integrationDescription:
- "先把 compatibility matrix 当成前门主路由,把 use-case guide 当成更轻的 proof-first 侧门,再在真正任务已经明确后进入协议、playbook、package 或 AI 页面,而不是在首页一次性读完所有说明。",
- proofFirstActionLabel: "打开 use-case 指南",
+ "先把 compatibility matrix 当成主路由,把 proof-first 指南当成最快建立信任的入口,再在任务真正明确后进入协议、playbook、package 或 AI 页面。",
+ proofFirstActionLabel: "查看首个已证明工作流",
proofFirstActionHref: "/use-cases/",
integrationCards: [
{
diff --git a/packages/frontend-shared/uiCopy.ts b/packages/frontend-shared/uiCopy.ts
index 9246120..a7faec4 100644
--- a/packages/frontend-shared/uiCopy.ts
+++ b/packages/frontend-shared/uiCopy.ts
@@ -844,7 +844,7 @@ const UI_COPY: Record = {
homePhase2: {
heroTitle: "Command Tower for Codex and Claude Code workflows",
heroSubtitle:
- "Start one workflow case, watch Command Tower, then inspect Proof & Replay. CortexPilot keeps Codex and Claude Code work, MCP tools, evidence, and replay on one governed operator path.",
+ "See one proven workflow first, then choose the right adoption path. CortexPilot keeps Codex and Claude Code work, evidence, and replay on one governed operator path instead of scattered local scripts and logs.",
startFirstTaskLabel: "Start first task",
startNewTaskLabel: "Start new task",
viewLatestRunsLabel: "View latest runs",
@@ -870,11 +870,11 @@ const UI_COPY: Record = {
desc: "Inspect evidence bundles, compare reruns, and replay failures before you trust the result.",
},
],
- publicTemplatesTitle: "Three public first-run cases",
+ publicTemplatesTitle: "One proven workflow, two showcase expansions",
publicTemplatesDescription:
- "Start with one public, read-only workflow case. `news_digest` is the official first public baseline; `topic_brief` and `page_brief` are showcase paths from the same front door.",
- publicTemplatesActionLabel: "Open task creation",
- publicTemplatesActionHref: "/pm",
+ "Start with `news_digest` first. It is the official public baseline. `topic_brief` and `page_brief` stay useful, but they are still showcase paths until they earn their own healthy proof bundles.",
+ publicTemplatesActionLabel: "Open proof pack",
+ publicTemplatesActionHref: "/use-cases/",
publicTemplateCards: [
{
href: "/pm?template=news_digest",
@@ -963,8 +963,8 @@ const UI_COPY: Record = {
],
integrationTitle: "Choose the right adoption path",
integrationDescription:
- "Use the compatibility matrix as the front-door router, keep the use-case guide as the lighter proof-first side door, then open protocol, playbooks, packages, or AI surfaces only after the real job is clear.",
- proofFirstActionLabel: "Open use-case guide",
+ "Use the compatibility matrix as the main router, keep the proof-first guide as the fastest way to believe the product story, then open protocol, playbooks, packages, or AI surfaces only after the real job is clear.",
+ proofFirstActionLabel: "See first proven workflow",
proofFirstActionHref: "/use-cases/",
integrationCards: [
{
@@ -1896,7 +1896,7 @@ const UI_COPY: Record = {
homePhase2: {
heroTitle: "面向 Codex 和 Claude Code 工作流的指挥塔",
heroSubtitle:
- "先启动一个工作流案例,再观察指挥塔,最后核对证明与回放。CortexPilot 把 Codex / Claude Code 工作、MCP 工具、证据和回放放进同一条受治理的操作路径。",
+ "先看一个已证明的工作流,再决定采用路径。CortexPilot 把 Codex / Claude Code 工作、证据和回放收进同一条受治理的操作路径,而不是散落在本地脚本和日志里。",
startFirstTaskLabel: "启动首个任务",
startNewTaskLabel: "启动新任务",
viewLatestRunsLabel: "查看最近 runs",
@@ -1922,11 +1922,11 @@ const UI_COPY: Record = {
desc: "在真正信任结果前,先检查证据包、对比重跑和失败回放。",
},
],
- publicTemplatesTitle: "三个公开首跑 use case",
+ publicTemplatesTitle: "一个已证明工作流,两个展示扩展",
publicTemplatesDescription:
- "从一个公开、只读的 workflow case 开始。`news_digest` 是官方首个公开基线;`topic_brief` 和 `page_brief` 是同一前门下的展示路径。",
- publicTemplatesActionLabel: "打开任务创建",
- publicTemplatesActionHref: "/pm",
+ "先从 `news_digest` 开始。它是官方公开基线。`topic_brief` 和 `page_brief` 仍然有用,但在拿到各自健康证明包之前,它们仍属于展示路径。",
+ publicTemplatesActionLabel: "打开证明包",
+ publicTemplatesActionHref: "/use-cases/",
publicTemplateCards: [
{
href: "/pm?template=news_digest",
@@ -2015,8 +2015,8 @@ const UI_COPY: Record = {
],
integrationTitle: "选择正确的采用路径",
integrationDescription:
- "先把 compatibility matrix 当成前门主路由,把 use-case guide 当成更轻的 proof-first 侧门,再在真正任务已经明确后进入协议、playbook、package 或 AI 页面,而不是在首页一次性读完所有说明。",
- proofFirstActionLabel: "打开 use-case 指南",
+ "先把 compatibility matrix 当成主路由,把 proof-first 指南当成最快建立信任的入口,再在任务真正明确后进入协议、playbook、package 或 AI 页面。",
+ proofFirstActionLabel: "查看首个已证明工作流",
proofFirstActionHref: "/use-cases/",
integrationCards: [
{
diff --git a/scripts/check_docs_render_freshness.py b/scripts/check_docs_render_freshness.py
index 7fb6ffc..a08386f 100644
--- a/scripts/check_docs_render_freshness.py
+++ b/scripts/check_docs_render_freshness.py
@@ -65,6 +65,9 @@ def main() -> int:
if not output.exists():
errors.append(f"manifest output missing: {output_rel}")
continue
+ freshness_strategy = str(item.get("freshness_strategy") or "timestamp").strip().lower()
+ if freshness_strategy == "existence_only":
+ continue
output_mtime = output.stat().st_mtime
for source_rel in item.get("source_inputs") or []:
source = ROOT / str(source_rel)
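The `existence_only` escape hatch added to `check_docs_render_freshness.py` can be exercised standalone. This is a simplified sketch, not the real checker: it assumes all source files exist and returns a boolean instead of appending to an error list.

```python
import os
import tempfile
from pathlib import Path


def output_is_fresh(item: dict, root: Path) -> bool:
    """Sketch of the patched rule: items marked freshness_strategy ==
    "existence_only" only need the output file to exist; everything else
    falls back to the source-vs-output mtime comparison."""
    output = root / item["output"]
    if not output.exists():
        return False
    strategy = str(item.get("freshness_strategy") or "timestamp").strip().lower()
    if strategy == "existence_only":
        return True
    output_mtime = output.stat().st_mtime
    return all(
        (root / src).stat().st_mtime <= output_mtime
        for src in item.get("source_inputs") or []
    )


# Tiny fixture: a source file deliberately newer than its rendered output.
root = Path(tempfile.mkdtemp())
(root / "src.md").write_text("source")
(root / "out.html").write_text("rendered")
os.utime(root / "src.md", (2_000_000_000, 2_000_000_000))
os.utime(root / "out.html", (1_000_000_000, 1_000_000_000))

item = {"output": "out.html", "source_inputs": ["src.md"]}
stale = output_is_fresh(item, root)  # False: the source is newer
exempt = output_is_fresh({**item, "freshness_strategy": "existence_only"}, root)  # True: mtimes ignored
```

The design point is that existence-only outputs (for example, hand-curated assets) no longer trip the freshness gate just because an unrelated source was touched.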
diff --git a/scripts/check_frontdoor_contract.py b/scripts/check_frontdoor_contract.py
new file mode 100644
index 0000000..bd66fd6
--- /dev/null
+++ b/scripts/check_frontdoor_contract.py
@@ -0,0 +1,239 @@
+#!/usr/bin/env python3
+from __future__ import annotations
+
+import argparse
+from html.parser import HTMLParser
+from pathlib import Path
+import re
+
+
+ROOT = Path(__file__).resolve().parents[1]
+INDEX_PATH = ROOT / "docs" / "index.html"
+USE_CASES_PATH = ROOT / "docs" / "use-cases" / "index.html"
+COMPATIBILITY_PATH = ROOT / "docs" / "compatibility" / "index.html"
+PROOF_SUMMARY_PATH = ROOT / "docs" / "releases" / "assets" / "news-digest-healthy-proof-2026-03-27.md"
+BENCHMARK_SUMMARY_PATH = ROOT / "docs" / "releases" / "assets" / "news-digest-benchmark-summary-2026-03-27.md"
+WORKFLOW_RECAP_PATH = ROOT / "docs" / "releases" / "assets" / "news-digest-workflow-case-recap-2026-03-27.md"
+PROOF_PACK_MANIFEST_PATH = ROOT / "docs" / "releases" / "assets" / "news-digest-proof-pack-2026-03-27.json"
+PROOF_PACK_INDEX_PATH = ROOT / "docs" / "assets" / "storefront" / "proof-pack-index.json"
+DEMO_STATUS_PATH = ROOT / "docs" / "assets" / "storefront" / "demo-status.md"
+
+
+def parse_args() -> argparse.Namespace:
+ parser = argparse.ArgumentParser(
+ description="Validate the static public front-door contract for CortexPilot."
+ )
+ return parser.parse_args()
+
+
+def _normalize_text(value: str) -> str:
+ return re.sub(r"\s+", " ", value).strip()
+
+
+class _AnchorParser(HTMLParser):
+ def __init__(self) -> None:
+ super().__init__()
+ self.anchors: list[tuple[str, str]] = []
+ self._current_href: str | None = None
+ self._parts: list[str] = []
+
+ def handle_starttag(self, tag: str, attrs: list[tuple[str, str | None]]) -> None:
+ if tag.lower() != "a":
+ return
+ href = dict(attrs).get("href")
+ if href:
+ self._current_href = href
+ self._parts = []
+
+ def handle_data(self, data: str) -> None:
+ if self._current_href is not None:
+ self._parts.append(data)
+
+ def handle_endtag(self, tag: str) -> None:
+ if tag.lower() != "a" or self._current_href is None:
+ return
+ text = _normalize_text("".join(self._parts))
+ self.anchors.append((self._current_href, text))
+ self._current_href = None
+ self._parts = []
+
+
+def _read_html(path: Path) -> str:
+ if not path.exists():
+ raise FileNotFoundError(path)
+ return path.read_text(encoding="utf-8")
+
+
+def _parse_anchors(path: Path) -> list[tuple[str, str]]:
+ parser = _AnchorParser()
+ parser.feed(_read_html(path))
+ return parser.anchors
+
+
+def _require_substrings(path: Path, content: str, required: list[str], errors: list[str]) -> None:
+ normalized = _normalize_text(content)
+ for snippet in required:
+ if _normalize_text(snippet) not in normalized:
+ errors.append(f"{path.relative_to(ROOT)} missing required text: {snippet}")
+
+
+def _require_anchor(path: Path, anchors: list[tuple[str, str]], href: str, text: str, errors: list[str]) -> None:
+ target = (_normalize_text(href), _normalize_text(text))
+ normalized = [(_normalize_text(item_href), _normalize_text(item_text)) for item_href, item_text in anchors]
+ if target not in normalized:
+ errors.append(
+ f"{path.relative_to(ROOT)} missing required link: text='{text}' href='{href}'"
+ )
+
+
+def main() -> int:
+ _ = parse_args()
+ errors: list[str] = []
+
+ required_paths = [
+ INDEX_PATH,
+ USE_CASES_PATH,
+ COMPATIBILITY_PATH,
+ PROOF_SUMMARY_PATH,
+ BENCHMARK_SUMMARY_PATH,
+ WORKFLOW_RECAP_PATH,
+ PROOF_PACK_MANIFEST_PATH,
+ PROOF_PACK_INDEX_PATH,
+ DEMO_STATUS_PATH,
+ ]
+ for path in required_paths:
+ if not path.exists():
+ errors.append(f"required front-door artifact missing: {path.relative_to(ROOT)}")
+
+ if errors:
+ print("❌ [frontdoor-contract] missing required artifacts:")
+ for item in errors:
+ print(f"- {item}")
+ return 1
+
+ index_html = _read_html(INDEX_PATH)
+ use_cases_html = _read_html(USE_CASES_PATH)
+ compatibility_html = _read_html(COMPATIBILITY_PATH)
+
+ _require_substrings(
+ INDEX_PATH,
+ index_html,
+ [
+ "See the first proven workflow",
+ "Choose the right adoption path",
+ "repo-backed operator control plane, not a hosted product",
+ "shipped MCP surface remains read-only",
+ "news_digest",
+ "topic_brief",
+ "page_brief",
+ ],
+ errors,
+ )
+
+ _require_substrings(
+ USE_CASES_PATH,
+ use_cases_html,
+ [
+ "First proven workflow and public proof pack",
+ "news_digest",
+ "only official release-proven public baseline",
+ "topic_brief",
+ "page_brief",
+ "not yet equally release-proven",
+ "What we still do not claim",
+ ],
+ errors,
+ )
+
+ _require_substrings(
+ COMPATIBILITY_PATH,
+ compatibility_html,
+ [
+ "One truthful compatibility matrix for modern coding-agent teams.",
+ "read-only MCP",
+ "See the first proven workflow",
+ ],
+ errors,
+ )
+
+ use_case_anchors = _parse_anchors(USE_CASES_PATH)
+ _require_anchor(
+ USE_CASES_PATH,
+ use_case_anchors,
+ "../releases/assets/news-digest-healthy-proof-2026-03-27.md",
+ "Open proof summary",
+ errors,
+ )
+ _require_anchor(
+ USE_CASES_PATH,
+ use_case_anchors,
+ "../releases/assets/news-digest-benchmark-summary-2026-03-27.md",
+ "Open benchmark summary",
+ errors,
+ )
+ _require_anchor(
+ USE_CASES_PATH,
+ use_case_anchors,
+ "../releases/assets/news-digest-workflow-case-recap-2026-03-27.md",
+ "Open recap asset",
+ errors,
+ )
+ _require_anchor(
+ USE_CASES_PATH,
+ use_case_anchors,
+ "../releases/assets/news-digest-proof-pack-2026-03-27.json",
+ "Open proof-pack manifest",
+ errors,
+ )
+ _require_anchor(
+ USE_CASES_PATH,
+ use_case_anchors,
+ "../assets/storefront/proof-pack-index.json",
+ "Open proof-pack index",
+ errors,
+ )
+ _require_anchor(
+ USE_CASES_PATH,
+ use_case_anchors,
+ "../assets/storefront/demo-status.md",
+ "Open demo-status ledger",
+ errors,
+ )
+
+ index_anchors = _parse_anchors(INDEX_PATH)
+ _require_anchor(
+ INDEX_PATH,
+ index_anchors,
+ "./use-cases/",
+ "See the first proven workflow",
+ errors,
+ )
+ _require_anchor(
+ INDEX_PATH,
+ index_anchors,
+ "./compatibility/",
+ "Choose the right adoption path",
+ errors,
+ )
+
+ compatibility_anchors = _parse_anchors(COMPATIBILITY_PATH)
+ _require_anchor(
+ COMPATIBILITY_PATH,
+ compatibility_anchors,
+ "../use-cases/",
+ "See the first proven workflow",
+ errors,
+ )
+
+ if errors:
+ print("❌ [frontdoor-contract] public front-door contract violations:")
+ for item in errors:
+ print(f"- {item}")
+ return 1
+
+ print("✅ [frontdoor-contract] public front-door contract satisfied")
+ return 0
+
+
+if __name__ == "__main__":
+ raise SystemExit(main())
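The `_AnchorParser` in the new front-door checker can be exercised in isolation. The sketch below reproduces its behavior — collecting `(href, whitespace-normalized text)` pairs — on a toy snippet; it is a trimmed re-implementation, not an import of the script itself.

```python
import re
from html.parser import HTMLParser


def _norm(value: str) -> str:
    # Collapse runs of whitespace, as the checker does before comparing.
    return re.sub(r"\s+", " ", value).strip()


class AnchorParser(HTMLParser):
    """Collect (href, visible text) pairs, like the checker's _AnchorParser."""

    def __init__(self) -> None:
        super().__init__()
        self.anchors = []
        self._href = None
        self._parts = []

    def handle_starttag(self, tag, attrs):
        if tag.lower() == "a":
            href = dict(attrs).get("href")
            if href:
                self._href = href
                self._parts = []

    def handle_data(self, data):
        if self._href is not None:
            self._parts.append(data)

    def handle_endtag(self, tag):
        if tag.lower() == "a" and self._href is not None:
            self.anchors.append((self._href, _norm("".join(self._parts))))
            self._href = None
            self._parts = []


parser = AnchorParser()
parser.feed('<p><a href="./use-cases/">See the  first proven workflow</a></p>')
# parser.anchors -> [('./use-cases/', 'See the first proven workflow')]
```

Matching on normalized text plus href, rather than raw HTML, keeps the contract stable across reflows of the static pages.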
diff --git a/scripts/check_github_control_plane.py b/scripts/check_github_control_plane.py
index 4fc2bd9..2d165e7 100644
--- a/scripts/check_github_control_plane.py
+++ b/scripts/check_github_control_plane.py
@@ -32,6 +32,16 @@ def _repo_path_exists(relative_path: str) -> bool:
return (ROOT / relative_path).exists()
+def _security_feature_status(repo_payload: dict, feature_name: str) -> str:
+ security = repo_payload.get("security_and_analysis")
+ if not isinstance(security, dict):
+ return ""
+ feature = security.get(feature_name)
+ if not isinstance(feature, dict):
+ return ""
+ return str(feature.get("status") or "").strip()
+
+
def main() -> int:
parser = argparse.ArgumentParser(description="Validate live GitHub control-plane settings against repo policy.")
parser.add_argument("--policy", default=str(DEFAULT_POLICY))
@@ -102,6 +112,19 @@ def main() -> int:
vulnerability_alerts_required = bool((platform_evidence.get("vulnerability_alerts") or {}).get("required"))
if vulnerability_alerts_required and vuln_alerts_code != 0:
errors.append(f"vulnerability alerts not proven: {vuln_alerts_payload}")
+ for feature_name in (
+ "secret_scanning",
+ "secret_scanning_push_protection",
+ "secret_scanning_non_provider_patterns",
+ "secret_scanning_validity_checks",
+ ):
+ feature_rule = platform_evidence.get(feature_name) if isinstance(platform_evidence.get(feature_name), dict) else {}
+ if feature_rule.get("required"):
+ status = _security_feature_status(repo_payload, feature_name)
+ if status != "enabled":
+ errors.append(
+ f"{feature_name} drift: actual={status or 'missing'!r} expected='enabled'"
+ )
dependabot_rule = platform_evidence.get("dependabot_config") if isinstance(platform_evidence.get("dependabot_config"), dict) else {}
dependabot_path = str(dependabot_rule.get("path") or "").strip()
if dependabot_path and not _repo_path_exists(dependabot_path):
@@ -137,6 +160,7 @@ def main() -> int:
"environments": env_payload if env_code == 0 else {"error": env_payload},
"private_vulnerability_reporting": pvr_payload if pvr_code == 0 else {"error": pvr_payload},
"vulnerability_alerts": {"enabled": True} if vuln_alerts_code == 0 else {"error": vuln_alerts_payload},
+ "security_and_analysis": repo_payload.get("security_and_analysis") if repo_code == 0 else {"error": repo_payload},
"codeql_default_setup": codeql_payload if codeql_code == 0 else {"error": codeql_payload},
"dependabot_alerts": dependabot_payload if dependabot_code == 0 else {"error": dependabot_payload},
"errors": errors,
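The `_security_feature_status` helper added above walks the GitHub repo payload defensively so that a missing or malformed level reads as "not enabled" rather than raising. A standalone copy of that walk behaves like this:

```python
def security_feature_status(repo_payload: dict, feature_name: str) -> str:
    """Same defensive walk as the patched helper: any missing or
    malformed level collapses to an empty string instead of raising."""
    security = repo_payload.get("security_and_analysis")
    if not isinstance(security, dict):
        return ""
    feature = security.get(feature_name)
    if not isinstance(feature, dict):
        return ""
    return str(feature.get("status") or "").strip()


payload = {"security_and_analysis": {"secret_scanning": {"status": "enabled"}}}
# security_feature_status(payload, "secret_scanning") -> "enabled"
# security_feature_status(payload, "secret_scanning_push_protection") -> ""
# security_feature_status({}, "secret_scanning") -> ""
```

That empty-string fallback is what lets the drift message report `actual='missing'` cleanly instead of crashing on partial API responses.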
diff --git a/scripts/check_storefront_proof_assets.py b/scripts/check_storefront_proof_assets.py
new file mode 100644
index 0000000..eee67cf
--- /dev/null
+++ b/scripts/check_storefront_proof_assets.py
@@ -0,0 +1,260 @@
+#!/usr/bin/env python3
+from __future__ import annotations
+
+import argparse
+import importlib.util
+import json
+from pathlib import Path
+
+
+ROOT = Path(__file__).resolve().parents[1]
+PROOF_PACK_INDEX = ROOT / "docs" / "assets" / "storefront" / "proof-pack-index.json"
+DEMO_STATUS_PATH = ROOT / "docs" / "assets" / "storefront" / "demo-status.md"
+LIVE_CAPTURE_REQUIREMENTS_PATH = ROOT / "docs" / "assets" / "storefront" / "live-capture-requirements.json"
+SHARE_KIT_PATH = ROOT / "docs" / "runbooks" / "storefront-share-kit.md"
+USE_CASES_PATH = ROOT / "docs" / "use-cases" / "index.html"
+
+
+def parse_args() -> argparse.Namespace:
+ parser = argparse.ArgumentParser(
+ description="Validate the public storefront proof asset contract."
+ )
+ return parser.parse_args()
+
+
+def _load_json(path: Path) -> dict:
+ payload = json.loads(path.read_text(encoding="utf-8"))
+ if not isinstance(payload, dict):
+ raise ValueError(f"{path} must contain a JSON object")
+ return payload
+
+
+def _require(condition: bool, message: str, errors: list[str]) -> None:
+ if not condition:
+ errors.append(message)
+
+
+def _asset_exists(path_text: str, errors: list[str], *, reason: str) -> None:
+ path = ROOT / path_text
+ if not path.exists():
+ errors.append(f"{reason}: missing asset {path_text}")
+
+
+def _require_text(path: Path, snippets: list[str], errors: list[str]) -> None:
+ text = path.read_text(encoding="utf-8")
+ for snippet in snippets:
+ if snippet not in text:
+ errors.append(f"{path.relative_to(ROOT)} missing required text: {snippet}")
+
+
+def _load_generator_module() -> object:
+ script_path = Path(__file__).resolve().with_name("generate_storefront_proof_pack_index.py")
+ spec = importlib.util.spec_from_file_location("cortexpilot_generate_storefront_proof_pack_index", script_path)
+    assert spec and spec.loader
+    module = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(module)
+ return module
+
+
+def main() -> int:
+ _ = parse_args()
+ errors: list[str] = []
+
+ if not PROOF_PACK_INDEX.exists():
+ print("❌ [storefront-proof-assets] proof-pack index missing")
+ return 1
+ if not LIVE_CAPTURE_REQUIREMENTS_PATH.exists():
+ print("❌ [storefront-proof-assets] live capture requirements missing")
+ return 1
+
+ generator = _load_generator_module()
+ generator.ROOT = ROOT
+ generator.REGISTRY_PATH = ROOT / "configs" / "storefront_proof_bundle_registry.json"
+ generator.OUTPUT_PATH = PROOF_PACK_INDEX
+ registry_payload = generator._load_json(generator.REGISTRY_PATH)
+ registry_payload["source_registry"] = generator.REGISTRY_PATH.relative_to(ROOT).as_posix()
+ expected_payload = generator.build_index(registry_payload)
+ current_payload = _load_json(PROOF_PACK_INDEX)
+ if current_payload != expected_payload:
+ errors.append("proof-pack index drifted from generator output")
+
+ payload = current_payload
+ live_capture_requirements = _load_json(LIVE_CAPTURE_REQUIREMENTS_PATH)
+ _require(
+ payload.get("artifact_type") == "cortexpilot_public_proof_pack_index",
+ "proof-pack index has unexpected artifact_type",
+ errors,
+ )
+ _require(
+ live_capture_requirements.get("artifact_type") == "cortexpilot_storefront_live_capture_requirements",
+ "live capture requirements has unexpected artifact_type",
+ errors,
+ )
+
+ vocabulary = payload.get("vocabulary_contract")
+ _require(isinstance(vocabulary, dict), "proof-pack index missing vocabulary_contract", errors)
+ if isinstance(vocabulary, dict):
+ _require(
+ vocabulary.get("proven_workflow_label") == "first proven workflow",
+ "proof-pack index must pin the proven workflow label",
+ errors,
+ )
+ _require(
+ vocabulary.get("proof_pack_label") == "public proof pack",
+ "proof-pack index must pin the proof pack label",
+ errors,
+ )
+
+ bundles = payload.get("bundles")
+ _require(isinstance(bundles, list), "proof-pack index missing bundles[]", errors)
+ bundle_map = {}
+ if isinstance(bundles, list):
+ for item in bundles:
+ if isinstance(item, dict) and isinstance(item.get("bundle_id"), str):
+ bundle_map[item["bundle_id"]] = item
+
+ for bundle_id in ("news_digest", "topic_brief", "page_brief"):
+ _require(bundle_id in bundle_map, f"missing proof bundle `{bundle_id}`", errors)
+
+ news = bundle_map.get("news_digest", {})
+ if isinstance(news, dict):
+ _require(news.get("proof_state") == "release_proven", "news_digest must stay release_proven", errors)
+ _require(
+ news.get("claim_scope") == "official_first_public_baseline",
+ "news_digest must stay the official first public baseline",
+ errors,
+ )
+ pack_manifest = str(news.get("pack_manifest") or "").strip()
+ _require(bool(pack_manifest), "news_digest must reference a pack_manifest", errors)
+ if pack_manifest:
+ _asset_exists(pack_manifest, errors, reason="news_digest pack manifest")
+
+ capture_contract = news.get("capture_contract")
+ _require(isinstance(capture_contract, dict), "news_digest missing capture_contract", errors)
+ if isinstance(capture_contract, dict):
+ _require(
+ capture_contract.get("healthy_live_capture_gif_present") is True,
+ "news_digest capture contract must acknowledge the landed healthy live-capture GIF",
+ errors,
+ )
+ _require(
+ capture_contract.get("healthy_english_first_public_capture_set_present") is True,
+ "news_digest capture contract must acknowledge the landed healthy English-first public capture set",
+ errors,
+ )
+
+ missing = news.get("missing_expected_artifacts")
+ _require(isinstance(missing, list), "news_digest missing expected_artifacts list", errors)
+ if isinstance(missing, list):
+ required_missing = {"broader_multi_round_benchmark"}
+ missing_set = {str(item) for item in missing}
+ if not required_missing.issubset(missing_set):
+ errors.append("news_digest missing_expected_artifacts must retain the broader benchmark gap")
+ forbidden_missing = {
+ "healthy_live_capture_gif",
+ "healthy_english_first_public_capture_set",
+ }
+ if forbidden_missing & missing_set:
+ errors.append("news_digest missing_expected_artifacts still lists landed healthy capture assets")
+
+ assets = news.get("assets")
+ _require(isinstance(assets, list), "news_digest missing assets[]", errors)
+ if isinstance(assets, list):
+ roles = {str(item.get("role")) for item in assets if isinstance(item, dict)}
+ required_roles = {
+ "healthy_proof_summary",
+ "healthy_proof_summary_machine",
+ "benchmark_summary",
+ "benchmark_summary_machine",
+ "workflow_case_recap",
+ "demo_status_ledger",
+ "dashboard_home_capture",
+ "dashboard_command_tower_capture",
+ "dashboard_runs_capture",
+ "healthy_live_capture_gif",
+ }
+ if not required_roles.issubset(roles):
+ errors.append("news_digest bundle lost one or more required proof asset roles")
+ for item in assets:
+ if not isinstance(item, dict):
+ errors.append("news_digest assets[] must contain objects")
+ continue
+ path_text = str(item.get("path") or "").strip()
+ if not path_text:
+ errors.append("news_digest assets[] contains an entry without path")
+ continue
+ _asset_exists(path_text, errors, reason="news_digest asset")
+
+ for showcase_id in ("topic_brief", "page_brief"):
+ bundle = bundle_map.get(showcase_id, {})
+ if isinstance(bundle, dict):
+ _require(
+ bundle.get("proof_state") == "showcase_only",
+ f"{showcase_id} must stay showcase_only until its own healthy proof bundle exists",
+ errors,
+ )
+ missing = bundle.get("missing_expected_artifacts")
+ _require(
+ isinstance(missing, list) and len(missing) > 0,
+ f"{showcase_id} must keep explicit missing_expected_artifacts",
+ errors,
+ )
+
+ _require_text(
+ DEMO_STATUS_PATH,
+ [
+ "Healthy backend-backed dashboard capture set",
+ "Healthy backend-backed live GIF",
+ "safe repo-side proof of a healthy local first public path",
+ ],
+ errors,
+ )
+ _require_text(
+ SHARE_KIT_PATH,
+ [
+ "Healthy backend-backed dashboard capture set",
+ "Healthy backend-backed live GIF",
+ "safe to reference as repo-tracked proof, not as proof of live GitHub publication",
+ ],
+ errors,
+ )
+ _require_text(
+ USE_CASES_PATH,
+ [
+ "tracked healthy local captures and proof assets",
+ "The current benchmark story is a tracked single-run baseline, not a broad release average.",
+ "Global proof-pack index across public proven and showcase bundles",
+ ],
+ errors,
+ )
+ requirements_assets = live_capture_requirements.get("required_assets")
+ _require(isinstance(requirements_assets, list), "live capture requirements missing required_assets[]", errors)
+ if isinstance(requirements_assets, list):
+ required_ids = {
+ "healthy_live_capture_gif",
+ "healthy_english_first_dashboard_home_capture",
+ "healthy_english_first_command_tower_capture",
+ "healthy_english_first_runs_capture",
+ }
+ seen_ids = {str(item.get("asset_id")) for item in requirements_assets if isinstance(item, dict)}
+ if not required_ids.issubset(seen_ids):
+ errors.append("live capture requirements lost one or more required asset ids")
+ for item in requirements_assets:
+ if not isinstance(item, dict):
+ errors.append("live capture requirements entries must be objects")
+ continue
+ if str(item.get("status") or "").strip() != "present":
+ errors.append("live capture requirements must mark landed assets as present")
+
+ if errors:
+ print("❌ [storefront-proof-assets] public proof asset contract violations:")
+ for item in errors:
+ print(f"- {item}")
+ return 1
+
+ print("✅ [storefront-proof-assets] public proof asset contract satisfied")
+ return 0
+
+
+if __name__ == "__main__":
+ raise SystemExit(main())
diff --git a/scripts/ci_local_fast.sh b/scripts/ci_local_fast.sh
new file mode 100644
index 0000000..d29d44f
--- /dev/null
+++ b/scripts/ci_local_fast.sh
@@ -0,0 +1,29 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+cd "$ROOT_DIR"
+
+source "$ROOT_DIR/scripts/lib/toolchain_env.sh"
+
+export PYTHONDONTWRITEBYTECODE=1
+export CORTEXPILOT_HOST_COMPAT=1
+
+RUNNER_TEMP_DIR="${RUNNER_TEMP:-$ROOT_DIR/.runtime-cache/cache/tmp/runner}"
+mkdir -p "$RUNNER_TEMP_DIR"
+export RUNNER_TEMP="$RUNNER_TEMP_DIR"
+
+echo "🚦 [ci-local-fast] start hosted-aligned local fast gate"
+
+# Keep the default local CI path lightweight and deterministic.
+# Full strict CI remains available via npm run ci:strict.
+CORTEXPILOT_DOCTOR_REQUIRE_DOCKER=0 \
+CORTEXPILOT_DOCTOR_REQUIRE_SUDO=0 \
+bash scripts/ci_control_plane_doctor.sh
+
+bash scripts/test_ci_policy_resolution.sh
+bash scripts/test_perf_smoke_policy_resolution.sh
+bash scripts/check_workflow_static_security.sh
+bash scripts/test_quick.sh --no-related
+
+echo "✅ [ci-local-fast] completed"
diff --git a/scripts/generate_storefront_proof_pack_index.py b/scripts/generate_storefront_proof_pack_index.py
new file mode 100644
index 0000000..91a5828
--- /dev/null
+++ b/scripts/generate_storefront_proof_pack_index.py
@@ -0,0 +1,160 @@
+#!/usr/bin/env python3
+from __future__ import annotations
+
+import argparse
+import json
+from pathlib import Path
+from typing import Any
+
+
+ROOT = Path(__file__).resolve().parents[1]
+REGISTRY_PATH = ROOT / "configs" / "storefront_proof_bundle_registry.json"
+OUTPUT_PATH = ROOT / "docs" / "assets" / "storefront" / "proof-pack-index.json"
+
+PRIMARY_ROLE_MAP: dict[str, dict[str, Any]] = {
+ "proof_summary_markdown": {
+ "role": "healthy_proof_summary",
+ "format": "markdown",
+ "truth_class": "repo_side_proof",
+ "required_for_claim": True,
+ },
+ "proof_summary_json": {
+ "role": "healthy_proof_summary_machine",
+ "format": "json",
+ "truth_class": "repo_side_machine_summary",
+ "required_for_claim": True,
+ },
+ "benchmark_summary_markdown": {
+ "role": "benchmark_summary",
+ "format": "markdown",
+ "truth_class": "repo_side_benchmark",
+ "required_for_claim": True,
+ },
+ "benchmark_summary_json": {
+ "role": "benchmark_summary_machine",
+ "format": "json",
+ "truth_class": "repo_side_machine_summary",
+ "required_for_claim": True,
+ },
+ "workflow_case_recap_markdown": {
+ "role": "workflow_case_recap",
+ "format": "markdown",
+ "truth_class": "share_ready_recap",
+ "required_for_claim": True,
+ },
+ "demo_status_markdown": {
+ "role": "demo_status_ledger",
+ "format": "markdown",
+ "truth_class": "truth_boundary_ledger",
+ "required_for_claim": True,
+ },
+}
+
+
+def parse_args() -> argparse.Namespace:
+ parser = argparse.ArgumentParser(description="Generate the public storefront proof-pack index.")
+ parser.add_argument("--registry", default=str(REGISTRY_PATH))
+ parser.add_argument("--output", default=str(OUTPUT_PATH))
+ parser.add_argument("--check", action="store_true")
+ return parser.parse_args()
+
+
+def _load_json(path: Path) -> dict[str, Any]:
+ payload = json.loads(path.read_text(encoding="utf-8"))
+ if not isinstance(payload, dict):
+ raise SystemExit(f"❌ [generate-storefront-proof-pack-index] expected JSON object: {path}")
+ return payload
+
+
+def _normalize_asset(path_text: str, descriptor: dict[str, Any]) -> dict[str, Any]:
+ item = {"path": path_text}
+ item.update(descriptor)
+ return item
+
+
+def _supporting_asset(path_text: str, key: str) -> dict[str, Any]:
+ ext = Path(path_text).suffix.lower().lstrip(".") or "unknown"
+ return {
+ "path": path_text,
+ "role": key,
+ "format": ext,
+ "truth_class": "supporting_capture",
+ "required_for_claim": False,
+ }
+
+
+def build_index(registry_payload: dict[str, Any]) -> dict[str, Any]:
+ bundles_payload = registry_payload.get("bundles")
+ if not isinstance(bundles_payload, list):
+ raise SystemExit("❌ [generate-storefront-proof-pack-index] registry missing bundles[]")
+
+ rendered_bundles: list[dict[str, Any]] = []
+ for bundle in bundles_payload:
+ if not isinstance(bundle, dict):
+ raise SystemExit("❌ [generate-storefront-proof-pack-index] bundle entries must be objects")
+ rendered = dict(bundle)
+ assets: list[dict[str, Any]] = []
+ pack_manifest_rel = str(bundle.get("pack_manifest") or "").strip()
+ if pack_manifest_rel:
+ pack_manifest = _load_json(ROOT / pack_manifest_rel)
+ rendered.setdefault("safe_public_claims", bundle.get("safe_public_claims", []))
+ rendered.setdefault("forbidden_claims", bundle.get("forbidden_claims", []))
+ rendered.setdefault("missing_expected_artifacts", bundle.get("missing_expected_artifacts", []))
+
+ primary_assets = pack_manifest.get("primary_assets")
+ if isinstance(primary_assets, dict):
+ for key, descriptor in PRIMARY_ROLE_MAP.items():
+ path_text = str(primary_assets.get(key) or "").strip()
+ if path_text:
+ assets.append(_normalize_asset(path_text, descriptor))
+
+ supporting_assets = pack_manifest.get("supporting_assets")
+ if isinstance(supporting_assets, dict):
+ for key, path_value in supporting_assets.items():
+ path_text = str(path_value or "").strip()
+ if path_text:
+ assets.append(_supporting_asset(path_text, key))
+
+ rendered["pack_manifest"] = pack_manifest_rel
+
+ rendered["assets"] = assets
+ rendered_bundles.append(rendered)
+
+ return {
+ "artifact_type": "cortexpilot_public_proof_pack_index",
+ "generated_by": "scripts/generate_storefront_proof_pack_index.py",
+ "source_registry": Path(registry_payload.get("source_registry") or "configs/storefront_proof_bundle_registry.json").as_posix(),
+ "vocabulary_contract": registry_payload.get("vocabulary_contract", {}),
+ "bundles": rendered_bundles,
+ }
+
+
+def main() -> int:
+ args = parse_args()
+ registry_path = Path(args.registry).expanduser().resolve()
+ output_path = Path(args.output).expanduser().resolve()
+
+ registry_payload = _load_json(registry_path)
+ registry_payload["source_registry"] = registry_path.relative_to(ROOT).as_posix()
+ rendered = build_index(registry_payload)
+ rendered_json = json.dumps(rendered, ensure_ascii=False, indent=2) + "\n"
+
+ if args.check:
+ if not output_path.exists():
+ print(f"❌ [generate-storefront-proof-pack-index] missing output: {output_path.relative_to(ROOT)}")
+ return 1
+ current = output_path.read_text(encoding="utf-8")
+ if current != rendered_json:
+ print("❌ [generate-storefront-proof-pack-index] output drift detected")
+ return 1
+ print("✅ [generate-storefront-proof-pack-index] output is up to date")
+ return 0
+
+ output_path.parent.mkdir(parents=True, exist_ok=True)
+ output_path.write_text(rendered_json, encoding="utf-8")
+ print(f"✅ [generate-storefront-proof-pack-index] wrote {output_path.relative_to(ROOT)}")
+ return 0
+
+
+if __name__ == "__main__":
+ raise SystemExit(main())
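The `--check` branch above implements a simple drift gate: render in memory, then compare byte-for-byte against the committed artifact. The same pattern in isolation (file names here are illustrative):

```python
import json
import tempfile
from pathlib import Path


def is_up_to_date(output_path: Path, rendered_json: str) -> bool:
    # Drift check: the committed file must exist and match the fresh
    # render exactly, trailing newline included.
    return output_path.exists() and output_path.read_text(encoding="utf-8") == rendered_json


with tempfile.TemporaryDirectory() as tmp:
    out = Path(tmp) / "proof-pack-index.json"
    rendered = json.dumps({"bundles": []}, ensure_ascii=False, indent=2) + "\n"
    assert not is_up_to_date(out, rendered)  # missing output counts as drift
    out.write_text(rendered, encoding="utf-8")
    assert is_up_to_date(out, rendered)      # byte-identical means clean
```

Because the comparison is exact, any formatting change to the generator (indentation, key order, trailing newline) also registers as drift, which is the point of the gate.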
diff --git a/scripts/pre_commit_quality_gate.sh b/scripts/pre_commit_quality_gate.sh
index 97ad0b6..336886f 100755
--- a/scripts/pre_commit_quality_gate.sh
+++ b/scripts/pre_commit_quality_gate.sh
@@ -66,7 +66,7 @@ run_gate() {
GATE_LOGS+=("$log_file")
}
-echo "🚦 [pre-commit-quality-gate] scope=$scope start parallel core quality gates"
+echo "🚦 [pre-commit-quality-gate] scope=$scope start fast local commit gates"
run_gate "lint" bash scripts/pre_commit_lint_gate.sh
run_gate "governance_python_entrypoints" bash scripts/check_governance_python_entrypoints.sh
@@ -76,9 +76,6 @@ run_gate "actionlint" bash scripts/check_actionlint.sh
run_gate "zizmor" bash scripts/check_zizmor.sh --offline "$ROOT_DIR"
run_gate "docs_navigation_registry" bash scripts/run_governance_py.sh scripts/check_docs_navigation_registry.py
run_gate "docs_fact_boundary" bash scripts/run_governance_py.sh scripts/check_docs_manual_fact_boundary.py
-run_gate "github_security_alerts" bash scripts/run_governance_py.sh scripts/check_github_security_alerts.py --mode require --repo xiaojiou176-open/CortexPilot-public
-run_gate "doc_drift" bash scripts/hooks/doc_drift_gate.sh
-run_gate "doc_sync" bash scripts/hooks/doc_sync_gate.sh
if [[ "$run_test_smell" -eq 1 ]]; then
run_gate "test_smell" bash scripts/test_smell_gate.sh
else
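`run_gate` is defined earlier in this script, outside the hunk shown. A hypothetical minimal shape of the pattern it implements (per-gate log capture plus a recorded log path for the summary); the log location and messages here are assumptions:

```shell
GATE_LOGS=()

# Run one named gate command, capture its output to a log file, and
# remember the log path so a summary step can point at failures.
run_gate() {
  local name="$1"; shift
  local log_file="${TMPDIR:-/tmp}/gate-${name}.log"
  if "$@" >"$log_file" 2>&1; then
    echo "gate ${name}: pass"
  else
    echo "gate ${name}: fail (log: ${log_file})" >&2
    return 1
  fi
  GATE_LOGS+=("$log_file")
}
```

Capturing per-gate logs keeps the console output short while still leaving a full trail for any gate that fails.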
diff --git a/scripts/pre_push_quality_gate.sh b/scripts/pre_push_quality_gate.sh
index ce2963c..a52cd3c 100755
--- a/scripts/pre_push_quality_gate.sh
+++ b/scripts/pre_push_quality_gate.sh
@@ -135,8 +135,9 @@ require_skip_precommit_break_glass_or_fail() {
echo "🚦 [pre-push-quality-gate] local-first layered gate start"
# Layered gate strategy:
-# - Pre-commit handles: lint, doc_drift, doc_sync, test_smell (incremental)
-# - Pre-push handles: env governance, contract checks, incremental tests, external probe
+# - Pre-commit handles: cheap local commit gates + incremental test_smell
+# - Pre-push default handles: light repo contracts + quick tests
+# - Pre-push strict bundle handles: scanners, reports, external probe, broader local CI mirror
# - CI handles: full comprehensive checks (catch-all for --no-verify bypass)
# Check if pre-commit already passed recently (within 5 minutes)
@@ -159,44 +160,42 @@ else
fi
if [[ "$skip_lint_doc_gates" != "1" ]]; then
- echo "🔍 [pre-push-quality-gate] running lint and doc gates (pre-commit not detected)"
+ echo "🔍 [pre-push-quality-gate] running lint gate (pre-commit not detected)"
bash scripts/pre_commit_lint_gate.sh
- bash scripts/hooks/doc_drift_gate.sh
- bash scripts/hooks/doc_sync_gate.sh
else
- echo "⏭️ [pre-push-quality-gate] skipped lint/doc gates (already passed in pre-commit)"
+ echo "⏭️ [pre-push-quality-gate] skipped duplicate lint gate (already passed in pre-commit)"
fi
-# Pre-push exclusive gates (not in pre-commit)
-echo "🔍 [pre-push-quality-gate] running pre-push exclusive gates"
+# Pre-push fast-path gates (not in pre-commit)
+echo "🔍 [pre-push-quality-gate] running pre-push fast-path gates"
bash scripts/check_governance_python_entrypoints.sh
bash scripts/check_workflow_static_security.sh
-bash scripts/check_secret_scan_closeout.sh --mode current
-bash scripts/check_trivy_repo_scan.sh
bash scripts/run_governance_py.sh scripts/check_repo_positioning.py
bash scripts/run_governance_py.sh scripts/check_relocation_residues.py
-bash scripts/run_governance_py.sh scripts/check_github_security_alerts.py --mode require --repo xiaojiou176-open/CortexPilot-public
-bash scripts/run_governance_py.sh scripts/check_developer_facing_english.py
-bash scripts/run_governance_py.sh scripts/check_third_party_asset_registry.py
-bash scripts/run_governance_py.sh scripts/check_root_semantic_cleanliness.py
bash scripts/run_governance_py.sh scripts/check_env_governance.py --mode gate --max-deprecated-count 10 --max-deprecated-ratio 0.03
-bash scripts/run_governance_py.sh scripts/refresh_governance_evidence_manifest.py
-bash scripts/run_governance_py.sh scripts/build_governance_scorecard.py --enforce
-bash scripts/run_governance_py.sh scripts/build_governance_closeout_report.py --mode pre-push
-bash scripts/run_governance_py.sh scripts/check_active_report_identity.py
-bash scripts/run_governance_py.sh scripts/check_workflow_runner_governance.py
-bash scripts/run_governance_py.sh scripts/check_docs_render_freshness.py
bash scripts/run_governance_py.sh scripts/check_changed_scope_map.py
bash scripts/run_governance_py.sh scripts/check_e2e_marker_consistency.py
echo "ℹ️ [pre-push-quality-gate] skip desktop Cargo.lock audit in the default path; Linux/BSD desktop native graph review stays manual-only via bash scripts/docker_ci.sh lane desktop-native-smoke, and excluded unsupported-surface advisories must remain declared in configs/cargo_audit_ignored_advisories.json + governance closeout."
# Local-first layered rule:
-# pre-push runs a strict local CI profile (heavier than pre-commit, lighter than remote strict CI),
-# and remote CI remains the highest-strictness second-pass verifier.
-# Break-glass escape is explicit + auditable via reason/ticket.
-run_local_ci="${CORTEXPILOT_PRE_PUSH_RUN_CI_DOUBLE_CHECK:-1}"
+# pre-push runs a lightweight fast path by default and keeps the old strict
+# local mirror as an explicit opt-in. Remote CI remains the highest-strictness
+# second-pass verifier.
+run_local_ci="${CORTEXPILOT_PRE_PUSH_RUN_CI_DOUBLE_CHECK:-0}"
if [[ "$run_local_ci" == "1" ]]; then
- echo "🚦 [pre-push-quality-gate] running layered local verification bundle"
+ echo "🚦 [pre-push-quality-gate] running strict local verification bundle (opt-in)"
+ bash scripts/check_secret_scan_closeout.sh --mode current
+ bash scripts/check_trivy_repo_scan.sh
+ bash scripts/run_governance_py.sh scripts/check_github_security_alerts.py --mode require --repo xiaojiou176-open/CortexPilot-public
+ bash scripts/run_governance_py.sh scripts/check_developer_facing_english.py
+ bash scripts/run_governance_py.sh scripts/check_third_party_asset_registry.py
+ bash scripts/run_governance_py.sh scripts/check_root_semantic_cleanliness.py
+ bash scripts/run_governance_py.sh scripts/check_workflow_runner_governance.py
+ bash scripts/run_governance_py.sh scripts/check_docs_render_freshness.py
+ bash scripts/run_governance_py.sh scripts/refresh_governance_evidence_manifest.py
+ bash scripts/run_governance_py.sh scripts/build_governance_scorecard.py --enforce
+ bash scripts/run_governance_py.sh scripts/build_governance_closeout_report.py --mode pre-push
+ bash scripts/run_governance_py.sh scripts/check_active_report_identity.py
PRE_PUSH_EXTERNAL_PROBE_PROVIDER_MODE="$(resolve_pre_push_probe_provider_mode_or_fail)"
# Incremental test mode: only run tests related to changed files
@@ -260,8 +259,11 @@ EOF
--provider-api-mode "${PRE_PUSH_EXTERNAL_PROBE_PROVIDER_MODE}" \
--hard-timeout-sec "${CORTEXPILOT_PRE_PUSH_EXTERNAL_PROBE_TIMEOUT_SEC:-120}"
elif [[ "$run_local_ci" == "0" ]]; then
+ echo "🚦 [pre-push-quality-gate] running fast local verification bundle (default)"
+ bash scripts/test_quick.sh --no-related
+elif [[ "$run_local_ci" == "off" ]]; then
if [[ "${CORTEXPILOT_PRE_PUSH_BREAK_GLASS:-0}" != "1" ]]; then
- echo "❌ [pre-push-quality-gate] CI double-check disabled without break-glass" >&2
+ echo "❌ [pre-push-quality-gate] off mode requires break-glass" >&2
echo "Set CORTEXPILOT_PRE_PUSH_BREAK_GLASS=1 with reason/ticket to bypass." >&2
exit 1
fi
@@ -275,13 +277,13 @@ elif [[ "$run_local_ci" == "0" ]]; then
"${CORTEXPILOT_PRE_PUSH_BREAK_GLASS_REASON}" \
"${CORTEXPILOT_PRE_PUSH_BREAK_GLASS_TICKET}" \
"run_local_ci=${run_local_ci}")"
- echo "⚠️ [pre-push-quality-gate] break-glass: skip local CI double-check"
+ echo "⚠️ [pre-push-quality-gate] break-glass: skip all local verification bundles"
echo " reason=${CORTEXPILOT_PRE_PUSH_BREAK_GLASS_REASON}"
echo " ticket=${CORTEXPILOT_PRE_PUSH_BREAK_GLASS_TICKET}"
echo " audit_log=${audit_log_path}"
else
- echo "❌ [pre-push-quality-gate] invalid CORTEXPILOT_PRE_PUSH_RUN_CI_DOUBLE_CHECK=${run_local_ci}" >&2
+ echo "❌ [pre-push-quality-gate] invalid CORTEXPILOT_PRE_PUSH_RUN_CI_DOUBLE_CHECK=${run_local_ci} (expected: 0|1|off)" >&2
exit 1
fi
-echo "✅ [pre-push-quality-gate] local-first heavy gate passed"
+echo "✅ [pre-push-quality-gate] local-first layered gate passed"
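The three-way mode switch above can be condensed into one dispatch. A small sketch mirroring the accepted values (the function name is illustrative; the env variable and values are from the script):

```shell
# Map CORTEXPILOT_PRE_PUSH_RUN_CI_DOUBLE_CHECK to a lane, defaulting to
# the fast path, exactly as the gate above does.
resolve_pre_push_mode() {
  case "${1:-0}" in
    1)   echo "strict local bundle (opt-in)" ;;
    0)   echo "fast local bundle (default)" ;;
    off) echo "break-glass skip (audited)" ;;
    *)   echo "invalid"; return 1 ;;
  esac
}
```

Called as `resolve_pre_push_mode "${CORTEXPILOT_PRE_PUSH_RUN_CI_DOUBLE_CHECK:-0}"`, an unset variable lands on the fast path, so the strict mirror only runs when a developer opts in.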
diff --git a/scripts/render_docs.py b/scripts/render_docs.py
index 433301f..226e057 100644
--- a/scripts/render_docs.py
+++ b/scripts/render_docs.py
@@ -103,6 +103,7 @@ def _inject_fragments() -> None:
def main() -> int:
args = parse_args()
if not args.inject_only:
+ _run(["python3", "scripts/generate_storefront_proof_pack_index.py"])
_run(["python3", "scripts/ui_button_inventory.py", "--surface", "all"])
_run(["python3", "scripts/sync_ui_button_matrix.py", "--tiers", "P0,P1"])
_write_fragments()
diff --git a/tooling/search/search_engine.py b/tooling/search/search_engine.py
index 2d5f5ce..47d3ec6 100644
--- a/tooling/search/search_engine.py
+++ b/tooling/search/search_engine.py
@@ -271,20 +271,45 @@ def _write_web_error_artifacts(artifacts_dir: Path, error: str, page: Any | None
def _pick_chat_input_locator(page: Any) -> Any | None:
selectors = (
- "textarea",
- "input[type='text']",
+ "[data-placeholder='Ask anything'][contenteditable='true']",
+ ".tiptap.ProseMirror[contenteditable='true']",
+ "[aria-label='Enter a prompt for Gemini'][contenteditable='true']",
"[role='textbox'][contenteditable='true']",
"[contenteditable='true'][role='textbox']",
- "[aria-label='Enter a prompt for Gemini'][contenteditable='true']",
".ql-editor[contenteditable='true']",
+ "textarea",
+ "input[type='text']",
)
for selector in selectors:
locator = page.locator(selector)
- if locator.count() > 0:
+ if locator.count() <= 0:
+ continue
+ if not hasattr(locator, "nth"):
return locator.first
+ for index in range(locator.count()):
+ candidate = locator.nth(index)
+ try:
+ if candidate.is_visible():
+ return candidate
+ except Exception: # noqa: BLE001
+ return locator.first
+ return None
+
+
+def _activate_chat_input(locator: Any) -> None:
+ try:
+ locator.click(timeout=5000)
+ return
+ except Exception:
+ pass
+ try:
+ locator.click(timeout=5000, force=True)
+ return
+ except Exception:
+ pass
+ locator.focus()
+
+
def _chat_provider_search(
query: str,
provider: str,
@@ -353,7 +378,7 @@ def _chat_provider_search(
locator = _pick_chat_input_locator(page)
if locator is None:
raise RuntimeError("input box not found")
- locator.click()
+ _activate_chat_input(locator)
locator.fill(query)
locator.press("Enter")
page.wait_for_timeout(3000)
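The selector-priority logic in `_pick_chat_input_locator` is easier to see without Playwright in the way. A standalone sketch of the same policy, with `Candidate` standing in for a locator element (this simplified version keeps only the priority-order and visibility rules, not the exception fallback):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Candidate:
    visible: bool

    def is_visible(self) -> bool:
        return self.visible


def pick_first_visible(groups: list[list[Candidate]]) -> Candidate | None:
    # Walk selector groups in priority order; within a group, return the
    # first visible candidate. A group whose matches are all hidden is
    # skipped so a lower-priority selector can still win.
    for candidates in groups:
        for candidate in candidates:
            if candidate.is_visible():
                return candidate
    return None
```

The reordering in the diff matters for the same reason: specific, provider-aware selectors now outrank the generic `textarea` fallback, so a hidden generic match can no longer shadow the real input box.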
diff --git a/tooling/search_pipeline.py b/tooling/search_pipeline.py
index 8a4a726..2e26e18 100644
--- a/tooling/search_pipeline.py
+++ b/tooling/search_pipeline.py
@@ -275,24 +275,24 @@ def _build_digest_result(
break
normalized_status = str(status_override or "").strip().upper()
- template_label = "资讯摘要" if task_template == "news_digest" else "主题简报"
+ template_label = "news digest" if task_template == "news_digest" else "topic brief"
if normalized_status == "FAILED":
summary = (
- f"“{topic}”{template_label}未能完成。"
- f" {failure_reason_zh or '检索链路未通过,请查看高级证据获取详细失败上下文。'}"
+ f"The {template_label} for '{topic}' did not complete successfully."
+ " Review failure_reason_zh and the evidence bundle for the detailed provider failure context."
).strip()
status = "FAILED"
elif digest_sources:
- preview = "、".join(item["title"] for item in digest_sources[:3])
+ preview = ", ".join(item["title"] for item in digest_sources[:3])
summary = (
- f"已围绕“{topic}”汇总 {len(digest_sources)} 条公开来源,覆盖最近 {time_range} 的检索结果。"
- f" 当前优先可读来源包括:{preview}。"
+ f"Collected {len(digest_sources)} public-source result(s) about '{topic}' from the last {time_range}."
+ f" Current readable source highlights: {preview}."
)
status = "SUCCESS"
failure_reason_zh = None
else:
- summary = f"未检索到与“{topic}”相关的公开来源结果,请稍后重试或调整检索范围。"
+ summary = f"No public-source results were found for '{topic}'. Retry later or widen the search scope."
status = "EMPTY"
failure_reason_zh = failure_reason_zh or "未检索到公开来源结果"
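Reduced to the two inputs that drive it, the tri-state outcome policy in `_build_digest_result` can be sketched as:

```python
from __future__ import annotations


def digest_status(status_override: str | None, source_count: int) -> str:
    # An explicit FAILED override wins; otherwise any readable source
    # yields SUCCESS, and no sources at all yields EMPTY.
    if (status_override or "").strip().upper() == "FAILED":
        return "FAILED"
    if source_count > 0:
        return "SUCCESS"
    return "EMPTY"
```

Keeping EMPTY distinct from FAILED is the useful part: an empty result set is a retryable search outcome, while FAILED points at a broken provider path that needs the evidence bundle.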