ai engineer building infrastructure for the next generation of coding agents.
currently building redcon, a context optimization layer for AI coding agents. massive repo contexts kill your token budget; redcon fixes that.
file context:
baseline: 12,228 tokens
cold start: 7,749 tokens (-37%)
warm cache: 919 tokens (-92%)
command output (git diff, pytest, grep, ls -R...):
git diff: 8,078 → 244 tokens (-97%)
pytest: 2,555 → 669 tokens (-74%)
grep: 7,015 → 1,623 tokens (-77%)
find: 3,398 → 636 tokens (-81%)
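the reductions above follow directly from the quoted token counts; a quick check:

```python
# reproduce the quoted reductions: 1 - after/before, rounded to a whole percent
cases = {
    "git diff": (8_078, 244),
    "pytest":   (2_555, 669),
    "grep":     (7_015, 1_623),
    "find":     (3_398, 636),
}
for name, (before, after) in cases.items():
    pct = round(100 * (1 - after / before))
    print(f"{name}: {before} -> {after} tokens (-{pct}%)")
```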
11 compressors. sub-1ms parse. zero info loss at compact level (verified by must-preserve regex gates).
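a must-preserve regex gate can be sketched like this; the patterns and the `passes_gates` helper are illustrative, not redcon's actual API:

```python
import re

# illustrative guard patterns: lines a compressor must never drop.
# (example patterns only -- a real gate set would be per-compressor.)
MUST_PRESERVE = [
    re.compile(r"E\s+\w+Error.*"),   # pytest error lines
    re.compile(r"[+-]{3}\s\S+"),     # diff file headers like "--- a/foo.py"
]

def passes_gates(original: str, compressed: str) -> bool:
    """Reject a compression unless every guarded match survives verbatim."""
    for pattern in MUST_PRESERVE:
        for match in pattern.finditer(original):
            if match.group(0) not in compressed:
                return False
    return True
```

a gate like this makes "zero info loss" checkable: if any guarded span goes missing, the compressor's output is rejected and the raw text can be used instead.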
pip install redcon
agent → redcon → model
fewer tokens. less money. same results.
i design and build LLM-powered systems that actually work in production - RAG pipelines, agentic workflows, evaluation frameworks, and the infra underneath it all.
9+ years writing code. the last 3 spent deep in AI engineering.