# Epsilon — everything you need to know
Epsilon is an orchestration framework for multi-step AI workflows. You give it a task and a topology, and it handles decomposition, execution, coordination, retries, and recording. You can use the built-in agent or plug in your own implementation — a LangChain agent, a Python script, a deterministic function, anything that can read a task and write a file.
Epsilon runs entirely on your machine. No data leaves your environment. You bring your own model keys, your own infrastructure, and your own agent if you want one.
## What Epsilon is not
Epsilon is not a model, not a hosted platform, and not a replacement for your agent. It is the orchestration layer between "I have an agent that works" and "I have a repeatable multi-step workflow." If you just need one model call, you do not need Epsilon.
## Quickstart
```bash
pip install -r requirements.txt && pip install -e .
export OPENAI_API_KEY=...
epsilon runs create --topology dag \
  --task "Create a file named hello.txt containing exactly: hello world"
epsilon runs get <run_id>
```
That creates a run, decomposes the task, executes it, and records the results. You can inspect the outputs, logs, and artifacts afterward.
## What are you trying to do?
**I want to try Epsilon on a simple task.** Start with Getting Started. You will have a working run in under five minutes.

**I want to run my own agent through Epsilon.** Read the Epsilon SDK. The adapter contract is small: read a task, write files, return a dict. There are starter templates for LangChain and LlamaIndex.

**I need to pick the right topology for my workload.** See Topologies. Start with `dag` if you are not sure.

**I need configuration details, the adapter protocol, or architecture internals.** Go to the Technical Reference.
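To make the adapter contract concrete, here is a minimal sketch of what such a function could look like. The signature, parameter names, and result keys (`task`, `workspace`, `status`, `artifacts`) are illustrative assumptions; check the Epsilon SDK page for the actual protocol.

```python
from pathlib import Path

def my_adapter(task: str, workspace: str) -> dict:
    """Illustrative adapter sketch: read the task, write a file into
    the shared workspace, and return a result dict.

    NOTE: the signature and result keys here are assumptions for
    illustration, not Epsilon's documented API.
    """
    out = Path(workspace) / "result.txt"
    out.write_text(f"completed: {task}\n")
    return {"status": "ok", "artifacts": [str(out)]}
```

The shape matters more than the details: the adapter reads a task description, leaves its work products in the workspace so later steps can pick them up, and reports a structured result back to the orchestrator.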
## Key concepts
Before you go further, here are the terms that show up everywhere in Epsilon:
| Term | What it means |
|---|---|
| Run | A single execution of a task through a topology. Epsilon records everything about it — config, logs, artifacts, and status. |
| Topology | The coordination pattern for a run. Determines how work is split, executed, and verified. Epsilon ships with eight: dag, tree, pipeline, supervisor, work_queue, sharded_queue, map_reduce, population_search. |
| Implementation | The code that actually does the work inside a run. Can be the built-in Epsilon agent or your own adapter. |
| Adapter | Your custom implementation, packaged as a Python function or external process. Follows a simple contract: read the task, write to the workspace, return a result. |
| Workspace | A shared directory where agents read and write files during a run. This is how work products move between steps. |
| Artifact | A file produced by a run — code, data, reports, anything written to the workspace. |
| Wave | One cycle of build → QA → fix in topologies that support QA loops (dag, tree, pipeline). A run can have multiple waves. |
| Manifest | An explicit list of tasks for queue-based topologies (sharded_queue, map_reduce). You define the workload upfront instead of letting the LLM decompose it. |
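Since a manifest is just an explicit list of tasks, it could look something like the fragment below. The file layout and field names (`tasks`, `id`, `task`) are assumptions for illustration; the source does not specify the manifest format, so consult the Technical Reference for the real schema.

```json
{
  "tasks": [
    { "id": "shard-1", "task": "Summarize chapter 1" },
    { "id": "shard-2", "task": "Summarize chapter 2" }
  ]
}
```

The point of a manifest is that you enumerate the workload yourself, so queue-based topologies like sharded_queue and map_reduce distribute exactly these tasks instead of asking the LLM to decompose the job.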
## Docs
| Page | What's there |
|---|---|
| Getting Started | Install, first run, inspect results |
| Topologies | Choosing and using the eight coordination patterns |
| Epsilon SDK | Plugging in your own agent or function |
| LangChain | LangChain adapter guide |
| LlamaIndex | LlamaIndex adapter guide |
| Technical Reference | Architecture, configuration, adapter protocol, environment variables |