LlamaIndex

This guide walks you through plugging a LlamaIndex agent into Epsilon. By the end, you will have a working adapter that runs LlamaIndex inside Epsilon's orchestration layer.

1. Install LlamaIndex

pip install -U llama-index llama-index-llms-openai

2. Set Your API Key

export OPENAI_API_KEY=...

3. Copy the Starter File

Use the simple version first:

cp examples/epsilon_sdk/llamaindex_simple_chat.py examples/epsilon_sdk/my_llamaindex_agent.py

If you want a file-writing starter:

cp examples/epsilon_sdk/llamaindex_workspace_file_agent.py examples/epsilon_sdk/my_llamaindex_agent.py

4. Keep This Function Name

def run(input: Dict[str, Any], *, session: AdapterSession | None = None, **_kwargs: Any) -> Dict[str, Any]:

Epsilon calls this function directly.

5. Read the Task and Workspace

task = str(input.get("task", "") or "").strip()
workspace = Path(str(input.get("workspace", ".") or ".")).resolve()
workspace.mkdir(parents=True, exist_ok=True)
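The same parsing logic can be exercised with a throwaway input dict; only the `"task"` and `"workspace"` keys are assumed here, and the temp directory stands in for the workspace Epsilon would pass:

```python
from pathlib import Path
import tempfile

# Hypothetical sample of the input dict; a real run supplies these values.
sample_input = {"task": "  Write a note  ", "workspace": tempfile.mkdtemp()}

# Same defensive parsing as the adapter: coerce to str, strip, resolve.
task = str(sample_input.get("task", "") or "").strip()
workspace = Path(str(sample_input.get("workspace", ".") or ".")).resolve()
workspace.mkdir(parents=True, exist_ok=True)

print(task)                # Write a note
print(workspace.is_dir())  # True
```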

6. Call LlamaIndex

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini", temperature=0)
response = llm.complete(f"Write one short paragraph for this task: {task}")
text = str(getattr(response, "text", "") or response).strip()
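The `getattr` fallback is there because the extraction works for both a response object exposing `.text` and a plain string. A quick check with stand-in values (the real call returns a llama-index response object):

```python
# FakeResponse imitates a response object with a .text attribute.
class FakeResponse:
    text = "  A short paragraph.  "

# The same expression handles both shapes.
for response in (FakeResponse(), "plain string"):
    text = str(getattr(response, "text", "") or response).strip()
    print(text)
```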

7. Write a File

output_path = workspace / "result.md"
output_path.write_text(text + "\n", encoding="utf-8")

8. Return a Result

return {
    "status": "ok",
    "summary": "wrote result.md",
    "artifact": "result.md",
}
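Returning the dict as-is works, but a cheap existence check on the artifact catches silent write failures before the result leaves the adapter. A sketch with a throwaway workspace (the `assert` is an optional guard, not something Epsilon requires):

```python
from pathlib import Path
import tempfile

# Throwaway workspace; in the adapter, `workspace` comes from the input dict.
workspace = Path(tempfile.mkdtemp())
(workspace / "result.md").write_text("done\n", encoding="utf-8")

artifact = "result.md"
# Fail fast if the artifact was never written, rather than returning
# a result that points at a missing file.
assert (workspace / artifact).is_file()
result = {"status": "ok", "summary": f"wrote {artifact}", "artifact": artifact}
```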

9. Run It

epsilon runs create \
  --topology dag \
  --task "Write a short hello-world note" \
  --implementation python:examples/epsilon_sdk/my_llamaindex_agent.py:run

Smallest Working Template

from pathlib import Path
from typing import Any, Dict

from runtime.epsilon_sdk import AdapterSession


def run(input: Dict[str, Any], *, session: AdapterSession | None = None, **_kwargs: Any) -> Dict[str, Any]:
    task = str(input.get("task", "") or "").strip()
    workspace = Path(str(input.get("workspace", ".") or ".")).resolve()
    workspace.mkdir(parents=True, exist_ok=True)

    from llama_index.llms.openai import OpenAI
    llm = OpenAI(model="gpt-4o-mini", temperature=0)
    response = llm.complete(f"Write one short paragraph for this task: {task}")
    text = str(getattr(response, "text", "") or response).strip()

    output_path = workspace / "result.md"
    output_path.write_text(text + "\n", encoding="utf-8")

    return {
        "status": "ok",
        "summary": "wrote result.md",
        "artifact": "result.md",
    }

Use session Only If You Need It

Examples:

  • session.log("starting task")
  • session.send_message("ready for review")
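Since `session` is `None` whenever `run` is called outside Epsilon's orchestration layer, guard every use. One way to keep that guard in one place (`log_if_available` is a hypothetical helper, not part of the SDK):

```python
# Guard session access so the adapter also works standalone.
def log_if_available(session, message: str) -> None:
    if session is not None:
        session.log(message)

log_if_available(None, "starting task")  # no-op outside Epsilon
```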