LangChain¶
This guide walks you through plugging a LangChain agent into Epsilon. By the end, you will have a working adapter that runs LangChain inside Epsilon's orchestration layer.
1. Install LangChain¶
pip install -U langchain langchain-openai
2. Set Your API Key¶
export OPENAI_API_KEY=...
3. Copy the Starter File¶
Use the simple version first:
cp examples/epsilon_sdk/langchain_simple_chat.py examples/epsilon_sdk/my_langchain_agent.py
If you want a file-writing starter:
cp examples/epsilon_sdk/langchain_workspace_file_agent.py examples/epsilon_sdk/my_langchain_agent.py
4. Keep This Function Name¶
def run(input: Dict[str, Any], *, session: AdapterSession | None = None, **_kwargs: Any) -> Dict[str, Any]:
Epsilon imports your adapter module and calls this function directly, so keep the name run and this signature intact.
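To see the call shape Epsilon expects, here is a minimal sketch with a stub body (the real adapter does actual work; the input dict values here are made up for illustration):

```python
from typing import Any, Dict

def run(input: Dict[str, Any], *, session=None, **_kwargs: Any) -> Dict[str, Any]:
    # Stub body: a real adapter would read the task, do work, and write files.
    return {"status": "ok", "summary": f"received: {input.get('task', '')}"}

# Epsilon passes a plain dict; calling it yourself works the same way.
result = run({"task": "say hello", "workspace": "."})
print(result["status"])  # -> ok
```

The type annotation on session is dropped here only to keep the sketch self-contained; use the full AdapterSession | None annotation in your adapter.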
5. Read the Task and Workspace¶
task = str(input.get("task", "") or "").strip()
workspace = Path(str(input.get("workspace", ".") or ".")).resolve()
workspace.mkdir(parents=True, exist_ok=True)
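These two lines are deliberately defensive: a missing, None, or non-string task collapses to an empty string, and a missing workspace falls back to the current directory. A small sketch of the same coercion, without the directory side effect:

```python
from pathlib import Path

def read_inputs(input: dict) -> tuple:
    # None, missing keys, and non-string values all normalize safely.
    task = str(input.get("task", "") or "").strip()
    workspace = Path(str(input.get("workspace", ".") or ".")).resolve()
    return task, workspace

task, ws = read_inputs({"task": None})
print(repr(task))                  # -> ''
print(ws == Path(".").resolve())   # -> True
```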
6. Call LangChain¶
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
response = llm.invoke(f"Write one short paragraph for this task: {task}")
text = str(getattr(response, "content", "") or "").strip()
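The getattr fallback matters because invoke returns a message object whose content attribute is usually a string but may be absent or None. A stub makes the behavior concrete (FakeResponse below is a hypothetical stand-in for LangChain's message class, so this runs without an API key):

```python
class FakeResponse:
    """Hypothetical stand-in for a LangChain message object."""
    def __init__(self, content):
        self.content = content

def extract_text(response) -> str:
    # Falls back to "" when content is absent or None, then trims whitespace.
    return str(getattr(response, "content", "") or "").strip()

print(extract_text(FakeResponse("  hello  ")))  # -> hello
print(repr(extract_text(FakeResponse(None))))   # -> ''
print(repr(extract_text(object())))             # -> ''
```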
7. Write a File¶
output_path = workspace / "result.md"
output_path.write_text(text + "\n", encoding="utf-8")
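The write uses an explicit encoding and a trailing newline. A quick sketch, using a temporary directory so nothing is left behind, that also shows the artifact path reported relative to the workspace root:

```python
import tempfile
from pathlib import Path

def write_result(workspace: Path, text: str) -> str:
    # Writes result.md and returns the artifact path relative to the workspace.
    output_path = workspace / "result.md"
    output_path.write_text(text + "\n", encoding="utf-8")
    return str(output_path.relative_to(workspace))

with tempfile.TemporaryDirectory() as tmp:
    ws = Path(tmp)
    print(write_result(ws, "hello"))                       # -> result.md
    print((ws / "result.md").read_text(encoding="utf-8"))  # -> hello plus newline
```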
8. Return a Result¶
return {
    "status": "ok",
    "summary": "wrote result.md",
    "artifact": "result.md",
}
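The exact result schema Epsilon consumes is not spelled out here; as a working assumption, treat status, summary, and artifact as the minimal contract, with artifact given relative to the workspace. A hypothetical sanity check you can run before returning:

```python
def check_result(result: dict) -> None:
    # Hypothetical sanity check; the real Epsilon schema may be stricter.
    assert result.get("status") == "ok"
    assert isinstance(result.get("summary"), str) and result["summary"]
    # The artifact should be relative to the workspace, not an absolute path.
    assert not result.get("artifact", "").startswith("/")

check_result({"status": "ok", "summary": "wrote result.md", "artifact": "result.md"})
print("result shape looks good")
```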
9. Run It¶
epsilon runs create \
--topology dag \
--task "Write a short hello-world note" \
--implementation python:examples/epsilon_sdk/my_langchain_agent.py:run
Smallest Working Template¶
from pathlib import Path
from typing import Any, Dict
from runtime.epsilon_sdk import AdapterSession
def run(input: Dict[str, Any], *, session: AdapterSession | None = None, **_kwargs: Any) -> Dict[str, Any]:
    task = str(input.get("task", "") or "").strip()
    workspace = Path(str(input.get("workspace", ".") or ".")).resolve()
    workspace.mkdir(parents=True, exist_ok=True)

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    response = llm.invoke(f"Write one short paragraph for this task: {task}")
    text = str(getattr(response, "content", "") or "").strip()

    output_path = workspace / "result.md"
    output_path.write_text(text + "\n", encoding="utf-8")

    return {
        "status": "ok",
        "summary": "wrote result.md",
        "artifact": "result.md",
    }
Use session Only If You Need It¶
Examples:
session.log("starting task")
session.send_message("ready for review")
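Because session is typed AdapterSession | None, guard every call so the adapter still works when Epsilon passes no session. A sketch of the pattern, with FakeSession as a hypothetical stand-in for AdapterSession so it runs locally:

```python
class FakeSession:
    """Hypothetical stand-in for AdapterSession, for local testing only."""
    def __init__(self):
        self.events = []
    def log(self, message: str) -> None:
        self.events.append(("log", message))
    def send_message(self, message: str) -> None:
        self.events.append(("message", message))

def notify(session, text: str) -> None:
    # The None check keeps the adapter working when no session is provided.
    if session is not None:
        session.log(text)

fake = FakeSession()
notify(fake, "starting task")
notify(None, "ignored safely")
print(fake.events)  # -> [('log', 'starting task')]
```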