# Framework integrations (LangChain, LangGraph, CrewAI, AutoGen)
AgentNexusAPI is HTTP-first (`POST /api/v1/evaluate`, `POST /api/v1/receipt`). For MCP-native hosts (Cursor, Claude Desktop, custom agents), use the `POST /api/mcp` JSON-RPC endpoint — see `MCP_GATEWAY.md`. Example configs and a stdio bridge are in `integrations/mcp-stdio-bridge/`. Official thin clients live in-repo so you can wrap any framework in a few lines.
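As a rough sketch of what a JSON-RPC request to the gateway could look like — `tools/list` is a standard MCP method, but the auth header and the exact methods your gateway exposes are assumptions here, not taken from `MCP_GATEWAY.md`:

```python
import json

# Build a minimal MCP JSON-RPC request body. "tools/list" is a standard
# MCP method; your gateway may expose a different set of methods/tools.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
body = json.dumps(payload)

# Hypothetical send (requires the requests package and a deployed origin):
# requests.post(f"{base_url}/api/mcp", data=body,
#               headers={"Content-Type": "application/json",
#                        "Authorization": f"Bearer {api_key}"})
```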
| Client | Location | Install (until published) |
|---|---|---|
| Python | `integrations/python/agentnexus` | `pip install ./integrations/python/agentnexus` from repo root |
| TypeScript | `integrations/typescript/agentnexus-sdk` | `npm run build` in that folder, then `npm link` or path dep |
Set `AGENTNEXUS_API_KEY` and `AGENTNEXUS_BASE_URL` (your deployed origin, no trailing slash) in the environment.
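For example (the key and origin below are placeholders — substitute your own):

```shell
# Example values only; use your real key and deployed origin.
export AGENTNEXUS_API_KEY="sk-example-key"
export AGENTNEXUS_BASE_URL="https://agentnexus.example.com"  # no trailing slash
```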
## “Three lines” core pattern
After install, governance is: construct client → call evaluate → branch on status (then receipt when execution completes).
**Python**

```python
from agentnexus import AgentNexus

gx = AgentNexus()  # uses AGENTNEXUS_API_KEY + AGENTNEXUS_BASE_URL
out = gx.evaluate("my_policy", {"payload_scope": {"intent": "send_email"}})
```
**TypeScript**

```typescript
import { AgentNexus } from "@agentnexus/sdk";

const gx = new AgentNexus();
const out = await gx.evaluate("my_policy", {
  payload_scope: { intent: "send_email" },
});
```
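The "branch on status" step can be sketched as a small helper. `require_approval` is a hypothetical name, not part of the SDK, and the approved status strings are taken from the CrewAI snippet later in this page:

```python
# Hypothetical helper (not in the SDK): gate side effects on evaluation
# status. Status strings mirror those checked in the CrewAI example below.
APPROVED_STATUSES = ("auto_approved", "approved")

def require_approval(evaluation: dict) -> dict:
    """Return the evaluation if approved, raise otherwise."""
    status = evaluation.get("status")
    if status not in APPROVED_STATUSES:
        raise PermissionError(f"blocked by policy (status={status!r})")
    return evaluation

# Usage with a stubbed evaluation result:
ok = require_approval({"status": "auto_approved"})
```

After the guarded side effect runs, call the receipt endpoint to record completion.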
The framework-specific snippets below add one layer of indirection (a Runnable, graph node, crew step, or agent hook) around the same calls.
## LangChain (Python)
Use a `RunnableLambda` (or wrap `evaluate` inside any custom tool). Requires `langchain-core`.
```python
from langchain_core.runnables import RunnableLambda
from agentnexus import AgentNexus

gx = AgentNexus()

def check_governance(input: dict) -> dict:
    return gx.evaluate("my_policy", {"payload_scope": input})

chain = RunnableLambda(check_governance) | your_downstream_chain
```
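If the chain should stop when the policy blocks, the lambda can raise instead of passing the result through. A self-contained sketch — `gated_governance` is a hypothetical name, `evaluate_stub` stands in for `gx.evaluate`, and the status values are assumed to match the CrewAI example:

```python
# Hypothetical gating variant: raise inside the runnable so downstream
# steps never run for blocked inputs. evaluate() is stubbed so the sketch
# runs standalone; in real use this is gx.evaluate(...).
def evaluate_stub(policy: str, body: dict) -> dict:
    return {"status": "auto_approved", "policy": policy}

def gated_governance(input: dict) -> dict:
    r = evaluate_stub("my_policy", {"payload_scope": input})
    if r.get("status") not in ("auto_approved", "approved"):
        raise RuntimeError(f"blocked by policy: {r.get('status')}")
    return {**input, "governance": r}

out = gated_governance({"intent": "send_email"})
```

Wrapped in `RunnableLambda(gated_governance)`, a raised error short-circuits the rest of the chain.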
## LangGraph (Python)
Call the client inside a graph node before side-effect nodes.
```python
from agentnexus import AgentNexus

gx = AgentNexus()

def governance_node(state: dict) -> dict:
    r = gx.evaluate("my_policy", {"payload_scope": state.get("payload", {})})
    return {**state, "governance": r}
```
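The stored `governance` key can then drive a conditional edge (LangGraph's `add_conditional_edges` takes a routing function of this shape). The node names and approved statuses below are assumptions for illustration:

```python
# Hypothetical routing function for add_conditional_edges: send approved
# states to the side-effect node, everything else to a rejection node.
def route_on_governance(state: dict) -> str:
    status = state.get("governance", {}).get("status")
    return "execute" if status in ("auto_approved", "approved") else "reject"

# e.g. graph.add_conditional_edges("governance", route_on_governance,
#                                  {"execute": "execute", "reject": "reject"})
```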
## CrewAI (Python)
Invoke `evaluate` at the start of a task or in a `before_kickoff` callback (exact API depends on your CrewAI version).
```python
from agentnexus import AgentNexus

gx = AgentNexus()

def before_agent_act(context: dict) -> None:
    r = gx.evaluate("my_policy", {"payload_scope": context})
    if r.get("status") not in ("auto_approved", "approved"):
        raise RuntimeError("blocked by policy")
```
## AutoGen (Python)
Call `evaluate` from a user-proxy hook or before registering a tool that performs external actions.
```python
from agentnexus import AgentNexus

gx = AgentNexus()

def pre_tool_call(tool_input: dict) -> None:
    r = gx.evaluate("my_policy", {"payload_scope": tool_input})
    if r.get("status") not in ("auto_approved", "approved"):
        raise RuntimeError("blocked by policy")
```
## LangChain.js / LangGraph.js (TypeScript)
Use `@agentnexus/sdk` with `RunnableLambda` from `@langchain/core/runnables` (same idea as Python).
```typescript
import { RunnableLambda } from "@langchain/core/runnables";
import { AgentNexus } from "@agentnexus/sdk";

const gx = new AgentNexus();
const governed = RunnableLambda.from(async (input: Record<string, unknown>) =>
  gx.evaluate("my_policy", { payload_scope: input }),
);
```
## Publishing
- PyPI: the package name `agentnexus` may need a conflict check; bump the version in `pyproject.toml` and publish from `integrations/python/agentnexus`.
- npm: `@agentnexus/sdk` — run `npm run build`, then `npm publish` from `integrations/typescript/agentnexus-sdk`.

Until then, install from path / git as in the table above.
## Roadmap
- Optional peer packages (`langchain-agentnexus`, etc.) if the community wants zero-boilerplate imports.
- CI: smoke tests for clients against a mocked HTTP server.
OpenAPI reference: `public/openapi.yaml`.