
🧱 Enterprise Python Agent Template (Production-Ready)

  • Writer: Mark Kendall
  • Feb 16
  • 3 min read



This is not a toy LangChain script.


This is a microservice-grade structure.





1️⃣ Repository Structure


enterprise-agent/
├── app/
│   ├── main.py                 # FastAPI entrypoint
│   ├── config.py               # Settings (Pydantic)
│   │
│   ├── api/
│   │   ├── routes.py           # HTTP endpoints
│   │   └── schemas.py          # Request/Response models
│   │
│   ├── agent/
│   │   ├── orchestrator.py     # LangGraph state machine
│   │   ├── state.py            # Typed agent state
│   │   ├── tools.py            # Tool adapters
│   │   └── prompts.py          # Prompt templates
│   │
│   ├── memory/
│   │   ├── redis_memory.py
│   │   └── vector_store.py     # LlamaIndex integration
│   │
│   ├── models/
│   │   └── llm_gateway.py      # Model abstraction layer
│   │
│   ├── observability/
│   │   ├── logging.py
│   │   └── tracing.py
│   │
│   └── core/
│       ├── exceptions.py
│       └── constants.py
├── tests/
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
├── Makefile
└── README.md

That structure alone eliminates 70% of chaos.





2️⃣ Architectural Layers




🔹 API Layer (FastAPI)



  • Accept request

  • Validate schema

  • Pass to orchestrator

  • Return structured result



No business logic here.





🔹 Orchestrator Layer (LangGraph)



State machine like:

from typing import TypedDict

class AgentState(TypedDict):
    input: str
    context: dict
    plan: list[str]
    result: str
    error: str | None

Graph:

Receive → Plan → Execute Tools → Evaluate → Respond

Explicit transitions.

Deterministic branches.

Retry nodes if needed.


This is your “canonical core.”





🔹 Tool Adapter Layer



Never call tools directly in prompts.


Instead:

class SalesforceTool:
    def execute(self, payload: dict) -> dict:
        ...

Clean, injectable, mockable.
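One way to enforce that shape is a Protocol the orchestrator depends on, with a test double alongside it (MockSalesforceTool and run_tool are illustrative names):

```python
from typing import Protocol

class Tool(Protocol):
    """Contract every adapter in app/agent/tools.py implements."""
    def execute(self, payload: dict) -> dict: ...

class MockSalesforceTool:
    """Test double: same interface, no network call."""
    def __init__(self) -> None:
        self.calls: list[dict] = []

    def execute(self, payload: dict) -> dict:
        self.calls.append(payload)
        return {"status": "ok", "echo": payload}

def run_tool(tool: Tool, payload: dict) -> dict:
    # The orchestrator depends only on the Protocol, never a concrete tool.
    return tool.execute(payload)
```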





🔹 LLM Gateway (Critical)



Never call OpenAI or Claude directly inside the orchestrator.


Instead:

class LLMGateway:
    def __init__(self, provider: str):
        ...

    def generate(self, messages: list[dict]) -> str:
        ...

That keeps you:


  • Model-agnostic

  • Swap-ready

  • Future-proof
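A minimal sketch of that gateway, assuming a provider registry. Real adapters would wrap the OpenAI and Anthropic SDKs behind this same signature; the "fake" provider here exists only to show the shape:

```python
from typing import Callable

# A provider is anything that maps chat messages to a completion string.
Provider = Callable[[list[dict]], str]

def fake_provider(messages: list[dict]) -> str:
    """Stand-in provider for tests; real ones call a model API."""
    return f"echo: {messages[-1]['content']}"

class LLMGateway:
    _providers: dict[str, Provider] = {"fake": fake_provider}

    def __init__(self, provider: str) -> None:
        self._generate = self._providers[provider]

    def generate(self, messages: list[dict]) -> str:
        return self._generate(messages)
```

Swapping models is now a one-line registry change; the orchestrator never learns which vendor answered.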






🔹 Memory Layer



Redis for:


  • Short-term state

  • Conversation persistence

  • Workflow context



LlamaIndex for:


  • RAG

  • Document querying

  • Enterprise knowledge ingestion






🔹 Observability Layer



Every request gets:


  • Correlation ID

  • Structured logging

  • Latency metrics

  • Tool invocation tracing



Without observability, agents become black boxes.


You don’t allow black boxes.
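A sketch of the correlation-ID and structured-logging pieces using only the standard library; app/observability/logging.py would own something like this:

```python
import json
import logging
import time
import uuid
from contextvars import ContextVar

# One correlation ID per request, visible to every log line in that request.
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

def new_correlation_id() -> str:
    cid = str(uuid.uuid4())
    correlation_id.set(cid)
    return cid

class JsonFormatter(logging.Formatter):
    """Structured log lines: machine-parseable, correlation ID attached."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "correlation_id": correlation_id.get(),
        })

def traced(fn):
    """Decorator for tool calls: logs name and latency of each invocation."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        logging.getLogger("agent").info(
            "%s took %.1f ms", fn.__name__,
            (time.perf_counter() - start) * 1000)
        return result
    return wrapper
```

In production you would likely swap this for OpenTelemetry, but the pattern is the same: ID in a context var, JSON out, every tool call timed.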





3️⃣ Deployment Hardening




Dockerfile



  • Multi-stage build

  • Non-root user

  • Healthcheck endpoint
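A sketch of that Dockerfile. The exact install step depends on how your pyproject.toml is set up; the port and uvicorn command are assumptions:

```dockerfile
# Build stage: install the package and its dependencies once.
FROM python:3.12-slim AS builder
WORKDIR /build
COPY . .
RUN pip install --no-cache-dir --prefix=/install .

# Runtime stage: slim image, non-root user, healthcheck wired to /health.
FROM python:3.12-slim
COPY --from=builder /install /usr/local
RUN useradd --create-home appuser
USER appuser
WORKDIR /app
HEALTHCHECK --interval=30s CMD python -c \
    "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" \
    || exit 1
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```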




Health Endpoint


GET /health

Checks:


  • Redis connection

  • LLM provider reachable

  • Vector store reachable
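Those checks aggregate naturally into one function the route delegates to. The probe names are illustrative; real probes would ping Redis, the LLM provider, and the vector store:

```python
from typing import Callable

def check_health(probes: dict[str, Callable[[], bool]]) -> dict:
    """Run every dependency probe; a raised exception counts as failure.
    The /health route returns 200 only when status is "ok"."""
    results = {}
    for name, probe in probes.items():
        try:
            results[name] = bool(probe())
        except Exception:
            results[name] = False
    return {"status": "ok" if all(results.values()) else "degraded",
            "checks": results}
```

Injecting probes as callables means the health logic itself needs no test infrastructure.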






4️⃣ Optional Event-Driven Version



If event-driven:


Add:

app/messaging/
    kafka_consumer.py
    kafka_producer.py

Then:


API becomes optional.

Service becomes event listener.
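The trick is keeping the core broker-agnostic. The consumer loop in kafka_consumer.py decodes each message and calls a handler like this; the event field names are illustrative:

```python
import json

def handle_event(raw: bytes, orchestrator) -> dict:
    """Broker-agnostic core: works the same whether the bytes came from
    Kafka, a queue, or a test. The orchestrator is injected."""
    event = json.loads(raw)
    result = orchestrator(event["input"])
    return {"correlation_id": event.get("correlation_id", "-"),
            "result": result}
```

Because the handler never imports a Kafka client, the same orchestrator serves both the HTTP and event-driven variants.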





5️⃣ Why This Beats “Generate Agent”



Because once this template exists:


Next agent =


  • Copy repo

  • Define new state

  • Define new tools

  • Adjust graph



Everything else stays hardened.


That’s how you go from 70% → 90%.





6️⃣ What NOT to Do



❌ Don’t let LangChain logic leak into API layer

❌ Don’t let prompts call external APIs directly

❌ Don’t tie memory to provider

❌ Don’t skip tracing






 
 
 

©2020 by LearnTeachMaster DevOps.