
How to Build a TeamBrain Sales Engineer Agent in Python (Step-by-Step)

  • Writer: Mark Kendall
  • Dec 29, 2025
  • 8 min read



If you’ve ever watched a Sales Engineer walk into a new customer conversation with nothing but scattered notes, a half-baked diagram, and a vague “we can integrate with anything” vibe… you know what happens next:

• The solution becomes improvised

• Architecture decisions become opinions

• Policies become afterthoughts

• And the first real “requirements” don’t show up until the build breaks


This starter kit flips the order.


TeamBrain says: Intent first.

Then evidence.

Then reasoning.

Then policies and architecture guidelines.

Then the questions you should have asked on day one.


This article shows you how to write that agent in Python—cleanly, safely, and in a way you can run locally or inside CI.



The Folder Structure (The Blueprint)


You’ll build the agent using this structure:


teambrain-se-starter-agent/

├── README.md

├── DEVELOPER_NOTES.md

├── PLAYBOOK.md

├── pyproject.toml

├── .env.example

├── intent/

│   ├── customer.yaml

│   ├── domain.md

│   └── constraints.yaml

├── policies/

│   ├── base_policies.md

│   └── architecture_guidelines.md

├── src/

│   └── teambrain_agent/

│       ├── __init__.py

│       ├── cli.py

│       ├── agent.py

│       ├── intent_reader.py

│       ├── openapi_tools.py

│       ├── llm_client.py

│       ├── prompts.py

│       ├── models.py

│       ├── report.py

│       └── logging_setup.py

└── tests/

    └── test_intent_reader.py


What this structure means

• /intent is where the customer’s declared intent lives (multiple files, not one blob); a sample file follows below.

• /policies is your seed library of baseline governance and architecture practices.

• agent.py is your orchestrator (the brain that connects it all).

• Everything else is a tool or a contract.
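
To make /intent concrete, here is a sketch of what intent/customer.yaml might hold. The field names are illustrative assumptions, not a required schema; the reader in Step 2 simply records whatever top-level keys it finds as metadata.


📄 intent/customer.yaml (sample)


# Illustrative sketch; these keys are assumptions, not a fixed schema.
customer: Acme Logistics
industry: supply-chain
goals:
  - Expose shipment tracking to partner portals
  - Replace nightly CSV drops with a real-time API
constraints:
  - No PII leaves the EU region
  - SSO via the existing Okta tenant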



Step 1 — Define the Contracts (models.py)


Before you call any AI, you define the output shape.


That’s the whole TeamBrain trick: policy over vibes.


📄 src/teambrain_agent/models.py


from __future__ import annotations

from pydantic import BaseModel, Field

from typing import Any, Dict, List, Optional


class IntentDoc(BaseModel):

    name: str

    path: str

    content: str

    meta: Dict[str, Any] = Field(default_factory=dict)


class OpenAPISummary(BaseModel):

    source: str

    title: Optional[str] = None

    version: Optional[str] = None

    servers: List[str] = Field(default_factory=list)

    endpoints: List[str] = Field(default_factory=list)

    auth_schemes: List[str] = Field(default_factory=list)


class PolicyRecommendation(BaseModel):

    category: str

    statement: str

    rationale: str

    confidence: float = 0.7


class StarterKitReport(BaseModel):

    intent_overview: str

    inferred_constraints: List[str]

    openapi_summary: Optional[OpenAPISummary] = None

    recommended_policies: List[PolicyRecommendation]

    architecture_guidelines: List[str]

    next_questions: List[str]


What’s happening here

• You are locking your agent into a stable JSON output.

• Field(default_factory=list) guarantees you never get None lists.

• This becomes your agent’s contract boundary, similar to a Spring Boot DTO layer.
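
To see the contract earn its keep, feed it a deliberately incomplete payload. This is a hypothetical sanity check, not part of the kit; Pydantic fails loudly instead of letting a half-formed report leak downstream:


# Hypothetical payload: the required list fields are missing on purpose.
from pydantic import ValidationError

from teambrain_agent.models import StarterKitReport

try:
    StarterKitReport(**{"intent_overview": "Acme wants a partner API"})
except ValidationError as e:
    print(f"{e.error_count()} fields missing")  # inferred_constraints, etc.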



Step 2 — Read Intent Files (Multi-file) (intent_reader.py)


TeamBrain isn’t “one prompt.” It’s a filesystem of intent.


📄 src/teambrain_agent/intent_reader.py


from __future__ import annotations

from pathlib import Path

import yaml

from .models import IntentDoc


SUPPORTED_EXTS = {".md", ".txt", ".yaml", ".yml", ".json"}


def read_intent_dir(intent_dir: Path) -> list[IntentDoc]:

    docs: list[IntentDoc] = []

    for p in sorted(intent_dir.rglob("*")):

        if p.is_file() and p.suffix.lower() in SUPPORTED_EXTS:

            docs.append(_read_one(p))

    return docs


def _read_one(path: Path) -> IntentDoc:

    raw = path.read_text(encoding="utf-8")

    meta = {}

    if path.suffix.lower() in {".yaml", ".yml"}:

        try:

            parsed = yaml.safe_load(raw) or {}

            if isinstance(parsed, dict):

                meta = {"top_level_keys": list(parsed.keys())}

        except Exception:

            meta = {"yaml_parse_error": True}


    return IntentDoc(name=path.stem, path=str(path), content=raw, meta=meta)


What’s happening here

• You load multiple intent documents.

• You keep the content verbatim (no rewriting).

• You extract lightweight metadata (top-level YAML keys) to help reasoning without guessing.
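
The tests/ folder in the blueprint targets exactly this module. A minimal sketch of test_intent_reader.py, using pytest’s tmp_path fixture (the file contents are made-up fixtures):


📄 tests/test_intent_reader.py


from pathlib import Path

from teambrain_agent.intent_reader import read_intent_dir


def test_reads_yaml_and_md(tmp_path: Path):
    # Two supported file types; contents are arbitrary test data.
    (tmp_path / "customer.yaml").write_text("name: Acme\nregion: EU\n")
    (tmp_path / "domain.md").write_text("# Domain notes\n")

    docs = read_intent_dir(tmp_path)

    assert len(docs) == 2
    yaml_doc = next(d for d in docs if d.name == "customer")
    assert yaml_doc.meta["top_level_keys"] == ["name", "region"]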



Step 3 — Pull the Customer’s OpenAPI and Summarize It (openapi_tools.py)


This is the “don’t guess the API” move.


📄 src/teambrain_agent/openapi_tools.py


from __future__ import annotations

import httpx

from typing import Any, Dict

from .models import OpenAPISummary


def fetch_openapi(url: str, timeout_s: float = 20.0) -> Dict[str, Any]:

    with httpx.Client(timeout=timeout_s) as client:

        r = client.get(url)

        r.raise_for_status()

        return r.json()


def summarize_openapi(spec: Dict[str, Any], source: str) -> OpenAPISummary:

    title = (spec.get("info") or {}).get("title")

    version = (spec.get("info") or {}).get("version")


    servers = [

        s.get("url") for s in (spec.get("servers") or [])

        if isinstance(s, dict) and s.get("url")

    ]


    paths = spec.get("paths") or {}

    endpoints = []

    if isinstance(paths, dict):

        for p, methods in paths.items():

            if isinstance(methods, dict):

                for m in methods.keys():

                    # Skip path-level keys like "parameters" that aren't HTTP methods.
                    if m.lower() in {"get", "post", "put", "patch", "delete", "head", "options", "trace"}:

                        endpoints.append(f"{m.upper()} {p}")


    endpoints = endpoints[:200]


    auth_schemes = []

    comps = spec.get("components") or {}

    sec = (comps.get("securitySchemes") or {}) if isinstance(comps, dict) else {}

    if isinstance(sec, dict):

        auth_schemes = list(sec.keys())


    return OpenAPISummary(

        source=source,

        title=title,

        version=version,

        servers=servers,

        endpoints=endpoints,

        auth_schemes=auth_schemes

    )


What’s happening here

• fetch_openapi() does an HTTP GET and returns JSON.

• summarize_openapi() converts a massive spec into SE-friendly evidence:

• endpoints

• auth schemes

• servers

• title/version
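
Wired together, the two functions turn any reachable spec into compact evidence. A usage sketch; the URL is a placeholder, not a real customer endpoint:


# Placeholder URL; swap in the customer's real spec location.
from teambrain_agent.openapi_tools import fetch_openapi, summarize_openapi

url = "https://api.example.com/openapi.json"
summary = summarize_openapi(fetch_openapi(url), source=url)

print(summary.title, summary.version)
print(summary.auth_schemes)    # e.g. ["oauth2", "apiKey"]
print(summary.endpoints[:5])   # e.g. "GET /shipments", "POST /webhooks", ...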



Step 4 — Build the Prompts (Policy-First) (prompts.py)


Your agent’s “brain” is a template, not a freestyle chat.


📄 src/teambrain_agent/prompts.py


from __future__ import annotations


SYSTEM_PROMPT = """You are TeamBrain SE Starter Agent.

Your job: read intent files + optional OpenAPI spec summary and produce:

1) inferred intent & constraints

2) a starter set of policies & architecture guidelines

3) questions a Sales Engineer should ask next


Rules:

- Be concrete, actionable, and enterprise-safe.

- Do NOT invent customer facts.

- If uncertain, say so and ask a question.

- Output MUST be valid JSON matching the requested schema.

"""


def build_user_prompt(intent_bundle: str, openapi_bundle: str | None) -> str:

    parts = []

    parts.append("INTENT FILES (verbatim):\n" + intent_bundle)

    if openapi_bundle:

        parts.append("\nOPENAPI SUMMARY:\n" + openapi_bundle)


    parts.append("""

Return JSON with keys:

intent_overview (string),

inferred_constraints (array of strings),

recommended_policies (array of objects {category, statement, rationale, confidence}),

architecture_guidelines (array of strings),

next_questions (array of strings).

""".strip())


    return "\n\n".join(parts)


What’s happening here

You enforce:

• “don’t invent customer facts”

• JSON-only output

• actionable policies + next questions


This is where you stop hallucinations before they start.
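
A quick sanity check of the assembly, using placeholder inputs:


# Placeholder bundle; shows the shape of the final prompt, not real data.
from teambrain_agent.prompts import build_user_prompt

prompt = build_user_prompt(
    intent_bundle="---\nFILE: intent/customer.yaml\n---\nname: Acme\n",
    openapi_bundle=None,
)
assert prompt.startswith("INTENT FILES (verbatim):")
assert "Return JSON with keys:" in prompt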



Step 5 — Call the LLM Safely (llm_client.py)


This is your outbound integration (like a Spring Boot client).


📄 src/teambrain_agent/llm_client.py


from __future__ import annotations

import os

import json

import httpx


OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"


class OpenAIChatClient:

    def __init__(self, api_key: str, model: str):

        self.api_key = api_key

        self.model = model


    def chat_json(self, system: str, user: str, timeout_s: float = 60.0) -> dict:

        headers = {"Authorization": f"Bearer {self.api_key}"}

        payload = {

            "model": self.model,

            "temperature": 0.2,

            "response_format": {"type": "json_object"},

            "messages": [

                {"role": "system", "content": system},

                {"role": "user", "content": user}

            ]

        }

        with httpx.Client(timeout=timeout_s) as client:

            r = client.post(OPENAI_CHAT_URL, headers=headers, json=payload)

            r.raise_for_status()

            data = r.json()


        content = data["choices"][0]["message"]["content"]

        return json.loads(content)


def from_env() -> OpenAIChatClient:

    api_key = os.environ.get("OPENAI_API_KEY", "").strip()

    model = os.environ.get("OPENAI_MODEL", "gpt-4.1-mini").strip()

    if not api_key:

        raise RuntimeError("Missing OPENAI_API_KEY")

    return OpenAIChatClient(api_key=api_key, model=model)


What’s happening here

• You force JSON output with response_format.

• You keep temperature low so output is stable.

• You parse the response and return a Python dict.
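
LLM endpoints fail transiently (timeouts, rate limits), so in practice you’d wrap chat_json() in a small retry. A minimal sketch; the attempt count and backoff are arbitrary assumptions:


# Sketch of a retry wrapper; delays and attempt count are arbitrary choices.
import time

import httpx


def chat_json_with_retry(client, system: str, user: str, attempts: int = 3) -> dict:
    for i in range(attempts):
        try:
            return client.chat_json(system=system, user=user)
        except (httpx.HTTPError, ValueError):
            # ValueError also covers json.JSONDecodeError from a malformed body.
            if i == attempts - 1:
                raise
            time.sleep(2 ** i)  # 1s, 2s, ... simple exponential backoff
    raise RuntimeError("unreachable")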



Step 6 — Orchestrate the Whole Workflow (agent.py)


This is the core: Intent → Evidence → Reasoning → Validated Output.


📄 src/teambrain_agent/agent.py


from __future__ import annotations

from pathlib import Path

from typing import Optional

import json


from .intent_reader import read_intent_dir

from .openapi_tools import fetch_openapi, summarize_openapi

from .prompts import SYSTEM_PROMPT, build_user_prompt

from .llm_client import OpenAIChatClient

from .models import StarterKitReport


def bundle_intents(intent_docs) -> str:

    chunks = []

    for d in intent_docs:

        chunks.append(

            f"---\nFILE: {d.path}\nMETA: {json.dumps(d.meta)}\n---\n{d.content}\n"

        )

    return "\n".join(chunks)


def run_agent(

    intent_dir: Path,

    llm: OpenAIChatClient,

    openapi_url: Optional[str] = None

) -> StarterKitReport:

    intent_docs = read_intent_dir(intent_dir)

    intent_bundle = bundle_intents(intent_docs)


    openapi_bundle = None

    if openapi_url:

        spec = fetch_openapi(openapi_url)

        summary = summarize_openapi(spec, source=openapi_url)

        openapi_bundle = summary.model_dump_json(indent=2)


    user_prompt = build_user_prompt(intent_bundle=intent_bundle, openapi_bundle=openapi_bundle)

    raw = llm.chat_json(system=SYSTEM_PROMPT, user=user_prompt)


    return StarterKitReport(**raw)


What’s happening here (step-by-step)

1. Read multiple intent files

2. Bundle them into evidence

3. Optionally pull OpenAPI and summarize endpoints

4. Build a strict prompt

5. Call the LLM for JSON output

6. Validate output with Pydantic


This is how you build agents that don’t embarrass you.
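
If the model returns JSON that doesn’t match the contract, StarterKitReport(**raw) raises. A sketch of how a caller might surface that cleanly (the paths are the blueprint’s defaults):


# Sketch: fail with a readable message instead of a raw traceback.
from pathlib import Path

from pydantic import ValidationError

from teambrain_agent.agent import run_agent
from teambrain_agent.llm_client import from_env

try:
    report = run_agent(intent_dir=Path("intent"), llm=from_env())
except ValidationError as e:
    print("LLM output violated the contract:")
    for err in e.errors():
        print(f"  {err['loc']}: {err['msg']}")
    raise SystemExit(1)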



Step 7 — Output Reports (report.py)


You write results in both machine and human formats.


📄 src/teambrain_agent/report.py


from __future__ import annotations

import json

from pathlib import Path


def write_report_json(report: dict, out_path: Path) -> None:

    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")


def write_report_md(report: dict, out_path: Path) -> None:

    lines = []

    lines.append("# TeamBrain SE Starter Kit\n")


    lines.append("## Intent Overview\n")

    lines.append(report.get("intent_overview", "").strip() + "\n")


    lines.append("## Inferred Constraints\n")

    for c in report.get("inferred_constraints", []):

        lines.append(f"- {c}")

    lines.append("")


    lines.append("## Recommended Policies\n")

    for p in report.get("recommended_policies", []):

        lines.append(f"- {p.get('category','General')}: {p.get('statement','')}")

        lines.append(f"  - Rationale: {p.get('rationale','')}")

        lines.append(f"  - Confidence: {p.get('confidence',0.7)}")

    lines.append("")


    lines.append("## Architecture Guidelines\n")

    for g in report.get("architecture_guidelines", []):

        lines.append(f"- {g}")

    lines.append("")


    lines.append("## Next Questions (SE)\n")

    for q in report.get("next_questions", []):

        lines.append(f"- {q}")

    lines.append("")


    out_path.write_text("\n".join(lines), encoding="utf-8")
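
What’s happening here

• write_report_json() preserves the exact structure for pipelines, diffing, and downstream tooling.

• write_report_md() renders the same data as headed sections an SE can paste straight into a doc.

A tiny smoke run (hypothetical data; missing keys fall back to empty sections):


# Hypothetical report dict, just to show the rendered markdown shape.
from pathlib import Path

from teambrain_agent.report import write_report_md

write_report_md(
    {"intent_overview": "Acme partner API", "next_questions": ["Which SSO provider?"]},
    Path("demo.md"),
)
print(Path("demo.md").read_text())  # "# TeamBrain SE Starter Kit", "## Intent Overview", ...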




Step 8 — Add a CLI Entry Point (cli.py)


This makes the agent usable in a terminal or CI job.


📄 src/teambrain_agent/cli.py


from __future__ import annotations

from pathlib import Path

import typer

from rich import print


from .agent import run_agent

from .llm_client import from_env

from .report import write_report_json, write_report_md


app = typer.Typer(help="TeamBrain SE Starter Agent: intent -> starter kit")


@app.command()

def generate(

    intent_dir: Path = typer.Option(Path("intent"), exists=True, file_okay=False, dir_okay=True),

    openapi_url: str = typer.Option("", help="Optional URL to OpenAPI JSON spec"),

    out_dir: Path = typer.Option(Path("out"), help="Output directory")

):

    out_dir.mkdir(parents=True, exist_ok=True)

    llm = from_env()


    report = run_agent(intent_dir=intent_dir, llm=llm, openapi_url=openapi_url or None)

    report_dict = report.model_dump()


    write_report_json(report_dict, out_dir / "starter-kit.json")

    write_report_md(report_dict, out_dir / "starter-kit.md")


    print("[bold green]Done.[/bold green]")

    print(f"- out/starter-kit.json")

    print(f"- out/starter-kit.md")




How to Run It (Local)


python -m venv .venv && source .venv/bin/activate

pip install -e .

cp .env.example .env

export $(cat .env | xargs)


teambrain-se generate --intent-dir intent --openapi-url "https://example.com/openapi.json" --out-dir out
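
For reference, from_env() reads exactly two variables, so .env.example stays tiny. A sketch (the key is a placeholder; the model is the default from llm_client.py):


📄 .env.example (sample)


OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4.1-mini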




What You Get

• out/starter-kit.md → SE-friendly summary

• out/starter-kit.json → pipeline-friendly data



Closing: Why This Is TeamBrain


This isn’t “AI that writes words.”


It’s AI that reasons from declared intent, backed by evidence, and outputs governance-ready starting points:

• policies

• architecture guidelines

• next questions


That’s how you keep engagements clean and scalable.


 
 
 
