
Building a Real Python Agent (Without Hype)

  • Writer: Mark Kendall
  • Feb 11
  • 4 min read





Most teams do not need another AI framework.


They need to understand:


  • What an agent actually is

  • How to structure one properly

  • How to integrate it into real engineering workflows

  • How to evolve it responsibly



This article walks through building a command-line Python agent that genuinely helps a development team.


Not a toy.

Not a chatbot wrapper.

Not a plugin demo.


A structured, extensible reasoning system.





The Goal



We will build a CLI-based Python agent that:


  • Accepts a code file

  • Analyzes it using an LLM

  • Produces:


    • Architectural feedback

    • Observability improvements

    • Refactoring suggestions

    • Production risk analysis


  • Maintains session memory

  • Uses clean separation of concerns

  • Is extensible toward production



This is entry-level agent thinking for real engineers.





What an Agent Really Is



An agent is not magic.


It is:


  • Orchestration

  • State

  • Tool abstraction

  • Execution layer

  • Observability

  • Discipline



If you already understand microservices, you already understand 60% of agents.





Architecture Overview



We separate the system into five parts:


  1. Agent (Orchestrator)

  2. Memory (State Management)

  3. LLM Client (Execution Engine)

  4. Tools (Reasoning Strategy)

  5. CLI Interface (Operational Layer)



This mirrors how production systems are designed.





Project Structure


devassist_agent/
├── main.py
├── agent.py
├── memory.py
├── llm.py
├── tools.py
├── config.py
└── requirements.txt





Requirements


openai>=1.0.0
python-dotenv





Configuration



We isolate configuration.

import os
from dotenv import load_dotenv

load_dotenv()


class Config:
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    MODEL = "gpt-4o-mini"

This allows:


  • Environment-based key injection

  • Model swapping without code rewrite

  • CI/CD compatibility
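As one sketch of that flexibility, the model name could itself be read from the environment with a fallback to the default. The `DEVASSIST_MODEL` variable name here is hypothetical, not part of the original config:

```python
import os

# Hypothetical extension: allow the model to be overridden per environment,
# falling back to the default used in this article.
DEFAULT_MODEL = "gpt-4o-mini"
MODEL = os.getenv("DEVASSIST_MODEL", DEFAULT_MODEL)

print(MODEL)
```

The same pattern extends to timeouts, temperature, and base URLs without touching agent code.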






Memory Layer (State Management)



Agents are stateful systems.

class SessionMemory:
    """
    Simple in-memory state manager.
    Replaceable with Redis or DB.
    """

    def __init__(self):
        self.history = []

    def add(self, role: str, content: str):
        self.history.append({"role": role, "content": content})

    def get(self):
        return self.history

    def clear(self):
        self.history = []

This abstraction makes future persistence trivial.
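To show how trivial, here is a hypothetical `FileMemory` that persists to disk through the same `add`/`get`/`clear` contract. It is a sketch only; a real deployment would likely use Redis or a database as the article suggests:

```python
import json
import tempfile
from pathlib import Path


class FileMemory:
    """Drop-in replacement for SessionMemory that persists history
    to a JSON file, so state survives across processes."""

    def __init__(self, path: str):
        self.path = Path(path)
        # Load prior history if the file exists, else start fresh.
        self.history = json.loads(self.path.read_text()) if self.path.exists() else []

    def add(self, role: str, content: str):
        self.history.append({"role": role, "content": content})
        self.path.write_text(json.dumps(self.history))

    def get(self):
        return self.history

    def clear(self):
        self.history = []
        self.path.write_text("[]")


# Usage: a second instance pointed at the same file sees the same state.
path = Path(tempfile.mkdtemp()) / "session.json"
mem = FileMemory(str(path))
mem.add("user", "analyze this file")
reloaded = FileMemory(str(path))
print(reloaded.get())
```

Because the agent only depends on the interface, swapping backends requires changing one constructor call.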





LLM Client (Execution Layer)



Never bury API calls inside your agent.


Abstract them.

from openai import OpenAI
from config import Config


class LLMClient:
    """
    Swappable LLM backend wrapper.
    """

    def __init__(self):
        self.client = OpenAI(api_key=Config.OPENAI_API_KEY)

    def chat(self, messages):
        response = self.client.chat.completions.create(
            model=Config.MODEL,
            messages=messages,
            temperature=0.2
        )
        return response.choices[0].message.content

This allows:


  • Swapping to Azure

  • Swapping to Anthropic

  • Swapping to local LLM

  • Mocking for tests



That is composability.
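For the testing case, a mock needs nothing more than the same `chat(messages)` contract. This sketch assumes the agent is adapted to accept an injected client rather than constructing `LLMClient` internally:

```python
class MockLLMClient:
    """Test double satisfying the same chat(messages) contract as
    LLMClient, so the agent can be exercised without network calls."""

    def __init__(self, canned_response: str):
        self.canned_response = canned_response
        self.calls = []  # record every messages payload for assertions

    def chat(self, messages):
        self.calls.append(messages)
        return self.canned_response


# Usage: inject the mock wherever an LLMClient is expected.
mock = MockLLMClient("Looks fine. Add logging.")
reply = mock.chat([{"role": "user", "content": "review this"}])
print(reply)
```

Recording the call history lets tests assert on exactly what prompt the agent built, not just what it returned.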





Tool Layer (Reasoning Strategy)



Tools encapsulate thinking patterns.

class CodeReviewTool:
    """
    Structured reasoning template.
    """

    @staticmethod
    def build_prompt(code: str) -> str:
        return f"""
You are a senior software architect.

Analyze the following code and provide:

1. Architectural feedback
2. Observability improvements
3. Refactoring suggestions
4. Potential production risks

Code:
{code}
"""

Later this could expand into:


  • SecurityTool

  • PerformanceTool

  • ComplianceTool

  • ArchitectureRuleValidator



Tools isolate cognitive strategy.
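To make that concrete, here is what a hypothetical SecurityTool might look like, following the same static-prompt pattern. The checklist items are illustrative, not a complete security review rubric:

```python
class SecurityTool:
    """Hypothetical second tool: same build_prompt contract as
    CodeReviewTool, but encoding a security-review strategy."""

    @staticmethod
    def build_prompt(code: str) -> str:
        return f"""
You are a security-focused reviewer.

Analyze the following code for:

1. Injection risks
2. Secrets handling
3. Unsafe deserialization

Code:
{code}
"""


# Usage: interchangeable with CodeReviewTool at the call site.
prompt = SecurityTool.build_prompt("eval(user_input)")
print("Injection risks" in prompt)
```

Because every tool exposes the same `build_prompt(code)` shape, the agent can later select among them dynamically without special cases.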





The Agent (Orchestrator)



The agent coordinates everything.

import logging
from memory import SessionMemory
from llm import LLMClient
from tools import CodeReviewTool

logging.basicConfig(level=logging.INFO)


class DevAssistAgent:
    """
    Multi-step reasoning agent.
    """

    def __init__(self):
        self.memory = SessionMemory()
        self.llm = LLMClient()

    def analyze_code(self, code: str):
        logging.info("Building prompt...")
        prompt = CodeReviewTool.build_prompt(code)

        self.memory.add("system", "You are a senior architect helping a microservices team.")
        self.memory.add("user", prompt)

        logging.info("Calling LLM...")
        response = self.llm.chat(self.memory.get())

        self.memory.add("assistant", response)

        return response

Notice:


  • No business logic in main

  • No direct API calls in agent

  • State handled separately

  • Tool strategy isolated



This is production thinking.





CLI Interface (Operational Layer)



Keep the interface simple.

import argparse
from agent import DevAssistAgent


def main():
    parser = argparse.ArgumentParser(description="DevAssist Agent CLI")
    parser.add_argument("--file", required=True, help="Path to code file to analyze")
    args = parser.parse_args()

    with open(args.file, "r") as f:
        code = f.read()

    agent = DevAssistAgent()
    result = agent.analyze_code(code)

    print("\n===== ANALYSIS RESULT =====\n")
    print(result)


if __name__ == "__main__":
    main()

Run it:

export OPENAI_API_KEY=your_key_here
pip install -r requirements.txt
python main.py --file sample_service.py

You now have a working reasoning system.





What This Agent Actually Does for a Team



This is immediately useful:


  • Architecture guardrails for junior developers

  • Observability standardization

  • Risk detection

  • Refactoring acceleration

  • Code review consistency



It becomes a reasoning microservice assistant.





Principles Applied from the LearnTeachMaster Framework




Foundation



  • Advanced OOP

  • Composability

  • Replaceable execution layer

  • Testing-ready design

  • Clear abstraction boundaries




Synthesis



  • Production-style modular structure

  • Real-world workflow

  • Deadline-ready organization




Practical Engineering



  • Agent Architecture pattern

  • State abstraction

  • API integration discipline

  • Logging for observability




Realization



  • Multi-step reasoning

  • Deterministic CLI execution

  • Clear extensibility path






Why Command Line First?



Because:


  • It forces clarity

  • It eliminates UI distraction

  • It integrates into CI

  • It supports Jenkins and GitHub Actions

  • It reduces early complexity



Build the brain first.

Wrap it later.





Where This Evolves



This foundation can grow into:


  • FastAPI microservice

  • Redis-backed memory

  • Kafka-triggered analysis

  • GitHub PR comment automation

  • Observability dashboards

  • Multi-agent orchestration

  • Budget and usage monitoring

  • Guardrails and policy enforcement



But you do not start there.


You start here.





Homework & Challenge



If you want to go beyond entry-level agent thinking, here is your challenge:


  1. Add structured JSON output validation

  2. Implement retry logic with exponential backoff

  3. Add async execution

  4. Introduce multiple tools and dynamic selection

  5. Add OpenTelemetry tracing

  6. Convert to FastAPI

  7. Add Redis persistent memory

  8. Integrate into CI pipeline

  9. Add token usage monitoring

  10. Implement prompt injection defense
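As a starting point for item 2, retry with exponential backoff can be engineered in a few lines. This is a minimal sketch; production code would also add jitter and retry only on transient error types:

```python
import time


def with_retries(fn, max_attempts=4, base_delay=0.5):
    """Call fn, retrying on failure with exponentially growing
    delays: base_delay, 2*base_delay, 4*base_delay, ..."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))


# Usage: a flaky function that fails twice, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))
```

Wrapping `LLMClient.chat` in such a helper keeps retry policy out of the agent, consistent with the separation of concerns above.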



Do not install another framework.


Engineer it.


If you can complete those steps cleanly,

you are not experimenting with agents.


You are building production systems.





Final Thought



Agents are not magic.


They are:


  • State

  • Tools

  • Orchestration

  • Execution layers

  • Observability

  • Discipline




 
 
 
