Category: Opinion

  • Model Context Protocol (MCP)

    Note: This article is still in progress. The content may change until it reaches its final version.

    There is a lot of buzz in the AI world about a new trend: the Model Context Protocol, or MCP for short. In this article I will explain what MCP is.

    LLMs Evolution

    In the “Startup Ideas Podcast” episode “Model Context Protocol (MCP), clearly explained (why it matters)“, Dr. Ras Mix explained the concept from the perspective of the evolution of LLMs and of the tools and services surrounding the AI ecosystem. In the following paragraphs, I will summarize his explanation.

    In the beginning, there was just the LLM: trained once, with a knowledge cut-off at a certain point in time. Based on next-token prediction, it could answer questions from its training data, but it could not perform any task beyond what it was trained on.

    In a second phase, LLMs were connected to tools. They started to query search engines, consume and interpret various APIs, and receive additional knowledge through RAG systems. Because the field was new, there was no standard, and everyone integrated these tools in their own way. While this approach makes LLMs more capable, it brings its own set of problems due to the lack of standards.

    The current phase, LLMs plus MCP, brings a standard way to connect to tools and services: an MCP server exposes a common protocol that lets LLMs discover, understand, and use the tools available.

    MCP Terminology

    Model – refers to the LLM model in use.

    Context – refers to the context provided by the tools.

    Protocol – refers to the common standard that allows the model (LLM) to understand the context provided by different sources.
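
    To make the “protocol” part concrete, here is a minimal, self-contained sketch in plain Python (no MCP SDK) of the JSON-RPC 2.0 message shapes MCP uses for tool discovery (`tools/list`) and invocation (`tools/call`). The `get_weather` tool, its schema, and its canned response are invented for illustration; a real server would run as a separate process speaking over stdio or HTTP.

    ```python
    import json

    # Toy registry of tools the "server" advertises. In MCP, each tool has a
    # name, a description, and a JSON Schema describing its input.
    TOOLS = {
        "get_weather": {
            "description": "Return the current weather for a city.",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

    def handle(request: str) -> str:
        """Dispatch one JSON-RPC 2.0 request to the matching handler."""
        req = json.loads(request)
        if req["method"] == "tools/list":
            # Discovery: tell the client which tools exist and how to call them.
            result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
        elif req["method"] == "tools/call":
            # Invocation: a real tool would call an external API here.
            city = req["params"]["arguments"]["city"]
            result = {"content": [{"type": "text", "text": f"Sunny in {city}"}]}
        else:
            return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                               "error": {"code": -32601, "message": "Method not found"}})
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

    # The model's host first discovers which tools exist...
    listing = json.loads(handle(json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
    print([t["name"] for t in listing["result"]["tools"]])   # ['get_weather']

    # ...then calls one of them with arguments matching its input schema.
    call = json.loads(handle(json.dumps(
        {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
         "params": {"name": "get_weather", "arguments": {"city": "Oslo"}}})))
    print(call["result"]["content"][0]["text"])              # Sunny in Oslo
    ```

    The point of the standard is visible in the two method names: any client that speaks this protocol can discover and call any server’s tools without bespoke integration code.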

  • Everything is a Trade-Off and the Power of “Why”

    In software architecture there is a “delicate dance of compromises”, a concept emphasized in the “Fundamentals of Software Architecture” by Mark Richards and Neal Ford. Two particular “laws” in that book have always resonated with me, influencing not just how I design systems, but also how I approach new technologies.

    Let’s dive in:

    The First Law of Architecture

    Everything in software architecture is a trade-off.

    Software architecture is about balancing competing priorities: performance vs. simplicity, security vs. ease of use, scalability vs. cost, and so on. No solution optimizes for all aspects at once. There is always a push and pull between constraints.

    Corollary to the First Law

    If an architect thinks they have discovered something that isn’t a trade-off, more likely they just haven’t yet identified the trade-off.

    I interpret this quite strictly: if you have not spotted any trade-offs in your proposed solution, you probably have not delved deep enough into how the solution will behave under various circumstances. Real-world usage often exposes hidden costs—be it higher operational overhead, licensing fees, or dependencies that complicate future updates.

    The Second Law of Architecture

    Why is more important than how.

    A frequent pitfall in software development is jumping straight into the “how” (tools, frameworks, or libraries), without first asking “why”. Pinpointing the “why” ensures that your technology choices serve real business or user needs, rather than following hype or convenience.

    First Law in the AI Context: Everything Is a Trade-off

    When choosing or designing an AI solution, you are always balancing competing goals. Below are two common examples:

    • Local vs. Cloud
      • Local Models: Full control over data and potentially lower latency, but you’ll incur high hardware and maintenance costs.
      • Cloud Models: Easier scalability and lower initial costs, but you face recurring fees and potential compliance concerns.
    • Complexity vs. Speed
      • Complex Models: Often more accurate but can be expensive to train and slow to run.
      • Simplicity: Faster, cheaper, and easier to maintain, though potentially less precise.

    Trade-offs are unavoidable. If you think you have found a free lunch, you likely have not spotted the hidden cost yet.

    Second Law in the AI Context: Why Matters More Than How

    Defining the rationale (“why”) comes before deciding on the technical approach (“how”). Examples include:

    • Business Alignment
      Ask why you need the AI solution at all. If it does not solve a genuine business or user problem, no amount of technical brilliance will create real value.
    • Measuring Success
      Pinpoint the problem you aim to fix (reduced costs, higher customer satisfaction, deeper analytics) and let that guide your choice of tools and methods.

    Knowing why you are building an AI system ensures you pick the right trade-offs when weighing performance, cost, complexity, and transparency.