Author: Silviu Bojica

  • Model Context Protocol (MCP)

    Note: This article is still in progress. The content may change until it reaches its final version.

    There is a lot of buzz about a new trend in the AI world: the Model Context Protocol, or MCP for short. In this article, I will explain what MCP is.

    LLMs Evolution

    In the “Startup Ideas Podcast”, episode “Model Context Protocol (MCP), clearly explained (why it matters)”, Dr. Ras Mix explained the concept from the perspective of the evolution of LLMs and the tools and services surrounding the AI ecosystem. In the following paragraphs, I will summarize it.

    In the beginning, there was just the LLM: trained, with a knowledge cut-off at a certain point in time. Based on next-token prediction, it could answer questions from its training data, but it could not perform any tasks beyond what it was trained on.

    In a second phase, LLMs were connected to tools. For example, LLMs started to connect to search engines, consume and interpret various APIs, and receive additional knowledge through RAG systems. Being a new field, there was no standard, and everyone integrated these tools in their own way. While this approach makes LLMs more capable, these ad-hoc integrations bring their own set of problems due to the lack of standards.

    The current phase, LLMs and MCP, brings a standard way to connect to tools and services: an MCP server provides a common protocol through which LLMs can discover, understand, and use the available tools.

    MCP Terminology

    Model – refers to the LLM in use.

    Context – refers to the context provided by the tools.

    Protocol – refers to this common standard that allows the model (LLM) to understand the context provided by different sources.
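
    To make the discover-then-call flow concrete, here is a deliberately simplified, stdlib-only sketch of what an MCP-style server does. This is not the official MCP SDK: the method names mirror the spec’s `tools/list` and `tools/call`, but the `get_weather` tool, its schema, and the in-process transport are made-up illustrations.

```python
import json

# A toy MCP-style server: it exposes a tool catalog that any client
# (the LLM host) can discover and invoke through a common
# JSON-RPC-shaped protocol. Hypothetical example tool below.
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}}},
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def handle_request(request: str) -> str:
    req = json.loads(request)
    if req["method"] == "tools/list":      # discovery: what tools exist?
        result = [{"name": name,
                   "description": tool["description"],
                   "inputSchema": tool["inputSchema"]}
                  for name, tool in TOOLS.items()]
    elif req["method"] == "tools/call":    # invocation: run one tool
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        result = None
    return json.dumps({"id": req["id"], "result": result})

# The client first discovers the tools, then calls one by name:
listing = handle_request(json.dumps(
    {"id": 1, "method": "tools/list", "params": {}}))
call = handle_request(json.dumps(
    {"id": 2, "method": "tools/call",
     "params": {"name": "get_weather", "arguments": {"city": "Oslo"}}}))
```

    The key point is the shape of the protocol, not the transport: because every server answers the same discovery and call methods, a model can use any compliant tool without bespoke integration code.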

  • Everything is a Trade-Off and the Power of “Why”

    In software architecture there is a “delicate dance of compromises”, a concept emphasized in “Fundamentals of Software Architecture” by Mark Richards and Neal Ford. Two particular “laws” in that book have always resonated with me, influencing not just how I design systems, but also how I approach new technologies.

    Let’s dive in:

    The First Law of Architecture

    Everything in software architecture is a trade-off.

    Software architecture is about balancing competing priorities: performance vs. simplicity, security vs. ease of use, scalability vs. cost, and so on. No solution optimizes for all aspects at once. There is always a push and pull between constraints.

    Corollary to the First Law

    If an architect thinks they have discovered something that isn’t a trade-off, more likely they just haven’t yet identified the trade-off.

    I interpret this quite strictly: if you have not spotted any trade-offs in your proposed solution, you probably have not delved deep enough into how the solution will behave under various circumstances. Real-world usage often exposes hidden costs, be they higher operational overhead, licensing fees, or dependencies that complicate future updates.

    The Second Law of Architecture

    Why is more important than how.

    A frequent pitfall in software development is jumping straight into the “how” (tools, frameworks, or libraries), without first asking “why”. Pinpointing the “why” ensures that your technology choices serve real business or user needs, rather than following hype or convenience.

    First Law in the AI Context: Everything Is a Trade-off

    When choosing or designing an AI solution, you are always balancing competing goals. Below are two common examples:

    • Local vs. Cloud
      • Local Models: Full control over data and potentially lower latency, but you’ll incur high hardware and maintenance costs.
      • Cloud Models: Easier scalability and lower initial costs, but you face recurring fees and potential compliance concerns.
    • Complexity vs. Speed
      • Complex Models: Often more accurate but can be expensive to train and slow to run.
      • Simplicity: Faster, cheaper, and easier to maintain, though potentially less precise.

    Trade-offs are unavoidable. If you think you have found a free lunch, you likely have not spotted the hidden cost yet.
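
    As a rough illustration of the Local vs. Cloud trade-off, here is a toy break-even calculation. Every number in it (token volume, per-token price, hardware cost, amortization period) is an assumed placeholder for the sake of the sketch, not real pricing.

```python
# Toy break-even comparison for the local-vs-cloud trade-off.
# All figures are hypothetical assumptions, not actual vendor pricing.

def monthly_cost_cloud(tokens_per_month: float,
                       price_per_1k_tokens: float) -> float:
    """Pay-as-you-go: cost scales linearly with usage."""
    return tokens_per_month / 1000 * price_per_1k_tokens

def monthly_cost_local(hardware_cost: float,
                       amortization_months: int,
                       monthly_ops: float) -> float:
    """Mostly fixed cost: amortized hardware plus operations."""
    return hardware_cost / amortization_months + monthly_ops

cloud = monthly_cost_cloud(tokens_per_month=50_000_000,
                           price_per_1k_tokens=0.002)  # assumed rate
local = monthly_cost_local(hardware_cost=12_000,       # assumed server
                           amortization_months=36,
                           monthly_ops=200)
```

    With these assumed numbers the cloud side wins, but past some token volume the fixed local cost takes over. The “free lunch” hides in whichever side of that break-even point your actual workload lives on, plus the factors the formula leaves out: compliance, latency, and maintenance effort.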

    Second Law in the AI Context: Why Matters More Than How

    Defining the rationale (“why”) comes before deciding on the technical approach (“how”). Examples include:

    • Business Alignment
      Ask why you need the AI solution at all. If it does not solve a genuine business or user problem, no amount of technical brilliance will create real value.
    • Measuring Success
      Pinpoint the problem you aim to fix (reduced costs, higher customer satisfaction, deeper analytics) and let that guide your choice of tools and methods.

    Knowing why you are building an AI system ensures you pick the right trade-offs when weighing performance, cost, complexity, and transparency.

  • Manage Your Tasks with ChatGPT Reminders

    This week, OpenAI introduced a set of task management features for ChatGPT. Now, you can create one-time or recurring reminders, or define more complex tasks that the AI should run at specific intervals.

    The feature is currently rolling out to users on paid plans: Plus, Pro, or Team. For more details, see the Scheduled tasks in ChatGPT article on the OpenAI help pages.

    Examples of Managing Your Tasks with ChatGPT

    • “Remind me tomorrow morning at 9am to buy milk.”
    • “Can you provide me every day, at 2:15pm, with an executive summary of the last 24 hours of news from the domain of Artificial Intelligence (AI)?”

    I am particularly excited about prompts like the second example, since I’ll receive a daily briefing on what’s happening in the AI space. It’s a quick way to stay informed without manually checking updates.

    How the Flow Looks in Images

    The model selector:

    • There is a new model option, GPT-4o (scheduled tasks), clearly marked as beta.
    Screenshot showcasing tasks ChatGPT interface and features.

    The prompt and the answer:

    • The usual workflow: you enter your prompt, ChatGPT responds.
    • Look out for the new visual elements indicating that a task has been scheduled.
    Tasks ChatGPT interface showcasing scheduled reminders.

    Task management:

    • After creating a task, you can edit, pause, or view all tasks in a dedicated task manager area.
    Tasks ChatGPT interface showcasing scheduled reminders.

    Notification:

    • When it’s time for your task to run, you’ll receive an email or push notification (depending on your settings).

    My own email alert looked like this:

    Screenshot showcasing tasks ChatGPT interface and features.

    and the chat is updated with the new message (a direct link is available when you follow the New Message action from your email):

    Screenshot showcasing tasks ChatGPT interface and features.

    These new scheduling features make it easier to offload your routine reminders and automations to ChatGPT, so you can focus on what truly matters. How do you feel about trusting an AI to handle your daily to-dos? Let me know if you have any creative ideas for using these new tools in your own workflow.

  • Chronicles of an AI Tamer!

    Let’s start the engines! Welcome to this adventure where technology and imagination collide: “Chronicles of an AI Tamer”!

    I am Silviu Bojica, a software engineer and solution architect with a keen eye on the latest software trends, especially in the world of Artificial Intelligence. In these chronicles, I will share my learnings, experiments, and occasional misadventures as I attempt to “tame” AI.

    Why “Chronicles of an AI Tamer”?

    Because the AI landscape is vast and sometimes wild, I will attempt to be your guide through its jungles, deserts, waters, and uncharted territories, sharing real-world insights, tutorials, and stories along the way.

    Over the coming posts, we’ll explore:

    • Practical AI tips & tutorials: from quick hacks to deeper dives.
    • Industry trends & tools: spotlights on emerging frameworks, programming languages, and cloud services.
    • Lessons learned: candid accounts of what worked, what didn’t, and how to course-correct.
    • Experimental projects: trials with machine learning models, GPT-based projects, and beyond.

    As an architect, I love blending pragmatism and curiosity. Expect technical details, but also a bird’s-eye perspective on how AI affects businesses, teams, and broader society. If you ever have questions or want to bounce ideas, please leave a comment below.

    Ready to jump in? In my next post, we’ll explore why “in software, everything is a trade-off” and how that mindset ties into taming AI. Stay tuned! We have a fascinating journey ahead!

    Disclaimer

    All opinions, strategies, and experiments shared in these chronicles are my own and do not represent those of my employer, whose name I will not disclose here. The information provided should be used at your own risk. While I regularly work with AI in my day-to-day job, nothing in these posts should be considered official advice or endorsement from my company. Always do your own research and due diligence before applying any concepts discussed here.