AI++ // are MCP servers inefficient? What can we learn from coding agents?


There has been a flurry of new frontier model releases over the last week that you can already use in your applications. Gemini 3 was released today, and Grok 4.1 and GPT-5.1 both arrived last week.

This week we're also learning a lot of lessons from how coding agents are built, including building a coding agent in Langflow if you want hands-on experience with your own. There's also much debate over the efficiency of MCP and whether other tools fit the job better.

Phil Nash
Developer relations engineer for
Langflow

🛠️ Building with AI, Agents & MCP

New model releases ✨

You know what it's like: you're trying to get on with building your world-changing agent, and then three new models arrive on the scene within a week. Gemini 3 has just launched and has topped the LMArena leaderboard. Alongside the model, the Gemini API gained some interesting new features, including a file search tool and improved structured outputs.
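If you want to try the structured outputs side of that, here's a minimal sketch using the google-genai Python SDK. The model name is a placeholder and the schema is just an example, so check the Gemini docs for what's actually new in Gemini 3.

```python
# Minimal sketch of structured outputs with the google-genai Python SDK.
# The model name below is a placeholder, not a confirmed Gemini 3 model ID.
from google import genai
from pydantic import BaseModel


class Headline(BaseModel):
    title: str
    summary: str


client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # swap in a Gemini 3 model ID once you have one
    contents="Summarise this week's frontier model releases as a headline.",
    config={
        "response_mime_type": "application/json",
        "response_schema": Headline,
    },
)

headline = response.parsed  # a Headline instance parsed from the JSON output
print(headline.title, "-", headline.summary)
```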

Grok 4.1 arrived just a couple of days ago, topping the LMArena board only to be unseated by Gemini. Still impressive.

It almost seems like old news that GPT-5.1 was released a week ago, arriving in the API shortly after, but things move fast in AI!

Lessons from coding agents

Coding agents have been one of the breakout successes of AI so far, so there must be much to learn from them. AmpCode wrote a guide on context management, including some of the features they built into their agent that allow for intentionally including data in the context, as well as editing, restoring, and forking context.

Cursor wrote about improving their coding agent with semantic search. The post also highlights using offline and online evals to verify that a change actually improves the agent.

If we can learn about agents in general by looking at what coding agents can do now, then here's a view into the future of agentic coding to take some inspiration from.

And again, you can build your own coding agent from the comfort of Langflow. See how good an agent you can make with just drag-and-drop.

Optimizing MCP

There is great debate over whether MCP is inefficient and whether you would be better off having your agent execute code or use a CLI instead. Then again, perhaps the real issue is that some MCP servers are poorly implemented, and reaching for a CLI just ends up reimplementing MCP poorly anyway?

With that in mind, Code-Mode is a library intended to help your agents call tools by executing code, and MCP-Optimizer is an intermediary MCP server that optimizes the tools available to an agent.
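To make the trade-off concrete, here is a purely illustrative sketch of the "code mode" idea in Python. It is not Code-Mode's or MCP-Optimizer's actual API; the tool wrappers and repo name are made up. The point is that the model writes one short script that composes the tools, so intermediate results stay in the sandbox instead of round-tripping through the context window as separate tool calls.

```python
# Illustrative only: hypothetical tool wrappers, not a real MCP client or Code-Mode API.
def list_issues(repo: str) -> list[dict]:
    """Stand-in for a tool call that would normally hit an MCP server."""
    return [
        {"id": 1, "title": "Fix login bug", "labels": ["bug"]},
        {"id": 2, "title": "Update docs", "labels": ["docs"]},
    ]

def add_label(repo: str, issue_id: int, label: str) -> None:
    """Stand-in for a second tool call."""
    print(f"Labelled issue {issue_id} in {repo} with '{label}'")

# Code the model might generate: the loop and the filtering run here, so only
# the final outcome needs to be reported back into the model's context.
generated_code = """
for issue in list_issues("acme/app"):
    if "bug" in issue["labels"]:
        add_label("acme/app", issue["id"], "triage")
"""

# In a real agent this would run in a locked-down sandbox, not a bare exec().
exec(generated_code, {"list_issues": list_issues, "add_label": add_label})
```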

🗞️ Other news

🧑‍💻 Code & Libraries 

🔦 Langflow Spotlight 

I normally like to show off a feature or component of Langflow in this section, but this week I want to share a blog post from Luiz Henrique Salazar on how to deploy Langflow with custom components on Kubernetes. It’s a fantastic and in-depth tutorial that walks you through a lot, from deploying Langflow with a Helm chart, to building a custom component and deploying it with just the back-end Langflow runtime.
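If you haven't built a custom component before, it's less work than it sounds. Here's a minimal sketch that follows the shape of Langflow's documented custom component template; the names and exact imports may vary with your Langflow version, so treat it as a starting point rather than a drop-in file.

```python
# A minimal Langflow custom component, based on the documented template.
# Field names and imports may differ slightly between Langflow versions.
from langflow.custom import Component
from langflow.io import MessageTextInput, Output
from langflow.schema import Data


class ShoutComponent(Component):
    display_name = "Shout"
    description = "Upper-cases whatever text it receives."
    icon = "custom_components"

    inputs = [
        MessageTextInput(name="input_value", display_name="Input Value", value="hello"),
    ]
    outputs = [
        Output(display_name="Output", name="output", method="build_output"),
    ]

    def build_output(self) -> Data:
        data = Data(value=self.input_value.upper())
        self.status = data  # shows the result in the Langflow UI
        return data
```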

For a simpler deployment recipe, check out Tejas’s post on deploying Langflow to services like AWS, Fly.io, Render, and Hetzner.

🗓️ Events 

December 4th, Sao Paulo, Brazil

Get down to the Langflow Meetup Sao Paulo to meet the founders of Langflow and learn about how people are using Langflow in production today.

Enjoy this newsletter? Forward it to a friend.

