What the heck is MCP and why is everyone talking about it?


It feels like everyone’s talking about MCP (Model Context Protocol) these days when it comes to large language models (LLMs), but hardly anyone is actually defining it.

TL;DR: It’s an open standard for connecting LLMs to data and tools.

Let’s dive in deeper!

The context problem for LLMs

LLMs often struggle when they are asked for information outside of their training data. They’ll sometimes either hallucinate and say something incorrect, or simply say, “I don’t know.”

Giving them the right amount of context when you prompt them (whether it’s your codebase, your repository data, your documentation, etc.) is necessary for AI agents built on top of LLMs to be useful.

Usually, you have to really refine your prompting to give LLMs that context, or use some sort of external tool. For example, GitHub Copilot has tools like @workspace that pull relevant information from your codebase into your prompts. This type of “extra tooling” is cool, but it can get complex fairly quickly as you wire it up across different APIs and services, with each one needing its own bespoke integration.

A solution: Model Context Protocol, or MCP

In November 2024, Anthropic open sourced the Model Context Protocol as a standard for connecting LLMs and AI assistants to data and tools!

MCP grew the way you fall asleep… slowly, and then all at once. As tools and organizations have adopted the MCP standard, it has only become more valuable. And because MCP is model agnostic, anyone can use and create MCP integrations. As with all open standards, a rising tide lifts all boats: the more people who use it, the better it becomes.

I think that MCP has “won” the hearts of so many AI developers and tools because of this openness, and also because it’s a very “AI-first” version of existing ideas.

This isn’t the first time we’ve seen a protocol like this become a standard, either. Back in 2016, Microsoft released the Language Server Protocol (LSP), which provided standards for code editors to support programming languages. Fast forward to today: because of LSP, programming language support across editors is better than ever, to the point where developers don’t even need to think about it anymore!

MCP takes a lot of its inspiration from LSP, and it could be absolutely transformative for AI tooling. It allows everyone, from the largest tech giants to the smallest indie developer shops, to plug robust AI integrations into any AI client with minimal setup.
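
For the protocol-curious: under the hood, MCP messages are JSON-RPC 2.0. Here’s a minimal sketch, in Python, of the request an MCP client sends when it asks a server to run a tool. The tool name and arguments here are hypothetical, just to show the shape of the message:

```python
import json

# A minimal sketch of an MCP "tools/call" request (JSON-RPC 2.0).
# "search_issues" and its arguments are hypothetical examples of a
# tool an MCP server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",
        "arguments": {"query": "is:open label:bug"},
    },
}

# This is the JSON payload that would go over the wire to the server.
print(json.dumps(request, indent=2))
```

A server advertises what it can do via methods like `tools/list`, the client invokes a tool with `tools/call` as above, and the result comes back as a JSON-RPC response that the client can hand to the LLM as context. That shared message shape is what lets any client talk to any server.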

That’s why this is a huge deal! An open standard that is backed more and more by the tech community means better tools, better developer experiences, and better user experiences for everyone.

GitHub and MCP

We’re not just talking about MCP: we’re contributing, too!

We’re SO excited to have recently released our official, open source, local GitHub MCP Server! It provides seamless integration with the GitHub API, unlocking advanced automation and integration capabilities for developers to build with.
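
To give a rough idea of what setup looks like, here’s a sketch of a `.vscode/mcp.json` entry that runs the GitHub MCP Server locally via Docker. Treat the exact field names and image reference as assumptions and check the server’s README for the current configuration:

```json
{
  "servers": {
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-personal-access-token>"
      }
    }
  }
}
```

Once a client like VS Code picks this up, its AI features can call the server’s tools (issues, pull requests, repository data, and so on) without any custom glue code on your end.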

You can chat more with us about it in the GitHub Community or you can check out the official announcement.

How do I contribute and learn more?

Hoorah, I thought you’d never ask! Here are some resources to get you on your way:

And if you don’t mind the shameless plug, you can also use it with agent mode now. Go forth and code!
