Corporate MCP Cybersecurity Explained

Wiring Google OAuth, per-tool authorization, and audit logging into a .NET MCP server

1. Introduction: Why Corporate MCP Needs More Than stdio

The Model Context Protocol (MCP) is a small, well-defined contract that lets a language-model client call typed, server-defined tools. In its most common form it runs over stdio: the model launches a child process, speaks JSON-RPC over its standard streams, and shuts it down when the conversation ends. That model is fine for a single developer running a single tool on a single laptop. It stops being fine the moment more than one person needs to call the same tools.

A corporate MCP server is a different animal. It is hosted on a real network interface, it is reachable by every employee who can resolve its DNS name, and it speaks to data that nobody outside the company should ever see. The transport is no longer a trusted parent process; it is HTTP, and HTTP is the most attacked surface in the world. Once you have bound to anything other than the loopback interface, you own a public API and every classical web-application threat applies to it.

This article is a code-driven tour of how to take an MCP server from "works on my laptop" to "safe to point at production data". Every code listing is taken from a working .NET 10 ASP.NET Core MCP server that uses Google Workspace as its identity provider, the official ModelContextProtocol.AspNetCore SDK as its transport, and a small JSON file as its access-control roster. The fictitious company in the examples is called Shopsnap; substitute your own domain wherever you see it.

The ground we will cover is, in order: a quick threat model, a vocabulary lesson on the pillars of cybersecurity, the Google Cloud setup that creates the OAuth identity screen, the .NET pipeline that validates Google’s tokens, the per-tool authorization layer that decides who is allowed to call what, the audit middleware that records every call, and finally a deployment checklist. By the end you should be able to take the listings and drop them into your own MCP server with very little modification.

Posted: 4/30/2026
Gal Ratner

2. The Corporate MCP Threat Model

Before designing defenses it helps to be precise about what you are defending against. An MCP server that exposes internal data — customers, schedules, payments, source code — is a much more attractive target than an arbitrary REST API, because it advertises a typed catalog of operations to anyone who can list its tools. The discoverability that makes MCP useful for an LLM also makes it useful for an attacker who has stolen a token.

2.1 Who is on the other end of Bearer ...?

Every authenticated MCP request arrives with an Authorization header. Your server has to answer two questions before doing anything: is this a real token issued by an identity provider you trust, and which human does it represent? Skipping either answer collapses authentication into "the request looked like JSON", which is no authentication at all. The dual JWT / opaque-token handler shown later in Section 5 exists specifically because answering the first question correctly is harder than the tutorials usually admit.

2.2 What can a stolen token do?

Assume the token will be stolen. A laptop is lost, a developer pastes a token into a Slack channel, a malicious browser extension exfiltrates session storage. The question is not "how do we prevent every leak" but "how much damage does a single leaked token do before it expires?" Limiting blast radius means short-lived tokens, minimum scopes, and an authorization layer that does not assume a valid token implies valid intent.

2.3 Insider misuse versus external attacker

External attackers are the loud threat; insider misuse is the quiet one. A rostered employee with legitimate access to one tool may try to invoke another that nobody has ever audited. A per-tool ACL stops that, and an audit log makes the attempt visible to whoever owns incident response. Without both, the answer to "who pulled that report on Friday night" is a shrug.

2.4 Data exfiltration through tool composition

An LLM client is a confused deputy by design: it composes tools on a user’s behalf, but it has no idea which compositions the user is actually entitled to. If your read tool returns customer records and your write tool sends emails, an instruction embedded in a record can ask the model to email a competitor. The defenses here are the same defenses you would apply to any API: scope the tools tightly, never let one tool’s output drive another tool’s side effects without a human in the loop, and log everything.

Threats this article addresses, mapped to later sections:

· Forged or stolen tokens → Section 5 (authentication, audience, hosted-domain)
· Over-broad access by valid users → Section 6 (per-tool roster)
· Insider misuse → Section 7 (audit log of every call)
· Eavesdropping in transit → Section 8 (TLS, forwarded headers)
· Secrets in source control → Section 8 (configuration management)

3. The Pillars of Cybersecurity: A Working Vocabulary

The rest of this article uses six recurring words. They are the standard vocabulary of practitioner security: the classical CIA triad of Confidentiality, Integrity, and Availability, plus Authentication, Authorization, and Auditing / Non-repudiation. Each definition below is short on purpose; the implementation details are in the code sections, and Section 9 contains a single table mapping every pillar back to a specific file in the codebase.

3.1 Confidentiality

Information is only readable by the people meant to read it. In an MCP context this means tokens are not visible on the wire (TLS), database credentials are not visible in source control (vaulted secrets), and tool responses are not delivered to unauthenticated callers. Implemented here via HTTPS redirection in Program.cs and configuration-based secrets; see Section 8.

3.2 Integrity

Information cannot be modified in transit or at rest by an unauthorized party, and the server can prove it. Cryptographic signatures on JWTs, parameterized SQL queries that resist injection, and a re-check of the OAuth audience claim are all integrity controls. Implemented here in the JWT validation parameters and tokeninfo handler shown in Section 5, and the Entity Framework patterns referenced in Section 8.

3.3 Availability

Authorized callers can actually use the system when they need it. This usually shows up as caching, throttling, and timeout policy, but it also means a missing config file should fail loudly at startup rather than silently degrade in production. Implemented here as a fail-fast roster loader, a reused HttpClient that prevents socket exhaustion, and a configurable database command timeout.

3.4 Authentication

Proof of identity. "This request is from a real person at a real company." In this codebase the proof is a Google-issued token, validated either as a signed JWT or by introspecting an opaque access token at Google’s tokeninfo endpoint. The hosted domain and verified-email checks make sure the identity is from your Workspace, not any random Gmail account.

3.5 Authorization

What an authenticated identity is allowed to do. Authentication answers "who"; authorization answers "may they call this specific tool right now?" Implemented here as an opt-in [ToolAccess] attribute, a custom AuthorizationHandler, and a per-user-per-tool roster loaded from a JSON file at startup.

3.6 Auditing and Non-repudiation

An immutable record of who did what, when. A user cannot plausibly claim they did not call a tool if there is a timestamped log entry tying their email to the tool name. Implemented here as a thin middleware that runs after authentication and extracts the JSON-RPC method and tool name from each request body.

4. Setting Up Google Cloud: OAuth Consent Screen, Client ID, Hosted Domain

Before writing a single line of .NET, the OAuth handshake has to exist on Google’s side. The configuration there sets the perimeter of who can even attempt to authenticate. Get this wrong and your server-side checks become the only line of defense; get it right and you have two layers, with Google’s identity provider doing most of the work.

4.1 Create the project and enable the OAuth consent screen

In the Google Cloud Console, create a new project for the MCP server (or reuse an existing one belonging to the team that owns it). Open APIs & Services → OAuth consent screen and start configuring. The single most important field on this screen for a corporate MCP server is the User Type radio button. Choose Internal. "Internal" pins the consent screen to your Google Workspace organization — only accounts in your Workspace can ever see it, and Google will refuse to issue tokens to anyone else. Personal Gmail accounts are blocked at the identity-provider layer, before they ever hit your server.

Fill in the App Name (for example "Shopsnap MCP"), the user support email (an owned alias such as mcp-support@shopsnap.com), and the developer contact email. The branding fields are not security-critical but they are the first thing your users see in the OAuth consent dialog — a polished consent screen reduces phishing risk by training users to expect a specific look.

4.2 Create the OAuth 2.0 Client ID

Under APIs & Services → Credentials → Create Credentials → OAuth client ID, choose Application Type: Web application. The redirect URI you authorize here must match whatever URI your MCP client (Claude Desktop, a custom CLI, an internal portal) uses to complete the OAuth flow. For Claude Desktop and most loopback-flow clients, this is a localhost URL on a high port; for a server-rendered client, it is your own callback endpoint.

The scopes you request from Google should be the minimum needed to identify the user. For an MCP server that does no Google API calls of its own, openid email profile is sufficient. These three scopes give you the user’s identity, their verified email, and their basic profile — nothing else. Asking for more is a violation of least privilege and an unnecessary entry on the consent screen that may make users suspicious.
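To make the minimum-scope request concrete, here is a sketch of the authorization URL a loopback-flow client would construct with only those three scopes. The redirect URI and port are placeholders for whatever your client actually registered, and the hd parameter is only an account-chooser hint for the user, never a security control; the server-side checks in Section 5 are what actually enforce the domain.

```
https://accounts.google.com/o/oauth2/v2/auth
  ?client_id=<YOUR_GOOGLE_CLIENT_ID>.apps.googleusercontent.com
  &redirect_uri=http://localhost:53682/callback
  &response_type=code
  &scope=openid%20email%20profile
  &hd=shopsnap.com
```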

Save the Client ID. You do not need the client secret for this server because validation happens server-side using Google's public key set (JWKS) for JWTs and Google's public tokeninfo endpoint for opaque tokens. The client secret would be needed only if your server itself were performing the OAuth flow on behalf of a user, which it is not.

4.3 Hosted-domain restriction

The Internal user type already restricts authentication to your Workspace, but the .NET server will also enforce a hosted-domain (hd) check on every token. This belt-and-suspenders pattern matters because configuration drift is real: someone may flip the consent screen to External in a year, or a future MCP client may use a flow that bypasses the consent screen entirely. The server-side hd check is the invariant that survives those changes.

Warning: Choosing "External" on the consent screen lets any Google account through Google’s side of the handshake. The server’s hd and email-suffix checks will still reject them, but you have lost defense in depth and exposed the consent screen to the entire internet.

With the consent screen and client ID in place, you have three values to feed the server: the Client ID (e.g. <YOUR_GOOGLE_CLIENT_ID>.apps.googleusercontent.com), the authority (https://accounts.google.com), and the hosted domain (shopsnap.com). These land in appsettings.json under the GoogleAuth key, shown in Section 8.
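As a preview of that configuration, a minimal sketch of the GoogleAuth block might look like the following. The key names mirror the config["GoogleAuth:..."] reads in Listing 1; the Resource value is this server's own canonical URL, and all values here are the placeholder examples from this section, not real credentials.

```json
{
  "GoogleAuth": {
    "ClientId": "<YOUR_GOOGLE_CLIENT_ID>.apps.googleusercontent.com",
    "Authority": "https://accounts.google.com",
    "HostedDomain": "shopsnap.com",
    "Resource": "https://mcp.shopsnap.com/api/mcp/v1"
  }
}
```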

5. Wiring Authentication in ASP.NET Core

This is the longest section in the article because authentication is where most corporate MCP tutorials get something wrong. The ASP.NET Core JwtBearer middleware is excellent at validating signed JWTs, but Google’s OAuth flow can return either a JWT (the ID token) or an opaque access token (a string like ya29.xxx). Most MCP clients send the access token. JwtBearer cannot validate an opaque token because there is nothing inside it to validate — you have to introspect it at Google’s tokeninfo endpoint. The code below handles both cases.

5.1 The extension method and DI shape

All Google-related authentication setup lives behind a single extension method. The Program.cs caller does not need to know about JWT bearer schemes, MCP authentication schemes, or tokeninfo — it just calls AddGoogleMcpAuthentication and is done. Listing 1 shows the entry point and the dual-scheme registration.

Listing 1: AddGoogleMcpAuthentication entry point

public static IServiceCollection AddGoogleMcpAuthentication(
    this IServiceCollection services, IConfiguration config)
{
    var clientId     = config["GoogleAuth:ClientId"]     ?? throw new InvalidOperationException("GoogleAuth:ClientId");
    var authority    = config["GoogleAuth:Authority"]    ?? "https://accounts.google.com";
    var resource     = config["GoogleAuth:Resource"]     ?? throw new InvalidOperationException("GoogleAuth:Resource");
    var hostedDomain = config["GoogleAuth:HostedDomain"] ?? "shopsnap.com";

    services
        .AddAuthentication(o =>
        {
            o.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
            o.DefaultChallengeScheme    = McpAuthenticationDefaults.AuthenticationScheme;
        })

Two schemes are registered, with different responsibilities. JwtBearer is the default authenticate scheme — it inspects every incoming Bearer token and produces a ClaimsPrincipal. The Mcp scheme is the default challenge scheme — when an unauthenticated request arrives, it produces the protocol-correct 401 response with the protected-resource metadata that MCP-aware clients use to discover the auth server. Splitting the two responsibilities is what lets MCP clients negotiate authentication without you having to write any of that handshake yourself.
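Concretely, the challenge the Mcp scheme produces is a 401 whose WWW-Authenticate header points the client at the protected-resource metadata document, in the shape defined by RFC 9728. A sketch of that response (the exact well-known path is served by the SDK and may vary by version):

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer resource_metadata="https://mcp.shopsnap.com/.well-known/oauth-protected-resource"
```

An MCP-aware client follows that URL, reads the metadata shown later in Listing 5, and starts the OAuth flow against Google on its own.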

Notice the fail-fast pattern in the configuration reads. Missing ClientId or Resource throws at startup, not on the first authenticated request. This is an Availability control: the operator finds out about a misconfigured deployment in seconds, not after a user has already filed a confusing bug report.

5.2 Token validation parameters

JwtBearer needs to know what a valid token looks like. Listing 2 is the configuration block that pins the validator to Google’s issuers, your client ID as the audience, and standard signature and lifetime checks.

Listing 2: TokenValidationParameters for Google JWTs

o.TokenValidationParameters = new TokenValidationParameters
{
    ValidateIssuer           = true,
    ValidIssuers             = new[] { "https://accounts.google.com", "accounts.google.com" },
    ValidateAudience         = true,
    ValidAudience            = clientId,
    ValidateLifetime         = true,
    ValidateIssuerSigningKey = true,
    NameClaimType            = "email",
};

Two settings deserve a closer look. ValidIssuers contains both the URL form and the host-only form because Google has historically used both, and a strict comparison against just one will reject perfectly valid tokens. ValidAudience is set to your OAuth client ID — this is the integrity guarantee that prevents an attacker from presenting a token minted for a different application. Without the audience check, any Google-signed token from any OAuth client in the world would pass signature validation here.

NameClaimType = "email" tells ASP.NET Core that ctx.User.Identity.Name should return the email claim, which is what the rest of the pipeline expects. The MapInboundClaims = false setting (configured one line above) preserves Google’s original claim names like hd and email_verified instead of remapping them to the older Microsoft URI-style claim names. The authorization handler depends on those raw names.
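For orientation, the JwtBearer registration these settings live in looks roughly like this. This is a sketch of the surrounding shape only; the validation parameters and events it omits are exactly Listings 2 and 3.

```csharp
.AddJwtBearer(o =>
{
    // Authority drives OIDC discovery: JwtBearer fetches Google's signing
    // keys (JWKS) via the issuer's /.well-known/openid-configuration document.
    o.Authority = authority;

    // Keep Google's original claim names ("hd", "email_verified", "email")
    // instead of remapping them to the legacy Microsoft claim-type URIs.
    // The authorization handler reads the raw names.
    o.MapInboundClaims = false;
})
```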

5.3 Dual token handling: JWT fast path versus opaque tokeninfo path

This is the centerpiece of the authentication wiring. The OnMessageReceived handler fires on every incoming request, before JwtBearer’s default validator runs. It looks at the token, decides whether it is a JWT or an opaque access token, and either steps out of the way (for JWTs, letting JwtBearer do its normal thing) or introspects the token at Google’s tokeninfo endpoint and constructs a ClaimsPrincipal manually.

Listing 3: OnMessageReceived — the dual JWT/opaque token handler

o.Events = new JwtBearerEvents
{
    // Intercept opaque Google access tokens (non-JWT). Claude Desktop and most
    // MCP clients send the access_token as the Bearer, which for Google is an
    // opaque string like "ya29.xxx" — not a JWT. Introspect via tokeninfo.
    OnMessageReceived = async ctx =>
    {
        var auth = ctx.Request.Headers.Authorization.ToString();
        if (!auth.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
            return;

        var token = auth["Bearer ".Length..].Trim();

        // A Google ID token is a JWT: three dot-separated segments. Let
        // JwtBearer's built-in validator handle those (fast path, no HTTP).
        if (token.Count(c => c == '.') == 2)
            return;

        // Opaque access token path — validate via Google's tokeninfo endpoint.
        var logger = ctx.HttpContext.RequestServices
            .GetRequiredService<ILoggerFactory>()
            .CreateLogger("GoogleAuth.TokenInfo");

        try
        {
            var resp = await _tokenInfoHttp.GetAsync(
                $"https://oauth2.googleapis.com/tokeninfo?access_token={Uri.EscapeDataString(token)}",
                ctx.HttpContext.RequestAborted);

            if (!resp.IsSuccessStatusCode)
            {
                logger.LogWarning("tokeninfo rejected access token (HTTP {Status})", (int)resp.StatusCode);
                ctx.Fail("Google tokeninfo rejected the access token");
                return;
            }

            var info = await resp.Content.ReadFromJsonAsync<Dictionary<string, JsonElement>>(
                cancellationToken: ctx.HttpContext.RequestAborted);
            if (info is null)
            {
                ctx.Fail("Empty tokeninfo response");
                return;
            }

            var aud = info.TryGetValue("aud", out var a) ? a.GetString() : null;
            if (!string.Equals(aud, clientId, StringComparison.Ordinal))
            {
                logger.LogWarning("tokeninfo aud='{Aud}' != expected client_id", aud);
                ctx.Fail($"Access token aud='{aud}' does not match expected client_id");
                return;
            }

            var claims = new List<Claim>();
            foreach (var (k, v) in info)
                claims.Add(new Claim(k, v.ToString() ?? string.Empty));

            var identity = new ClaimsIdentity(
                claims,
                authenticationType: JwtBearerDefaults.AuthenticationScheme,
                nameType: "email",
                roleType: null);
            ctx.Principal = new ClaimsPrincipal(identity);

            var err = ValidateGoogleClaims(ctx.Principal, hostedDomain);
            if (err is not null)
            {
                logger.LogWarning("tokeninfo principal rejected: {Err}", err);
                ctx.Fail(err);
                return;
            }

            ctx.Success();
        }
        catch (Exception ex)
        {
            logger.LogError(ex, "tokeninfo introspection threw");
            ctx.Fail(ex);
        }
    },

    OnTokenValidated = ctx =>
    {
        var err = ValidateGoogleClaims(ctx.Principal, hostedDomain);
        if (err is not null) ctx.Fail(err);
        return Task.CompletedTask;
    },

    // Surface the exact reason any token was rejected into logs.
    OnAuthenticationFailed = ctx =>
    {
        var logger = ctx.HttpContext.RequestServices
            .GetRequiredService<ILoggerFactory>()
            .CreateLogger("GoogleAuth.JwtBearer");
        logger.LogWarning("JwtBearer auth failed: {Message}", ctx.Exception?.Message);
        return Task.CompletedTask;
    }
};

Walk through it from the top. The handler reads the Authorization header and bails out early if there is no Bearer token — there are other authentication schemes in ASP.NET Core, and an absent Bearer is not the same as a rejected one. Then it counts the dots in the token. A JWT is exactly three base64-url-encoded segments separated by two dots; an opaque Google access token is one long string with no dots. The two-dot test is the cheapest possible way to distinguish them and works in practice for every token Google currently issues.

If the token is a JWT, the handler returns and lets JwtBearer’s default validator run. That validator pulls Google’s public keys (JWKS) from https://www.googleapis.com/oauth2/v3/certs, verifies the RS256 signature, checks the issuer and audience, and produces a ClaimsPrincipal automatically. The downstream OnTokenValidated event then runs ValidateGoogleClaims to enforce the hosted-domain and verified-email rules.

If the token is opaque, the handler does the work itself. It HTTP-GETs the tokeninfo endpoint with the access token as a query parameter. Google responds with either a 200 and a JSON document of claims, or a 4xx that means the token is invalid. The handler treats anything other than 200 as a failure.
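For reference, a successful tokeninfo response is a flat JSON document of claims. The sketch below is representative, not exhaustive; field values are illustrative, the exact set of fields varies by token, and note that everything, including email_verified, arrives as a string, which is why the validation code compares against "true" rather than a boolean.

```json
{
  "aud": "<YOUR_GOOGLE_CLIENT_ID>.apps.googleusercontent.com",
  "sub": "110169484474386276334",
  "email": "alice@shopsnap.com",
  "email_verified": "true",
  "scope": "openid https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile",
  "expires_in": "3599"
}
```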

The audience re-check is the most easily missed part of this handler. Google’s tokeninfo endpoint validates the token — it confirms the signature is real and the token is not expired — but it does not validate the audience against your client ID. It will happily return claims for a token minted for a completely different OAuth client. Without the explicit aud comparison shown here, your server would accept tokens issued by any OAuth client at all, which means an attacker who controls a public OAuth app could mint tokens that pass authentication on your MCP server. The string equality check is the integrity boundary; treat it as load-bearing.

Once the audience is confirmed, the handler builds a ClaimsIdentity from the tokeninfo response, sets ctx.Principal so the rest of the pipeline sees the correct user, runs the same ValidateGoogleClaims function the JWT path uses, and calls ctx.Success() to short-circuit JwtBearer’s built-in validator (which would fail because the token is not a JWT).

Note: The HttpClient used for tokeninfo is a static readonly field at the top of the GoogleAuthExtensions class, not a per-request new HttpClient(). The latter is one of the most common bugs in .NET code: each new HttpClient opens its own socket pool, and under load the process exhausts ephemeral ports. The static instance is reused for the lifetime of the process and is fully thread-safe for GET requests.
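A minimal sketch of that field; the timeout value is an assumption for illustration, not taken from the codebase:

```csharp
// Shared for the process lifetime; HttpClient is thread-safe for concurrent GETs.
// Creating a new HttpClient per request leaks socket pools under load.
private static readonly HttpClient _tokenInfoHttp = new()
{
    // Assumption: bound tokeninfo latency so a slow Google endpoint fails
    // authentication quickly instead of hanging the request pipeline.
    Timeout = TimeSpan.FromSeconds(10)
};
```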

5.4 ValidateGoogleClaims: hosted domain and email_verified

Both the JWT path and the opaque-token path eventually call into ValidateGoogleClaims. This is the function that turns "the token is real" into "the token belongs to a real Workspace user." Listing 4 shows it in full.

Listing 4: ValidateGoogleClaims

private static string? ValidateGoogleClaims(ClaimsPrincipal? p, string hostedDomain)
{
    if (p is null) return "No principal";
    var hd = p.FindFirst("hd")?.Value;
    var ev = p.FindFirst("email_verified")?.Value;
    var em = p.FindFirst("email")?.Value;

    // `hd` is present on ID tokens (JWT) but absent from access-token tokeninfo responses.
    // Only enforce it when actually present; the email domain check below is the
    // real security boundary for both token types.
    if (!string.IsNullOrEmpty(hd) &&
        !string.Equals(hd, hostedDomain, StringComparison.OrdinalIgnoreCase))
        return $"hd='{hd}' is not '{hostedDomain}'";

    if (!string.Equals(ev, "true", StringComparison.OrdinalIgnoreCase))
        return "email_verified is not true";

    if (string.IsNullOrWhiteSpace(em) ||
        !em.EndsWith("@" + hostedDomain, StringComparison.OrdinalIgnoreCase))
        return $"email '{em}' not in '{hostedDomain}'";

    return null;
}

There are three checks, in order. The hd claim, when present, must match the configured hosted domain. The hd claim is only present on ID tokens (JWT); tokeninfo responses for opaque access tokens do not include it. Skipping the check when hd is absent avoids false rejections of access tokens; the email-suffix check on the next line is the actual security boundary that applies to both token types.

The email_verified claim must be exactly "true". A user can enter any string as their email at sign-up; only after Google has actually verified ownership does this claim flip to true. Without this gate, an attacker who controls a personal Google account could in principle list a corporate address as their email, have it appear in the email claim, and pass your hosted-domain check trivially.

Finally, the email itself must end in @shopsnap.com. This is the check that catches the most cases in practice: anything that does not unambiguously identify a member of your Workspace is rejected with a logged reason. The reason string is returned all the way up the stack and ends up in the structured log, which makes it easy to debug "why was Alice rejected?" without having to add ad-hoc logging.

5.5 Advertising the protected resource to MCP clients

MCP’s authentication discovery mechanism lets a client ask a server "what auth server should I send users to?" rather than baking the answer into the client’s config. The AddMcp call shown in Listing 5 publishes the metadata that drives this.

Listing 5: Protected-resource metadata for MCP clients

.AddMcp(o =>
{
    o.ResourceMetadata = new()
    {
        Resource = resource,
        AuthorizationServers = { authority },
        ScopesSupported = { "openid", "email", "profile" },
    };
});

Resource is the canonical URL of this MCP server. AuthorizationServers points to Google. ScopesSupported is the minimum set the client should ask for. A compliant MCP client reading this metadata can construct the correct OAuth flow without any user-facing configuration beyond the server URL itself — the user types in https://mcp.shopsnap.com/api/mcp/v1, the client discovers Google, and the Google consent screen appears.
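The SDK serves this configuration as a JSON document at the server's well-known protected-resource endpoint. A sketch of what a client sees, with field names per RFC 9728 and values mirroring the configuration above:

```json
{
  "resource": "https://mcp.shopsnap.com/api/mcp/v1",
  "authorization_servers": ["https://accounts.google.com"],
  "scopes_supported": ["openid", "email", "profile"]
}
```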

6. Wiring Authorization: Per-Tool ACL via Roster

Authentication only proves who is calling. Authorization decides whether this user is allowed to call this tool right now. Conflating the two is the most common cause of accidental data leaks in corporate APIs: "they had a valid token" is not the same as "they were entitled to that data." The codebase implements per-user-per-tool access control via three small pieces — an opt-in attribute, a custom AuthorizationHandler, and a JSON file on disk.

6.1 The opt-in attribute and policy registration

The attribute, shown in Listing 6, is twelve lines of code that exist only to make tool methods read better. [Authorize("ToolAccess")] would work just as well, but [ToolAccess] reads more naturally and centralizes the policy name.

Listing 6: ToolAccessAttribute

using Microsoft.AspNetCore.Authorization;

namespace ShopsnapMcpServer.Authorization;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public sealed class ToolAccessAttribute : AuthorizeAttribute
{
    public const string PolicyName = "ToolAccess";

    public ToolAccessAttribute() : base(PolicyName) { }
}

The attribute references a policy by name. The policy itself, the requirement it carries, and the handler that evaluates it are all registered in Program.cs as shown in Listing 7.

Listing 7: Policy and handler registration in Program.cs

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy(ToolAccessAttribute.PolicyName, policy =>
        policy.AddRequirements(new ToolAccessRequirement()));
});
builder.Services.AddSingleton<IEmployeeToolAccessRoster, EmployeeToolAccessRoster>();
builder.Services.AddSingleton<IAuthorizationHandler, ToolAccessHandler>();

ToolAccessRequirement is an empty marker class — ASP.NET Core’s authorization subsystem uses the requirement type as a discriminator to find the matching handler. The roster is registered as a singleton because it loads its data from disk once at startup and serves every subsequent lookup from an in-memory dictionary.
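With the policy and handler registered, opting a tool into the ACL is a single attribute on the method. The sketch below assumes the MCP C# SDK's [McpServerToolType] and [McpServerTool] attributes; the CustomerTools class and GetCustomer method are hypothetical examples, not taken from the codebase.

```csharp
[McpServerToolType]
public sealed class CustomerTools
{
    // [ToolAccess] runs ToolAccessHandler against the roster for this tool;
    // methods without the attribute are not gated per-user.
    [McpServerTool, Description("Look up a customer record by id.")]
    [ToolAccess]
    public static string GetCustomer(int customerId)
        => $"customer:{customerId}"; // hypothetical body for illustration
}
```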

6.2 The handler and resource resolution

The handler is the busiest piece of authorization code in the project. It runs on every authenticated MCP request and has to figure out, from whatever object the MCP SDK passes as the policy’s Resource, which tool the user is trying to call. The MCP SDK passes different shapes for different request types, so the handler has to introspect them generically. Listing 8 shows the full file.

Listing 8: ToolAccessHandler

using Microsoft.AspNetCore.Authorization;

namespace ShopsnapMcpServer.Authorization;

public sealed class ToolAccessHandler : AuthorizationHandler<ToolAccessRequirement>
{
    private readonly IEmployeeToolAccessRoster _roster;
    private readonly ILogger<ToolAccessHandler> _log;

    public ToolAccessHandler(IEmployeeToolAccessRoster roster, ILogger<ToolAccessHandler> log)
    {
        _roster = roster;
        _log = log;
    }

    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        ToolAccessRequirement requirement)
    {
        var isAuthenticated = context.User.Identity?.IsAuthenticated ?? false;
        var email = context.User.FindFirst("email")?.Value?.Trim();

        if (!isAuthenticated || string.IsNullOrWhiteSpace(email))
        {
            _log.LogWarning("ToolAccess denied: not authenticated or missing email");
            return Task.CompletedTask;
        }

        var resolved = ResolveResource(context.Resource);

        // List-style requests (tools/list, prompts/list, etc.) — no specific tool name.
        // Gate: user just needs to be on the roster with at least one tool; the per-tool
        // filter pass will handle hiding tools they can't call.
        if (resolved.IsListRequest)
        {
            if (_roster.HasAnyAccess(email))
            {
                _log.LogDebug("ToolAccess granted (list gate): {Email}", email);
                context.Succeed(requirement);
            }
            else
            {
                _log.LogInformation("ToolAccess denied: {Email} is not on the roster", email);
            }
            return Task.CompletedTask;
        }

        if (string.IsNullOrWhiteSpace(resolved.ToolName))
        {
            _log.LogWarning(
                "ToolAccess denied: could not resolve tool name from resource type {Type}",
                context.Resource?.GetType().FullName ?? "<null>");
            return Task.CompletedTask;
        }

        if (_roster.IsAllowed(email, resolved.ToolName))
        {
            _log.LogDebug("ToolAccess granted: {Email} -> {Tool}", email, resolved.ToolName);
            context.Succeed(requirement);
        }
        else
        {
            _log.LogInformation(
                "ToolAccess denied: {Email} is not authorized for {Tool} (per roster)",
                email, resolved.ToolName);
        }

        return Task.CompletedTask;
    }

    // The MCP SDK passes different shapes as the policy's Resource depending on when the
    // policy is evaluated:
    //   - Request-level gate:       RequestContext<ListToolsRequestParams>  (no tool name)
    //                               RequestContext<CallToolRequestParams>   (tool name on Params.Name)
    //   - Per-tool filter pass:     an McpServerTool / IMcpServerPrimitive with a .Name property
    // This resolver covers all three without binding to concrete SDK types.
    private static ResolvedResource ResolveResource(object? resource)
    {
        if (resource is null) return default;
        if (resource is string s) return new ResolvedResource(s, false);

        var t = resource.GetType();

        if (t.IsGenericType && t.GenericTypeArguments.Length > 0)
        {
            var paramsTypeName = t.GenericTypeArguments[0].Name;

            if (paramsTypeName.StartsWith("List", StringComparison.Ordinal) &&
                paramsTypeName.EndsWith("RequestParams", StringComparison.Ordinal))
            {
                return new ResolvedResource(null, IsListRequest: true);
            }

            var paramsProp = t.GetProperty("Params");
            var paramsValue = paramsProp?.GetValue(resource);
            if (paramsValue is not null)
            {
                var nameFromParams = TryGetNameLike(paramsValue);
                if (!string.IsNullOrWhiteSpace(nameFromParams))
                    return new ResolvedResource(nameFromParams, false);
            }

            return default;
        }

        var nameFromPrimitive = TryGetNameLike(resource);
        if (!string.IsNullOrWhiteSpace(nameFromPrimitive))
            return new ResolvedResource(nameFromPrimitive, false);

        return default;
    }

    private static string? TryGetNameLike(object obj)
    {
        var t = obj.GetType();
        foreach (var propName in new[] { "Name", "ToolName", "ProtectedResourceName" })
        {
            var p = t.GetProperty(propName);
            if (p is null) continue;
            var v = p.GetValue(obj)?.ToString();
            if (!string.IsNullOrWhiteSpace(v)) return v;
        }
        return null;
    }

    private readonly record struct ResolvedResource(string? ToolName, bool IsListRequest);
}

Read the handler from the top. Step one is to confirm that authentication actually produced a usable identity — if the user is anonymous or has no email claim, the handler logs and returns without calling Succeed, which the framework treats as a denial. Step two is to figure out what the request is asking to do, which is the ResolveResource method.

ResolveResource handles three shapes the MCP SDK can pass. RequestContext<ListToolsRequestParams> means the user is asking for the catalog of tools — there is no specific tool name to authorize, so the handler just verifies the user is on the roster at all. The per-tool filter pass that runs later will hide individual tools the user cannot call. RequestContext<CallToolRequestParams> means the user is actually invoking a tool — the tool name is on Params.Name. McpServerTool / IMcpServerPrimitive is the per-tool filter pass shape, used by the SDK when it needs to decide whether to advertise a tool in the listing.

Reflection is used here deliberately, in preference to taking a hard reference on the concrete SDK types. The MCP SDK is in active development and these types may move between versions; introspecting by name keeps the handler resilient. The cost is one GetType / GetProperty call per request, which at MCP volumes is not measurable.

6.3 The roster: loading, normalization, lookup

The roster is the source of truth for who is allowed to call what. It is a JSON file deployed alongside the binary, loaded once at startup, and queried on every tool invocation. Listing 9 shows the full implementation.

Listing 9: EmployeeToolAccessRoster

using System.Text.Json;

using System.Text.Json.Serialization;

 

namespace ShopsnapMcpServer.Authorization;

 

public interface IEmployeeToolAccessRoster

{

    bool IsAllowed(string email, string toolName);

    bool HasAnyAccess(string email);

    int UserCount { get; }

}

 

public sealed class EmployeeToolAccessRoster : IEmployeeToolAccessRoster

{

    public const string FileName = "employee-tool-access.json";

 

    // email (lowercased) -> set of NORMALIZED tool names (alphanumeric-only, lowercased).

    // Normalization lets the roster accept either casing or word-separator style

    // ("GetRecordById", "get_record_by_id", "get-record-by-id") and still match

    // whichever form the MCP SDK advertises at runtime (it uses snake_case).

    private readonly Dictionary<string, HashSet<string>> _emailToTools;

 

    public int UserCount => _emailToTools.Count;

 

    public EmployeeToolAccessRoster(IHostEnvironment env, ILogger<EmployeeToolAccessRoster> log)

    {

        var path = Path.Combine(env.ContentRootPath, FileName);

        if (!File.Exists(path))

            throw new FileNotFoundException(

                $"Roster file '{FileName}' not found at '{path}'. It must be deployed alongside the binary.",

                path);

 

        using var stream = File.OpenRead(path);

        var doc = JsonSerializer.Deserialize<RosterDocument>(stream, JsonOpts)

            ?? throw new InvalidOperationException($"Roster file '{path}' is empty or invalid JSON.");

 

        _emailToTools = new Dictionary<string, HashSet<string>>(StringComparer.OrdinalIgnoreCase);

        foreach (var u in doc.Users ?? new List<RosterUser>())

        {

            if (string.IsNullOrWhiteSpace(u.Email)) continue;

            var tools = new HashSet<string>(StringComparer.Ordinal);

            foreach (var t in u.Tools ?? new List<string>())

            {

                var norm = Normalize(t);

                if (norm.Length > 0) tools.Add(norm);

            }

            _emailToTools[u.Email.Trim()] = tools;

        }

 

        log.LogInformation(

            "Loaded tool-access roster: {UserCount} users from {Path}",

            _emailToTools.Count, path);

    }

 

    public bool IsAllowed(string email, string toolName)

    {

        if (string.IsNullOrWhiteSpace(email) || string.IsNullOrWhiteSpace(toolName))

            return false;

        return _emailToTools.TryGetValue(email, out var tools) && tools.Contains(Normalize(toolName));

    }

 

    // Canonicalize a tool name so comparisons ignore casing and word-separator style.

    // "GetRecordById" and "get_record_by_id" both become "getrecordbyid".

    private static string Normalize(string toolName)

    {

        if (string.IsNullOrEmpty(toolName)) return string.Empty;

        var buf = new char[toolName.Length];

        var j = 0;

        foreach (var c in toolName)

        {

            if (char.IsLetterOrDigit(c))

                buf[j++] = char.ToLowerInvariant(c);

        }

        return new string(buf, 0, j);

    }

 

    public bool HasAnyAccess(string email)

    {

        if (string.IsNullOrWhiteSpace(email)) return false;

        return _emailToTools.TryGetValue(email, out var tools) && tools.Count > 0;

    }

 

    private static readonly JsonSerializerOptions JsonOpts = new()

    {

        PropertyNameCaseInsensitive = true,

        ReadCommentHandling = JsonCommentHandling.Skip,

        AllowTrailingCommas = true,

    };

 

    private sealed class RosterDocument

    {

        [JsonPropertyName("users")] public List<RosterUser>? Users { get; set; }

    }

 

    private sealed class RosterUser

    {

        [JsonPropertyName("email")]      public string?        Email      { get; set; }

        [JsonPropertyName("name")]       public string?        Name       { get; set; }

        [JsonPropertyName("employeeId")] public int?           EmployeeId { get; set; }

        [JsonPropertyName("tools")]      public List<string>?  Tools      { get; set; }

    }

}

 

Three things in the roster are worth understanding. First, the constructor throws FileNotFoundException if the roster file is missing. This is intentional and important: a server that cannot load its access-control list should not start. Failing open — "oh, the file is missing, just allow everyone" — is the kind of bug that causes data breaches. Failing closed at startup means the operator finds out about a missing roster before any user ever does.

Second, the Normalize method strips every non-alphanumeric character and lowercases the result. The MCP SDK serializes tool names in snake_case at runtime, while the C# methods are PascalCase, and the roster file might have been hand-edited with either style. Normalization makes "GetRecordById", "get_record_by_id", and "get-record-by-id" all collapse to the same canonical key, so the lookup never fails for a typographic reason.

Third, the lookup is in-memory and case-insensitive on the email key. Email lookups happen on every authorized request — they have to be cheap. The dictionary is built once at startup; runtime cost is one hash plus one set lookup.
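To make the normalization concrete, here is how Listing 9 behaves for Alice's entry from Listing 14 (which lists "GetRecordById", "SearchCustomers", and "GetCustomerPurchases"); `roster` is an EmployeeToolAccessRoster loaded from that file:

```csharp
roster.IsAllowed("alice@shopsnap.com", "get_record_by_id"); // true  — the SDK's snake_case form
roster.IsAllowed("Alice@Shopsnap.com", "GetRecordById");    // true  — email lookup is case-insensitive
roster.IsAllowed("alice@shopsnap.com", "get-record-by-id"); // true  — kebab-case collapses the same way
roster.IsAllowed("alice@shopsnap.com", "DeleteRecord");     // false — not on her tool list
```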

6.4 An example tool method

With the attribute, the policy, the handler, and the roster all in place, opting a tool into the access-control system is one line: stack [ToolAccess] alongside [McpServerTool]. Listing 10 shows what a typical tool method looks like.

Listing 10: A tool method opted into per-user authorization

[McpServerTool]

[ToolAccess]

[Description("Gets a record by its ID. Returns null if not found.")]

public async Task<RecordDto?> GetRecordById(

    [Description("The unique identifier of the record")] int recordId)

{

    return await _db.Records

        .Where(r => r.RecordId == recordId)

        .FirstOrDefaultAsync();

}

 

That is the entire developer-facing API. Add the method, decorate it with [McpServerTool] so the MCP SDK registers it, decorate it with [ToolAccess] so the authorization handler gates it, and add the tool name to whichever roster entries should be allowed to call it. The cognitive overhead per tool is tiny, which matters: anything more burdensome would lead to developers forgetting and accidentally shipping unauthorized tools.

7. Audit Logging and Non-Repudiation

Authentication tells you who is calling. Authorization tells you whether they were allowed to. Auditing tells you, after the fact, that the call actually happened. The audit middleware shown in Listing 11 sits at the end of the request pipeline, runs after authentication and authorization have populated ctx.User, and writes one log line per MCP request with the user’s email and the tool name.

Listing 11: AuditLoggingMiddleware

using System.Text.Json;

 

namespace ShopsnapMcpServer.Middleware;

 

public sealed class AuditLoggingMiddleware

{

    private readonly RequestDelegate _next;

    private readonly ILogger<AuditLoggingMiddleware> _log;

 

    public AuditLoggingMiddleware(RequestDelegate next, ILogger<AuditLoggingMiddleware> log)

    {

        _next = next;

        _log = log;

    }

 

    public async Task InvokeAsync(HttpContext ctx)

    {

        if (!ctx.Request.Path.StartsWithSegments("/api/mcp/v1") ||

            !HttpMethods.IsPost(ctx.Request.Method))

        {

            await _next(ctx);

            return;

        }

 

        var email = ctx.User?.FindFirst("email")?.Value ?? "<anonymous>";

 

        ctx.Request.EnableBuffering();

        string? method = null;

        string? toolName = null;

 

        try

        {

            using var doc = await JsonDocument.ParseAsync(ctx.Request.Body, cancellationToken: ctx.RequestAborted);

            var root = doc.RootElement;

            if (root.ValueKind == JsonValueKind.Object && root.TryGetProperty("method", out var m))

                method = m.GetString();

            if (method == "tools/call" &&

                root.TryGetProperty("params", out var pr) &&

                pr.TryGetProperty("name", out var n))

                toolName = n.GetString();

        }

        catch (JsonException) { /* streamed or non-JSON body; skip */ }

        finally

        {

            ctx.Request.Body.Position = 0;

        }

 

        if (toolName is not null)

            _log.LogInformation("MCP audit: {Email} called tool {Tool}", email, toolName);

        else if (method is not null)

            _log.LogInformation("MCP audit: {Email} invoked {Method}", email, method);

 

        await _next(ctx);

    }

}

 

Three implementation details matter. First, ordering: in Program.cs the middleware is registered after UseAuthentication and UseAuthorization, so by the time the audit middleware runs, ctx.User has been populated by the JWT pipeline. If the order were reversed, every audit line would say <anonymous>.

Second, request-body buffering. The MCP request body is read by the downstream MCP handler to dispatch the JSON-RPC call. If the audit middleware reads the body without calling EnableBuffering and then resetting Body.Position, the downstream handler finds an empty stream and the tool call silently fails. EnableBuffering is the ASP.NET Core idiom for "I want to read this stream more than once."

Third, what this gives you and what it does not. You get a per-request record of the form "alice@shopsnap.com called tool GetRecordById". That is enough to answer most after-the-fact questions: who pulled this report, who tried to access this tool. What this does not capture is the result of the call (success vs error), the latency, or the arguments. Adding any of those is straightforward but each adds risk: arguments may contain PII, and storing the result of a read tool may double the data-breach blast radius. Most teams ship the minimal version first and layer additional telemetry behind feature flags as needs become concrete.

Natural follow-ons to add later, in order of value:

·       Wrap _next(ctx) in a logging scope so downstream logs inherit the user identity

·       Capture the response status code (success vs framework-level failure)

·       Ship the audit stream to a write-once sink (e.g. Cloud Logging, S3 with object lock)

·       Add request and response latency for SLO tracking

8. Defense in Depth


Authentication and authorization are necessary but not sufficient. A working corporate MCP server also needs the boring hygiene that turns a demo into a deployable system. This section covers HTTPS, secret management, SQL safety, transport choice, and the operational shape of the roster file.

8.1 HTTPS everywhere

Bearer tokens are sent in plaintext on the wire. Anything less than TLS makes them trivially stealable on any shared network, and "shared network" includes coffee shop wifi, hotel networks, and any infrastructure between the user and your server. The two-line excerpt in Listing 12 turns on HTTPS redirection and forwarded-headers support.

Listing 12: HTTPS and forwarded headers in Program.cs

app.UseForwardedHeaders();

app.UseHttpsRedirection();

 

UseForwardedHeaders is the piece teams forget. When the MCP server runs behind a reverse proxy or load balancer (as it should in production), the proxy terminates TLS and then forwards an HTTP request to your server with X-Forwarded-Proto: https as a header. UseForwardedHeaders teaches ASP.NET Core to honor that header, so RequireHttpsMetadata-style checks and the issuer URLs in token validation see the correct https:// scheme. Without it, the OAuth machinery may silently switch to http:// comparisons and fail intermittently.
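One detail the two lines in Listing 12 hide: ForwardedHeadersOptions.ForwardedHeaders defaults to None, so UseForwardedHeaders() processes nothing until you tell it which headers to honor. A sketch of the opt-in, registered before builder.Build(); the proxy IP below is a placeholder:

```csharp
using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

// Opt in to the specific forwarded headers, and trust only your own proxy —
// clearing KnownNetworks/KnownProxies would mean trusting any upstream host.
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedFor;
    options.KnownProxies.Add(IPAddress.Parse("10.0.0.5")); // placeholder proxy address
});
```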

8.2 Secret management

Secrets do not belong in source control. The repository uses a UserSecretsId in the csproj for development — the dotnet user-secrets tool stores them outside the project tree, in the user’s profile. In production, the same configuration keys should be served from a secrets manager (Azure Key Vault, AWS Secrets Manager, GCP Secret Manager) or from environment variables provisioned by your orchestrator.
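For local development, the user-secrets flow looks like this. Key paths mirror the appsettings.json structure, and every value shown is a placeholder:

```shell
# Run from the project directory; requires a UserSecretsId in the csproj (init adds one).
dotnet user-secrets init
dotnet user-secrets set "ConnectionStrings:DefaultConnection" "Data Source=localhost;Initial Catalog=ShopsnapDb;User ID=dev;Password=<LOCAL_ONLY>"
dotnet user-secrets set "GoogleAuth:ClientId" "<YOUR_CLIENT_ID>.apps.googleusercontent.com"
dotnet user-secrets list   # verify locally; these values never reach a commit
```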

The redacted appsettings.json shown in Listing 13 is what should be committed to the repository: real secret values are placeholders, and the actual secrets are sourced from the environment at runtime. Anything containing a real password or client secret should be in .gitignore and never reach a commit.

Listing 13: Redacted appsettings.json

{

  "ConnectionStrings": {

    "DefaultConnection": "Data Source=<DB_HOST>;Initial Catalog=ShopsnapDb;Persist Security Info=True;User ID=<DB_USER>;Password=<DB_PASSWORD>;Connect Timeout=240;TrustServerCertificate=True"

  },

  "GoogleAuth": {

    "ClientId": "<YOUR_GOOGLE_CLIENT_ID>.apps.googleusercontent.com",

    "Authority": "https://accounts.google.com",

    "Resource": "https://mcp.shopsnap.com/api/mcp/v1",

    "HostedDomain": "shopsnap.com"

  }

}

 

8.3 SQL injection: parameterized queries by default

Every tool method in this codebase queries the database through Entity Framework Core. EF parameterizes every LINQ query automatically: a Where(r => r.RecordId == recordId) call generates a parameterized SQL statement with @p0 placeholders, never string concatenation. As long as you stay in LINQ, SQL injection is structurally impossible. The footgun to avoid is FromSqlRaw with string interpolation — prefer FromSqlInterpolated, which preserves parameterization, or stay in LINQ entirely.
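To make the contrast concrete — a sketch with a hypothetical Records table and Name column; only the second form is safe:

```csharp
// ANTI-PATTERN: FromSqlRaw with an interpolated string bakes user input into the SQL text.
var unsafeQuery = db.Records
    .FromSqlRaw($"SELECT * FROM Records WHERE Name = '{userInput}'"); // injectable

// SAFE: FromSqlInterpolated converts each {userInput} into a DbParameter,
// so the same C# syntax produces a parameterized statement.
var safeQuery = db.Records
    .FromSqlInterpolated($"SELECT * FROM Records WHERE Name = {userInput}");
```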

8.4 Transport choice: streamable HTTP versus stdio

The MCP SDK supports multiple transports. stdio is the easiest to wire up for local development but offers no per-request authorization hook — the entire ASP.NET Core authentication and authorization pipeline is bypassed because there is no HTTP request. WithHttpTransport() is what makes everything in Sections 5, 6, and 7 work. If you need security beyond "the OS process boundary," HTTP is the transport that gives you the hooks to enforce it.
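For comparison, the stdio registration looks roughly like this (method name per current SDK previews; verify against your installed version). Note that none of the HTTP middleware from this article runs on this path:

```csharp
// stdio transport: JSON-RPC over the child process's stdin/stdout. There is no HTTP
// request, so UseAuthentication, UseAuthorization, and the audit middleware never fire.
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithTools<RecordTools>();
```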

8.5 The roster as data

Listing 14 shows what a real-world deployable roster looks like. The file lives alongside the binary and is reloaded the next time the process restarts.

Listing 14: Sample employee-tool-access.json

{

  "_comment": "Maps active shopsnap.com Google Workspace users to the MCP tools they may call.",

  "users": [

    {

      "email": "alice@shopsnap.com",

      "name": "Alice Example",

      "employeeId": 1001,

      "tools": [ "GetRecordById", "SearchCustomers", "GetCustomerPurchases" ]

    },

    {

      "email": "bob@shopsnap.com",

      "name": "Bob Example",

      "employeeId": 1002,

      "tools": [ "GetRecordById", "AddNote" ]

    },

    {

      "email": "carol@shopsnap.com",

      "name": "Carol Example",

      "employeeId": 1003,

      "tools": [ "GetRecordById" ]

    }

  ]

}

 

Storing the ACL as a separate file (rather than baking it into code) means a roster change does not require a rebuild. In the project file the roster is marked as CopyToPublishDirectory=PreserveNewest, so it ships with the binary on every publish but can be updated independently in production. For larger deployments the next step is to put the roster in a database table or a directory service like Google Workspace groups, but a JSON file works well for the first few hundred users and is easy to audit by reading.
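The csproj wiring referenced above looks roughly like this — a sketch; adjust the item group to your project layout:

```xml
<ItemGroup>
  <!-- Ship the roster with every publish, but keep it a plain file that ops can edit. -->
  <None Update="employee-tool-access.json"
        CopyToOutputDirectory="PreserveNewest"
        CopyToPublishDirectory="PreserveNewest" />
</ItemGroup>
```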

Note: TrustServerCertificate=True in the connection string is acceptable for an internal SQL Server with a private CA, but it disables certificate validation. In production, prefer a properly trusted certificate so the connection string can drop that flag and gain real transport-level integrity guarantees against the database.

9. Mapping the Pillars Back to the Code

The synthesis. Each row of Table 1 takes one of the pillars from Section 3 and names the file or method that implements it in this codebase. This is the table to keep open in another tab while reading the source.

Table 1: Pillars to implementation mapping

Pillar | Implementation | File / location
------ | -------------- | ---------------
Confidentiality | TLS termination + HTTPS redirect; secrets in user-secrets / vault, not appsettings | Program.cs UseHttpsRedirection; csproj UserSecretsId
Integrity | JWT signature validation via JWKS; tokeninfo aud re-check; EF parameterized queries | GoogleAuthExtensions.cs TokenValidationParameters and OnMessageReceived; Tools/*.cs LINQ
Availability | Fail-fast on missing roster; reused HttpClient to avoid socket exhaustion; configurable command timeout | EmployeeToolAccessRoster.cs ctor; static _tokenInfoHttp; Program.cs CommandTimeout(1800)
Authentication | Google OAuth 2.0; dual JWT + opaque token paths; hosted-domain + email_verified enforcement | GoogleAuthExtensions.AddGoogleMcpAuthentication, ValidateGoogleClaims
Authorization | [ToolAccess] attribute + ToolAccessRequirement policy + roster-backed handler with normalization | ToolAccessAttribute.cs, ToolAccessHandler.cs, EmployeeToolAccessRoster.cs
Auditing / Non-repudiation | Middleware logs email → tool after authentication runs | AuditLoggingMiddleware.cs

 

No single pillar is sufficient. An attacker who steals a token defeats Authentication but is still bounded by the roster (Authorization) and leaves a trail (Auditing). An insider who is on the roster bypasses Authorization for the tools they own but cannot reach the ones they do not, and every action they take is timestamped against their email. Defense in depth is what makes the failure of any one layer survivable.

10. Deployment Checklist and Hardening

Before opening the firewall, walk Table 2 top to bottom. Each item is something this article has covered, packaged for the operator who has to actually push the deploy button.

Table 2: Pre-deployment checklist

Item | Why
---- | ---
Consent screen set to "Internal" | First-line domain restriction at the IdP; rejects non-Workspace accounts before they reach your server
appsettings.json contains no real secrets | Avoid leakage via source control; secrets in vault / env vars only
Connection string sourced from Key Vault / env vars | Defense in depth for database credentials
HostedDomain matches your Workspace primary domain | Required for ValidateGoogleClaims to enforce the right domain
Roster file deployed alongside the binary | Server fails to start without it (intentional fail-closed)
HTTPS terminated at proxy with valid certificate | Confidentiality of bearer tokens in transit
Reverse proxy forwards X-Forwarded-Proto | UseForwardedHeaders needs it for issuer comparisons and redirect URIs
Audit logs shipped to a write-once sink | Tamper-evident non-repudiation — logs an attacker cannot rewrite
Every tool method has [ToolAccess] or its absence is intentional | A method without [ToolAccess] bypasses the roster entirely
Minimum scopes (openid email profile) only | Principle of least privilege; smaller blast radius if a token is stolen

 

10.1 The full pipeline, end to end

Listing 15 is the complete Program.cs that ties everything in this article together. Every line corresponds to a section above: configuration loading, the database context, the Google authentication extension, the authorization policy, the MCP server registration, the middleware order, and the protected endpoint. Forty-eight lines of composition root for an entire corporate-grade MCP server.

Listing 15: Program.cs — the full pipeline

using Microsoft.AspNetCore.Authorization;

using Microsoft.EntityFrameworkCore;

using ShopsnapMcpServer;

using ShopsnapMcpServer.Authentication;

using ShopsnapMcpServer.Authorization;

using ShopsnapMcpServer.Middleware;

using ShopsnapMcpServer.Models;

 

var builder = WebApplication.CreateBuilder(args);

 

builder.Services.Configure<ConnectionStringsOptions>(

    builder.Configuration.GetSection(ConnectionStringsOptions.SectionName));

 

builder.Services.AddDbContext<ShopsnapDbContext>(options =>

    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection"),

        sqlOptions => sqlOptions

            .CommandTimeout(1800)

            .UseCompatibilityLevel(120)));

 

builder.Services.AddGoogleMcpAuthentication(builder.Configuration);

builder.Services.AddAuthorization(options =>

{

    options.AddPolicy(ToolAccessAttribute.PolicyName, policy =>

        policy.AddRequirements(new ToolAccessRequirement()));

});

builder.Services.AddSingleton<IEmployeeToolAccessRoster, EmployeeToolAccessRoster>();

builder.Services.AddSingleton<IAuthorizationHandler, ToolAccessHandler>();

 

// Add the MCP services: the transport to use (http) and the tools to register.

builder.Services

    .AddMcpServer()

    .WithHttpTransport()

    .WithTools<RecordTools>()

    .WithTools<CustomerTools>()

    .WithTools<AdminTools>()

    .AddAuthorizationFilters();

 

var app = builder.Build();

 

app.UseForwardedHeaders();

app.UseHttpsRedirection();

app.UseAuthentication();

app.UseAuthorization();

app.UseMiddleware<AuditLoggingMiddleware>();

 

app.MapMcp("/api/mcp/v1").RequireAuthorization();

 

await app.RunAsync();
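Once this is deployed, a useful first smoke test is to confirm that the gate is closed: an unauthenticated request to the protected endpoint should be rejected with a 401 before the MCP handler ever runs (hypothetical host; the Accept header is what streamable-HTTP MCP servers typically expect):

```shell
curl -i https://mcp.shopsnap.com/api/mcp/v1 \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
# Expect a 401 Unauthorized: RequireAuthorization challenges before dispatch.
```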

 

10.2 Where to go next

The architecture in this article is a solid foundation, not a finished destination. The natural next steps depend on scale. For a few hundred users one JSON roster is fine; past that, replacing the file with a Google Workspace group lookup or a database table is the obvious move. Per-tool authorization can be tightened to per-argument authorization — a tool that returns customer records can scope its results to the caller’s region or department by reading additional claims from the principal.
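A sketch of that per-argument tightening, under stated assumptions: a hypothetical "region" claim issued by the IdP, an injected IHttpContextAccessor as _http, and a Customers table with a Region column (none of which are in the codebase above):

```csharp
[McpServerTool]
[ToolAccess]
[Description("Searches customers within the caller's own region.")]
public async Task<List<CustomerDto>> SearchCustomers(
    [Description("Name fragment to search for")] string query)
{
    // Read the caller's region off the authenticated principal; fail closed if absent.
    var region = _http.HttpContext?.User.FindFirst("region")?.Value
        ?? throw new InvalidOperationException("Caller has no region claim.");

    return await _db.Customers
        .Where(c => c.Region == region && c.Name.Contains(query)) // scope to caller's region
        .ToListAsync();
}
```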

Fork the repository, point it at your own Google Workspace, ship a read-only roster first, and watch the audit log for a week before you add any write tools. The patterns above are the load-bearing pieces; everything else is product. Per-tenant rosters, fine-grained per-argument authorization, and shipping audit events to a SIEM are the natural follow-ons once the basics are running. Good luck, and ship safely.

 

