Playing with ChatGPT RemoteMCP without OAuth

10 Jan 2026 - tsp
Last update 10 Jan 2026
Reading time 11 mins

Over the last few months, OpenAI's ChatGPT web interface gained support for remote MCP servers, making it possible to expose custom tools directly to ChatGPT without running your own client and orchestration layer. This finally allows personal services, databases, and internal tooling to be queried interactively from the ChatGPT web user interface itself.

However, there is a catch that raises the bar for experimentation: while the MCP specification supports multiple authentication mechanisms, the ChatGPT web interface currently expects OAuth for any non-public connector. For production systems this is perfectly reasonable, but for experimentation, prototyping or private single-user services, OAuth is a surprisingly heavy requirement. In addition, simple OAuth is not sufficient - you also need to support dynamic client registration and token exchange. This introduces a lot of boilerplate when all you really want is a secret string that proves "this is me" to access your own personal data.

To make matters more restrictive, the ChatGPT web interface does not currently allow configuring a simple bearer token, even though this would be fully compliant with the MCP protocol. As a result, there is no officially supported way to use static API keys when playing with remote MCP servers through the browser.

This short tutorial shows a pragmatic workaround. We use a static shared secret that is passed via the query string, combined with validation middleware that extends the capabilities of our FastMCP backend. This approach is not suitable for production, leaks secrets into logs and violates several best-practice security rules - but it works well enough to explore Remote MCP, understand the tooling model and prototype private connectors without having to set up a full OAuth stack. Also keep in mind that putting a token into the query string potentially breaks the idempotent semantics of HTTP. This may become a problem in case you utilize transparent caches in your stack.

Along the way, we will also look at how to run such a connector behind an Apache reverse proxy with proper TLS termination, which is often the simplest way to host multiple experimental MCP services on a single machine and to outsource TLS/SSL termination from the application itself.

The Idea

Since we cannot pass the token in the Authorization header due to the limitations described above, we simply pass it in the query string part of the URL. This is basically a hidden, secret URI - usually not a sane solution for authentication: URIs are logged in various locations, passed around and shown in clear text during normal web operation, and they end up in browser histories where they are accessible to various webpages. There are many reasons not to hide secrets in URIs or rely on hidden URIs. But for us this provides a very simple way to perform basic password authentication for a single remote service. To handle this on the server side - which we realize using FastAPI and FastMCP - we utilize custom Starlette middleware that validates either the bearer token (if present) or the secret query parameter.

The Connector

Let's start off with the connector. The following code snippet is a very simple example of how to build an MCP server that performs authentication with a bearer token or an ?api_key query parameter. Note that in the example the API key is hard-coded as a static string.

The simple example:

#!/usr/local/bin/python3

from contextlib import asynccontextmanager, AsyncExitStack
from fastapi import FastAPI, Request, HTTPException
from starlette.middleware.base import BaseHTTPMiddleware
from fastmcp import FastMCP

# CHANGE THIS - your static secret (make it long & random - and in
# best case either get it from the environment or from a configuration
# file)

STATIC_API_KEY = "YOUR_VERY_SECRET_KEY"

# Custom middleware to support BOTH header AND query param. You can
# imagine the middleware like a filter that resides between all HTTP requests
# that reach our service

from starlette.responses import JSONResponse

class ApiKeyAuthMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        token = None

        # Check the standard Bearer header first

        auth_header = request.headers.get("Authorization")
        if auth_header and auth_header.startswith("Bearer "):
            token = auth_header.split(" ", 1)[1]

        # Fallback: query param for the ChatGPT web UI workaround

        if not token:
            token = request.query_params.get("api_key")

        # If the API key is missing or wrong we fail with error 401. Note
        # that we return the response directly instead of raising an
        # HTTPException: exceptions raised inside BaseHTTPMiddleware bypass
        # FastAPI's exception handlers and would surface as a 500 error.
        if not token or token != STATIC_API_KEY:
            return JSONResponse(
                status_code=401,
                content={"detail": "Invalid or missing API key. Use ?api_key=... or Authorization: Bearer ..."}
            )

        # Bubble the request through the filter chain

        response = await call_next(request)
        return response

# Create the FastMCP application

mcp = FastMCP("Static token authentication demo")

# Define tools (must happen before http_app() is called)

@mcp.tool(annotations={
    "title": "Say hello to someone",
    "readOnlyHint": True,
    "destructiveHint": False,
})
def hello(name: str = "world") -> str:
    """Say hello to someone."""
    return f"Hello {name}! 👋 (Authenticated via API key). A little secret: vampires are actually sweet and friendly vegetarians, you can squeeze them while they are enjoying hanging in the sun."

# Create the HTTP application wrapper around the MCP server 
mcp_http_app = mcp.http_app()  # transport="http" by default

# Manually enter the FastMCP lifespan via our async context manager.
# This will create and start the task group and session manager

@asynccontextmanager
async def combined_lifespan(app: FastAPI):
    async with AsyncExitStack() as stack:
        await stack.enter_async_context(mcp_http_app.lifespan(app))
        yield

# Now we create the FastAPI application and pass our lifespan management.
app = FastAPI(lifespan=combined_lifespan)

# Register our middleware that handles _all_ requests. This will handle
# the authentication tokens
app.add_middleware(ApiKeyAuthMiddleware)

# Mount the MCP service in the global scope. This will register /mcp
# Since we use a reverse proxy all other parts of the URI are handled
# by the reverse proxy
app.mount("/", mcp_http_app)



# Main startup logic. We launch uvicorn and create the unix domain socket
# If you want to use a TCP socket you supply for example
#
#    host="0.0.0.0", port=1234
#
# instead of uds="..."

if __name__ == "__main__":
    import uvicorn
    print("Server ready!")
    uvicorn.run(app, uds="/tmp/mcptest.sock")
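
The comment above suggests loading the secret from the environment rather than hard-coding it. A minimal sketch of how this could look - the MCP_API_KEY variable name and the helper functions are illustrative choices of mine, not part of FastMCP - additionally using secrets.compare_digest so the comparison runs in constant time:

```python
import os
import secrets

def load_api_key() -> str:
    """Fetch the shared secret from the environment so it never has to
    live in the source tree. Fails loudly if the variable is missing."""
    key = os.environ.get("MCP_API_KEY")
    if not key:
        raise RuntimeError("MCP_API_KEY is not set")
    return key

def token_matches(token: str, key: str) -> bool:
    """Compare the presented token in constant time to avoid leaking
    information about the key through timing differences."""
    return secrets.compare_digest(token, key)
```

In the connector above you would then call load_api_key() once at startup instead of defining STATIC_API_KEY, and use token_matches() inside the middleware.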

Apache httpd Reverse Proxy Configuration

We will forward all incoming requests from our public facing Apache httpd. This is of course an arbitrary choice; you can utilize any webserver or even expose the connector via its own HTTPS interface. Keep in mind that you need SSL/TLS! An example configuration that exposes our service under /mcptest/mcp - i.e. with an additional prefix so we can run multiple connectors on the same domain and still use for example acme.sh to create TLS certificates using the web root technique - would be:

        ProxyTimeout    1800
        ProxyPass       /mcptest/       "unix:/tmp/mcptest.sock|http://127.0.0.1/" connectiontimeout=300 timeout=1800 nocanon retry=0
        ProxyPassReverse /mcptest/      "unix:/tmp/mcptest.sock|http://127.0.0.1/" 
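
For completeness: the ProxyPass directives above live inside an SSL-enabled virtual host. A minimal sketch - the hostname and certificate paths are placeholders, and mod_ssl, mod_proxy and mod_proxy_http are assumed to be loaded - might look like:

```apache
<VirtualHost *:443>
    ServerName example.org

    SSLEngine on
    SSLCertificateFile      /path/to/fullchain.pem
    SSLCertificateKeyFile   /path/to/privkey.pem

    ProxyTimeout     1800
    ProxyPass        /mcptest/ "unix:/tmp/mcptest.sock|http://127.0.0.1/" connectiontimeout=300 timeout=1800 nocanon retry=0
    ProxyPassReverse /mcptest/ "unix:/tmp/mcptest.sock|http://127.0.0.1/"
</VirtualHost>
```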

A detailed discussion of why to utilize Unix domain sockets instead of TCP and reasons why to run your service behind a reverse proxy is provided in a previous blog post.

Launching the Application

Next we launch the application:

$ ./connector.py
Server ready!
INFO:     Started server process [85712]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on unix socket /tmp/mcptest.sock (Press CTRL+C to quit)
INFO:      - "POST /mcp?api_key=XXXXXXX HTTP/1.1" 200 OK
INFO:      - "POST /mcp?api_key=XXXXXXX HTTP/1.1" 200 OK
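
Before wiring the connector into ChatGPT you can run a quick smoke test against the reverse proxy. The hostname and key below are placeholders; the streamable HTTP transport expects a JSON-RPC POST with an Accept header that covers both JSON and SSE responses:

```shell
curl -s -X POST "https://example.org/mcptest/mcp?api_key=YOUR_VERY_SECRET_KEY" \
    -H "Content-Type: application/json" \
    -H "Accept: application/json, text/event-stream" \
    -d '{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "curl-test", "version": "0.0.1"}}}'
```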

As you can see, when you run the application the API key is leaked into the log. Keep this in mind.
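
If you want to keep the key out of the access log, one option is to attach a logging filter to uvicorn's access logger before starting the server. The sketch below is my own addition and assumes uvicorn's default access-log format, where the request line is passed as one of the log record's arguments:

```python
import logging
import re

class RedactApiKeyFilter(logging.Filter):
    """Rewrite api_key values in log records before they are emitted."""

    _pattern = re.compile(r'api_key=[^&\s"]+')

    def filter(self, record: logging.LogRecord) -> bool:
        # uvicorn passes the request line via %-style args, so we redact
        # every string argument of the record before it is formatted
        if record.args:
            record.args = tuple(
                self._pattern.sub("api_key=REDACTED", a) if isinstance(a, str) else a
                for a in record.args
            )
        return True

# Attach before uvicorn.run() is called
logging.getLogger("uvicorn.access").addFilter(RedactApiKeyFilter())
```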

Utilizing the Remote MCP server using ChatGPT

Now - how can you use this server with ChatGPT? First the negative side upfront: you cannot use it in conjunction with ChatGPT's memory features since you have to enter developer mode. The steps to configure the remote MCP are simple:

Enable Developer Mode

Unfortunately you will lose access to your local memories and the ability of ChatGPT to access all of your chats while operating in this mode; there is no way to bypass this. This is not a limitation of the MCP interface but OpenAI's policy when utilizing applications that they have not verified.

Enabling developer mode in ChatGPT

Create a new Application

After enabling developer mode you can create a new application.

Create a new application on ChatGPT

As soon as you select Create, the platform will contact your remote MCP server. This only succeeds if the connection is secured via SSL/TLS and if the request to list all available tools succeeds. These tools will be shown in the application dialog including their supported metadata:

An application on ChatGPT

Querying the Connector

Now you can use your connector as long as you run in developer mode. To utilize the MCP server you have to add it to the enabled tools in the dialog, just like you would enable deep research, agentic mode, files and images or search capabilities.

Enabling the test connector on ChatGPT

ChatGPT will query your MCP server and return an appropriate response:

Response from the remote MCP server

Conclusion

This example demonstrated how a minimal remote MCP server can be exposed to ChatGPT using a static shared secret, bypassing OAuth in order to experiment with custom tools from the web interface. By validating either a bearer token or a query-string parameter in middleware, it becomes possible to prototype MCP services quickly without building a full authentication stack.

It is important to be explicit about the limitations of this approach. Passing secrets in URLs is unsafe, leaks credentials into logs and if accessed via a browser also into their histories, and should never be used for multi-user systems or services that handle sensitive data. OAuth - or at least proper header-based token authentication - is the correct solution for anything beyond personal experimentation.

That said, for one-user setups, local services, or early-stage exploration of MCP capabilities, this technique lowers the barrier significantly. It allows you to focus on tool design, data exposure, and orchestration semantics before committing to production-grade authentication.

I personally hope that, as remote MCP continues to evolve, the ChatGPT web interface will eventually support simpler authentication modes explicitly - and allow us to also interact with memories and other tools available on the web interface. Until then, this workaround provides a practical way to explore the ecosystem, understand MCP's strengths and constraints, and decide where a full OAuth deployment actually makes sense.

Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
