At Stellate, our mission is to bring superpowers to large-scale GraphQL APIs. Our customers love our GraphQL Edge Caching and Rate Limiting products and the power they give them. However, with great power comes great responsibility: we understand that changing the configuration of your Stellate service can be scary, because it affects your production traffic!
That’s why we want our users to be able to test config changes locally, before deploying them to production, with the stellate serve command. However, our CDN can’t access your local GraphQL server running on localhost:1234, and we can’t run the CDN locally on your machine either… what do we do?
Possible Solution #1: Reverse Proxy
Our first iteration of stellate serve used a reverse proxy to expose the locally running server to the internet, which worked surprisingly well!
We installed an npm package on-demand when a user tried stellate serve for the first time. Then we would run the reverse proxy against the locally running server, and once we had the proxied URL, we would push up a Stellate config for a new service that runs against it.
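For illustration, here is a minimal sketch of that first approach. The localtunnel package and the pushStellateConfig helper are stand-ins for this post, not the exact code we shipped:

// Sketch of the reverse-proxy approach. `localtunnel` stands in for
// whichever tunneling package the CLI installs on demand, and
// `pushStellateConfig` is a hypothetical helper that creates the
// temporary Stellate service pointing at the proxied URL.
import localtunnel from 'localtunnel'

declare function pushStellateConfig(config: {
  name: string
  originUrl: string
}): Promise<void>

async function serveViaReverseProxy(localPort: number, serviceName: string) {
  // Expose the locally running GraphQL server to the internet
  const tunnel = await localtunnel({ port: localPort })

  // Point a fresh Stellate service at the public tunnel URL
  await pushStellateConfig({
    name: `${serviceName}-dev`,
    originUrl: tunnel.url,
  })

  console.log(`Caching requests to ${tunnel.url} via Stellate`)
}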
Unfortunately, it turned out this doesn’t work for many people: a surprisingly large number of companies block these kinds of reverse proxies on their corporate networks.
Possible Solution #2: Tunneling over WebSocket
We went back to the drawing board and tried to think about how we could reach a locally running server without exposing it to the internet, and… we landed on WebSockets. What if the CLI spun up an HTTP proxy that connects to the CDN over a WebSocket and then sends requests and responses back and forth?
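Roughly, the CLI side of that what-if could look like the sketch below. The wss:// URL and the id-tagged JSON messages are made up for illustration, not our actual wire protocol:

// Sketch of the CLI-side proxy: hold a WebSocket open to the CDN,
// replay whatever comes over it against the local GraphQL server,
// and send the responses back.
import WebSocket from 'ws'

const LOCAL_URL = 'http://localhost:1234/graphql'

// Hypothetical tunnel endpoint, namespaced to the service
const socket = new WebSocket('wss://tunnel.example.com/websocket/my-service')

socket.on('message', async (raw) => {
  const { id, headers, body } = JSON.parse(raw.toString())

  // Forward the tunneled request to the locally running server
  const response = await fetch(LOCAL_URL, { method: 'POST', headers, body })

  // Send the response back over the same WebSocket, tagged with the
  // request id so the other end can match it to the right request
  socket.send(
    JSON.stringify({
      id,
      status: response.status,
      body: await response.text(),
    })
  )
})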
To make this work, we needed to find a way to dynamically construct WebSocket channels that are isolated to one service and are easy to connect to.
With Cloudflare’s Durable Objects, this quickly went from a “what-if” to “it works!”
We created a Cloudflare Worker that handles two paths, /send and /websocket, both of which are forwarded to a Durable Object that is namespaced to the GraphQL API. When we see a WebSocket upgrade request coming in, we establish a connection from the proxy running on the CLI to the Durable Object.
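Accepting that upgrade inside the Durable Object uses the Workers WebSocketPair API. A simplified sketch, with auth and error handling left out (this becomes the onOpenTunnel case in the overview snippet further down):

// Sketch: terminating the WebSocket upgrade inside the Durable Object.
async function onOpenTunnel(request: Request): Promise<Response> {
  if (request.headers.get('Upgrade') !== 'websocket') {
    return new Response('Expected a WebSocket upgrade', { status: 426 })
  }

  // One end of the pair goes back to the CLI proxy, the other stays
  // inside the Durable Object so the /send path can write to it later
  const pair = new WebSocketPair()
  const [client, server] = Object.values(pair)
  server.accept()

  // ...store `server` on the Durable Object for the /send path...

  return new Response(null, { status: 101, webSocket: client })
}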
Now that we have a connection, we can create a separate Stellate service that uses this Durable Object as its originUrl, but with the /send path instead. When a request comes in on the CDN, it gets forwarded to /send, the Durable Object sends it over the WebSocket we established earlier, and it arrives in the proxy running in our CLI (that relay is sketched after the overview snippet below).
Here’s a code snippet for a high-level overview:
// Our worker
export default {
  fetch(request, env) {
    const url = new URL(request.url)
    const path = url.pathname.split('/').slice(1)
    const command = path[0] || 'websocket'
    const serviceName = path[1]?.toLowerCase() || null

    if (command === 'websocket') {
      // verify whether this user is allowed to connect to the WS
    }

    // Get the durable object for this service and dispatch
    // the network call
    const tunnel = getTunnelObject(env, serviceName)
    return tunnel.fetch(request)
  }
}

// Our durable object
export class Tunnel {
  async fetch(request: Request) {
    switch (new URL(request.url).pathname.split('/')[1]) {
      case 'send':
        // Send the network request over websocket
        // to the localhost URL
        return this.onBroadcast(request)
      default:
        // Open a websocket connection to
        // the localhost URL
        return this.onOpenTunnel(request)
    }
  }
}
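The onBroadcast and onOpenTunnel methods are only stubbed out above. To give a feel for the /send path, here is a rough sketch of how the Durable Object could relay a request over the stored WebSocket and match it back up with its response; the pending map and the id-tagged messages are assumptions for illustration:

// Sketch: relaying a /send request over the tunnel and waiting for
// the CLI proxy to answer. The wire format is illustrative only.
class Tunnel {
  // WebSocket to the CLI proxy, stored when onOpenTunnel accepts the upgrade
  socket: WebSocket | null = null
  // In-flight requests waiting for a response from the CLI
  pending = new Map<string, (response: Response) => void>()

  async onBroadcast(request: Request): Promise<Response> {
    if (!this.socket) {
      return new Response('Tunnel is not connected', { status: 502 })
    }

    const id = crypto.randomUUID()

    // Ship the request to the CLI proxy over the open WebSocket
    this.socket.send(
      JSON.stringify({
        id,
        headers: Object.fromEntries(request.headers),
        body: await request.text(),
      })
    )

    // Resolve once handleMessage sees a reply with the same id
    return new Promise((resolve) => {
      this.pending.set(id, resolve)
    })
  }

  // Called from the WebSocket's message handler
  handleMessage(raw: string) {
    const { id, status, body } = JSON.parse(raw)
    const resolve = this.pending.get(id)
    if (resolve) {
      this.pending.delete(id)
      resolve(new Response(body, { status }))
    }
  }
}

This only shows the happy path; a real version would also need timeouts, reconnects, and cleanup when the socket closes.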
We invite you to try this out yourself using the Stellate CLI:
stellate serve --backend-port 4000
Replace the port number to match the port of your local GraphQL server. The CLI supports several more options, like the path (in case your server runs on /graphql), the port we should serve on, and more. You can read the full docs here.
Conclusion
I’ve always been a big fan of the technology behind Durable Objects, and this seemed like a great time to test it out on a real problem we were facing. It has been working great for us, and I encourage folks to explore this space; it’s a great way to add a stateful side to your stateless Workers!