GraphQL Edge Cache Quickstart
If you haven't done so yet, first create a Stellate service.
Once the service is ready, let's add some caching!
Before diving into the specifics, first have a look at our integrations. If you're using one of the following frameworks, you can use Stellate with zero config!
Integrations
Strapi Plugin (Alpha)
Get edge caching with automated cache invalidation for your Strapi app
Saleor Cache Configuration
Get edge caching with automated cache invalidation for your Saleor site
Contentful Cache Configuration
Get edge caching with automated cache invalidation for your Contentful site
Using Stellate's edge cache with a custom GraphQL API
If you have a custom GraphQL API that is not covered by the integrations above, Stellate provides a powerful caching system that allows you to cache (nearly) everything. At this point we expect that you have already set up your service (without caching) and are running it in production. If that's not yet the case, first check out our quickstart.
1. Configure Scopes to cache authenticated data
If you don't have authenticated data, you can skip this step and go straight to Step 2.
The first step is to set up Scopes, which make sure that you do not inadvertently share cached information with somebody who doesn't have access to it.
```typescript
import { Config } from 'stellate'

const config: Config = {
  config: {
    scopes: {
      AUTHENTICATED: {
        definition: 'header:authorization',
      },
    },
  },
}

export default config
```
Scopes are explained in detail in the introduction to Scopes, so we recommend you read that documentation article before continuing.
2. Configure types you never want to cache
With the required scopes configured, we can now look into cache rules.
Since we are taking a conservative approach in this guide, we also recommend thinking about any types (or fields) that you definitely do not want to cache at all. This could include information that changes rapidly or that you need to be accurate at all times.
Add those types to the `nonCacheable` config property so that no response that includes any of those types is ever cached. You can also target specific fields using the syntax `<type-name>.<field-name>` if you need that additional specificity.
With our sample SpaceX API we never want to cache information about the `Roadster` floating around in our solar system, so we've disabled caching for it.
```typescript
import { Config } from 'stellate'

const config: Config = {
  config: {
    originUrl: 'https://api.spacex.land/graphql',
    nonCacheable: ['Roadster'],
  },
}

export default config
```
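The field-level syntax mentioned above works the same way. As an illustration (the `Roadster.details` field here is an assumption, not part of the example above), a config that excludes only a single field could look like this:

```typescript
import { Config } from 'stellate'

const config: Config = {
  config: {
    originUrl: 'https://api.spacex.land/graphql',
    // Only responses containing Roadster.details are excluded from the
    // cache; other Roadster fields remain cacheable.
    nonCacheable: ['Roadster.details'],
  },
}

export default config
```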
You can either save the config under "Config" in your service dashboard or use the CLI with `stellate push`.
3. Cache your first query
With scopes configured and cache rules set up for types (or fields) you don't want to cache, we can now (finally) work on caching data. This will be an ongoing cycle of:
- Identifying a query (or type) to cache
- Implementing the required invalidations in your backend
- Configuring the corresponding cache rules to enable caching for that query
Identify the right query
The queries that are important for you to cache depend heavily on your specific use case. If there are any that have particularly slow response times or cause a lot of load on your server, those usually make good initial targets.
On top of that, data that is public, read-heavy, and/or doesn’t change frequently will have a higher cache hit rate and it could thus make sense to prioritize those queries.
Implement invalidation
Once you have identified a query (or type, or field) to cache, we would recommend thinking about how you want that cached data to be invalidated. If you are fine with stale data for a certain time, you don't have to make changes to your application, and can instead rely on the `maxAge` and `swr` properties.
However, if you want to have fine-grained control over cache expiration, you can implement custom purging logic in your application based on the Purging API we make available for each service.
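As a sketch of what such purging logic could look like in your backend: the admin endpoint URL, per-type purge mutation name, and token header below are assumptions for illustration only; consult the Purging API documentation for your service's exact schema.

```typescript
// Hypothetical sketch of calling a purging endpoint after a write.
// Endpoint, mutation name, and token header are assumptions -- check
// the Purging API docs for the real names for your service.
const PURGE_ENDPOINT = 'https://admin.stellate.co/my-service' // placeholder

// Build a GraphQL purge mutation for a type and a list of IDs.
function buildPurgeRequest(typeName: string, ids: string[]) {
  return {
    query: `mutation { purge${typeName}(id: ${JSON.stringify(ids)}) }`,
  }
}

// Fire the purge request; call this from your mutation resolvers
// whenever the underlying data changes.
async function purge(typeName: string, ids: string[]) {
  const res = await fetch(PURGE_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'stellate-token': process.env.STELLATE_PURGE_TOKEN ?? '',
    },
    body: JSON.stringify(buildPurgeRequest(typeName, ids)),
  })
  return res.json()
}
```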
If you pass your mutations through Stellate as well, we automatically take care of some invalidation. That behavior is documented in Automatic Purging based on Mutations. However, there are some edge cases, e.g. list invalidation, where we cannot automatically figure out which results to purge.
Once you are happy with the invalidation logic, whether handled automatically by Stellate, implemented in your application, or relying on time-based expiration, you can go ahead and add a new cache rule for your query.
Create a Cache Rule
You can either target a named query, or target types and fields included in the response. If you do not target a named query, keep in mind that the cache rules will apply to every response returning that type or field.
In our example, we targeted the `launchLatest` and `launchesPast` queries, and cache them for a day (86400 seconds) with a 1 hour (3600 seconds) stale-while-revalidate time. If your data is specific to a user (or a set of users), make sure you have a corresponding scope selected when creating the cache rule.
```typescript
import { Config } from 'stellate'

const config: Config = {
  config: {
    originUrl: 'https://api.spacex.land/graphql',
    rules: [
      {
        types: {
          Query: ['launchLatest', 'launchesPast'],
        },
        maxAge: 86400,
        swr: 3600,
        description: 'Cache launchLatest and launchesPast',
      },
    ],
  },
}

export default config
```
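If the launches data were specific to a user, the rule could additionally reference the scope defined in Step 1. A sketch, assuming the rule-level property is named `scope` (verify against the config reference for your version):

```typescript
import { Config } from 'stellate'

const config: Config = {
  config: {
    originUrl: 'https://api.spacex.land/graphql',
    scopes: {
      AUTHENTICATED: { definition: 'header:authorization' },
    },
    rules: [
      {
        types: {
          Query: ['launchLatest', 'launchesPast'],
        },
        maxAge: 86400,
        swr: 3600,
        // Cache per authorization header so users never see
        // each other's data.
        scope: 'AUTHENTICATED',
        description: 'Cache launches per user',
      },
    ],
  },
}

export default config
```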
With the cache rule in place and invalidation taken care of, we can now push the configuration, either by saving it in the Stellate dashboard or by running `stellate push` in the CLI.
You can test that the cache is working by sending a matching query to your Stellate service and looking at the `gcdn-cache` header included with the response. It will initially show a `MISS`, indicating that the query was not cached yet. However, if you send it again, you will see a cache `HIT` instead.
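This check can also be scripted. A minimal sketch, where the service URL is a placeholder for your own Stellate endpoint:

```typescript
// Sketch: send the same query twice and compare the gcdn-cache header.
// SERVICE_URL is a placeholder -- substitute your Stellate endpoint.
const SERVICE_URL = 'https://my-service.stellate.sh'

// True when the header value indicates the response came from the cache.
function isCacheHit(header: string | null): boolean {
  return header?.toUpperCase() === 'HIT'
}

// Send a query and return the value of the gcdn-cache response header.
async function cacheStatus(query: string): Promise<string | null> {
  const res = await fetch(SERVICE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })
  return res.headers.get('gcdn-cache')
}

// Usage: the first call should report MISS, the second HIT once the
// matching cache rule is in place.
// const first = await cacheStatus('{ launchLatest { id } }')
// const second = await cacheStatus('{ launchLatest { id } }')
```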
You’re now edge-caching your first GraphQL query! 🎉
If you do not see a cache `HIT`, even after repeatedly sending the query, reach out to support@stellate.co. We'd be happy to help you look into this.
Once you're happy with the results of the initial caching attempts, repeat the steps: think about the next best query (or type, or field) to cache, decide how to handle invalidation and whether you need to make changes to your application, and then add the required cache rules.
You will be able to track your progress on your dashboard in terms of cache hit rate, response times, and bandwidth saved. And your backend service will likely see reduced load and improved response times as well.
From here you can now check out some more advanced concepts on caching: