Announcing Partial Query Caching: Automatically split your GraphQL Queries at the Edge for Ideal Caching
All current GraphQL caching solutions fall short for many use cases. Why? Because they only allow caching whole responses. When a single field in a query cannot be cached or contains user-specific data, the entire response becomes much harder to cache. (Think personalized product prices, up-to-date availability, or AI-powered dynamic recommendations.)
Today, we are excited to introduce the first alpha release of Partial Query Caching. Based on per-GraphQL type or even per-field cache configuration, our GraphQL edge caching will automatically split queries for ideal caching.
Removing the Last Barrier to GraphQL Caching
GraphQL’s flexibility is one of its biggest selling points. But it also makes caching more difficult: a query can easily contain a single field that is not cacheable, or one that is cacheable only for a specific user.
Stellate’s Partial Query Caching automatically splits uncacheable and scoped (user-specific) data into separate queries. Not only does it work for root-level queries, it also works for nested types and fields!
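For example, a user-specific field buried several levels deep is still detected and split out on its own. The field names below are purely illustrative:

{
  product(slug: "blue-sneakers") {
    name                     # public, cacheable for everyone
    reviews {
      author {
        name                 # public, cacheable for everyone
        isFollowedByViewer   # scoped to the current user, split into its own query
      }
    }
  }
}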
This lets you cache dramatically more data across your queries, which in turn offloads significantly more traffic to our GraphQL edge cache, improving your API's stability and reducing your infrastructure costs.
How does it work?
As an extreme example, let’s say your client sends the query below, and your cache rules apply the cache configuration shown in the comments:
{product(slug: "blue-sneakers") {...ProductInfo # maxAge: 86400s, scope: PUBLIC...PersonalPromoCodes # maxAge: 60s, scope: USER...ProductAvailability # nonCacheable ⚠️}currentUser {...UserInfo # maxAge: 3600s, scope: USER}}
Every other GraphQL caching solution would bail out of caching entirely here, leaving you with a 0% Cache Hit Rate, because the response contains not only scoped (user-specific) data but also non-cacheable data.
However, with Partial Query Caching, this query will be split up into four distinct parts based on their cacheability:
# [Query 1] maxAge: 86400s, scope: PUBLIC
{
  product(slug: "blue-sneakers") {
    ...ProductInfo
  }
}

# [Query 2] maxAge: 60s, scope: USER
{
  product(slug: "blue-sneakers") {
    ...PersonalPromoCodes
  }
}

# [Query 3] nonCacheable ⚠️
{
  product(slug: "blue-sneakers") {
    ...ProductAvailability
  }
}

# [Query 4] maxAge: 3600s, scope: USER
{
  currentUser {
    ...UserInfo
  }
}
Your origin will only ever receive a single query, containing just the data that is missing from the cache.
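For example, if the ProductInfo and UserInfo parts above are still warm in the cache but the promo codes have expired (and availability is never cached), the origin receives one combined query for just the missing pieces, roughly:

{
  product(slug: "blue-sneakers") {
    ...PersonalPromoCodes
    ...ProductAvailability
  }
}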
Even Better, It Runs Automatically
Once an engineering team sets up cache rules for the data they own, Partial Query Caching automatically ensures that data is cached, no matter which query accesses it. You can easily monitor the results in our metrics dashboard and adjust your rules; everything else is automatic.
To get early access to Partial Query Caching, set up a call with our founder Max. He will help you assess Partial Query Caching for your specific use case and set it up with you.