For most websites, caching is critical for improving performance, lowering infrastructure costs, and reducing unnecessary data fetching.
At the same time, caching can be exceedingly tricky to get right. There's a reason cache invalidation is famously one of the two hard problems in computer science.
But what is it about caching that makes it so challenging? It might be surprising to hear that it's not the process of caching itself. It's knowing what data to cache and when to invalidate it.
We realized this early in our journey at Stellate, and it's a problem we continue to solve as we grow and provide our users with sophisticated edge caching for GraphQL.
In the following sections, I'll look at the most challenging part of caching and important considerations you can incorporate into your caching strategies to ensure optimal performance. Here's what I'll cover:

Caching 101: Different Ways to Cache
The Challenge: Knowing What to Cache and When to Invalidate
Customizing Caching for Different Use Cases
Monitoring Cache Performance
Caching 101: Different Ways to Cache
There are many types of caches, from client-side caching in web browsers to server-side caching in backend applications or edge caching in Content Delivery Networks (CDNs), but the underlying premise remains the same: caching enhances overall system performance by leveraging cached data instead of repeatedly fetching it from the original data source.
Today, most websites benefit from caching, even if it's only at the web browser level. And as the volume of traffic and data handled by websites continues to grow, it's become increasingly common for websites to employ a multi-tiered caching system that combines different types of caching for various data types and use cases.
A website's HTML page, which is the output seen by site visitors, is often cached. But more goes into that page than meets the eye. For example, if the page relies on multiple microservices and APIs, those may in turn have their own caching layers.
A news organization may use different layers of caching for different parts of its website. Headlines and news updates may change frequently and require frequent cache invalidation. Other parts of the site, such as the site header or specific landing pages, may change less often.
By combining different types and layers of caching and applying them to specific data types, websites can more effectively serve traffic, providing a better user experience while reducing the load on backend systems (and lowering infrastructure costs).
Achieving this, however, requires deep attention to detail and careful planning: you need to determine what data to cache and how you will invalidate it.
The Challenge: Knowing What to Cache and When to Invalidate
While the actual process of caching is relatively straightforward, the real challenge lies in determining what to cache and when to invalidate cached data and fetch fresh content from the data source. This is where the heart of caching's complexity resides.
What to Cache
To determine what to cache on your website, you must first understand your website's characteristics and the way users interact with it.
That means having a clear view into the volatility of different data on your site. While caching is most effective for relatively static data or data that changes infrequently, you certainly can (and sometimes should) cache data that changes often, although it will require more careful cache management.
By analyzing user behavior to determine which parts of your site are most frequently accessed, you can better understand areas that will benefit most from caching.
You should also identify parts of your site that are the most resource-intensive, even if they’re accessed less frequently. Look for slow database queries or other resource-intensive operations where caching can help alleviate the load on your server and improve overall site performance.
While the above two areas offer a good baseline for “what to cache,” there are more advanced options, such as caching data that assists with personalization and individual user behavior. For example, caching personalized recommendations for content or products and preferred settings for things like notifications, languages, or even font size can significantly enhance the user experience by tailoring it on a per-user basis.
In summary, to optimize what you’re caching on your website, you need to understand user behavior, cache frequently accessed areas, and alleviate resource-intensive operations.
When to Invalidate
Whenever data is updated, any cached copies of that data become stale, leading to inconsistencies. To avoid serving stale content to users, cache invalidation tells the cache when specific cached data needs to be refreshed.
Implementing effective cache invalidation requires knowing the unique requirements and use cases of the data you wish to cache. Here are a few of the most common cache invalidation strategies:
Time-based expirations
One common approach to cache invalidation is to use time-based expirations like time-to-live (TTL). By setting a TTL for cached items, you can control how long data remains valid before it is automatically removed from the cache. While TTLs provide a simple mechanism for cache expiration, they're often best used in scenarios where sensitivity to stale data is low or data changes predictably.
The downside of TTLs is that cached items are removed from the cache even if the data is unchanged. This means more requests to your servers and, as a result, slower responses for some users.
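To make the mechanics concrete, here's a minimal sketch of a TTL expressed through the standard Cache-Control header, assuming a Node.js server with Express (the route and payload are illustrative):

```typescript
import express from "express";

const app = express();

app.get("/products", (_req, res) => {
  // Any HTTP cache between the user and this server (browser, CDN, proxy)
  // may reuse this response for up to 60 seconds before re-fetching it.
  res.set("Cache-Control", "public, max-age=60");
  res.json({ products: [] });
});

app.listen(3000);
```

Once those 60 seconds elapse, the entry expires even if the underlying data never changed, which is exactly the trade-off described above.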
Stale-while-revalidate
Stale-while-revalidate (SWR) is an HTTP cache control extension that helps solve this issue, as it allows a client (such as a web browser) to display stale or expired content while attempting to revalidate the cache with the origin server in the background. This provides a faster, more seamless user experience, as the requested data can be accessed quickly from the cache, even if the cache has expired. Nonetheless, just like with TTLs, there is still a risk of exposing stale data.
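In HTTP terms, SWR is just one more Cache-Control directive alongside max-age. A minimal sketch, again assuming an illustrative Express route:

```typescript
import express from "express";

const app = express();

app.get("/headlines", (_req, res) => {
  // Fresh for 60 seconds; after that, caches may serve the stale copy for
  // up to 300 more seconds while revalidating with this origin in the
  // background.
  res.set("Cache-Control", "public, max-age=60, stale-while-revalidate=300");
  res.json({ headlines: [] });
});

app.listen(3000);
```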
Event-based invalidation
Event-based invalidation is the most advanced approach to cache invalidation. At key events, such as direct data changes or changes that are likely to have occurred, your system triggers cache invalidation directly. This method offers complete control over cache invalidation but requires additional mechanisms to detect and propagate these events effectively.
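Here's a bare-bones sketch of the idea, using in-memory maps as stand-ins for a real cache and data store (all names are illustrative):

```typescript
// In-memory stand-ins for a real cache and a real data store.
const cache = new Map<string, unknown>();
const database = new Map<string, Record<string, unknown>>();

function updateProduct(id: string, changes: Record<string, unknown>): void {
  // 1. Write the change to the source of truth.
  database.set(id, { ...(database.get(id) ?? {}), ...changes });
  // 2. Invalidate the cached copy at the exact moment the data changes,
  //    rather than waiting for a TTL to expire.
  cache.delete(`product:${id}`);
}
```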
When tailored to your website or application's specific needs, these types of invalidation strategies can help you tackle the complexities of caching. But in order to define these rules, you must identify the critical data that requires caching, understand its dependencies, and determine the appropriate caching strategies for different use cases.
With Stellate's edge cache, data is cached between your users and your backend, which means it's delivered at maximum speed. You also gain access to multiple options for cache invalidation, including:
TTL-based: You set a max-age and/or SWR value for specific types or fields, and matching responses are cached. Once the time you configure has passed, the next request for that same query will cause Stellate to re-fetch the query from your backend and update the cache.
Mutation-based: Mutations are analyzed to determine the impacted types and fields. Based on this information, the relevant cached items are removed from the cache.
Event-based (Manual): Sometimes, you wish to invalidate specific cached data manually. For example, you might have a webhook that changes data outside of your GraphQL request flow. In that case, you need to explicitly tell Stellate that the data has changed.
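As a loose illustration of the manual option, here's a hypothetical webhook handler that tells the edge cache a product changed outside the GraphQL request flow. The purge endpoint, mutation name, and auth header below are illustrative stand-ins, not Stellate's actual API; see the documentation linked below for the real thing.

```typescript
// Hypothetical sketch: the purge endpoint, mutation name, and auth header
// are illustrative, not Stellate's actual Purging API.
async function onProductChanged(productId: string): Promise<void> {
  await fetch("https://purge.example.com/my-service", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-purge-token": process.env.PURGE_TOKEN ?? "",
    },
    body: JSON.stringify({
      query: `mutation { purgeProduct(id: ["${productId}"]) }`,
    }),
  });
}
```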
You can find out more about Stellate's cache invalidation methods here.
Customizing Caching for Different Use Cases
Expanding on the strategies for cached data and cache invalidation discussed in the previous section, there are specific examples and use cases in which the practical implementation of caching rules requires deeper attention to detail and customization.
Consider an e-commerce store, where various aspects of the webshop, such as the shopping cart, product listings, and AI-based personalized recommendations, all require different caching approaches.
The Shopping Cart:
The shopping cart is personal to each user and is subject to frequent changes. In most cases, you will want to avoid caching the shopping cart. If you wish to cache it, achieving perfect invalidation would be crucial to ensure users always see the most up-to-date information.
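In HTTP terms, opting the cart out of caching is a one-line affair (illustrative Express route):

```typescript
import express from "express";

const app = express();

app.get("/cart", (_req, res) => {
  // "no-store" tells every cache, shared or private, not to keep a copy
  // of this response, so each user always fetches their own live cart.
  res.set("Cache-Control", "no-store");
  res.json({ items: [] });
});

app.listen(3000);
```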
Product Listings:
Product listings, however, remain the same for every user and don't change frequently. In this scenario, a TTL-based invalidation strategy with periodic revalidation might be suitable to keep the product data fresh without excessive overhead.
AI-Based Recommendations:
AI-based recommendations can be personalized or general to all users, which calls for different caching strategies. For personalized recommendations, caching per user becomes essential, while general recommendations can be cached for a specific period and served to a particular group of users.
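One simple way to express that split is in the cache key itself; a sketch with illustrative names:

```typescript
const recommendationCache = new Map<string, string[]>();

function recommendationKey(userId: string | null): string {
  // Personalized recommendations get a per-user key; general ones share a
  // single key that a whole group of users can be served from.
  return userId ? `recs:user:${userId}` : "recs:general";
}

// Usage: each user gets their own personalized entry, while all users
// fall back to the same shared entry for general recommendations.
recommendationCache.set(recommendationKey("alice"), ["product-1"]);
recommendationCache.set(recommendationKey(null), ["bestseller-1"]);
```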
In a scenario like this, the benefits of a well-designed, multi-tiered caching system become evident, but so do the complexities of building such a system. How do you know which data points correspond to the different parts of your site? And more importantly, how do you track the impact of your caching strategies as you tweak and fine-tune your approach?
The answer lies in monitoring cache performance: tracking the effectiveness of your caching strategies and making ongoing tweaks to fine-tune your approach.
Monitoring Cache Performance
Implementing an effective caching strategy is not a one-time task, especially for GraphQL, where frontend developers can modify queries as needed.
Whereas the cache strategy for a well-scoped, limited REST endpoint is unlikely to change over time, GraphQL has no such fixed endpoints; clients can instead create any operation they want on the fly.
This crystallizes a significant difference between GraphQL and REST, highlighting the granular level of control, power, freedom, and agility you gain with GraphQL. It also necessitates using robust metrics and insights with GraphQL to ensure you use it to its full potential.
As applications evolve and user behavior changes, caching strategies often require adjustments to maintain optimal performance. Continuously monitoring caching performance is essential to identify bottlenecks, assess cache efficiency, and ensure that caching delivers the intended benefits.
When it comes to implementing this type of monitoring, you have a few choices, and each path will inherently shape the outcomes you achieve.
One option is to build caching performance monitoring yourself, which can be highly time-consuming and error-prone.
Alternatively, you can use tools like Stellate, which provides sophisticated GraphQL edge caching and comes with an entire metrics suite that gives you access to powerful real-time insights.
With no configuration, Stellate's GraphQL Metrics provide real-time observability into your GraphQL API's usage, performance, and errors. That means gaining visibility into every query and mutation sent to your GraphQL API, and a clearer view of latency, errors, and more. It also means monitoring performance, identifying issues and constraints, and overseeing your GraphQL API's implementation down to the type or field level.
By looking at crucial cache performance metrics such as cache hit rate and cache miss rate, you can better understand how well your cache is performing, whether it's effectively reducing the load on the backend, and where you can make potential improvements.
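The arithmetic behind these two metrics is simple and worth keeping in mind when reading dashboards; a quick sketch:

```typescript
// Hit rate: the fraction of requests served from cache; the miss rate is
// its complement. E.g. 950 hits and 50 misses give a 95% hit rate.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

console.log(cacheHitRate(950, 50)); // 0.95
```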
You can also track all HTTP and GraphQL errors and receive alerts on spikes the moment they become a problem for your users, analyze issues quickly and with precision, and dive deep into your data to aid debugging and inform decision-making.
Find out more about Stellate's GraphQL Metrics here.
Conclusion
Caching may seem like a simple concept on the surface, but the devil is in the details. Understanding what to cache and when to invalidate is the true challenge of caching.
Each application and use case demands a unique caching setup to achieve peak performance; there is no one-size-fits-all solution. However, by carefully customizing caching strategies and closely monitoring cache performance, you can harness the true power of caching and deliver optimal user experiences.
At Stellate, we embrace these challenges and strive to provide our users with a powerful caching solution that enables lightning-fast API responses at scale at a fraction of the cost.
Find out more about Stellate and the work we do here.