May 21

Open sourcing graphql-query: 8.7x faster GraphQL query parser written in Rust


We’re excited to announce that we’re open-sourcing graphql-query, our implementation of a parser for the GraphQL execution language! 🎉 We have been using graphql-query in production at Stellate for over two years, handling billions of requests for our customers every month.

Check out the repository and the Rust crate, and read below for the backstory and how we made it 8.7x faster.

Why did we need to build a new library?

Performance is critical to Stellate, as our GraphQL edge caching has to add as little overhead as possible. One of the primary reasons customers use our edge caching is to improve response times.

In October last year we announced that our Edge Caching had become 2x faster. This was primarily achieved by consolidating all of our logic on Fastly, which reduced the network overhead of distributing it across both Fastly and Cloudflare.

As Fastly has a first-class SDK for Rust, we eliminated the use of TypeScript in our CDN in favor of Rust as part of the migration. This rewrite to a single Rust binary gave us many opportunities to make additional improvements to the performance of our CDN.

graphql-query is one of these performance improvements.

We didn’t set out to write our own parser. First, we went over everything we’d need and reached some conclusions while evaluating the GraphQL parsers in the Rust ecosystem:

  1. Most of the existing crates lacked a primitive to alter executable documents — we needed to add the __typename field to selections, and we knew we would need it to build Partial Query Caching.

  2. We needed a primitive to convert a GraphQL introspection result into a client schema, so we could analyze a document and tell which types and fields would be requested.

We didn’t find anything to address these missing pieces, not even if we combined a few libraries.

That meant writing our own specialized library was the best option.

Setting graphql-query's design goals

We now had the opportunity to define some clear design goals for this new GraphQL library:

  • Performance: we sit between our customers’ origins and their users; if we impose a slowdown, we aren’t serving our customers well.

  • An elegant Developer Experience: we didn’t want to battle lifetimes, for example.

  • A way to manipulate documents that pass through our CDN.

  • Functionality scoped to the GraphQL Execution language (i.e., GraphQL operations, not GraphQL schemas).

Building on the shoulders of giants

When we started looking for tools to achieve our goals, we quickly bumped into Logos, which brands itself as “Create ridiculously fast Lexers.” After trying it, we found that the statement really holds up. Aside from that, integrating Logos was also pretty elegant. Go check it out! We moved on, reading the GraphQL spec and translating everything into the Lexer. We had our first piece: our Lexer!
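
To give a feel for what that integration looks like, here is a minimal sketch of declaring a lexer with Logos. This is not graphql-query’s actual token set, just a small illustrative subset, and it assumes a recent Logos release (0.13+) where skip patterns are declared at the enum level.

use logos::Logos;

// An illustrative subset of GraphQL tokens, not graphql-query's actual lexer.
// GraphQL treats commas like whitespace, so the skip pattern covers them too.
#[derive(Logos, Debug, PartialEq)]
#[logos(skip r"[ \t\r\n,]+")]
enum Token {
    #[token("{")]
    BraceOpen,
    #[token("}")]
    BraceClose,
    #[token("(")]
    ParenOpen,
    #[token(")")]
    ParenClose,
    #[token(":")]
    Colon,
    #[regex(r"[_A-Za-z][_0-9A-Za-z]*")]
    Name,
}

fn main() {
    let mut lexer = Token::lexer("query { user { name __typename } }");
    // Each step yields the matched token and the slice of source it covers
    while let Some(token) = lexer.next() {
        println!("{:?} -> {:?}", token, lexer.slice());
    }
}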

Next up was creating the parser. Design choices made here would have a big impact on the Developer Experience when using graphql-query, because the parsed AST is what we’d be interacting with in our CDN. We didn’t want to battle lifetimes, so… we chose to put everything on an allocator that we’d carry around for the duration of a request passing through our CDN. For the allocator we chose bumpalo, which is a well-established library in the Rust ecosystem!
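
As a toy illustration of why the arena approach keeps lifetimes manageable, here is a sketch with made-up AST types (not graphql-query’s real ones), assuming bumpalo’s collections feature is enabled: every node borrows from the same Bump, so the whole tree shares a single 'arena lifetime and is freed in one go when the arena is dropped.

use bumpalo::Bump;

// Toy AST types for illustration only; graphql-query's real types differ.
struct Field<'arena> {
    name: &'arena str,
}

struct SelectionSet<'arena> {
    selections: bumpalo::collections::Vec<'arena, Field<'arena>>,
}

// "Parse" a whitespace-separated list of field names into the arena.
fn parse_demo<'arena>(arena: &'arena Bump, source: &str) -> SelectionSet<'arena> {
    let mut selections = bumpalo::collections::Vec::new_in(arena);
    for name in source.split_whitespace() {
        // Copy the name into the arena so it lives as long as the AST does.
        selections.push(Field { name: arena.alloc_str(name) });
    }
    SelectionSet { selections }
}

fn main() {
    let arena = Bump::new();
    let set = parse_demo(&arena, "id name __typename");
    assert_eq!(set.selections.len(), 3);
    // Dropping `arena` at the end of a request frees every node at once.
}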

Putting graphql-query together

At this point, we basically had all the pieces we needed. We went on to create a JSON module to handle variables and arguments in a spec-compliant manner. We created basic schema-less validation rules for our documents, a visit function, and our folder, which powers our AST manipulation.

In the Stellate codebase, you’ll see code that looks like the following:

use graphql_query::ast::{ASTContext, Document, ParseNode, PrintNode};

fn main() {
    // Create the AST context that owns the arena for this request
    let ctx = ASTContext::new();
    // Parse the query string with the context
    let document = Document::parse(&ctx, request.body.query)?;
    // Manipulate the document with our folder
    let new_document = add_typenames(&ctx, document);
    // Print the new document and send it to the origin
    send_request_to_origin(new_document.print(), request.body.variables);
}

The ASTContext contains an arena allocator, bumpalo, which handles memory allocation for any given request. This greatly increases the efficiency of our code, since it reduces the number of individual memory allocations and lets all of a request’s allocations be freed at once after the request has been processed.

This is ideal for a GraphQL runtime since, as seen above, work such as parsing, validation, and printing is done as one synchronous operation per request.

An example folder that adds __typename to selections would look as follows (a simplified version of what we use at Stellate):

use std::iter;

use graphql_query::{ast::*, visit::*};

struct AddTypenames;

impl<'arena> Folder<'arena> for AddTypenames {
    fn selection_set(
        &mut self,
        ctx: &'arena ASTContext,
        selection_set: SelectionSet<'arena>,
        _info: &VisitInfo,
    ) -> Result<SelectionSet<'arena>> {
        // Leave empty selection sets (e.g. on leaf fields) untouched
        if selection_set.is_empty() {
            return Ok(selection_set);
        }
        // Check whether an unaliased __typename field is already selected
        let has_typename = selection_set.selections.iter().any(|selection| {
            selection
                .field()
                .map(|field| field.name == "__typename" && field.alias.is_none())
                .unwrap_or(false)
        });
        if !has_typename {
            // Build a bare __typename field, allocating in the request's arena
            let typename_field = Selection::Field(Field {
                alias: None,
                name: "__typename",
                arguments: Arguments::default_in(&ctx.arena),
                directives: Directives::default_in(&ctx.arena),
                selection_set: SelectionSet::default_in(&ctx.arena),
            });
            // Append it to the existing selections, collecting into the arena
            let new_selections = selection_set
                .into_iter()
                .chain(iter::once(typename_field))
                .collect_in::<bumpalo::collections::Vec<Selection>>(&ctx.arena);
            Ok(SelectionSet { selections: new_selections })
        } else {
            Ok(selection_set)
        }
    }
}
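
For completeness, the add_typenames helper from the earlier snippet could be glued together roughly as below. Treat the document.fold(...) call as an assumption about the fold entry point rather than a verbatim excerpt of the published API; the crate’s visit module documents the exact trait and method to use.

// Hedged sketch: `fold` stands in for whatever entry point the visit module
// exposes for running a Folder over a parsed Document.
fn add_typenames<'arena>(
    ctx: &'arena ASTContext,
    document: Document<'arena>,
) -> Document<'arena> {
    document
        .fold(ctx, &mut AddTypenames)
        .expect("adding __typename to a parsed document should not fail")
}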

Benchmark results

Let’s look at a few comparison benchmarks; you can find the code for them in the repository (lower is better):

// Parsing comparison
graphql_ast_parse_graphql_query ... bench: 2,886 ns/iter (+/- 130)
graphql_ast_parse_graphql_parser ... bench: 25,122 ns/iter (+/- 1,711)
graphql_ast_parse_apollo_parser ... bench: 36,242 ns/iter (+/- 1,062)
// Printing comparison
graphql_ast_print_graphql_query ... bench: 1,082 ns/iter (+/- 79)
graphql_ast_print_gql_parser ... bench: 1,137 ns/iter (+/- 48)
graphql_ast_print_apollo_parser ... bench: 20,861 ns/iter (+/- 518)
// Our other utilities are also fast
graphql_ast_fold ... bench: 8,466 ns/iter (+/- 768)
graphql_ast_validate ... bench: 2,339 ns/iter (+/- 127)
graphql_load_introspection ... bench: 90,265 ns/iter (+/- 4,899)

For parsing queries, graphql_query is 8.7x faster than graphql_parser, another Rust GraphQL parser (2,886 ns vs. 25,122 ns per iteration). Even saving less than a millisecond per request adds up to hours of saved processing time when you process billions of requests per month.

We have been using graphql-query for the past two and a half years and are so excited to bring it to you! We’re looking forward to hearing what you do with graphql-query!

Credits to Philpl for writing the original implementation of this library.