Depending on your use case, you can go one level further and directly evaluate the query AST without delegating to resolvers.
(This generally only makes sense if you do programmatic schema generation, i.e., read some metadata about a source and then generate GQL types/operations from it.)
So a query comes in, say something like:

query {
  users {
    id
    email
    todos { ... }
  }
}
Rather than having resolvers and firing the "users" and "users.todos" resolvers, you interpret the AST of the query directly, as if it were a regular value.
In the case of a SQL database, you'd have a recursive function that takes a level of the query AST and generates a SQL string. Then you'd execute the query, format the results, and return it to the client.
I.e., in this case you'd probably generate something like:
SELECT id, email, JSON_OBJECT('todos', ...) AS todos
FROM users JOIN todos ON todos.user_id = users.id
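A minimal sketch of the recursive function described above, in Python. The AST is modeled as a plain nested dict standing in for a parsed selection set, and names like `compile_selection` and the `relations` mapping are hypothetical; it emits a correlated subquery per nested field rather than the JOIN/JSON_OBJECT shape, which is one of several ways to flatten the tree into SQL:

```python
# Hypothetical sketch: compile one level of a GraphQL-style selection
# set into a SQL string. `fields` maps field name -> None (scalar) or
# a nested selection dict; `relations` describes how tables join.

def compile_selection(table, fields, relations):
    """Recursively turn a selection set into a SELECT statement."""
    cols = []
    for name, sub in fields.items():
        if sub is None:
            # Scalar field: select the column directly.
            cols.append(name)
        else:
            # Nested object: recurse, then wrap as a correlated subquery.
            child_table, join_col = relations[(table, name)]
            inner = compile_selection(child_table, sub, relations)
            cols.append(
                f"({inner} WHERE {child_table}.{join_col} = {table}.id) AS {name}"
            )
    return f"SELECT {', '.join(cols)} FROM {table}"

# users { id email todos { title } }
relations = {("users", "todos"): ("todos", "user_id")}
query = {"id": None, "email": None, "todos": {"title": None}}
print(compile_selection("users", query, relations))
# SELECT id, email, (SELECT title FROM todos
#   WHERE todos.user_id = users.id) AS todos FROM users
```

A real implementation would also quote identifiers and aggregate the nested rows into JSON, but the shape of the recursion is the same.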
As far as the client can tell, it's a "regular" GraphQL server, but the server-side implementation doesn't have any kind of real executable schema. It just behaves like it does.
The downside of this approach is that you lose the built-in functionality for things like tracing and query depth/complexity limiting; you'd have to implement those yourself.
It's good to see this happening, and happening in the open. In both JS and Python I've seen GraphQL execution take a ridiculous amount of time using the commonly used libraries, so long that we had to fall back to raw JSON content in some parts of our schema that previously had a lot of small resolvers. There was a Python semi-JIT experiment a while ago, but it was paid and had some dodgy security implications, so we couldn't use it. The concept is a strong one, though.
This has been an effective approach in other areas, like templating in Jinja2, so I'm not surprised that it's being used for GraphQL.
If you need to go to these lengths to make optimizations, would it not make more sense to switch the entire stack to a more bare-metal programming language such as Rust?
Only if you are okay with maintaining two separate and evolving codebases until the Rust one catches up to the Ruby one used in production. But who knows how many months or years that may take.
So I guess the difference is that instead of having totally generic functions that parse the query every time, you can call a factory function up front to get a specialised function for a particular query.
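That factory idea can be sketched in a few lines of Python. This is an illustrative toy, not any real library's API; `compile_query` and the closure it returns are made-up names. The point is that all per-query analysis happens once, in the factory, and each request then calls a pre-specialised function:

```python
# Hypothetical sketch of query specialisation: a factory does the
# per-query work once, and returns a closure that each request reuses
# instead of re-parsing and re-interpreting the query every time.

def compile_query(selection):
    """Factory: turn a selection (field name -> resolver) into one function."""
    # All per-query analysis happens here, exactly once.
    resolvers = list(selection.items())

    def execute(root):
        # The specialised function just runs the pre-bound resolvers.
        return {name: fn(root) for name, fn in resolvers}

    return execute

# Build once per unique query text (e.g. keyed in a cache),
# then reuse for every request that sends the same query.
get_user = compile_query({
    "id": lambda u: u["id"],
    "email": lambda u: u["email"].lower(),
})
print(get_user({"id": 1, "email": "A@B.COM"}))  # {'id': 1, 'email': 'a@b.com'}
```

In practice the factory's output would be cached keyed on the query string, so the parsing and planning cost is amortised across requests.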
https://github.com/benawad/node-graphql-benchmarks