The parser currently has no resource limits. A document with an arbitrarily large number of tokens will be parsed to completion regardless of size, bounded only by PHP's `memory_limit` and `max_execution_time`.
graphql-js added a `maxTokens` parser option in v16.6.0 (graphql/graphql-js#3684) to address this. The rationale: parser CPU and memory usage scales with token count, and token counting is a language-level concept that behaves consistently across implementations (unlike AST node counting, which is implementation-specific).
## Proposed behavior
Add an optional `maxTokens` parameter to the parser options. When set, the parser tracks tokens consumed via `advanceLexer()` and throws a `SyntaxError` when the limit is exceeded:
```
Document contains more than {maxTokens} tokens. Parsing aborted.
```
No default limit — callers opt in by setting the value. This matches graphql-js behavior and avoids breaking existing users.
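As a usage sketch, a caller would opt in like this. Note that `maxTokens` is the option name proposed here and does not exist in graphql-php yet; everything else uses the existing `Parser::parse()` API:

```php
<?php

use GraphQL\Error\SyntaxError;
use GraphQL\Language\Parser;

try {
    // 'maxTokens' is the proposed option; pick a ceiling comfortably above
    // any legitimate query your clients send.
    $ast = Parser::parse($queryString, ['maxTokens' => 10000]);
} catch (SyntaxError $e) {
    // e.g. "Document contains more than 10000 tokens. Parsing aborted."
    http_response_code(400);
    echo $e->getMessage();
}
```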
## Why token counting complements other limits
- `QueryDepth` / `QueryComplexity` validation rules run after parsing, so they cannot protect the parser itself
- `post_max_size` limits raw bytes, but byte count correlates poorly with token count in both directions: a single byte can become a whole token, while a long string literal in a legitimate query is only one token (see the sketch after this list)
- Token counting bounds the parser's work directly and is effective against both deeply nested queries and wide/alias-heavy queries
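To make the contrast concrete, here is an illustrative sketch of a byte-cheap, token-heavy payload that slips under a bytes-based limit:

```php
<?php

// Illustrative only: builds an alias-heavy query that stays small in bytes
// but costs the parser three tokens per alias (name, colon, field name).
$aliases = [];
for ($i = 1; $i <= 5000; ++$i) {
    $aliases[] = "a{$i}:__typename";
}
$query = '{' . implode(' ', $aliases) . '}';

// Under 100 KB of input, yet roughly 15,000 tokens: a post_max_size check
// passes while the parser still does all the work. A maxTokens limit would
// reject this up front.
```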
## Reference
graphql-js implementation in `Parser.advanceLexer()`:

```ts
advanceLexer(): void {
  const { maxTokens } = this._options;
  const token = this._lexer.advance();

  if (token.kind !== TokenKind.EOF) {
    ++this._tokenCounter;
    if (maxTokens !== undefined && this._tokenCounter > maxTokens) {
      throw syntaxError(
        this._lexer.source,
        token.start,
        `Document contains more than ${maxTokens} tokens. Parsing aborted.`,
      );
    }
  }
}
```
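For orientation, a minimal sketch of how the equivalent check might look on the PHP side. The property names, the `advanceLexer()` hook, and where it lives in graphql-php's `Parser` are assumptions, not the actual internals:

```php
<?php

// Sketch only: mirrors the graphql-js check above. $this->lexer,
// $this->options, $this->tokenCount, and advanceLexer() itself are
// assumptions about where this hook would live in graphql-php's Parser.
private int $tokenCount = 0;

private function advanceLexer(): Token
{
    $maxTokens = $this->options['maxTokens'] ?? null;
    $token = $this->lexer->advance();

    if ($token->kind !== Token::EOF) {
        ++$this->tokenCount;
        if ($maxTokens !== null && $this->tokenCount > $maxTokens) {
            throw new SyntaxError(
                $this->lexer->source,
                $token->start,
                "Document contains more than {$maxTokens} tokens. Parsing aborted."
            );
        }
    }

    return $token;
}
```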