Closed

Author (collaborator): Closing per request; scrapping this parser performance experiment.
JanJakes added a commit that referenced this pull request on Apr 28, 2026:

Apply lexer optimisations from PR #375:
- Cache `strlen($sql)` once in `$sql_length` instead of recomputing on each EOF check.
- Replace `strspn($byte, MASK) > 0` with direct byte comparisons (`$byte >= '0' && $byte <= '9'`, `false !== strpos(MASK, $byte)`, unrolled whitespace check).
- Use `strpos($sql, '*/', $pos)` instead of a manual scan loop in `read_comment_content()`.
- In `read_quoted_text()`, use `strpos()` to find the next quote, eliminating the separate end-of-input check that follows the `strcspn()` scan.
- Inline `next_token()` + `get_token()` in `remaining_tokens()` so the hot loop builds tokens directly.

Co-authored-by: Adam Zieliński <adam@adamziel.com>
Adapted from #375
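The byte-level tricks in this commit message can be sketched as follows. This is a minimal, hypothetical stand-in, not the actual `WP_MySQL_Lexer` code: `$sql`, `$sql_length`, and the comment scan are simplified for illustration.

```php
<?php
// Sketch of the optimisations described above (assumed, simplified names).

$sql        = 'SELECT 1 /* note */ FROM t';
$sql_length = strlen( $sql ); // cached once instead of on each EOF check

// Direct byte comparison instead of strspn( $byte, '0123456789' ) > 0:
$byte     = $sql[7]; // '1'
$is_digit = ( $byte >= '0' && $byte <= '9' );

// strpos() instead of a manual byte-by-byte scan for the '*/' terminator:
$start   = strpos( $sql, '/*' );
$end     = strpos( $sql, '*/', $start + 2 );
$comment = substr( $sql, $start, $end + 2 - $start ); // '/* note */'
```

The win in both cases is moving work from interpreted PHP loops and extra function calls into C-level string primitives (`strpos`) or single opcode comparisons.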
JanJakes added a commit that referenced this pull request on Apr 28, 2026:

Token construction is on the lexer hot path; bypassing the `WP_Parser_Token::__construct()` indirection and assigning the four properties directly removes one method call per token. Requires `$input` on `WP_Parser_Token` to be `protected` instead of `private` so the subclass can write to it.

Co-authored-by: Adam Zieliński <adam@adamziel.com>
Adapted from #375
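A sketch of the pattern this commit describes. The property names (`$id`, `$start`, `$length`, `$input`) and their visibilities are assumptions; only the `$input` private-to-protected change is stated in the commit message.

```php
<?php
// Hypothetical sketch: bypassing constructor indirection on a hot path.

class WP_Parser_Token {
    public $id;
    public $start;
    public $length;
    protected $input; // was private; protected so a subclass can assign it
}

class WP_MySQL_Token extends WP_Parser_Token {
    public static function create( $id, $start, $length, $input ) {
        // Assign the four properties directly instead of going through
        // parent::__construct(), saving one method call per token.
        $token         = new static();
        $token->id     = $id;
        $token->start  = $start;
        $token->length = $length;
        $token->input  = $input;
        return $token;
    }
}

$token = WP_MySQL_Token::create( 42, 0, 6, 'SELECT 1' );
```

One method call per token sounds trivial, but on a hot loop that builds hundreds of thousands of tokens per benchmark run it is measurable.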
JanJakes added further commits that referenced this pull request on Apr 28 and Apr 29, 2026, carrying the same two commit messages as above.
What changed
This draft explores faster MySQL lexing and parsing while keeping the parser compact.
- Optimised `WP_MySQL_Lexer::remaining_tokens()` by avoiding repeated public method calls during bulk tokenization.
- Reduced `array_search()` calls.
- Skipped the `SELECT ... INTO` negative-lookahead check unless the current rule is `selectStatement`.

Why
The dynamic recursive-descent parser spends a lot of time repeatedly rejecting grammar branches that cannot match the current token. The lexer benchmark also paid avoidable overhead on the common `remaining_tokens()` path used before parsing. This keeps the current architecture and grammar file format intact while moving more branch-selection work to grammar initialization.
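Moving branch selection to grammar initialization can be sketched like this. The rule shapes, token names, and `$first_token_map` are illustrative assumptions, not the project's actual grammar format:

```php
<?php
// Hypothetical sketch: precompute, per rule, which branches can start
// with which token, so the hot path indexes a map instead of trying
// (and rejecting) every branch.

$grammar = array(
    'statement' => array(
        array( 'SELECT', 'expr' ),   // branch 0
        array( 'INSERT', 'values' ), // branch 1
    ),
);

// One-time init: rule => first token => viable branch indexes.
$first_token_map = array();
foreach ( $grammar as $rule => $branches ) {
    foreach ( $branches as $i => $branch ) {
        $first_token_map[ $rule ][ $branch[0] ][] = $i;
    }
}

// Hot path: only branches whose first token matches are attempted.
$current_token = 'SELECT';
$viable        = isset( $first_token_map['statement'][ $current_token ] )
    ? $first_token_map['statement'][ $current_token ]
    : array();
```

The map is built once per process, so the per-token cost of branch rejection drops from "try each branch and fail" to one array lookup.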
Performance
Original trunk baseline captured before this branch:

- Lexer: `4.824s` @ 14.4k QPS
- Parser: `21.275s` @ 3.27k QPS

Fresh local run on this branch:

- Lexer: `1.76580s` @ 39.4k QPS
- Parser: `9.31625s` @ 7.47k QPS

Reviewer run from the adversarial loop:

- Lexer: `1.71405s` @ 40.6k QPS
- Parser: `9.70479s` @ 7.17k QPS

This is roughly:

- `2.8x` faster lexer time.
- `2.28x` faster end-to-end parser time.

It does not reach 10x. The independent reviewer concluded that further large gains likely require a generated/specialized parser or a larger rearchitecture.
Parser size constraint
The current compact parser footprint remains well under the requested 200 KB cap:
- `src/parser/*.php` plus `src/mysql/mysql-grammar.php` before this change: 92,090 bytes total.
- After this change: 93,804 bytes total.

Validation
- `git diff --check`
- `php -l` on modified lexer/parser files
- `composer run test -- --filter 'WP_MySQL_(Lexer|Server_Suite_(Lexer|Parser))'`: 141 tests, 1,420,987 assertions
- `composer run test`: 667 tests, 1,427,673 assertions, 2 skipped, 2 incomplete
- `php packages/mysql-on-sqlite/tests/tools/run-lexer-benchmark.php`
- `php packages/mysql-on-sqlite/tests/tools/run-parser-benchmark.php`

Follow-up exploration
The next phase should investigate whether a compact specialized parser can preserve the 200 KB cap while reducing dynamic recursive-descent overhead further. Promising directions:
- Any design in which `src/parser/*.php` plus grammar metadata stays below 200 KB.