| author | René Kijewski <Kijewski@users.noreply.github.com> | 2024-01-12 20:41:46 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-01-12 20:41:46 +0100 |
| commit | 3cddb918897383402a58a5d74b49500571144056 (patch) | |
| tree | 714d350e4e95db756f35da6ca95882ab8b2768f1 /testing/templates/macro.html | |
| parent | b52274d6e8060fc7a05e5b7d854d95dd7084b24b (diff) | |
Generator: make `normalize_identifier` faster (#946)
`normalize_identifier` is called quite often in the generator, once for
every variable name or path element that is written.
This PR speeds up the function by
* using one replacement map per input string length, and
* binary-searching the matching per-length map instead of scanning one big map linearly (see the sketch after this list).
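To make the layout concrete, here is a minimal, hypothetical sketch of the idea (not the code from this PR; the keyword tables are truncated and the function signature is assumed): keywords that must be escaped as raw identifiers are grouped into one sorted table per length, and the lookup binary-searches only the table that matches the identifier's length.

```rust
// Hypothetical sketch: one sorted (keyword, replacement) table per length.
// The tables are truncated; the real keyword set is much larger.
const KW_2: &[(&str, &str)] = &[("as", "r#as"), ("fn", "r#fn"), ("if", "r#if")];
const KW_3: &[(&str, &str)] = &[("for", "r#for"), ("let", "r#let"), ("use", "r#use")];
const KW_4: &[(&str, &str)] = &[("else", "r#else"), ("loop", "r#loop"), ("type", "r#type")];

fn normalize_identifier(ident: &str) -> &str {
    // Select the table for this input length; identifiers whose length matches
    // no keyword cannot need escaping and are returned unchanged.
    let map: &[(&str, &str)] = match ident.len() {
        2 => KW_2,
        3 => KW_3,
        4 => KW_4,
        _ => return ident,
    };
    // Binary search within the small, sorted per-length table instead of a
    // linear scan over every keyword.
    match map.binary_search_by_key(&ident, |&(kw, _)| kw) {
        Ok(i) => map[i].1,
        Err(_) => ident,
    }
}

fn main() {
    assert_eq!(normalize_identifier("type"), "r#type");
    assert_eq!(normalize_identifier("name"), "name");
}
```

A nice side effect of grouping by length is that an identifier whose length matches no keyword is rejected without a single string comparison.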
Different but functionally equivalent implementations were compared:
```text
* linear search in one big map:      348.44 µs
* binary search in one big map:      334.46 µs
* linear search in a per-length map: 178.84 µs
* binary search in a per-length map: 154.54 µs
* perfect hashing:                   170.87 µs
```
The winner of this competition is "binary search in a per-length map".
It does not introduce new dependencies, but has the slight disadvantage
that it uses one instance of `unsafe` code. I deem this disadvantage
acceptable, though.
N.b. I also tested whether a variant that stores only the replaced string
would be faster. This "optimization" proved to be slower for all
implementations except "binary search in a per-length map", for which it had
the same runtime. Since the "optimized" version showed no clear advantage, I
chose the easier-to-read slice-of-tuples variant (both layouts are sketched
below).
Obviously, for all measurements: YMMV.
Diffstat (limited to 'testing/templates/macro.html')
0 files changed, 0 insertions, 0 deletions