Sub-quadratic decrease of throughput as the length of the JSON object increases
On contemporary CPUs, parsing a JSON object with an additional field whose value has 1,000,000 decimal digits (~1 MB) can take ~13 seconds:
[info] Benchmark (size) Mode Cnt Score Error Units
[info] ExtractFieldsReading.weePickle 1 thrpt 3 3302801.285 ± 1260746.202 ops/s
[info] ExtractFieldsReading.weePickle 10 thrpt 3 3120990.432 ± 202607.161 ops/s
[info] ExtractFieldsReading.weePickle 100 thrpt 3 814496.851 ± 15939.790 ops/s
[info] ExtractFieldsReading.weePickle 1000 thrpt 3 49105.311 ± 21893.732 ops/s
[info] ExtractFieldsReading.weePickle 10000 thrpt 3 674.598 ± 44.746 ops/s
[info] ExtractFieldsReading.weePickle 100000 thrpt 3 7.022 ± 0.054 ops/s
[info] ExtractFieldsReading.weePickle 1000000 thrpt 3 0.070 ± 0.002 ops/s
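For reference, a payload like the one benchmarked can be generated in a few lines of Scala. The field names below (`s`, `bigNumber`) are illustrative only; the actual fixtures live in the jsoniter-scala benchmark suite:

```scala
// Build a JSON object whose extra field is a number with `digits` decimal digits.
// Field names are hypothetical, not the benchmark's actual schema.
def genJson(digits: Int): String = {
  val big = "1" + "0" * (digits - 1) // a 1 followed by (digits - 1) zeros
  s"""{"s":"VVV","bigNumber":$big}"""
}

val json = genJson(1000000) // a ~1 MB document dominated by a single number token
```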
The root cause is probably in the jackson-core library, but I'm reporting it here hoping that a hot fix that simply skips unwanted values can be applied at the weePickle level.
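To sketch the idea behind such a fix: a field that is not needed can be skipped by scanning past the number token character by character in O(n), instead of materializing it as a BigDecimal (which is the likely source of the super-linear cost). This is an illustrative sketch of the technique, not weePickle's or Jackson's actual API:

```scala
// Advance past a JSON number starting at index `i` without building a value.
// Runs in O(length of the number) and allocates nothing. Deliberately permissive
// about number syntax, since the goal is only to find where the token ends.
def skipNumber(s: String, i: Int): Int = {
  var j = i
  if (j < s.length && s(j) == '-') j += 1
  while (j < s.length && (s(j).isDigit || s(j) == '.' ||
         s(j) == 'e' || s(j) == 'E' || s(j) == '-' || s(j) == '+')) j += 1
  j
}
```

By contrast, constructing a `java.math.BigDecimal` from a million-digit string does work that is roughly quadratic in the digit count on the JDK versions involved, which would match the ~100x throughput drop per 10x size increase seen above.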
weePickle version: 1.8.0
Scala version: 2.13.10
JDK version: 17
Steps to reproduce
To run the benchmarks on your JDK:
- Install the latest version of sbt and/or ensure that it is already installed properly:
  sbt about
- Clone the jsoniter-scala repo:
  git clone --depth 1 https://github.com/plokhotnyuk/jsoniter-scala.git
- Enter the cloned directory and check out the specific branch:
  cd jsoniter-scala
  git checkout jackson-DoS-by-a-big-number
- Run a benchmark that reproduces the issue:
  sbt clean 'jsoniter-scala-benchmarkJVM/jmh:run -wi 3 -i 3 ExtractFieldsReading.weePickle'