About
This page contains benchmark results for the new ETS options {write_concurrency, auto} and {write_concurrency, N} (where N is an integer). See commit 1b578e20634b2f5327c85879d009a60152583f52 and the two prior commits for more information about the new options.
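For reference, the options under test are passed to ets:new/2 as in the following minimal sketch (the table names are made up for illustration, and an OTP build that includes the commit above is assumed):

```erlang
%% Hypothetical table names; any write_concurrency value from the
%% benchmarked set (auto, true, or an integer N) can be used.
AutoTab  = ets:new(auto_tab,  [set, public, {write_concurrency, auto}]),
FixedTab = ets:new(fixed_tab, [set, public, {write_concurrency, 1024}]).
```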
Benchmark Description
The benchmark measures how many ETS operations per second X Erlang processes can perform on a single table.
Each of the X processes repeatedly selects an operation to do from a given set of operations.
The likelihood that a certain operation will be selected is also given to the benchmark.
The table that the processes operate on is prefilled with 500K items before each benchmark run starts.
The source code for the benchmark is located in the function ets_SUITE:throughput_benchmark/0 (see "$ERL_TOP/lib/stdlib/test/ets_SUITE.erl").
Below is a list with brief descriptions of the operations:
insert = an ets:insert/2 call with a tuple whose key is a random key within the key range as the second parameter
delete = an ets:delete/2 call with a random key within the key range as the second parameter
lookup = an ets:lookup/2 call with a random key within the key range as the second parameter
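The per-process loop can be sketched as follows (a simplified, hypothetical version; the real implementation lives in ets_SUITE:throughput_benchmark/0):

```erlang
%% Each worker repeatedly picks an operation according to the given
%% probabilities and applies it to a random key in the key range.
worker(Table, KeyRange, Ops) ->
    Key = rand:uniform(KeyRange),
    case pick_op(Ops, rand:uniform()) of
        insert -> ets:insert(Table, {Key, Key});
        delete -> ets:delete(Table, Key);
        lookup -> ets:lookup(Table, Key)
    end,
    worker(Table, KeyRange, Ops).

%% Ops is a list of {Probability, Operation} pairs, e.g.
%% [{0.5, insert}, {0.5, delete}] for the first scenario below.
pick_op([{P, Op} | _Rest], R) when R =< P -> Op;
pick_op([{P, _Op} | Rest], R) -> pick_op(Rest, R - P).
```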
Machine Configuration
Machine:
Microsoft Azure VM instance: Standard D64s v3 (64 vCPUs, 256 GB memory):
2 * Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz (16 cores with hyper-threading)
2 NUMA nodes
32 cores
64 hardware threads
256 GB RAM
Operating System:
Description: Ubuntu 18.04.2 LTS
Linux version: 5.4.0-1051-azure
Run-time Parameters
The benchmark was started with the scheduler bind type parameter "+sbt tnnps":
erl +sbt tnnps -eval "ets_SUITE:throughput_benchmark(),erlang:halt()"
Benchmark Configuration
The benchmark configuration used can be found here.
Results
[Interactive throughput graphs. Each graph plots Operations/Second against the number of processes (1, 2, 4, 8, 16, 32 and 64). All tables are [set,public]. The compared table option sets are: {write_concurrency,auto} with and without {read_concurrency,true}; {write_concurrency,true} and {write_concurrency,N} for N in 1, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384 and 32768, each with and without {read_concurrency,true} (all with {decentralized_counters,true}); and four master-branch baselines ({write_concurrency,true}, {write_concurrency,true} plus {read_concurrency,true}, {read_concurrency,true} only, and neither, all with {decentralized_counters,true}).]

Scenario: 50% insert, 50% delete | Key Range Size: 1000000
Scenario: 10% insert, 10% delete, 80% lookup | Key Range Size: 1000000
Scenario: 1% insert, 1% delete, 98% lookup | Key Range Size: 1000000
Scenario: 100% lookup | Key Range Size: 1000000
ETS Benchmark Result Viewer
This page generates graphs from data produced by the ETS benchmark, which is defined in the function ets_SUITE:throughput_benchmark/0 (see "$ERL_TOP/lib/stdlib/test/ets_SUITE.erl").
Note that one can paste results from several benchmark runs into the field below. Results from the same scenario but from different benchmark runs will be relabeled and plotted in the same graph automatically. This makes comparisons of different ETS versions easy.
Note also that lines can be hidden by clicking on the corresponding label.
Paste the generated data in the field below and press the Render button:
Plot options:
Include Throughput Plot
Include % More Throughput Than Worst Plot
Include % Less Throughput Than Best Plot
Bar Plot
Same X Spacing Between Points
Show/hide toggles for the default table configurations:
[ordered_set,public]
[ordered_set,public,{write_concurrency,true}]
[ordered_set,public,{read_concurrency,true}]
[ordered_set,public,{write_concurrency,true},{read_concurrency,true}]
[set,public]
[set,public,{write_concurrency,true}]
[set,public,{read_concurrency,true}]
[set,public,{write_concurrency,true},{read_concurrency,true}]
Press the Render button to generate the graphs.