criterion performance measurements

overview

Crypto.Macaroon/create

                     lower bound    estimate       upper bound
OLS regression       xxx            xxx            xxx
R² goodness-of-fit   xxx            xxx            xxx
Mean execution time  11.65 μs       12.60 μs       13.60 μs
Standard deviation   3.004 μs       3.503 μs       4.226 μs

Outlying measurements have a severe (98.0%) effect on the estimated standard deviation.

Crypto.Macaroon/mint

                     lower bound    estimate       upper bound
OLS regression       xxx            xxx            xxx
R² goodness-of-fit   xxx            xxx            xxx
Mean execution time  19.26 μs       20.42 μs       21.43 μs
Standard deviation   2.732 μs       3.448 μs       4.393 μs

Outlying measurements have a severe (94.3%) effect on the estimated standard deviation.
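
The benchmark names above follow criterion's group/benchmark naming scheme: a "Crypto.Macaroon" group containing "create" and "mint" entries. As a rough sketch only (the report does not show the benchmarked code, so createMacaroon, mintMacaroon, and their arguments below are hypothetical placeholders), a suite producing these names could be declared like this:

    -- Hypothetical criterion suite; only the group/benchmark names are
    -- taken from this report, the workloads are placeholders.
    import Criterion.Main

    main :: IO ()
    main = defaultMain
      [ bgroup "Crypto.Macaroon"
          [ bench "create" $ whnf createMacaroon ("key", "identifier", "location")
          , bench "mint"   $ whnf mintMacaroon   ("key", "identifier", "location")
          ]
      ]
      where
        -- Placeholder workloads standing in for the real library functions.
        createMacaroon (k, i, l) = length k + length i + length l
        mintMacaroon   (k, i, l) = length k * length i * length l

Running the compiled program with --output report.html produces an HTML report of the same shape as this one.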

understanding this report

In this report, each function benchmarked by criterion is assigned a section of its own. The charts in each section are active; if you hover your mouse over data points and annotations, you will see more details.

Under the charts is a small table. The first two rows are the results of a linear regression run on the measurements displayed in the right-hand chart.
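
Concretely, criterion times batches with increasing iteration counts and fits a straight line through those measurements: the slope of the ordinary least-squares fit is the estimated time per iteration, and R² says how much of the variation the line explains (values near 1 indicate a clean fit). In the usual notation, with y_i the total time measured for batch i and x_i its iteration count:

    \[
      y_i = \alpha + \beta x_i + \varepsilon_i,
      \qquad
      \hat{\beta} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
                         {\sum_i (x_i - \bar{x})^{2}},
      \qquad
      R^{2} = 1 - \frac{\sum_i (y_i - \hat{y}_i)^{2}}{\sum_i (y_i - \bar{y})^{2}}.
    \]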

We use a statistical technique called the bootstrap to provide confidence intervals on our estimates. The bootstrap-derived upper and lower bounds on estimates let you see how accurate we believe those estimates to be. (Hover the mouse over the table headers to see the confidence levels.)
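
The bootstrap itself is simple to sketch. The snippet below is an illustration of the idea only, not criterion's implementation (criterion uses the statistics package's resampling machinery, which is more careful about bias); the function names here are made up for the example:

    -- Illustrative bootstrap: resample the measurements with replacement,
    -- recompute the statistic on each resample, and read the confidence
    -- bounds off the distribution of those recomputed values.
    import Data.List (sort)
    import System.Random (randomRIO)

    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    -- One resample: draw as many values as we have, with replacement.
    resample :: [Double] -> IO [Double]
    resample xs = mapM (\_ -> (xs !!) <$> randomRIO (0, length xs - 1)) xs

    -- Simple percentile bootstrap: a 95% interval for the mean of xs.
    bootstrapCI :: Int -> [Double] -> IO (Double, Double)
    bootstrapCI trials xs = do
      means <- mapM (\_ -> mean <$> resample xs) [1 .. trials]
      let sorted = sort means
          pick p = sorted !! floor (p * fromIntegral (trials - 1))
      return (pick 0.025, pick 0.975)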

A noisy benchmarking environment can cause some or many measurements to fall far from the mean. These outlying measurements can have a significant inflationary effect on the estimate of the standard deviation. We calculate and display an estimate of the extent to which the standard deviation has been inflated by outliers.
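
Criterion's own calculation of that figure is more involved, but the idea can be approximated crudely: flag measurements outside the usual 1.5 × IQR fences and see how much the standard deviation shrinks without them. The sketch below is only that approximation, not the formula criterion uses:

    import Data.List (sort)

    -- Sample standard deviation.
    stdDev :: [Double] -> Double
    stdDev xs = sqrt (sum [(x - m) ^ (2 :: Int) | x <- xs] / fromIntegral (length xs - 1))
      where m = sum xs / fromIntegral (length xs)

    -- Rough fraction by which 1.5 * IQR outliers inflate the standard
    -- deviation; a stand-in for criterion's more careful estimate.
    outlierInflation :: [Double] -> Double
    outlierInflation xs = 1 - stdDev kept / stdDev xs
      where
        sorted = sort xs
        q p    = sorted !! floor (p * fromIntegral (length xs - 1))
        iqr    = q 0.75 - q 0.25
        lowF   = q 0.25 - 1.5 * iqr
        highF  = q 0.75 + 1.5 * iqr
        kept   = [x | x <- xs, x >= lowF, x <= highF]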