
Benchmarks


How to add a new benchmark to jfa

So you want to add a new benchmark, or unit test, to jfa? Great: your help on this open-source project is much appreciated. First, if you do not yet have a fork and a local clone of the jfa repository, please read the Wiki page on setting up the jfa repository. In this guide, we go through the process of adding a benchmark to jfa step by step. You will learn where to change the underlying R code and how to make a pull request to merge your changes.

Step 1: Create a new unit test

Navigate to your local clone of the jfa repository. It contains a folder tests, which in turn contains a folder testthat with all existing unit tests. Unit tests for functions follow the naming structure test-function-functionname.R. Benchmarks follow the naming structure test-comparison-benchmark.R.


To add a new benchmark, create a new .R file in the tests/testthat folder. For example, the new file may be called test-comparison-newbenchmark.R.


The file test-comparison-newbenchmark.R must contain at least a call to context(), a call to test_that(), and a call to expect_equal(). In the call to context(), specify the name of the benchmark. Inside test_that(), perform the comparison that is verified by expect_equal(). For example, the following block of code verifies that the sample size for a materiality of 1% (given the Poisson distribution, zero expected errors, and 95% confidence) is 300:

context("7. Name of the benchmark")

test_that(desc = "Name of the individual comparison", {
	jfaRes <- planning(materiality = 0.01, conf.level = 0.95, expected = 0, likelihood = "poisson") # Compute the planning result to be tested
	expect_equal(jfaRes[["n"]], 300)
})

The code that goes within test_that() can facilitate complex comparisons. For such an example, check out the test in the file test-comparison-appendix-a.R.
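As a more concrete illustration, a benchmark can also loop over several cases and compare each result against a vector of reference values. The sketch below is hypothetical: the reference values follow from the Poisson sample-size calculation at 95% confidence with zero expected errors, but in a real benchmark they should be taken from the external source you are benchmarking against.

context("8. Name of a multi-case benchmark")

test_that(desc = "Sample sizes match the benchmark for several materialities", {
	materiality <- c(0.01, 0.02, 0.05)
	reference <- c(300, 150, 60) # Hypothetical reference values; use the ones from your benchmark source
	for (i in seq_along(materiality)) {
		jfaRes <- planning(materiality = materiality[i], conf.level = 0.95, expected = 0, likelihood = "poisson")
		expect_equal(jfaRes[["n"]], reference[i])
	}
})

Note that expect_equal() also accepts a tolerance argument, which is useful when the benchmark source reports rounded values.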

Step 2: Checking your unit test

You can run all available unit tests via the devtools package in R. First, open a new R session and set your working directory to the location of the jfa repository.

setwd("C:/location/to/your/jfa") # Set the package root

If you don't have the devtools package installed, install it using:

install.packages("devtools")

You can now run all available unit tests by typing devtools::test(). Before you go to the next step, verify that all unit tests pass and show no errors.
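While iterating on a single benchmark, it can be faster to run only that file. The filter argument of devtools::test() matches against the test file names, so the following call runs only test-comparison-newbenchmark.R:

devtools::test(filter = "comparison-newbenchmark") # Run only the new benchmark file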

Step 3: Making a pull request

When the new benchmark passes, you can commit and push your changes to your fork of the jfa repository. Next, open a new pull request to merge your changes into the development branch of koenderks/jfa. First, navigate to the koenderks/jfa repository on GitHub. There, go to the Pull requests tab and click New pull request.


Click compare across forks and choose the branch that contains your added benchmark.


Click Create pull request to submit it. You are now finished adding the benchmark. Keep an eye on the status and conversation in the pull request to see whether the maintainer requests any changes.
