# jbr.js – Just a Benchmark Runner
A simple tool to initialize benchmarking experiments, run them, and analyze their results.
Experiments that are created and executed with this tool are fully reproducible, as experiments are fully deterministic, and metadata on all exact installed dependency versions is emitted together with the results.
## Guides
## Requirements

You will need Node.js, as jbr is distributed as an npm package. For certain experiment types, you may also require Docker.
## Installation
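Assuming the tool is published as the `jbr` package on npm (as the `@jbr-experiment/*` and `@jbr-hook/*` scopes in this document suggest), installation would look like:

```bash
$ npm install -g jbr
```

or, using Yarn:

```bash
$ yarn global add jbr
```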
## Usage
This tool offers commands for executing the whole experimentation chain:
Full usage information can be obtained by running `jbr --help`.
### 1. Initialization
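A sketch of this step, assuming an `init` subcommand and reusing the placeholder type and name from the text below:

```bash
$ jbr init experiment-type my-experiment
$ cd my-experiment
```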
Running this command will initialize a new experiment of the given type (`experiment-type`) in a new directory with the provided experiment name (`my-experiment`). The experiment type must exist on npm under the `@jbr-experiment/*` scope. For example, the `watdiv` experiment can be used because the `@jbr-experiment/watdiv` package exists on npm. A full list of available experiment types can be found on npm under this scope.

The created directory will contain all default required files for running an experiment. You can initialize this directory as a git repository.
In most cases, you will have to configure at least one hook handler for your experiment, such as defining the SPARQL query engine you want to evaluate for a given benchmark experiment. Furthermore, you will usually need to edit the `jbr-experiment.json` file to configure your experiment.

### 2. Data Preparation
In order to run all preprocessing steps, such as creating all required datasets, invoke the prepare step:
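This is done via the `jbr prepare` command mentioned later in this document:

```bash
$ jbr prepare
```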
All prepared files will be contained in the `generated/` directory.

When running this command, existing files within `generated/` will not be overwritten by default. These can be forcefully overwritten by passing the `-f` option.

### 3. Running Experiments
Once the experiment has been fully configured and prepared, you can run it:
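Using the `jbr run` command referenced later in this document:

```bash
$ jbr run
```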
Once the run step completes, results will be present in the `output/` directory.

## Configurability
All experiments will have a `jbr-experiment.json` file in which the properties of an experiment can be set. The parameters of such a config file are dependent on the type of experiment that is being initialized.

Depending on the experiment type, you may also need to change certain files within the `input/` directory.

### Hooks
Most experiment types expose certain hooks, which allow you to plug in certain hook handlers. For example, the WatDiv experiment type exposes the `hookSparqlEndpoint` hook. This hook is used to plug in a certain SPARQL query engine, over which WatDiv will run its benchmark.

Hook handler types must exist on npm under the `@jbr-hook/*` scope. For example, the `sparql-endpoint-comunica` hook handler can be used because the `@jbr-hook/sparql-endpoint-comunica` package exists on npm. A full list of available hook handler types can be found on npm under this scope.

## Directory structure
A jbr experiment typically has the following directory structure:
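A sketch of the layout, based only on the files and directories mentioned in this document (an actual experiment may contain additional files):

```text
my-experiment/
├── jbr-experiment.json   # main experiment configuration
├── input/                # experiment-type-specific input files
├── generated/            # created by the prepare step
└── output/               # created by the run step
```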
To enable reproducibility, it is highly recommended to place these experiments under version control, e.g. via a git repository.
The following files and directories do not have to be added to this repository, as they are derived and can be reproduced:

- `generated/`
- `output/`
## Advanced
### Combinations-based Experiments
Certain experiments may be designed to compare the effects of different factors against each other, such as full factorial experiments or fractional experiments. For instance, this may be used to compare the effect of running a certain system once with algorithm A and once with algorithm B, and measuring the effects.
Using jbr, you can easily set up and handle such combinations-based experiments as follows:
#### 1. Initialize experiment
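A sketch of this step, assuming an `init` subcommand and reusing the placeholder names from earlier in this document, with the `-c` flag described below:

```bash
$ jbr init -c experiment-type my-experiment
```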
Experiments that should be combinations-based must be initialized using the `-c` flag. Instead of creating a `jbr-experiment.json` file, this will create a `jbr-experiment.json.template` file, together with an accompanying `jbr-combinations.json` file.

#### 2. Define combinations
Inside the `jbr-experiment.json.template` file (and input text files), you may define any number of variables using the `%FACTOR-variableName%` syntax. Inside the `jbr-combinations.json` file, you can define corresponding values for the given variables. Because `FullFactorialCombinationProvider` is used in `jbr-combinations.json`, all combinations (4) of the `cpu` and `memory` variables will apply to this experiment. If the generated directory can be reused across combinations, then `commonGenerated` can be set to true.
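As a sketch of the two files (the exact configuration syntax is an assumption and may differ from the real jbr format), `jbr-experiment.json.template` could contain placeholders such as:

```json
{
  "resourceConstraints": {
    "cpu_percentage": "%FACTOR-cpu%",
    "memory_limit": "%FACTOR-memory%"
  }
}
```

and `jbr-combinations.json` could assign two values to each variable, yielding the four combinations mentioned above:

```json
{
  "@type": "FullFactorialCombinationProvider",
  "commonGenerated": false,
  "factors": {
    "cpu": [ "50", "100" ],
    "memory": [ "8g", "16g" ]
  }
}
```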
`FractionalCombinationProvider` may also be used if only select combinations should apply.

#### 3. Regenerate combinations
Each time you make a change inside your input files, `jbr-combinations.json`, or `jbr-experiment.json.template`, you should (re)generate the instantiated combinations by running `jbr generate-combinations`. This will create a new `combinations/` directory, containing sub-directories for all experiment combinations. Files in this directory should not be modified manually; they should only be managed via the template files and `jbr generate-combinations`.

#### 4. Handle like any other experiment
From this point on, you can manage this combinations-based experiment like any other jbr experiment.
Concretely, `jbr prepare` will prepare all combinations, and `jbr run` will also run all combinations. If you just want to run a single combination, you can pass its combination id to `jbr run` via the `-c` option.

### Docker Resource Constraint
Some experiments or hooks may be executed in Docker containers. For these cases, jbr exposes a reusable helper component for defining Docker resource constraints.
For example, the following experiment is configured to use at most 90% of the CPU, and 10MB of memory.
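A sketch of such a configuration fragment inside `jbr-experiment.json` (assuming a `StaticDockerResourceConstraints` component; the surrounding experiment fields are omitted):

```json
{
  "resourceConstraints": {
    "@type": "StaticDockerResourceConstraints",
    "cpu_percentage": 90,
    "memory_limit": "10m"
  }
}
```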
All possible parameters (all are optional):

- `cpu_percentage`: Percentage (0-100) of the total CPU power that can be used. E.g. when fully consuming 4 cores, this value must be set to 100.
- `memory_limit`: Memory usage limit, e.g. `'10m'`, `'1g'`.

### Running against a different Docker instance
By default, Docker-based experiments will look for and use the Docker installation on your local machine. In some cases, you may want to run experiments within remote Docker instances. In such cases, you can use the `-d` or `--dockerOptions` option to pass a custom Docker options file when running an experiment, for example via a `docker-options.json` file pointing either to the default socket or to a different host. More configuration options can be found at https://github.com/apocas/dockerode#getting-started.
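A sketch of how this might look (the option names `socketPath`, `host`, and `port` follow the dockerode documentation linked above):

```bash
$ jbr run -d docker-options.json
```

`docker-options.json` for the default socket:

```json
{
  "socketPath": "/var/run/docker.sock"
}
```

`docker-options.json` for running against a different host:

```json
{
  "host": "http://192.168.1.10",
  "port": 3000
}
```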
## License
jbr.js is written by Ruben Taelman.
This code is copyrighted by Ghent University – imec and released under the MIT license.