Macro frame_benchmarking::benchmarks

macro_rules! benchmarks {
    (
        $( $rest:tt )*
    ) => { ... };
}

Construct pallet benchmarks for weighing dispatchables.

The macro is built around the idea of complexity parameters, each named by a single letter (usually upper-cased in complexity notation, but lower-cased for use in this macro).

Complexity parameters (“parameters”) have a range which is a u32 pair. Every time a benchmark is prepared and run, this parameter takes a concrete value within the range. There is an associated instancing block, which is a single expression that is evaluated during preparation. It may use ? (i.e. return Err(…)) to bail with a string error. Here are a few examples:

// These two are equivalent:
let x in 0 .. 10;
let x in 0 .. 10 => ();
// This one calls a setup function and might return an error (which would be terminal).
let y in 0 .. 10 => setup(y)?;
// This one uses a code block to do lots of stuff:
let z in 0 .. 10 => {
  let a = z * z / 5;
  let b = do_something(a)?;
  combine_into(z, b);
}

Note that, due to parsing restrictions, if the from expression of a range is not a single token (i.e. a literal or constant), then it must be parenthesised.
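For instance (a brief sketch; THRESHOLD stands in for any constant and is not part of the macro):

// A literal or a single constant needs no parentheses:
let x in 0 .. THRESHOLD;
// A compound `from` expression must be parenthesised:
let y in (THRESHOLD / 2) .. THRESHOLD;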

The macro allows for a number of “arms”, each representing an individual benchmark. Using the simple syntax, the associated dispatchable function maps 1:1 with the benchmark and the name of the benchmark is the same as that of the associated function. However, extended syntax allows for arbitrary expressions to be evaluated in a benchmark (including, for example, on_initialize).
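For example, an arm can benchmark a hook rather than a dispatchable (a sketch only; fill_queue is a hypothetical setup helper, and calling on_initialize assumes the pallet's Hooks implementation is in scope):

on_initialize_worst_case {
  let n in 0 .. 1_000 => fill_queue::<T>(n)?;
}: { Pallet::<T>::on_initialize(1u32.into()); }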

Note that the ranges are inclusive on both sides. This is in contrast to ranges in Rust, which are left-inclusive and right-exclusive.

Each arm may also have a block of code which is run prior to any instancing and a block of code which is run afterwards. All code blocks may draw upon the specific value of each parameter at any time. Local variables are shared between the pre- and post-instancing code blocks, but do not leak from the interior of any instancing expressions.
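As a sketch of that scoping (illustrative names only; setup_n is a hypothetical helper):

scoped_example {
  // pre-instancing block: `caller` is visible in the instancer and below.
  let caller = account::<T>(b"caller", 0, benchmarks_seed);
  let n in 0 .. 100 => {
    let scratch = n * 2;      // `scratch` does not leak out of the instancer
    setup_n::<T>(scratch)?;
  };
  // post-instancing block: `caller` and `n` are still in scope here.
  let payload = vec![0u8; n as usize];
}: _(Origin::Signed(caller), payload)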

Example:

benchmarks! {
  where_clause {  where T::A: From<u32> } // Optional line to give additional bound on `T`.

  // first dispatchable: foo; this is a user dispatchable and operates on a `u8` vector of
  // size `l`
  foo {
    let caller = account::<T>(b"caller", 0, benchmarks_seed);
    let l in 1 .. MAX_LENGTH => initialize_l(l);
  }: _(Origin::Signed(caller), vec![0u8; l])

  // second dispatchable: bar; this is a root dispatchable and accepts a `u8` vector of size
  // `l`.
  // In this case, we explicitly name the call using `bar` instead of `_`.
  bar {
    let l in 1 .. MAX_LENGTH => initialize_l(l);
  }: bar(Origin::Root, vec![0u8; l])

  // third dispatchable: baz; this is a user dispatchable. It isn't dependent on length like the
  // other two but has its own complexity `c` that needs setting up. It uses `caller` (in the
  // pre-instancing block) within the code block. This is only allowed in the param instancers
  // of arms.
  baz1 {
    let caller = account::<T>(b"caller", 0, benchmarks_seed);
    let c in 0 .. 10 => setup_c(&caller, c);
  }: baz(Origin::Signed(caller))

  // this is a second benchmark of the baz dispatchable with a different setup.
  baz2 {
    let caller = account::<T>(b"caller", 0, benchmarks_seed);
    let c in 0 .. 10 => setup_c_in_some_other_way(&caller, c);
  }: baz(Origin::Signed(caller))

  // this is benchmarking some code that is not a dispatchable.
  populate_a_set {
    let x in 0 .. 10_000;
    let mut m = Vec::<u32>::new();
    for i in 0..x {
      m.push(i);
    }
  }: { m.into_iter().collect::<BTreeSet<u32>>() }
}

Test functions are automatically generated for each benchmark and are accessible to you when you run cargo test. All tests are named test_benchmark_<benchmark_name>, are implemented on the Pallet struct, and run in a test externalities environment. The test function runs your benchmark just like a regular benchmark, but tests only the lowest and highest values for each component. The function will return Ok(()) if the benchmark returns no errors.

You can optionally add a verify code block at the end of a benchmark to test any final state of your benchmark in a unit test. For example:

sort_vector {
  let x in 1 .. 10_000;
  let mut m = Vec::<u32>::new();
  for i in (0..x).rev() {
    m.push(i);
  }
}: {
  m.sort();
} verify {
  ensure!(m[0] == 0, "You forgot to sort!")
}

These verify blocks will not affect your benchmark results!

You can construct benchmark tests like so:

#[test]
fn test_benchmarks() {
  new_test_ext().execute_with(|| {
    assert_ok!(Pallet::<Test>::test_benchmark_dummy());
    assert_err!(Pallet::<Test>::test_benchmark_other_name(), "Bad origin");
    assert_ok!(Pallet::<Test>::test_benchmark_sort_vector());
    assert_err!(Pallet::<Test>::test_benchmark_broken_benchmark(), "You forgot to sort!");
  });
}