Benchmark::Timer(3)    User Contributed Perl Documentation    Benchmark::Timer(3)

NAME
Benchmark::Timer - Benchmarking with statistical confidence

SYNOPSIS

    # Non-statistical usage
    use Benchmark::Timer;
    $t = Benchmark::Timer->new(skip => 1);

    for (1 .. 1000) {
        $t->start('tag');
        &long_running_operation();
        $t->stop('tag');
    }
    print $t->report;

    # --------------------------------------------------------------------

    # Statistical usage
    use Benchmark::Timer;
    $t = Benchmark::Timer->new(skip => 1, confidence => 97.5, error => 2);

    while ($t->need_more_samples('tag')) {
        $t->start('tag');
        &long_running_operation();
        $t->stop('tag');
    }
    print $t->report;

DESCRIPTION
The Benchmark::Timer class allows you to time portions of code conveniently, as
well as benchmark code by allowing timings of repeated trials. It is perfect
for when you need more precise information about the running time of portions
of your code than the Benchmark module will give you, but don't want to go all
out and profile your code.
The methodology is simple; create a Benchmark::Timer object, and
wrap portions of code that you want to benchmark with
"start()" and
"stop()" method calls. You can supply a
tag to those methods if you plan to time multiple portions of code. If you
provide error and confidence values, you can also use
"need_more_samples()" to determine,
statistically, whether you need to collect more data.
After you have run your code, you can obtain information about the
running time by calling the "results()"
method, or get a descriptive benchmark report by calling
"report()". If you run your code over
multiple trials, the average time is reported. This is wonderful for
benchmarking time-critical portions of code in a rigorous way. You can also
optionally choose to skip any number of initial trials to cut down on
irregularities in the first few runs (cache warm-up, lazy initialization,
and the like).

METHODS
In all of the following methods, $tag refers to the
user-supplied name of the code being timed. Unless otherwise specified,
$tag defaults to the tag of the last call to
"start()", or "_default" if
"start()" was not previously called with a
tag.
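
Since $tag defaults in this way, the tag can usually be omitted on the calls
that follow "start()"; a small sketch (parse_file() is a hypothetical stand-in
for the code being timed):

    $t->start('parse');
    parse_file($file);          # hypothetical work being timed
    $t->stop;                   # same as $t->stop('parse')
    print $t->result, "\n";     # same as $t->result('parse')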

$t = Benchmark::Timer->new( [options] );
    Constructor for the Benchmark::Timer object; returns a reference to a
    timer object. Takes the following named arguments:

    skip
        The number of trials (if any) to skip before recording timing
        information.

    minimum
        The minimum number of trials to run.

    error
        A percentage between 0 and 100 which indicates how much error you
        are willing to tolerate in the average time measured by the
        benchmark. For example, a value of 1 means that you want the
        reported average time to be within 1% of the real average time.
        "need_more_samples()" will use this value to determine when it is
        okay to stop collecting data.

        If you specify an error you must also specify a confidence.

    confidence
        A percentage between 0 and 100 which indicates how confident you
        want to be in the error measured by the benchmark. For example, a
        value of 97.5 means that you want to be 97.5% confident that the
        real average time is within the error margin you have specified.
        "need_more_samples()" will use this value to compute the estimated
        error for the collected data, so that it can determine when it is
        okay to stop.

        If you specify a confidence you must also specify an error.
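
    For example, a timer configured with all four options might look like
    this (a sketch; the option values are illustrative, not defaults):

        use Benchmark::Timer;
        my $t = Benchmark::Timer->new(
            skip       => 2,    # discard the first two (warm-up) trials
            minimum    => 10,   # always record at least ten trials
            error      => 1,    # tolerate 1% error in the measured average
            confidence => 95,   # ... at 95% confidence
        );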

$t->reset;
    Reset the timer object to the pristine state it started in. Erase all
    memory of tags and any previously accumulated timings. Returns a
    reference to the timer object. It takes the same arguments the
    constructor takes.
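
    Because "reset()" takes constructor arguments, a single timer object
    can be reused with different settings between benchmarks, for example:

        # Loosen the stopping criteria for a second, coarser benchmark
        $t->reset(skip => 0, error => 5, confidence => 90);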

$t->start($tag);
    Record the current time so that when "stop()" is called, we can
    calculate an elapsed time.

$t->stop($tag);
    Record timing information. If $tag is supplied, it must correspond to
    one given to a previously called "start()" call. It returns the
    elapsed time in milliseconds. "stop()" croaks if the timer gets out of
    sync (i.e. the number of "start()"s does not match the number of
    "stop()"s).

$t->need_more_samples($tag);
    Compute the estimated error in the average of the data collected thus
    far, and return true if that error exceeds the user-specified error.
    If a $tag is supplied, it must correspond to one given to a previously
    called "start()" call. This routine assumes that the data are normally
    distributed.
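
    The stopping rule is the usual normal-theory confidence interval. A
    rough sketch of the statistic involved (standard textbook arithmetic,
    not the module's actual implementation; the z-score 1.96 corresponds
    to 95% confidence and the 1% threshold to error => 1):

        use List::Util qw(sum);

        my @times = $t->data('tag');          # raw per-trial timings
        my $n     = @times;
        my $mean  = sum(@times) / $n;
        my $var   = sum(map { ($_ - $mean) ** 2 } @times) / ($n - 1);
        my $se    = sqrt($var / $n);          # standard error of the mean
        my $err   = 100 * 1.96 * $se / $mean; # CI half-width as a percentage
        my $more  = $err > 1;                 # true if more samples are needed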

$t->report($tag);
    Returns a string containing a simple report on the collected timings
    for $tag. This report contains the number of trials run, the total
    time taken, and, if more than one trial was run, the average time
    needed to run one trial and error information. "report()" will
    complain (via a warning) if a tag is still active.

$t->reports;
    In a scalar context, returns a string containing a simple report on
    the collected timings for all tags. The report is a concatenation of
    the individual tag reports, in the original tag order. In a list
    context, returns a hash keyed by tag and containing reports for each
    tag. The return value is actually an array, so that the original tag
    order is preserved if you assign to an array instead of a hash.
    "reports()" will complain (via a warning) if a tag is still active.
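
    A sketch of the two calling contexts:

        print scalar $t->reports;       # all reports as one string

        my %report_for = $t->reports;   # individual reports keyed by tag
        print $report_for{tag};

        my @pairs = $t->reports;        # (tag => report) pairs in tag order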

$t->result($tag);
    Return the time it took for $tag to elapse, or the mean time it took
    for $tag to elapse once, if $tag was used to time code more than once.
    "result()" will complain (via a warning) if a tag is still active.

$t->results;
    Returns the timing data as a hash keyed on tags where each value is
    the time it took to run that code, or the average time it took, if
    that code ran more than once. In scalar context it returns a reference
    to that hash. The return value is actually an array, so that the
    original tag order is preserved if you assign to an array instead of a
    hash.
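
    The same return value can therefore be captured three ways:

        my %results = $t->results;   # tag => time; hash order not guaranteed
        my @results = $t->results;   # (tag => time) pairs in original tag order
        my $results = $t->results;   # hash reference, in scalar context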

$t->data($tag), $t->data;
    These methods are useful if you want to recover the full internal
    timing data to roll your own reports.

    If called with a $tag, returns the raw timing data for that $tag as an
    array (or a reference to an array if called in scalar context). This
    is useful for feeding to something like the Statistics::Descriptive
    package.

    If called with no arguments, returns the raw timing data as a hash
    keyed on tags, where the values of the hash are lists of timings for
    that code. In scalar context, it returns a reference to that hash. As
    with "results()", the data is internally represented as an array so
    you can recover the original tag order by assigning to an array
    instead of a hash.
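
    Following the suggestion above, a minimal sketch of handing the raw
    timings for a tag to Statistics::Descriptive:

        use Statistics::Descriptive;

        my @times = $t->data('tag');   # raw per-trial timings for 'tag'
        my $stat  = Statistics::Descriptive::Full->new;
        $stat->add_data(@times);
        printf "median %g, std dev %g\n",
            $stat->median, $stat->standard_deviation;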

BUGS
Benchmarking is an inherently futile activity, fraught with uncertainty not
dissimilar to that experienced in quantum mechanics. But things are a little
better if you apply statistics.

LICENSE
This code is distributed under the GNU General Public License (GPL) Version 2.
See the file LICENSE in the distribution for details.

AUTHOR
The original code (written before April 20, 2001) was written by Andrew Ho
<andrew@zeuscat.com>, and is copyright (c) 2000-2001 Andrew Ho. Versions
up to 0.5 are distributed under the same terms as Perl.
Maintenance of this module is now being done by David Coppit
<david@coppit.org>.

SEE ALSO
Benchmark, Time::HiRes, Time::Stopwatch, Statistics::Descriptive