NAME
MST-Bench - Maximum Sustainable Throughput Benchmark

SYNOPSIS
mst-bench [--help] [--trials N] [--file-size GiB]

OPTIONS
PURPOSE
MST-Bench is a simple program for reporting the maximum sustainable memory and disk throughput one could expect from an optimal program. It does not attempt to report raw hardware transfer rates (which are not generally useful) or to estimate the performance of real programs.

The goal is to provide a theoretical maximum against which real programs can be compared. For example, if MST-Bench reports 10 seconds to sequentially read the generated file, then we know that 10 seconds is the best time any program can achieve while streaming this file. If "fgrep string bench.tmpfile" takes 15 seconds, then it is achieving about 2/3 of the maximum theoretical disk throughput. We might then investigate whether fgrep's CPU bottleneck can be reduced to the point where it can keep up with the disk input. Once fgrep runs close to 10 seconds on this file, we know that it cannot be sped up much further.

DESCRIPTION
MST-Bench estimates maximum sustainable throughput for memory and disk under typical circumstances. It runs the following speed tests:

- Sequential memory access over a small array that should produce a high cache hit ratio, where most accesses are satisfied by the cache.

- Sequential memory access over a large array that should produce a low cache hit ratio, where many accesses are not satisfied by the cache.

- Sequential disk write of a file much larger than physical memory, so that disk buffering has minimal impact and the reported throughput represents a sustainable speed for the disk hardware.

- Sequential disk read of a file much larger than physical memory, again so that disk buffering has minimal impact.

- Sequential disk rewrite of a file much larger than physical memory, again so that disk buffering has minimal impact. In many file systems, overwriting shows different performance characteristics than writing a new file.

- Random disk read of a file much larger than physical memory, again so that disk buffering has minimal impact. The random read reads the same file as the sequential read, reading every block in the file exactly once but in random order. This provides some idea of the latency of disk access. Note that dividing the file into a larger number of smaller random reads will result in lower performance.

Illustrative sketches of these access patterns appear at the end of this page.

FILES
bench.tmpfile - file generated in the current directory for the disk speed tests

BUGS
Please report bugs to the author and send patches in unified diff format (see diff(1) for more information).

AUTHOR
J. Bacon
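
The sketches below illustrate, in C, the access patterns described under DESCRIPTION. They are not MST-Bench's source code; the array sizes, block sizes, repeat counts, and the use of clock_gettime() are illustrative assumptions. The first sketch is a sequential memory scan, run once over a small array (high cache hit ratio) and once over a large array (low cache hit ratio):

    /*
     * Sketch of a sequential memory-throughput test: walk an array from
     * start to finish and report MiB per second.  Sizes and repeat counts
     * are illustrative, not MST-Bench's actual parameters.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double   scan_rate(size_t bytes, int reps)
    {
        size_t          n = bytes / sizeof(long);
        long            *array = malloc(n * sizeof(long));
        volatile long   sum = 0;    /* volatile keeps the loop from being optimized away */
        struct timespec start, end;

        if ( array == NULL )
            return 0.0;
        for (size_t i = 0; i < n; ++i)          /* Touch every element once */
            array[i] = (long)i;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int r = 0; r < reps; ++r)          /* Sequential read passes */
            for (size_t i = 0; i < n; ++i)
                sum += array[i];
        clock_gettime(CLOCK_MONOTONIC, &end);

        free(array);
        double  seconds = (end.tv_sec - start.tv_sec)
                          + (end.tv_nsec - start.tv_nsec) / 1e9;
        return (double)bytes * reps / seconds;
    }

    int     main(void)
    {
        /* Small array: fits in cache; repeat many times for a measurable interval */
        printf("Small array: %.0f MiB/s\n",
               scan_rate(256 * 1024, 4096) / 1048576.0);
        /* Large array: far exceeds typical cache sizes, so many accesses miss */
        printf("Large array: %.0f MiB/s\n",
               scan_rate(256UL * 1024 * 1024, 4) / 1048576.0);
        return 0;
    }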
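A sequential disk write in the spirit of the tests above writes bench.tmpfile in fixed-size blocks and flushes buffered data with fsync(2) inside the timed region. The 1 MiB block size and 16 GiB total size below are assumptions; a real run should use a file well beyond physical memory.

    /*
     * Sketch of a sequential-write throughput test.  Block size, file size,
     * and the decision to time the fsync() are illustrative choices.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define BLOCK_SIZE  (1024 * 1024)
    #define FILE_BLOCKS (16ULL * 1024)      /* 16 GiB total */

    int     main(void)
    {
        int     fd = open("bench.tmpfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        char    *buf = malloc(BLOCK_SIZE);
        struct timespec start, end;

        if ( fd == -1 || buf == NULL )
        {
            perror("setup");
            return 1;
        }
        memset(buf, 'x', BLOCK_SIZE);

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (unsigned long long b = 0; b < FILE_BLOCKS; ++b)
            if ( write(fd, buf, BLOCK_SIZE) != BLOCK_SIZE )
            {
                perror("write");
                return 1;
            }
        fsync(fd);      /* Flush buffered data before stopping the clock */
        clock_gettime(CLOCK_MONOTONIC, &end);
        close(fd);
        free(buf);

        double  seconds = (end.tv_sec - start.tv_sec)
                          + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%.1f MiB/s\n", FILE_BLOCKS * (BLOCK_SIZE / 1048576.0) / seconds);
        return 0;
    }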
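The random-read pattern can be sketched by shuffling the list of block indices and reading each block exactly once with pread(2). The 64 KiB block size and the use of rand(3) for the shuffle are assumptions; timing is omitted for brevity.

    /*
     * Sketch of the random-read pattern: every block of the file is read
     * exactly once, but in random order.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    #define BLOCK_SIZE  (64 * 1024)

    int     main(void)
    {
        int         fd = open("bench.tmpfile", O_RDONLY);
        struct stat st;
        char        *buf = malloc(BLOCK_SIZE);

        if ( fd == -1 || buf == NULL || fstat(fd, &st) != 0 )
        {
            perror("setup");
            return 1;
        }

        size_t  blocks = (size_t)st.st_size / BLOCK_SIZE;
        size_t  *order = malloc(blocks * sizeof(size_t));

        if ( order == NULL )
            return 1;
        for (size_t i = 0; i < blocks; ++i)
            order[i] = i;

        /* Fisher-Yates shuffle so each block index appears exactly once.
           rand() is adequate for a sketch; a real benchmark might prefer
           a better PRNG. */
        for (size_t i = blocks; i > 1; --i)
        {
            size_t  j = (size_t)rand() % i;
            size_t  tmp = order[i - 1];
            order[i - 1] = order[j];
            order[j] = tmp;
        }

        /* Read the blocks back in shuffled order */
        for (size_t i = 0; i < blocks; ++i)
            if ( pread(fd, buf, BLOCK_SIZE, (off_t)order[i] * BLOCK_SIZE) != BLOCK_SIZE )
            {
                perror("pread");
                return 1;
            }

        close(fd);
        free(buf);
        free(order);
        return 0;
    }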