First of all, this question is not about the usefulness of microbenchmarks. I'm well aware of their purpose: to highlight one specific aspect, to show the performance characteristics of a very narrow case. Whether that should have any implications for your real work is a different story.
Some years ago, someone (Heinz Kabutz, I think?) noted that any benchmark whose results are worth looking at should run for at least a few minutes and be run at least 3 times, with the first run always discarded. This is meant to warm up the JVM and the environment, and to smooth out inconsistencies (background processes, network traffic, ...) and measurement noise. It made sense to me, my own experience pointed the same way, and I have followed this strategy ever since.
However, I see many people (for example) write benchmarks that run for only a few milliseconds (!) and are run just once. I know the accuracy of short-running benchmarks has improved in recent years, but it still strikes me as wrong. Should every microbenchmark run for at least one second and be run at least 3 times to produce useful output, or is that rule obsolete nowadays?
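To illustrate the pattern I mean, here is a hypothetical example of such a benchmark (not taken from any particular project): a single, cold, millisecond-scale measurement.

```java
// Hypothetical example of the kind of benchmark I mean: a single cold run,
// a few milliseconds long, with no warm-up and no repetition.
public class NaiveBenchmark {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += (long) i * i;          // the work under test
        }
        long elapsed = System.nanoTime() - start;
        // One number, with JIT compilation, scheduling and background load all mixed in.
        System.out.println("sum=" + sum + ", took " + elapsed / 1_000 + " us");
    }
}
```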
In my experience, you need to:
- Run multiple times (and discard the first result: VM warm-up and other effects)
- Take the minimum time
- Subtract the cost of the loop if you are looking at compute-intensive code
- Ideally run for many OS time slices (one time slice is typically ~10 ms) to minimise timing error, e.g. run for ~500 ms rather than ~5 ms.
I mostly work with compute-intensive code; if you have a different profile (e.g. memory-intensive, or lots of I/O), a different timing strategy may be needed.
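A minimal sketch of that strategy, assuming a compute-intensive task; the class name, the run counts and the ~500 ms sizing are my own illustration, not a prescription:

```java
import java.util.function.LongSupplier;

public class MinOfRunsBenchmark {

    static volatile long blackhole;          // sink so the JIT cannot discard the work

    // Time a single run of the task, in nanoseconds.
    static long timeOnce(LongSupplier task) {
        long start = System.nanoTime();
        blackhole = task.getAsLong();
        return System.nanoTime() - start;
    }

    // Run several times, discard the first result (VM warm-up etc.) and keep the minimum.
    static long bestOf(int runs, LongSupplier task) {
        timeOnce(task);                      // first run thrown away
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            best = Math.min(best, timeOnce(task));
        }
        return best;
    }

    public static void main(String[] args) {
        final int iterations = 50_000_000;   // sized so one run takes roughly ~500 ms, i.e. many OS time slices

        // Baseline loop with a trivial body, used to estimate the loop overhead.
        LongSupplier emptyLoop = () -> {
            long s = 0;
            for (int i = 0; i < iterations; i++) { s += i; }
            return s;
        };
        // The same loop around the computation under test.
        LongSupplier workLoop = () -> {
            long s = 0;
            for (int i = 0; i < iterations; i++) { s += (long) i * i; }
            return s;
        };

        long overhead = bestOf(5, emptyLoop);
        long total    = bestOf(5, workLoop);
        System.out.printf("work: %.1f ms (after subtracting %.1f ms of loop overhead)%n",
                (total - overhead) / 1e6, overhead / 1e6);
    }
}
```

With a harness like this, the discarded first run absorbs JIT compilation, taking the minimum filters out background noise, and the baseline loop makes the overhead you subtract explicit.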