Diff for /wikisrc/users/jym/benchmarks.mdwn between versions 1.1 and 1.3

version 1.1 (2010/07/10 23:02:32) → version 1.3 (2010/07/12 03:07:16)
Line 54: This machine uses HT - so technically sp
   
## PAE ##
   
   [[build-pae.png]]
   [[hackbench-pae.png]]
   [[sysbench-pae.png]]
   
Overall, PAE affects memory performance by 15-20%; this is particularly noticeable with sysbench and hackbench, where memory bandwidth and thread/process creation times are all slower.
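The 15-20% figure is simply the relative slowdown, (PAE time - native time) / native time. A quick awk one-liner computes it; the timings below are made-up placeholders for illustration, not the measured data:

```shell
# Hypothetical timings in seconds, for illustration only -- not the measured data.
native=100
pae=117
awk -v n="$native" -v p="$pae" 'BEGIN { printf "%.1f%%\n", (p - n) / n * 100 }'
# prints 17.0%
```

Plugging in the real sysbench or hackbench numbers gives the per-benchmark overhead.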
   
Userland remains largely unaffected, with differences in the 5% range; build.sh -j4 runs approximately 5% slower under PAE, in both the native and Xen cases.
   
Do not be surprised by the large "user" result for the build.sh benchmark in the native vs Xen case. Since the build is performed with -j4 (4 make sub-jobs in parallel), many processes may run concurrently under native i386, crediting more time to userland, while under Xen the kernel is not SMP capable.
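For reference, the benchmark above times a full parallel release build, roughly along these lines; the source tree location is an assumption, not a record of the actual invocation:

```shell
# Sketch only: a timed 4-way parallel release build from a NetBSD source tree.
cd /usr/src            # assumed location of the src/ tree
time ./build.sh -j4 release
```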
   
When comparing Xen with a native kernel with all CPUs turned offline except one, we observe an overhead of 15 to 20%, which mostly impacts performance at the "sys" (kernel) level and directly affects the total time of a full build.sh -j4 release. Contrary to original belief, Xen does add overhead. One exception is the memory bandwidth benchmark, where Xen (PAE and non-PAE) outperforms the native kernels in a UP context.
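To reproduce the single-CPU native comparison, secondary CPUs can be taken offline with cpuctl(8); the CPU index below is an assumption for a two-CPU box:

```shell
# Sketch only: force a UP-like configuration on a 2-CPU NetBSD machine.
cpuctl list          # show CPUs and their current state
cpuctl offline 1     # take the second CPU (cpu1) offline, leaving cpu0 running
# ... run the benchmarks ...
cpuctl online 1      # bring the second CPU back afterwards
```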
   
Notice that, in an MP context, the total build time between the full-MP system and the one with just one CPU running improves by approximately 15%, with "sys" nearly doubling its time credit when both CPUs are running. As the *src/* directory remained the same between the two tests, we can assume that the kernel was **concurrently** solicited twice as much in the bi-CPU case as in the mono-CPU case.
   
## Xen ballooning ##
   
   [[build-balloon.png]]
   [[hackbench-balloon.png]]
   [[sysbench-balloon.png]]
   
   
In essence, there is not much to say: results are all below the 5% margin, and adding the balloon thread did not drastically affect performance or process creation/scheduling. It is all noise. The timeout delay added by cherry@ seems reasonable (it can be revisited later, but does not seem critical).

