On Nov 16, 2007 1:16 AM, Michael Mol <mikemol@gmail.com> wrote:
> On Nov 16, 2007 12:57 AM, Tim Schmidt <timschmidt@gmail.com> wrote:
> > On Nov 16, 2007 12:45 AM, Bob Kline <bob.kline@gmail.com> wrote:
> > > I'd think swapping could be pretty fast in and out of memory, but
> > > the instant it involves a hard drive it's over.
> >
> > ? Swapping always involves block storage of some type. How much it
> > hurts, again, depends on the memory access patterns of your load, as
> > well as what percentage of the working set is swapped, and so on.
>
> Heh. It could start hurting even more. Apparently they're getting
> close to implementing swap over NFS. It's intended for diskless
> cluster environments, but it still stands to complicate performance
> issues. Sub-optimal performance? Do you have a bad network cable, or
> are your jobs eating too much memory?
>
> --
> :wq
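To put rough numbers on the working-set point above -- everything here
is a ballpark guess, not a measurement, and it charges every access to
a non-resident page as a full fault, which is pessimistic for
sequential loads:

ram_ns  = 100          # ~100 ns for a RAM access (rough 2007-era figure)
disk_ns = 8_000_000    # ~8 ms for a random disk read
nfs_ns  = 12_000_000   # guess: disk read plus a network round trip

def effective_ns(resident_fraction, backing_ns):
    # average cost per access when (1 - resident_fraction) of the
    # working set has to be faulted in from the backing store
    miss = 1.0 - resident_fraction
    return resident_fraction * ram_ns + miss * backing_ns

for pct in (1.0, 0.999, 0.99, 0.9):
    print(f"{pct:6.1%} resident: "
          f"disk {effective_ns(pct, disk_ns) / ram_ns:7.0f}x RAM, "
          f"NFS {effective_ns(pct, nfs_ns) / ram_ns:7.0f}x RAM")

Even with 99.9% of the working set resident, the average access comes
out around 80x slower than RAM with these numbers; swapping over NFS
just makes the same cliff a little higher.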
Defining efficiency on a system is complicated anyway. In order to
create the illusion of many users each able to run a job on a machine
at the same time, there has to be overhead introduced. In many cases
the most efficient mode is one job running. The trick is to be
somewhere between one job running and all overhead.

Something like a Linux version of the Laffer curve.

 -Bob
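P.S. A toy model of that curve, in case anyone wants to poke at it.
The 30% I/O wait and the 2% per-job multiplexing overhead are invented
numbers; the shape, not the values, is the point: throughput climbs
while extra jobs soak up otherwise-idle CPU, peaks, and then the
overhead eats it back down.

io_wait_fraction = 0.30   # fraction of a job's time spent blocked on I/O
overhead_per_job = 0.02   # CPU fraction burned per extra resident job

def useful_throughput(n_jobs):
    # CPU demand the jobs can actually generate (they block on I/O sometimes)
    demand = n_jobs * (1.0 - io_wait_fraction)
    # CPU left over after the scheduling/cache cost of multiplexing them
    available = max(0.0, 1.0 - overhead_per_job * (n_jobs - 1))
    return min(demand, available)

for n in range(1, 21):
    t = useful_throughput(n)
    print(f"{n:2d} jobs  {t:.2f}  " + "#" * int(t * 50))

With these made-up numbers the peak lands at two resident jobs; real
workloads move the peak around, but the rise-then-fall shape stays.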