<br><br><div class="gmail_quote">On Thu, Sep 22, 2011 at 2:49 PM, Michael Mol <span dir="ltr"><<a href="mailto:mikemol@gmail.com">mikemol@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">On Thu, Sep 22, 2011 at 2:37 PM, Bob Kline <<a href="mailto:bob.kline@gmail.com">bob.kline@gmail.com</a>> wrote:<br>
> Affinity is something like tendency.<br>
> The scheduler will do what you ask<br>
> "as long as practical for performance<br>
> reasons."<br>
<br>
</div>That's not the way it works on Windows. It would surprise me greatly<br>
if that were the way it worked on Linux. On Windows, affinity is a<br>
bitmask saying, "this process (or thread) is allowed to run on these<br>
CPUs". Not quite the same thing as "hey, run on these cores as long as<br>
it's convenient."<br>
<br>
Without any other instruction, Linux's scheduler already pins threads<br>
to individual cores until something needs that core more (such as an<br>
interrupt tied to that core). If I run a single process that spins a<br>
core at 100%, I see it stick to a single core for a long, long time,<br>
and then I see it bounce to a different core, and hang out there for a<br>
long, long time.<br></blockquote><div><br></div><div>OK, there seems to be something missing</div><div>here. As you say, Linux is not Windows. And</div><div>even restricting a thread to a CPU, or set of</div><div>CPUs, does not determine how it's run. You're</div>
<div>basically asking to alter the scheduling function,</div><div>and the question is how much that's possible,</div><div>i.e., how things actually work.</div><div><br></div><div>Unix also has a feature called the sticky bit.</div>
<div>It is apparently the only feature of Unix that</div><div>was ever patented. It essentially said that a</div><div>process would be kept in memory as</div><div>long as possible - clearly to avoid the</div><div>relatively large cost of swapping it back in.</div>
<div><br></div><div>Almost all notions about computer performance</div><div>go back 50 years or more, including multiprocessors.</div><div>The ideas were there, but the hardware was </div><div>impractically expensive. Now the hardware is</div>
<div>relatively cheap, and the question is how you</div><div>actually make things go faster, over and above</div><div>that offered by inherently faster hardware, vastly</div><div>more memory, etc. The only thing I haven't</div>
<div>seen considered is a processor per process.</div><div>In a way, Unix was inherently built for this.</div><div><br></div><div>It's still the case that using more than one</div><div>processor comes down to how you can break</div>
<div>up a job. Some lend themselves to this - e.g.</div><div>grid calculations - and many others do not.</div><div>And of course this is the nitty-gritty of</div><div>real-time processing, which is dominated simply</div>
<div>by how fast your hardware is. And advances</div><div>there owe a lot to things like bus widths and</div><div>word sizes, not massively faster silicon.</div><div><br></div><div>Anyway, what's actually known here about</div>
<div>how the Linux scheduler works? </div><div><br></div><div> -- Bob</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<font color="#888888"><br>
<br>
--<br>
:wq<br>
</font><div><div></div><div class="h5"><br>
--<br>
This message has been scanned for viruses and<br>
dangerous content by MailScanner, and is<br>
believed to be clean.<br>
<br>
_______________________________________________<br>
grlug mailing list<br>
<a href="mailto:grlug@grlug.org">grlug@grlug.org</a><br>
<a href="http://shinobu.grlug.org/cgi-bin/mailman/listinfo/grlug" target="_blank">http://shinobu.grlug.org/cgi-bin/mailman/listinfo/grlug</a><br>
</div></div></blockquote></div><br>