[GRLUG] FYI - old mysql and new gear

Bob Kline bob.kline at gmail.com
Thu Sep 22 15:10:12 EDT 2011


On Thu, Sep 22, 2011 at 2:49 PM, Michael Mol <mikemol at gmail.com> wrote:

> On Thu, Sep 22, 2011 at 2:37 PM, Bob Kline <bob.kline at gmail.com> wrote:
> > Affinity is something like tendency.
> > The scheduler will do what you ask
> > "as long as practical for performance
> > reasons."
>
> That's not the way it works on Windows. It would surprise me greatly
> if that were the way it worked on Linux. On Windows, affinity is a
> bitmask saying, "this process (or thread) is allowed to run on these
> CPUs". Not quite the same thing as "hey, run on these cores as long as
> it's convenient."
>
> Without any other instruction, Linux's scheduler already pins threads
> to individual cores until something needs that core more (such as an
> interrupt tied to that core). If I run a single process that spins a
> core at 100%, I see it stick to a single core for a long, long time,
> and then I see it bounce to a different core, and hang out there for a
> long, long time.
>

OK, there seems to be something missing
here.  As you say, Linux is not Windows.  And
even restricting a thread to one CPU, or a
set of CPUs, does not determine how it gets
run.  You're basically asking to alter the
scheduling function, and the question is how
far that's possible - i.e., how things
actually work.  The interface itself is
sketched just below.
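
For concreteness, a minimal sketch of the
interface in question - glibc's wrapper for
the sched_setaffinity(2) system call, which
asks the kernel to confine the caller to a
given CPU mask (untested, but the calls are
real):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);      /* start with an empty mask */
        CPU_SET(0, &set);    /* allow CPU 0 only          */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }

        for (;;)             /* spin so you can watch it  */
            ;                /* hang out in top(1)        */
    }

Build it with gcc, run it, and watch it in
top(1) with the last-used-CPU field enabled;
it should stay put on CPU 0.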

Unix had a feature, too, called the sticky
bit - apparently the only feature of Unix
that was ever patented.  It essentially said
that a process would be kept in memory as
long as possible, clearly to avoid the
relatively large cost of swapping it back in.
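
(On Linux today the sticky bit only means
anything on directories, but the nearest
equivalent I know of to "keep me in memory"
is mlockall(2).  A rough sketch - note it
needs root or a generous RLIMIT_MEMLOCK:

    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        /* lock all current and future pages in RAM,
           so none of them can be swapped out        */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
            perror("mlockall");
            return 1;
        }

        /* ... latency-sensitive work here ... */
        return 0;
    }

The same "avoid the cost of paging" idea,
just requested per process instead of being
marked on the binary.)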

Almost all notions about computer performance
go back 50 years or more, including
multiprocessors.  The ideas were there, but
the hardware was impractically expensive.
Now the hardware is relatively cheap, and the
question is how you actually make things go
faster, over and above what inherently faster
hardware, vastly more memory, etc., already
give you.  The only thing I haven't seen
considered is a processor per process.  In a
way, Unix was inherently built for this.

It's still the case that using more than one
processor comes down to how you can break up
a job.  Some jobs lend themselves to this -
e.g. grid calculations, as in the sketch
below - and many others do not.  And of
course this is the nitty-gritty of real-time
processing, which is dominated simply by how
fast your hardware is.  And advances there
owe a lot to things like bus widths and word
sizes, not to massively faster silicon.
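
To make the grid case concrete, a toy
decomposition with pthreads - the names
(smooth, band, N) are just mine for
illustration:

    #include <pthread.h>

    #define N       1024           /* grid is N x N       */
    #define NTHREAD 4              /* one band per thread */

    static double grid[N][N];

    struct band { int first, last; };  /* rows [first, last) */

    /* each thread relaxes its own band of rows; the bands
       don't overlap, so no locking is needed */
    static void *smooth(void *arg)
    {
        struct band *b = arg;
        int i, j;

        for (i = b->first; i < b->last; i++)
            for (j = 1; j < N - 1; j++)
                grid[i][j] = (grid[i][j-1] + grid[i][j+1]) / 2.0;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREAD];
        struct band bands[NTHREAD];
        int t;

        for (t = 0; t < NTHREAD; t++) {
            bands[t].first = t * (N / NTHREAD);
            bands[t].last  = bands[t].first + N / NTHREAD;
            pthread_create(&tid[t], NULL, smooth, &bands[t]);
        }
        for (t = 0; t < NTHREAD; t++)
            pthread_join(tid[t], NULL);
        return 0;
    }

Four processors can chew on four bands at
once because the work splits cleanly; a job
with a serial dependency from one step to the
next gets no such benefit.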

Anyway, what's actually known here about
how the Linux scheduler works?
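
One way to probe it empirically: spin at 100%
and report every migration, using glibc's
sched_getcpu(3) (a quick sketch, not gospel):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int last = -1;

        for (;;) {
            int cpu = sched_getcpu();   /* CPU we're on now */
            if (cpu != last) {          /* print only moves */
                printf("now on CPU %d\n", cpu);
                last = cpu;
            }
        }
    }

If Michael's description is right, the output
should be bursty - long stretches on one CPU,
then the occasional hop.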

        -- Bob


>
> --
> :wq


