[GRLUG] Random questions.
Michael Mol
mikemol at gmail.com
Fri Jul 27 00:14:49 EDT 2012
On Thu, Jul 26, 2012 at 10:31 PM, Bob Kline <bob.kline at gmail.com> wrote:
> On Thu, Jul 26, 2012 at 10:15 PM, Michael Mol <mikemol at gmail.com> wrote:
>>
>> On Thu, Jul 26, 2012 at 9:26 PM, Bob Kline <bob.kline at gmail.com> wrote:
>> > I still use a 32-bit version of Linux.
>> > What's to be gained or lost by going
>> > to a 64-bit version?
>>
>> Larger address space.
>>
>> Reduced memory fragmentation (so, less process memory bloat)
>>
>> Improved performance, owing to utilization of more general-purpose
>> registers.
>>
>> What's to lose? At worst, some small amount of memory due to increased
>> pointer size, but this shouldn't be a problem unless you're running in
>> embedded-type environments. Practically speaking, it's a nonissue for
>> most users.
>>
>
> http://en.wikipedia.org/wiki/Physical_Address_Extension
>
> Apparently more than 4GB of RAM
> is not necessarily an issue yet with
> 32-bit CPUs, thanks to PAE.
PAE is a joke if you have real 64-bit addressing available. PAE works
very much like the old EMS bank-switched memory of DOS days: you have a
window in your address space which you can swap out for some page
sitting in otherwise-unaddressable memory. In EMS, that window was
64KB. It's a bit larger these days, of course, but each process is
still stuck with a 32-bit virtual address space.
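That bank-switching scheme can be sketched as a toy model (all names
here are made up for illustration; this shows the EMS-style windowing
idea, not PAE's actual page-table mechanics):

```python
# Toy model of bank-switched memory: a small addressable "window" is
# remapped onto banks of a much larger physical store, so you can reach
# all the memory -- just never all of it at once.

class BankedMemory:
    def __init__(self, num_banks, bank_size):
        self.bank_size = bank_size
        # "Physical" memory is larger than the addressable window.
        self.physical = [bytearray(bank_size) for _ in range(num_banks)]
        self.mapped_bank = 0  # which bank the window currently exposes

    def select_bank(self, bank):
        # Swap the window to point at a different physical bank.
        self.mapped_bank = bank

    def read(self, offset):
        # All reads and writes go through the small window.
        return self.physical[self.mapped_bank][offset]

    def write(self, offset, value):
        self.physical[self.mapped_bank][offset] = value
```

With, say, BankedMemory(16, 64 * 1024) you can reach 1MB of storage
through a 64KB window, but only one bank at a time, and every switch
costs you a remapping step.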
>
> Anyway, I thought the 32-bit versus
> 64-bit differences might be more
> consequential, but just as it's taken
> ages to fully exploit 32-bit processors,
> that seems to also apply to 64-bit now.
> But apparently one is still typically an
> order of magnitude away in terms of
> memory sizes, and maybe other hardware,
> before native 64 bit makes any difference
> for the home user.
There's a huge difference between virtual addresses and physical
addresses. You don't need more than 3GB of RAM in order to see benefit
from a 64-bit address space. Understanding why means understanding how
memory allocations work...
When a program allocates memory, it tells the kernel, 'hey, I need a
contiguous block of memory at least this many bytes long.' When it's
done, it may tell the kernel that it no longer needs that memory.
This works great in theory, but you run into a few problems. First is
that there's a minimum allocation size; the kernel isn't going to
grant a program memory in chunks any smaller than the size of a single
page. (On x86, that's typically 4KB.) So if you need 256 bytes of
memory, the kernel will hand you 4KB. This minimum granularity is
softened somewhat by your C library managing memory carefully, of
course.
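The rounding involved is simple enough to sketch numerically
(pages_needed and bytes_granted are made-up helpers, not real APIs,
and 4KB pages are assumed):

```python
# Sketch of minimum-allocation granularity: the kernel hands out memory
# only in whole pages, so every request is rounded up to a page multiple.

PAGE_SIZE = 4096  # 4KB, the usual base page size on x86

def pages_needed(request_bytes, page_size=PAGE_SIZE):
    # Round up: even a 1-byte request costs a whole page.
    return max(1, -(-request_bytes // page_size))

def bytes_granted(request_bytes, page_size=PAGE_SIZE):
    # Total bytes the kernel actually hands over for the request.
    return pages_needed(request_bytes, page_size) * page_size
```

So a 256-byte request is granted 4096 bytes, and a 4097-byte request
costs two full pages.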
The second problem you face is when your program lives a long time and
makes a mixture of large and small requests (or even many small
requests followed by a large request). If your address space is
speckled with hundreds of tiny spots of allocated memory, you won't be
able to get a new chunk of RAM any larger than the largest contiguous
bit of free space. Think of it like files getting fragmented on your
disk, with the additional restriction that while your free space is
allowed to be fragmented, files *themselves* must not be. So, while
your filesystem starts off fine with no fragmentation, the creation
and removal of files of various sizes eventually causes your free
space to be fragmented.
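The effect can be simulated with a toy first-fit allocator over a flat
address space (ToyHeap and its methods are invented for illustration;
real allocators are far more sophisticated):

```python
# Toy first-fit allocator, to show how free space can be plentiful in
# total yet fragmented into pieces too small for one large request.

class ToyHeap:
    def __init__(self, size):
        self.size = size
        self.allocs = {}  # start address -> length of live allocations

    def _free_runs(self):
        # Yield (start, length) of each contiguous free run.
        pos = 0
        for start in sorted(self.allocs):
            if start > pos:
                yield (pos, start - pos)
            pos = start + self.allocs[start]
        if pos < self.size:
            yield (pos, self.size - pos)

    def alloc(self, length):
        # First fit: take the first free run big enough.
        for start, run in self._free_runs():
            if run >= length:
                self.allocs[start] = length
                return start
        return None  # no *contiguous* run large enough

    def free(self, start):
        del self.allocs[start]

    def total_free(self):
        return self.size - sum(self.allocs.values())

    def largest_free(self):
        return max((run for _, run in self._free_runs()), default=0)
```

Fill a 1000-byte heap with ten 100-byte blocks, free every other one,
and you have 500 bytes free in total -- yet a 200-byte request fails,
because no single free run is bigger than 100 bytes.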
The third problem relates to the first and second. Remember that while
your kernel will only hand out chunks of memory in multiples of 4KB,
your C library kindly breaks that into smaller pieces, and will
service further requests from what remains of those 4KB blocks without
bothering the kernel, if it can. Now, let's say you're done with some
of those pieces. You tell your C library you're done with that
memory...but there's still one 16 byte chunk of memory somewhere in
that 4KB block that you haven't decided you're done with, yet. The C
library can't give that 4KB back to the kernel until you've told it
you're done with that 16 byte chunk.
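That pinning behavior can also be modeled with a toy (ToyMalloc and
Page are invented names; a real C library tracks far more state, and
this sketch never reuses freed space within a page):

```python
# Toy model of the third problem: the C library carves kernel pages into
# small chunks, and can only return a page to the kernel once *every*
# chunk carved from it has been freed.

PAGE_SIZE = 4096

class Page:
    def __init__(self):
        self.live_chunks = 0  # chunks handed out and not yet freed
        self.used = 0         # bytes carved off so far

class ToyMalloc:
    def __init__(self):
        self.pages = []  # pages currently held from the "kernel"

    def alloc(self, nbytes):
        # Service small requests from a partially used page if possible.
        for page in self.pages:
            if PAGE_SIZE - page.used >= nbytes:
                break
        else:
            page = Page()          # otherwise get a fresh page
            self.pages.append(page)
        page.used += nbytes
        page.live_chunks += 1
        return page                # chunk handle: the page it lives in

    def free(self, page):
        page.live_chunks -= 1
        if page.live_chunks == 0:
            # Only now can the whole page go back to the kernel.
            self.pages.remove(page)

    def pages_held(self):
        return len(self.pages)
```

Carve 255 sixteen-byte chunks from one page, free all but one, and the
library still holds the full 4KB page: that single surviving 16-byte
chunk pins it.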
Firefox was widely known for 'leaking' memory. Aside from some poorly
written extensions holding references to objects, the cause wasn't
true memory leaks; it was the fragmentation of the process's address
space, as described above.
While a 64-bit address space doesn't help with the first or third
problems, it almost completely alleviates the second problem.
Further, while everybody touts the RAM limit as the biggest benefit of
the AMD64 architecture, it's _not_. The biggest benefits come from
other architectural improvements. Some of it comes from doubling the
number of general-purpose registers from 8 to 16 (remember back when
PowerPC was kicking x86's butt for performance? This was a big part of
the reason; the other half has since been addressed by the
introduction of Advanced Vector Extensions). And some of it comes from
a new calling convention that makes position-independent code (PIC)
not so much a pain in the butt.
And regarding your earlier comment about 64-bit being sufficiently
tested...we hit that point at least four years ago. I've been running
64-bit Linux (well, multilib) almost exclusively since 2008, and the
only thing that gave me any trouble was Skype...and that cleared up a
few years ago.
--
:wq