[NTLUG:Discuss] > 960 meg ram -- HIMEM kernel option -vs- 64bit Linux OS...

steve sjbaker1 at airmail.net
Wed Apr 26 21:39:01 CDT 2006


Richard Geoffrion wrote:
> What *IS* the deal with having more than XX amount of RAM in a linux 
> server?   You can have 2 gig but it won't use more than 960 Meg unless 
> you turn on the HIMEM option...in which case there will be 
> incompatibilities.  --OR-- you can just load up a 64bit Kernel (Ubuntu 
> 64, SLAMD64..etc) and then you won't have those 32bit memory 
> limitations---but you'll have other compatibility issues..
> 
> 
> So which is it?  Do I stick with say... Slackware 10.2-current (a 32 bit 
> OS) and compile in the HIMEM option or do I take the plunge and go to a 
> 64bit OS (www.slamd64.com) where I might have 'other' issues?
> 
> What ARE the respective issues?

Well, with a 32 bit CPU, a memory address is a 32 bit number, so it
can only refer to two-to-the-power-thirty-two bytes of memory -
which is 4,294,967,296 bytes.

That's 4Gbytes.
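
You can check the pointer width (and hence the address space) of the
build you're actually running with a trivial C program - just a
sketch, nothing here is specific to any particular distro:

   #include <stdio.h>

   int main(void)
   {
       unsigned bits = 8 * (unsigned)sizeof(void *);   /* 32 or 64 */

       printf("pointers here are %u bits wide\n", bits);
       if (bits == 32)
           printf("at most %llu bytes (4 Gbytes) are addressable\n",
                  1ULL << 32);
       else
           printf("the address space is 2^%u bytes - far beyond 4 Gbytes\n",
                  bits);
       return 0;
   }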

So the most you can possibly address is 4Gb.  However, the kernel
needs to be able to address memory in the literal order that the
bytes are in the physical RAM, but your user-processes need memory
to be mapped into a nice linear order using the MMU hardware.  If
you had 4Gbytes of physical memory, the kernel would have to re-map
the MMU every time it took control of the CPU - and again every
time it gave it back to the application process.  That's a S-L-O-W
process.  (I think this is what the HIMEM option does - but I'm not
100% sure).

It's better to limit the maximum process size to something less
than 4Gbytes so that the kernel can reserve part of the address
space for a linear mapping of physical memory and leave the rest
for the layout that each individual process uses.

Hence, if:

  M = AMOUNT OF PHYSICAL MEMORY
  P = MAXIMUM PER-PROCESS MEMORY

Then for good kernel performance, you need  P + M <= 4Gbytes
so there is enough address space to hold both memory mappings
in the MMU simultaneously.

So....

* You can have 3Gbytes of physical RAM - but then each process
   can only use at most 1Gbyte of it.  You can run three programs,
   each of which uses 1Gbyte - but you can't have a single process
   that uses more than 1Gbyte.

* You can have 2Gbytes of physical RAM - which allows all 2Gbytes
   to be used by a single process (well, a little less than that
   because the kernel needs some).

* If you have just 1Gbyte of physical RAM - then theoretically,
   each process could have 3Gbytes of address space - but since
   you've only *got* 1Gbyte, that's not much use and you're
   effectively back to 1Gbyte of memory per process.

Hence to allow the absolute maximum single process size, without
wasting CPU cycles to continually remap the MMU, you
need to have 2Gbytes of physical memory and a 2Gbyte per-process
limit.
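
If you want to see that per-process ceiling for yourself, here's a
rough C probe - purely an illustration, and the exact number it
reports depends on the kernel's overcommit settings and on any
ulimit in force:

   #include <stdio.h>
   #include <stdlib.h>

   #define CHUNK (64UL * 1024 * 1024)     /* reserve 64 Mbytes at a time */

   int main(void)
   {
       size_t total = 0;

       /* Keep reserving address space until the kernel refuses.  On a
        * 32 bit kernel this stops somewhere below 4 Gbytes however
        * much physical RAM is fitted; on a 64 bit kernel it can run
        * for quite a while before it gives up. */
       while (malloc(CHUNK) != NULL)
           total += CHUNK;

       printf("this process could reserve about %lu Mbytes\n",
              (unsigned long)(total / (1024 * 1024)));
       return 0;
   }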

When you use MORE than 4Gbytes in a 32bit machine, the memory has
to be reached through bank-switching tricks (PAE and the like) -
and things get even slower still.

With a 64 bit machine, an address is a 64 bit number, so you can
access two-to-the-power-sixty-four bytes - about four billion times
the 32 bit limit.  This issue should completely go away and let you
have an essentially unlimited amount of physical memory and use all
of it in a single process.
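
As a concrete (hypothetical) illustration of the difference, the
mmap() call below asks for one 6 Gbyte region - something a 32 bit
process can never hold, but a 64 bit one normally can:

   #include <stdio.h>
   #include <sys/mman.h>

   int main(void)
   {
       if (sizeof(size_t) < 8) {
           /* A 32 bit process can't even describe a 6 Gbyte mapping. */
           printf("32 bit address space - 6 Gbytes is out of reach\n");
           return 0;
       }

       size_t want = (size_t)6 << 30;                 /* 6 Gbytes */
       void *p = mmap(NULL, want, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

       if (p == MAP_FAILED)
           printf("the kernel would not grant a 6 Gbyte mapping\n");
       else
           printf("got a single 6 Gbyte mapping at %p\n", p);
       return 0;
   }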

NOTE: You may also need to use 'ulimit' (or 'limit' if you
use csh) to change the maximum per-process RAM limit.  This
can be set for each user so that on multiuser setups, no one
user can hog all of the RAM.
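
From inside a program, the limit that ulimit/limit adjusts shows up
as RLIMIT_AS (the address-space cap).  A quick way to inspect it -
again just a sketch:

   #include <stdio.h>
   #include <sys/resource.h>

   int main(void)
   {
       struct rlimit rl;

       /* RLIMIT_AS is the per-process address-space cap - the limit
        * that bash's 'ulimit -v' and csh's 'limit vmemoryuse'
        * normally adjust. */
       if (getrlimit(RLIMIT_AS, &rl) != 0)
           return 1;

       if (rl.rlim_cur == RLIM_INFINITY)
           printf("no per-process address-space limit is set\n");
       else
           printf("this process is capped at %llu Mbytes of address space\n",
                  (unsigned long long)rl.rlim_cur / (1024 * 1024));
       return 0;
   }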


