"The Linux Gazette...making Linux just a little more fun!"


(?) The Answer Guy (!)


By James T. Dennis, [email protected]
Starshine Technical Services, http://www.starshine.org/


(?) Ultra-DMA and the 8.4Gb IDE Disk Limit

From R. Brock Lynn on Sun, 17 Jan 1999

(?) Hi Jim,

We met briefly at USENIX '98. I sat in front of you in the Red Hat Admin Tutorial. :) I think you had asked me about bochs or something like that. But I haven't done anything with it for a while... limited drive space until just this xmas when I bought two brand new 10 gig IDE (ATA3) IBM Deskstar drives.

And I can't for the life of me get the full 10 gigs on each to be recognized! I get only a flat 8gig each!

I'm running Debian 2.0 Hamm, with Kernel 2.2.0-pre6 with a PPRO single processor board, made in 1995, with the latest BIOS upgrade my vendor has available, circa. Feb., 1997. (bought the thing in '97) Cybermax: www.cybmax.com was the vendor.

Anyhow, the darned IBM drives only show up under Linux as 8gig. To be precise, here is the output of "df". (I included the full output just in case the added data might be useful. Yep, I've got as many drives as IDE can handle.)


# df
Filesystem         1024-blocks  Used Available Capacity Mounted on
/dev/hda5             967677  880562    77116     92%   /
/dev/hda1            1028116 1017468    10648     99%   /mnt/c
/dev/hdb1            8609416   64304  8545112      1%   /mnt/bigboy1
/dev/hdd1            8615982   64304  8551678      1%   /mnt/bigboy2
/dev/sda4              98078   97394      684     99%   /mnt/zip
/dev/hdc              108240  108240        0    100%   /mnt/cdrom
(!) Not quite! You could have /dev/hdd --- for a total of four IDE drives on two channels. I've heard of people running more than that --- but I think that's just silly.

(?) And according to "bc": 8545112 1K blocks / 1024 blocks per meg / 1024 megs per gig = about 8 gigs

The c/h/s numbers printed on both drives: chs: 16383/16/63 lba: 19,807,200

(!) Hmm. Those don't add up. But I'm not surprised.

(?) I wish I knew how to calculate total space in megs using C/H/S numbers!

(!) Sectors are 512 bytes. You multiply cylinders (C), heads (H), and sectors per track (S) to get the total number of sectors. Think of a track as one head on one cylinder. That is to say that it is one concentric ring on one side of one platter.
That's all really a fiction since all of the high capacity drives in the last decade (everything over about 200Mb) have used "ZBR" (zone bit recording) and consequently don't physically have the same number of sectors per track on the outer "zones" (rings) of the platters as they do on the inner zones.
The drive electronics hide these details from the rest of the hardware so that the BIOS can "pretend" that there really is a uniform number of sectors per track across a given number of heads and cylinders. The drives (SCSI and IDE) will "auto translate" into BIOS-compatible disk addresses (CHS). (Actually SCSI controllers usually replace the BIOS routines that handle this --- but effectively the drive is still abstracting most of the details away from the controller and the OS.)
The BIOS was only set up to handle 10 bits of cylinder (1024 maximum), six bits of sector (per track) and eight bits of "head", which fit neatly into a 16-bit register and a one-byte register. Those limits were convenient for programming the 8086-based systems that were common about 20 years ago.
(They're pretty silly now.)
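[ As a quick check with 'bc' (which Brock already mentioned), the CHS and LBA figures printed on these drives imply two different capacities:

$ echo '16383 * 16 * 63 * 512' | bc     # capacity implied by the printed CHS
8455200768
$ echo '19807200 * 512' | bc            # capacity implied by the LBA sector count
10141286400

...which is why the printed CHS numbers "don't add up". ]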
In any event the famed 8.4Gb limit is derived from
max cylinders * max heads * max sectors per track
                            = maximum total sectors
or:
1024 * 255 * 63 = 16,450,560
which we convert to bytes, Megabytes and Gigabytes by:
16,450,560 * 512 = 8,422,686,720  (maximum total bytes)
                 / 1000 / 1000  = about 8,423 (maximum total Mb)
                 / 1000         = about 8.4   (maximum total Gb)
... note that we don't use 1024 to compute Mb and Gb here. That is common practice among drive manufacturers (and unheard of for memory chips). It has been a matter of some controversy, since those extra 24 K per Mb start to add up when you're counting them by the thousand.
I won't pretend to be authoritative on that subject. Suffice it to say that given the original constraints of the BIOS addressing scheme, the maximum addressable space (in 512 byte sectors) is between 7.8 and 8.4 Gb (depending on how you calculate your Gigabytes).
Over the years there have been various other limitations along the way. The trick of lying about the number of "heads" and claiming that there were 255 of them was the earliest way to overcome the "1024 cylinder problem" --- which had led to the early "540Mb" limit on IDE drives. Various ways of accomplishing this were labelled EIDE and ATA-2. We now have ATA-3 and UltraDMA.

(?) fdisk reports these numbers for each of the disks:


/dev/hdb:
=====================================================================
Disk /dev/hdb: 255 heads, 63 sectors, 1232 cylinders
Units = cylinders of 16065 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System

/dev/hdb1            1        1     1232  9896008+  83  Linux native
=====================================================================

/dev/hdd:
=====================================================================
Disk /dev/hdd: 16 heads, 63 sectors, 19650 cylinders
Units = cylinders of 1008 * 512 bytes

Device Boot   Begin    Start      End   Blocks   Id  System

/dev/hdd1            1        1    19650  9903568+  83  Linux native
=====================================================================

Strange, I know, that different numbers of cylinders and heads are reported for the two drives, since they are identical models: IBM #DTTA-351010.

(!) The drive's electronics will take all of the parts of any address (CHS) that are presented to it and multiply them all together to get a "linear block address" (LBA). So it really doesn't matter what your CMOS says.
However, you probably have to add lilo.conf directives to pass the drive's true "geometry" to the kernel (so it will ignore the CMOS values).
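[ As a sketch, such a "disk=" section in the global part of lilo.conf might look like the following. The geometry is the one fdisk reports for hdd above (19650 cylinders, 16 heads, 63 sectors), and the "bios=" codes are only the conventional numbering for the second and fourth BIOS drives --- adjust or omit them for your own setup:

disk = /dev/hdb
    bios      = 0x81
    cylinders = 19650
    heads     = 16
    sectors   = 63
disk = /dev/hdd
    bios      = 0x83
    cylinders = 19650
    heads     = 16
    sectors   = 63
]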

(?) Here is my /etc/lilo.conf in case it might help:


=========================================================================
boot     = /dev/hda          # Device containing boot sector
default  = 2.2.0-pre6        # Default image to load
prompt                       # Forces boot prompt
timeout  = 50                # Wait <val>/10 sec. after prompt then boot def

image = /boot/vmlinuz-2.0.33

label  = 2.0.33
root   = /dev/hda5
read-only
vga    = 8
append = "mem=143M"

image = /boot/vmlinuz-2.0.36

label  = 2.0.36
root   = /dev/hda5
read-only
vga    = 8
append = "mem=143M"

image = /boot/vmlinuz-2.2.0

label  = 2.2.0-pre6
root   = /dev/hda5
read-only
vga    = 8

other = /dev/hda1

label  = win95
table  = /dev/hda
==========================================================================
(!) First try adding the "linear" directive to your lilo.conf "Global" section.
See if that helps.
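[ In this case that just means one extra keyword near the top of the file shown above, before any of the per-image sections:

boot     = /dev/hda
linear                       # use linear (LBA) sector addresses in the boot map
default  = 2.2.0-pre6
prompt
timeout  = 50
]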

(?) I have each drive in LBA mode in the BIOS with the autodetected settings. CHS autodetected match the numbers printed on the drive, but the BIOS only sees 8 gig I believe.

I just don't know what the deal is.

There is some ruckus on "Ask Slashdot" about this same thing, how to overcome the 8gig barrier with Linux, but I'm at a loss after trying so many things.

http://slashdot.org/askslashdot/98/12/22/1143236.shtml

Perhaps you can help investigate this further, and finally put this problem to rest once and for all in the annals of Linux Gazette!

If there is any other info you may need about my system, please don't hesitate to ask...

And if I find a "Correct"[tm] solution, would you like me to post it to you for publication in LG? As it may be beneficial to many people. I will also post it to the maintainer of the Large Disk HOWTO (http://www.linux-howto.com/LDP/HOWTO/mini/Large-Disk.html) as well, for inclusion... if I actually get at a solution!

(!) Actually, Andries Brouwer, maintainer/author of the LargeDisk mini-HOWTO already has a small section on the 8Gb Linux IDE limit at:
http://metalab.unc.edu/LDP/HOWTO/mini/Large-Disk-7.html
... this could probably use a bit of elaboration.
Basically it suggests that recent kernels (2.0.35+ and 2.1.90+) should automatically handle the large drives --- but that they do a sanity check when the reported LBA capacity exceeds the C*H*S capacity by more than a certain amount. Presumably this sanity check is still biting you --- so it may be that you need to apply his suggested patch. (That replaces the sanity check with a stub that always returns the "O.K." value.)
I suspect that adding the "linear" directive to your lilo.conf (and running /sbin/lilo to rebuild the maps from it --- of course) will solve the problem. If that doesn't work, try adding appropriate "disk=" parameters to the lilo.conf. Then try this kernel patch.

(?) There is also a white paper on the so-called 8.4 gig limit from IBM, in case that might also help give you clues... as I'm still stumped:

http://www.storage.ibm.com/hardsoft/diskdrdl/library/8.4gb.htm

(!) It seems like you did a bit of leg work looking for the answer (so you get an A+ for effort). However, you probably should skim over the whole LargeDisk mini-HOWTO (even the boring parts).
Andries does mention the "linear" option in section 6. It's also listed in the lilo.conf man page (big surprise). Personally I think he might want to provide a bit more meat there, even if it only reiterates what he said earlier. Many people (including me) will just skip to the section labelled "8Gb IDE Limit." Some will not understand that they should be trying things from other sections of the same HOWTO.

(?) Sincerely,
R. Brock Lynn
Debian 2.0


(?) Ultra-DMA and the 8.4Gb IDE Disk Limit

From R. Brock Lynn on Mon, 18 Jan 1999

Jim Dennis wrote:


># df
>Filesystem         1024-blocks  Used Available Capacity Mounted on
>/dev/hda5             967677  880562    77116     92%   /
>/dev/hda1            1028116 1017468    10648     99%   /mnt/c
>/dev/hdb1            8609416   64304  8545112      1%   /mnt/bigboy1
>/dev/hdd1            8615982   64304  8551678      1%   /mnt/bigboy2
>/dev/sda4              98078   97394      684     99%   /mnt/zip
>/dev/hdc              108240  108240        0    100%   /mnt/cdrom

Not quite! You could have /dev/hdd --- for a total of four IDE drives on two channels. I've heard of people running more than that --- but I think that's just silly.

Just out of mad curiosity, I wonder if you overlooked the hdd, or whether I'm overlooking the possibility of one more drive. (I also have a new IDE CDR I'd like to put in, but according to what I know, I'd have to take something else out. I think...)

(!) I don't see hdc on this listing --- so I presume you have some other OS on it. I was thinking of 'fdisk -l' output when I was looking at this.

(?) Hmm, I've got: hda (HD), hdb (HD), hdc (HD), hdd (CD) I think it's maxed out, but maybe you have a few tricks up your sleeve?

(!) No. I was just too tired to be trying to write LG/TAG stuff when I read your message and tossed off my first answer.

(?)
>The c/h/s numbers printed on both drives:
>chs: 16383/16/63
>lba: 19,807,200

Hmm. Those don't add up. But I'm not surprised.

Yes, I found one solution that seems to have worked to give me the maximum space on the drives!

I have to give credit to Jason Gunthorpe <[email protected]> of the Debian Project for this solution! (And also several other Debian and non-Debian people on the Open Projects IRC network.)

(I frequently, or rather much more than frequently, "hang out" on the #debian and #linpeople channels of the irc.openprojects.net IRC server network, where also quite a few Debian developers and package maintainers "hang out". My handle is "bytor". Jason's is "Culus". The main reason I switched to Debian from Red Hat was the level of support I can get just being in the channel and asking questions from time to time. And I also help out newbies as well. :)

[Actually the system I'm using now is one that I converted in place from Red Hat 5.0 (upgraded from 4.2) to Debian 2.0. I wrote up a HOWTO and a tool, a short perl script, to help convert your passwd/group/shadow files from one system to the other (and all files on the system to reflect the new uid's/gid's). You can have a gander, if curious, at:

http://www.geocities.com/ResearchTriangle/3328/rh5todeb-howto.txt and http://www.geocities.com/ResearchTriangle/3328/conversion-tools.tar.gz

Please feel free to include this in any way in the Answer Guy or anywhere in Linux Gazette. I will one day write it up properly in SGML and submit it to the LDP... just not enough time recently. Maybe I should write a short article for LG? (and then RH would never consider me for a job ever again!)

(!) This thread will probably get in there somehow.
I'm not sure we need another HOWTO for this issue --- although you might submit a set of patches and suggestions to the LargeDisk mini-HOWTO (and I think we might then upgrade it from a "mini-HOWTO" to a "full" HOWTO --- though that's a matter for Andries, Greg Hankins and whoever else is managing LDP HOWTOs these days).

(?) I hope this doesn't put me in bad standing with the Red Hat guys! I think Red Hat is great! But I really wanted to try Debian and didn't have the resources to start fresh! It's working great! I'm about to do an online "apt-get dist-upgrade" to slink soon using this very system, the rh-->deb conversion guinea pig. :)]

(!) Nobody should apologize for which Linux distribution they are running.
Oh! You're saying you might release a package to help Red Hat users convert to Debian, and a HOWTO on that.

(?) Anyhow, here's one more trick to put up your sleeve: (or what worked for me to make Linux see all of my big harddrives.)

The BIOS/CMOS is messed up anyway. At least mine is. It's several years old now. It can't handle drives over 8 gig (calculated with 1024^n). It autodetects the "correct" numbers that are printed on the drive. But the numbers printed on the drive are actually bogus!

(!) As Andries and I have said, 8Gb is the maximum that can be expressed in CHS format. However, much larger capacities can be expressed in LBA ("linear") mode.

(?) chs: 16383/16/63 (incorrect number of cylinders to match the heads and sectors per track)

lba: 19,807,200 (this, I believe, is the correct total number of sectors, though.)

(!) Yes! You're getting it!
LBA stands for "linear block addressing" --- which needs to be supported by your drive and your OS for it to work. (I suspect that you also need at least an EIDE controller).

(?) Let's see what I've learned!

Total Bytes = [Sectors per track (S)] * [Heads (H)] * [Cylinders (C)] * [Bytes per sector (512)]

and

Total Bytes = [Total Sectors ("lba" on my drive)] * [Bytes per sector (512)]

These are good formulas to know... perhaps Andries can add this in an "appendix" to his HOWTO!

(!) I think he walks through these calculations a couple of times already. He doesn't seem to show them in "formula" format.
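[ For the record, both formulas give the same total for these drives; with 'bc', using the geometry fdisk reports for hdd above and the LBA figure printed on the drive label:

$ echo '63 * 16 * 19650 * 512' | bc     # S * H * C * 512
10141286400
$ echo '19807200 * 512' | bc            # total sectors (lba) * 512
10141286400
]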

(?) Anyhow I can now calculate what the proper number of cylinders should be based on those formulas. (set both expressions for total bytes equal, and solve for Cylinders... yep I'm a math egghead.)

(!) You don't care what the cylinders/heads and sectors are. You want to use "linear."
(?)
                [Total Sectors ("lba" on my drive)] * [Bytes per sector (512)]
Cylinders (C) = ----------------------------------------------------------------
                [Sectors per track (S)] * [Heads (H)] * [Bytes per sector (512)]

                [Total Sectors ("lba" on my drive)]
              = -------------------------------------
                [Sectors per track (S)] * [Heads (H)]

for me this is: C = 19,807,200 / (16 * 63) = 19650

(And that is exactly what Linux sees at boot up, and what fdisk and cfdisk see... after the fix Jason Gunthorpe suggested was done.)

And if I calculate Gigs, from either formula above, I get:

Total Bytes = [Total Sectors ("lba" on my drive)] * [Bytes per sector (512)]

= 19,807,200 * 512 = 10,141,286,400 bytes
= 10,141,286,400 bytes / (1024 * 1024 * 1024 bytes/gig)
= 9.44 Gigabytes = 9671.48 Megabytes

At boot Linux now sees: CHS=19650,16,63 9671MB and cfdisk sees CHS=19650,16,63 9671.49 MB (right on the money!)

(I think fdisk will see CHS=19650,16,63 also, but Jason suggested I use cfdisk instead of fdisk, as fdisk is no longer being maintained by the "upstream provider", as Debian calls them.)

(!) I blind copied Andries on my message to you and he pointed out that I should have ignored the CHS values in the example calculations that I showed.
Your 'fdisk' output already shows the correct values.

(?) Mystery unraveled! Wide Smile

But I still haven't said how I fixed my system:

Here's what Jason suggested:

Wipe the partition table:

either

"cat /dev/zero > /hdb"

and count ten seconds as it blasts away at the drive... you only need to wipe the first few K

or

"dd if=/dev/zero of=/dev/hdb bs=1024 count=1024"

(!) Actually a count of one and a block size of 512 bytes would have been sufficient.
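[ That is, something like the following; it's destructive, so be certain the "of=" device really is the disk whose partition table you mean to wipe:

# dd if=/dev/zero of=/dev/hdb bs=512 count=1
]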

(?) I think that will wipe the first megabyte of the drive, which supposedly destroys the partition table.

(!) The partition table is in the last 66 bytes of the master boot record (MBR) --- a 64-byte table of four entries plus the two-byte signature --- and the MBR is exactly one sector.
That's all you need to blow away.

(?) Next, if you have a broken BIOS, like mine, completely disable the setup for your large drives... Linux will detect them anyway whether they are listed in the BIOS or not. (At least 2.2.0-pre6 did) I set the "Not installed" flag for both large drives hdb and hdd in the BIOS.

(!) Hmmm. I think you want to look for an LBA, "linear" or "PIO" mode for the CMOS IDE settings.

(?) Then I rebooted and BINGO, Linux reports the above CHS=19650,16,63 9671MB for both drives! (Before, with the BIOS crap enabled, Linux would see CHS=19650,16,63 for one drive, and CHS=1232,255,63 for the other drive. Strange, I know.)

(!) I think the "linear" option would still do the trick. Most systems won't boot off of a drive that the CMOS has listed as "not installed"

(?) And cfdisk worked for both of them and saw CHS=19650,16,63 9671.49 MB for both drives!

(!) I think it should have shown that anyway. (Maybe it needs the "linear" option).

(?) Next I partitioned each with one large partition, hdb1 and hdd1, and then formatted with mke2fs: "mke2fs -i 1024 -m 0 /dev/hdb1"

-i 1024 sets the inode density; -m 0 says to reserve nothing for "root only".

(!) Bad idea! You should reserve a small amount to lessen the chances of damage to the filesystem when it gets full.
Try just 1% on these larger drives. You can use 'tune2fs' to change it (-m to express it as a percentage, -r to use blocks). You can also set the "reserved user/group" for that filesystem so that it's not just 'root' that can use the reserved space on a drive.
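[ For instance; the "backup" account here is purely a hypothetical example of a reserved user:

# tune2fs -m 1 /dev/hdb1          # reserve 1% of the blocks for the filesystem
# tune2fs -u backup /dev/hdb1     # let the (hypothetical) "backup" user use the reserve
]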

(?) -c says to check for bad blocks, which I will do later once I settle down on a partition table I can live with.

(!) Do it when you first create the partition. Otherwise some important chunk of data may land on a bad sector before you remember to do it with 'fsck'.
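[ Either of these will do a (slow) read-only scan for bad blocks, the first at filesystem creation time and the second on an existing, unmounted filesystem:

# mke2fs -c -i 16384 -m 1 /dev/hdb1
# e2fsck -c /dev/hdb1
]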

(?) Course you know all that... (but I put it in here for documentation... I will write Andries and ask him to add some of this to his HOWTO.)

It turned out that after the format, using the maximum "inode density" of 1024 (I'm kind of fuzzy on this point, but...), I lost a LOT of space to inode overhead. "df" only saw about 8.2 gig: 9.44 gig - 8.2 gig = 1.24 gig lost on each disk, for a total of 2.48 gig lost!!! ... there was much pulling of hair and gnashing of teeth at that moment... until I was gently told that increasing the "inode density" number (which actually lowers the density) would help reduce the inode overhead.

(!) Basically each file uses an inode. Any individual file can use a large number of data blocks. The total number of inodes and data blocks is set when the filesystem is created. Additional blocks are also allocated to track indirect blocks (that is, data blocks that aren't listed directly in the inode --- they are listed in blocks that the inode points to indirectly).
If you set the ratio wrong you can run out of inodes when plenty of disk space is available. The filesystem will still appear to be "full" in that you won't be able to create new files --- though you'd be able to append some data to some existing ones until you needed more of these "extents" (indirect blocks).
You can use 'df -i' to measure the available number of inodes rather than the number and percentage of datablocks.
Basically you should only reduce the inode density if you know that most of the files will be large --- that you won't have a lot of small files. Even then reducing it can be a bad idea. It is far more common to increase the inode density to handle lots of smaller files.
Think about it. Every file uses at least one inode. Multiple hard links don't use additional inodes; they are additional references to existing inodes. All file names (directory entries) are links to inodes (except for some symlinks which can be embedded directly into ext2 directory structures). So, if you have small files you run out of inodes faster than when you have large ones.
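[ 'df -i' shows the inode side of the ledger instead of the block side, and a quick 'bc' estimate gives the number of inodes an "-i 16384" filesystem would get on a ~10.1Gb partition:

$ df -i /mnt/bigboy1
$ echo '10141286400 / 16384' | bc
618975
]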

(?) I then reformatted with:

mke2fs -i 16384 -m 0

And that time, after mounting the partition, "df -m" reported 9547MB, or 9.32 gig, so the loss to inode overhead was reduced. (Of course I now risk running out of inodes, so I may redo the format with an -i value somewhere between 1024 and 16384!) This time the loss was: 9.44 gig - 9.32 gig = 0.12 gig. MUCH better!

(!) I think that you're cutting it a bit thin. But let us all know how it works out as the drive gets some use.
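[ The overhead roughly matches what you'd predict from the size of an ext2 inode --- assumed here to be the classic 128 bytes:

$ echo 'scale=4; 128 / 1024' | bc      # -i 1024: inode tables eat about 12.5% of the disk
.1250
$ echo 'scale=4; 128 / 16384' | bc     # -i 16384: well under 1%
.0078

...which lines up with the 1.24 gig and 0.12 gig losses reported above. ]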

(?) I also have to thank DJ Delorie <[email protected]> (author of the DJGPP port of gcc to DOS, and the compiler of choice for DOS Quake) for his kind replies to my email for help as well. He had posted on the Ask Slashdot thread about large hard drive problems.

He wrote in with the following:


-------------------------------------------------------------------
c * h * s * 512 = total bytes

16383 * 16 * 63 * 512 = 8,455,200,768

For 10.1g, c would have to be about 19650. The LBA number is the number of sectors on the disk, so 19,807,200 / (16*63) = 19650, which is what you need to tell fdisk.

Disk /dev/hdb: 255 heads, 63 sectors, 1232 cylinders
Disk /dev/hdd: 16 heads, 63 sectors, 19650 cylinders

255 * 63 * 1232 * 512 = 10,133,544,960
 16 * 63 * 19650 * 512 = 10,141,286,400

Anyhow, the darned IBM drives, after formatting only show about 8.2gig. To be precise, here is output of "df": (I included the full output just in case the

Don't use df. The capacity it reports is less than the size of the partition due to the overhead of the ext2 file system (inodes, free block maps, etc). For example, my 2,096,451 block boot partition shows 2,028,098 blocks in df.

(!) Yeah. It would be nice if the man page for 'df' not only warned you about the overhead but gave you an idea about the typical percentages to expect.
Heck! It would be even nicer if the 'df' command itself offered an option to print the percentage of overhead in inodes, badblocks, reserved space, and any other categories that might exist.

(?) [regarding me being pissed at 10.1gig actually being 9.44gig:]

That makes me MAD! These guys are the cream of the crop... they make the hardware, they should know and use the proper "1024" rather than the 1000 multiplier! Ooh, that strikes a nerve! Anyhow...

Seagate always uses the 1000^n values, so you get what you expect. Most manufacturers tell you which measure they use.

But later I found out that -i 1024 was not the "cluster size" but rather the inode density, and that increasing it to, say, 10240 would help cut down on the overhead of all the inodes and give me more space, according to Jason. Haven't tried, but will soon. (But I fear running out of inodes... will have to experiment.)

"inode density" is tech speak for "average file size". If you know how big the average file will be, you can make it so that you run out of space and inodes at about the same time.

(!) That's a great simplification. It's absolutely true, and it doesn't explain the mechanism at all.

(?) Yes, I plan to make a 10 to 20 meg /boot partition just for kernels at the front of the drive... I hope 20 meg is small enough to fit under the 1024th cylinder!

Your kernel is only 1Mb. One cylinder (~8Mb on most big drives) should be plenty.

Heh, perhaps I can sue IBM or the vendor in a local court in my hometown over the difference between 1024 and 1000, and show that 1000 is not the proper multiplier in the world of computers? If nothing else, just to prove a point that consumers don't like to be lied to!

Many catalogs explicitly state "1Gb=1000Mb" somewhere, to tell you which measure they use. Both are equally likely.

Which helped!


>I wish I knew how to calculate total space in megs using C/H/S numbers!

Sectors are 512 bytes. You multiply cylinders (C), heads (H), and sectors per track (S) to get the total number of sectors. Think of a track as one head on one cylinder. That is to say that it is one concentric ring on one side of one platter.

That's all really a fiction since all of the high capacity drives in the last decade (everything over about 200Mb) have used "ZBR" (zone bit recording) and consequently don't physically have the same number of sectors per track on the outer "zones" (rings) of the platters as they do on the inner zones.

The drive electronics hide these details from the rest of the hardware so that the BIOS can "pretend" that there really is a uniform number of sectors per track across a given number of heads and cylinders. The drives (SCSI and IDE) will "auto translate" into BIOS-compatible disk addresses (CHS). (Actually SCSI controllers usually replace the BIOS routines that handle this --- but effectively the drive is still abstracting most of the details away from the controller and the OS.)

The BIOS was only set up to handle 10 bits of cylinder (1024 maximum), six bits of sector (per track) and eight bits of "head", which fit neatly into a 16-bit register and a one-byte register. Those limits were convenient for programming the 8086-based systems that were common about 20 years ago.

(They're pretty silly now.)

In any event the famed 8.4Gb limit is derived from:

max cylinders * max heads * max sectors per track = maximum total sectors

or:

1024 * 255 * 63 = 16,450,560

which we convert to bytes, Megabytes and Gigabytes by:

16,450,560 * 512 = 8,422,686,720  (maximum total bytes)
                 / 1000 / 1000  = about 8,423 (maximum total Mb)
                 / 1000         = about 8.4   (maximum total Gb)

... note that we don't use 1024 to compute Mb and Gb here. That is common practice among drive manufacturers (and unheard of for memory chips). It has been a matter of some controversy, since those extra 24 K per Mb start to add up when you're counting them by the thousand.

I won't pretend to be authoritative on that subject. Suffice it to say that given the original constraints of the BIOS addressing scheme, the maximum addressable space (in 512 byte sectors) is between 7.8 and 8.4 Gb (depending on how you calculate your Gigabytes).

Over the years there have been various other limitations along the way. The trick of lying about the number of "heads" and claiming that there were 255 of them was the earliest way to overcome the "1024 cylinder problem" --- which had led to the early "540Mb" limit on IDE drives. Various ways of accomplishing this were labelled EIDE and ATA-2. We now have ATA-3 and UltraDMA.

Thanks a TON for the above information! Very helpful stuff!

The drive's electronics will take all of the parts of any address (CHS) that are presented to it and multiply them all together to get a "linear block address" (LBA). So it really doesn't matter what your CMOS says.

However, you probably have to add lilo.conf directives to pass the drive's true "geometry" to the kernel (so it will ignore the CMOS values).

I was pondering doing that, instead of twiddling with disabling the drives in the BIOS. As I might, heaven help me, want to put NT, *BSD, Solaris x86, or BeOS on the drives as well, and they might require a BIOS entry!

I suppose now that I have the correct "bogus" geometries, I can add that in lilo as:
' append = "hdb=19650,16,63 hdd=19650,16,63" '
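[ In lilo.conf that would look roughly like the existing 2.2.0 stanza with the append line added; the hdX=cyls,heads,sects form is the standard IDE geometry boot parameter:

image = /boot/vmlinuz-2.2.0
    label  = 2.2.0-pre6
    root   = /dev/hda5
    read-only
    vga    = 8
    append = "hdb=19650,16,63 hdd=19650,16,63"
]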

And then maybe reenable the BIOS entries? (Jason suggested once I got the drives partitioned and formatted correctly I might be able to reenable the BIOS settings so that DOS or other OS's would be able to see them... not sure on that though. But he warned me that cfdisk or fdisk might not place the partition boundaries where DOS, NT, or other OS's would expect them.)

Another thing that was suggested by Jason (something he says he's done before) is to take the drive to someone with a Pentium II motherboard (assuming they have a working BIOS) and partition with DOS fdisk, so you know the partition table is acceptable to DOS-style OS's (in case you ever have a need to fool with such things). Then take the drive back to your broken-BIOS computer and change the partition types to Linux and Linux Swap, without changing the boundaries (dunno if you have to disable the BIOS entries or not first), and then it should *work*!

(!) That's good advice. Think about doing a BIOS upgrade for yourself, too.

(?)
>Perhaps you can help investigate this further, and finally put
>this problem to rest once and for all in the annals of Linux
>Gazette!


>And if I find a "Correct"[tm] solution, would you like me to post
>it to you for publication in LG? As it may be beneficial to many
>people. I will also post it to the maintainer of the Large Disk
>HOWTO (> >http://www.linux-howto.com/LDP/HOWTO/mini/Large-Disk.html)
>as well, for inclusion... if I actually get at a solution!

Actually, Andries Brouwer, maintainer/author of the LargeDisk mini-HOWTO already has a small section on the 8Gb Linux IDE limit at:

http://metalab.unc.edu/LDP/HOWTO/mini/Large-Disk-7.html

... this could probably use a bit of elaboration.

Basically it suggests that recent kernels (2.0.35+ and 2.1.90+) should automatically handle the large drives --- but that they do a sanity check when the reported LBA capacity exceeds the C*H*S capacity by more than a certain amount. Presumably this sanity check is still biting you --- so it may be that you need to apply his suggested patch. (That replaces the sanity check with a stub that always returns the "O.K." value.)

Ah, I will look into that. If I reenable the BIOS entries and Linux starts to see funny values again, I'll try it.

I haven't had a working windows partition on my system for over a year now. I love Linux, but since I have all the space now with the new drives I decided I might want to try NT... the main interest being to experiment with Cygwin to get a Unix-like layer working for NT (in case I ever have a job with NT servers, I'll have experience in Unix-ifying them ;)

I suspect that adding the "linear" directive to your lilo.conf (and running /sbin/lilo to rebuild the maps from it --- of course) will solve the problem. If that doesn't work, try adding appropriate "disk=" parameters to the lilo.conf. Then try this kernel patch.

Hmm, I'm not familiar with the reasoning behind the "linear" option. I seem to recall all SCSI disks need it? May try it also and see what happens. Is "linear" a global option to lilo, that affects all disks in the system, or a per disk option? I think it is global, but I'm not sure. And if global, would it adversely affect the smaller drives that have, up till now, worked well w/o that option? I'll have to investigate this.

(!) It's listed in the "Global Options" section of the man page. But I'm not sure.

(?)
>There is also a white paper on the so called 8.4 gig limit from
>IBM, in case that might also help give you clues... as I'm only
>stumped:


>http://www.storage.ibm.com/hardsoft/diskdrdl/library/8.4gb.htm

It seems like you did a bit of leg work looking for the answer (so you get an A+ for effort). However, you probably should skim over the whole LargeDisk mini-HOWTO (even the boring parts).

Well, thanks for the commendation. 8-)

I've just got to know the real answer! I'll go to almost any length to get at "what's really going on" :)

Andries does mention the "linear" option in section 6. It's also listed in the lilo.conf man page (big surprise). Personally I think he might want to provide a bit more meat there, even if it only reiterates what he said earlier. Many people (including me) will just skip to the section labelled "8Gb IDE Limit." Some will not understand that they should be trying things from other sections of the same HOWTO.

Yes, I have to admit I didn't read the whole thing; I skimmed a bit and focused on that short section. I'll give it another look, this time reading it carefully, and if I see that any of the things above are missing, I'll prepare an email and send it off to him for inclusion in the next version.

Also, one other thing that I can do is try the Ontrack Disk Manager software for the IBM drives. It's similar to EZDrive, and is supported by Linux... only someone told me it wasn't supported by FreeBSD... and I want to experiment with it. As I was told, this Ontrack Disk Manager installs to the boot drive, even if it's not the drive that needs it, and gets loaded at boot time, before even the lilo code in the MBR gets called. It supposedly replaces the BIOS disk routines. This may be the better solution for Linux and NT, but not if I want to try one of the BSD's. I will have to look more into this also.

I remember back when I needed EZDrive with my 486 to recognize the full 540meg drive I had back then. And I was surprised when Linux detected and dealt with EZDrive properly!

(!) I was surprised when they added the support for OnTrack EZDrive and a few others, too.
I still won't go near them. But it's nice to know that we can.

(?) Thanks for your reply! Will you write up an "Answer Guy" section detailing this question / problem in the next LG, or is it too involved?

(!) It's certainly not my longest or most complicated thread. However, writing it up in a more organized fashion, as an LG article and as a set of suggested enhancements to the mini-HOWTO...
[ Once Jim's written it, it stays in. The only messages or threads I ever toss out completely are some with no Linux in them. But I do sometimes defer confusing threads until the next issue, so I can spend the first week of a month polishing them so they don't make me dizzy. This one's pretty close, but I think it'll do alright. -- Heather ]

(?) R. Brock Lynn


Copyright © 1999, James T. Dennis
Published in The Linux Gazette Issue 37 February 1999

