1650 and VFS: unable to mount root fs

Chris Pascoe c.pascoe at itee.uq.edu.au
Thu Dec 5 07:36:01 CST 2002


Hi Lars,

Trying to help - haven't read the whole thread...

> dmesg snip>>
> TCP: Hash tables configured (established 32768 bind 65536)
> Linux IP multicast router 0.06 plus PIM-SM
> NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
> RAMDISK: Compressed image found at block 0
> Freeing initrd memory: 245k freed
> VFS: Mounted root (ext2 filesystem).
> SCSI subsystem driver Revision: 1.00
> kmod: failed to exec /sbin/modprobe -s -k scsi_hostadapter, errno = 2
> PCI: Assigned IRQ 3 for device 01:06.0
> PCI: Assigned IRQ 7 for device 01:06.1

^^^^
This looks potentially bad.  The AIC7XXX driver has been assigned the same
interrupts as your serial/parallel ports, because your machine only seems
to be using the traditional ISA interrupts.

Did you enable the IO APIC on this machine?  Is it an SMP kernel?  If it
isn't an SMP kernel, have you turned on the "Local APIC support on
uniprocessors" and "IO-APIC support on uniprocessors" options?  (They are
enabled automatically in SMP builds.)  Without these turned on the system
will be forced to map all the devices onto the traditional ISA interrupts
- of which there aren't very many.
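If you want to check, something like the below should show whether the
IO-APIC is actually in use and what the kernel was built with (the config
file path is an assumption based on a stock Red Hat install - adjust it to
match your kernel version):

```shell
# With the IO-APIC active you'll see interrupt numbers above 15 and
# "IO-APIC" in the type column; otherwise everything shows as "XT-PIC".
cat /proc/interrupts

# Check the build options of the running kernel (config file name is a
# guess - substitute your own kernel version if it differs):
grep -E "CONFIG_X86_(UP_APIC|UP_IOAPIC|IO_APIC)" /boot/config-`uname -r`
```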

> scsi0 : Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 6.2.8
>         <Adaptec aic7899 Ultra160 SCSI adapter>
>         aic7899: Ultra160 Wide Channel A, SCSI Id=7, 32/253 SCBs
>
> scsi1 : Adaptec AIC7XXX EISA/VLB/PCI SCSI HBA DRIVER, Rev 6.2.8
>         <Adaptec aic7899 Ultra160 SCSI adapter>
>         aic7899: Ultra160 Wide Channel B, SCSI Id=7, 32/253 SCBs
>
> blk: queue c24c8a14, I/O limit 4095Mb (mask 0xffffffff)
>   Vendor: FUJITSU   Model: MAM3367MC         Rev: 5A01
>   Type:   Direct-Access                      ANSI SCSI revision: 03
> blk: queue c24c8c14, I/O limit 4095Mb (mask 0xffffffff)
>   Vendor: PE/PV     Model: 1x3 SCSI BP       Rev: 0.28
>   Type:   Processor                          ANSI SCSI revision: 02
> <<end snip
>
> The standard kernels hang just before this, at the
> VFS: Mounted root (ext2 filesystem)
> I guess.
>
> But what puzzles me is why the kernel mounts the root _before_ the
> detection of the scsi-controller and disks (how can it?) and also why it
> seems to see the root filesystem as ext2 when in fact it is ext3.

The kernel is mounting the "initrd root" at that point.  The initrd is
loaded into RAM by the boot loader (LILO, presumably) and decompressed by
the kernel.  The kernel then runs /linuxrc from the initial ram disk to
load whatever SCSI drivers are needed to let it find the "real" root
filesystem.

Generally the initrd is made with the 'ext2' filesystem, and that's what
you're seeing with the "VFS: Mounted root (ext2 filesystem)" line.  Once
the SCSI drivers, the ext3 module, etc. are loaded, the system proceeds to
mount the real root filesystem.

You should see output like the following when this happens (assuming the
standard Red Hat mkinitrd package is installed):

Mounting /proc filesystem
Creating root device
Mounting root filesystem
kjournald starting.  Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
Freeing unused kernel memory: 88k freed
INIT: version 2.78 booting

You can confirm that the contents of the initrd are correct and that it
loads the right drivers, as below:
    gzip -dc /boot/initrd-2.4.18-xfs-20020524.img > /tmp/a.initrd
    sudo mount -o loop /tmp/a.initrd /mnt
    cat /mnt/linuxrc

From the contents of the linuxrc file you should be able to determine at
which point the driver load is failing - it "echo"s something to the
console before loading each driver.
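For reference, a stock Red Hat linuxrc from that era looks roughly like
the sketch below - it's a nash script, and the module names here are only
an example (yours will match whatever mkinitrd put in for your hardware):

```shell
#!/bin/nash
# Load the SCSI stack first, then the host adapter driver, then the
# filesystem modules - each load is announced on the console:
echo "Loading scsi_mod module"
insmod /lib/scsi_mod.o
echo "Loading sd_mod module"
insmod /lib/sd_mod.o
echo "Loading aic7xxx module"
insmod /lib/aic7xxx.o
echo "Loading jbd module"
insmod /lib/jbd.o
echo "Loading ext3 module"
insmod /lib/ext3.o
echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Creating root device
mkrootdev /dev/root
umount /proc
echo Mounting root filesystem
mount --ro -t ext3 /dev/root /sysroot
pivot_root /sysroot /sysroot/initrd
```

If the boot stops between two of those "echo" lines, the insmod that
follows the last message you saw is the one that's failing.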

I suspect that if the system is failing at or just before that "Mounting
root filesystem" line, your initrd is corrupt or for some reason the boot
loader isn't loading it correctly (e.g. if you are using lilo and haven't
rerun 'lilo' after a change to the initrd).
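If that's the case, rebuilding the initrd and rerunning lilo should sort
it out - something along these lines (the kernel version shown is the one
from your /boot, substitute your own if it differs):

```shell
# Rebuild the initrd for the installed kernel (-f overwrites the old one):
mkinitrd -f /boot/initrd-2.4.18-xfs-20020524.img 2.4.18-xfs-20020524
# Re-run lilo so the boot loader maps the new initrd (-v for verbose):
/sbin/lilo -v
```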

I'm confused as to why you don't see any output like "Loading scsi_mod
module" - my Red Hat initrds all have lots of 'echo' statements included
in them... or did they remove them from RH 8.0 for some reason?  I'm only
running 7.3 locally.

Regards,
Chris




More information about the Linux-PowerEdge mailing list