[mdlug] "Device busy" errors and RAID setup issues
Michael Mikowski
z_mikowski at yahoo.com
Tue Nov 17 20:41:34 EST 2009
I'm not sure what you are using the RAID for, but for home use I find RAID 1 sufficient and simple.
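For a two-disk mirror, the whole setup is pretty much one command, something like this (device names are just examples):

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1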
That said, you certainly have good reason for your choice, and I feel your pain. Here is what I would try:
Remove drive /dev/sde1 and replace it with drive /dev/sdf1 (the physical drives, that is). Set aside your former /dev/sde1, and reboot. Then try to rebuild without a hot spare; IIRC, you can add one later. Does that work?
If so, you might be able to re-introduce your former /dev/sde1 drive later and add it as a spare using mdadm.
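Something along these lines should do it, assuming the array comes up as /dev/md0:

sudo mdadm /dev/md0 --add /dev/sde1

Since the array wouldn't be degraded at that point, mdadm should take the new device as a spare.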
Seems like the problem may be with your mobo's SATA controller when more than 4 drives are attached (I've seen this before, but it's just a guess).
Good luck!
----- Original Message ----
From: David McMillan <skyefire at skyefire.org>
To: MDLUG's Main discussion list <mdlug at mdlug.org>
Sent: Tue, November 17, 2009 4:54:11 PM
Subject: Re: [mdlug] "Device busy" errors and RAID setup issues
David McMillan wrote:
> Jeff Hanson wrote:
>> On Mon, Nov 9, 2009 at 8:09 PM, David McMillan <skyefire at skyefire.org> wrote:
>>> Not sure what I have going on here. I've built myself an (intended)
>>> RAID server with five SATA 1TB drives dedicated to the RAID array
>>> (separate dedicated boot drive), running Jaunty Jackalope (I tried a few
>>> server-dedicated distros, but JJ is what I'm comfortable with). The
>>> HOWTOs I've found on the net make building a RAID array using mdadm
>>> pretty simple, but I keep getting "Device or resource busy" on one or
>>> more of the RAID drives when I try to do an mdadm build. What's really
>>> odd is that the "busy" drive(s) are different each time I reboot the
>>> machine. These drives have been fdisk'd to auto-detect RAID partitions,
>>> are completely empty, and are not mounted -- never *have been* mounted,
>>> for that matter. Running an lsof turns up nothing using any of the
>>> /dev/sd* drives aside from the boot drive. 'tis a puzzlement.
>> BIOS boot order and other things can affect assignments which is why
>> they use UUIDs now. Check the UUIDs using blkid to see if you somehow
>> ended up with a duplicate (like cloning an existing RAID member
>> instead of creating each drive separately with fdisk/cfdisk/parted and
>> mdadm).
>
> Hm. Okay, I'll try that.
Busy, busy, busy. But I finally got time to sit down and do all of
this. So, I did:
dd if=/dev/zero of=/dev/sdX for each of the five drives I want to RAID.
Used fdisk to confirm that each disk no longer had a partition table.
Used fdisk to partition each drive as Linux raid autodetect.
Ran the following command:
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1
And what I got was:
mdadm: layout defaults to n1
mdadm: chunk size defaults to 64K
mdadm: Cannot open /dev/sde1: Device or resource busy
mdadm: /dev/sdf1 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sat Oct 10 02:50:35 2009
mdadm: create aborted
Running blkid gave me this:
sudo blkid
/dev/sda5: UUID="cc3d2e49-6504-459d-af40-eeb214cc41cb" TYPE="ext3"
/dev/sda6: UUID="9bf141dd-1ef9-4bf6-b55e-b699229fcb08" TYPE="ext3"
/dev/sda7: TYPE="swap" UUID="a246c936-2a1f-4d57-a4ac-c0fa72baa766"
/dev/sde1: UUID="fdd19ba2-6598-f2fa-2621-c6fd51a58e1d" TYPE="mdraid"
/dev/sdf1: UUID="02138639-c13a-b909-2621-c6fd51a58e1d" TYPE="mdraid"
So... I'm lost. I did all of this in one sitting -- nothing should
have mounted or taken up /dev/sde or /dev/sdf in between the time I
fdisk'd them and the time I ran mdadm. Heck, it was less than five
minutes between commands. I can't figure out what's going on here, or
how to break loose sde and sdf from whatever's got them.
Just for giggles, I did a 'sudo rm -r /dev/md*', and that got rid of
the /dev/sde error, but the /dev/sdf error lingers.
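The mdraid TYPE in that blkid output makes me wonder if there are stale md superblocks left over from my earlier experiment, and whether the kernel is auto-assembling something from them at boot. Would something like this be the right way to flush them out? Just a guess on my part:

# check for leftover RAID metadata on the partitions
sudo mdadm --examine /dev/sde1 /dev/sdf1
# stop anything the kernel may have auto-assembled at boot
sudo mdadm --stop --scan
# then clear the stale superblocks
sudo mdadm --zero-superblock /dev/sde1 /dev/sdf1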
Any thoughts on what I should try next?
>>> The other problem is that when I experimentally tried creating a RAID
>>> array out of the drives that *weren't* busy, some odd things happened.
>>> The array built successfully, as far as I could tell -- no error
>>> messages, and it showed up as an active /dev/md0 in /proc/mdstat, but
>>> when I rebooted the computer, mdadm wouldn't show /dev/md0 as an active
>>> array device, although it'll accept start/stop commands. Trying to
>>> build the array again from scratch yields a bunch of "device appears to
>>> already be part of a RAID array" error messages. Thing is, I can't seem
>>> to get rid of the old array so I can start over again.
>> Stop the arrays and try zeroing the drives using dd.
>
> Ah, dd. Is there anything it *can't* do?
> Actually, I've already done this (do you know how *long* it takes to
> dd-zero five full 1TB drives? Yowza!), and found that sometimes even
> "sudo dd" will generate the "busy" error. I'm working around it for the
> moment. I should have fully zeroed drives to try again with when I get
> home from work tonight.
>
>> Also check mdadm bug reports:
>> https://bugs.launchpad.net/ubuntu/+source/mdadm?field.searchtext=&orderby=-datecreated
>
> D'oh. Why didn't I think of that?
_______________________________________________
mdlug mailing list
mdlug at mdlug.org
http://mdlug.org/mailman/listinfo/mdlug