[Crowbar] disk management and partitioning concepts
Andi_Abes at Dell.com
Thu Mar 15 10:24:07 CDT 2012
Thanks, cool!
As far as I know, nova-volume expects a reliable store (i.e., it doesn't handle disk failures itself), so we set up Nova controller nodes to use a RAID 10 of all available disks.
I think you'd want to be able to configure nova-volume with a path to use, rather than have it discover separate drives....
From: Dae Woo [mailto:sanconnected at gmail.com]
Sent: Thursday, March 15, 2012 11:16 AM
To: Abes, Andi
Subject: Re: [Crowbar] disk management and partitioning concepts
Thanks! Shame on me, I never looked at Swift and Hadoop barclamps.
So, if I understand correctly, all additional disk management occurs only when it is really needed by an appropriate barclamp. That's good.
I'm just looking at the nova-volume recipes and discovered that they use a regular file as the backing for the logical volume group. I'm thinking about a separate nova-volume-server role and a recipe that can create the volume group on real disks, not files.
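To make the contrast concrete, here is a sketch of the two approaches as the LVM command sequences each would run (command strings only, since actually running them needs root; the file path, sizes, and loop-device name are illustrative, not what the recipe hardcodes):

```python
# Sketch: file-backed VG (what the stock nova-volume recipe does) versus
# a VG built on real disks (the proposed nova-volume-server recipe).

def vg_commands_file_backed(path="/var/lib/nova/nova-volumes.img",
                            size_gb=10, vg="nova-volumes"):
    """Loopback-file approach: sparse file -> loop device -> volume group."""
    return [
        f"dd if=/dev/zero of={path} bs=1M seek={size_gb * 1024} count=0",
        f"losetup -f --show {path}",   # attaches the file as /dev/loopN
        f"vgcreate {vg} /dev/loop0",   # loop device name is illustrative
    ]

def vg_commands_real_disks(devices, vg="nova-volumes"):
    """Proposed approach: physical volumes on real disks, one VG across them."""
    return ([f"pvcreate {dev}" for dev in devices]
            + [f"vgcreate {vg} " + " ".join(devices)])

print(vg_commands_real_disks(["/dev/sdb", "/dev/sdc"]))
# -> ['pvcreate /dev/sdb', 'pvcreate /dev/sdc',
#     'vgcreate nova-volumes /dev/sdb /dev/sdc']
```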
On Thu, Mar 15, 2012 at 6:49 PM, <Andi_Abes at dell.com<mailto:Andi_Abes at dell.com>> wrote:
So, Crowbar does manage the additional disks, when it needs to.
Both Hadoop and Swift look for any available drives and set them up according to their requirements when a node is configured to be part of a Swift or Hadoop data node (the filesystem parameters are somewhat different between Hadoop and Swift). Both discover the available disks, filter out disks that are otherwise in use, and then configure the rest.
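The discover-and-filter step could be sketched roughly like this (a hypothetical illustration of the logic, not Crowbar's actual API; the inventory shape mimics what a tool like lsblk or Ohai reports):

```python
# Sketch of the discover-and-filter step the Swift/Hadoop barclamps perform:
# keep whole disks that carry no partitions and are not mounted anywhere.

def filter_candidate_disks(block_devices):
    """Return names of disks that are free to be claimed by a barclamp."""
    return [
        dev["name"]
        for dev in block_devices
        if dev["type"] == "disk"
        and not dev.get("children")      # no partitions -> not the OS disk
        and not dev.get("mountpoint")    # not directly mounted
    ]

# Example: /dev/sda holds the OS; sdb and sdc are free for Swift/Hadoop.
inventory = [
    {"name": "sda", "type": "disk",
     "children": [{"name": "sda1", "mountpoint": "/"}]},
    {"name": "sdb", "type": "disk"},
    {"name": "sdc", "type": "disk"},
]
print(filter_candidate_disks(inventory))  # -> ['sdb', 'sdc']
```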
This is in line with Crowbar's DevOps principles - build what you need when you need it.
As for RAID, yes. There is a proprietary barclamp which can configure a node as either:
* RAID 10 - where the OS sees just one big drive. This is used for workloads (or roles) that need reliable storage and don't handle disk failures well on their own.
* JBOD - for workloads that are smart(ish) about handling disks themselves.
Configuring the appropriate role also follows DevOps - configure the RAID when you know what it needs to be, based on what the node is going to do.
So, your proposal of configuring all the disks upfront could be a bit problematic. When installing the OS, we don't yet know enough about how to set up the additional disks (e.g. Swift likes XFS).
All that said and done... what are you trying to get done or deployed? Does it have any requirements for the filesystem that the component consumes?
From: crowbar-bounces On Behalf Of Dae Woo
Sent: Thursday, March 15, 2012 10:32 AM
Subject: [Crowbar] disk management and partitioning concepts
It is very interesting how Crowbar does partitioning and disk management. I'm guessing that it can configure RAID on the unallocated nodes with the help of a proprietary barclamp.
Is the configured RAID storage visible to the operating system as only one existing disk?
Anyway, for Ubuntu distros Crowbar uses preseed with the atomic partitioning scheme on LVM. It takes the whole disk /dev/sda and creates only two logical volumes: one for root and one for swap. But if we have more disks, e.g. /dev/sdb and /dev/sdc, it leaves them completely untouched.
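For reference, the whole-disk atomic scheme described above corresponds to a debian-installer preseed fragment roughly like this (a sketch using standard partman keys, not Crowbar's actual template):

```
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-auto/choose_recipe select atomic
d-i partman-lvm/confirm boolean true
```

Nothing in this fragment mentions /dev/sdb or /dev/sdc, which is why the extra disks come out of the install untouched.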
In fact, Crowbar has no adjustable method for performing disk/logical-volume management and partitioning.
My suggestion is to perform this management step during the operating system installation.
Sledgehammer provides all the information about every unallocated node during the discovery state via Ohai. Then, when a specific proposal is applied, barclamp-provisioner could adjust the preseed configuration to perform the logical volume management and partitioning. This disk configuration depends on the actual role assigned to the node.
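The proposal above could be sketched as a simple role-to-scheme lookup inside the provisioner (the role names and scheme descriptions here are hypothetical placeholders, not real Crowbar role names):

```python
# Sketch: pick a partitioning scheme for a node based on its assigned roles,
# falling back to the current whole-disk atomic scheme when no role cares.

ROLE_SCHEMES = {
    "swift-storage": "keep /dev/sda for the OS; format extra disks as XFS",
    "nova-volume-server": "build an LVM volume group across the extra disks",
}

def scheme_for(roles, default="atomic LVM on /dev/sda only"):
    """Return the partitioning scheme implied by the first matching role."""
    for role in roles:
        if role in ROLE_SCHEMES:
            return ROLE_SCHEMES[role]
    return default

print(scheme_for(["nova-volume-server"]))
# -> build an LVM volume group across the extra disks
```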
Do you have any future plans/ideas/thoughts on this?