My NetApp Flashpool implementation

The other day I was designing and implementing a brand-new NetApp FAS3250 setup running Clustered Data ONTAP 8.2, supporting a vSphere environment. This setup contains a bunch of 10K SAS disks and a DS2246 shelf filled with 24x 200GB SSDs.

Because of the customer's requirements, most of the SSDs are used for an SSD-only aggregate. But to accelerate the SAS disks, we opted to use 6 SSDs to create a Flashpool. I guess Flashpool doesn't need a detailed introduction: it is a mechanism NetApp uses to automatically cache random reads and randomly overwritten writes on SSDs inside a hybrid aggregate. Note the emphasis on 'overwritten': random overwrites are cached, not sequential or first-time writes! The cached data remains available during a takeover or giveback.

[Image: netapp-flashpool]

Although the implementation of a Flashpool is pretty straight forward, there are a few things I would like to point out. Things I encountered during the implementation:

Firstly, consider placing the SSD disk shelves in a separate stack from the shelves containing SAS disks. Because of the potentially high IOPS, and thus high SAS I/O bandwidth utilization, we don't want to be limited by the backend SAS I/O. As NetApp states: "Full SSD shelves (24 SSDs) are best placed in their own stack".

I used this stack config:

[Image: flashpool-stacks]

Once I was done with the SAS and ACP cabling, I booted the disk shelves and controllers/nodes. Hmmm… I forgot to disable the automatic disk assignment, so I had to manually correct the assignment of the SAS disks to node #2 for the SAS aggregate. Node #1 was taking care of the SSD aggregate.
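To avoid that manual correction next time, automatic disk assignment can be switched off per node before the shelves come up. A sketch of what I mean, with a placeholder node name (double-check the exact option syntax against the man pages of your ONTAP release):

storage disk option modify -node <node name> -autoassign off

With auto-assignment off, newly discovered disks stay unowned until you assign them explicitly.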

Now what about the SSDs used for the Flashpool in the SAS aggregate? I had to assign them to node #2. Because Clustered ONTAP uses a different CLI, I needed to check the right syntax. Takes some getting used to. Here is how to manually assign disks to a node (from the storage disk context):

removeowner -disk 3b.00.21
removeowner -disk 3b.00.20
removeowner -disk 3b.00.19
removeowner -disk 3b.00.18

assign -disk 3b.00.18 -owner <node name>
assign -disk 3b.00.19 -owner <node name>
assign -disk 3b.00.20 -owner <node name>
assign -disk 3b.00.21 -owner <node name>

When ready, check the disk assignment using the command 'storage disk show'. It can take a while for the disks to be assigned; while the assignment is in progress, they are marked 'Pending'. The output should look something like this:

XXXXX::storage disk> show
                   Usable            Container
Disk               Size    Shelf Bay Type       Position Aggregate Owner
------------------ ------- ----- --- ---------- -------- --------- --------
XXXXX-01:0a.00.0   186.1GB 0     0   aggregate  dparity  SAS01     XXXXX-01
XXXXX-01:0a.00.2   186.1GB 0     2   aggregate  data     SAS01     XXXXX-01
XXXXX-01:0a.00.4   186.1GB 0     4   aggregate  data     SAS01     XXXXX-01


Now it was a matter of converting the SAS01 aggregate into a hybrid aggregate and adding the SSDs for the Flashpool. Although we had 6 SSDs ready to go, I could only use 5 because of the minimum spare requirement. But is a spare disk really recommended for the Flashpool raid group? Yes, it is.
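Converting the aggregate comes down to two steps: flag it as hybrid, then add the SSDs as cache. A sketch under the assumption that the aggregate is named SAS01 and the 5 SSDs are added by count (verify the parameters against the storage aggregate man pages for your release):

storage aggregate modify -aggregate SAS01 -hybrid-enabled true
storage aggregate add-disks -aggregate SAS01 -disktype SSD -diskcount 5

The -disktype SSD on add-disks is what tells ONTAP that these disks form the Flashpool cache tier instead of growing the capacity tier.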

This part took my head for a little spin: which raid type should I use for my Flashpool raid group? And do I need an SSD spare for the Flashpool?

As of ONTAP version 8.2 it is possible to configure a different raid type for the Flashpool raid group than for the other raid groups in the aggregate. Prior versions of ONTAP dictated a single raid type policy for all raid groups in an aggregate. But why would I change my Flashpool raid group to raid4? Performance-wise there is not much to gain with raid4, so capacity is really the only legitimate reason. A spare disk is practically mandatory either way: NetApp states that one hot spare SSD per node is required when using raid4, and strongly recommended when using raid-dp!

This is because, although the read cache in a Flashpool only holds copies of blocks that also exist on the SAS or SATA disks, blocks in the write cache hold the only copy of that data until it is destaged to the spinning disks. Well, that makes it a no-brainer, right? So in the end, I chose to configure the Flashpool raid group with raid-dp and 1 spare disk.

If you, for whatever reason, want a different raid type for the Flashpool raid group, it can be configured using these commands (from the storage aggregate context):

modify -aggregate SAS01 -disktype SAS -t raid_dp
modify -aggregate SAS01 -disktype SSD -t raid4

You can choose a raid type per disk type. When this is done, the raid status of the aggregate is set to 'mixed_raid_type':

Hybrid Enabled: true
Available Size: 26.91TB

Plexes: /SAS01/plex0
RAID Groups: /SAS01/plex0/rg0 (block)
/SAS01/plex0/rg1 (block)
/SAS01/plex0/rg2 (block)
/SAS01/plex0/rg3 (block)
RAID Status: mixed_raid_type, hybrid, normal
RAID Type: mixed_raid_type

Note that when mixed raid type is used, you cannot edit the aggregate in OnCommand System Manager.

All that needs to be done now is to migrate the customer data and see how the SAS aggregate with the Flashpool performs…


I used this technical report from NetApp as a resource: http://www.netapp.com/us/media/tr-4070.pdf
