QNAP TS451A: How to enable standby mode for HDDs

For many years I used a QNAP TS219 NAS filer for storing and protecting my data. Its capacity has now reached its limits, so I decided to go with a new device instead of simply plugging two larger hard drives into the TS219.

After doing some research I bought a QNAP TS451A and set it up with two different drive sets, each configured as RAID1:

  1. Two SSDs with a capacity of 256 GB each. This RAID holds all services running on the QNAP (LXC, Docker, KVM, QNAP packages, …)
  2. Two HDDs with a capacity of 4 TB each, serving the NAS shares and backup services like Time Machine

My rationale for choosing this setup was to cut down the energy consumption of the spinning rust (HDDs) while still providing large capacity for file storage.

So let's see what power consumption can be measured in different situations. The main factor is the state of the hard drives: whenever they are allowed to go into standby, the power consumption drops.

My measurements on a freshly configured QNAP TS451A running firmware 4.2.2 showed:

  1. 18 W with the SSDs in active/idle state and the HDDs in standby (yearly energy costs of about €44)
  2. >30 W with the SSDs in active/idle state and the HDDs in active/idle state (yearly energy costs of more than €75)
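
(Assuming an electricity price of roughly €0.28/kWh, the numbers add up: 18 W × 8,760 h ≈ 158 kWh per year, i.e. about €44, and 30 W × 8,760 h ≈ 263 kWh per year, i.e. roughly €74, with anything above 30 W pushing the costs past €75.)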

So I configured the QNAP to use only the SSD-based RAID for all applications. Additionally I set the disk standby timeout to 5 minutes.

But instead of going into standby, the HDDs stayed in the active/idle state.
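
The current power state of each drive can be checked with hdparm; a quick loop over all four drives looks like this (in my setup sda/sdb are the SSDs and sdc/sdd the HDDs):

[~] # for d in sda sdb sdc sdd; do echo -n "$d "; hdparm -C /dev/$d | grep state; done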

Let’s have a look at the RAID configuration:

[~] # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md2 : active raid1 sdc3[0] sdd3[1]
      3897063616 blocks super 1.0 [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
      240099264 blocks super 1.0 [2/2] [UU]

md256 : active raid1 sdd2[1] sdc2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md321 : active raid1 sdb5[2] sda5[0]
      8283712 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[2] sdd4[3] sda4[0] sdb4[1]
      458880 blocks super 1.0 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md9 : active raid1 sdd1[3] sdc1[2] sda1[0] sdb1[1]
      530048 blocks super 1.0 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk
Detailed view for one of these devices:
[~] # mdadm --detail /dev/md9
/dev/md9:
        Version : 1.0
  Creation Time : Sun Nov 20 16:19:23 2016
     Raid Level : raid1
     Array Size : 530048 (517.71 MiB 542.77 MB)
  Used Dev Size : 530048 (517.71 MiB 542.77 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Sun Dec 18 16:06:48 2016
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
           Name : 9
           UUID : bd28148d:cd2ba3ae:af4f1a06:85b700a6
         Events : 5063

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
So md9 and md13 are using all four drives, but what are these devices used for?
[~] # df | grep md
/dev/md9                493.5M    113.5M    380.0M  23% /mnt/HDA_ROOT
/dev/md13               355.0M    321.5M     33.5M  91% /mnt/ext

The volumes are used to back the root filesystem and the configuration of the installed applications.
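
To confirm which IO is actually hitting the HDDs, the kernel's block_dump facility can be switched on briefly (a rough sketch; vm.block_dump exists on the older kernels that QTS 4.2.x ships, logs every block IO to the kernel log, and should be turned off again afterwards):

[~] # sync                               # flush pending writes first
[~] # echo 1 > /proc/sys/vm/block_dump   # log every block IO to dmesg
[~] # sleep 60; dmesg | grep -E "sd[cd]" | tail -n 20
[~] # echo 0 > /proc/sys/vm/block_dump   # disable the logging again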

No problem, I thought: I simply reconfigured both arrays to include only the SSDs sda and sdb.

[~] # mdadm /dev/md13 --fail sdc4
mdadm: set sdc4 faulty in /dev/md13
[~] # mdadm /dev/md13 --fail sdd4
mdadm: set sdd4 faulty in /dev/md13
[~] # mdadm /dev/md9 --fail sdc1 
mdadm: set sdc1 faulty in /dev/md9
[~] # mdadm /dev/md9 --fail sdd1
mdadm: set sdd1 faulty in /dev/md9

[~] # mdadm --grow /dev/md13 --raid-devices=2 
raid_disks for /dev/md13 set to 2
[~] # mdadm --grow /dev/md9 --raid-devices=2 
raid_disks for /dev/md9 set to 2

Everything worked fine: the IO from the system filesystems to the HDDs vanished, and the HDDs went into standby mode after 5 minutes.

But when I rebooted the QNAP the next time, it simply hung during startup. Removing the two HDDs allowed me to boot from the SSDs again; the HDDs can be swapped back in later and the missing data RAID volume rediscovered. It turned out that the QNAP expects each drive to contain one partition for md9 and one for md13.
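
If you end up in this situation, the original layout can be restored after booting from the SSDs by growing the arrays back to four devices and re-adding the HDD partitions (a sketch, assuming the HDDs show up again as sdc and sdd):

[~] # mdadm --grow /dev/md9 --raid-devices=4
[~] # mdadm --grow /dev/md13 --raid-devices=4
[~] # mdadm /dev/md9 --add /dev/sdc1 /dev/sdd1
[~] # mdadm /dev/md13 --add /dev/sdc4 /dev/sdd4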

So the workaround to reach the goal is to only mark the HDD partitions as failed, without removing them from the RAID configuration:

[~] # mdadm /dev/md13 --fail sdc4
mdadm: set sdc4 faulty in /dev/md13
[~] # mdadm /dev/md13 --fail sdd4
mdadm: set sdd4 faulty in /dev/md13
[~] # mdadm /dev/md9 --fail sdc1 
mdadm: set sdc1 faulty in /dev/md9
[~] # mdadm /dev/md9 --fail sdd1
mdadm: set sdd1 faulty in /dev/md9

Now the HDDs no longer receive any IO for md9 and md13 and can go into standby. One drawback is that after a reboot the partitions are re-synced automatically, so the commands have to be executed again.

The following small script checks the configuration and does the job; it can be scheduled via crontab to fix the setting after a reboot.

#! /bin/bash

LOGFILE=/var/log/adjust_raid.log

echo $(date) >> $LOGFILE

# check hard disk power states
state_sda=$(hdparm -C /dev/sda | grep state | awk '{print $4}')
state_sdb=$(hdparm -C /dev/sdb | grep state | awk '{print $4}')
state_sdc=$(hdparm -C /dev/sdc | grep state | awk '{print $4}')
state_sdd=$(hdparm -C /dev/sdd | grep state | awk '{print $4}')
echo "(sda|sdb|sdc|sdd) states: ($state_sda $state_sdb $state_sdc $state_sdd)" >> $LOGFILE

# Check mirroring

number_active_sync_md9=$(mdadm --detail /dev/md9 | grep "active sync" | wc -l)
number_active_sync_md13=$(mdadm --detail /dev/md13 | grep "active sync" | wc -l)

state_md9=$(mdadm --detail /dev/md9 | grep "State :" | awk '{print $3 $4}')
state_md13=$(mdadm --detail /dev/md13 | grep "State :" | awk '{print $3 $4}')

echo "md9 : state: $state_md9 Number of active sync drives in md9 : $number_active_sync_md9" >> $LOGFILE
echo "md13: state: $state_md13 Number of active sync drives in md13: $number_active_sync_md9" >> $LOGFILE

if [ $number_active_sync_md9 -eq "4" ]; then
 echo "Failing sdc1 and sdd1 for RAID md9" >> $LOGFILE
 mdadm /dev/md9 --fail sdc1
 mdadm /dev/md9 --fail sdd1
fi

if [ $number_active_sync_md9 -eq "2" ]; then
 echo "Nothing to do for md9" >> $LOGFILE
fi


if [ $number_active_sync_md13 -eq "4" ]; then
 echo "Failing sdc4 and sdd4 for RAID md13" >> $LOGFILE
 mdadm /dev/md13 --fail sdc4
 mdadm /dev/md13 --fail sdd4
fi

if [ $number_active_sync_md13 -eq "2" ]; then
 echo "Nothing to do for md13" >> $LOGFILE
fi
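
On QTS, persistent cron entries are typically kept in /etc/config/crontab (entries added with crontab -e tend to be lost on reboot). Assuming the script is stored as /share/homes/admin/adjust_raid.sh (the path is just an example from my setup), an entry like the following runs it every 10 minutes:

[~] # echo "*/10 * * * * /bin/bash /share/homes/admin/adjust_raid.sh" >> /etc/config/crontab
[~] # crontab /etc/config/crontab        # load the updated crontab
[~] # /etc/init.d/crond.sh restart       # restart the cron daemon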
