
Hands-on with the ML110 G7, part 5

I was asked whether the cut-price ML110 G7, some of which ship without a chipset heatsink, is really safe to run, so I looked into it.
I had noticed the missing heatsink while installing the SAS RAID card, but it caused no problems in operation, so I hadn't worried about it.
Running the LINPACK benchmark for several hours didn't cause any trouble either...

The C204 chipset on the ML110 G7 motherboard. As reported, there is no heatsink.
The surface looks dirty, but not because anything was ever mounted on it.
The holes visible above and below it are presumably for securing a heatsink.
f:id:papillon326:20120331212609j:plain

A temperature sensor and Kapton tape, which probably every PC builder has on hand.
Since I only want to see relative temperature changes, I won't fuss over the sensor's accuracy.
f:id:papillon326:20120331212610j:plain

The temperature sensor taped flat onto the C204's surface.
f:id:papillon326:20120331212611j:plain

With the air shroud back in place, I read the temperature like this.
f:id:papillon326:20120331212619j:plain

Condition 1

First, a test with the SAS card removed. I couldn't find a suitable SATA HDD, so the machine just loops on the POST screen.
The temperature climbed steadily; after about 10 minutes it hit 54°C just idling at the POST screen.
f:id:papillon326:20120331212615j:plain

Condition 2

Next, a test with the SAS card installed.
f:id:papillon326:20120331212614j:plain

I booted a Linux install left over from another project and let it sit for a while.
The temperature settled at 45°C, lower than in condition 1, perhaps because the front fan spins faster.
f:id:papillon326:20120331212612j:plain

Wondering whether I/O would raise the C204's temperature, I ran dd if=/dev/sda of=/dev/null.
After about 10 minutes the temperature had risen slightly.
f:id:papillon326:20120331212613j:plain

Condition 3

On top of the SAS card, I added a GF7300GS that was lying around.
f:id:papillon326:20120331212616j:plain

Left idle after booting Linux, as in condition 2: roughly 51°C.
f:id:papillon326:20120331212618j:plain

Bonus

Unfortunately, the C204 temperature doesn't seem to be readable over IPMI. The room temperature was apparently 19°C.
f:id:papillon326:20120331212617j:plain

In the end, the temperature never rose alarmingly. I think the machine is fine to run for long stretches, though I can't take responsibility if it isn't.
The thermal design guide puts Tcontrol at 104°C, so there is plenty of headroom.

I actually tested in the order 2 → 1 → 3, and since condition 1 never booted Linux, the comparison isn't entirely fair.
Also, the C204 temperature appears to have no bearing on the front fan speed; IPMI doesn't even sample the value.

Hands-on with the ReadyNAS Ultra4 Plus, part 6

ReadyNAS SNMP

Trying out SNMP on the ReadyNAS Ultra4 Plus.

SNMP support was one of my original selection criteria for this storage, so it was a surprise that it doesn't work out of the box.
Some digging shows that the ReadyNAS Pro supports it, but the consumer models apparently do not.
Lacking it isn't fatal, but it is convenient to have, so I looked into whether it could be enabled.

The web UI's settings pages offer nothing, so there is no official way in, but after gaining root and logging in over SSH, SNMP appears to be installed already.

# /etc/init.d/snmpd start

That starts snmpd. Now check from a remote server with snmpwalk.

# snmpwalk -v 1 -c public  192.168.101.201
SNMPv2-MIB::sysDescr.0 = STRING: Linux nas-C1-28-9A 2.6.37.6.RNx86_64.2.1 #1 SMP Mon Aug 15 16:19:41 PDT 2011 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (71936062) 8 days, 7:49:20.62
SNMPv2-MIB::sysContact.0 = STRING: root
SNMPv2-MIB::sysName.0 = STRING: nas-C1-28-9A
SNMPv2-MIB::sysLocation.0 = STRING: Unknown
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (1) 0:00:00.01
(略)

Monitoring over SNMP looks feasible.

Set snmpd to start automatically when the ReadyNAS reboots.

# update-rc.d snmpd defaults
 Adding system startup for /etc/init.d/snmpd ...
   /etc/rc0.d/K20snmpd -> ../init.d/snmpd
   /etc/rc1.d/K20snmpd -> ../init.d/snmpd
   /etc/rc6.d/K20snmpd -> ../init.d/snmpd
   /etc/rc2.d/S20snmpd -> ../init.d/snmpd
   /etc/rc3.d/S20snmpd -> ../init.d/snmpd
   /etc/rc4.d/S20snmpd -> ../init.d/snmpd
   /etc/rc5.d/S20snmpd -> ../init.d/snmpd
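The snmpd configuration itself wasn't touched above. If the daemon answers the public community from any host, it may be worth narrowing access; here is a minimal sketch of an /etc/snmp/snmpd.conf fragment, with a hypothetical monitoring-host address and an example community string (not taken from the actual setup):

```
# Hypothetical fragment: read-only access for one monitoring host only.
# Community name and address are examples; match them to your network.
rocommunity public 192.168.101.10
syslocation server room
syscontact  root
```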

To see what ReadyNAS-specific information can be retrieved, consult READYNAS-MIB.txt, available from the ReadyNAS Community site.

$ snmpwalk -v 2c -c public 192.168.101.201 1.3.6.1.4.1.4526.18
SNMPv2-SMI::enterprises.4526.18.1.0 = STRING: "Seagate ST2000DL003-9VT166 1863 GB"
SNMPv2-SMI::enterprises.4526.18.3.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.3.1.1.2 = INTEGER: 2
SNMPv2-SMI::enterprises.4526.18.3.1.1.3 = INTEGER: 3
SNMPv2-SMI::enterprises.4526.18.3.1.1.4 = INTEGER: 4
SNMPv2-SMI::enterprises.4526.18.3.1.2.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.3.1.2.2 = INTEGER: 2
SNMPv2-SMI::enterprises.4526.18.3.1.2.3 = INTEGER: 3
SNMPv2-SMI::enterprises.4526.18.3.1.2.4 = INTEGER: 4
SNMPv2-SMI::enterprises.4526.18.3.1.3.1 = STRING: "Seagate ST2000DL003-9VT166 1863 GB"
SNMPv2-SMI::enterprises.4526.18.3.1.3.2 = STRING: "Seagate ST2000DL003-9VT166 1863 GB"
SNMPv2-SMI::enterprises.4526.18.3.1.3.3 = STRING: "Seagate ST2000DL003-9VT166 1863 GB"
SNMPv2-SMI::enterprises.4526.18.3.1.3.4 = STRING: "Seagate ST2000DL003-9VT166 1863 GB"
SNMPv2-SMI::enterprises.4526.18.3.1.4.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.3.1.4.2 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.3.1.4.3 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.3.1.4.4 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.3.1.5.1 = INTEGER: 78
SNMPv2-SMI::enterprises.4526.18.3.1.5.2 = INTEGER: 82
SNMPv2-SMI::enterprises.4526.18.3.1.5.3 = INTEGER: 80
SNMPv2-SMI::enterprises.4526.18.3.1.5.4 = INTEGER: 78
SNMPv2-SMI::enterprises.4526.18.4.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.4.1.2.1 = INTEGER: 1691
SNMPv2-SMI::enterprises.4526.18.4.1.3.1 = STRING: "SYS"
SNMPv2-SMI::enterprises.4526.18.5.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.5.1.1.2 = INTEGER: 2
SNMPv2-SMI::enterprises.4526.18.5.1.2.1 = INTEGER: 53
SNMPv2-SMI::enterprises.4526.18.5.1.2.2 = INTEGER: 73
SNMPv2-SMI::enterprises.4526.18.5.1.3.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.5.1.3.2 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.7.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.4526.18.7.1.2.1 = STRING: "Volume C"
SNMPv2-SMI::enterprises.4526.18.7.1.3.1 = STRING: "RAID Level X2"
SNMPv2-SMI::enterprises.4526.18.7.1.4.1 = STRING: "ok"
SNMPv2-SMI::enterprises.4526.18.7.1.5.1 = INTEGER: 5676032
SNMPv2-SMI::enterprises.4526.18.7.1.6.1 = INTEGER: 5409792

Repeating the walk a few times, the enterprises.4526.18.1.0 entry occasionally comes back garbled; even in the output above it isn't showing a correct value. Probably a bug.
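As a small worked example, the per-disk values in the enterprises.4526.18.3.1.5.x subtree above (which appear to be the per-disk temperatures, going by READYNAS-MIB.txt) can be pulled out of a saved walk with awk. A sketch; the sample input is copied from the output above, and the exact meaning and units of the values are whatever the firmware reports:

```shell
#!/bin/sh
# Save a few lines of the walk (copied from the output above) and
# extract just the integer values from the ...4526.18.3.1.5.x rows.
cat > /tmp/walk.txt <<'EOF'
SNMPv2-SMI::enterprises.4526.18.3.1.5.1 = INTEGER: 78
SNMPv2-SMI::enterprises.4526.18.3.1.5.2 = INTEGER: 82
SNMPv2-SMI::enterprises.4526.18.3.1.5.3 = INTEGER: 80
SNMPv2-SMI::enterprises.4526.18.3.1.5.4 = INTEGER: 78
EOF
# Split on ": " so the value after "INTEGER" lands in field 2
awk -F': ' '/4526\.18\.3\.1\.5\./ {print $2}' /tmp/walk.txt
```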

Next, monitoring from the monitoring server with Cacti.
The setup is the same as for any other server running snmpd, so I'll skip it. The resulting graphs look like this.
f:id:papillon326:20120325155612j:plain

The temperature items can't normally be graphed; they are displayed via a template picked up from the official Cacti forums.

Zooming in on just the temperature graphs:
f:id:papillon326:20120325160233j:plain

I'll skip SNMP traps.

Hands-on with the ReadyNAS Ultra4 Plus, part 5

ReadyNAS

Logging into the ReadyNAS Ultra4 Plus over SSH to investigate X-RAID2, among other things.

About X-RAID2

I was curious what X-RAID2, as used in the ReadyNAS, really is, so I investigated. Having no information on hand during a hardware failure could be rough...

First, a look at mount:

# mount
/dev/md0 on / type ext3 (rw,noatime,nodiratime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
tmpfs on /ramfs type ramfs (rw)
tmpfs on /USB type tmpfs (rw,size=16k)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/c/c on /c type ext4 (rw,noatime,nodiratime,acl,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv1)
nfsd on /proc/fs/nfsd type nfsd (rw)
configfs on /sys/kernel/config type configfs (rw)

So /dev/md0 is in use as / and /dev/c/c as /c.
Next, fdisk:

# fdisk -l
Found valid GPT with protective MBR; using GPT

Disk /dev/sda: 3907029168 sectors, 1863G
Logical sector size: 512
Disk identifier (GUID): 95ba255a-1f36-4ad9-99c0-297f0f4b588f
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134

Number  Start (sector)    End (sector)  Size       Code  Name
   1              64         8388671       4096M   0700
   2         8388672         9437247        512M   0700
   3         9437248      3907025072       1858G   0700
Found valid GPT with protective MBR; using GPT

Disk /dev/sdb: 3907029168 sectors, 1863G
Logical sector size: 512
Disk identifier (GUID): 79ff5f4f-7936-42d2-b091-1f626d511d37
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134

Number  Start (sector)    End (sector)  Size       Code  Name
   1              64         8388671       4096M   0700
   2         8388672         9437247        512M   0700
   3         9437248      3907025072       1858G   0700
Found valid GPT with protective MBR; using GPT

Disk /dev/sdc: 3907029168 sectors, 1863G
Logical sector size: 512
Disk identifier (GUID): ef66ffd1-60bc-42dc-8fe2-6a5b38e08ca3
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134

Number  Start (sector)    End (sector)  Size       Code  Name
   1              64         8388671       4096M   0700
   2         8388672         9437247        512M   0700
   3         9437248      3907025072       1858G   0700
Found valid GPT with protective MBR; using GPT

Disk /dev/sdd: 3907029168 sectors, 1863G
Logical sector size: 512
Disk identifier (GUID): 456a7d36-567c-4a88-85aa-2e668bf0a9af
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134

Number  Start (sector)    End (sector)  Size       Code  Name
   1              64         8388671       4096M   0700
   2         8388672         9437247        512M   0700
   3         9437248      3907025072       1858G   0700

Disk /dev/md0: 4293 MB, 4293906432 bytes
2 heads, 4 sectors/track, 1048317 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/dm-0: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Also, a look at /proc/mdstat:

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      5846378112 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid6 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      1048448 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      4193268 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>

So each of the four drives sda through sdd is split into three partitions: sdX1 is 4 GB, sdX2 is 512 MB, and sdX3 is the remaining 1858 GB.
The sdX1 partitions together form the RAID 1 array md0, the sdX2 partitions the RAID 6 array md1, and the sdX3 partitions the RAID 5 array md2.
md0 is formatted as ext3 and mounted as /, but what about md1 and md2?
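As a quick sanity check, the md2 size reported in /proc/mdstat can be roughly reproduced from the fdisk numbers: RAID 5 over four disks keeps three disks' worth of capacity, minus a little metadata for the 1.2 superblocks. A back-of-the-envelope sketch:

```shell
#!/bin/sh
# sdX3 spans sectors 9437248..3907025072 (512-byte sectors, per fdisk above)
part_kb=$(( (3907025072 - 9437248) / 2 ))   # per-disk partition in 1 KiB blocks
raid5_kb=$(( 3 * part_kb ))                 # RAID 5 over 4 disks loses one to parity
echo "computed ~${raid5_kb} KB; mdstat reports 5846378112 KB"
```

The computed figure overshoots the mdstat value by only a few MB, which is consistent with superblock overhead.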

md1 feels like it should be swap space, so let's check.

# cat /proc/swaps
Filename                                Type            Size    Used    Priority
/dev/md1                                partition       1048444 16548   -1

As expected, md1 is swap. That leaves md2, which is presumably the /dev/c/c mounted under /c.
Guessing it is LVM, let's check:

# lvdisplay /dev/c
  --- Logical volume ---
  LV Name                /dev/c/c
  VG Name                c
  LV UUID                TgQaKH-ZY82-1GEz-omL9-ECmt-zKWs-JtLNFu
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.44 TB
  Current LE             89048
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0
# vgdisplay
  --- Volume group ---
  VG Name               c
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.44 TB
  PE Size               64.00 MB
  Total PE              89208
  Alloc PE / Size       89048 / 5.44 TB
  Free  PE / Size       160 / 10.00 GB
  VG UUID               yuglsp-SpfU-1Lo5-MZXv-2Tmm-dhwV-l7A5Qc
# pvdisplay -C
  PV         VG   Fmt  Attr PSize PFree
  /dev/md2   c    lvm2 a-   5.44T 10.00G

So md2 is abstracted by LVM into the logical volume /dev/c/c.
Since I fitted 2 TB drives from the start, LVM buys me little here, but for someone adding drives gradually this scheme is no doubt convenient for growing capacity while keeping the data intact.
If I get my hands on another ReadyNAS, I'd like to verify that.
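For reference, growing such a stack after swapping in larger drives would presumably mean walking up the layers in order: md array, then PV, then LV, then filesystem. A hedged sketch of that sequence; the device names follow the layout above, and the commands are only echoed here so the order can be reviewed rather than executed blindly:

```shell
#!/bin/sh
# Hypothetical grow sequence for the md2 -> LVM -> ext4 stack.
# Remove the echo to run each step for real (entirely at your own risk).
for cmd in \
    'mdadm --grow /dev/md2 --size=max' \
    'pvresize /dev/md2' \
    'lvextend -l +100%FREE /dev/c/c' \
    'resize2fs /dev/c/c'
do
    echo "$cmd"
done
```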

Still to cover: NFS benchmarks and UPS handling.
And SNMP.

Hands-on with the ML110 G7, part 4

On the ML110 G7's fans and power supply.

I took a look at the ML110 G7's front and rear fans.

The front fan below the HDD cage is a Delta PFB0812GHE, an 80×38 mm unit. Per the datasheet it is rated at 6,100 rpm, 86 CFM, and a deafening 55.5 dBA.
f:id:papillon326:20120324175148j:plain
f:id:papillon326:20120324175147j:plain

The connector is a 3×2 6-pin, identical in shape across the CPU, front, and rear fans.
f:id:papillon326:20120324175149j:plain

The rear fan is a Delta AFC0912DF, 92×32 mm. I couldn't find a datasheet, but at 12 V / 1.43 A the speed is presumably quite high.
This fan is reportedly the same as the ML115 G5's, though the connector shape differs.
f:id:papillon326:20120324175150j:plain
f:id:papillon326:20120324175151j:plain

The motherboard also has a fan header in the conventional shape (a 1×6 6-pin), so it may be possible to connect a fan to just one or the other. Since I have an ML115 G5 on hand, I'd like to test this later.

I took a look at the ML110 G7's power supply.

Seen from the rear of the case. The screw positions are standard ATX, but the narrower screw pair is at the bottom, so power supplies with 12 cm fans are a poor fit.
f:id:papillon326:20120325025142j:plain

Made by Chicony (Hipro), rated at 350 W. There is no input-voltage selector switch, so it presumably has PFC.
Output per rail is as follows:

+12V 14.5A
+12cpu 12A
+3.3V 18A
+5V 19A
+5Vaux 2A

f:id:papillon326:20120325025108j:plain

For spare power connectors, there are two peripheral (Molex) plugs and one SATA plug near the 5.25-inch bays.
f:id:papillon326:20120324175153j:plain

Near the HDD bays there is a mysterious 10-pin connector whose purpose is unclear. The two peripheral plugs visible behind it supply power to the HDD bays.
f:id:papillon326:20120324175155j:plain

Hands-on with the ML110 G7, part 3

ML110 G7

Having gotten hold of a discounted ML110 G7, I tried whether a SAS RAID card and SAS HDDs could be fitted.

The discounted ML110 G7 was the non-hot-plug SATA model, but from an earlier inspection I figured SAS HDDs would work too, so I tested with a RAID card and SAS HDDs I had lying around.

First, unplug the SFF-8087 connector that runs from the HDD cage to the onboard SATA. The space is cramped, so removing it is fairly fiddly.
f:id:papillon326:20120324171006j:plain
f:id:papillon326:20120324171007j:plain

A SAS RAID card for PCI Express x4. On its right edge is the port that accepts the SFF-8087 connector.
SFF-8087 merely bundles the wiring for four ports into a single connector; it is not a port multiplier.
f:id:papillon326:20120324171008j:plain
f:id:papillon326:20120324171009j:plain

Install the RAID card, plug in the cable, and that's it. Note that in the first and second PCI Express slots the card's setup screen could not be operated properly, so it went into the third slot.
f:id:papillon326:20120324171010j:plain
f:id:papillon326:20120324171011j:plain

Snap the SAS HDDs into the trays, slide them in, and it's done.
f:id:papillon326:20120324171012j:plain

With the cabling on the onboard SATA, attaching a SAS HDD makes POST take much longer and the drive is never recognized. A SATA HDD attached to the SAS RAID card, on the other hand, works normally.
So anyone frustrated that the ML110 G7's onboard SATA won't let them set the HDD boot order might find happiness in a SAS HBA or RAID card. Used 3 Gbps models are cheap, too.

The one letdown: the front fan became noticeably louder after the SAS RAID card went in.

Hands-on with the ReadyNAS Ultra4 Plus, part 4

ReadyNAS benchmark

I installed the Enable Root SSH add-on on the ReadyNAS and ran various tests over remote SSH.
Installing Enable Root SSH is just a matter of applying the downloaded .bin file on the ReadyNAS and rebooting, so I'll skip the details.

SSH access to the ReadyNAS

Connecting over SSH after the reboot brings up a familiar sight.

# uname -a
Linux nas-C1-28-9A 2.6.37.6.RNx86_64.2.1 #1 SMP Mon Aug 15 16:19:41 PDT 2011 x86_64 GNU/Linux

# cat /etc/debian_version
4.0

So it is apparently based on Debian 4.0 etch.

Installing fio

To get fio, I ran apt-get update, which fails with an error; this appears to be a known problem specific to 4.2.19.
After installing the required packages, build and install fio.

# apt-get install libaio-dev make gcc build-essential
# wget http://brick.kernel.dk/snaps/fio-2.0.5.tar.gz
# tar zxvf fio-2.0.5.tar.gz
# cd fio
# make
# make install

Tested the same way as the iSCSI benchmark in part 3, except this time with directory=/c. The full reasoning will come in a later post; for now, suffice it to say that is the NAS's user data area.
...the list of things to write up keeps growing.
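The job files themselves aren't reproduced below; judging from the parameter lines in each result header, they presumably mirror the part 3 files with only the directory changed. For example, the sequential-read job would look something like this (a sketch, not the actual file):

```
[seqread]
readwrite=read
blocksize=1m
size=1g
directory=/c
direct=1
loops=5
```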

Sequential read

seqread: (g=0): rw=read, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process

seqread: (groupid=0, jobs=1): err= 0: pid=5124
  read : io=1024.0MB, bw=367715KB/s, iops=359 , runt= 14258msec
    clat (msec): min=1 , max=93 , avg= 2.77, stdev= 2.26
     lat (msec): min=1 , max=93 , avg= 2.77, stdev= 2.26
    clat percentiles (usec):
     |  1.00th=[ 2008],  5.00th=[ 2008], 10.00th=[ 2024], 20.00th=[ 2064],
     | 30.00th=[ 2128], 40.00th=[ 2192], 50.00th=[ 2352], 60.00th=[ 2448],
     | 70.00th=[ 2640], 80.00th=[ 3056], 90.00th=[ 3536], 95.00th=[ 5280],
     | 99.00th=[ 6752], 99.50th=[11584], 99.90th=[22912]
    bw (KB/s)  : min=276866, max=397312, per=99.85%, avg=367157.32, stdev=28419.65
    lat (msec) : 2=0.59%, 4=92.13%, 10=6.66%, 20=0.49%, 50=0.10%
    lat (msec) : 100=0.04%
  cpu          : usr=0.60%, sys=18.26%, ctx=5123, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5120/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=367714KB/s, minb=376540KB/s, maxb=376540KB/s, mint=14258msec, maxt=14258msec

Disk stats (read/write):
    dm-0: ios=81424/0, merge=0/0, ticks=113246/0, in_queue=113244, util=94.77%, aggrios=81920/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md2: ios=81920/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=10240/6, aggrmerge=10239/3, aggrticks=14154/83, aggrin_queue=14235, aggrutil=68.40%
  sdb: ios=10241/6, merge=10239/3, ticks=13600/100, in_queue=13695, util=64.53%
  sdc: ios=10240/6, merge=10240/3, ticks=13474/85, in_queue=13557, util=62.20%
  sdd: ios=10240/6, merge=10240/3, ticks=15542/66, in_queue=15607, util=68.40%
  sda: ios=10240/6, merge=10240/3, ticks=14002/83, in_queue=14082, util=65.66%

Sequential write

seqwrite: (g=0): rw=write, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
seqwrite: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/8355K /s] [0 /7  iops] [eta 00m:00s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=8990
  write: io=5120.0MB, bw=36115KB/s, iops=35 , runt=145173msec
    clat (msec): min=13 , max=733 , avg=28.26, stdev=36.18
     lat (msec): min=13 , max=733 , avg=28.35, stdev=36.18
    clat percentiles (msec):
     |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
     | 30.00th=[   15], 40.00th=[   16], 50.00th=[   17], 60.00th=[   21],
     | 70.00th=[   26], 80.00th=[   33], 90.00th=[   48], 95.00th=[   76],
     | 99.00th=[  165], 99.50th=[  223], 99.90th=[  469]
    bw (KB/s)  : min= 2409, max=63109, per=100.00%, avg=37125.47, stdev=13231.65
    lat (msec) : 20=59.77%, 50=30.78%, 100=6.23%, 250=2.75%, 500=0.37%
    lat (msec) : 750=0.10%
  cpu          : usr=0.31%, sys=0.44%, ctx=5179, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5120/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=36114KB/s, minb=36981KB/s, maxb=36981KB/s, mint=145173msec, maxt=145173msec

Disk stats (read/write):
  sda: ios=2/25363, merge=0/1425, ticks=37/778774, in_queue=778822, util=99.51%

Random read 512k

randread512k: (g=0): rw=randread, bs=512K-512K/512K-512K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randread512k: Laying out IO file(s) (1 file(s) / 1024MB)

randread512k: (groupid=0, jobs=1): err= 0: pid=5644
  read : io=1024.0MB, bw=44493KB/s, iops=86 , runt=117835msec
    clat (msec): min=1 , max=103 , avg=11.49, stdev= 4.02
     lat (msec): min=1 , max=103 , avg=11.49, stdev= 4.02
    clat percentiles (usec):
     |  1.00th=[ 1592],  5.00th=[ 6816], 10.00th=[ 8256], 20.00th=[ 9792],
     | 30.00th=[10560], 40.00th=[11200], 50.00th=[11712], 60.00th=[12096],
     | 70.00th=[12480], 80.00th=[12864], 90.00th=[13376], 95.00th=[13888],
     | 99.00th=[24704], 99.50th=[33536], 99.90th=[57088]
    bw (KB/s)  : min=27592, max=50996, per=99.92%, avg=44458.70, stdev=3088.60
    lat (msec) : 2=1.56%, 4=0.57%, 10=20.76%, 20=75.22%, 50=1.73%
    lat (msec) : 100=0.15%, 250=0.01%
  cpu          : usr=0.13%, sys=2.56%, ctx=10254, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=10240/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=44493KB/s, minb=45561KB/s, maxb=45561KB/s, mint=117835msec, maxt=117835msec

Disk stats (read/write):
    dm-0: ios=81920/18, merge=0/0, ticks=626863/770, in_queue=627693, util=99.26%, aggrios=81920/18, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md2: ios=81920/18, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=13670/82, aggrmerge=6829/45, aggrticks=105204/1002, aggrin_queue=106187, aggrutil=69.97%
  sdb: ios=13672/82, merge=6832/41, ticks=105787/1013, in_queue=106793, util=68.88%
  sdc: ios=13670/79, merge=6838/41, ticks=107480/898, in_queue=108372, util=69.97%
  sdd: ios=13668/84, merge=6824/50, ticks=102860/970, in_queue=103828, util=67.47%
  sda: ios=13673/85, merge=6823/50, ticks=104692/1130, in_queue=105758, util=68.65%

Random write 512k

randwrite512k: (g=0): rw=randwrite, bs=512K-512K/512K-512K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randwrite512k: Laying out IO file(s) (1 file(s) / 1024MB)

randwrite512k: (groupid=0, jobs=1): err= 0: pid=6256
  write: io=1024.0MB, bw=20329KB/s, iops=39 , runt=257902msec
    clat (msec): min=3 , max=465 , avg=25.01, stdev=28.90
     lat (msec): min=3 , max=465 , avg=25.17, stdev=28.90
    clat percentiles (msec):
     |  1.00th=[    5],  5.00th=[   11], 10.00th=[   13], 20.00th=[   15],
     | 30.00th=[   16], 40.00th=[   18], 50.00th=[   21], 60.00th=[   24],
     | 70.00th=[   28], 80.00th=[   32], 90.00th=[   39], 95.00th=[   45],
     | 99.00th=[   73], 99.50th=[  310], 99.90th=[  404]
    bw (KB/s)  : min= 4071, max=33657, per=100.00%, avg=20658.85, stdev=5288.02
    lat (msec) : 4=0.87%, 10=2.89%, 20=45.86%, 50=47.28%, 100=2.34%
    lat (msec) : 250=0.21%, 500=0.55%
  cpu          : usr=0.66%, sys=3.84%, ctx=10327, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=0/d=10240, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=20328KB/s, minb=20816KB/s, maxb=20816KB/s, mint=257902msec, maxt=257902msec

Disk stats (read/write):
    dm-0: ios=1/82214, merge=0/0, ticks=8/1643859, in_queue=1643906, util=99.06%, aggrios=1/82312, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md2: ios=1/82312, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=17605/31724, aggrmerge=235079/433420, aggrticks=206131/103203, aggrin_queue=309479, aggrutil=67.46%
  sdb: ios=17726/33928, merge=239823/459607, ticks=202144/109413, in_queue=311921, util=64.01%
  sdc: ios=16193/34139, merge=232567/459013, ticks=188005/109286, in_queue=297401, util=61.48%
  sdd: ios=19809/30159, merge=278272/406805, ticks=243615/101061, in_queue=344717, util=67.46%
  sda: ios=16692/28673, merge=189656/408256, ticks=190762/93055, in_queue=283877, util=61.36%

Random read 4k

# fio randread4k.fio
randread4k: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randread4k: Laying out IO file(s) (1 file(s) / 100MB)
Jobs: 1 (f=1): [r] [100.0% done] [2213K/0K /s] [540 /0  iops] [eta 00m:00s]
randread4k: (groupid=0, jobs=1): err= 0: pid=13218
  read : io=307200KB, bw=1796.1KB/s, iops=449 , runt=170954msec
    clat (usec): min=120 , max=92009 , avg=2215.24, stdev=3882.82
     lat (usec): min=122 , max=92011 , avg=2216.65, stdev=3882.82
    clat percentiles (usec):
     |  1.00th=[  193],  5.00th=[  215], 10.00th=[  245], 20.00th=[  290],
     | 30.00th=[  338], 40.00th=[  398], 50.00th=[  466], 60.00th=[  556],
     | 70.00th=[  724], 80.00th=[ 3952], 90.00th=[ 8256], 95.00th=[10432],
     | 99.00th=[15040], 99.50th=[18560], 99.90th=[28032]
    bw (KB/s)  : min=  761, max= 2563, per=99.47%, avg=1786.48, stdev=258.32
    lat (usec) : 250=11.27%, 500=42.49%, 750=17.10%, 1000=3.20%
    lat (msec) : 2=1.71%, 4=4.36%, 10=13.92%, 20=5.58%, 50=0.32%
    lat (msec) : 100=0.05%
  cpu          : usr=0.51%, sys=2.55%, ctx=76846, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=76800/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=307200KB, aggrb=1796KB/s, minb=1840KB/s, maxb=1840KB/s, mint=170954msec, maxt=170954msec

Disk stats (read/write):
    dm-0: ios=76648/10, merge=0/0, ticks=167077/210, in_queue=167286, util=98.07%, aggrios=76800/10, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md2: ios=76800/10, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=19203/192, aggrmerge=0/81, aggrticks=41745/1314, aggrin_queue=43047, aggrutil=25.48%
  sdb: ios=19202/193, merge=1/82, ticks=40379/1285, in_queue=41656, util=24.17%
  sdc: ios=19207/188, merge=1/82, ticks=42548/1282, in_queue=43820, util=25.48%
  sdd: ios=19202/192, merge=1/83, ticks=41647/1334, in_queue=42964, util=24.91%
  sda: ios=19203/195, merge=0/80, ticks=42406/1357, in_queue=43751, util=25.36%

Random write 4k
# fio randwrite4k.fio
randwrite4k: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randwrite4k: Laying out IO file(s) (1 file(s) / 100MB)
Jobs: 1 (f=1): [w] [100.0% done] [0K/1091K /s] [0 /266  iops] [eta 00m:00s]
randwrite4k: (groupid=0, jobs=1): err= 0: pid=11855
  write: io=307200KB, bw=960935 B/s, iops=234 , runt=327361msec
    clat (usec): min=274 , max=211192 , avg=4249.68, stdev=10041.85
     lat (usec): min=276 , max=211194 , avg=4251.68, stdev=10041.86
    clat percentiles (usec):
     |  1.00th=[  294],  5.00th=[  302], 10.00th=[  310], 20.00th=[  322],
     | 30.00th=[  334], 40.00th=[  354], 50.00th=[  374], 60.00th=[  406],
     | 70.00th=[  502], 80.00th=[ 4448], 90.00th=[13248], 95.00th=[26496],
     | 99.00th=[48384], 99.50th=[55552], 99.90th=[78336]
    bw (KB/s)  : min=  333, max= 1733, per=99.85%, avg=936.58, stdev=286.28
    lat (usec) : 500=69.96%, 750=5.38%, 1000=2.72%
    lat (msec) : 2=0.90%, 4=0.81%, 10=6.17%, 20=6.97%, 50=6.23%
    lat (msec) : 100=0.83%, 250=0.03%
  cpu          : usr=0.30%, sys=1.70%, ctx=76944, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=0/d=76800, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=307200KB, aggrb=938KB/s, minb=960KB/s, maxb=960KB/s, mint=327361msec, maxt=327361msec

Disk stats (read/write):
    dm-0: ios=1/77948, merge=0/0, ticks=6/356186, in_queue=356218, util=98.69%, aggrios=1/78121, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md2: ios=1/78121, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=8443/38992, aggrmerge=151/564, aggrticks=98013/20708, aggrin_queue=118693, aggrutil=36.32%
  sdb: ios=8422/39071, merge=148/565, ticks=102467/20690, in_queue=123128, util=36.32%
  sdc: ios=8264/38984, merge=149/558, ticks=92527/20348, in_queue=112847, util=33.17%
  sdd: ios=8715/38831, merge=151/576, ticks=100732/21697, in_queue=122393, util=35.94%
  sda: ios=8374/39085, merge=159/557, ticks=96328/20098, in_queue=116404, util=34.34%

I also measured the multi-job variants the same way but will omit the detailed results. I can post them if there is demand.

Tabulated together with the earlier iSCSI results:

Type              iSCSI (ext3)          X-RAID2 (ext4)
                  Speed (MB/s)  IOPS    Speed (MB/s)  IOPS
seq read          66.2          64      367.7         359
seq write         36.1          35      54.3          53
seq read*4        47.7          -       152.9         -
seq write*4       37.1          -       78.5          -
rand read 512k    11.3          22      44.5          86
rand write 512k   34.3          66      20.3          39
rand read 4k      10.8          2699    1.8           452
rand write 4k     8.4           2087    1.0           234
rand read 4k*32   17.0          -       7.0           -
rand write 4k*32  6.9           -       1.2           -

(* The random read 4k results had been pasted incorrectly; fixed 3/21.)

The results differ sharply from iSCSI, and they make very clear how much of the sequential performance goes unused over iSCSI.

Next time: what X-RAID2 actually is.

Hands-on with the ReadyNAS Ultra4 Plus, part 3

ReadyNAS benchmark iSCSI

I want to compare benchmarks over iSCSI and NFS, the main intended uses. First up, iSCSI.

About multipathing

iSCSI boot gets along poorly with dm-multipath, so I fitted a dual-port 1000BASE-T NIC to an HDD-booting server and connected it directly to the ReadyNAS's LAN1 and LAN2.
Then I configured iSCSI and dm-multipath.
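The dm-multipath side is ordinary; a minimal /etc/multipath.conf consistent with the round-robin, single-group layout shown in the multipath -l output might look like this (illustrative only; the actual file wasn't recorded):

```
# Hypothetical minimal multipath.conf: one round-robin path group
# per LUN, with friendly mpathX names as seen in `multipath -l`
defaults {
    path_grouping_policy  multibus
    path_selector         "round-robin 0"
    user_friendly_names   yes
}
```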

# multipath -l
mpathe (360014052e419090150ad002000000000) dm-0 LIO-ORG,FILEIO
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 45:0:0:0 sdg 8:96  active undef running
  `- 44:0:0:0 sdi 8:128 active undef running
mpathd (360014052e419090150ad001000000000) dm-3 LIO-ORG,FILEIO
size=200G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 42:0:0:0 sde 8:64  active undef running
  `- 43:0:0:0 sdf 8:80  active undef running
mpathc (360014052e419090150ad004000000000) dm-1 LIO-ORG,FILEIO
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 40:0:0:0 sdc 8:32  active undef running
  `- 41:0:0:0 sdd 8:48  active undef running
mpathf (360014052e419090150ad003000000000) dm-2 LIO-ORG,FILEIO
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 47:0:0:0 sdj 8:144 active undef running
  `- 46:0:0:0 sdh 8:112 active undef running

/dev/sdc and /dev/sdd are the same iSCSI volume seen over two different paths; dm-multipath presents them as /dev/mapper/mpathc.
Various other volumes are visible too, but never mind them; nothing is abnormal.

Now a quick benchmark with hdparm:

# hdparm -tT /dev/sdc
/dev/sdc:
 Timing cached reads:   25690 MB in  2.00 seconds = 12865.15 MB/sec
 Timing buffered disk reads:  296 MB in  3.02 seconds =  98.00 MB/sec

# hdparm -tT /dev/sdd
/dev/sdd:
 Timing cached reads:   26050 MB in  2.00 seconds = 13045.94 MB/sec
 Timing buffered disk reads:  312 MB in  3.01 seconds = 103.78 MB/sec

# hdparm -tT /dev/mapper/mpathc
/dev/mapper/mpathc:
 Timing cached reads:   25616 MB in  2.00 seconds = 12829.08 MB/sec
 Timing buffered disk reads:  256 MB in  3.02 seconds =  84.76 MB/sec

Performance is lower than when accessing a /dev/sdX on its own, so the multipath configuration doesn't seem to be working properly.
Presumably the ReadyNAS side is set up not to support it.

iSCSI benchmark

I benchmarked with the server iSCSI-booting from the ReadyNAS's iSCSI volume. Why not test from the HDD boot as well? Because the idea only came to me after the production setup was done.

I ran fio 2.0.5 against an area formatted as ext3. The parameters and such were borrowed from the 複眼中心 blog.

Sequential read

[seqread]
readwrite=read
blocksize=1m
size=1g
directory=./
direct=1
loops=5
# fio seqread.fio
seqread: (g=0): rw=read, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [30287K/0K /s] [28 /0  iops] [eta 00m:00s]
seqread: (groupid=0, jobs=1): err= 0: pid=7344
  read : io=5120.0MB, bw=66234KB/s, iops=64 , runt= 79157msec
    clat (msec): min=11 , max=528 , avg=15.46, stdev=17.15
     lat (msec): min=11 , max=528 , avg=15.46, stdev=17.15
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   12], 10.00th=[   12], 20.00th=[   13],
     | 30.00th=[   13], 40.00th=[   13], 50.00th=[   13], 60.00th=[   13],
     | 70.00th=[   13], 80.00th=[   14], 90.00th=[   17], 95.00th=[   29],
     | 99.00th=[   68], 99.50th=[  120], 99.90th=[  227]
    bw (KB/s)  : min= 9582, max=83136, per=100.00%, avg=67208.73, stdev=19041.60
    lat (msec) : 20=91.35%, 50=6.91%, 100=1.11%, 250=0.53%, 500=0.08%
    lat (msec) : 750=0.02%
  cpu          : usr=0.05%, sys=0.70%, ctx=5132, majf=0, minf=282
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5120/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=66233KB/s, minb=67823KB/s, maxb=67823KB/s, mint=79157msec, maxt=79157msec

Disk stats (read/write):
  sda: ios=12928/220, merge=12443/118, ticks=201216/19282, in_queue=220501, util=99.71%

Sequential write

[seqwrite]
readwrite=write
blocksize=1m
size=1g
directory=./
direct=1
loops=5
# fio seqwrite.fio
seqwrite: (g=0): rw=write, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
seqwrite: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/8355K /s] [0 /7  iops] [eta 00m:00s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=8990
  write: io=5120.0MB, bw=36115KB/s, iops=35 , runt=145173msec
    clat (msec): min=13 , max=733 , avg=28.26, stdev=36.18
     lat (msec): min=13 , max=733 , avg=28.35, stdev=36.18
    clat percentiles (msec):
     |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
     | 30.00th=[   15], 40.00th=[   16], 50.00th=[   17], 60.00th=[   21],
     | 70.00th=[   26], 80.00th=[   33], 90.00th=[   48], 95.00th=[   76],
     | 99.00th=[  165], 99.50th=[  223], 99.90th=[  469]
    bw (KB/s)  : min= 2409, max=63109, per=100.00%, avg=37125.47, stdev=13231.65
    lat (msec) : 20=59.77%, 50=30.78%, 100=6.23%, 250=2.75%, 500=0.37%
    lat (msec) : 750=0.10%
  cpu          : usr=0.31%, sys=0.44%, ctx=5179, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5120/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=36114KB/s, minb=36981KB/s, maxb=36981KB/s, mint=145173msec, maxt=145173msec

Disk stats (read/write):
  sda: ios=2/25363, merge=0/1425, ticks=37/778774, in_queue=778822, util=99.51%

Sequential read, 4 concurrent jobs

[seqread4]
readwrite=read
blocksize=1m
size=1g
directory=./
direct=1
numjobs=4
loops=5
# fio seqread4.fio
seqread4: (g=0): rw=read, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
...
seqread4: (g=0): rw=read, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 2.0.5
Starting 4 processes
seqread4: Laying out IO file(s) (1 file(s) / 1024MB)
seqread4: Laying out IO file(s) (1 file(s) / 1024MB)
seqread4: Laying out IO file(s) (1 file(s) / 1024MB)
seqread4: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [__R_] [98.9% done] [63708K/0K /s] [60 /0  iops] [eta 00m:05s]
seqread4: (groupid=0, jobs=1): err= 0: pid=9002
  read : io=5120.0MB, bw=13432KB/s, iops=13 , runt=390342msec
    clat (msec): min=11 , max=2952 , avg=76.23, stdev=143.01
     lat (msec): min=11 , max=2952 , avg=76.23, stdev=143.01
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   13], 10.00th=[   13], 20.00th=[   13],
     | 30.00th=[   13], 40.00th=[   13], 50.00th=[   14], 60.00th=[   19],
     | 70.00th=[   30], 80.00th=[  115], 90.00th=[  318], 95.00th=[  347],
     | 99.00th=[  424], 99.50th=[  490], 99.90th=[ 1975]
    bw (KB/s)  : min=  346, max=30658, per=29.96%, avg=14279.41, stdev=6330.28
    lat (msec) : 20=61.35%, 50=15.51%, 100=2.79%, 250=7.29%, 500=12.60%
    lat (msec) : 750=0.31%, 1000=0.02%, 2000=0.04%, >=2000=0.10%
  cpu          : usr=0.01%, sys=0.13%, ctx=5218, majf=0, minf=282
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5120/w=0/d=0, short=r=0/w=0/d=0
seqread4: (groupid=0, jobs=1): err= 0: pid=9003
  read : io=5120.0MB, bw=12326KB/s, iops=12 , runt=425368msec
    clat (msec): min=11 , max=5712 , avg=83.08, stdev=175.50
     lat (msec): min=11 , max=5712 , avg=83.08, stdev=175.50
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   13], 10.00th=[   13], 20.00th=[   13],
     | 30.00th=[   13], 40.00th=[   15], 50.00th=[   22], 60.00th=[   29],
     | 70.00th=[   43], 80.00th=[  124], 90.00th=[  302], 95.00th=[  343],
     | 99.00th=[  445], 99.50th=[  840], 99.90th=[ 1549]
    bw (KB/s)  : min=  179, max=50712, per=28.47%, avg=13571.64, stdev=7380.59
    lat (msec) : 20=48.46%, 50=24.39%, 100=5.72%, 250=9.14%, 500=11.50%
    lat (msec) : 750=0.27%, 1000=0.10%, 2000=0.37%, >=2000=0.04%
  cpu          : usr=0.01%, sys=0.12%, ctx=5225, majf=0, minf=283
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5120/w=0/d=0, short=r=0/w=0/d=0
seqread4: (groupid=0, jobs=1): err= 0: pid=9004
  read : io=5120.0MB, bw=11917KB/s, iops=11 , runt=439947msec
    clat (msec): min=11 , max=9006 , avg=85.92, stdev=309.61
     lat (msec): min=11 , max=9006 , avg=85.92, stdev=309.61
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   13], 10.00th=[   13], 20.00th=[   13],
     | 30.00th=[   13], 40.00th=[   13], 50.00th=[   15], 60.00th=[   21],
     | 70.00th=[   32], 80.00th=[   64], 90.00th=[  314], 95.00th=[  347],
     | 99.00th=[  465], 99.50th=[ 1188], 99.90th=[ 4621]
    bw (KB/s)  : min=  173, max=76148, per=31.86%, avg=15188.06, stdev=11432.61
    lat (msec) : 20=59.73%, 50=18.26%, 100=3.52%, 250=6.52%, 500=11.05%
    lat (msec) : 750=0.33%, 1000=0.08%, 2000=0.14%, >=2000=0.37%
  cpu          : usr=0.01%, sys=0.13%, ctx=5241, majf=0, minf=283
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5120/w=0/d=0, short=r=0/w=0/d=0
seqread4: (groupid=0, jobs=1): err= 0: pid=9005
  read : io=5120.0MB, bw=13183KB/s, iops=12 , runt=397707msec
    clat (msec): min=11 , max=2951 , avg=77.67, stdev=147.40
     lat (msec): min=11 , max=2951 , avg=77.67, stdev=147.40
    clat percentiles (msec):
     |  1.00th=[   12],  5.00th=[   13], 10.00th=[   13], 20.00th=[   13],
     | 30.00th=[   13], 40.00th=[   14], 50.00th=[   17], 60.00th=[   21],
     | 70.00th=[   32], 80.00th=[  106], 90.00th=[  314], 95.00th=[  347],
     | 99.00th=[  441], 99.50th=[  619], 99.90th=[ 1795]
    bw (KB/s)  : min=  347, max=33851, per=29.67%, avg=14144.82, stdev=6534.68
    lat (msec) : 20=58.89%, 50=18.38%, 100=2.62%, 250=7.17%, 500=12.23%
    lat (msec) : 750=0.37%, 1000=0.14%, 2000=0.12%, >=2000=0.10%
  cpu          : usr=0.01%, sys=0.13%, ctx=5220, majf=0, minf=282
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=5120/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=20480MB, aggrb=47668KB/s, minb=12203KB/s, maxb=13753KB/s, mint=390342msec, maxt=439947msec

Disk stats (read/write):
  sda: ios=59655/1473, merge=46382/960, ticks=9162485/989380, in_queue=10027085, util=100.00%

Sequential write, 4 concurrent jobs

[seqwrite4]
readwrite=write
blocksize=1m
size=1g
directory=./
direct=1
numjobs=4
loops=5
# fio seqwrite4.fio
seqwrite4: (g=0): rw=write, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
...
seqwrite4: (g=0): rw=write, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio 2.0.5
Starting 4 processes
seqwrite4: Laying out IO file(s) (1 file(s) / 1024MB)
seqwrite4: Laying out IO file(s) (1 file(s) / 1024MB)
seqwrite4: Laying out IO file(s) (1 file(s) / 1024MB)
seqwrite4: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [W___] [100.0% done] [0K/42820K /s] [0 /40  iops] [eta 00m:00s]
seqwrite4: (groupid=0, jobs=1): err= 0: pid=9029
  write: io=5120.0MB, bw=9273.5KB/s, iops=9 , runt=565367msec
    clat (msec): min=14 , max=2243 , avg=110.33, stdev=141.81
     lat (msec): min=14 , max=2243 , avg=110.42, stdev=141.81
    clat percentiles (msec):
     |  1.00th=[   16],  5.00th=[   17], 10.00th=[   18], 20.00th=[   20],
     | 30.00th=[   27], 40.00th=[   41], 50.00th=[   61], 60.00th=[   84],
     | 70.00th=[  122], 80.00th=[  180], 90.00th=[  273], 95.00th=[  343],
     | 99.00th=[  627], 99.50th=[  906], 99.90th=[ 1319]
    bw (KB/s)  : min=  802, max=50906, per=26.56%, avg=9852.55, stdev=5032.29
    lat (msec) : 20=20.68%, 50=23.42%, 100=21.37%, 250=22.73%, 500=10.04%
    lat (msec) : 750=1.05%, 1000=0.25%, 2000=0.43%, >=2000=0.02%
  cpu          : usr=0.08%, sys=0.10%, ctx=5753, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5120/d=0, short=r=0/w=0/d=0
seqwrite4: (groupid=0, jobs=1): err= 0: pid=9030
  write: io=5120.0MB, bw=9315.6KB/s, iops=9 , runt=562809msec
    clat (msec): min=13 , max=4681 , avg=109.83, stdev=164.63
     lat (msec): min=13 , max=4681 , avg=109.92, stdev=164.63
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   17], 10.00th=[   18], 20.00th=[   19],
     | 30.00th=[   25], 40.00th=[   39], 50.00th=[   60], 60.00th=[   81],
     | 70.00th=[  116], 80.00th=[  180], 90.00th=[  269], 95.00th=[  343],
     | 99.00th=[  627], 99.50th=[  881], 99.90th=[ 1860]
    bw (KB/s)  : min=  344, max=34403, per=27.01%, avg=10020.61, stdev=4993.68
    lat (msec) : 20=22.99%, 50=22.42%, 100=21.11%, 250=21.99%, 500=9.80%
    lat (msec) : 750=0.92%, 1000=0.35%, 2000=0.33%, >=2000=0.08%
  cpu          : usr=0.08%, sys=0.09%, ctx=5741, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5120/d=0, short=r=0/w=0/d=0
seqwrite4: (groupid=0, jobs=1): err= 0: pid=9031
  write: io=5120.0MB, bw=9333.9KB/s, iops=9 , runt=561710msec
    clat (msec): min=13 , max=2428 , avg=109.62, stdev=145.08
     lat (msec): min=13 , max=2429 , avg=109.70, stdev=145.08
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   17], 10.00th=[   18], 20.00th=[   19],
     | 30.00th=[   26], 40.00th=[   39], 50.00th=[   60], 60.00th=[   82],
     | 70.00th=[  119], 80.00th=[  180], 90.00th=[  269], 95.00th=[  347],
     | 99.00th=[  619], 99.50th=[  906], 99.90th=[ 1434]
    bw (KB/s)  : min=  678, max=31507, per=26.78%, avg=9933.07, stdev=4731.98
    lat (msec) : 20=22.29%, 50=23.09%, 100=20.55%, 250=22.38%, 500=9.92%
    lat (msec) : 750=1.05%, 1000=0.37%, 2000=0.31%, >=2000=0.04%
  cpu          : usr=0.08%, sys=0.10%, ctx=5765, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5120/d=0, short=r=0/w=0/d=0
seqwrite4: (groupid=0, jobs=1): err= 0: pid=9032
  write: io=5120.0MB, bw=9325.5KB/s, iops=9 , runt=562214msec
    clat (msec): min=14 , max=4000 , avg=109.72, stdev=170.30
     lat (msec): min=14 , max=4000 , avg=109.80, stdev=170.30
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   17], 10.00th=[   18], 20.00th=[   20],
     | 30.00th=[   26], 40.00th=[   39], 50.00th=[   59], 60.00th=[   81],
     | 70.00th=[  116], 80.00th=[  174], 90.00th=[  265], 95.00th=[  338],
     | 99.00th=[  660], 99.50th=[  938], 99.90th=[ 2147]
    bw (KB/s)  : min=  508, max=35929, per=27.24%, avg=10105.39, stdev=4912.41
    lat (msec) : 20=21.64%, 50=23.79%, 100=21.00%, 250=22.68%, 500=9.22%
    lat (msec) : 750=0.86%, 1000=0.39%, 2000=0.29%, >=2000=0.14%
  cpu          : usr=0.08%, sys=0.09%, ctx=5807, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5120/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=20480MB, aggrb=37093KB/s, minb=9495KB/s, maxb=9557KB/s, mint=561710msec, maxt=565367msec

Disk stats (read/write):
  sda: ios=0/72719, merge=0/44002, ticks=0/10909055, in_queue=10909124, util=99.98%

Random read, 512k

[randread512k]
readwrite=randread
blocksize=512k
size=1g
directory=./
direct=1
loops=5
# fio randread512k.fio
randread512k: (g=0): rw=randread, bs=512K-512K/512K-512K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randread512k: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [r] [100.0% done] [10966K/0K /s] [20 /0  iops] [eta 00m:00s]
randread512k: (groupid=0, jobs=1): err= 0: pid=9073
  read : io=5120.0MB, bw=11334KB/s, iops=22 , runt=462585msec
    clat (msec): min=5 , max=1213 , avg=45.17, stdev=35.23
     lat (msec): min=5 , max=1213 , avg=45.17, stdev=35.23
    clat percentiles (msec):
     |  1.00th=[    6],  5.00th=[    6], 10.00th=[    7], 20.00th=[   18],
     | 30.00th=[   28], 40.00th=[   35], 50.00th=[   42], 60.00th=[   49],
     | 70.00th=[   57], 80.00th=[   67], 90.00th=[   83], 95.00th=[  102],
     | 99.00th=[  151], 99.50th=[  178], 99.90th=[  285]
    bw (KB/s)  : min=  640, max=20480, per=100.00%, avg=11392.76, stdev=2561.48
    lat (msec) : 10=17.86%, 20=3.89%, 50=40.21%, 100=32.78%, 250=5.01%
    lat (msec) : 500=0.23%, 750=0.01%, 2000=0.01%
  cpu          : usr=0.02%, sys=0.15%, ctx=10292, majf=0, minf=154
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=10240/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=11333KB/s, minb=11605KB/s, maxb=11605KB/s, mint=462585msec, maxt=462585msec

Disk stats (read/write):
  sda: ios=12996/1670, merge=12779/625, ticks=673643/430788, in_queue=1104454, util=99.92%

Random write, 512k

[randwrite512k]
readwrite=randwrite
blocksize=512k
size=1g
directory=./
direct=1
loops=5
# fio randwrite512k.fio
randwrite512k: (g=0): rw=randwrite, bs=512K-512K/512K-512K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randwrite512k: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [w] [100.0% done] [0K/29765K /s] [0 /56  iops] [eta 00m:00s]
randwrite512k: (groupid=0, jobs=1): err= 0: pid=22853
  write: io=5120.0MB, bw=34276KB/s, iops=66 , runt=152959msec
    clat (msec): min=7 , max=1158 , avg=14.89, stdev=28.56
     lat (msec): min=7 , max=1158 , avg=14.93, stdev=28.56
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[    9], 10.00th=[    9], 20.00th=[    9],
     | 30.00th=[    9], 40.00th=[   10], 50.00th=[   10], 60.00th=[   11],
     | 70.00th=[   11], 80.00th=[   14], 90.00th=[   23], 95.00th=[   35],
     | 99.00th=[   90], 99.50th=[  128], 99.90th=[  416]
    bw (KB/s)  : min=  873, max=56440, per=100.00%, avg=35549.34, stdev=12392.43
    lat (msec) : 10=60.17%, 20=26.65%, 50=10.36%, 100=2.06%, 250=0.58%
    lat (msec) : 500=0.14%, 750=0.01%, 1000=0.02%, 2000=0.02%
  cpu          : usr=0.27%, sys=1.67%, ctx=10951, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=10240/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=34276KB/s, minb=35099KB/s, maxb=35099KB/s, mint=152959msec, maxt=152959msec

Disk stats (read/write):
  sda: ios=0/23917, merge=0/260483, ticks=0/773513, in_queue=773513, util=98.40%

Random read, 4k

[randread4k]
readwrite=randread
blocksize=4k
size=100m
directory=./
direct=1
loops=3
# fio randread4k.fio
randread4k: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randread4k: Laying out IO file(s) (1 file(s) / 100MB)
Jobs: 1 (f=1): [r] [100.0% done] [11704K/0K /s] [2857 /0  iops] [eta 00m:00s]
randread4k: (groupid=0, jobs=1): err= 0: pid=22895
  read : io=307200KB, bw=10796KB/s, iops=2699 , runt= 28454msec
    clat (usec): min=256 , max=128032 , avg=365.92, stdev=839.38
     lat (usec): min=256 , max=128033 , avg=366.17, stdev=839.38
    clat percentiles (usec):
     |  1.00th=[  330],  5.00th=[  334], 10.00th=[  338], 20.00th=[  338],
     | 30.00th=[  342], 40.00th=[  342], 50.00th=[  346], 60.00th=[  350],
     | 70.00th=[  354], 80.00th=[  362], 90.00th=[  374], 95.00th=[  382],
     | 99.00th=[  402], 99.50th=[  426], 99.90th=[ 1020]
    bw (KB/s)  : min= 7024, max=11440, per=99.96%, avg=10791.29, stdev=955.92
    lat (usec) : 500=99.79%, 750=0.07%, 1000=0.04%
    lat (msec) : 2=0.03%, 4=0.01%, 10=0.02%, 20=0.03%, 50=0.01%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=1.39%, sys=3.71%, ctx=76802, majf=0, minf=27
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=76800/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=307200KB, aggrb=10796KB/s, minb=11055KB/s, maxb=11055KB/s, mint=28454msec, maxt=28454msec

Disk stats (read/write):
  sda: ios=76084/241, merge=0/87, ticks=27026/15233, in_queue=42258, util=95.51%

Random write, 4k

[randwrite4k]
readwrite=randwrite
blocksize=4k
size=100m
directory=./
direct=1
loops=3
# fio randwrite4k.fio
randwrite4k: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 1 process
randwrite4k: Laying out IO file(s) (1 file(s) / 100MB)
Jobs: 1 (f=1): [w] [100.0% done] [0K/9893K /s] [0 /2415  iops] [eta 00m:00s]
randwrite4k: (groupid=0, jobs=1): err= 0: pid=22907
  write: io=307200KB, bw=8351.3KB/s, iops=2087 , runt= 36786msec
    clat (usec): min=323 , max=131784 , avg=474.03, stdev=1177.03
     lat (usec): min=323 , max=131785 , avg=474.44, stdev=1177.03
    clat percentiles (usec):
     |  1.00th=[  350],  5.00th=[  354], 10.00th=[  358], 20.00th=[  362],
     | 30.00th=[  370], 40.00th=[  374], 50.00th=[  386], 60.00th=[  398],
     | 70.00th=[  502], 80.00th=[  532], 90.00th=[  588], 95.00th=[  604],
     | 99.00th=[  724], 99.50th=[  884], 99.90th=[14272]
    bw (KB/s)  : min=  296, max=10664, per=99.71%, avg=8327.08, stdev=2390.60
    lat (usec) : 500=68.98%, 750=30.12%, 1000=0.50%
    lat (msec) : 2=0.20%, 4=0.04%, 10=0.04%, 20=0.05%, 50=0.05%
    lat (msec) : 100=0.01%, 250=0.01%
  cpu          : usr=1.30%, sys=4.31%, ctx=76814, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=76800/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=307200KB, aggrb=8351KB/s, minb=8551KB/s, maxb=8551KB/s, mint=36786msec, maxt=36786msec

Disk stats (read/write):
  sda: ios=0/76445, merge=0/392, ticks=0/47345, in_queue=47412, util=95.54%

Random read, 4k, 32 concurrent jobs

[randread4k32]
readwrite=randread
blocksize=4k
size=10m
directory=./
direct=1
numjobs=32
loops=3
# fio randread4k32.fio
randread4k32: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
...
randread4k32: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 32 processes
(omitted)
randread4k32: (groupid=0, jobs=1): err= 0: pid=9275
  read : io=30720KB, bw=564134 B/s, iops=137 , runt= 55762msec
    clat (usec): min=250 , max=206707 , avg=7254.77, stdev=14087.63
     lat (usec): min=250 , max=206707 , avg=7255.13, stdev=14087.65
    clat percentiles (usec):
     |  1.00th=[  318],  5.00th=[  334], 10.00th=[  334], 20.00th=[  342],
     | 30.00th=[  346], 40.00th=[  354], 50.00th=[  374], 60.00th=[ 1304],
     | 70.00th=[ 4448], 80.00th=[11968], 90.00th=[25216], 95.00th=[36608],
     | 99.00th=[63744], 99.50th=[79360], 99.90th=[109056]
    bw (KB/s)  : min=   87, max= 2216, per=3.09%, avg=528.11, stdev=462.15
    lat (usec) : 500=56.29%, 750=1.65%, 1000=1.13%
    lat (msec) : 2=3.57%, 4=6.48%, 10=8.53%, 20=9.23%, 50=11.05%
    lat (msec) : 100=1.88%, 250=0.18%
  cpu          : usr=0.10%, sys=0.25%, ctx=8095, majf=0, minf=27
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=7680/w=0/d=0, short=r=0/w=0/d=0
randread4k32: (groupid=0, jobs=1): err= 0: pid=9276
  read : io=30720KB, bw=640117 B/s, iops=156 , runt= 49143msec
    clat (usec): min=267 , max=233232 , avg=6393.08, stdev=13452.57
     lat (usec): min=267 , max=233232 , avg=6393.42, stdev=13452.63
    clat percentiles (usec):
     |  1.00th=[  322],  5.00th=[  334], 10.00th=[  338], 20.00th=[  338],
     | 30.00th=[  342], 40.00th=[  350], 50.00th=[  362], 60.00th=[  386],
     | 70.00th=[ 2448], 80.00th=[ 9664], 90.00th=[22656], 95.00th=[34048],
     | 99.00th=[60672], 99.50th=[77312], 99.90th=[111104]
    bw (KB/s)  : min=   62, max= 1801, per=3.66%, avg=625.30, stdev=460.94
    lat (usec) : 500=63.33%, 750=0.64%, 1000=0.69%
    lat (msec) : 2=2.93%, 4=5.48%, 10=7.11%, 20=8.16%, 50=9.88%
    lat (msec) : 100=1.59%, 250=0.18%
  cpu          : usr=0.12%, sys=0.29%, ctx=8059, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=7680/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=983040KB, aggrb=17083KB/s, minb=546KB/s, maxb=807KB/s, mint=38957msec, maxt=57543msec

Disk stats (read/write):
  sda: ios=245065/214, merge=0/123, ticks=1641829/5071, in_queue=1646895, util=99.88%

Random write, 4k, 32 concurrent jobs

[randwrite4k32]
readwrite=randwrite
blocksize=4k
size=10m
directory=/c
direct=1
numjobs=32
loops=3
# fio randwrite4k32.fio
randwrite4k32: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
...
randwrite4k32: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.5
Starting 32 processes
(omitted)
randwrite4k32: (groupid=0, jobs=1): err= 0: pid=9386
  write: io=30720KB, bw=261951 B/s, iops=63 , runt=120088msec
    clat (usec): min=327 , max=5199.8K, avg=15631.22, stdev=137044.57
     lat (usec): min=328 , max=5199.8K, avg=15631.68, stdev=137044.60
    clat percentiles (usec):
     |  1.00th=[  350],  5.00th=[  358], 10.00th=[  362], 20.00th=[  366],
     | 30.00th=[  370], 40.00th=[  378], 50.00th=[  386], 60.00th=[  398],
     | 70.00th=[  410], 80.00th=[  466], 90.00th=[  716], 95.00th=[  756],
     | 99.00th=[460800], 99.50th=[684032], 99.90th=[1810432]
    bw (KB/s)  : min=    0, max= 2424, per=4.43%, avg=303.23, stdev=588.61
    lat (usec) : 500=81.11%, 750=13.41%, 1000=2.02%
    lat (msec) : 2=0.12%, 4=0.17%, 10=0.03%, 20=0.05%, 50=0.13%
    lat (msec) : 100=0.14%, 250=0.89%, 500=1.07%, 750=0.44%, 1000=0.12%
    lat (msec) : 2000=0.22%, >=2000=0.09%
  cpu          : usr=0.04%, sys=0.06%, ctx=7711, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=7680/d=0, short=r=0/w=0/d=0
randwrite4k32: (groupid=0, jobs=1): err= 0: pid=9387
  write: io=30720KB, bw=340819 B/s, iops=83 , runt= 92299msec
    clat (usec): min=313 , max=5199.1K, avg=12012.91, stdev=122783.50
     lat (usec): min=314 , max=5199.1K, avg=12013.35, stdev=122783.51
    clat percentiles (usec):
     |  1.00th=[  350],  5.00th=[  354], 10.00th=[  358], 20.00th=[  366],
     | 30.00th=[  374], 40.00th=[  378], 50.00th=[  386], 60.00th=[  394],
     | 70.00th=[  410], 80.00th=[  438], 90.00th=[  580], 95.00th=[  716],
     | 99.00th=[350208], 99.50th=[593920], 99.90th=[1810432]
    bw (KB/s)  : min=    0, max= 3847, per=6.04%, avg=414.00, stdev=771.07
    lat (usec) : 500=86.95%, 750=9.61%, 1000=0.72%
    lat (msec) : 2=0.10%, 4=0.17%, 10=0.05%, 20=0.03%, 50=0.10%
    lat (msec) : 100=0.09%, 250=0.68%, 500=0.92%, 750=0.20%, 1000=0.10%
    lat (msec) : 2000=0.20%, >=2000=0.08%
  cpu          : usr=0.05%, sys=0.11%, ctx=7721, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=7680/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=983040KB, aggrb=6852KB/s, minb=219KB/s, maxb=643KB/s, mint=48878msec, maxt=143459msec

Disk stats (read/write):
  sda: ios=0/245669, merge=0/380, ticks=0/3939189, in_queue=3939238, util=100.00%

The table below summarizes the results above. The IOPS columns for the multi-job runs are left blank because the aggregate figure isn't obvious from the summary lines; the per-job numbers in the omitted sections are probably what's needed, but including them would easily exceed the post's character limit, so they are left out.

Type Speed (MB/s) IOPS
seq read 66.2 64
seq write 36.1 35
seq read*4 47.7
seq write*4 37.1
rand read 512k 11.3 22
rand write 512k 34.3 66
rand read 4k 10.8 2699
rand write 4k 8.4 2087
rand read 4k*32 17.0
rand write 4k*32 6.9
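Pulling the summary rows out of the fio logs can be done mechanically. Below is a rough sketch; it assumes the fio 2.0.x plain-text output format shown above (it does not handle the `B/s` bandwidth lines of the 32-job runs, and it mirrors the table's rounding by treating 1 MB/s as 1000 KB/s):

```python
import re

# Matches per-job result lines such as:
#   read : io=5120.0MB, bw=66233KB/s, iops=64 , runt=79157msec
RESULT_RE = re.compile(
    r"(read|write)\s*:\s*io=\S+,\s*bw=([\d.]+)KB/s,\s*iops=(\d+)"
)

def parse_fio_results(log_text):
    """Extract (direction, MB/s, iops) tuples from fio 2.0.x text output.

    Only KB/s bandwidth lines are matched; lines reported in B/s
    (as in the 32-job runs) are skipped.
    """
    rows = []
    for m in RESULT_RE.finditer(log_text):
        direction, bw_kb, iops = m.group(1), float(m.group(2)), int(m.group(3))
        # Match the table's rounding: 66233 KB/s -> 66.2 MB/s.
        rows.append((direction, round(bw_kb / 1000, 1), iops))
    return rows

sample = "  write: io=5120.0MB, bw=36115KB/s, iops=35 , runt=145173msec"
print(parse_fio_results(sample))  # -> [('write', 36.1, 35)]
```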

The random 4k performance looks remarkably good, but this is probably the ReadyNAS's cache at work.
Sequential throughput, on the other hand, looks underwhelming, and I'd like to know whether X-RAID2 is the cause.
So next I'll benchmark X-RAID2. NFS will come later.
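One way to check the cache hypothesis would be to rerun the 4k random jobs with a working set larger than the NAS's main memory, so reads can't be served entirely from the page cache. A hypothetical variant of the randread4k job above (the `size` value is an assumption, not something tested in this post):

```ini
# Hypothetical job: same as randread4k, but with a file well beyond
# the Ultra4 Plus's RAM so the cache can't hold the whole working set.
[randread4k-nocache]
readwrite=randread
blocksize=4k
size=4g
directory=./
direct=1
loops=1
```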