What is the best way to calculate and verify how many I/Os are received on a disk?


Answer: 1

17 hours ago

I'm trying to find out whether we are getting the expected I/O (IOPS) when writing and/or reading data from a SCSI or SAN LUN.

In this test environment I have an Ubuntu 14.04.5 LTS server with a single LUN, presented by the SAN storage team over a single path on an FC port.

We requested 6,000 IOPS; however, we strongly suspect that the LUN is not actually receiving that many IOPS, while the storage team says they have provisioned them.

So, to show them that something is wrong on their end, I need to know how I can capture or check how many IOPS are actually received, compared against what was requested.

Can someone help me with this? Thanks in advance, and sorry for the poor English.

Added by: Einar Abbott DVM

Answer: 2

30 hours ago

For live monitoring of read/write throughput to specific disks or partitions, you can use the tool nmon. It displays the absolute data rates per partition/disk and estimates how busy each one is (in percent), including a nice bar chart.

You can install it using:

sudo apt install nmon

After that, invoke it either as nmon and then press D to show the disk monitor, or run it as NMON=d nmon to open the disk monitor directly without further input. The refresh rate is 2 seconds by default, but you can change that with the -s parameter, e.g. nmon -s 1 for 1 second. To quit nmon, press Q or Ctrl+C.
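For reference, the two invocations described above look like this (a minimal sketch; the 1-second refresh is just an example):

    nmon -s 1      # interactive; press D to show the disk monitor, Q to quit
    NMON=d nmon    # start directly in the disk monitor view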

Here's an example of how it looks:

┌nmon─14g─────────────────────Hostname=type40mark3──Refresh= 6secs ───16:26.08─┐
│ Disk I/O ──/proc/diskstats────mostly in KB/s─────Warning:contains duplicates─│
│DiskName Busy  Read WriteMB|0          |25         |50          |75       100|│
│sda        0%    0.0    0.0|>                                                |│
│sda1       0%    0.0    0.0|>                                                |│
│sda2       0%    0.0    0.0|>                                                |│
│sda3       0%    0.0    0.0|>                                                |│
│sda4       0%    0.0    0.0|>                                                |│
│sda5       0%    0.0    0.0|>                                                |│
│sdb       49%  230.5    0.0|RRRRRRRRRRRRRRRRRRRRRRRRR                        >│
│sdb1      15%   75.0    0.0|RRRRRRRR                                     >   |│
│sdb2       4%   16.7    0.0|RR    >                                          |│
│sdb3       0%    2.7    0.0|R>                                               |│
│sdb4      30%  135.9    0.0|RRRRRRRRRRRRRRR                         >        |│
│sdb5       0%    0.0    0.0|>                                                |│
│sdb6       0%    0.0    0.0|>                                                |│
│dm-0       0%    0.0    0.0|>                                                |│
│Totals Read-MB/s=460.7    Writes-MB/s=0.1      Transfers/sec=469.0            │                                                                     
└──────────────────────────────────────────────────────────────────────────────┘

Alternatively, if you just want to do a "speed test" of any disk, you can use the hdparm tool:

sudo hdparm -t /dev/sda

This will read from the specified device (here /dev/sda) for about 3 seconds and display the data rate afterwards. Note that it performs a "buffered read" without any prior caching of the data, so it shows how fast the disk can sustain sequential reads. Use -T instead of -t to test read speeds from the Linux buffer cache instead (i.e. without disk access).
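If you want to see the cached-read figure as well, a minimal sketch (same device as above; both flags are standard hdparm options):

    sudo hdparm -T /dev/sda    # cached reads: Linux buffer cache, no disk access
    sudo hdparm -tT /dev/sda   # run both the buffered and the cached read timings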

You cannot test or determine write speeds this way, though.

Added by: Fiona Pfeffer

Answer: 3

8 hours ago

I/O performance will be affected by block device configuration. An excellent reference for block device tuning is here; it covers settings such as queue depth, I/O scheduler, read-ahead, and partition alignment. Note that the page shows the defaults for their storage array; yours may be quite different.

Get your current block device configuration so you can reset it after testing:

    cat /sys/block/lun0/queue/nr_requests
    cat /sys/block/lun0/queue/scheduler
    cat /sys/block/lun0/queue/read_ahead_kb
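
If you want to keep a copy of those values for restoring later, one possible sketch (assuming lun0 is the real device name, as in the commands above):

    # save the current settings so they can be put back after the test
    for f in nr_requests scheduler read_ahead_kb; do
        cat /sys/block/lun0/queue/$f > /tmp/lun0.$f.orig
    done
    # note: the saved scheduler line marks the active one in [brackets];
    # strip the brackets before echoing the value back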

Set block device parameters for testing:

    echo noop > /sys/block/lun0/queue/scheduler

Queue depth is whatever the block device advertises, and for sequential read tests it doesn't matter; the simple NOOP FIFO I/O scheduler is enough; and we don't care about read-ahead for streaming reads.

Check IOPS in a non-destructive way:

    dd if=/dev/lun0 of=/dev/null bs=4k iflag=direct status=progress

Where /dev/lun0 is the multipathed virtual disk device. iflag=direct bypasses your server's page cache. (Note that status=progress requires a newer coreutils than the one shipped with Ubuntu 14.04; if your dd rejects it, just drop that option.)
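To see the IOPS actually being delivered while that dd runs, one option, assuming the sysstat package (which provides iostat) is installed, is to watch the extended device statistics from a second terminal; the r/s and w/s columns are the read and write IOPS being serviced, and you can append the device name to limit the output to your LUN:

    # extended per-device statistics, refreshed every second
    iostat -dx 1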

6,000 IOPS with a 4 KiB block size should give a throughput of about 24.6 MB/s, i.e. 23.4 MiB/s (6,000 * 4 KiB). Actual IOPS is throughput divided by block size. Random read IOPS will be worse on any medium with seek times. Use a 4k block size because the test is for IOPS, not throughput.
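
As a quick worked check of that arithmetic, converting a throughput in bytes/sec back to IOPS at a 4 KiB block size (24,576,000 bytes/sec is just the expected figure from above, not a measurement):

    # IOPS = throughput (bytes/sec) / block size (bytes)
    awk 'BEGIN { printf "%.0f IOPS\n", 24576000 / 4096 }'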

Added by: Clifford Kertzmann
