@tcpdump-examples
Created March 21, 2021 11:40
How to Fix IO Wait Issues in Linux

High IOwait is a common Linux performance issue. In this post we will look at what IOwait means and what contributes to the problem, which should give you more ideas for dealing with high IOwait.

What is IOwait?

IO wait is a CPU metric: it measures the time the CPU sits idle while there is outstanding I/O it is waiting on.

The iowait column in the top command output shows the percentage of time the processor was waiting for I/O to complete. It indicates that the system is waiting on disk or network I/O, and while it waits on those resources it cannot fully utilize the CPU.

Learn more about IOwait in the references at the end of this post.
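For reference, top derives these percentages from the cumulative counters in /proc/stat; on the cpu line the fields are, in order, user, nice, system, idle, iowait, irq, softirq and steal time, measured in clock ticks. A minimal way to peek at the raw counter:

[root@localhost ~]# grep '^cpu ' /proc/stat    # the 5th numeric field is cumulative iowait, in clock ticks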

I/O wait is a problem that sometimes requires advanced tools to debug, but there are also many advanced uses of the basic tools.

Determine whether an I/O problem is slowing the system

To verify that the system is slow because of I/O, we can use several commands, but the simplest is the UNIX command top.


[root@localhost ~]# top
top - 15:19:26 up  6:10,  4 users,  load average: 0.00, 0.01, 0.05
Tasks: 147 total,   1 running, 146 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.3 sy,  0.0 ni,  3.7 id, 96.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :   999936 total,   121588 free,   328672 used,   549676 buff/cache
KiB Swap:  2097148 total,  2095792 free,     1356 used.   450460 avail Mem 

From the %Cpu(s) line we can see the percentage of CPU time spent on I/O wait; the higher the number, the more CPU time is stuck waiting for I/O to complete.

wa -- iowait
    Amount of time the CPU has been waiting for I/O to complete.

Check the post on CPU usage (us, sy, wa, hi) in Linux top to understand each of these fields.
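If you need to capture the same numbers non-interactively, for example from a script, the following sketch uses top in batch mode and vmstat; both are standard tools, though the exact output formatting varies between versions:

[root@localhost ~]# top -bn1 | grep Cpu        # -b batch mode, -n1 a single iteration; keeps only the %Cpu(s) line
[root@localhost ~]# vmstat 1 5                 # the "wa" column reports iowait once per second, 5 samples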

Find which disk is being written to

The top command above shows I/O wait for the system as a whole, but it does not indicate which disk is affected. To find out which disk is causing the problem, we use another command, iostat. Check the post Troubleshooting Disk Issues in Linux for more.

[root@localhost ~]# iostat -x 2 5
Linux 3.10.0-514.el7.x86_64 (localhost.localdomain)     03/03/2017     _x86_64_    (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.34    0.00    0.31    0.01    0.00   99.33

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.05    1.16    0.17    39.00    17.38    84.60     0.00    2.17    0.87   11.14   0.65   111.41
scd0              0.00     0.00    0.00    0.00     0.00     0.00     8.00     0.00    0.64    0.64    0.00   0.64   0.00
dm-0              0.00     0.00    1.10    0.20    37.85    17.21    84.71     0.00    2.43    0.90   10.88   0.66   0.09
dm-1              0.00     0.00    0.01    0.02     0.07     0.08     9.70     0.00    1.42    0.27    2.05   0.09   0.00

In the example above, iostat updates every 2 seconds and prints 5 reports in total; the -x option prints extended statistics.

The first iostat report shows statistics averaged since the system last booted, so in most cases it should be ignored; every remaining report covers the time since the previous one. In this example the command prints 5 reports: the second covers the interval after the first, the third the interval after the second, and so on.
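Recent sysstat versions also provide a -y option that skips that first since-boot report entirely; availability depends on your sysstat version, so treat this as an assumption:

[root@localhost ~]# iostat -xy 2 5     # -y omits the since-boot report, so every report covers a real 2-second interval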

In the example above, the %util of sda is 111.41%, which is a good indication that a process is writing heavily to the sda disk.

Beyond %util, iostat gives us much richer information, such as merged read/write requests per second (rrqm/s & wrqm/s), reads and writes per second (r/s & w/s), and more. In the example above, the workload appears to be doing a great deal of reading and writing, which is very useful when narrowing down the responsible process.
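Once a suspect disk has been identified, you can narrow the report to that single device, which makes it easier to watch while reproducing the problem; a minimal sketch using sda from the example above:

[root@localhost ~]# iostat -xd sda 2   # -d reports device statistics only, here restricted to sda, refreshed every 2 seconds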

Find processes that cause high I/O wait

[root@localhost ~]# iotop

Total DISK READ :    0.00 B/s | Total DISK WRITE :       0.00 B/s
Actual DISK READ:    0.00 B/s | Actual DISK WRITE:       0.00 B/s
   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND   
  1028 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % sshd

The simplest way to find the culprit is to use iotop to check disk I/O usage per process. Looking at the iotop statistics, we can easily identify sshd as the culprit.

Although iotop is a very powerful tool and easy to use, it is not installed by default on all Linux distributions.
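It can usually be installed from the standard repositories (the package names below are the common ones, but may differ on your distribution), and a few flags make its output much easier to read:

[root@localhost ~]# yum install iotop          # RHEL/CentOS; on Debian/Ubuntu use: apt-get install iotop
[root@localhost ~]# iotop -oPa -b -n 5         # -o only tasks doing I/O, -P per process, -a accumulated totals, -b batch mode, 5 iterations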

Find which file is causing the I/O wait

The lsof command can list all the files a process has open, or all the processes that have a particular file open. From this list we can work out which files are being written to, based on the file sizes and the per-process I/O data under /proc.
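For the per-process I/O counters mentioned above, /proc/<PID>/io reports how many bytes a process has read and written (available when the kernel has task I/O accounting enabled, which is the case on most modern distributions); using PID 1028 identified with iotop:

[root@localhost ~]# cat /proc/1028/io          # read_bytes / write_bytes show the actual disk I/O issued by the process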

We can use the -p <PID> option to limit the output to a single process; the PID here is the process identified above.

[root@localhost ~]# lsof -p 1028
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
sshd    1028 root  cwd    DIR  253,0      233      64 /
sshd    1028 root  rtd    DIR  253,0      233      64 /
sshd    1028 root  txt    REG  253,0   819640 2393730 /usr/sbin/sshd
sshd    1028 root  mem    REG  253,0    61752  180464 /usr/lib64/libnss_files-2.17.so
sshd    1028 root  mem    REG  253,0    43928  180476 /usr/lib64/librt-2.17.so
sshd    1028 root  mem    REG  253,0    15688  269136 /usr/lib64/libkeyutils.so.1.5
sshd    1028 root  mem    REG  253,0    62744  482870 /usr/lib64/libkrb5support.so.0.1
sshd    1028 root  mem    REG  253,0    11384  180425 /usr/lib64/libfreebl3.so
sshd    1028 root  mem    REG  253,0   143352  180472 /usr/lib64/libpthread-2.17.so
sshd    1028 root  mem    REG  253,0   251784  202440 /usr/lib64/libnspr4.so
sshd    1028 root  mem    REG  253,0    20016  202441 /usr/lib64/libplc4.so
sshd    1028 root  mem    REG  253,0    15768  202442 /usr/lib64/libplds4.so
sshd    1028 root  mem    REG  253,0   182056  202443 /usr/lib64/libnssutil3.so
sshd    1028 root  mem    REG  253,0  1220240  650074 /usr/lib64/libnss3.so
sshd    1028 root  mem    REG  253,0   164048  650076 /usr/lib64/libsmime3.so
sshd    1028 root  mem    REG  253,0   276752  650077 /usr/lib64/libssl3.so
sshd    1028 root  mem    REG  253,0   121296  269112 /usr/lib64/libsasl2.so.3.0.0
sshd    1028 root  mem    REG  253,0   398264  202404 /usr/lib64/libpcre.so.1.2.0
sshd    1028 root  mem    REG  253,0  2116736  180446 /usr/lib64/libc-2.17.so
sshd    1028 root  mem    REG  253,0    15848  202439 /usr/lib64/libcom_err.so.2.1
sshd    1028 root  mem    REG  253,0   202568  482862 /usr/lib64/libk5crypto.so.3.1
sshd    1028 root  mem    REG  253,0   959008  482868 /usr/lib64/libkrb5.so.3.3
sshd    1028 root  mem    REG  253,0   324888  482858 /usr/lib64/libgssapi_krb5.so.2.2
sshd    1028 root  mem    REG  253,0   110632  180474 /usr/lib64/libresolv-2.17.so
sshd    1028 root  mem    REG  253,0    40640  180450 /usr/lib64/libcrypt-2.17.so
sshd    1028 root  mem    REG  253,0   113152  180456 /usr/lib64/libnsl-2.17.so
sshd    1028 root  mem    REG  253,0    90664  202424 /usr/lib64/libz.so.1.2.7
sshd    1028 root  mem    REG  253,0    14432  186432 /usr/lib64/libutil-2.17.so
sshd    1028 root  mem    REG  253,0    61872  766946 /usr/lib64/liblber-2.4.so.2.10.3
sshd    1028 root  mem    REG  253,0   344280  766948 /usr/lib64/libldap-2.4.so.2.10.3
sshd    1028 root  mem    REG  253,0    19344  180452 /usr/lib64/libdl-2.17.so
sshd    1028 root  mem    REG  253,0  2025472  482880 /usr/lib64/libcrypto.so.1.0.1e
sshd    1028 root  mem    REG  253,0    23968  202508 /usr/lib64/libcap-ng.so.0.0.0
sshd    1028 root  mem    REG  253,0   155744  202421 /usr/lib64/libselinux.so.1
sshd    1028 root  mem    REG  253,0    61672  539049 /usr/lib64/libpam.so.0.83.1
sshd    1028 root  mem    REG  253,0   122936  202512 /usr/lib64/libaudit.so.1.0.0
sshd    1028 root  mem    REG  253,0    42520  298848 /usr/lib64/libwrap.so.0.7.6
sshd    1028 root  mem    REG  253,0    11328  568388 /usr/lib64/libfipscheck.so.1.2.1
sshd    1028 root  mem    REG  253,0   155064  180439 /usr/lib64/ld-2.17.so
sshd    1028 root    0u   CHR    1,3      0t0    5930 /dev/null
sshd    1028 root    1u   CHR    1,3      0t0    5930 /dev/null
sshd    1028 root    2u   CHR    1,3      0t0    5930 /dev/null
sshd    1028 root    3u  IPv4  21185      0t0     TCP *:ssh (LISTEN)
sshd    1028 root    4u  IPv6  21194      0t0     TCP *:ssh (LISTEN)

To further confirm which filesystem and disk these files live on, we can check with the following command.

[root@localhost ~]# df /tmp
Filesystem          1K-blocks    Used Available Use% Mounted on
/dev/mapper/cl-root  17811456 3981928  13829528  23% /

From the output above, we can determine that /tmp lives on the root filesystem, which is the logical volume /dev/mapper/cl-root in our environment.

[root@localhost ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               cl
  PV Size               19.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4863
  Free PE               0
  Allocated PE          4863
  PV UUID               4QfaOy-DNSO-niK1-ayn2-K6AY-WZMy-9Nd2It

From pvdisplay we can see that /dev/sda2 is the physical volume backing that logical volume, i.e. a partition of the sda disk flagged by iostat. With the information above, we can safely say that the files listed by lsof are the ones we are looking for.
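As a cross-check, lsblk can print the same device stack in one step; with -s (inverse dependencies) it walks from the logical volume back down to the physical disk. A minimal sketch with the names from this example:

[root@localhost ~]# lsblk -s /dev/mapper/cl-root    # prints cl-root, its backing partition sda2 and the disk sda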

References:

Troubleshooting High IOwait in Linux

Sort Disk Read Write in Linux iotop Command

Debugging High CPU Utilization for a Process on Linux

How to Check Disk Usage and Disk Utilization on Linux

What is High IOwait on Linux

Using Linux iotop to Check Disk IO Usage Per Process

How to Check if a Disk is Busy in Linux

Troubleshooting Disk Issues in Linux

@mikkorantalainen

Note that if you find a single file is having bad performance, running defrag for that single file may help. For example, if you do not have logrotate configured for the log files, a single log file may be fragmented very heavily if it's big enough and multiple years old. If you're using ext4 filesystem, you can simply run sudo e4defrag /path/to/problematic/file.ext.
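Before defragmenting, it is worth checking how fragmented the file actually is; a minimal sketch with standard ext4 tooling (filefrag from e2fsprogs, and e4defrag's report-only mode), reusing the path from the comment above:

sudo filefrag -v /path/to/problematic/file.ext    # lists the file's extents; many small extents indicate heavy fragmentation
sudo e4defrag -c /path/to/problematic/file.ext    # -c only reports a fragmentation score, without modifying the file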
