The function _check_hddio does not take disks in ZFS pools into account. In my case, the iostat command in lines 973-975 gives IO information for disks sda to sdf. Those are looked up in the output of mount -l in line 905, but are not found, so the check stops with the log message "DEBUG: Skipping as no mount point". This happens because disks in ZFS pools do not show up in the output of mount -l; only the ZFS pool names do. As a result, autoshutdown suspends my server even while ZFS resilvers or scrubs are running (which generate more than enough disk IO).
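To illustrate (pool name, mount point, and options are made up): grepping the output of mount -l for a member disk finds nothing, because only the pool's datasets are listed:

$ mount -l | grep sda
$ mount -l -t zfs
tank on /tank type zfs (rw,xattr,noacl)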
For now, I have simply commented out the mount point check in lines 905-907, and it works correctly in my setup. But I am not sure what the best general solution would be.
- Is this check necessary at all? If there are disks that are not mounted, does this lead to an error or undesired behavior further down the line?
- Is there a clean way to detect whether a disk belongs to a mounted ZFS pool? E.g., the command zpool status outputs all available pools with their member disks, but the disks may not be identified by the names iostat uses, but by their UUID or something similar (as illustrated below).
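To illustrate the mismatch (device and pool names are made up): a pool created from /dev/disk/by-id paths lists its members under those names, while iostat reports plain kernel names:

$ zpool status tank
        NAME                                 STATE     READ WRITE CKSUM
        tank                                 ONLINE       0     0     0
          ata-WDC_WD40EFRX-68N32N0_WD-...    ONLINE       0     0     0
$ iostat -d
Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               5.32       210.44       180.12    1048576     902144

If the upath zpool.d script is installed (it ships with ZFS on Linux), zpool status -c upath appends the underlying /dev path to each member line, which makes the kernel device name greppable again.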
Answering my second bullet point, I think one could extend the check in lines 905-907 by changing it from
! mount -l | grep -q "${hdd}" && {
_log "DEBUG: Skipping as no mount point"
continue; }
to
! mount -l | grep -q "${hdd}" &&
! { command -v zpool >/dev/null 2>&1 &&
    ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c upath | grep "${hdd}" | grep -q -e "ONLINE" -e "DEGRADED"; } && {
    _log "DEBUG: Skipping as no mount point"
    continue; }
Line 1: Test whether ${hdd} is mounted normally (same as before). If it is NOT, continue with line 2.
Lines 2-3: Test whether the zpool command is available and, if so, whether ${hdd} appears as a disk with status "ONLINE" or "DEGRADED" in the output of zpool status -c upath. If the disk is NOT part of such a pool (or zpool is not installed, which preserves the original behavior on systems without ZFS), fall through to lines 4-5. The ZPOOL_SCRIPTS_AS_ROOT=1 variable is needed if autoshutdown runs with root privileges (I don't know whether that is the case); if not, it can be omitted.
Lines 4-5: Log that the disk has no mount point and skip it (same as before).
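For readability, the same logic can also be written with an explicit if/else instead of chained &&s. This is only a sketch and assumes the surrounding per-disk loop of _check_hddio, where ${hdd}, _log, and continue are valid:

if ! mount -l | grep -q "${hdd}"; then
    # Not mounted normally; check whether it backs an active ZFS pool.
    if command -v zpool >/dev/null 2>&1 &&
       ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c upath | grep "${hdd}" | grep -q -e "ONLINE" -e "DEGRADED"; then
        :   # member of an ONLINE/DEGRADED pool; keep it in the IO check
    else
        _log "DEBUG: Skipping as no mount point"
        continue
    fi
fi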