@mgerdts
Last active June 21, 2019 21:42
illumos-9318

Overview

This fixes

9318 vol_volsize_to_reservation does not account for raidz skip blocks

$ git whatchanged -v master..
commit 969a6b100f3e67c3a623d898c3825506f1a40b49
Author: Mike Gerdts <mike.gerdts@joyent.com>
Date:   Tue Jun 11 04:05:22 2019 +0000

    9318 vol_volsize_to_reservation does not account for raidz skip blocks
    Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
    Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
    Reviewed by: Jerry Jelinek <jerry.jelinek@joyent.com>
    Reviewed by: Matt Ahrens <matt@delphix.com>
    Reviewed by: Kody Kantor <kody.kantor@joyent.com>

:100644 100644 ca11041cba 283f3ff044 M	usr/src/cmd/zfs/zfs_main.c
:100644 100644 3dc5454c48 af5e5c35d5 M	usr/src/lib/libzfs/common/libzfs.h
:100644 100644 1b2bf860e2 9db1c948e9 M	usr/src/lib/libzfs/common/libzfs_dataset.c
:100644 100644 a29f2c6bc8 bf1e75980b M	usr/src/pkg/manifests/system-test-zfstest.mf
:100644 100644 dbee1a5433 738fe89309 M	usr/src/test/zfs-tests/runfiles/delphix.run
:100644 100644 875b529e9e 926f65cb20 M	usr/src/test/zfs-tests/runfiles/omnios.run
:100644 100644 f8c0c40328 f86d6d9a7b M	usr/src/test/zfs-tests/runfiles/openindiana.run
:000000 100644 0000000000 052ebc0474 A	usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz.ksh
:000000 100644 0000000000 d8d8061fd1 A	usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_raidz.ksh

The bits

Code review

  • Review Board
  • Sanjay Nadkarni wrote to me privately saying he was having Review Board troubles but that the code looks good.

I believe that I have addressed all of the code review feedback, but have not received a +1 from Matt, Richard, or Sanjay after the latest tweaks.

In the first round of review, I made this change to libzfs_dataset.c per Richard's recommendation.

Matt and Richard later agreed that this change was improper, and I reverted it in libzfs_dataset.c lines 5222 and 5307-5311. This returns all code (but not comments) in libzfs_dataset.c to the state it was in when Sanjay reviewed it. Jerry and Kody have given a +1 since the revert. Although not explicit in Review Board, I think that Matt and Richard are happy with it.

Aside from renames of test files, the test changes that lack a +1 are quite minor.

Build logs

Testing

Early testing was done with debug bits, later with non-debug. Since there's no kernel code in this change, this probably doesn't matter much.

zfstest

On my test rig, some tests unrelated to my change caused a zfs-on-zfs deadlock even on the baseline. I disabled those tests and those in the immediate blast radius. This patch shows the tests that were omitted.

Baseline tests were run on the latest OmniOS bloody, pkg://omnios/entire@11-151031.0:20190603T195911Z. To make diffing the baseline and final results useful, I massaged the output (/var/tmp/test_results/$stamp/log) with:

awk '$1 == "Test:" { print $NF, $2 }'
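As a self-contained illustration of why this massage makes the diff useful, here is a sketch with fabricated log lines (the paths and results below are made up; the real logs live under /var/tmp/test_results/$stamp/log):

```shell
# Two fabricated zfstest logs; only the "Test:" lines matter here.
printf 'Test: /opt/zfs-tests/tests/functional/foo (run as root) [00:01] [PASS]\n' > /tmp/base.log
printf 'Test: /opt/zfs-tests/tests/functional/foo (run as root) [00:02] [FAIL]\n' > /tmp/final.log

# Reduce each test to "result path" so the varying run times drop out of the diff.
awk '$1 == "Test:" { print $NF, $2 }' /tmp/base.log  > /tmp/base.txt
awk '$1 == "Test:" { print $NF, $2 }' /tmp/final.log > /tmp/final.txt

# diff exits 1 when any result changed; that is the interesting output.
diff /tmp/base.txt /tmp/final.txt || true
```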

Before the final round of code review there were no new failures; after it, there were some. None of these new failures is likely to be related to the changes made. See new_test_failures.md.

I ran the new tests in several different configurations to ensure that they worked with 512n and 4Kn disks, both with the minimum number of test disks available and with a great many. All tests passed.

Other ad-hoc testing

To ensure that refreservation was sufficient across many different raidz{1,2,3} stripe widths and block sizes, I created a number of 100 MB zvols and filled them. The results are summarized in this spreadsheet (tab single fix2). Other tests showed that as the volume size increased to 1 GB, the overestimate (F2/C2) for 4k and 8k blocks approached 3%. The poor estimate when volblocksize is close to the sector size is believed to be a manifestation of illumos#11237.

I did the same as above with a variety of combinations of mirrors and raidz{1,2,3}. The results are in the multi fix2 tab. The purpose here was to ensure that refreservation was chosen based on the worst-case scenario of all of a zvol's blocks ending up on the most space-inefficient vdev.
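For reference, the allocation arithmetic behind these overestimates can be sketched from scratch. This is my re-derivation of the kernel's vdev_raidz_asize() logic in shell arithmetic (the function and variable names are mine); it shows why small blocks inflate far more than the 128 KB block the raidz deflation ratio assumes:

```shell
# Sketch of vdev_raidz_asize(): bytes actually allocated for a block of
# psize bytes on a raidz vdev with the given ashift, width, and parity.
raidz_asize() {
        psize=$1 ashift=$2 cols=$3 nparity=$4
        ndata=$(( cols - nparity ))
        # logical size in sectors
        asize=$(( ((psize - 1) >> ashift) + 1 ))
        # one parity sector per ndata data sectors, rounded up
        asize=$(( asize + nparity * ((asize + ndata - 1) / ndata) ))
        # skip-sector padding: round up to a multiple of (nparity + 1)
        asize=$(( ((asize + nparity) / (nparity + 1)) * (nparity + 1) ))
        echo $(( asize << ashift ))
}

# 5-wide raidz1 with ashift=12 (4Kn disks):
raidz_asize 8192 12 5 1       # 8 KB block  -> 16384 bytes (2.00x)
raidz_asize 131072 12 5 1     # 128 KB block -> 163840 bytes (1.25x)
```

An 8 KB block allocates at 2.00x while the deflation ratio, derived from the 128 KB case, assumes 1.25x for every block size; that gap is what the old reservation calculation missed.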

==== Nightly distributed build started: Fri Jun 21 08:38:33 CDT 2019 ====
==== Nightly distributed build completed: Fri Jun 21 10:06:15 CDT 2019 ====
==== Total build time ====
real 1:27:42
==== Build environment ====
/usr/bin/uname
SunOS bloody 5.11 omnios-master-01c610f1f5 i86pc i386 i86pc
/opt/onbld/bin/i386/dmake
dmake: illumos make
number of concurrent jobs = 6
cw version 4.0
primary: /opt/gcc-4.4.4/bin/gcc
gcc (GCC) 4.4.4
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
shadow: /opt/gcc-7/bin/gcc
gcc (OmniOS 151031/7.4.0-il-1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
/usr/java/bin/javac
openjdk full version "1.8.0_202-omnios-151031-20190219"
/usr/bin/openssl
OpenSSL 1.1.1c 28 May 2019
API_COMPAT=0x10000000L
/usr/bin/as
as: Sun Compiler Common 12 SunOS_i386 snv_121 08/03/2009
/usr/ccs/bin/ld
ld: Software Generation Utilities - Solaris Link Editors: 5.11-1.1763 (illumos)
Build project: default
Build taskid: 90
==== Nightly argument issues ====
==== Build version ====
illumos-9318-0-g15824bd8b0
==== Make clobber ERRORS ====
==== Make tools clobber ERRORS ====
==== Tools build errors ====
==== Build errors (non-DEBUG) ====
==== Build warnings (non-DEBUG) ====
==== Elapsed build time (non-DEBUG) ====
real 50:46.2
user 2:27:32.3
sys 33:24.6
==== Build noise differences (non-DEBUG) ====
==== package build errors (non-DEBUG) ====
==== Build errors (DEBUG) ====
==== Build warnings (DEBUG) ====
==== Elapsed build time (DEBUG) ====
real 23:48.4
user 1:13:13.1
sys 13:34.8
==== Build noise differences (DEBUG) ====
==== package build errors (DEBUG) ====
==== Validating manifests against proto area ====
==== Check versioning and ABI information ====
==== Check ELF runtime attributes ====
==== Diff ELF runtime attributes (since last build) ====
==== cstyle/hdrchk errors ====
==== Find core files ====
==== Check lists of files ====
==== Impact on file permissions ====

This test hung while destroying the testpool during cleanup.

Test: /opt/zfs-tests/tests/functional/mmp/mmp_on_off (run as root) [763:34] [KILLED]
14:08:53.32 ASSERTION: mmp thread won't write uberblocks with multihost=off
14:08:53.67 zfs_multihost_interval:         0x3e8                   =       0x64
14:08:53.69 SUCCESS: set_tunable64 zfs_multihost_interval 100
14:08:54.01 zfs_txg_timeout:0x5                     =       0x1388
14:08:54.02 SUCCESS: set_tunable64 zfs_txg_timeout 5000
14:08:57.06 SUCCESS: mmp_set_hostid 01234567
14:08:57.34 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0AB7C00001d0
14:08:57.42 SUCCESS: zfs create testpool/testfs
14:08:57.55 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
14:08:57.64 SUCCESS: zpool set multihost=off testpool
14:09:03.85 SUCCESS: sleep 5
14:09:05.05 SUCCESS: zpool set multihost=on testpool
14:09:10.06 SUCCESS: sleep 5
14:09:11.25 5c5
14:09:11.25 <   txg = 9
14:09:11.25 ---
14:09:11.25 >   txg = 8
14:09:11.25 7c7
14:09:11.25 <   timestamp = 1560971347 UTC = Wed Jun 19 14:09:07 2019
14:09:11.25 ---
14:09:11.25 >   timestamp = 1560971337 UTC = Wed Jun 19 14:08:57 2019
14:09:11.25 9c9
14:09:11.25 <   mmp_delay = 6844436261
14:09:11.25 ---
14:09:11.25 >   mmp_delay = 0
14:09:11.25 NOTE: Performing local cleanup via log_onexit (cleanup)
02:52:26.16 SUCCESS: rm -rf /var/tmp/testdir
02:52:26.51 zfs_txg_timeout:0x1388                  =       0x5
02:52:26.52 SUCCESS: set_tunable64 zfs_txg_timeout 5
02:52:26.84 zfs_multihost_interval:         0x64                    =       0x3e8
02:52:26.86 SUCCESS: set_tunable64 zfs_multihost_interval 1000
02:52:26.86 SUCCESS: rm -f /var/tmp/mmp-uber-prev.txt /var/tmp/mmp-uber-curr.txt
02:52:27.57 SUCCESS: mmp_clear_hostid
02:52:27.58 mmp thread won't write uberblocks with multihost=off passed

PID 4943 is the zpool export command. It is trying to unmount a file system before exporting the pool.

# dtrace -n 'profile-997 / pid == 4943 / { @s[stack()] = count(); }'
...
              unix`mutex_enter+0x10
              zfs`txg_wait_synced_impl+0xbc
              zfs`txg_wait_synced+0x17
              zfs`dmu_tx_wait+0x15c
              zfs`dmu_tx_assign+0x5b
              zfs`zil_commit_itx_assign+0x3b
              zfs`zil_commit_impl+0x35
              zfs`zil_commit+0x57
              zfs`zil_close+0x102
              zfs`zfsvfs_teardown+0x5c
              zfs`zfs_umount+0xfc
              genunix`fsop_unmount+0x1b
              genunix`dounmount+0x57
              genunix`umount2_engine+0x96
              genunix`umount2+0x163
              unix`_sys_sysenter_post_swapgs+0x149
                9

              genunix`savectx+0x23
              unix`resume+0x70
              unix`swtch+0x141
              genunix`cv_wait+0x70
              zfs`txg_wait_synced_impl+0xbc
              zfs`txg_wait_synced+0x17
              zfs`dmu_tx_wait+0x15c
              zfs`dmu_tx_assign+0x5b
              zfs`zil_commit_itx_assign+0x3b
              zfs`zil_commit_impl+0x35
              zfs`zil_commit+0x57
              zfs`zil_close+0x102
              zfs`zfsvfs_teardown+0x5c
              zfs`zfs_umount+0xfc
              genunix`fsop_unmount+0x1b
              genunix`dounmount+0x57
              genunix`umount2_engine+0x96
              genunix`umount2+0x163
              unix`_sys_sysenter_post_swapgs+0x149
               11

              apix`apic_send_ipi+0x73
              unix`send_dirint+0x18
              unix`poke_cpu+0x2a
              unix`cpu_wakeup+0xd8
              unix`setbackdq+0x201
              genunix`sleepq_wakeall_chan+0x89
              genunix`cv_broadcast+0x65
              zfs`txg_wait_synced_impl+0x89
              zfs`txg_wait_synced+0x17
              zfs`dmu_tx_wait+0x15c
              zfs`dmu_tx_assign+0x5b
              zfs`zil_commit_itx_assign+0x3b
              zfs`zil_commit_impl+0x35
              zfs`zil_commit+0x57
              zfs`zil_close+0x102
              zfs`zfsvfs_teardown+0x5c
              zfs`zfs_umount+0xfc
              genunix`fsop_unmount+0x1b
              genunix`dounmount+0x57
              genunix`umount2_engine+0x96
              418

It is stuck in dmu_tx_assign().

# dtrace -n 'dmu_tx_wait:entry,dmu_tx_assign:entry / pid == 4943 / {@s[probefunc,stack()] = count();} tick-5s { printa(@s); exit(0);}'
dtrace: description 'dmu_tx_wait:entry,dmu_tx_assign:entry ' matched 3 probes
CPU     ID                    FUNCTION:NAME
  2  86183                         :tick-5s
  dmu_tx_wait
              zfs`dmu_tx_assign+0x5b
              zfs`zil_commit_itx_assign+0x3b
              zfs`zil_commit_impl+0x35
              zfs`zil_commit+0x57
              zfs`zil_close+0x102
              zfs`zfsvfs_teardown+0x5c
              zfs`zfs_umount+0xfc
              genunix`fsop_unmount+0x1b
              genunix`dounmount+0x57
              genunix`umount2_engine+0x96
              genunix`umount2+0x163
              unix`_sys_sysenter_post_swapgs+0x149
            37982

That is:

int
dmu_tx_assign(dmu_tx_t *tx, uint64_t txg_how)
{
        int err;

        ASSERT(tx->tx_txg == 0);
        ASSERT0(txg_how & ~(TXG_WAIT | TXG_NOTHROTTLE));
        ASSERT(!dsl_pool_sync_context(tx->tx_pool));

        /* If we might wait, we must not hold the config lock. */
        IMPLY((txg_how & TXG_WAIT), !dsl_pool_config_held(tx->tx_pool));

        if ((txg_how & TXG_NOTHROTTLE))
                tx->tx_dirty_delayed = B_TRUE;

        while ((err = dmu_tx_try_assign(tx, txg_how)) != 0) {
                dmu_tx_unassign(tx);

                if (err != ERESTART || !(txg_how & TXG_WAIT))
                        return (err);

                dmu_tx_wait(tx);
        }

        txg_rele_to_quiesce(&tx->tx_txgh);

        return (0);
}

If it is calling dmu_tx_wait() repeatedly, dmu_tx_try_assign() must be failing each time around the loop. What error is it returning?

# dtrace -n 'dmu_tx_try_assign:return /pid == 4943/ { @r[arg1] = count(); }'
dtrace: description 'dmu_tx_try_assign:return ' matched 1 probe
^C

               91            22119

which is

#define	ERESTART 91	/* Restartable system call		*/

How do we get ERESTART? The first way is if the pool is suspended.

 872 static int
 873 dmu_tx_try_assign(dmu_tx_t *tx, uint64_t txg_how)
 874 {
 875         spa_t *spa = tx->tx_pool->dp_spa;
 876
 877         ASSERT0(tx->tx_txg);
 878
 879         if (tx->tx_err)
 880                 return (tx->tx_err);
 881
 882         if (spa_suspended(spa)) {
 883                 /*
 884                  * If the user has indicated a blocking failure mode
 885                  * then return ERESTART which will block in dmu_tx_wait().
 886                  * Otherwise, return EIO so that an error can get
 887                  * propagated back to the VOP calls.
 888                  *
 889                  * Note that we always honor the txg_how flag regardless
 890                  * of the failuremode setting.
 891                  */
 892                 if (spa_get_failmode(spa) == ZIO_FAILURE_MODE_CONTINUE &&
 893                     !(txg_how & TXG_WAIT))
 894                         return (SET_ERROR(EIO));
 895
 896                 return (SET_ERROR(ERESTART));
 897         }

The check on line 882 evaluates to true:

# dtrace -n 'spa_suspended:return /pid == $target/ { @s[arg1] = count(); }' -p `pgrep -x zpool`
dtrace: description 'spa_suspended:return ' matched 1 probe
^C

                1            32814

Sure enough, suspended due to MMP. Now I guess I need to learn what MMP is - that's a new feature to me. zpool(1M) and $SRC/uts/common/fs/zfs/mmp.c are helpful. Hmm, this test is testing MMP. Did it find a real bug?

> ::spa
ADDR                 STATE NAME
ffffff024d16f000    ACTIVE rpool
ffffff028663e000    ACTIVE testpool
> ffffff028663e000::print spa_t spa_suspended
spa_suspended = 0x2 (ZIO_SUSPEND_MMP)
>

How could that get set?

523                 /*
524                  * Suspend the pool if no MMP write has succeeded in over
525                  * mmp_interval * mmp_fail_intervals nanoseconds.
526                  */
527                 if (!suspended && mmp_fail_intervals && multihost &&
528                     (gethrtime() - mmp->mmp_last_write) > max_fail_ns) {
529                         cmn_err(CE_WARN, "MMP writes to pool '%s' have not "
530                             "succeeded in over %llus; suspending pool",
531                             spa_name(spa),
532                             NSEC2SEC(gethrtime() - mmp->mmp_last_write));
533                         zio_suspend(spa, NULL, ZIO_SUSPEND_MMP);
534                 }

We see the tell-tale message. Couldn't do something in 0s, imagine that.

Jun 19 13:27:59 bloody  incomplete write- retrying
Jun 19 14:09:07 bloody zfs: [ID 957129 kern.warning] WARNING: MMP writes to pool 'testpool' have not succeeded in over 0s; suspending pool
Jun 19 14:09:10 bloody fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-HC, TYPE: Error, VER: 1, SEVERITY: Major
Jun 19 14:09:10 bloody EVENT-TIME: Wed Jun 19 14:09:09 CDT 2019
Jun 19 14:09:10 bloody PLATFORM: VMware-Virtual-Platform, CSN: VMware-56-4d-1e-75-fe-fe-cb-71-1e-66-f9-e7-d3-f7-ec-5f, HOSTNAME: bloody
Jun 19 14:09:10 bloody SOURCE: zfs-diagnosis, REV: 1.0
Jun 19 14:09:10 bloody EVENT-ID: af68c98e-e7bd-6b07-f184-b0bca9b86b31
Jun 19 14:09:10 bloody DESC: The ZFS pool has experienced currently unrecoverable I/O
Jun 19 14:09:10 bloody      failures.  Refer to http://illumos.org/msg/ZFS-8000-HC for more information.
Jun 19 14:09:10 bloody AUTO-RESPONSE: No automated response will be taken.
Jun 19 14:09:10 bloody IMPACT: Read and write I/Os cannot be serviced.
Jun 19 14:09:10 bloody REC-ACTION: Make sure the affected devices are connected, then run
Jun 19 14:09:10 bloody      'zpool clear'.
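The "over 0s" in that warning is integer truncation rather than a zero-length window: NSEC2SEC() divides nanoseconds by 10^9, so any overdue interval shorter than a second prints as 0. A minimal shell-arithmetic sketch (the 150 ms figure is illustrative, not from the logs):

```shell
# NSEC2SEC(n) is integer division by NANOSEC (10^9), so sub-second
# values truncate to zero -- hence "have not succeeded in over 0s".
NANOSEC=1000000000
elapsed_ns=150000000    # e.g. an MMP write 150 ms overdue
echo "over $(( elapsed_ns / NANOSEC ))s"
```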

The test sets the zfs_multihost_interval to 100 ms.

log_must set_tunable64 zfs_multihost_interval $MMP_INTERVAL_MIN

Given that I am running this on VMware with disks provided over iSCSI from another VM, it is not terribly surprising that even a small amount of I/O took over 100 ms. While these tests are running, iTerm2, browsers, etc. commonly lock up for several seconds.

I cleared spa_suspended via mdb and the tests marched on.

There were some new test failures.

$ grep '>.*' test-results.full.diff | grep -v PASS
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool_destroy/zpool_destroy_001_pos
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/setup
> [SKIP] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_001_neg
> [SKIP] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_002_pos
> [SKIP] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_003_pos
> [KILLED] /opt/zfs-tests/tests/functional/mmp/mmp_on_off
> [FAIL] /opt/zfs-tests/tests/functional/redundancy/redundancy_003_pos

After the zfstest run was complete, I realized that this time around I ran with 4Kn disks whereas the baseline used 512n disks (each disk is 1 GiB). This contributes to ENOSPC problems. None of these failures is related to zvol reservations.

opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos

Test: /opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos (run as root) [01:21] [FAIL]
13:13:09.32 ASSERTION: With ZFS_ABORT set, all zpool commands can abort and generate a core file.
13:14:30.12 /var/tmp/testdir/file3: initialized 263192576 of 268435456 bytes: No space left on device
13:14:30.13 /opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos: line 91: 28060: Abort
13:14:30.14 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)

opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos

Test: /opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos (run as root) [01:24] [FAIL]
13:15:04.40 ASSERTION: Verify 'zpool clear' can clear errors of a storage pool.
13:15:11.48 SUCCESS: mkfile 268435456 /var/tmp/testdir/file.0
13:15:21.20 SUCCESS: mkfile 268435456 /var/tmp/testdir/file.1
13:16:28.24 /var/tmp/testdir/file.2: initialized 263847936 of 268435456 bytes: No space left on device
13:16:28.24 ERROR: mkfile 268435456 /var/tmp/testdir/file.2 exited 1
13:16:28.24 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)

opt/zfs-tests/tests/functional/cli_root/zpool_destroy/zpool_destroy_001_pos

I suspect that this is a result of earlier tests (not part of the suite) that ran zpool create with the whole-disk nodes, causing the disks to get an EFI label rather than an SMI label. Regardless, I've touched nothing near this.

Test: /opt/zfs-tests/tests/functional/cli_root/zpool_destroy/zpool_destroy_001_pos (run as root) [00:47] [FAIL]
13:18:33.25 ASSERTION: 'zpool destroy <pool>' can destroy a specified pool.
13:19:00.70 label error: EFI Labels do not support overlapping partitions
13:19:00.70 Partition 8 overlaps partition 1.
13:19:00.70 Warning: error writing EFI.
13:19:00.70 Label failed.
13:19:00.71 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)

opt/zfs-tests/tests/functional/cli_root/zpool_remove/setup

Test: /opt/zfs-tests/tests/functional/cli_root/zpool_remove/setup (run as root) [00:33] [FAIL]
13:23:44.08 label error: EFI Labels do not support overlapping partitions
13:23:44.08 Partition 8 overlaps partition 5.
13:23:44.08 Warning: error writing EFI.
13:23:44.08 Label failed.
13:23:44.08 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)

opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_001_neg

Setup failed, see above.

opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_002_pos

Setup failed, see above.

opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_003_pos

Setup failed, see above.

opt/zfs-tests/tests/functional/mmp/mmp_on_off

Manual intervention was required; see the write-up above.

opt/zfs-tests/tests/functional/redundancy/redundancy_003_pos

Test: /opt/zfs-tests/tests/functional/redundancy/redundancy_003_pos (run as root) [00:47] [FAIL]
03:26:17.57 ASSERTION: Verify mirrored pool can withstand N-1 devices are failing or missing.
03:26:17.58 SUCCESS: mkdir /var/tmp/basedir.10350
03:26:17.90 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev2 /var/tmp/basedir.10350/vdev3
03:26:18.01 SUCCESS: zpool create -m /var/tmp/testdir testpool mirror /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev2 /var/tmp/basedir.10350/vdev3
03:26:18.01 NOTE: Filling up the filesystem ...
03:26:20.25 write failed (-1), good_writes = 36, error: No space left on device[28]
03:26:20.28 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pre-record-file.10350 2>&1
03:26:21.08 SUCCESS: sync
03:26:23.09 SUCCESS: sleep 2
03:26:26.22 SUCCESS: sleep 2
03:26:26.27 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:26.77 SUCCESS: is_data_valid testpool
03:26:26.80 SUCCESS: zpool clear testpool
03:26:26.83 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:26.86 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:27.31 SUCCESS: clear_errors testpool
03:26:28.84 SUCCESS: sync
03:26:30.86 SUCCESS: sleep 2
03:26:33.20 SUCCESS: sleep 2
03:26:33.23 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:33.25 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:33.72 SUCCESS: is_data_valid testpool
03:26:33.75 SUCCESS: zpool clear testpool
03:26:33.77 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:33.79 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:34.25 SUCCESS: clear_errors testpool
03:26:35.85 SUCCESS: sync
03:26:37.87 SUCCESS: sleep 2
03:26:40.21 SUCCESS: sleep 2
03:26:40.23 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:40.24 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:40.73 SUCCESS: is_data_valid testpool
03:26:40.77 SUCCESS: zpool clear testpool
03:26:40.79 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:40.80 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:41.29 SUCCESS: clear_errors testpool
03:26:42.03 SUCCESS: sync
03:26:44.05 SUCCESS: sleep 2
03:26:46.39 SUCCESS: sleep 2
03:26:46.40 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:46.42 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:46.88 SUCCESS: is_data_valid testpool
03:26:46.96 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev0
03:26:47.12 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev0
03:26:49.26 SUCCESS: sleep 2
03:26:49.31 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:49.33 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:49.78 SUCCESS: recover_bad_missing_devs testpool 1
03:26:50.43 SUCCESS: sync
03:26:52.43 SUCCESS: sleep 2
03:26:54.63 SUCCESS: sleep 2
03:26:54.64 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:54.65 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:55.12 SUCCESS: is_data_valid testpool
03:26:55.21 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev1
03:26:55.35 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev1
03:26:57.50 SUCCESS: sleep 2
03:26:57.62 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev0
03:26:57.81 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev0
03:26:58.03 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:58.05 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:58.51 SUCCESS: recover_bad_missing_devs testpool 2
03:26:59.43 SUCCESS: sync
03:27:01.43 SUCCESS: sleep 2
03:27:03.65 SUCCESS: sleep 2
03:27:03.66 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:27:03.68 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:27:04.13 SUCCESS: is_data_valid testpool
03:27:04.23 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev2
03:27:04.55 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev2 /var/tmp/basedir.10350/vdev2
03:27:04.67 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev1
03:27:04.69 ERROR: zpool replace -f testpool /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev1 exited 1
03:27:04.69 invalid vdev specification the following errors must be manually repaired: /var/tmp/basedir.10350/vdev1 is part of active pool 'testpool'
03:27:04.69 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
--- /opt/zfs-tests/runfiles/omnios.run Sat Jun 15 11:25:24 2019
+++ omnios.run Sat Jun 15 11:31:23 2019
@@ -357,11 +357,11 @@
[/opt/zfs-tests/tests/functional/cli_root/zpool_status]
tests = ['zpool_status_001_pos', 'zpool_status_002_pos']
-[/opt/zfs-tests/tests/functional/cli_root/zpool_upgrade]
-tests = ['zpool_upgrade_001_pos', 'zpool_upgrade_002_pos',
- 'zpool_upgrade_003_pos', 'zpool_upgrade_004_pos', 'zpool_upgrade_005_neg',
- 'zpool_upgrade_006_neg', 'zpool_upgrade_007_pos', 'zpool_upgrade_008_pos',
- 'zpool_upgrade_009_neg']
+#[/opt/zfs-tests/tests/functional/cli_root/zpool_upgrade]
+#tests = ['zpool_upgrade_001_pos', 'zpool_upgrade_002_pos',
+# 'zpool_upgrade_003_pos', 'zpool_upgrade_004_pos', 'zpool_upgrade_005_neg',
+# 'zpool_upgrade_006_neg', 'zpool_upgrade_007_pos', 'zpool_upgrade_008_pos',
+# 'zpool_upgrade_009_neg']
[/opt/zfs-tests/tests/functional/cli_root/zpool_sync]
tests = ['zpool_sync_001_pos', 'zpool_sync_002_neg']
@@ -539,18 +539,18 @@
'refreserv_004_pos', 'refreserv_005_pos', 'refreserv_raidz',
'refreserv_multi_raidz']
-[/opt/zfs-tests/tests/functional/removal]
-pre =
-tests = ['removal_sanity', 'removal_all_vdev', 'removal_check_space',
- 'removal_condense_export',
- 'removal_multiple_indirection', 'removal_remap',
- 'removal_remap_deadlists',
- 'removal_with_add', 'removal_with_create_fs', 'removal_with_dedup',
- 'removal_with_export', 'removal_with_ganging', 'removal_with_remap',
- 'removal_with_remove', 'removal_with_scrub', 'removal_with_send',
- 'removal_with_send_recv', 'removal_with_snapshot', 'removal_with_write',
- 'removal_with_zdb', 'removal_resume_export',
- 'remove_mirror', 'remove_mirror_sanity', 'remove_raidz']
+#[/opt/zfs-tests/tests/functional/removal]
+#pre =
+#tests = ['removal_sanity', 'removal_all_vdev', 'removal_check_space',
+# 'removal_condense_export',
+# 'removal_multiple_indirection', 'removal_remap',
+# 'removal_remap_deadlists',
+# 'removal_with_add', 'removal_with_create_fs', 'removal_with_dedup',
+# 'removal_with_export', 'removal_with_ganging', 'removal_with_remap',
+# 'removal_with_remove', 'removal_with_scrub', 'removal_with_send',
+# 'removal_with_send_recv', 'removal_with_snapshot', 'removal_with_write',
+# 'removal_with_zdb', 'removal_resume_export',
+# 'remove_mirror', 'remove_mirror_sanity', 'remove_raidz']
[/opt/zfs-tests/tests/functional/rename_dirs]
tests = ['rename_dirs_001_pos']
From 9dc1ab058e708daa66c40a5f2ef440b639a88e41 Mon Sep 17 00:00:00 2001
From: Mike Gerdts <mike.gerdts@joyent.com>
Date: Tue, 11 Jun 2019 04:05:22 +0000
Subject: [PATCH] 9318 vol_volsize_to_reservation does not account for raidz
skip blocks Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com> Reviewed by: Jerry
Jelinek <jerry.jelinek@joyent.com> Reviewed by: Matt Ahrens
<matt@delphix.com> Reviewed by: Kody Kantor <kody.kantor@joyent.com>
---
usr/src/cmd/zfs/zfs_main.c | 7 +-
usr/src/lib/libzfs/common/libzfs.h | 3 +-
usr/src/lib/libzfs/common/libzfs_dataset.c | 189 ++++++++++++++++-
usr/src/pkg/manifests/system-test-zfstest.mf | 3 +
usr/src/test/zfs-tests/runfiles/delphix.run | 3 +-
usr/src/test/zfs-tests/runfiles/omnios.run | 3 +-
.../test/zfs-tests/runfiles/openindiana.run | 3 +-
.../refreserv/refreserv_multi_raidz.ksh | 197 ++++++++++++++++++
.../functional/refreserv/refreserv_raidz.ksh | 130 ++++++++++++
9 files changed, 523 insertions(+), 15 deletions(-)
create mode 100644 usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz.ksh
create mode 100644 usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_raidz.ksh
diff --git a/usr/src/cmd/zfs/zfs_main.c b/usr/src/cmd/zfs/zfs_main.c
index ca11041cba..283f3ff044 100644
--- a/usr/src/cmd/zfs/zfs_main.c
+++ b/usr/src/cmd/zfs/zfs_main.c
@@ -23,7 +23,7 @@
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2011, 2016 by Delphix. All rights reserved.
* Copyright 2012 Milan Jurik. All rights reserved.
- * Copyright (c) 2012, Joyent, Inc. All rights reserved.
+ * Copyright 2019 Joyent, Inc.
* Copyright (c) 2011-2012 Pawel Jakub Dawidek. All rights reserved.
* Copyright (c) 2013 Steven Hartland. All rights reserved.
* Copyright (c) 2014 Integros [integros.com]
@@ -876,10 +876,11 @@ zfs_do_create(int argc, char **argv)
zpool_close(zpool_handle);
goto error;
}
- zpool_close(zpool_handle);
- volsize = zvol_volsize_to_reservation(volsize, real_props);
+ volsize = zvol_volsize_to_reservation(zpool_handle, volsize,
+ real_props);
nvlist_free(real_props);
+ zpool_close(zpool_handle);
if (nvlist_lookup_string(props, zfs_prop_to_name(resv_prop),
&strval) != 0) {
diff --git a/usr/src/lib/libzfs/common/libzfs.h b/usr/src/lib/libzfs/common/libzfs.h
index 3dc5454c48..af5e5c35d5 100644
--- a/usr/src/lib/libzfs/common/libzfs.h
+++ b/usr/src/lib/libzfs/common/libzfs.h
@@ -671,7 +671,8 @@ extern int zfs_hold(zfs_handle_t *, const char *, const char *,
extern int zfs_hold_nvl(zfs_handle_t *, int, nvlist_t *);
extern int zfs_release(zfs_handle_t *, const char *, const char *, boolean_t);
extern int zfs_get_holds(zfs_handle_t *, nvlist_t **);
-extern uint64_t zvol_volsize_to_reservation(uint64_t, nvlist_t *);
+extern uint64_t zvol_volsize_to_reservation(zpool_handle_t *, uint64_t,
+ nvlist_t *);
typedef int (*zfs_userspace_cb_t)(void *arg, const char *domain,
uid_t rid, uint64_t space);
diff --git a/usr/src/lib/libzfs/common/libzfs_dataset.c b/usr/src/lib/libzfs/common/libzfs_dataset.c
index 1b2bf860e2..2ca09e51d4 100644
--- a/usr/src/lib/libzfs/common/libzfs_dataset.c
+++ b/usr/src/lib/libzfs/common/libzfs_dataset.c
@@ -21,7 +21,7 @@
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2018, Joyent, Inc. All rights reserved.
+ * Copyright 2019 Joyent, Inc.
* Copyright (c) 2011, 2016 by Delphix. All rights reserved.
* Copyright (c) 2012 DEY Storage Systems, Inc. All rights reserved.
* Copyright (c) 2011-2012 Pawel Jakub Dawidek. All rights reserved.
@@ -1514,6 +1514,7 @@ zfs_add_synthetic_resv(zfs_handle_t *zhp, nvlist_t *nvl)
uint64_t new_reservation;
zfs_prop_t resv_prop;
nvlist_t *props;
+ zpool_handle_t *zph = zpool_handle(zhp);
/*
* If this is an existing volume, and someone is setting the volsize,
@@ -1528,7 +1529,7 @@ zfs_add_synthetic_resv(zfs_handle_t *zhp, nvlist_t *nvl)
fnvlist_add_uint64(props, zfs_prop_to_name(ZFS_PROP_VOLBLOCKSIZE),
zfs_prop_get_int(zhp, ZFS_PROP_VOLBLOCKSIZE));
- if ((zvol_volsize_to_reservation(old_volsize, props) !=
+ if ((zvol_volsize_to_reservation(zph, old_volsize, props) !=
old_reservation) || nvlist_exists(nvl,
zfs_prop_to_name(resv_prop))) {
fnvlist_free(props);
@@ -1539,7 +1540,7 @@ zfs_add_synthetic_resv(zfs_handle_t *zhp, nvlist_t *nvl)
fnvlist_free(props);
return (-1);
}
- new_reservation = zvol_volsize_to_reservation(new_volsize, props);
+ new_reservation = zvol_volsize_to_reservation(zph, new_volsize, props);
fnvlist_free(props);
if (nvlist_add_uint64(nvl, zfs_prop_to_name(resv_prop),
@@ -1594,7 +1595,8 @@ zfs_fix_auto_resv(zfs_handle_t *zhp, nvlist_t *nvl)
volsize = zfs_prop_get_int(zhp, ZFS_PROP_VOLSIZE);
}
- resvsize = zvol_volsize_to_reservation(volsize, props);
+ resvsize = zvol_volsize_to_reservation(zpool_handle(zhp), volsize,
+ props);
fnvlist_free(props);
(void) nvlist_remove_all(nvl, zfs_prop_to_name(prop));
@@ -5111,12 +5113,176 @@ zfs_get_holds(zfs_handle_t *zhp, nvlist_t **nvl)
}
/*
- * Convert the zvol's volume size to an appropriate reservation.
+ * The theory of raidz space accounting
+ *
+ * The "referenced" property of RAIDZ vdevs is scaled such that a 128KB block
+ * will "reference" 128KB, even though it allocates more than that, to store the
+ * parity information (and perhaps skip sectors). This concept of the
+ * "referenced" (and other DMU space accounting) being lower than the allocated
+ * space by a constant factor is called "raidz deflation."
+ *
+ * As mentioned above, the constant factor for raidz deflation assumes a 128KB
+ * block size. However, zvols typically have a much smaller block size (default
+ * 8KB). These smaller blocks may require proportionally much more parity
+ * information (and perhaps skip sectors). In this case, the change to the
+ * "referenced" property may be much more than the logical block size.
+ *
+ * Suppose a raidz vdev has 5 disks with ashift=12. A 128k block may be written
+ * as follows.
+ *
+ * +-------+-------+-------+-------+-------+
+ * | disk1 | disk2 | disk3 | disk4 | disk5 |
+ * +-------+-------+-------+-------+-------+
+ * | P0 | D0 | D8 | D16 | D24 |
+ * | P1 | D1 | D9 | D17 | D25 |
+ * | P2 | D2 | D10 | D18 | D26 |
+ * | P3 | D3 | D11 | D19 | D27 |
+ * | P4 | D4 | D12 | D20 | D28 |
+ * | P5 | D5 | D13 | D21 | D29 |
+ * | P6 | D6 | D14 | D22 | D30 |
+ * | P7 | D7 | D15 | D23 | D31 |
+ * +-------+-------+-------+-------+-------+
+ *
+ * Above, notice that 160k was allocated: 8 x 4k parity sectors + 32 x 4k data
+ * sectors. The dataset's referenced will increase by 128k and the pool's
+ * allocated and free properties will be adjusted by 160k.
+ *
+ * A 4k block written to the same raidz vdev will require two 4k sectors. The
+ * blank cells represent unallocated space.
+ *
+ * +-------+-------+-------+-------+-------+
+ * | disk1 | disk2 | disk3 | disk4 | disk5 |
+ * +-------+-------+-------+-------+-------+
+ * | P0 | D0 | | | |
+ * +-------+-------+-------+-------+-------+
+ *
+ * Above, notice that the 4k block required one sector for parity and another
+ * for data. vdev_raidz_asize() will return 8k and as such the pool's allocated
+ * and free properties will be adjusted by 8k. The dataset will not be charged
+ * 8k. Rather, it will be charged a value that is scaled according to the
+ * overhead of the 128k block on the same vdev. This 8k allocation will be
+ * charged 8k * 128k / 160k. 128k is from SPA_OLD_MAXBLOCKSIZE and 160k is as
+ * calculated in the 128k block example above.
+ *
+ * Every raidz allocation is sized to be a multiple of nparity+1 sectors. That
+ * is, every raidz1 allocation will be a multiple of 2 sectors, raidz2
+ * allocations are a multiple of 3 sectors, and raidz3 allocations are a
+ * multiple of 4 sectors. When a block does not fill the required number of
+ * sectors, skip blocks (sectors) are used.
+ *
+ * An 8k block being written to a raidz vdev may be written as follows:
+ *
+ * +-------+-------+-------+-------+-------+
+ * | disk1 | disk2 | disk3 | disk4 | disk5 |
+ * +-------+-------+-------+-------+-------+
+ * | P0 | D0 | D1 | S0 | |
+ * +-------+-------+-------+-------+-------+
+ *
+ * In order to maintain the nparity+1 allocation size, a skip block (S0) was
+ * added. For this 8k block, the pool's allocated and free properties are
+ * adjusted by 16k and the dataset's referenced is increased by 16k * 128k /
+ * 160k. Again, 128k is from SPA_OLD_MAXBLOCKSIZE and 160k is as calculated in
+ * the 128k block example above.
+ *
+ * Compression may lead to a variety of block sizes being written for the same
+ * volume or file. There is no clear way to reserve just the amount of space
+ * that will be required, so the worst case (no compression) is assumed.
+ * Note that metadata blocks will typically be compressed, so the reservation
+ * size returned by zvol_volsize_to_reservation() will generally be slightly
+ * larger than the maximum that the volume can reference.
+ */
+
+/*
+ * Derived from function of same name in uts/common/fs/zfs/vdev_raidz.c.
+ * Returns the amount of space (in bytes) that will be allocated for the
+ * specified block size. Note that the "referenced" space accounted will be less
+ * than this, but not necessarily equal to "blksize", due to RAIDZ deflation.
+ */
+static uint64_t
+vdev_raidz_asize(uint64_t ndisks, uint64_t nparity, uint64_t ashift,
+ uint64_t blksize)
+{
+ uint64_t asize, ndata;
+
+ ASSERT3U(ndisks, >, nparity);
+ ndata = ndisks - nparity;
+ asize = ((blksize - 1) >> ashift) + 1;
+ asize += nparity * ((asize + ndata - 1) / ndata);
+ asize = roundup(asize, nparity + 1) << ashift;
+
+ return (asize);
+}
+
+/*
+ * Determine how much space will be allocated if it lands on the most space-
+ * inefficient top-level vdev. Returns the size in bytes required to store one
+ * copy of the volume data. See theory comment above.
+ */
+static uint64_t
+volsize_from_vdevs(zpool_handle_t *zhp, uint64_t nblocks, uint64_t blksize)
+{
+ nvlist_t *config, *tree, **vdevs;
+ uint_t nvdevs, v;
+ uint64_t ret = 0;
+
+ config = zpool_get_config(zhp, NULL);
+ if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_VDEV_TREE, &tree) != 0 ||
+ nvlist_lookup_nvlist_array(tree, ZPOOL_CONFIG_CHILDREN,
+ &vdevs, &nvdevs) != 0) {
+ return (nblocks * blksize);
+ }
+
+ for (v = 0; v < nvdevs; v++) {
+ char *type;
+ uint64_t nparity, ashift, asize, tsize;
+ nvlist_t **disks;
+ uint_t ndisks;
+ uint64_t volsize;
+
+ if (nvlist_lookup_string(vdevs[v], ZPOOL_CONFIG_TYPE,
+ &type) != 0 || strcmp(type, VDEV_TYPE_RAIDZ) != 0 ||
+ nvlist_lookup_uint64(vdevs[v], ZPOOL_CONFIG_NPARITY,
+ &nparity) != 0 ||
+ nvlist_lookup_uint64(vdevs[v], ZPOOL_CONFIG_ASHIFT,
+ &ashift) != 0 ||
+ nvlist_lookup_nvlist_array(vdevs[v], ZPOOL_CONFIG_CHILDREN,
+ &disks, &ndisks) != 0) {
+ continue;
+ }
+
+ /* allocation size for the "typical" 128k block */
+ tsize = vdev_raidz_asize(ndisks, nparity, ashift,
+ SPA_OLD_MAXBLOCKSIZE);
+ /* allocation size for the blksize block */
+ asize = vdev_raidz_asize(ndisks, nparity, ashift, blksize);
+
+ /*
+ * Scale this size down as a ratio of 128k / tsize. See theory
+ * statement above.
+ */
+ volsize = nblocks * asize * SPA_OLD_MAXBLOCKSIZE / tsize;
+ if (volsize > ret) {
+ ret = volsize;
+ }
+ }
+
+ if (ret == 0) {
+ ret = nblocks * blksize;
+ }
+
+ return (ret);
+}
+
+/*
+ * Convert the zvol's volume size to an appropriate reservation. See theory
+ * comment above.
+ *
* Note: If this routine is updated, it is necessary to update the ZFS test
- * suite's shell version in reservation.kshlib.
+ * suite's shell version in reservation.shlib.
*/
uint64_t
-zvol_volsize_to_reservation(uint64_t volsize, nvlist_t *props)
+zvol_volsize_to_reservation(zpool_handle_t *zph, uint64_t volsize,
+ nvlist_t *props)
{
uint64_t numdb;
uint64_t nblocks, volblocksize;
@@ -5132,7 +5298,14 @@ zvol_volsize_to_reservation(uint64_t volsize, nvlist_t *props)
zfs_prop_to_name(ZFS_PROP_VOLBLOCKSIZE),
&volblocksize) != 0)
volblocksize = ZVOL_DEFAULT_BLOCKSIZE;
- nblocks = volsize/volblocksize;
+
+ nblocks = volsize / volblocksize;
+ /*
+ * Metadata defaults to using 128k blocks, not volblocksize blocks. For
+ * this reason, only the data blocks are scaled based on vdev config.
+ */
+ volsize = volsize_from_vdevs(zph, nblocks, volblocksize);
+
/* start with metadnode L0-L6 */
numdb = 7;
/* calculate number of indirects */
diff --git a/usr/src/pkg/manifests/system-test-zfstest.mf b/usr/src/pkg/manifests/system-test-zfstest.mf
index a29f2c6bc8..bf1e75980b 100644
--- a/usr/src/pkg/manifests/system-test-zfstest.mf
+++ b/usr/src/pkg/manifests/system-test-zfstest.mf
@@ -2521,6 +2521,9 @@ file path=opt/zfs-tests/tests/functional/refreserv/refreserv_002_pos mode=0555
file path=opt/zfs-tests/tests/functional/refreserv/refreserv_003_pos mode=0555
file path=opt/zfs-tests/tests/functional/refreserv/refreserv_004_pos mode=0555
file path=opt/zfs-tests/tests/functional/refreserv/refreserv_005_pos mode=0555
+file path=opt/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz \
+ mode=0555
+file path=opt/zfs-tests/tests/functional/refreserv/refreserv_raidz mode=0555
file path=opt/zfs-tests/tests/functional/refreserv/setup mode=0555
file path=opt/zfs-tests/tests/functional/removal/cleanup mode=0555
file path=opt/zfs-tests/tests/functional/removal/removal.kshlib mode=0444
diff --git a/usr/src/test/zfs-tests/runfiles/delphix.run b/usr/src/test/zfs-tests/runfiles/delphix.run
index dbee1a5433..738fe89309 100644
--- a/usr/src/test/zfs-tests/runfiles/delphix.run
+++ b/usr/src/test/zfs-tests/runfiles/delphix.run
@@ -536,7 +536,8 @@ tests = ['refquota_001_pos', 'refquota_002_pos', 'refquota_003_pos',
[/opt/zfs-tests/tests/functional/refreserv]
tests = ['refreserv_001_pos', 'refreserv_002_pos', 'refreserv_003_pos',
- 'refreserv_004_pos', 'refreserv_005_pos']
+ 'refreserv_004_pos', 'refreserv_005_pos', 'refreserv_raidz',
+ 'refreserv_multi_raidz']
[/opt/zfs-tests/tests/functional/removal]
pre =
diff --git a/usr/src/test/zfs-tests/runfiles/omnios.run b/usr/src/test/zfs-tests/runfiles/omnios.run
index 875b529e9e..926f65cb20 100644
--- a/usr/src/test/zfs-tests/runfiles/omnios.run
+++ b/usr/src/test/zfs-tests/runfiles/omnios.run
@@ -536,7 +536,8 @@ tests = ['refquota_001_pos', 'refquota_002_pos', 'refquota_003_pos',
[/opt/zfs-tests/tests/functional/refreserv]
tests = ['refreserv_001_pos', 'refreserv_002_pos', 'refreserv_003_pos',
- 'refreserv_004_pos', 'refreserv_005_pos']
+ 'refreserv_004_pos', 'refreserv_005_pos', 'refreserv_raidz',
+ 'refreserv_multi_raidz']
[/opt/zfs-tests/tests/functional/removal]
pre =
diff --git a/usr/src/test/zfs-tests/runfiles/openindiana.run b/usr/src/test/zfs-tests/runfiles/openindiana.run
index f8c0c40328..f86d6d9a7b 100644
--- a/usr/src/test/zfs-tests/runfiles/openindiana.run
+++ b/usr/src/test/zfs-tests/runfiles/openindiana.run
@@ -536,7 +536,8 @@ tests = ['refquota_001_pos', 'refquota_002_pos', 'refquota_003_pos',
[/opt/zfs-tests/tests/functional/refreserv]
tests = ['refreserv_001_pos', 'refreserv_002_pos', 'refreserv_003_pos',
- 'refreserv_004_pos', 'refreserv_005_pos']
+ 'refreserv_004_pos', 'refreserv_005_pos', 'refreserv_raidz',
+ 'refreserv_multi_raidz']
[/opt/zfs-tests/tests/functional/removal]
pre =
diff --git a/usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz.ksh b/usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz.ksh
new file mode 100644
index 0000000000..ec5e0bfc27
--- /dev/null
+++ b/usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz.ksh
@@ -0,0 +1,197 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright 2019 Joyent, Inc.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/refreserv/refreserv.cfg
+
+#
+# DESCRIPTION:
+# raidz refreservation=auto picks worst raidz vdev
+#
+# STRATEGY:
+# 1. Create a pool with a single raidz vdev
+# 2. For each block size [512b, 1k, 128k] or [4k, 8k, 128k]
+# - create a volume
+# - remember its refreservation
+# - destroy the volume
+# 3. Destroy the pool
+# 4. Recreate the pool with one more disk in the vdev, then repeat steps
+# 2 and 3.
+#
+# NOTES:
+# 1. This test will use up to 14 disks but can cover the key concepts with
+# 5 disks.
+# 2. If the disks are a mixture of 4Kn and 512n/512e, failures are likely.
+#
+
+verify_runnable "global"
+
+typeset -a alldisks=($DISKS)
+
+# The larger the volsize, the better zvol_volsize_to_reservation() is at
+# guessing the right number - though it is horrible with tiny blocks. At 10M on
+# ashift=12, the estimate may be over 26% too high.
+volsize=100
+
+function cleanup
+{
+ default_cleanup_noexit
+ default_setup_noexit "${alldisks[0]}"
+}
+
+log_assert "raidz refreservation=auto picks worst raidz vdev"
+log_onexit cleanup
+
+poolexists "$TESTPOOL" && log_must zpool destroy "$TESTPOOL"
+
+# Testing tiny block sizes on ashift=12 pools causes so much size inflation
+# that small test disks may fill before creating small volumes. However,
+# testing 512b and 1K blocks on ashift=9 pools is an ok approximation for
+# testing the problems that arise from 4K and 8K blocks on ashift=12 pools.
+bps=$(prtvtoc /dev/rdsk/${alldisks[0]} |
+ awk '$NF == "bytes/sector" { print $2; exit 0 }')
+case "$bps" in
+512)
+ allshifts=(9 10 17)
+ ;;
+4096)
+ allshifts=(12 13 17)
+ ;;
+*)
+ log_fail "bytes/sector != (512|4096)"
+ ;;
+esac
+log_note "Testing in ashift=${allshifts[0]} mode"
+
+typeset -A sizes=
+
+#
+# Determine the refreservation for a $volsize MiB volume on each raidz type at
+# various block sizes.
+#
+for parity in 1 2 3; do
+ raid=raidz$parity
+ typeset -A sizes["$raid"]
+
+ # Ensure we hit scenarios with and without skip blocks
+ for ndisks in $((parity * 2)) $((parity * 2 + 1)); do
+ typeset -a disks=(${alldisks[0..$((ndisks - 1))]})
+
+ if (( ${#disks[@]} < ndisks )); then
+ log_note "Too few disks to test $raid-$ndisks"
+ continue
+ fi
+
+ typeset -A sizes["$raid"]["$ndisks"]
+
+ log_must zpool create "$TESTPOOL" "$raid" "${disks[@]}"
+
+ for bits in "${allshifts[@]}"; do
+ vbs=$((1 << bits))
+ log_note "Gathering refreservation for $raid-$ndisks" \
+ "volblocksize=$vbs"
+
+ vol=$TESTPOOL/$TESTVOL
+ log_must zfs create -V ${volsize}m \
+ -o volblocksize=$vbs "$vol"
+
+ refres=$(zfs get -Hpo value refreservation "$vol")
+ log_must test -n "$refres"
+ sizes["$raid"]["$ndisks"]["$vbs"]=$refres
+
+ log_must zfs destroy "$vol"
+ done
+
+ log_must zpool destroy "$TESTPOOL"
+ done
+done
+
+# A little extra info is always helpful when diagnosing problems. To
+# pretty-print what you find in the log, do this in ksh:
+# typeset -A sizes=(...)
+# print -v sizes
+log_note "sizes=$(print -C sizes)"
+
+#
+# Helper function for checking that refreservation is calculated properly in
+# multi-vdev pools. "Properly" is defined as assuming that all vdevs are as
+# space inefficient as the worst one.
+#
+function check_vdevs {
+ typeset raid=$1
+ typeset nd1=$2
+ typeset nd2=$3
+ typeset -a disks1 disks2
+ typeset vbs vol refres refres1 refres2 expect
+
+ disks1=(${alldisks[0..$((nd1 - 1))]})
+ disks2=(${alldisks[$nd1..$((nd1 + nd2 - 1))]})
+ if (( ${#disks2[@]} < nd2 )); then
+		log_note "Too few disks to test $raid-$nd1 + $raid-$nd2"
+ return
+ fi
+
+ log_must zpool create -f "$TESTPOOL" \
+ "$raid" "${disks1[@]}" "$raid" "${disks2[@]}"
+
+ for bits in "${allshifts[@]}"; do
+ vbs=$((1 << bits))
+ log_note "Verifying $raid-$nd1 $raid-$nd2 volblocksize=$vbs"
+
+ vol=$TESTPOOL/$TESTVOL
+ log_must zfs create -V ${volsize}m -o volblocksize=$vbs "$vol"
+ refres=$(zfs get -Hpo value refreservation "$vol")
+ log_must test -n "$refres"
+
+ refres1=${sizes["$raid"]["$nd1"]["$vbs"]}
+ refres2=${sizes["$raid"]["$nd2"]["$vbs"]}
+
+ if (( refres1 > refres2 )); then
+ log_note "Expecting refres ($refres) to match refres" \
+ "from $raid-$nd1 ($refres1)"
+ log_must test "$refres" -eq "$refres1"
+ else
+ log_note "Expecting refres ($refres) to match refres" \
+			    "from $raid-$nd2 ($refres2)"
+ log_must test "$refres" -eq "$refres2"
+ fi
+
+ log_must zfs destroy "$vol"
+ done
+
+ log_must zpool destroy "$TESTPOOL"
+}
+
+#
+# Verify that multi-vdev pools use the least optimistic size for all the
+# permutations within a particular raidz variant.
+#
+for raid in "${!sizes[@]}"; do
+ # ksh likes to create a [0] item for us. Thanks, ksh!
+ [[ $raid == "0" ]] && continue
+
+ for nd1 in "${!sizes["$raid"][@]}"; do
+ [[ $nd1 == "0" ]] && continue
+
+ for nd2 in "${!sizes["$raid"][@]}"; do
+ [[ $nd2 == "0" ]] && continue
+
+ check_vdevs "$raid" "$nd1" "$nd2"
+ done
+ done
+done
+
+log_pass "raidz refreservation=auto picks worst raidz vdev"
diff --git a/usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_raidz.ksh b/usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_raidz.ksh
new file mode 100644
index 0000000000..8628d65e7e
--- /dev/null
+++ b/usr/src/test/zfs-tests/tests/functional/refreserv/refreserv_raidz.ksh
@@ -0,0 +1,130 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright 2019 Joyent, Inc.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/refreserv/refreserv.cfg
+
+#
+# DESCRIPTION:
+# raidz refreservation=auto accounts for extra parity and skip blocks
+#
+# STRATEGY:
+# 1. Create a pool with a single raidz vdev
+# 2. For each block size [512b, 1k, 128k] or [4k, 8k, 128k]
+# - create a volume
+# - fully overwrite it
+# - verify that referenced is less than or equal to reservation
+# - destroy the volume
+# 3. Destroy the pool
+# 4. Recreate the pool with one more disk in the vdev, then repeat steps
+# 2 and 3.
+# 5. Repeat all steps above for raidz2 and raidz3.
+#
+# NOTES:
+# 1. This test will use up to 14 disks but can cover the key concepts with
+# 5 disks.
+# 2. If the disks are a mixture of 4Kn and 512n/512e, failures are likely.
+#
+
+verify_runnable "global"
+
+typeset -a alldisks=($DISKS)
+
+# The larger the volsize, the better zvol_volsize_to_reservation() is at
+# guessing the right number. At 10M on ashift=12, the estimate may be over 26%
+# too high.
+volsize=100
+
+function cleanup
+{
+ default_cleanup_noexit
+ default_setup_noexit "${alldisks[0]}"
+}
+
+log_assert "raidz refreservation=auto accounts for extra parity and skip blocks"
+log_onexit cleanup
+
+poolexists "$TESTPOOL" && log_must zpool destroy "$TESTPOOL"
+
+# Testing tiny block sizes on ashift=12 pools causes so much size inflation
+# that small test disks may fill before creating small volumes. However,
+# testing 512b and 1K blocks on ashift=9 pools is an ok approximation for
+# testing the problems that arise from 4K and 8K blocks on ashift=12 pools.
+bps=$(prtvtoc /dev/rdsk/${alldisks[0]} |
+ awk '$NF == "bytes/sector" { print $2; exit 0 }')
+log_must test "$bps" -eq 512 -o "$bps" -eq 4096
+case "$bps" in
+512)
+ allshifts=(9 10 17)
+ maxpct=151
+ ;;
+4096)
+ allshifts=(12 13 17)
+ maxpct=110
+ ;;
+*)
+ log_fail "bytes/sector != (512|4096)"
+ ;;
+esac
+log_note "Testing in ashift=${allshifts[0]} mode"
+
+# This loop handles all iterations of steps 1 through 4 described in strategy
+# comment above.
+for parity in 1 2 3; do
+ raid=raidz$parity
+
+ # Ensure we hit scenarios with and without skip blocks
+ for ndisks in $((parity * 2)) $((parity * 2 + 1)); do
+ typeset -a disks=(${alldisks[0..$((ndisks - 1))]})
+
+ if (( ${#disks[@]} < ndisks )); then
+ log_note "Too few disks to test $raid-$ndisks"
+ continue
+ fi
+
+ log_must zpool create "$TESTPOOL" "$raid" "${disks[@]}"
+
+ for bits in "${allshifts[@]}"; do
+ vbs=$((1 << bits))
+ log_note "Testing $raid-$ndisks volblocksize=$vbs"
+
+ vol=$TESTPOOL/$TESTVOL
+ log_must zfs create -V ${volsize}m \
+ -o volblocksize=$vbs "$vol"
+ log_must dd if=/dev/zero of=/dev/zvol/dsk/$vol \
+ bs=1024k count=$volsize
+ sync
+
+ ref=$(zfs get -Hpo value referenced "$vol")
+ refres=$(zfs get -Hpo value refreservation "$vol")
+ log_must test -n "$ref"
+ log_must test -n "$refres"
+
+ typeset -F2 deltapct=$((refres * 100.0 / ref))
+ log_note "$raid-$ndisks refreservation $refres" \
+			    "is $deltapct% of referenced $ref"
+
+ log_must test "$ref" -le "$refres"
+ log_must test "$deltapct" -le $maxpct
+
+ log_must zfs destroy "$vol"
+ done
+
+ log_must zpool destroy "$TESTPOOL"
+ done
+done
+
+log_pass "raidz refreservation=auto accounts for extra parity and skip blocks"
--
2.22.0
Test: /opt/zfs-tests/tests/functional/refreserv/setup (run as root) [00:00] [PASS]
11:08:54.36 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D4758006Bd0
11:08:54.44 SUCCESS: zfs create testpool/testfs
11:08:54.71 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz (run as root) [00:41] [PASS]
11:08:54.74 ASSERTION: raidz refreservation=auto picks worst raidz vdev
11:08:55.00 SUCCESS: zpool destroy testpool
11:08:55.04 NOTE: Testing in ashift=12 mode
11:08:55.67 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0
11:08:55.67 NOTE: Gathering refreservation for raidz1-2 volblocksize=4096
11:08:55.78 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:08:55.78 SUCCESS: test -n 113508352
11:08:55.83 SUCCESS: zfs destroy testpool/testvol
11:08:55.83 NOTE: Gathering refreservation for raidz1-2 volblocksize=8192
11:08:55.94 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:08:55.95 SUCCESS: test -n 110362624
11:08:56.03 SUCCESS: zfs destroy testpool/testvol
11:08:56.03 NOTE: Gathering refreservation for raidz1-2 volblocksize=131072
11:08:56.12 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:08:56.13 SUCCESS: test -n 106954752
11:08:56.22 SUCCESS: zfs destroy testpool/testvol
11:08:56.42 SUCCESS: zpool destroy testpool
11:08:57.00 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0
11:08:57.00 NOTE: Gathering refreservation for raidz1-3 volblocksize=4096
11:08:57.05 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:08:57.06 SUCCESS: test -n 148460885
11:08:57.09 SUCCESS: zfs destroy testpool/testvol
11:08:57.09 NOTE: Gathering refreservation for raidz1-3 volblocksize=8192
11:08:57.15 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:08:57.16 SUCCESS: test -n 145315157
11:08:57.20 SUCCESS: zfs destroy testpool/testvol
11:08:57.20 NOTE: Gathering refreservation for raidz1-3 volblocksize=131072
11:08:57.26 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:08:57.27 SUCCESS: test -n 106954752
11:08:57.31 SUCCESS: zfs destroy testpool/testvol
11:08:57.44 SUCCESS: zpool destroy testpool
11:08:58.27 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0
11:08:58.27 NOTE: Gathering refreservation for raidz2-4 volblocksize=4096
11:08:58.36 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:08:58.37 SUCCESS: test -n 161170897
11:08:58.42 SUCCESS: zfs destroy testpool/testvol
11:08:58.43 NOTE: Gathering refreservation for raidz2-4 volblocksize=8192
11:08:58.51 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:08:58.52 SUCCESS: test -n 158025169
11:08:58.58 SUCCESS: zfs destroy testpool/testvol
11:08:58.58 NOTE: Gathering refreservation for raidz2-4 volblocksize=131072
11:08:58.66 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:08:58.67 SUCCESS: test -n 106954752
11:08:58.73 SUCCESS: zfs destroy testpool/testvol
11:08:58.92 SUCCESS: zpool destroy testpool
11:08:59.89 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0
11:08:59.89 NOTE: Gathering refreservation for raidz2-5 volblocksize=4096
11:08:59.96 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:08:59.97 SUCCESS: test -n 195064263
11:09:00.01 SUCCESS: zfs destroy testpool/testvol
11:09:00.01 NOTE: Gathering refreservation for raidz2-5 volblocksize=8192
11:09:00.08 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:00.09 SUCCESS: test -n 191918535
11:09:00.13 SUCCESS: zfs destroy testpool/testvol
11:09:00.13 NOTE: Gathering refreservation for raidz2-5 volblocksize=131072
11:09:00.21 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:00.22 SUCCESS: test -n 106954752
11:09:00.27 SUCCESS: zfs destroy testpool/testvol
11:09:00.44 SUCCESS: zpool destroy testpool
11:09:02.35 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0
11:09:02.35 NOTE: Gathering refreservation for raidz3-6 volblocksize=4096
11:09:02.43 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:02.44 SUCCESS: test -n 206029763
11:09:02.66 SUCCESS: zfs destroy testpool/testvol
11:09:02.66 NOTE: Gathering refreservation for raidz3-6 volblocksize=8192
11:09:03.24 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:03.25 SUCCESS: test -n 202884035
11:09:03.37 SUCCESS: zfs destroy testpool/testvol
11:09:03.37 NOTE: Gathering refreservation for raidz3-6 volblocksize=131072
11:09:03.73 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:03.74 SUCCESS: test -n 106954752
11:09:03.86 SUCCESS: zfs destroy testpool/testvol
11:09:04.20 SUCCESS: zpool destroy testpool
11:09:05.55 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0
11:09:05.55 NOTE: Gathering refreservation for raidz3-7 volblocksize=4096
11:09:05.62 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:05.63 SUCCESS: test -n 248325266
11:09:05.70 SUCCESS: zfs destroy testpool/testvol
11:09:05.70 NOTE: Gathering refreservation for raidz3-7 volblocksize=8192
11:09:05.81 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:05.82 SUCCESS: test -n 245179538
11:09:05.88 SUCCESS: zfs destroy testpool/testvol
11:09:05.88 NOTE: Gathering refreservation for raidz3-7 volblocksize=131072
11:09:06.00 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:06.01 SUCCESS: test -n 106954752
11:09:06.29 SUCCESS: zfs destroy testpool/testvol
11:09:06.71 SUCCESS: zpool destroy testpool
11:09:06.71 NOTE: sizes=([0]='' [raidz1]=([0]='' [2]=([0]='' [131072]=106954752 [4096]=113508352 [8192]=110362624) [3]=([0]='' [131072]=106954752 [4096]=148460885 [8192]=145315157) ) [raidz2]=([0]='' [4]=([0]='' [131072]=106954752 [4096]=161170897 [8192]=158025169) [5]=([0]='' [131072]=106954752 [4096]=195064263 [8192]=191918535) ) [raidz3]=([0]='' [6]=([0]='' [131072]=106954752 [4096]=206029763 [8192]=202884035) [7]=([0]='' [131072]=106954752 [4096]=248325266 [8192]=245179538) ) )
11:09:07.36 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 raidz1 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0
11:09:07.36 NOTE: Verifying raidz1-2 raidz1-2 volblocksize=4096
11:09:07.54 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:07.56 SUCCESS: test -n 113508352
11:09:07.56 NOTE: Expecting refres (113508352) to match refres from raidz1-2 (113508352)
11:09:07.56 SUCCESS: test 113508352 -eq 113508352
11:09:07.62 SUCCESS: zfs destroy testpool/testvol
11:09:07.63 NOTE: Verifying raidz1-2 raidz1-2 volblocksize=8192
11:09:07.99 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:08.01 SUCCESS: test -n 110362624
11:09:08.01 NOTE: Expecting refres (110362624) to match refres from raidz1-2 (110362624)
11:09:08.01 SUCCESS: test 110362624 -eq 110362624
11:09:08.09 SUCCESS: zfs destroy testpool/testvol
11:09:08.09 NOTE: Verifying raidz1-2 raidz1-2 volblocksize=131072
11:09:08.24 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:08.25 SUCCESS: test -n 106954752
11:09:08.25 NOTE: Expecting refres (106954752) to match refres from raidz1-2 (106954752)
11:09:08.25 SUCCESS: test 106954752 -eq 106954752
11:09:08.31 SUCCESS: zfs destroy testpool/testvol
11:09:08.56 SUCCESS: zpool destroy testpool
11:09:09.29 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 raidz1 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0
11:09:09.29 NOTE: Verifying raidz1-2 raidz1-3 volblocksize=4096
11:09:09.35 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:09.36 SUCCESS: test -n 148460885
11:09:09.36 NOTE: Expecting refres (148460885) to match refres from raidz1-2 (148460885)
11:09:09.36 SUCCESS: test 148460885 -eq 148460885
11:09:09.41 SUCCESS: zfs destroy testpool/testvol
11:09:09.41 NOTE: Verifying raidz1-2 raidz1-3 volblocksize=8192
11:09:09.48 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:09.49 SUCCESS: test -n 145315157
11:09:09.49 NOTE: Expecting refres (145315157) to match refres from raidz1-2 (145315157)
11:09:09.49 SUCCESS: test 145315157 -eq 145315157
11:09:09.53 SUCCESS: zfs destroy testpool/testvol
11:09:09.53 NOTE: Verifying raidz1-2 raidz1-3 volblocksize=131072
11:09:09.60 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:09.61 SUCCESS: test -n 106954752
11:09:09.61 NOTE: Expecting refres (106954752) to match refres from raidz1-2 (106954752)
11:09:09.61 SUCCESS: test 106954752 -eq 106954752
11:09:09.65 SUCCESS: zfs destroy testpool/testvol
11:09:09.82 SUCCESS: zpool destroy testpool
11:09:10.66 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 raidz1 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0
11:09:10.66 NOTE: Verifying raidz1-3 raidz1-2 volblocksize=4096
11:09:10.73 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:10.74 SUCCESS: test -n 148460885
11:09:10.74 NOTE: Expecting refres (148460885) to match refres from raidz1-3 (148460885)
11:09:10.74 SUCCESS: test 148460885 -eq 148460885
11:09:10.78 SUCCESS: zfs destroy testpool/testvol
11:09:10.78 NOTE: Verifying raidz1-3 raidz1-2 volblocksize=8192
11:09:10.86 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:10.87 SUCCESS: test -n 145315157
11:09:10.87 NOTE: Expecting refres (145315157) to match refres from raidz1-3 (145315157)
11:09:10.87 SUCCESS: test 145315157 -eq 145315157
11:09:10.91 SUCCESS: zfs destroy testpool/testvol
11:09:10.92 NOTE: Verifying raidz1-3 raidz1-2 volblocksize=131072
11:09:10.98 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:10.99 SUCCESS: test -n 106954752
11:09:10.99 NOTE: Expecting refres (106954752) to match refres from raidz1-3 (106954752)
11:09:10.99 SUCCESS: test 106954752 -eq 106954752
11:09:11.03 SUCCESS: zfs destroy testpool/testvol
11:09:11.19 SUCCESS: zpool destroy testpool
11:09:12.56 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 raidz1 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0
11:09:12.56 NOTE: Verifying raidz1-3 raidz1-3 volblocksize=4096
11:09:12.61 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:12.62 SUCCESS: test -n 148460885
11:09:12.62 NOTE: Expecting refres (148460885) to match refres from raidz1-3 (148460885)
11:09:12.63 SUCCESS: test 148460885 -eq 148460885
11:09:12.66 SUCCESS: zfs destroy testpool/testvol
11:09:12.66 NOTE: Verifying raidz1-3 raidz1-3 volblocksize=8192
11:09:12.73 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:12.74 SUCCESS: test -n 145315157
11:09:12.74 NOTE: Expecting refres (145315157) to match refres from raidz1-3 (145315157)
11:09:12.74 SUCCESS: test 145315157 -eq 145315157
11:09:12.79 SUCCESS: zfs destroy testpool/testvol
11:09:12.79 NOTE: Verifying raidz1-3 raidz1-3 volblocksize=131072
11:09:12.88 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:12.89 SUCCESS: test -n 106954752
11:09:12.89 NOTE: Expecting refres (106954752) to match refres from raidz1-3 (106954752)
11:09:12.89 SUCCESS: test 106954752 -eq 106954752
11:09:12.92 SUCCESS: zfs destroy testpool/testvol
11:09:13.08 SUCCESS: zpool destroy testpool
11:09:14.36 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 raidz2 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0 c0t600144F013057A3100005D0D47580072d0
11:09:14.36 NOTE: Verifying raidz2-4 raidz2-4 volblocksize=4096
11:09:14.43 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:14.44 SUCCESS: test -n 161170897
11:09:14.44 NOTE: Expecting refres (161170897) to match refres from raidz2-4 (161170897)
11:09:14.44 SUCCESS: test 161170897 -eq 161170897
11:09:14.57 SUCCESS: zfs destroy testpool/testvol
11:09:14.57 NOTE: Verifying raidz2-4 raidz2-4 volblocksize=8192
11:09:14.79 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:14.80 SUCCESS: test -n 158025169
11:09:14.80 NOTE: Expecting refres (158025169) to match refres from raidz2-4 (158025169)
11:09:14.80 SUCCESS: test 158025169 -eq 158025169
11:09:14.86 SUCCESS: zfs destroy testpool/testvol
11:09:14.86 NOTE: Verifying raidz2-4 raidz2-4 volblocksize=131072
11:09:15.29 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:15.36 SUCCESS: test -n 106954752
11:09:15.36 NOTE: Expecting refres (106954752) to match refres from raidz2-4 (106954752)
11:09:15.37 SUCCESS: test 106954752 -eq 106954752
11:09:15.60 SUCCESS: zfs destroy testpool/testvol
11:09:15.97 SUCCESS: zpool destroy testpool
11:09:17.43 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 raidz2 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0
11:09:17.43 NOTE: Verifying raidz2-4 raidz2-5 volblocksize=4096
11:09:17.56 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:17.57 SUCCESS: test -n 195064263
11:09:17.57 NOTE: Expecting refres (195064263) to match refres from raidz2-4 (195064263)
11:09:17.57 SUCCESS: test 195064263 -eq 195064263
11:09:17.61 SUCCESS: zfs destroy testpool/testvol
11:09:17.62 NOTE: Verifying raidz2-4 raidz2-5 volblocksize=8192
11:09:17.69 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:17.70 SUCCESS: test -n 191918535
11:09:17.70 NOTE: Expecting refres (191918535) to match refres from raidz2-4 (191918535)
11:09:17.71 SUCCESS: test 191918535 -eq 191918535
11:09:17.75 SUCCESS: zfs destroy testpool/testvol
11:09:17.75 NOTE: Verifying raidz2-4 raidz2-5 volblocksize=131072
11:09:17.83 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:17.84 SUCCESS: test -n 106954752
11:09:17.84 NOTE: Expecting refres (106954752) to match refres from raidz2-4 (106954752)
11:09:17.85 SUCCESS: test 106954752 -eq 106954752
11:09:17.90 SUCCESS: zfs destroy testpool/testvol
11:09:18.12 SUCCESS: zpool destroy testpool
11:09:19.43 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 raidz2 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0
11:09:19.43 NOTE: Verifying raidz2-5 raidz2-4 volblocksize=4096
11:09:19.66 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:19.67 SUCCESS: test -n 195064263
11:09:19.67 NOTE: Expecting refres (195064263) to match refres from raidz2-5 (195064263)
11:09:19.67 SUCCESS: test 195064263 -eq 195064263
11:09:19.72 SUCCESS: zfs destroy testpool/testvol
11:09:19.72 NOTE: Verifying raidz2-5 raidz2-4 volblocksize=8192
11:09:19.81 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:19.82 SUCCESS: test -n 191918535
11:09:19.82 NOTE: Expecting refres (191918535) to match refres from raidz2-5 (191918535)
11:09:19.82 SUCCESS: test 191918535 -eq 191918535
11:09:19.87 SUCCESS: zfs destroy testpool/testvol
11:09:19.87 NOTE: Verifying raidz2-5 raidz2-4 volblocksize=131072
11:09:19.95 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:19.96 SUCCESS: test -n 106954752
11:09:19.96 NOTE: Expecting refres (106954752) to match refres from raidz2-5 (106954752)
11:09:19.96 SUCCESS: test 106954752 -eq 106954752
11:09:20.01 SUCCESS: zfs destroy testpool/testvol
11:09:20.23 SUCCESS: zpool destroy testpool
11:09:21.80 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 raidz2 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0 c0t600144F013057A3100005D0D47580074d0
11:09:21.80 NOTE: Verifying raidz2-5 raidz2-5 volblocksize=4096
11:09:21.87 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:21.88 SUCCESS: test -n 195064263
11:09:21.88 NOTE: Expecting refres (195064263) to match refres from raidz2-5 (195064263)
11:09:21.89 SUCCESS: test 195064263 -eq 195064263
11:09:21.93 SUCCESS: zfs destroy testpool/testvol
11:09:21.93 NOTE: Verifying raidz2-5 raidz2-5 volblocksize=8192
11:09:22.00 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:22.01 SUCCESS: test -n 191918535
11:09:22.02 NOTE: Expecting refres (191918535) to match refres from raidz2-5 (191918535)
11:09:22.02 SUCCESS: test 191918535 -eq 191918535
11:09:22.06 SUCCESS: zfs destroy testpool/testvol
11:09:22.06 NOTE: Verifying raidz2-5 raidz2-5 volblocksize=131072
11:09:22.13 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:22.14 SUCCESS: test -n 106954752
11:09:22.14 NOTE: Expecting refres (106954752) to match refres from raidz2-5 (106954752)
11:09:22.14 SUCCESS: test 106954752 -eq 106954752
11:09:22.19 SUCCESS: zfs destroy testpool/testvol
11:09:22.47 SUCCESS: zpool destroy testpool
11:09:24.34 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 raidz3 c0t600144F013057A3100005D0D47580071d0 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0 c0t600144F013057A3100005D0D47580074d0 c0t600144F013057A3100005D0D47580075d0 c0t600144F013057A3100005D0D47580076d0
11:09:24.35 NOTE: Verifying raidz3-6 raidz3-6 volblocksize=4096
11:09:24.62 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:24.63 SUCCESS: test -n 206029763
11:09:24.64 NOTE: Expecting refres (206029763) to match refres from raidz3-6 (206029763)
11:09:24.64 SUCCESS: test 206029763 -eq 206029763
11:09:24.76 SUCCESS: zfs destroy testpool/testvol
11:09:24.76 NOTE: Verifying raidz3-6 raidz3-6 volblocksize=8192
11:09:25.01 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:25.10 SUCCESS: test -n 202884035
11:09:25.10 NOTE: Expecting refres (202884035) to match refres from raidz3-6 (202884035)
11:09:25.10 SUCCESS: test 202884035 -eq 202884035
11:09:25.24 SUCCESS: zfs destroy testpool/testvol
11:09:25.24 NOTE: Verifying raidz3-6 raidz3-6 volblocksize=131072
11:09:25.63 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:25.74 SUCCESS: test -n 106954752
11:09:25.74 NOTE: Expecting refres (106954752) to match refres from raidz3-6 (106954752)
11:09:25.75 SUCCESS: test 106954752 -eq 106954752
11:09:25.88 SUCCESS: zfs destroy testpool/testvol
11:09:26.29 SUCCESS: zpool destroy testpool
11:09:28.25 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 raidz3 c0t600144F013057A3100005D0D47580071d0 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0 c0t600144F013057A3100005D0D47580074d0 c0t600144F013057A3100005D0D47580075d0 c0t600144F013057A3100005D0D47580076d0 c0t600144F013057A3100005D0D47580077d0
11:09:28.25 NOTE: Verifying raidz3-6 raidz3-7 volblocksize=4096
11:09:28.35 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:28.36 SUCCESS: test -n 248325266
11:09:28.36 NOTE: Expecting refres (248325266) to match refres from raidz3-6 (248325266)
11:09:28.36 SUCCESS: test 248325266 -eq 248325266
11:09:28.49 SUCCESS: zfs destroy testpool/testvol
11:09:28.49 NOTE: Verifying raidz3-6 raidz3-7 volblocksize=8192
11:09:28.61 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:28.62 SUCCESS: test -n 245179538
11:09:28.62 NOTE: Expecting refres (245179538) to match refres from raidz3-6 (245179538)
11:09:28.62 SUCCESS: test 245179538 -eq 245179538
11:09:28.68 SUCCESS: zfs destroy testpool/testvol
11:09:28.68 NOTE: Verifying raidz3-6 raidz3-7 volblocksize=131072
11:09:28.80 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:28.81 SUCCESS: test -n 106954752
11:09:28.81 NOTE: Expecting refres (106954752) to match refres from raidz3-6 (106954752)
11:09:28.81 SUCCESS: test 106954752 -eq 106954752
11:09:28.87 SUCCESS: zfs destroy testpool/testvol
11:09:29.17 SUCCESS: zpool destroy testpool
11:09:30.96 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0 raidz3 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0 c0t600144F013057A3100005D0D47580074d0 c0t600144F013057A3100005D0D47580075d0 c0t600144F013057A3100005D0D47580076d0 c0t600144F013057A3100005D0D47580077d0
11:09:30.96 NOTE: Verifying raidz3-7 raidz3-6 volblocksize=4096
11:09:31.05 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:31.06 SUCCESS: test -n 248325266
11:09:31.06 NOTE: Expecting refres (248325266) to match refres from raidz3-7 (248325266)
11:09:31.06 SUCCESS: test 248325266 -eq 248325266
11:09:31.31 SUCCESS: zfs destroy testpool/testvol
11:09:31.32 NOTE: Verifying raidz3-7 raidz3-6 volblocksize=8192
11:09:31.42 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:31.43 SUCCESS: test -n 245179538
11:09:31.43 NOTE: Expecting refres (245179538) to match refres from raidz3-7 (245179538)
11:09:31.44 SUCCESS: test 245179538 -eq 245179538
11:09:31.51 SUCCESS: zfs destroy testpool/testvol
11:09:31.51 NOTE: Verifying raidz3-7 raidz3-6 volblocksize=131072
11:09:31.62 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:31.63 SUCCESS: test -n 106954752
11:09:31.63 NOTE: Expecting refres (106954752) to match refres from raidz3-7 (106954752)
11:09:31.63 SUCCESS: test 106954752 -eq 106954752
11:09:31.70 SUCCESS: zfs destroy testpool/testvol
11:09:32.05 SUCCESS: zpool destroy testpool
11:09:34.11 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0 raidz3 c0t600144F013057A3100005D0D47580072d0 c0t600144F013057A3100005D0D47580073d0 c0t600144F013057A3100005D0D47580074d0 c0t600144F013057A3100005D0D47580075d0 c0t600144F013057A3100005D0D47580076d0 c0t600144F013057A3100005D0D47580077d0 c0t600144F013057A3100005D0D47580078d0
11:09:34.11 NOTE: Verifying raidz3-7 raidz3-7 volblocksize=4096
11:09:34.41 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:34.42 SUCCESS: test -n 248325266
11:09:34.42 NOTE: Expecting refres (248325266) to match refres from raidz3-7 (248325266)
11:09:34.42 SUCCESS: test 248325266 -eq 248325266
11:09:34.49 SUCCESS: zfs destroy testpool/testvol
11:09:34.50 NOTE: Verifying raidz3-7 raidz3-7 volblocksize=8192
11:09:34.72 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:34.73 SUCCESS: test -n 245179538
11:09:34.73 NOTE: Expecting refres (245179538) to match refres from raidz3-7 (245179538)
11:09:34.73 SUCCESS: test 245179538 -eq 245179538
11:09:34.81 SUCCESS: zfs destroy testpool/testvol
11:09:34.81 NOTE: Verifying raidz3-7 raidz3-7 volblocksize=131072
11:09:35.00 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:35.01 SUCCESS: test -n 106954752
11:09:35.01 NOTE: Expecting refres (106954752) to match refres from raidz3-7 (106954752)
11:09:35.01 SUCCESS: test 106954752 -eq 106954752
11:09:35.30 SUCCESS: zfs destroy testpool/testvol
11:09:35.93 SUCCESS: zpool destroy testpool
11:09:35.93 NOTE: Performing local cleanup via log_onexit (cleanup)
11:09:35.96 SUCCESS: rm -rf /var/tmp/testdir
11:09:36.25 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D4758006Bd0
11:09:36.34 SUCCESS: zfs create testpool/testfs
11:09:36.48 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:09:36.48 raidz refreservation=auto picks worst raidz vdev
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_raidz (run as root) [01:50] [PASS]
11:09:36.54 ASSERTION: raidz refreservation=auto accounts for extra parity and skip blocks
11:09:36.76 SUCCESS: zpool destroy testpool
11:09:36.80 SUCCESS: test 4096 -eq 512 -o 4096 -eq 4096
11:09:36.81 NOTE: Testing in ashift=12 mode
11:09:37.75 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0
11:09:37.75 NOTE: Testing raidz1-2 volblocksize=4096
11:09:37.83 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:42.62 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:09:42.63 SUCCESS: test -n 105332736
11:09:42.63 SUCCESS: test -n 113508352
11:09:42.63 NOTE: raidz1-2 refreservation 113508352 is 107.76% of reservation
11:09:42.64 SUCCESS: test 105332736 -le 113508352
11:09:42.64 SUCCESS: test 107.76 -le 110
11:09:42.75 SUCCESS: zfs destroy testpool/testvol
11:09:42.75 NOTE: Testing raidz1-2 volblocksize=8192
11:09:42.85 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:09:47.78 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:09:47.80 SUCCESS: test -n 105127936
11:09:47.80 SUCCESS: test -n 110362624
11:09:47.80 NOTE: raidz1-2 refreservation 110362624 is 104.98% of reservation
11:09:47.81 SUCCESS: test 105127936 -le 110362624
11:09:47.81 SUCCESS: test 104.98 -le 110
11:09:47.89 SUCCESS: zfs destroy testpool/testvol
11:09:47.89 NOTE: Testing raidz1-2 volblocksize=131072
11:09:48.02 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:09:52.27 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:09:52.31 SUCCESS: test -n 104931328
11:09:52.32 SUCCESS: test -n 106954752
11:09:52.32 NOTE: raidz1-2 refreservation 106954752 is 101.93% of reservation
11:09:52.32 SUCCESS: test 104931328 -le 106954752
11:09:52.32 SUCCESS: test 101.93 -le 110
11:09:52.40 SUCCESS: zfs destroy testpool/testvol
11:09:52.65 SUCCESS: zpool destroy testpool
11:09:53.24 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0
11:09:53.24 NOTE: Testing raidz1-3 volblocksize=4096
11:09:53.29 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:09:57.72 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:09:57.74 SUCCESS: test -n 140306496
11:09:57.74 SUCCESS: test -n 148460885
11:09:57.74 NOTE: raidz1-3 refreservation 148460885 is 105.81% of reservation
11:09:57.74 SUCCESS: test 140306496 -le 148460885
11:09:57.75 SUCCESS: test 105.81 -le 110
11:09:57.81 SUCCESS: zfs destroy testpool/testvol
11:09:57.81 NOTE: Testing raidz1-3 volblocksize=8192
11:09:57.90 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:10:02.12 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:02.15 SUCCESS: test -n 140033696
11:10:02.15 SUCCESS: test -n 145315157
11:10:02.15 NOTE: raidz1-3 refreservation 145315157 is 103.77% of reservation
11:10:02.16 SUCCESS: test 140033696 -le 145315157
11:10:02.16 SUCCESS: test 103.77 -le 110
11:10:02.22 SUCCESS: zfs destroy testpool/testvol
11:10:02.22 NOTE: Testing raidz1-3 volblocksize=131072
11:10:02.27 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:10:05.51 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:05.55 SUCCESS: test -n 104853408
11:10:05.55 SUCCESS: test -n 106954752
11:10:05.55 NOTE: raidz1-3 refreservation 106954752 is 102.00% of reservation
11:10:05.55 SUCCESS: test 104853408 -le 106954752
11:10:05.55 SUCCESS: test 102.00 -le 110
11:10:05.60 SUCCESS: zfs destroy testpool/testvol
11:10:05.74 SUCCESS: zpool destroy testpool
11:10:07.30 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0
11:10:07.30 NOTE: Testing raidz2-4 volblocksize=4096
11:10:07.37 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:10:13.88 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:13.89 SUCCESS: test -n 153061632
11:10:13.90 SUCCESS: test -n 161170897
11:10:13.90 NOTE: raidz2-4 refreservation 161170897 is 105.30% of reservation
11:10:13.90 SUCCESS: test 153061632 -le 161170897
11:10:13.90 SUCCESS: test 105.30 -le 110
11:10:14.00 SUCCESS: zfs destroy testpool/testvol
11:10:14.00 NOTE: Testing raidz2-4 volblocksize=8192
11:10:14.09 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:10:19.89 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:19.90 SUCCESS: test -n 152764032
11:10:19.91 SUCCESS: test -n 158025169
11:10:19.91 NOTE: raidz2-4 refreservation 158025169 is 103.44% of reservation
11:10:19.91 SUCCESS: test 152764032 -le 158025169
11:10:19.91 SUCCESS: test 103.44 -le 110
11:10:19.99 SUCCESS: zfs destroy testpool/testvol
11:10:19.99 NOTE: Testing raidz2-4 volblocksize=131072
11:10:20.09 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:10:24.19 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:24.23 SUCCESS: test -n 104862336
11:10:24.23 SUCCESS: test -n 106954752
11:10:24.23 NOTE: raidz2-4 refreservation 106954752 is 102.00% of reservation
11:10:24.24 SUCCESS: test 104862336 -le 106954752
11:10:24.24 SUCCESS: test 102.00 -le 110
11:10:24.30 SUCCESS: zfs destroy testpool/testvol
11:10:24.50 SUCCESS: zpool destroy testpool
11:10:25.24 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0
11:10:25.24 NOTE: Testing raidz2-5 volblocksize=4096
11:10:25.30 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:10:31.73 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:31.75 SUCCESS: test -n 187006752
11:10:31.75 SUCCESS: test -n 195064263
11:10:31.75 NOTE: raidz2-5 refreservation 195064263 is 104.31% of reservation
11:10:31.75 SUCCESS: test 187006752 -le 195064263
11:10:31.76 SUCCESS: test 104.31 -le 110
11:10:31.83 SUCCESS: zfs destroy testpool/testvol
11:10:31.83 NOTE: Testing raidz2-5 volblocksize=8192
11:10:31.91 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:10:37.70 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:37.72 SUCCESS: test -n 186643152
11:10:37.72 SUCCESS: test -n 191918535
11:10:37.72 NOTE: raidz2-5 refreservation 191918535 is 102.83% of reservation
11:10:37.72 SUCCESS: test 186643152 -le 191918535
11:10:37.73 SUCCESS: test 102.83 -le 110
11:10:37.79 SUCCESS: zfs destroy testpool/testvol
11:10:37.79 NOTE: Testing raidz2-5 volblocksize=131072
11:10:37.85 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:10:41.21 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:41.29 SUCCESS: test -n 104847696
11:10:41.29 SUCCESS: test -n 106954752
11:10:41.29 NOTE: raidz2-5 refreservation 106954752 is 102.01% of reservation
11:10:41.30 SUCCESS: test 104847696 -le 106954752
11:10:41.30 SUCCESS: test 102.01 -le 110
11:10:41.35 SUCCESS: zfs destroy testpool/testvol
11:10:41.50 SUCCESS: zpool destroy testpool
11:10:42.37 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0
11:10:42.37 NOTE: Testing raidz3-6 volblocksize=4096
11:10:42.45 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:10:51.04 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:51.05 SUCCESS: test -n 197498880
11:10:51.06 SUCCESS: test -n 206029763
11:10:51.06 NOTE: raidz3-6 refreservation 206029763 is 104.32% of reservation
11:10:51.06 SUCCESS: test 197498880 -le 206029763
11:10:51.06 SUCCESS: test 104.32 -le 110
11:10:51.15 SUCCESS: zfs destroy testpool/testvol
11:10:51.15 NOTE: Testing raidz3-6 volblocksize=8192
11:10:51.24 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:10:59.10 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:10:59.11 SUCCESS: test -n 197114880
11:10:59.12 SUCCESS: test -n 202884035
11:10:59.12 NOTE: raidz3-6 refreservation 202884035 is 102.93% of reservation
11:10:59.12 SUCCESS: test 197114880 -le 202884035
11:10:59.12 SUCCESS: test 102.93 -le 110
11:10:59.19 SUCCESS: zfs destroy testpool/testvol
11:10:59.19 NOTE: Testing raidz3-6 volblocksize=131072
11:10:59.28 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:11:03.54 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:11:03.59 SUCCESS: test -n 104586240
11:11:03.59 SUCCESS: test -n 106954752
11:11:03.59 NOTE: raidz3-6 refreservation 106954752 is 102.26% of reservation
11:11:03.60 SUCCESS: test 104586240 -le 106954752
11:11:03.60 SUCCESS: test 102.26 -le 110
11:11:03.65 SUCCESS: zfs destroy testpool/testvol
11:11:03.86 SUCCESS: zpool destroy testpool
11:11:05.04 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D4758006Bd0 c0t600144F013057A3100005D0D4758006Cd0 c0t600144F013057A3100005D0D4758006Dd0 c0t600144F013057A3100005D0D4758006Ed0 c0t600144F013057A3100005D0D4758006Fd0 c0t600144F013057A3100005D0D47580070d0 c0t600144F013057A3100005D0D47580071d0
11:11:05.04 NOTE: Testing raidz3-7 volblocksize=4096
11:11:05.12 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:11:13.95 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:11:13.96 SUCCESS: test -n 240290304
11:11:13.97 SUCCESS: test -n 248325266
11:11:13.97 NOTE: raidz3-7 refreservation 248325266 is 103.34% of reservation
11:11:13.97 SUCCESS: test 240290304 -le 248325266
11:11:13.97 SUCCESS: test 103.34 -le 110
11:11:14.07 SUCCESS: zfs destroy testpool/testvol
11:11:14.07 NOTE: Testing raidz3-7 volblocksize=8192
11:11:14.20 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:11:21.59 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:11:21.60 SUCCESS: test -n 239823104
11:11:21.61 SUCCESS: test -n 245179538
11:11:21.61 NOTE: raidz3-7 refreservation 245179538 is 102.23% of reservation
11:11:21.61 SUCCESS: test 239823104 -le 245179538
11:11:21.61 SUCCESS: test 102.23 -le 110
11:11:21.70 SUCCESS: zfs destroy testpool/testvol
11:11:21.70 NOTE: Testing raidz3-7 volblocksize=131072
11:11:21.81 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:11:25.49 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:11:25.52 SUCCESS: test -n 104820992
11:11:25.52 SUCCESS: test -n 106954752
11:11:25.52 NOTE: raidz3-7 refreservation 106954752 is 102.04% of reservation
11:11:25.53 SUCCESS: test 104820992 -le 106954752
11:11:25.53 SUCCESS: test 102.04 -le 110
11:11:25.61 SUCCESS: zfs destroy testpool/testvol
11:11:25.88 SUCCESS: zpool destroy testpool
11:11:25.89 NOTE: Performing local cleanup via log_onexit (cleanup)
11:11:25.91 SUCCESS: rm -rf /var/tmp/testdir
11:11:26.21 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D4758006Bd0
11:11:26.28 SUCCESS: zfs create testpool/testfs
11:11:26.54 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:11:26.54 raidz refreservation=auto accounts for extra parity and skip blocks
Test: /opt/zfs-tests/tests/functional/refreserv/cleanup (run as root) [00:00] [PASS]
11:11:26.91 SUCCESS: rm -rf /var/tmp/testdir
Test: /opt/zfs-tests/tests/functional/refreserv/setup (run as root) [00:00] [PASS]
11:06:43.87 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D46D8006Ad0
11:06:43.94 SUCCESS: zfs create testpool/testfs
11:06:44.21 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz (run as root) [00:03] [PASS]
11:06:44.24 ASSERTION: raidz refreservation=auto picks worst raidz vdev
11:06:44.49 SUCCESS: zpool destroy testpool
11:06:44.53 NOTE: Testing in ashift=12 mode
11:06:45.18 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D46D8006Ad0 c0t600144F013057A3100005D0D46D80069d0
11:06:45.18 NOTE: Gathering refreservation for raidz1-2 volblocksize=4096
11:06:45.28 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:06:45.29 SUCCESS: test -n 113508352
11:06:45.35 SUCCESS: zfs destroy testpool/testvol
11:06:45.35 NOTE: Gathering refreservation for raidz1-2 volblocksize=8192
11:06:45.45 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:06:45.46 SUCCESS: test -n 110362624
11:06:45.53 SUCCESS: zfs destroy testpool/testvol
11:06:45.53 NOTE: Gathering refreservation for raidz1-2 volblocksize=131072
11:06:45.64 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:06:45.65 SUCCESS: test -n 106954752
11:06:45.72 SUCCESS: zfs destroy testpool/testvol
11:06:45.94 SUCCESS: zpool destroy testpool
11:06:46.54 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D46D8006Ad0 c0t600144F013057A3100005D0D46D80069d0 c0t600144F013057A3100005D0D46D80068d0
11:06:46.54 NOTE: Gathering refreservation for raidz1-3 volblocksize=4096
11:06:46.59 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:06:46.60 SUCCESS: test -n 148460885
11:06:46.66 SUCCESS: zfs destroy testpool/testvol
11:06:46.66 NOTE: Gathering refreservation for raidz1-3 volblocksize=8192
11:06:46.74 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:06:46.75 SUCCESS: test -n 145315157
11:06:46.79 SUCCESS: zfs destroy testpool/testvol
11:06:46.79 NOTE: Gathering refreservation for raidz1-3 volblocksize=131072
11:06:46.86 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:06:46.88 SUCCESS: test -n 106954752
11:06:46.92 SUCCESS: zfs destroy testpool/testvol
11:06:47.09 SUCCESS: zpool destroy testpool
11:06:47.09 NOTE: Too few disks to test raidz2-4
11:06:47.09 NOTE: Too few disks to test raidz2-5
11:06:47.09 NOTE: Too few disks to test raidz3-6
11:06:47.09 NOTE: Too few disks to test raidz3-7
11:06:47.09 NOTE: sizes=([0]='' [raidz1]=([0]='' [2]=([0]='' [131072]=106954752 [4096]=113508352 [8192]=110362624) [3]=([0]='' [131072]=106954752 [4096]=148460885 [8192]=145315157) ) [raidz2]=([0]='') [raidz3]=([0]='') )
11:06:47.09 NOTE: Too few disks to test raidz1-2 + raidz1=2
11:06:47.09 NOTE: Too few disks to test raidz1-2 + raidz1=3
11:06:47.09 NOTE: Too few disks to test raidz1-3 + raidz1=2
11:06:47.09 NOTE: Too few disks to test raidz1-3 + raidz1=3
11:06:47.09 NOTE: Performing local cleanup via log_onexit (cleanup)
11:06:47.12 SUCCESS: rm -rf /var/tmp/testdir
11:06:47.41 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D46D8006Ad0
11:06:47.50 SUCCESS: zfs create testpool/testfs
11:06:47.77 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:06:47.77 raidz refreservation=auto picks worst raidz vdev
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_raidz (run as root) [00:33] [PASS]
11:06:47.81 ASSERTION: raidz refreservation=auto accounts for extra parity and skip blocks
11:06:48.06 SUCCESS: zpool destroy testpool
11:06:48.10 SUCCESS: test 4096 -eq 512 -o 4096 -eq 4096
11:06:48.10 NOTE: Testing in ashift=12 mode
11:06:48.51 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D46D8006Ad0 c0t600144F013057A3100005D0D46D80069d0
11:06:48.51 NOTE: Testing raidz1-2 volblocksize=4096
11:06:48.62 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:06:53.83 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:06:53.85 SUCCESS: test -n 105332736
11:06:53.86 SUCCESS: test -n 113508352
11:06:53.86 NOTE: raidz1-2 refreservation 113508352 is 107.76% of reservation
11:06:53.86 SUCCESS: test 105332736 -le 113508352
11:06:53.86 SUCCESS: test 107.76 -le 110
11:06:53.98 SUCCESS: zfs destroy testpool/testvol
11:06:53.98 NOTE: Testing raidz1-2 volblocksize=8192
11:06:54.09 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:06:59.26 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:06:59.27 SUCCESS: test -n 105127936
11:06:59.28 SUCCESS: test -n 110362624
11:06:59.28 NOTE: raidz1-2 refreservation 110362624 is 104.98% of reservation
11:06:59.28 SUCCESS: test 105127936 -le 110362624
11:06:59.28 SUCCESS: test 104.98 -le 110
11:06:59.39 SUCCESS: zfs destroy testpool/testvol
11:06:59.39 NOTE: Testing raidz1-2 volblocksize=131072
11:06:59.54 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:07:04.41 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:07:04.43 SUCCESS: test -n 104931328
11:07:04.44 SUCCESS: test -n 106954752
11:07:04.44 NOTE: raidz1-2 refreservation 106954752 is 101.93% of reservation
11:07:04.44 SUCCESS: test 104931328 -le 106954752
11:07:04.44 SUCCESS: test 101.93 -le 110
11:07:04.53 SUCCESS: zfs destroy testpool/testvol
11:07:04.76 SUCCESS: zpool destroy testpool
11:07:06.12 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D46D8006Ad0 c0t600144F013057A3100005D0D46D80069d0 c0t600144F013057A3100005D0D46D80068d0
11:07:06.12 NOTE: Testing raidz1-3 volblocksize=4096
11:07:06.20 SUCCESS: zfs create -V 100m -o volblocksize=4096 testpool/testvol
11:07:11.33 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:07:11.35 SUCCESS: test -n 140306496
11:07:11.35 SUCCESS: test -n 148460885
11:07:11.35 NOTE: raidz1-3 refreservation 148460885 is 105.81% of reservation
11:07:11.35 SUCCESS: test 140306496 -le 148460885
11:07:11.36 SUCCESS: test 105.81 -le 110
11:07:11.44 SUCCESS: zfs destroy testpool/testvol
11:07:11.44 NOTE: Testing raidz1-3 volblocksize=8192
11:07:11.51 SUCCESS: zfs create -V 100m -o volblocksize=8192 testpool/testvol
11:07:16.41 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:07:16.42 SUCCESS: test -n 140033696
11:07:16.43 SUCCESS: test -n 145315157
11:07:16.43 NOTE: raidz1-3 refreservation 145315157 is 103.77% of reservation
11:07:16.43 SUCCESS: test 140033696 -le 145315157
11:07:16.43 SUCCESS: test 103.77 -le 110
11:07:16.50 SUCCESS: zfs destroy testpool/testvol
11:07:16.50 NOTE: Testing raidz1-3 volblocksize=131072
11:07:16.57 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:07:20.14 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:07:20.18 SUCCESS: test -n 104853408
11:07:20.19 SUCCESS: test -n 106954752
11:07:20.19 NOTE: raidz1-3 refreservation 106954752 is 102.00% of reservation
11:07:20.19 SUCCESS: test 104853408 -le 106954752
11:07:20.19 SUCCESS: test 102.00 -le 110
11:07:20.23 SUCCESS: zfs destroy testpool/testvol
11:07:20.40 SUCCESS: zpool destroy testpool
11:07:20.40 NOTE: Too few disks to test raidz2-4
11:07:20.40 NOTE: Too few disks to test raidz2-5
11:07:20.40 NOTE: Too few disks to test raidz3-6
11:07:20.40 NOTE: Too few disks to test raidz3-7
11:07:20.40 NOTE: Performing local cleanup via log_onexit (cleanup)
11:07:20.43 SUCCESS: rm -rf /var/tmp/testdir
11:07:20.70 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D46D8006Ad0
11:07:20.77 SUCCESS: zfs create testpool/testfs
11:07:21.09 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:07:21.09 raidz refreservation=auto accounts for extra parity and skip blocks
Test: /opt/zfs-tests/tests/functional/refreserv/cleanup (run as root) [00:00] [PASS]
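The `NOTE: ... refreservation N is X% of reservation` lines above compare the automatically chosen refreservation against the space actually consumed after the zvol is fully written with `dd`. The percentage can be recomputed from any logged pair; a minimal sketch, using the raidz1-2 volblocksize=4096 numbers from the ashift=12 run above:

```shell
# Recompute the ratio the test logged for raidz1-2 volblocksize=4096:
# refreservation 113508352 vs. 105332736 consumed after the dd.
used=105332736
refres=113508352
# Two decimal places, matching the test's own output format.
awk -v u="$used" -v r="$refres" 'BEGIN { printf "%.2f%%\n", 100 * r / u }'
# → 107.76%
```

The test then asserts this ratio stays at or below 110% for ashift=12 pools, i.e. the auto refreservation is a modest over-estimate rather than a 2x one.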
11:07:21.46 SUCCESS: rm -rf /var/tmp/testdir
Test: /opt/zfs-tests/tests/functional/refreserv/setup (run as root) [00:00] [PASS]
11:12:41.98 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D483A0092d0
11:12:42.03 SUCCESS: zfs create testpool/testfs
11:12:42.26 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz (run as root) [00:40] [PASS]
11:12:42.30 ASSERTION: raidz refreservation=auto picks worst raidz vdev
11:12:42.46 SUCCESS: zpool destroy testpool
11:12:42.50 NOTE: Testing in ashift=9 mode
11:12:43.24 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0
11:12:43.24 NOTE: Gathering refreservation for raidz1-2 volblocksize=512
11:12:43.46 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:43.47 SUCCESS: test -n 159383552
11:12:43.52 SUCCESS: zfs destroy testpool/testvol
11:12:43.52 NOTE: Gathering refreservation for raidz1-2 volblocksize=1024
11:12:43.71 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:43.72 SUCCESS: test -n 133169152
11:12:43.78 SUCCESS: zfs destroy testpool/testvol
11:12:43.79 NOTE: Gathering refreservation for raidz1-2 volblocksize=131072
11:12:43.97 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:43.98 SUCCESS: test -n 106954752
11:12:44.09 SUCCESS: zfs destroy testpool/testvol
11:12:44.44 SUCCESS: zpool destroy testpool
11:12:44.98 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0
11:12:44.98 NOTE: Gathering refreservation for raidz1-3 volblocksize=512
11:12:45.10 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:45.11 SUCCESS: test -n 194336085
11:12:45.15 SUCCESS: zfs destroy testpool/testvol
11:12:45.15 NOTE: Gathering refreservation for raidz1-3 volblocksize=1024
11:12:45.20 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:45.21 SUCCESS: test -n 168121685
11:12:45.24 SUCCESS: zfs destroy testpool/testvol
11:12:45.24 NOTE: Gathering refreservation for raidz1-3 volblocksize=131072
11:12:45.29 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:45.30 SUCCESS: test -n 106954752
11:12:45.34 SUCCESS: zfs destroy testpool/testvol
11:12:45.49 SUCCESS: zpool destroy testpool
11:12:46.16 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0
11:12:46.16 NOTE: Gathering refreservation for raidz2-4 volblocksize=512
11:12:46.40 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:46.41 SUCCESS: test -n 211505750
11:12:46.45 SUCCESS: zfs destroy testpool/testvol
11:12:46.45 NOTE: Gathering refreservation for raidz2-4 volblocksize=1024
11:12:46.58 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:46.59 SUCCESS: test -n 185291350
11:12:46.64 SUCCESS: zfs destroy testpool/testvol
11:12:46.64 NOTE: Gathering refreservation for raidz2-4 volblocksize=131072
11:12:46.77 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:46.78 SUCCESS: test -n 106954752
11:12:46.89 SUCCESS: zfs destroy testpool/testvol
11:12:47.04 SUCCESS: zpool destroy testpool
11:12:47.90 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0
11:12:47.90 NOTE: Gathering refreservation for raidz2-5 volblocksize=512
11:12:48.01 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:48.02 SUCCESS: test -n 242243054
11:12:48.09 SUCCESS: zfs destroy testpool/testvol
11:12:48.09 NOTE: Gathering refreservation for raidz2-5 volblocksize=1024
11:12:48.13 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:48.14 SUCCESS: test -n 216028654
11:12:48.17 SUCCESS: zfs destroy testpool/testvol
11:12:48.17 NOTE: Gathering refreservation for raidz2-5 volblocksize=131072
11:12:48.21 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:48.22 SUCCESS: test -n 106954752
11:12:48.25 SUCCESS: zfs destroy testpool/testvol
11:12:48.37 SUCCESS: zpool destroy testpool
11:12:49.47 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0
11:12:49.47 NOTE: Gathering refreservation for raidz3-6 volblocksize=512
11:12:49.56 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:49.57 SUCCESS: test -n 262615452
11:12:49.60 SUCCESS: zfs destroy testpool/testvol
11:12:49.60 NOTE: Gathering refreservation for raidz3-6 volblocksize=1024
11:12:49.65 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:49.66 SUCCESS: test -n 236401052
11:12:49.68 SUCCESS: zfs destroy testpool/testvol
11:12:49.69 NOTE: Gathering refreservation for raidz3-6 volblocksize=131072
11:12:49.73 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:49.74 SUCCESS: test -n 106954752
11:12:49.77 SUCCESS: zfs destroy testpool/testvol
11:12:49.96 SUCCESS: zpool destroy testpool
11:12:51.05 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0
11:12:51.05 NOTE: Gathering refreservation for raidz3-7 volblocksize=512
11:12:51.19 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:51.20 SUCCESS: test -n 294200466
11:12:51.24 SUCCESS: zfs destroy testpool/testvol
11:12:51.24 NOTE: Gathering refreservation for raidz3-7 volblocksize=1024
11:12:51.37 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:51.38 SUCCESS: test -n 267986066
11:12:51.42 SUCCESS: zfs destroy testpool/testvol
11:12:51.42 NOTE: Gathering refreservation for raidz3-7 volblocksize=131072
11:12:51.54 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:51.55 SUCCESS: test -n 106954752
11:12:51.59 SUCCESS: zfs destroy testpool/testvol
11:12:51.76 SUCCESS: zpool destroy testpool
11:12:51.76 NOTE: sizes=([0]='' [raidz1]=([0]='' [2]=([0]='' [1024]=133169152 [131072]=106954752 [512]=159383552) [3]=([0]='' [1024]=168121685 [131072]=106954752 [512]=194336085) ) [raidz2]=([0]='' [4]=([0]='' [1024]=185291350 [131072]=106954752 [512]=211505750) [5]=([0]='' [1024]=216028654 [131072]=106954752 [512]=242243054) ) [raidz3]=([0]='' [6]=([0]='' [1024]=236401052 [131072]=106954752 [512]=262615452) [7]=([0]='' [1024]=267986066 [131072]=106954752 [512]=294200466) ) )
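After gathering the per-layout refreservations into the `sizes` array above, the test builds pools mixing two raidz vdevs and asserts that `refreservation=auto` matches the largest (worst) per-layout value. With the volblocksize=512 figures from the array, picking the worst vdev reduces to a max; a standalone sketch with values copied from the log (variable names here are illustrative):

```shell
# Given the per-layout refreservations gathered above for
# volblocksize=512, a pool mixing raidz1-2 and raidz1-3 vdevs
# should pick the larger of the two.
raidz1_2=159383552
raidz1_3=194336085
worst=$raidz1_2
[ "$raidz1_3" -gt "$worst" ] && worst=$raidz1_3
echo "$worst"
# → 194336085
```

This is exactly what the raidz1-2 + raidz1-3 pool created below reports: 194336085, the raidz1-3 value.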
11:12:52.47 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 raidz1 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0
11:12:52.47 NOTE: Verifying raidz1-2 raidz1-2 volblocksize=512
11:12:52.85 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:52.86 SUCCESS: test -n 159383552
11:12:52.86 NOTE: Expecting refres (159383552) to match refres from raidz1-2 (159383552)
11:12:52.86 SUCCESS: test 159383552 -eq 159383552
11:12:52.91 SUCCESS: zfs destroy testpool/testvol
11:12:52.91 NOTE: Verifying raidz1-2 raidz1-2 volblocksize=1024
11:12:53.17 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:53.18 SUCCESS: test -n 133169152
11:12:53.19 NOTE: Expecting refres (133169152) to match refres from raidz1-2 (133169152)
11:12:53.19 SUCCESS: test 133169152 -eq 133169152
11:12:53.25 SUCCESS: zfs destroy testpool/testvol
11:12:53.25 NOTE: Verifying raidz1-2 raidz1-2 volblocksize=131072
11:12:53.59 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:53.60 SUCCESS: test -n 106954752
11:12:53.60 NOTE: Expecting refres (106954752) to match refres from raidz1-2 (106954752)
11:12:53.60 SUCCESS: test 106954752 -eq 106954752
11:12:53.76 SUCCESS: zfs destroy testpool/testvol
11:12:54.32 SUCCESS: zpool destroy testpool
11:12:55.11 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 raidz1 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0
11:12:55.11 NOTE: Verifying raidz1-2 raidz1-3 volblocksize=512
11:12:55.21 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:55.22 SUCCESS: test -n 194336085
11:12:55.23 NOTE: Expecting refres (194336085) to match refres from raidz1-2 (194336085)
11:12:55.23 SUCCESS: test 194336085 -eq 194336085
11:12:55.26 SUCCESS: zfs destroy testpool/testvol
11:12:55.26 NOTE: Verifying raidz1-2 raidz1-3 volblocksize=1024
11:12:55.38 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:55.39 SUCCESS: test -n 168121685
11:12:55.39 NOTE: Expecting refres (168121685) to match refres from raidz1-2 (168121685)
11:12:55.39 SUCCESS: test 168121685 -eq 168121685
11:12:55.42 SUCCESS: zfs destroy testpool/testvol
11:12:55.42 NOTE: Verifying raidz1-2 raidz1-3 volblocksize=131072
11:12:55.53 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:55.54 SUCCESS: test -n 106954752
11:12:55.54 NOTE: Expecting refres (106954752) to match refres from raidz1-2 (106954752)
11:12:55.54 SUCCESS: test 106954752 -eq 106954752
11:12:55.57 SUCCESS: zfs destroy testpool/testvol
11:12:55.76 SUCCESS: zpool destroy testpool
11:12:56.48 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 raidz1 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0
11:12:56.48 NOTE: Verifying raidz1-3 raidz1-2 volblocksize=512
11:12:56.64 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:56.65 SUCCESS: test -n 194336085
11:12:56.65 NOTE: Expecting refres (194336085) to match refres from raidz1-3 (194336085)
11:12:56.65 SUCCESS: test 194336085 -eq 194336085
11:12:56.69 SUCCESS: zfs destroy testpool/testvol
11:12:56.69 NOTE: Verifying raidz1-3 raidz1-2 volblocksize=1024
11:12:56.81 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:56.82 SUCCESS: test -n 168121685
11:12:56.82 NOTE: Expecting refres (168121685) to match refres from raidz1-3 (168121685)
11:12:56.82 SUCCESS: test 168121685 -eq 168121685
11:12:56.85 SUCCESS: zfs destroy testpool/testvol
11:12:56.85 NOTE: Verifying raidz1-3 raidz1-2 volblocksize=131072
11:12:57.02 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:57.03 SUCCESS: test -n 106954752
11:12:57.04 NOTE: Expecting refres (106954752) to match refres from raidz1-3 (106954752)
11:12:57.04 SUCCESS: test 106954752 -eq 106954752
11:12:57.07 SUCCESS: zfs destroy testpool/testvol
11:12:57.26 SUCCESS: zpool destroy testpool
11:12:58.11 SUCCESS: zpool create -f testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 raidz1 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0
11:12:58.11 NOTE: Verifying raidz1-3 raidz1-3 volblocksize=512
11:12:58.15 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:12:58.16 SUCCESS: test -n 194336085
11:12:58.16 NOTE: Expecting refres (194336085) to match refres from raidz1-3 (194336085)
11:12:58.17 SUCCESS: test 194336085 -eq 194336085
11:12:58.23 SUCCESS: zfs destroy testpool/testvol
11:12:58.23 NOTE: Verifying raidz1-3 raidz1-3 volblocksize=1024
11:12:58.27 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:12:58.28 SUCCESS: test -n 168121685
11:12:58.28 NOTE: Expecting refres (168121685) to match refres from raidz1-3 (168121685)
11:12:58.29 SUCCESS: test 168121685 -eq 168121685
11:12:58.38 SUCCESS: zfs destroy testpool/testvol
11:12:58.39 NOTE: Verifying raidz1-3 raidz1-3 volblocksize=131072
11:12:58.48 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:12:58.49 SUCCESS: test -n 106954752
11:12:58.49 NOTE: Expecting refres (106954752) to match refres from raidz1-3 (106954752)
11:12:58.49 SUCCESS: test 106954752 -eq 106954752
11:12:58.52 SUCCESS: zfs destroy testpool/testvol
11:12:58.65 SUCCESS: zpool destroy testpool
11:13:00.44 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 raidz2 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0 c0t600144F013057A3100005D0D483A008Bd0
11:13:00.45 NOTE: Verifying raidz2-4 raidz2-4 volblocksize=512
11:13:00.77 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:00.78 SUCCESS: test -n 211505750
11:13:00.78 NOTE: Expecting refres (211505750) to match refres from raidz2-4 (211505750)
11:13:00.78 SUCCESS: test 211505750 -eq 211505750
11:13:00.82 SUCCESS: zfs destroy testpool/testvol
11:13:00.82 NOTE: Verifying raidz2-4 raidz2-4 volblocksize=1024
11:13:01.12 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:01.13 SUCCESS: test -n 185291350
11:13:01.13 NOTE: Expecting refres (185291350) to match refres from raidz2-4 (185291350)
11:13:01.14 SUCCESS: test 185291350 -eq 185291350
11:13:01.24 SUCCESS: zfs destroy testpool/testvol
11:13:01.24 NOTE: Verifying raidz2-4 raidz2-4 volblocksize=131072
11:13:01.80 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:01.82 SUCCESS: test -n 106954752
11:13:01.82 NOTE: Expecting refres (106954752) to match refres from raidz2-4 (106954752)
11:13:01.82 SUCCESS: test 106954752 -eq 106954752
11:13:02.02 SUCCESS: zfs destroy testpool/testvol
11:13:02.56 SUCCESS: zpool destroy testpool
11:13:03.91 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 raidz2 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0
11:13:03.91 NOTE: Verifying raidz2-4 raidz2-5 volblocksize=512
11:13:04.02 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:04.03 SUCCESS: test -n 242243054
11:13:04.03 NOTE: Expecting refres (242243054) to match refres from raidz2-4 (242243054)
11:13:04.03 SUCCESS: test 242243054 -eq 242243054
11:13:04.07 SUCCESS: zfs destroy testpool/testvol
11:13:04.07 NOTE: Verifying raidz2-4 raidz2-5 volblocksize=1024
11:13:04.19 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:04.20 SUCCESS: test -n 216028654
11:13:04.20 NOTE: Expecting refres (216028654) to match refres from raidz2-4 (216028654)
11:13:04.20 SUCCESS: test 216028654 -eq 216028654
11:13:04.23 SUCCESS: zfs destroy testpool/testvol
11:13:04.23 NOTE: Verifying raidz2-4 raidz2-5 volblocksize=131072
11:13:04.34 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:04.35 SUCCESS: test -n 106954752
11:13:04.35 NOTE: Expecting refres (106954752) to match refres from raidz2-4 (106954752)
11:13:04.35 SUCCESS: test 106954752 -eq 106954752
11:13:04.39 SUCCESS: zfs destroy testpool/testvol
11:13:04.68 SUCCESS: zpool destroy testpool
11:13:05.88 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 raidz2 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0
11:13:05.88 NOTE: Verifying raidz2-5 raidz2-4 volblocksize=512
11:13:06.04 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:06.05 SUCCESS: test -n 242243054
11:13:06.05 NOTE: Expecting refres (242243054) to match refres from raidz2-5 (242243054)
11:13:06.05 SUCCESS: test 242243054 -eq 242243054
11:13:06.19 SUCCESS: zfs destroy testpool/testvol
11:13:06.19 NOTE: Verifying raidz2-5 raidz2-4 volblocksize=1024
11:13:06.37 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:06.38 SUCCESS: test -n 216028654
11:13:06.38 NOTE: Expecting refres (216028654) to match refres from raidz2-5 (216028654)
11:13:06.38 SUCCESS: test 216028654 -eq 216028654
11:13:06.42 SUCCESS: zfs destroy testpool/testvol
11:13:06.42 NOTE: Verifying raidz2-5 raidz2-4 volblocksize=131072
11:13:06.54 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:06.55 SUCCESS: test -n 106954752
11:13:06.55 NOTE: Expecting refres (106954752) to match refres from raidz2-5 (106954752)
11:13:06.55 SUCCESS: test 106954752 -eq 106954752
11:13:06.59 SUCCESS: zfs destroy testpool/testvol
11:13:06.77 SUCCESS: zpool destroy testpool
11:13:08.21 SUCCESS: zpool create -f testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 raidz2 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0 c0t600144F013057A3100005D0D48390089d0
11:13:08.22 NOTE: Verifying raidz2-5 raidz2-5 volblocksize=512
11:13:08.26 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:08.28 SUCCESS: test -n 242243054
11:13:08.28 NOTE: Expecting refres (242243054) to match refres from raidz2-5 (242243054)
11:13:08.28 SUCCESS: test 242243054 -eq 242243054
11:13:08.31 SUCCESS: zfs destroy testpool/testvol
11:13:08.31 NOTE: Verifying raidz2-5 raidz2-5 volblocksize=1024
11:13:08.35 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:08.36 SUCCESS: test -n 216028654
11:13:08.37 NOTE: Expecting refres (216028654) to match refres from raidz2-5 (216028654)
11:13:08.37 SUCCESS: test 216028654 -eq 216028654
11:13:08.43 SUCCESS: zfs destroy testpool/testvol
11:13:08.43 NOTE: Verifying raidz2-5 raidz2-5 volblocksize=131072
11:13:08.48 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:08.49 SUCCESS: test -n 106954752
11:13:08.49 NOTE: Expecting refres (106954752) to match refres from raidz2-5 (106954752)
11:13:08.49 SUCCESS: test 106954752 -eq 106954752
11:13:08.52 SUCCESS: zfs destroy testpool/testvol
11:13:08.78 SUCCESS: zpool destroy testpool
11:13:10.45 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 raidz3 c0t600144F013057A3100005D0D483A008Cd0 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0 c0t600144F013057A3100005D0D48390089d0 c0t600144F013057A3100005D0D48390088d0 c0t600144F013057A3100005D0D48390087d0
11:13:10.45 NOTE: Verifying raidz3-6 raidz3-6 volblocksize=512
11:13:10.50 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:10.51 SUCCESS: test -n 262615452
11:13:10.51 NOTE: Expecting refres (262615452) to match refres from raidz3-6 (262615452)
11:13:10.52 SUCCESS: test 262615452 -eq 262615452
11:13:10.60 SUCCESS: zfs destroy testpool/testvol
11:13:10.60 NOTE: Verifying raidz3-6 raidz3-6 volblocksize=1024
11:13:10.65 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:10.66 SUCCESS: test -n 236401052
11:13:10.66 NOTE: Expecting refres (236401052) to match refres from raidz3-6 (236401052)
11:13:10.66 SUCCESS: test 236401052 -eq 236401052
11:13:10.74 SUCCESS: zfs destroy testpool/testvol
11:13:10.74 NOTE: Verifying raidz3-6 raidz3-6 volblocksize=131072
11:13:10.88 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:10.89 SUCCESS: test -n 106954752
11:13:10.90 NOTE: Expecting refres (106954752) to match refres from raidz3-6 (106954752)
11:13:10.90 SUCCESS: test 106954752 -eq 106954752
11:13:10.94 SUCCESS: zfs destroy testpool/testvol
11:13:11.15 SUCCESS: zpool destroy testpool
11:13:13.06 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 raidz3 c0t600144F013057A3100005D0D483A008Cd0 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0 c0t600144F013057A3100005D0D48390089d0 c0t600144F013057A3100005D0D48390088d0 c0t600144F013057A3100005D0D48390087d0 c0t600144F013057A3100005D0D48390086d0
11:13:13.06 NOTE: Verifying raidz3-6 raidz3-7 volblocksize=512
11:13:13.13 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:13.14 SUCCESS: test -n 294200466
11:13:13.14 NOTE: Expecting refres (294200466) to match refres from raidz3-6 (294200466)
11:13:13.15 SUCCESS: test 294200466 -eq 294200466
11:13:13.19 SUCCESS: zfs destroy testpool/testvol
11:13:13.19 NOTE: Verifying raidz3-6 raidz3-7 volblocksize=1024
11:13:13.40 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:13.41 SUCCESS: test -n 267986066
11:13:13.41 NOTE: Expecting refres (267986066) to match refres from raidz3-6 (267986066)
11:13:13.41 SUCCESS: test 267986066 -eq 267986066
11:13:13.45 SUCCESS: zfs destroy testpool/testvol
11:13:13.45 NOTE: Verifying raidz3-6 raidz3-7 volblocksize=131072
11:13:13.66 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:13.67 SUCCESS: test -n 106954752
11:13:13.67 NOTE: Expecting refres (106954752) to match refres from raidz3-6 (106954752)
11:13:13.67 SUCCESS: test 106954752 -eq 106954752
11:13:13.71 SUCCESS: zfs destroy testpool/testvol
11:13:14.10 SUCCESS: zpool destroy testpool
11:13:15.80 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0 raidz3 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0 c0t600144F013057A3100005D0D48390089d0 c0t600144F013057A3100005D0D48390088d0 c0t600144F013057A3100005D0D48390087d0 c0t600144F013057A3100005D0D48390086d0
11:13:15.80 NOTE: Verifying raidz3-7 raidz3-6 volblocksize=512
11:13:15.94 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:15.95 SUCCESS: test -n 294200466
11:13:15.95 NOTE: Expecting refres (294200466) to match refres from raidz3-7 (294200466)
11:13:15.96 SUCCESS: test 294200466 -eq 294200466
11:13:15.99 SUCCESS: zfs destroy testpool/testvol
11:13:16.00 NOTE: Verifying raidz3-7 raidz3-6 volblocksize=1024
11:13:16.25 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:16.26 SUCCESS: test -n 267986066
11:13:16.27 NOTE: Expecting refres (267986066) to match refres from raidz3-7 (267986066)
11:13:16.27 SUCCESS: test 267986066 -eq 267986066
11:13:16.31 SUCCESS: zfs destroy testpool/testvol
11:13:16.31 NOTE: Verifying raidz3-7 raidz3-6 volblocksize=131072
11:13:16.46 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:16.47 SUCCESS: test -n 106954752
11:13:16.47 NOTE: Expecting refres (106954752) to match refres from raidz3-7 (106954752)
11:13:16.47 SUCCESS: test 106954752 -eq 106954752
11:13:16.52 SUCCESS: zfs destroy testpool/testvol
11:13:16.82 SUCCESS: zpool destroy testpool
11:13:18.89 SUCCESS: zpool create -f testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0 raidz3 c0t600144F013057A3100005D0D483A008Bd0 c0t600144F013057A3100005D0D483A008Ad0 c0t600144F013057A3100005D0D48390089d0 c0t600144F013057A3100005D0D48390088d0 c0t600144F013057A3100005D0D48390087d0 c0t600144F013057A3100005D0D48390086d0 c0t600144F013057A3100005D0D48390085d0
11:13:18.89 NOTE: Verifying raidz3-7 raidz3-7 volblocksize=512
11:13:19.50 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:19.62 SUCCESS: test -n 294200466
11:13:19.62 NOTE: Expecting refres (294200466) to match refres from raidz3-7 (294200466)
11:13:19.62 SUCCESS: test 294200466 -eq 294200466
11:13:19.83 SUCCESS: zfs destroy testpool/testvol
11:13:19.83 NOTE: Verifying raidz3-7 raidz3-7 volblocksize=1024
11:13:20.49 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:20.61 SUCCESS: test -n 267986066
11:13:20.61 NOTE: Expecting refres (267986066) to match refres from raidz3-7 (267986066)
11:13:20.61 SUCCESS: test 267986066 -eq 267986066
11:13:20.75 SUCCESS: zfs destroy testpool/testvol
11:13:20.75 NOTE: Verifying raidz3-7 raidz3-7 volblocksize=131072
11:13:21.26 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:21.37 SUCCESS: test -n 106954752
11:13:21.38 NOTE: Expecting refres (106954752) to match refres from raidz3-7 (106954752)
11:13:21.38 SUCCESS: test 106954752 -eq 106954752
11:13:21.58 SUCCESS: zfs destroy testpool/testvol
11:13:22.16 SUCCESS: zpool destroy testpool
11:13:22.16 NOTE: Performing local cleanup via log_onexit (cleanup)
11:13:22.19 SUCCESS: rm -rf /var/tmp/testdir
11:13:22.47 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D483A0092d0
11:13:22.64 SUCCESS: zfs create testpool/testfs
11:13:22.76 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:13:22.76 raidz refreservation=auto picks worst raidz vdev
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_raidz (run as root) [02:30] [PASS]
11:13:22.82 ASSERTION: raidz refreservation=auto accounts for extra parity and skip blocks
11:13:22.97 SUCCESS: zpool destroy testpool
11:13:23.02 SUCCESS: test 512 -eq 512 -o 512 -eq 4096
11:13:23.02 NOTE: Testing in ashift=9 mode
11:13:23.99 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0
11:13:24.00 NOTE: Testing raidz1-2 volblocksize=512
11:13:24.17 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:33.92 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:13:33.93 SUCCESS: test -n 107142144
11:13:33.94 SUCCESS: test -n 159383552
11:13:33.94 NOTE: raidz1-2 refreservation 159383552 is 148.76% of reservation
11:13:33.94 SUCCESS: test 107142144 -le 159383552
11:13:33.94 SUCCESS: test 148.76 -le 151
11:13:34.20 SUCCESS: zfs destroy testpool/testvol
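The much larger ratios in this ashift=9 run (148.76% at volblocksize=512 vs. ~102% at 128k) come from parity and skip-block overhead, which dominates for small blocks; this is the overhead 9318 teaches `vol_volsize_to_reservation` to account for. A sketch of the per-block raidz allocation rule, assuming the standard `vdev_raidz_asize` logic (data plus parity sectors, rounded up to a multiple of nparity+1 to cover skip blocks); the variable names are illustrative:

```shell
# Per-block allocation on a 3-disk raidz1 with ashift=9,
# volblocksize=512 (cf. the raidz1-3 cases in this run).
ashift=9; ndisks=3; nparity=1; volblocksize=512
sector=$((1 << ashift))
data=$(( (volblocksize + sector - 1) / sector ))
# One parity sector per stripe of (ndisks - nparity) data sectors.
stripes=$(( (data + (ndisks - nparity) - 1) / (ndisks - nparity) ))
total=$(( data + stripes * nparity ))
# Round up to a multiple of (nparity + 1) sectors: the skip blocks.
alloc=$(( (total + nparity) / (nparity + 1) * (nparity + 1) ))
echo "$(( alloc * sector )) bytes allocated per ${volblocksize}-byte block"
# → 1024 bytes allocated per 512-byte block
```

At volblocksize=128k the same rule adds proportionally far less overhead, which is why the 128k cases stay near 102% while the 512-byte cases approach the test's 151% ceiling.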
11:13:34.21 NOTE: Testing raidz1-2 volblocksize=1024
11:13:34.54 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:40.56 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:13:40.57 SUCCESS: test -n 106006528
11:13:40.57 SUCCESS: test -n 133169152
11:13:40.57 NOTE: raidz1-2 refreservation 133169152 is 125.62% of reservation
11:13:40.58 SUCCESS: test 106006528 -le 133169152
11:13:40.58 SUCCESS: test 125.62 -le 151
11:13:40.73 SUCCESS: zfs destroy testpool/testvol
11:13:40.73 NOTE: Testing raidz1-2 volblocksize=131072
11:13:41.06 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:13:45.35 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:13:45.41 SUCCESS: test -n 104879104
11:13:45.42 SUCCESS: test -n 106954752
11:13:45.42 NOTE: raidz1-2 refreservation 106954752 is 101.98% of reservation
11:13:45.42 SUCCESS: test 104879104 -le 106954752
11:13:45.42 SUCCESS: test 101.98 -le 151
11:13:45.47 SUCCESS: zfs destroy testpool/testvol
11:13:45.78 SUCCESS: zpool destroy testpool
11:13:46.26 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0
11:13:46.26 NOTE: Testing raidz1-3 volblocksize=512
11:13:46.37 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:13:53.39 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:13:53.40 SUCCESS: test -n 142165628
11:13:53.41 SUCCESS: test -n 194336085
11:13:53.41 NOTE: raidz1-3 refreservation 194336085 is 136.70% of reservation
11:13:53.41 SUCCESS: test 142165628 -le 194336085
11:13:53.41 SUCCESS: test 136.70 -le 151
11:13:53.66 SUCCESS: zfs destroy testpool/testvol
11:13:53.66 NOTE: Testing raidz1-3 volblocksize=1024
11:13:53.72 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:13:59.48 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:13:59.49 SUCCESS: test -n 140928480
11:13:59.50 SUCCESS: test -n 168121685
11:13:59.50 NOTE: raidz1-3 refreservation 168121685 is 119.30% of reservation
11:13:59.50 SUCCESS: test 140928480 -le 168121685
11:13:59.50 SUCCESS: test 119.30 -le 151
11:13:59.63 SUCCESS: zfs destroy testpool/testvol
11:13:59.63 NOTE: Testing raidz1-3 volblocksize=131072
11:13:59.73 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:14:02.66 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:02.70 SUCCESS: test -n 104782480
11:14:02.71 SUCCESS: test -n 106954752
11:14:02.71 NOTE: raidz1-3 refreservation 106954752 is 102.07% of reservation
11:14:02.71 SUCCESS: test 104782480 -le 106954752
11:14:02.71 SUCCESS: test 102.07 -le 151
11:14:02.77 SUCCESS: zfs destroy testpool/testvol
11:14:02.88 SUCCESS: zpool destroy testpool
11:14:03.54 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0
11:14:03.54 NOTE: Testing raidz2-4 volblocksize=512
11:14:03.75 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:14:13.12 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:13.14 SUCCESS: test -n 159158250
11:14:13.14 SUCCESS: test -n 211505750
11:14:13.14 NOTE: raidz2-4 refreservation 211505750 is 132.89% of reservation
11:14:13.14 SUCCESS: test 159158250 -le 211505750
11:14:13.14 SUCCESS: test 132.89 -le 151
11:14:13.43 SUCCESS: zfs destroy testpool/testvol
11:14:13.43 NOTE: Testing raidz2-4 volblocksize=1024
11:14:13.58 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:14:22.17 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:22.20 SUCCESS: test -n 157925070
11:14:22.21 SUCCESS: test -n 185291350
11:14:22.21 NOTE: raidz2-4 refreservation 185291350 is 117.33% of reservation
11:14:22.21 SUCCESS: test 157925070 -le 185291350
11:14:22.22 SUCCESS: test 117.33 -le 151
11:14:22.38 SUCCESS: zfs destroy testpool/testvol
11:14:22.38 NOTE: Testing raidz2-4 volblocksize=131072
11:14:22.64 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:14:26.53 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:26.56 SUCCESS: test -n 104682600
11:14:26.57 SUCCESS: test -n 106954752
11:14:26.57 NOTE: raidz2-4 refreservation 106954752 is 102.17% of reservation
11:14:26.57 SUCCESS: test 104682600 -le 106954752
11:14:26.57 SUCCESS: test 102.17 -le 151
11:14:26.61 SUCCESS: zfs destroy testpool/testvol
11:14:26.95 SUCCESS: zpool destroy testpool
11:14:27.72 SUCCESS: zpool create testpool raidz2 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0
11:14:27.72 NOTE: Testing raidz2-5 volblocksize=512
11:14:27.83 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:14:37.63 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:37.64 SUCCESS: test -n 189996090
11:14:37.64 SUCCESS: test -n 242243054
11:14:37.64 NOTE: raidz2-5 refreservation 242243054 is 127.50% of reservation
11:14:37.65 SUCCESS: test 189996090 -le 242243054
11:14:37.65 SUCCESS: test 127.50 -le 151
11:14:37.90 SUCCESS: zfs destroy testpool/testvol
11:14:37.91 NOTE: Testing raidz2-5 volblocksize=1024
11:14:37.95 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:14:46.43 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:46.49 SUCCESS: test -n 188705940
11:14:46.49 SUCCESS: test -n 216028654
11:14:46.49 NOTE: raidz2-5 refreservation 216028654 is 114.48% of reservation
11:14:46.49 SUCCESS: test 188705940 -le 216028654
11:14:46.50 SUCCESS: test 114.48 -le 151
11:14:46.64 SUCCESS: zfs destroy testpool/testvol
11:14:46.64 NOTE: Testing raidz2-5 volblocksize=131072
11:14:46.73 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:14:50.03 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:14:50.09 SUCCESS: test -n 104708940
11:14:50.09 SUCCESS: test -n 106954752
11:14:50.09 NOTE: raidz2-5 refreservation 106954752 is 102.14% of reservation
11:14:50.10 SUCCESS: test 104708940 -le 106954752
11:14:50.10 SUCCESS: test 102.14 -le 151
11:14:50.14 SUCCESS: zfs destroy testpool/testvol
11:14:50.28 SUCCESS: zpool destroy testpool
11:14:51.20 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0
11:14:51.20 NOTE: Testing raidz3-6 volblocksize=512
11:14:51.32 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:15:03.98 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:15:03.99 SUCCESS: test -n 210559904
11:15:04.00 SUCCESS: test -n 262615452
11:15:04.00 NOTE: raidz3-6 refreservation 262615452 is 124.72% of reservation
11:15:04.00 SUCCESS: test 210559904 -le 262615452
11:15:04.00 SUCCESS: test 124.72 -le 151
11:15:04.34 SUCCESS: zfs destroy testpool/testvol
11:15:04.34 NOTE: Testing raidz3-6 volblocksize=1024
11:15:04.43 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:15:16.10 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:15:16.12 SUCCESS: test -n 209332576
11:15:16.12 SUCCESS: test -n 236401052
11:15:16.13 NOTE: raidz3-6 refreservation 236401052 is 112.93% of reservation
11:15:16.13 SUCCESS: test 209332576 -le 236401052
11:15:16.13 SUCCESS: test 112.93 -le 151
11:15:16.30 SUCCESS: zfs destroy testpool/testvol
11:15:16.30 NOTE: Testing raidz3-6 volblocksize=131072
11:15:16.43 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:15:20.49 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:15:20.52 SUCCESS: test -n 104887776
11:15:20.53 SUCCESS: test -n 106954752
11:15:20.53 NOTE: raidz3-6 refreservation 106954752 is 101.97% of reservation
11:15:20.53 SUCCESS: test 104887776 -le 106954752
11:15:20.53 SUCCESS: test 101.97 -le 151
11:15:20.57 SUCCESS: zfs destroy testpool/testvol
11:15:20.71 SUCCESS: zpool destroy testpool
11:15:21.79 SUCCESS: zpool create testpool raidz3 c0t600144F013057A3100005D0D483A0092d0 c0t600144F013057A3100005D0D483A0091d0 c0t600144F013057A3100005D0D483A0090d0 c0t600144F013057A3100005D0D483A008Fd0 c0t600144F013057A3100005D0D483A008Ed0 c0t600144F013057A3100005D0D483A008Dd0 c0t600144F013057A3100005D0D483A008Cd0
11:15:21.79 NOTE: Testing raidz3-7 volblocksize=512
11:15:21.93 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:15:35.03 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:15:35.05 SUCCESS: test -n 241591456
11:15:35.05 SUCCESS: test -n 294200466
11:15:35.05 NOTE: raidz3-7 refreservation 294200466 is 121.78% of reservation
11:15:35.05 SUCCESS: test 241591456 -le 294200466
11:15:35.06 SUCCESS: test 121.78 -le 151
11:15:35.35 SUCCESS: zfs destroy testpool/testvol
11:15:35.35 NOTE: Testing raidz3-7 volblocksize=1024
11:15:35.49 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:15:47.36 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:15:47.38 SUCCESS: test -n 240414112
11:15:47.38 SUCCESS: test -n 267986066
11:15:47.39 NOTE: raidz3-7 refreservation 267986066 is 111.47% of reservation
11:15:47.39 SUCCESS: test 240414112 -le 267986066
11:15:47.39 SUCCESS: test 111.47 -le 151
11:15:47.58 SUCCESS: zfs destroy testpool/testvol
11:15:47.58 NOTE: Testing raidz3-7 volblocksize=131072
11:15:47.74 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:15:51.74 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:15:51.78 SUCCESS: test -n 104692512
11:15:51.79 SUCCESS: test -n 106954752
11:15:51.79 NOTE: raidz3-7 refreservation 106954752 is 102.16% of reservation
11:15:51.79 SUCCESS: test 104692512 -le 106954752
11:15:51.79 SUCCESS: test 102.16 -le 151
11:15:51.84 SUCCESS: zfs destroy testpool/testvol
11:15:52.15 SUCCESS: zpool destroy testpool
11:15:52.15 NOTE: Performing local cleanup via log_onexit (cleanup)
11:15:52.18 SUCCESS: rm -rf /var/tmp/testdir
11:15:52.49 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D483A0092d0
11:15:52.56 SUCCESS: zfs create testpool/testfs
11:15:52.81 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:15:52.81 raidz refreservation=auto accounts for extra parity and skip blocks
Test: /opt/zfs-tests/tests/functional/refreserv/cleanup (run as root) [00:00] [PASS]
11:15:53.10 SUCCESS: rm -rf /var/tmp/testdir
Test: /opt/zfs-tests/tests/functional/refreserv/setup (run as root) [00:00] [PASS]
11:04:29.86 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D1D9C0065d0
11:04:30.02 SUCCESS: zfs create testpool/testfs
11:04:30.15 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_multi_raidz (run as root) [00:04] [PASS]
11:04:30.19 ASSERTION: raidz refreservation=auto picks worst raidz vdev
11:04:30.36 SUCCESS: zpool destroy testpool
11:04:30.40 NOTE: Testing in ashift=9 mode
11:04:30.83 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D1D9C0065d0 c0t600144F013057A3100005D0D1D9C0066d0
11:04:30.83 NOTE: Gathering refreservation for raidz1-2 volblocksize=512
11:04:31.07 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:04:31.08 SUCCESS: test -n 159383552
11:04:31.29 SUCCESS: zfs destroy testpool/testvol
11:04:31.29 NOTE: Gathering refreservation for raidz1-2 volblocksize=1024
11:04:31.53 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:04:31.54 SUCCESS: test -n 133169152
11:04:31.61 SUCCESS: zfs destroy testpool/testvol
11:04:31.61 NOTE: Gathering refreservation for raidz1-2 volblocksize=131072
11:04:31.85 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:04:31.85 SUCCESS: test -n 106954752
11:04:31.91 SUCCESS: zfs destroy testpool/testvol
11:04:32.10 SUCCESS: zpool destroy testpool
11:04:32.58 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D1D9C0065d0 c0t600144F013057A3100005D0D1D9C0066d0 c0t600144F013057A3100005D0D1D9C0067d0
11:04:32.58 NOTE: Gathering refreservation for raidz1-3 volblocksize=512
11:04:32.70 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:04:32.71 SUCCESS: test -n 194336085
11:04:32.75 SUCCESS: zfs destroy testpool/testvol
11:04:32.75 NOTE: Gathering refreservation for raidz1-3 volblocksize=1024
11:04:32.80 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:04:32.80 SUCCESS: test -n 168121685
11:04:32.84 SUCCESS: zfs destroy testpool/testvol
11:04:32.84 NOTE: Gathering refreservation for raidz1-3 volblocksize=131072
11:04:32.90 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:04:32.91 SUCCESS: test -n 106954752
11:04:32.95 SUCCESS: zfs destroy testpool/testvol
11:04:33.07 SUCCESS: zpool destroy testpool
11:04:33.07 NOTE: Too few disks to test raidz2-4
11:04:33.07 NOTE: Too few disks to test raidz2-5
11:04:33.07 NOTE: Too few disks to test raidz3-6
11:04:33.07 NOTE: Too few disks to test raidz3-7
11:04:33.07 NOTE: sizes=([0]='' [raidz1]=([0]='' [2]=([0]='' [1024]=133169152 [131072]=106954752 [512]=159383552) [3]=([0]='' [1024]=168121685 [131072]=106954752 [512]=194336085) ) [raidz2]=([0]='') [raidz3]=([0]='') )
11:04:33.07 NOTE: Too few disks to test raidz1-2 + raidz1=2
11:04:33.07 NOTE: Too few disks to test raidz1-2 + raidz1=3
11:04:33.07 NOTE: Too few disks to test raidz1-3 + raidz1=2
11:04:33.07 NOTE: Too few disks to test raidz1-3 + raidz1=3
11:04:33.07 NOTE: Performing local cleanup via log_onexit (cleanup)
11:04:33.09 SUCCESS: rm -rf /var/tmp/testdir
11:04:34.00 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D1D9C0065d0
11:04:34.15 SUCCESS: zfs create testpool/testfs
11:04:34.27 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:04:34.27 raidz refreservation=auto picks worst raidz vdev
Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_raidz (run as root) [00:44] [PASS]
11:04:34.31 ASSERTION: raidz refreservation=auto accounts for extra parity and skip blocks
11:04:34.45 SUCCESS: zpool destroy testpool
11:04:34.50 SUCCESS: test 512 -eq 512 -o 512 -eq 4096
11:04:34.50 NOTE: Testing in ashift=9 mode
11:04:34.99 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D1D9C0065d0 c0t600144F013057A3100005D0D1D9C0066d0
11:04:34.99 NOTE: Testing raidz1-2 volblocksize=512
11:04:35.19 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:04:46.63 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:04:46.65 SUCCESS: test -n 107142144
11:04:46.66 SUCCESS: test -n 159383552
11:04:46.66 NOTE: raidz1-2 refreservation 159383552 is 148.76% of reservation
11:04:46.66 SUCCESS: test 107142144 -le 159383552
11:04:46.67 SUCCESS: test 148.76 -le 151
11:04:46.94 SUCCESS: zfs destroy testpool/testvol
11:04:46.94 NOTE: Testing raidz1-2 volblocksize=1024
11:04:47.27 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:04:53.65 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:04:53.66 SUCCESS: test -n 106007552
11:04:53.66 SUCCESS: test -n 133169152
11:04:53.67 NOTE: raidz1-2 refreservation 133169152 is 125.62% of reservation
11:04:53.67 SUCCESS: test 106007552 -le 133169152
11:04:53.67 SUCCESS: test 125.62 -le 151
11:04:53.82 SUCCESS: zfs destroy testpool/testvol
11:04:53.82 NOTE: Testing raidz1-2 volblocksize=131072
11:04:54.02 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:04:58.41 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:04:58.44 SUCCESS: test -n 104881152
11:04:58.44 SUCCESS: test -n 106954752
11:04:58.44 NOTE: raidz1-2 refreservation 106954752 is 101.98% of reservation
11:04:58.45 SUCCESS: test 104881152 -le 106954752
11:04:58.45 SUCCESS: test 101.98 -le 151
11:04:58.51 SUCCESS: zfs destroy testpool/testvol
11:04:58.90 SUCCESS: zpool destroy testpool
11:04:59.36 SUCCESS: zpool create testpool raidz1 c0t600144F013057A3100005D0D1D9C0065d0 c0t600144F013057A3100005D0D1D9C0066d0 c0t600144F013057A3100005D0D1D9C0067d0
11:04:59.36 NOTE: Testing raidz1-3 volblocksize=512
11:04:59.45 SUCCESS: zfs create -V 100m -o volblocksize=512 testpool/testvol
11:05:06.93 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:05:06.95 SUCCESS: test -n 142165628
11:05:06.95 SUCCESS: test -n 194336085
11:05:06.95 NOTE: raidz1-3 refreservation 194336085 is 136.70% of reservation
11:05:06.95 SUCCESS: test 142165628 -le 194336085
11:05:06.95 SUCCESS: test 136.70 -le 151
11:05:07.21 SUCCESS: zfs destroy testpool/testvol
11:05:07.21 NOTE: Testing raidz1-3 volblocksize=1024
11:05:07.28 SUCCESS: zfs create -V 100m -o volblocksize=1024 testpool/testvol
11:05:14.82 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:05:14.84 SUCCESS: test -n 140928480
11:05:14.84 SUCCESS: test -n 168121685
11:05:14.84 NOTE: raidz1-3 refreservation 168121685 is 119.30% of reservation
11:05:14.84 SUCCESS: test 140928480 -le 168121685
11:05:14.84 SUCCESS: test 119.30 -le 151
11:05:15.00 SUCCESS: zfs destroy testpool/testvol
11:05:15.00 NOTE: Testing raidz1-3 volblocksize=131072
11:05:15.09 SUCCESS: zfs create -V 100m -o volblocksize=131072 testpool/testvol
11:05:18.13 SUCCESS: dd if=/dev/zero of=/dev/zvol/dsk/testpool/testvol bs=1024k count=100
11:05:18.17 SUCCESS: test -n 104782480
11:05:18.17 SUCCESS: test -n 106954752
11:05:18.17 NOTE: raidz1-3 refreservation 106954752 is 102.07% of reservation
11:05:18.18 SUCCESS: test 104782480 -le 106954752
11:05:18.18 SUCCESS: test 102.07 -le 151
11:05:18.21 SUCCESS: zfs destroy testpool/testvol
11:05:18.33 SUCCESS: zpool destroy testpool
11:05:18.34 NOTE: Too few disks to test raidz2-4
11:05:18.34 NOTE: Too few disks to test raidz2-5
11:05:18.34 NOTE: Too few disks to test raidz3-6
11:05:18.34 NOTE: Too few disks to test raidz3-7
11:05:18.34 NOTE: Performing local cleanup via log_onexit (cleanup)
11:05:18.36 SUCCESS: rm -rf /var/tmp/testdir
11:05:18.63 SUCCESS: zpool create -f testpool c0t600144F013057A3100005D0D1D9C0065d0
11:05:18.69 SUCCESS: zfs create testpool/testfs
11:05:18.95 SUCCESS: zfs set mountpoint=/var/tmp/testdir testpool/testfs
11:05:18.95 raidz refreservation=auto accounts for extra parity and skip blocks
Test: /opt/zfs-tests/tests/functional/refreserv/cleanup (run as root) [00:00] [PASS]
11:05:19.25 SUCCESS: rm -rf /var/tmp/testdir
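The overhead percentages logged above (e.g. 148.76% at volblocksize=512 falling to ~102% at 128K) follow from how raidz allocates parity and skip-block sectors per logical block. Below is a minimal sketch of that per-block allocation size, modeled on the kernel's `vdev_raidz_asize()`; the function and parameter names here are illustrative, not an actual libzfs API, and the real `refreservation=auto` calculation additionally accounts for metadata and the vdev deflation ratio.

```python
# Hypothetical sketch of raidz per-block allocation, modeled on the
# logic of vdev_raidz_asize(). Names are illustrative only.

def raidz_asize(psize, ashift, ndisks, nparity):
    """Bytes allocated for a psize-byte block on a raidz vdev."""
    sector = 1 << ashift
    # Data sectors needed for the logical block.
    nsectors = (psize + sector - 1) // sector
    # One parity sector per stripe of (ndisks - nparity) data sectors.
    nsectors += nparity * -(-nsectors // (ndisks - nparity))
    # Pad to a multiple of (nparity + 1) sectors -- these are the
    # skip blocks that illumos 9318 teaches the reservation math about.
    nsectors += (-nsectors) % (nparity + 1)
    return nsectors * sector

# volblocksize=512 on raidz1 with 2 disks: 1 data + 1 parity sector.
print(raidz_asize(512, 9, 2, 1))    # 1024
# volblocksize=1024 on raidz1 with 3 disks: 2 data + 1 parity sector,
# padded to 4 sectors by a skip block.
print(raidz_asize(1024, 9, 3, 1))   # 2048
```

Small volblocksize values pay the largest relative parity-plus-skip overhead, which matches the trend in the logged percentages and is why the test's 151% upper bound is tightest at volblocksize=512.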
[PASS] /opt/zfs-tests/tests/functional/acl/cifs/setup
[PASS] /opt/zfs-tests/tests/functional/acl/cifs/cifs_attr_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/cifs/cifs_attr_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/cifs/cifs_attr_003_pos
[PASS] /opt/zfs-tests/tests/functional/acl/cifs/cifs_attr_004_pos
[PASS] /opt/zfs-tests/tests/functional/acl/cifs/cleanup
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/setup
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_001_neg
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_aclmode_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_compact_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_delete_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_inherit_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_inherit_002_neg
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_inherit_002_pos
[FAIL] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_inherit_003_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_inherit_004_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_owner_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_rwacl_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_rwx_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_rwx_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_rwx_003_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_rwx_004_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_xattr_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_xattr_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_cp_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_cp_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_cpio_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_cpio_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_find_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_ls_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_mv_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_tar_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_tar_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_aclmode_restricted_001_neg
[PASS] /opt/zfs-tests/tests/functional/acl/nontrivial/cleanup
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/setup
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_chmod_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_compress_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_cp_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_cp_002_neg
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_cp_003_neg
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_find_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_find_002_neg
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_ls_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_ls_002_neg
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_mv_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pack_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pax_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pax_002_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pax_003_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pax_004_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pax_005_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_pax_006_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_tar_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_tar_002_neg
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/zfs_acl_aclmode_restricted_001_pos
[PASS] /opt/zfs-tests/tests/functional/acl/trivial/cleanup
[PASS] /opt/zfs-tests/tests/functional/alloc_class/setup
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_001_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_002_neg
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_003_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_004_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_005_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_006_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_007_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_008_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_009_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_010_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_011_neg
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_012_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/alloc_class_013_pos
[PASS] /opt/zfs-tests/tests/functional/alloc_class/cleanup
[PASS] /opt/zfs-tests/tests/functional/atime/setup
[PASS] /opt/zfs-tests/tests/functional/atime/atime_001_pos
[PASS] /opt/zfs-tests/tests/functional/atime/atime_002_neg
[PASS] /opt/zfs-tests/tests/functional/atime/cleanup
[PASS] /opt/zfs-tests/tests/functional/bootfs/setup
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_001_pos
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_002_neg
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_003_pos
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_004_neg
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_005_neg
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_006_pos
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_007_pos
[PASS] /opt/zfs-tests/tests/functional/bootfs/bootfs_008_pos
[PASS] /opt/zfs-tests/tests/functional/bootfs/cleanup
[PASS] /opt/zfs-tests/tests/functional/cache/setup
[PASS] /opt/zfs-tests/tests/functional/cache/cache_001_pos
[PASS] /opt/zfs-tests/tests/functional/cache/cache_002_pos
[PASS] /opt/zfs-tests/tests/functional/cache/cache_003_pos
[PASS] /opt/zfs-tests/tests/functional/cache/cache_004_neg
[PASS] /opt/zfs-tests/tests/functional/cache/cache_005_neg
[PASS] /opt/zfs-tests/tests/functional/cache/cache_006_pos
[PASS] /opt/zfs-tests/tests/functional/cache/cache_007_neg
[PASS] /opt/zfs-tests/tests/functional/cache/cache_008_neg
[PASS] /opt/zfs-tests/tests/functional/cache/cache_009_pos
[FAIL] /opt/zfs-tests/tests/functional/cache/cache_010_neg
[PASS] /opt/zfs-tests/tests/functional/cache/cache_011_pos
[PASS] /opt/zfs-tests/tests/functional/cache/cleanup
[PASS] /opt/zfs-tests/tests/functional/cachefile/cachefile_001_pos
[PASS] /opt/zfs-tests/tests/functional/cachefile/cachefile_002_pos
[PASS] /opt/zfs-tests/tests/functional/cachefile/cachefile_003_pos
[PASS] /opt/zfs-tests/tests/functional/cachefile/cachefile_004_pos
[PASS] /opt/zfs-tests/tests/functional/casenorm/setup
[PASS] /opt/zfs-tests/tests/functional/casenorm/case_all_values
[PASS] /opt/zfs-tests/tests/functional/casenorm/norm_all_values
[PASS] /opt/zfs-tests/tests/functional/casenorm/sensitive_none_lookup
[PASS] /opt/zfs-tests/tests/functional/casenorm/sensitive_none_delete
[PASS] /opt/zfs-tests/tests/functional/casenorm/sensitive_formd_lookup
[PASS] /opt/zfs-tests/tests/functional/casenorm/sensitive_formd_delete
[PASS] /opt/zfs-tests/tests/functional/casenorm/insensitive_none_lookup
[PASS] /opt/zfs-tests/tests/functional/casenorm/insensitive_none_delete
[PASS] /opt/zfs-tests/tests/functional/casenorm/insensitive_formd_lookup
[PASS] /opt/zfs-tests/tests/functional/casenorm/insensitive_formd_delete
[PASS] /opt/zfs-tests/tests/functional/casenorm/mixed_none_lookup
[PASS] /opt/zfs-tests/tests/functional/casenorm/mixed_none_lookup_ci
[PASS] /opt/zfs-tests/tests/functional/casenorm/mixed_none_delete
[PASS] /opt/zfs-tests/tests/functional/casenorm/mixed_formd_lookup
[PASS] /opt/zfs-tests/tests/functional/casenorm/mixed_formd_lookup_ci
[PASS] /opt/zfs-tests/tests/functional/casenorm/mixed_formd_delete
[PASS] /opt/zfs-tests/tests/functional/casenorm/cleanup
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/setup
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.args_to_lua
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.divide_by_zero
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.exists
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.integer_illegal
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.integer_overflow
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.language_functions_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.language_functions_pos
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.large_prog
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.memory_limit
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.nested_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.nested_pos
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.nvlist_to_lua
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.recursive_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.recursive_pos
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.return_large
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.return_nvlist_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.return_nvlist_pos
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.return_recursive_table
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/tst.timeout
[PASS] /opt/zfs-tests/tests/functional/channel_program/lua_core/cleanup
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/setup
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.destroy_fs
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.destroy_snap
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_count_and_limit
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_index_props
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_mountpoint
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_number_props
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_string_props
[FAIL] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_type
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_userquota
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.get_written
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_children
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_clones
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_snapshots
[FAIL] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_system_props
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.parse_args_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.promote_conflict
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.promote_multiple
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.promote_simple
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_mult
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_one
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_destroy
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_neg
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_recursive
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_simple
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/tst.terminate_by_signal
[PASS] /opt/zfs-tests/tests/functional/channel_program/synctask_core/cleanup
[PASS] /opt/zfs-tests/tests/functional/checksum/run_edonr_test
[PASS] /opt/zfs-tests/tests/functional/checksum/run_sha2_test
[PASS] /opt/zfs-tests/tests/functional/checksum/run_skein_test
[PASS] /opt/zfs-tests/tests/functional/clean_mirror/setup
[PASS] /opt/zfs-tests/tests/functional/clean_mirror/clean_mirror_001_pos
[PASS] /opt/zfs-tests/tests/functional/clean_mirror/clean_mirror_002_pos
[PASS] /opt/zfs-tests/tests/functional/clean_mirror/clean_mirror_003_pos
[PASS] /opt/zfs-tests/tests/functional/clean_mirror/clean_mirror_004_pos
[PASS] /opt/zfs-tests/tests/functional/clean_mirror/cleanup
[PASS] /opt/zfs-tests/tests/functional/cli_root/zdb/zdb_001_neg
[PASS] /opt/zfs-tests/tests/functional/cli_root/zdb/zdb_002_pos
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs/setup
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs/zfs_001_neg
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs/zfs_002_pos
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs/zfs_003_neg
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs/cleanup
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs_clone/setup
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs_clone/zfs_clone_001_neg
[PASS] /opt/zfs-tests/tests/functional/cli_root/zfs_clone/zfs_clone_002_pos