svn commit: r367052 - head/sys/kern
Alexander Motin
mav at FreeBSD.org
Mon Oct 26 04:04:07 UTC 2020
Author: mav
Date: Mon Oct 26 04:04:06 2020
New Revision: 367052
URL: https://svnweb.freebsd.org/changeset/base/367052
Log:
Enable the bioq 'car limit' added in r335066, at 128 bios.
Without the 'car limit' enabled (the previous default), while running a sequential
ZFS scrub on an HDD without command queuing support, I measured latency on
concurrent random reads reaching 4 seconds (surprised it was not more). Enabling
the limit reduced that latency to 65 milliseconds, while the scrub still ran at ~180MB/s.
For disks with command queuing this makes little difference (if any), since most
of the time all the requests are queued down to the disk or HBA, leaving nothing
in the queue to sort. And even if something does not fit and stays on the queue,
it is likely not there for long. To avoid limiting sorting in such bursty
scenarios, I've added batched counter zeroing when the queue becomes empty.
The internal scheduler of the SAS HDD I was testing seems to be even more
favorable to random I/O, reducing the scrub speed to ~120MB/s. So in case
somebody is worried that this limit is too strict -- it actually looks relaxed.
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
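
[Editor's note: for illustration only, here is a minimal userland model of the
mechanism the log message describes, not the kernel's actual subr_disk.c code.
Offset-sorted (disksort-style) insertion is skipped once the current batch
exceeds the limit, and, as in the diff below, the batch counter is zeroed when
the queue drains. All names (struct req, req_queue, rq_disksort, rq_remove,
batchsize) are stand-ins for struct bio, struct bio_queue_head, bioq_disksort(),
bioq_remove() and debug.bioq_batchsize; the real code has an insert-point
optimization and other details omitted here.]

#include <stdio.h>
#include <sys/queue.h>		/* TAILQ macros, as used by the kernel bioq */

struct req {				/* stand-in for struct bio */
	long offset;			/* models bio_offset */
	TAILQ_ENTRY(req) link;
};

struct req_queue {			/* stand-in for struct bio_queue_head */
	TAILQ_HEAD(, req) queue;
	int total;			/* requests currently queued */
	int batched;			/* sorted insertions in the current batch */
};

static int batchsize = 2;		/* models debug.bioq_batchsize (128 in the commit) */

static void
rq_init(struct req_queue *head)
{
	TAILQ_INIT(&head->queue);
	head->total = 0;
	head->batched = 0;
}

/*
 * Offset-sorted insertion with a 'car limit': once the current batch has
 * grown past batchsize, further requests are simply appended, so a long
 * sorted stream cannot keep pushing a waiting request back indefinitely.
 */
static void
rq_disksort(struct req_queue *head, struct req *rp)
{
	struct req *cur, *prev;

	if (batchsize > 0 && head->batched > batchsize) {
		TAILQ_INSERT_TAIL(&head->queue, rp, link);
	} else {
		prev = NULL;
		TAILQ_FOREACH(cur, &head->queue, link) {
			if (rp->offset < cur->offset)
				break;
			prev = cur;
		}
		if (prev == NULL)
			TAILQ_INSERT_HEAD(&head->queue, rp, link);
		else
			TAILQ_INSERT_AFTER(&head->queue, prev, rp, link);
		head->batched++;
	}
	head->total++;
}

/*
 * Dequeue the head request.  The r367052 tweak: when the queue drains,
 * reset the batch counter so bursty workloads get full sorting again.
 */
static struct req *
rq_remove(struct req_queue *head)
{
	struct req *rp;

	rp = TAILQ_FIRST(&head->queue);
	if (rp == NULL)
		return (NULL);
	TAILQ_REMOVE(&head->queue, rp, link);
	if (TAILQ_EMPTY(&head->queue))
		head->batched = 0;
	head->total--;
	return (rp);
}

int
main(void)
{
	struct req reqs[4] = {
		{ .offset = 100 }, { .offset = 50 },
		{ .offset = 300 }, { .offset = 200 },
	};
	struct req_queue q;
	struct req *rp;
	int i;

	rq_init(&q);
	for (i = 0; i < 4; i++)
		rq_disksort(&q, &reqs[i]);
	/*
	 * With batchsize = 2, the last request (offset 200) is appended
	 * unsorted, so dispatch order is 50, 100, 300, 200.
	 */
	while ((rp = rq_remove(&q)) != NULL)
		printf("dispatch offset %ld\n", rp->offset);
	return (0);
}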
Modified:
head/sys/kern/subr_disk.c
Modified: head/sys/kern/subr_disk.c
==============================================================================
--- head/sys/kern/subr_disk.c Mon Oct 26 03:26:18 2020 (r367051)
+++ head/sys/kern/subr_disk.c Mon Oct 26 04:04:06 2020 (r367052)
@@ -26,7 +26,7 @@ __FBSDID("$FreeBSD$");
#include <sys/sysctl.h>
#include <geom/geom_disk.h>
-static int bioq_batchsize = 0;
+static int bioq_batchsize = 128;
SYSCTL_INT(_debug, OID_AUTO, bioq_batchsize, CTLFLAG_RW,
&bioq_batchsize, 0, "BIOQ batch size");
@@ -172,6 +172,8 @@ bioq_remove(struct bio_queue_head *head, struct bio *b
head->insert_point = NULL;
TAILQ_REMOVE(&head->queue, bp, bio_queue);
+ if (TAILQ_EMPTY(&head->queue))
+ head->batched = 0;
head->total--;
}
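
[Editor's note: since the limit is exposed through the debug.bioq_batchsize
sysctl declared above, it should be tunable at runtime; judging by the old
default of 0 shown in this diff, setting it back to 0 presumably restores the
previous behavior of unlimited sorting.]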