SSD performance on head/freebsd-10 stable using FIO
Kashyap Desai
kashyap.desai at avagotech.com
Thu Jul 10 13:36:15 UTC 2014
> -----Original Message-----
> From: Alexander Motin [mailto:mavbsd at gmail.com] On Behalf Of Alexander
> Motin
> Sent: Thursday, July 10, 2014 6:01 PM
> To: Kashyap Desai
> Cc: FreeBSD-scsi
> Subject: Re: SSD performance on head/freebsd-10 stable using FIO
>
> Hi, Kashyap.
>
> On 10.07.2014 15:00, Kashyap Desai wrote:
> > I am trying to collect IOPS and throughput figures using FIO on
> > FreeBSD-10-stable, since the post below mentions that CAM can reach
> > up to 1,000,000 IOPS using fine-grained CAM locking.
> >
> > http://www.freebsd.org/news/status/report-2013-07-2013-09.html#GEOM-Direct-Dispatch-and-Fine-Grained-CAM-Locking
> >
> > I am using the FIO parameters below.
> >
> > [global]
> > ioengine=posixaio
> > buffered=0
> > rw=randread
> > bs=4K
> > iodepth=32
> > numjobs=2
> > direct=1
> > runtime=60s
> > thread
> > group_reporting=1
> > [job1]
> > filename=/dev/da0
> > [job2]
> > filename=/dev/da1
> > [job3]
> > filename=/dev/da2
> > [job4]
> > filename=/dev/da3
> > [job5]
> > filename=/dev/da4
> > ..
> >
> > I have 8 SSDs in my setup, all behind LSI’s 12Gb/s MegaRAID controller
> > as JBODs. I also found that FIO can be used in async mode after loading
> > the “aio” kernel module.
> >
> > Using a single SSD, I am able to see 110K-130K IOPS. These IOPS counts
> > match what I see on a Linux machine.
> >
> > However, I am not able to scale IOPS beyond 200K on my machine. I see
> > the CPU almost fully occupied, with no idle time once IOPS reach 200K.
> >
> > If you have any pointers to try, I can run some experiments on my setup.
>
> With such results I would immediately start profiling with pmcstat.
> Quite likely you are hitting some new lock congestion. Start with a simple
> `pmcstat -n 100000000 -TS unhalted-cycles`. It is hard to say for sure what
> went wrong there without more data, so just a couple of thoughts:
I have attached the profile output for the command mentioned above. I will
dig further and see whether we are hitting a theoretical limit for
CAM-attached HBAs.
At this first level, I am trying to isolate whether this is a tools, tuning,
or driver/FW issue.
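For reference, roughly the sequence used to produce the attached callgraph
(a sketch; the sample log file name here is just illustrative):

# sample unhalted CPU cycles system-wide into a log while fio runs
pmcstat -S unhalted-cycles -O samples.pmc
# (run the workload, then stop sampling with Ctrl-C)
# post-process the raw samples into a callgraph for analysis
pmcstat -R samples.pmc -G profile.graph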
>
> First of all, I've never tried aio in my benchmarks, only synchronous ones.
> Try running 8 instances of `dd if=/dev/daX of=/dev/null bs=512` per SSD at
> the same time, just as I did. You may vary the number of dd's, but keep the
> total below 256, or you will need to increase the nswbuf limit in
> kern_vfs_bio_buffer_alloc().
I also ran multiple dd instances and saw IOPS throttle at around ~200K.
Do we have any mechanism to check the CAM layer's maximum IOPS without
involving an actual device? Something like a _null_ device driver which just
sends the command straight back to the CAM layer?
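For the record, the dd run was along these lines (a minimal sketch, assuming
the SSDs are da0 through da7; 8 readers per drive keeps the total at 64,
well under the 256 limit mentioned above):

#!/bin/sh
# spawn 8 parallel 512-byte readers per SSD (da0..da7), 64 dd's total
for d in 0 1 2 3 4 5 6 7; do
    for i in 1 2 3 4 5 6 7 8; do
        dd if=/dev/da${d} of=/dev/null bs=512 &
    done
done
# watch IOPS with `gstat` in another terminal; kill the dd's when done
wait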
>
> Second, you are using a single HBA, which should create significant
> congestion around its CAM SIM lock. The proper solution would be to add
> multiple-queue support to the driver; Scott Long and I discussed that for
> quite some time, but it requires more work (I hope you may be interested
> in it ;) ). Or you may just insert 3-4 HBAs. I reached my million IOPS
> with four 2008/2308 6Gbps HBAs and 16 SATA SSDs.
I remember this part, and I would be glad to contribute to this work. As
part of it, we have started implementing multiple MSI-x vectors in <mrsas>,
with one reply queue per MSI-x vector.
Do we really need multiple submission queues in the low-level driver?
I thought there would be a CAM interface for multiqueue which _all_
low-level drivers hook into.
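To sanity-check that interrupt load actually spreads across the reply queues
once this lands, something like the following should do (a sketch; the exact
mrsas interrupt names shown by vmstat are an assumption on my part):

# show per-vector interrupt counts; with multiple MSI-x reply queues
# the mrsas lines should all accumulate counts, not just one
vmstat -i | grep mrsas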
I have just started gathering performance numbers on FreeBSD with more SSDs
on LSI's 12Gb/s card, so I am still getting familiar with the tools and
tunings. Earlier I used just one SSD and some HDDs, so I found no issues
w.r.t. IOPS etc. As part of this activity, I am trying to see whether the
mrsas driver can meet the expected performance without any other component
being the bottleneck.
I mean, can we reach the IOPS that the CAM layer can handle?
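Before concluding that CAM is the ceiling, I will also verify the queue
depth CAM actually grants each device, e.g. (a sketch):

# print the tagged-openings (outstanding command) limits CAM is
# using for da0; compare against fio's iodepth setting
camcontrol tags da0 -v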
Kashyap
>
> Please tell me what you get, so we can continue the investigation.
>
> --
> Alexander Motin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: profile.graph
Type: application/octet-stream
Size: 186755 bytes
Desc: not available
URL: <http://lists.freebsd.org/pipermail/freebsd-scsi/attachments/20140710/26358c9c/attachment-0001.obj>