TRIM support for UFS?
Julian Elischer
julian at freebsd.org
Wed Dec 8 17:30:39 UTC 2010
Kirk does have some TRIM patches
which he sent to me once..
let me look... hmmm
ah here it is..
this may or may not be out of date.
I'll let Kirk chime in if he thinks it's worth it..
I include the email from him as an attachment,
hopefully it won't get stripped by the list, but you should both get it..
julian
On 12/8/10 8:58 AM, Oliver Fromme wrote:
> Pawel Jakub Dawidek wrote:
> > On Tue, Dec 07, 2010 at 04:31:14PM +0100, Oliver Fromme wrote:
> > > I've bought an OCZ Vertex2 E (120 GB SSD) and installed
> > > FreeBSD i386 stable/8 on it, using UFS (UFS2, to be exact).
> > > I've made sure that the partitions are aligned properly,
> > > and used newfs with 4k fragsize and 32k blocksize.
> > > It works very well so far.
>
> (I should also mention that I mounted all filesystems from
> the SSD with the "noatime" option, to reduce writes during
> normal operation.)
>
> > > So, my question is, are there plans to add TRIM support
> > > to UFS? Is anyone working on it? Or is it already there
> > > and I just overlooked it?
> >
> > I hacked up this patch mostly for Kris and md(4) memory-backed UFS, so
> > that on file removal space can be returned to the system.
>
> I see.
>
> > I think you should ask Kirk what to do about that, but I'm afraid my
> > patch can break SU (soft updates): what if we TRIM a block, but then
> > panic, and fsck decides to actually use the block?
>
> Oh, you're right. That could be a problem.
>
> Maybe it would be better to write a separate tool that
> performs TRIM commands on areas of the file system that
> are unused for a while.
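
Such a tool would basically walk the free space and issue delete
requests for unused regions of the underlying device.  On FreeBSD the
userland entry point for that is the DIOCGDELETE ioctl from
<sys/disk.h>, which becomes BIO_DELETE in the kernel.  A minimal
sketch of that primitive (a hypothetical helper, not an existing
tool; error handling kept short):

/*
 * trim_range.c -- issue a delete (TRIM) request for one byte range
 * of a disk device via the DIOCGDELETE ioctl.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/disk.h>

#include <err.h>
#include <fcntl.h>
#include <stdlib.h>

static int
trim_range(int fd, off_t offset, off_t length)
{
	off_t arg[2] = { offset, length };

	return (ioctl(fd, DIOCGDELETE, arg));
}

int
main(int argc, char **argv)
{
	int fd;

	if (argc != 4)
		errx(1, "usage: trim_range device offset length");
	if ((fd = open(argv[1], O_RDWR)) < 0)
		err(1, "open %s", argv[1]);
	if (trim_range(fd, strtoll(argv[2], NULL, 0),
	    strtoll(argv[3], NULL, 0)) < 0)
		err(1, "DIOCGDELETE");
	return (0);
}

The hard part for such a tool would be mapping the file system's free
blocks to device byte offsets, and keeping that mapping consistent
while the file system is mounted.
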
>
> I also remember that mav@ wrote that the TRIM command is
> very slow. So, it's probably not feasible to execute it
> each time some blocks are freed, because it would make the
> file system much slower and nullify all advantages of the
> SSD.
>
> Just found his comment from r201139:
> "I have no idea whether it is normal, but for some reason it takes 200ms
> to handle any TRIM command on this drive, that was making delete extremely
> slow. But TRIM command is able to accept long list of LBAs and the length of
> that list seems doesn't affect it's execution time. Implemented request
> clusting algorithm allowed me to rise delete rate up to reasonable numbers,
> when many parallel DELETE requests running."
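
In other words, the per-command overhead dominates, so the win comes
from batching: collect the freed ranges and coalesce contiguous ones
into a single delete request instead of issuing one TRIM per freed
block.  A rough illustration of that coalescing step (just the idea
described in the commit message, not the actual driver code):

#include <sys/types.h>
#include <stddef.h>

struct range {
	off_t	start;		/* byte offset of freed range */
	off_t	length;		/* length in bytes */
};

/*
 * Merge touching or overlapping ranges in place; the array must be
 * sorted by start offset.  Returns the new number of entries, each
 * of which can then go out as one entry of a single TRIM command.
 */
static size_t
coalesce_ranges(struct range *r, size_t n)
{
	size_t in, out;
	off_t end;

	if (n == 0)
		return (0);
	out = 0;
	for (in = 1; in < n; in++) {
		if (r[in].start <= r[out].start + r[out].length) {
			end = r[in].start + r[in].length;
			if (end > r[out].start + r[out].length)
				r[out].length = end - r[out].start;
		} else
			r[++out] = r[in];
	}
	return (out + 1);
}
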
>
> > BTW. Have you actually observed any performance degradation without
> > TRIM?
>
> Not yet. My SSD is still very new. It carries only the
> base system (/home is on a normal 1TB disk), so not many
> writes have happened so far.  But as soon as I start doing more
> write access (buildworld + installworld, updating ports
> and so on), I expect that performance will degrade over
> time.
>
> I've also heard from several people on various mailing lists
> that the performance of their SSD drives got worse after
> some time.
>
> That performance degradation is caused by so-called "static
> wear leveling".  The drive has to move the contents of
> rarely- or never-written blocks to other blocks, so that
> those blocks can be overwritten and wear is distributed
> evenly across all of them.  If a block is known to be
> unused (which is the case when the drive is new, or after
> a TRIM command), the contents don't have to be moved, so
> the write operation is much faster. I think all modern
> SSD drives use static wear leveling.
>
> Without TRIM support in the file system, a work-around is
> to "newfs -E" the file system when the performance gets
> too bad. This requires a backup-restore cycle, of course,
> so it's somewhat annoying.
>
> Another work-around is to leave some space unused, e.g.
> don't use the last 20% of the SSD for any file system.
> Since those 20% are never written to, the SSD's firmware
> knows they are unused and can use them for wear leveling.
> This will postpone the performance degradation somewhat,
> but it won't avoid it completely.  And wasting some space is not a
> very satisfying solution either.
>
> > I have similar SSDs, and from what I tested they can somehow handle
> > wear leveling internally. You can TRIM the entire disk using the simple
> > program below, newfs it and test it.
>
> It does basically the same as "newfs -E", right?
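
If it is that kind of whole-device erase, it presumably reduces to a
loop like the sketch below: issue DIOCGDELETE over the entire media
in large chunks (the chunk size and other details are guesses here,
not taken from the quoted program or from newfs -E).  It throws away
everything on the device, so it is only useful right before newfs:

/*
 * Sketch: delete (TRIM) an entire disk device in fixed-size chunks.
 * WARNING: destroys all data on the device.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/disk.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>

int
main(int argc, char **argv)
{
	const off_t chunk = 128 * 1024 * 1024;	/* 128 MB per request */
	off_t arg[2], mediasize, offset;
	int fd;

	if (argc != 2)
		errx(1, "usage: trimdisk device");
	if ((fd = open(argv[1], O_RDWR)) < 0)
		err(1, "open %s", argv[1]);
	if (ioctl(fd, DIOCGMEDIASIZE, &mediasize) < 0)
		err(1, "DIOCGMEDIASIZE");
	for (offset = 0; offset < mediasize; offset += chunk) {
		arg[0] = offset;
		arg[1] = (mediasize - offset < chunk) ?
		    mediasize - offset : chunk;
		if (ioctl(fd, DIOCGDELETE, arg) < 0)
			err(1, "DIOCGDELETE at %jd", (intmax_t)offset);
	}
	return (0);
}
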
>
> > Then fill it with random data, newfs it again, test it and compare
> > results.
>
> Filling it just once will probably not have much of an
> effect. In fact, wear leveling will probably not kick
> in if you just fill the whole disk, because all blocks
> are used equally anyway.
>
> The performance degradation will only start to occur
> after a while (weeks or months) when some blocks are
> written much more often than others. In this situation,
> (static) wear leveling will kick in and start moving
> data in order to re-use seldom-written-to blocks.
>
> Best regards
> Oliver
>
-------------- next part --------------
An embedded message was scrubbed...
From: Kirk McKusick <mckusick at mckusick.com>
Subject: Re: UFS2 and TRIM command
Date: Tue, 03 Nov 2009 20:48:05 -0800
Size: 3300
Url: http://lists.freebsd.org/pipermail/freebsd-fs/attachments/20101208/9eaa55dc/AttachedMessage.eml