My project wish-list for the next 12 months
Andre Oppermann
andre at freebsd.org
Thu Dec 2 09:41:58 PST 2004
Sam wrote:
> On Thu, 2 Dec 2004, Andre Oppermann wrote:
>
>> Scott Long wrote:
>>
>>> 5. Clustered FS support. SANs are all the rage these days, and
>>> clustered filesystems that allow data to be distributed across many
>>> storage endpoints and accessed concurrently through the SAN are very
>>> powerful. Red Hat recently bought Sistina and re-opened the GFS source
>>> code, so exploring this would be very interesting.
>>
>> There are certain steps that can be taken one at a time. For example,
>> it should be relatively easy to mount snapshots (ro) from more than
>> one machine. The next step would be to mount a full 'rw' filesystem as
>> 'ro' on other boxes. This would require broadcasting cache and sector
>> invalidations from the 'rw' box to the 'ro' mounts. The holy grail, of
>> course, is to mount the same filesystem 'rw' on more than one box,
>> preferably on more than two. This requires some more involved
>> synchronization and locking on top of the cache invalidation, and it
>> has to make sure that the multi-'rw' cluster stays alive if one of the
>> participants freezes and doesn't respond anymore.
>>
>> Scrolling through the UFS/FFS code, I think the first one is 2-3 days
>> of work, the second 2-4 weeks, and the third 2-3 months to get it
>> right. If someone would put up the money...
>
> You might also want to take data redundancy into consideration in the
> design. Right now GFS largely relies on the SAN box to export already
> redundant RAID disks. GFS sits on a "cluster aware" LVM layer that is
> supposed to be able to do mirroring and striping, but I'm told it's not
> stable enough for "production" use.
Data redundancy would require a UFS/FFS redesign. I'm 'only' talking
about enhancing UFS/FFS while keeping everything on-disk the same (plus
some more elements).
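
To give a rough idea of what the second step above could involve, here
is a minimal sketch of the invalidation message the 'rw' box might
broadcast to the 'ro' mounts whenever it dirties data or metadata
blocks. Everything in it is made up for illustration: the struct
ffs_inval_msg, the send_inval() helper, and the userland UDP transport
are assumptions, not anything that exists in UFS/FFS today. A real
implementation would live in the kernel and need reliable, ordered
delivery plus acknowledgements before the 'rw' node may reuse the
affected blocks.

/*
 * Hypothetical invalidation message broadcast by the single 'rw' node
 * to all 'ro' mounts of the same filesystem.  Purely illustrative.
 */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

struct ffs_inval_msg {
	uint32_t im_version;	/* protocol version */
	uint32_t im_type;	/* what to drop, see below */
	uint64_t im_fsid;	/* which filesystem is affected */
	uint64_t im_start;	/* first affected sector */
	uint64_t im_count;	/* number of sectors */
	uint64_t im_seq;	/* sequence number for loss detection */
};

#define	IM_INVAL_DATA	1	/* drop cached data blocks */
#define	IM_INVAL_META	2	/* drop cached metadata (inodes, cg maps) */

/*
 * Send one invalidation to every 'ro' peer.  Fields are left in host
 * byte order here; a real wire protocol would pick one and convert.
 * This sketch just fires unreliable UDP datagrams.
 */
int
send_inval(int sock, const struct sockaddr_in *peers, int npeers,
    uint64_t fsid, uint64_t start, uint64_t count, uint64_t seq)
{
	struct ffs_inval_msg msg;
	int i;

	memset(&msg, 0, sizeof(msg));
	msg.im_version = 1;
	msg.im_type = IM_INVAL_DATA;
	msg.im_fsid = fsid;
	msg.im_start = start;
	msg.im_count = count;
	msg.im_seq = seq;

	for (i = 0; i < npeers; i++) {
		if (sendto(sock, &msg, sizeof(msg), 0,
		    (const struct sockaddr *)&peers[i],
		    sizeof(peers[i])) == -1)
			return (-1);
	}
	return (0);
}

On the receiving side the handler would walk the buffer cache and drop
any buffers overlapping the advertised sector range; the sequence
number would let an 'ro' mount detect lost messages and fall back to a
full cache flush.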
--
Andre