Re: RFC reviews for ggate and hastd
- In reply to: Johannes Totz via freebsd-geom : "Re: RFC reviews for ggate and hastd"
Date: Mon, 20 Sep 2021 22:36:31 UTC
Johannes Totz wrote this message on Sun, Sep 19, 2021 at 17:27 +0100:
> On 14/09/2021 22:21, John-Mark Gurney wrote:
> > Johannes Totz wrote this message on Mon, Sep 13, 2021 at 02:00 +0100:
> >> On 09/09/2021 23:33, John-Mark Gurney wrote:
> >>> Johannes Totz via freebsd-geom wrote this message on Thu, Sep 02, 2021 at 21:55 +0100:
> >>>> (looks like gmane swallowed my 1st message, trying again)
> >>>>
> >>>> Hey folks,
> >>>>
> >>>> any ggate or hastd users here? I've got some code reviews for you.
> >>>> Please take a look if you get a chance:
> >>>>
> >>>> https://reviews.freebsd.org/D31727
> >>>> Fix potential out-of-bounds read in the geom-gate kernel module.
> >>>>
> >>>> https://reviews.freebsd.org/D31722
> >>>> Dynamically alloc buffers in ggatec, instead of assuming a fixed size
> >>>> on the stack.
> >>>>
> >>>> https://reviews.freebsd.org/D31709
> >>>> Simple rc script to start ggated.
> >>>
> >>> I'll try to look at them.
> >>>
> >>> I've broken out the ggate code to: https://www.funkthat.com/gitea/jmg/ggate
> >>
> >> Nice, thanks!
> >>
> >> I noticed the http branch. One weekend toy project idea I wanted to get
> >> around to was to write a ggated impl that talks to Backblaze.
> >
> > I looked at the Backblaze B2 API, and I don't see a way to do partial
> > updates of a file. All the APIs I see require you to upload the entire
> > file, so I don't think it'll work.
> >
> > I abandoned http as a solution because of issues w/ WebDAV and partial
> > updates, and the IETF not being very sane about it:
> > https://blog.sphere.chronosempire.org.uk/2012/11/21/webdav-and-the-http-patch-nightmare
>
> Oh, I was just gonna store each block as a separate file on the backend.
> No need to keep it all together as one huge image. Sure, we'd end up
> with literally billions of small files. But I'd ignore that until it
> becomes an actual problem.

Ahh, guess that'd work.
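The block-per-file idea quoted above could be sketched roughly like this. Everything here is hypothetical (the naming scheme, the class and method names); a plain dict stands in for the remote object store (e.g. B2), since the thread notes the real API only supports whole-file uploads:

```python
# Sketch: one backend object per block. A dict stands in for the remote
# object store; nothing here is the actual ggated/B2 implementation.
BLOCK_SIZE = 4096

def block_name(device, offset):
    # Hypothetical naming scheme: device name plus zero-padded block index.
    assert offset % BLOCK_SIZE == 0
    return f"{device}/{offset // BLOCK_SIZE:016x}"

class BlockStore:
    def __init__(self):
        self.objects = {}  # stand-in for the remote backend

    def write_block(self, device, offset, data):
        # Each 4k block becomes its own "file", so a block write is a
        # whole-object upload -- no partial update needed.
        assert len(data) == BLOCK_SIZE
        self.objects[block_name(device, offset)] = bytes(data)

    def read_block(self, device, offset):
        # Never-written blocks read back as zeros, like a sparse image.
        return self.objects.get(block_name(device, offset),
                                b"\0" * BLOCK_SIZE)
```

Since every write replaces a whole (small) object, this sidesteps the partial-update problem entirely, at the cost of the "billions of small files" the thread anticipates.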
It'd be interesting to see what the performance would be storing them as
512-byte files/blocks, or 4k files (and presenting a 4k block size), or
64k-byte files but still doing a 4k block size, and doing RMW-style
caching for the larger blocks. gcache may be able to save the work of
writing a proper caching layer for it...

-- 
John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
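The 64k-file / 4k-block RMW variant mentioned above could be sketched as below. All names are hypothetical and a dict again stands in for the backend; in a real implementation the read-modify-write would fetch and re-upload the whole 64k object (which is exactly what a caching layer like gcache would try to amortize):

```python
# Sketch: 64k backend files presenting a 4k block size, with writes done
# read-modify-write. A dict stands in for the remote store.
FILE_SIZE = 65536
BLOCK_SIZE = 4096

class RMWStore:
    def __init__(self):
        self.files = {}  # stand-in for the remote 64k objects

    def write_block(self, offset, data):
        assert len(data) == BLOCK_SIZE and offset % BLOCK_SIZE == 0
        file_off = offset - (offset % FILE_SIZE)
        # Read: fetch the enclosing 64k file (zeros if it doesn't exist).
        buf = bytearray(self.files.get(file_off, b"\0" * FILE_SIZE))
        # Modify: patch the 4k block in place.
        inner = offset - file_off
        buf[inner:inner + BLOCK_SIZE] = data
        # Write: upload the whole 64k file back.
        self.files[file_off] = bytes(buf)

    def read_block(self, offset):
        assert offset % BLOCK_SIZE == 0
        file_off = offset - (offset % FILE_SIZE)
        inner = offset - file_off
        whole = self.files.get(file_off, b"\0" * FILE_SIZE)
        return whole[inner:inner + BLOCK_SIZE]
```

Each 4k write costs a 64k download plus a 64k upload here, which is why caching the hot 64k files (so consecutive block writes hit the cached copy and are flushed once) matters for performance.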