Re: Proposal: Disable compression of newsyslog by default

From: Sulev-Madis Silber <madis555_at_hot.ee>
Date: Sun, 24 Dec 2023 04:22:52 UTC
On 23 December 2023 09:18:23 EET, Xin Li <delphij@delphij.net> wrote:
>Hi,
>
>Inspired by D42961, I propose that we move forward with disabling the compression by default in newsyslog, as implemented in https://reviews.freebsd.org/D43169
>
>Historically, newsyslog has compressed rotated log files to save disk space. This approach was valuable in the early days where storage space was limited.

it's still limited

>However, the landscape has changed significantly.  Modern file systems, such as ZFS, now offer native compression capabilities.

not everyone uses them

>Additionally, the widespread availability of larger hard drives has diminished the necessity for additional compression.

but data sizes also have increased massively

>Notably, the need to decompress log files for pattern searches poses a significant inconvenience, further questioning the utility of this legacy feature.

should be up to each admin to weigh the decompression hassle against speed/size/etc
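fwiw, the search inconvenience is smaller than it sounds: the compressors ship grep wrappers. a quick illustration with gzip's zgrep (the temp log here is made up, obviously):

```shell
# make a throwaway "rotated log" and search it while still compressed
tmp=$(mktemp -d)
printf 'ok line\nERROR: disk full\n' > "$tmp/messages.0"
gzip "$tmp/messages.0"                      # what a Z-flagged rotation used to do
n=$(zgrep -c 'ERROR' "$tmp/messages.0.gz")  # no manual gunzip step needed
echo "$n"                                   # prints 1
rm -rf "$tmp"
```

xzgrep and zstdgrep do the same for xz and zstd archives.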

>In commit 906748d208d3, flags J, X, Y, Z can now indicate that a log file is eligible for compression rather than directly enforcing it. It allows for a more flexible approach, wherein the actual compression method can be set to "none" or specified as one among bzip2, gzip, xz, or zstd.

that's a good approach
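i haven't double-checked the exact conf syntax from the review, so treat this as a sketch: the per-file entry keeps its usual newsyslog.conf shape, and the flag now only marks eligibility:

```
# classic /var/log/messages entry; after 906748d208d3 the J flag no longer
# forces a specific compressor, it just marks the file as compressible;
# the actual method (none, bzip2, gzip, xz, zstd) is chosen by the global
# setting from the quoted review (exact knob name per D43169, not verified)
# logfilename          mode count size when   flags
/var/log/messages      644  5     100  @0101T JC
```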

>Therefore I would propose that we change the default compression setting to "none" in FreeBSD 15.0.  This change reflects our adaptation to the evolving technological environment and user needs.  It also aligns with the broader initiative to modernize our systems while maintaining flexibility and efficiency.

unsure about this. a generic zroot install would be fine with it, i guess, given usual log sizes? other custom installs need tuning anyway

>I look forward to your thoughts and feedback on this proposal.
>
>Cheers,

indeed. we have large disks now, but we fill them all. i started with a 1.2g one. it was too small, so i compressed for space. now i have 12t. it's still too small, so i still compress for space. they make them up to 22t nowadays. that's about 20000 times larger, yet it still feels small. how did this happen? we did it to ourselves: data sizes have kept up with storage and bandwidth.

a gamer might get a 1gbit/s connection at home, so (s)he only needs to wait one hour to download a new game, just as a dialup user once waited hours. or it could be a photographer, graphics designer or architect at work. both cases still use compressed data, since cpu and ram permit it and it saves a lot of time and space.

and this is also relevant to servers, as those things don't appear from or disappear into thin air. they come from machines, some of them hopefully running fbsd, where admins wonder how to deal with large log sizes. they need the logs for audit purposes, or for statistics. the hardware allows it, so they compress. "write-only-read-never" data benefits a lot from, e.g., xz, as others have already said
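to put a rough number on the xz point, a sketch (fake repetitive log data, so the sizes are only illustrative and will vary with compressor versions):

```shell
tmp=$(mktemp -d)
# generate ~7 MB of repetitive, log-like text
yes 'Dec 24 04:22:52 host sshd[1234]: Failed password for root from 10.0.0.1' \
  | head -n 100000 > "$tmp/fake.log"
gzip -9 -k "$tmp/fake.log"   # -k keeps the original around
xz   -9 -k "$tmp/fake.log"
plain=$(wc -c < "$tmp/fake.log"   | tr -d ' ')
gz=$(wc -c < "$tmp/fake.log.gz"   | tr -d ' ')
xzsz=$(wc -c < "$tmp/fake.log.xz" | tr -d ' ')
echo "plain=$plain gz=$gz xz=$xzsz"   # both compressed sizes are a tiny fraction of plain
rm -rf "$tmp"
```

real logs are less repetitive than this, of course, but the direction holds: rarely-read archival logs are exactly where heavy compression pays off.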

so yeah, from (only!) 25+ years of experience, i can confirm that humankind has developed AND used everything to the max. the internet: first for military and educational use, now for connecting washing machines. oh, and the first hdd, a state-of-the-art device back then, could now store only *part* of a single *compressed* photo

now, this might not be relevant to default fbsd installs in common usage, where the default base syslog creates a tiny amount of data per week

but one of the reasons given was that everything fits uncompressed nowadays, into our disks and pipes. which it really doesn't