What is the "better / best " method to multi-boot different OSes natively WITHOUT VirtualBox(es) ?
Valeri Galtsev
galtsev at kicp.uchicago.edu
Mon Oct 26 16:19:01 UTC 2020
On 10/26/20 10:30 AM, RW via freebsd-questions wrote:
> On Sun, 25 Oct 2020 17:33:21 +0100
> Polytropon wrote:
>
>> On Sun, 25 Oct 2020 06:50:25 +0100, Ralf Mardorf wrote:
>>> I also want to add for consideration: if reboots between operating
>>> systems are often wanted and HDDs are used, it's way better when all
>>> drives, even the unused ones, are spinning all the time. Parking
>>> and releasing the heads very often shortens the life span the
>>> most.
>
> I think this is mostly a myth.
>
> Manufacturers specify a figure for this of, IIRC, around 150k cycles.
> Drives that are switched off a few times a day never reach anything
> like that.
>
> A few years ago Western Digital made some green drives, with extremely
> aggressive power saving, that parked within seconds. With some usage
> patterns these could fail in months. I think it was around this time
> that people started talking about heads as if they were like sledge
> hammers.
>
Hm, not sledge hammers, but still. Heads are a relatively massive
attachment GLUED to the end of the head arm. The latter is spring-loaded
to return to the "parking" track: when the drive is powered off, the
current in the DC magnet that moves the arm to the necessary track drops
to zero, the arm is released, and the spring moves it to track 0 until
it bangs against the arm stopper. That mechanical action is the reason
for the finite number of power-offs a drive can take (which is even in
the drive specs). I have seen drives that "lost their heads": the heads
were no longer secured to the arm and were flying freely inside the
drive enclosure.
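
For scale, here is a back-of-the-envelope calculation using the ~150k
load/unload cycle figure quoted above; the parking rates in it are
assumptions I made up for illustration, not measurements:

/*
 * How long does a ~150k load/unload cycle budget last at a given
 * head-parking rate?  Rates below are illustrative assumptions.
 */
#include <stdio.h>

int main(void)
{
    const double rated_cycles = 150000.0;  /* spec'd load/unload cycles */
    const double parks_per_day[] = {
        3.0,     /* switched off a few times a day          */
        50.0,    /* moderately eager power management       */
        2880.0   /* parking every 30 seconds, around the    */
                 /* clock ("green" drive worst case)        */
    };

    for (int i = 0; i < 3; i++)
        printf("%7.0f parks/day -> rated budget lasts ~%6.1f years\n",
               parks_per_day[i],
               rated_cycles / parks_per_day[i] / 365.0);
    return 0;
}

At a few parks a day the budget is effectively inexhaustible (over a
century); at the aggressive "green drive" rate it is gone in under two
months, which matches the "could fail in months" observation above.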
>
>> I don't know if this is still true, but in ye olden times,
>> there was a distinction between "home PC disks" and "server
>> disks"...
>>
>> dislikes ... running all the time
>
The [spinning] hard drive being the most failure-prone part of the
computer, I always was picky about which drive goes into _my_ machine:
only the best ("enterprise" level) drives, only from the best
manufacturers, and the most reliable of their lines. This (the price
difference, not much power saving, etc.) pays off in reliability.
> And they aren't designed to take the same levels of reads and writes.
>
This actually is not true, as far as I understand. A magnetic medium
with a solid carrier of the magnetic layer (which hard drives are) has
no physical mechanism that restricts the number of re-magnetizations of
the magnetic layer. This is different from tape and floppy drives. The
latter have two mechanisms of medium deterioration: flexing of the
medium (with the finite life of the "flexible glue" carrying the
magnetic layer particles), and mechanical wear, since, contrary to hard
drives, tape and floppy heads are in (abrasive) direct mechanical
contact with the magnetic surface (and are not protected from external
dust). Hard drive heads "fly" above the surface, never (ideally)
touching it.
>
>> Probably modern disks tend to be more like server disks,
>> even when being sold for and used in home PCs... :-)
>
>
> These days home drives at 2TB or bigger are usually shingled - often
> without any mention, even on the data sheets. Typically there's a
> more expensive version intended for RAID use that isn't shingled.
>
> Another difference is that home drives try very much harder to recover
> data, whereas a drive intended for RAID is programmed to fail quickly
> and leave it to the redundancy.
>
I never heard of that; would you mind elaborating on it?
My knowledge (based on a really old drive firmware design) is: when the
checksum of a read block doesn't match, the drive re-reads the block
multiple times and attempts to superimpose the read results until the
checksum matches or the maximum number of attempts (a really large
number) is reached. At that point the drive sends out the block read
result (or a read failure), and if the number of attempts was larger
than a small threshold (really few, like 3 or 5), it declares the block
bad, writes the block content (the same as was sent as the read result)
into the bad block reallocation area on the drive platters, and adds
the block number (address) to the bad block reallocation table.
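
In pseudo-C, the logic I remember would look roughly like this. Every
name, constant and helper here (read_raw, checksum_ok, superimpose,
remap_block) is hypothetical, a sketch of the description above, not
any vendor's actual firmware:

#include <stdbool.h>
#include <string.h>

#define BLOCK_SIZE   512
#define MAX_RETRIES  1000   /* the "really large number" of re-reads  */
#define REMAP_AFTER  5      /* small threshold: declare the block bad */

/* hypothetical low-level helpers the real firmware would provide */
bool read_raw(unsigned lba, unsigned char *buf);
bool checksum_ok(const unsigned char *buf);
/* fold the n-th raw read into the accumulated result; assumed to
 * initialize acc on the first call (n == 1) */
void superimpose(unsigned char *acc, const unsigned char *buf, int n);
/* write content to the reallocation area, add the block address to
 * the bad block reallocation table */
void remap_block(unsigned lba, const unsigned char *content);

bool read_block(unsigned lba, unsigned char *out)
{
    unsigned char acc[BLOCK_SIZE];
    int attempt;

    for (attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        unsigned char buf[BLOCK_SIZE];

        if (!read_raw(lba, buf))
            continue;
        superimpose(acc, buf, attempt);
        if (checksum_ok(acc))
            break;
    }
    if (attempt > MAX_RETRIES)
        return false;              /* report a hard read failure   */

    memcpy(out, acc, BLOCK_SIZE);  /* send out the recovered data  */
    if (attempt > REMAP_AFTER)     /* took too many tries: remap   */
        remap_block(lba, acc);
    return true;
}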
What I described is what I remember from drive firmware way back, over
20 years ago. If dealing with bad blocks has changed fundamentally, I'd
like to hear what it is now, or a pointer for reading would be great.
One thing that is not relevant to bad blocks did change since then:
back then the signal coming from the drive head was digital; these days
it is a rather weird-looking analog signal. That signal is basically
compared to a digital signal passed through a low-pass filter, and if
they coincide, then that digital signal resembles the digital signal
the drive holds and is taken as the read result. Apart from that (the
digitally encoded signal being fouled by lower-frequency analog
equipment writing it to the platters and, mostly, then reading it back
from them), the rest of the drive firmware has not much reason to
change (yes, I know there exist "green drives", I just dismiss that
"green" part[y] ;-) as I stay away from them).
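
To make that comparison concrete, here is a toy sketch: an ideal bit
waveform for a candidate pattern is passed through a simple one-pole
low-pass filter and scored against the analog samples, and the closest
candidate wins. Everything in it is my own illustration, nothing like
real read-channel electronics:

#include <stddef.h>

/* low-pass an ideal +/-1 bit waveform; 'a' in (0,1] sets the cutoff */
static void lowpass(const int *bits, double *out, size_t n, double a)
{
    double y = 0.0;

    for (size_t i = 0; i < n; i++) {
        y = a * (bits[i] ? 1.0 : -1.0) + (1.0 - a) * y;
        out[i] = y;
    }
}

/* mean squared mismatch between the filtered candidate pattern and
 * the analog samples read from the platter; smaller is better */
double mismatch(const int *candidate, const double *analog,
                size_t n, double a)
{
    double filt[4096];   /* assume n <= 4096 in this toy */
    double err = 0.0;

    lowpass(candidate, filt, n, a);
    for (size_t i = 0; i < n; i++) {
        double d = filt[i] - analog[i];
        err += d * d;
    }
    return err / (double)n;
}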
Valeri
--
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++