Re: Periodic rant about SCHED_ULE
- Reply: Gary Jennejohn : "Re: Periodic rant about SCHED_ULE"
- In reply to: Miroslav Lachman : "Re: Periodic rant about SCHED_ULE"
Date: Wed, 07 Jul 2021 21:49:19 UTC
On Wed, Jul 07, 2021 at 10:56:55PM +0200, Miroslav Lachman wrote:
> On 07/07/2021 20:18, Gary Jennejohn wrote:
> > On Wed, 7 Jul 2021 13:47:47 -0400
> > George Mitchell <george+freebsd@m5p.com> wrote:
> > [..]
> >
> > > I've been ranting about this for years now, and I've had my say -- but
> > > no one has ever answered my question about what workload SCHED_ULE is
> > > best for, though numerous people have claimed that it's better than
> > > SCHED_4BSD for -- some rumored workload or other.  -- George
> > >
> > IIRC there was talk about making the scheduler loadable in the early
> > days.  But that was years ago and I may be misremembering.
> >
> > I have a Ryzen 5 1600 with 6 cores, so older tech and "only" 3200MHz.
> >
> > I can do a clean buildworld on FreeBSD-14 using only 10 of the 12 SMTs
> > in about 40 minutes using SCHED_4BSD, while still browsing the
> > interwebs or watching a film etc. with no noticeable lags in
> > performance.
> >
> > So, for my normal desktop usage SCHED_4BSD is the only way to go.
>
> I had some performance problems with VirtualBox as hypervisor on a
> somewhat older Intel Xeon with 4 cores / 8 threads. So I tested 4BSD and
> ULE; SCHED_4BSD had slightly better results than SCHED_ULE.
> I am also curious why ULE is the default. Where are some real-world
> performance results comparing the two FreeBSD schedulers?

I made those measurements more than a decade ago and reported my
findings on either the freebsd-hackers or the freebsd-current mailing
list.  Write a classic boss-worker MPI numerical simulation in which
the workers are compute bound.  Start the MPI simulation requesting
NCPU+1 images, with NCPU being the number of available CPUs.  Each
worker is assigned to a CPU, which leaves one worker and the boss
image sharing a CPU.  Due to CPU affinity, those two then ping-pong
on that CPU.

I haven't repeated these measurements in a long time.
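The code I used back then is long gone, so what follows is only a
rough sketch of that kind of setup; the file name, task counts,
message tags, and the crunch() busy-loop are invented for
illustration:

/*
 * sched_test.c -- minimal boss-worker MPI sketch (illustrative only).
 * Build and run with one more rank than there are CPUs, e.g. on a
 * 12-thread machine:
 *
 *   mpicc -O2 sched_test.c -o sched_test -lm
 *   mpirun -np 13 ./sched_test
 *
 * Rank 0 is the boss; it hands out work units and collects results.
 * The other ranks are compute-bound workers, so one worker ends up
 * sharing a CPU with the boss.
 */
#include <math.h>
#include <stdio.h>
#include <mpi.h>

#define NTASKS  64              /* work units handed out by the boss */
#define TASKLEN 20000000L       /* iterations per work unit */

/* Compute-bound kernel: burn CPU and return a dummy result. */
static double
crunch(long n)
{
    double s = 0.0;

    for (long i = 1; i <= n; i++)
        s += sin((double)i) * cos((double)i);
    return (s);
}

int
main(int argc, char **argv)
{
    int rank, size;
    double t0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    t0 = MPI_Wtime();

    if (rank == 0) {
        int sent = 0, done = 0, stop = -1;

        /* Prime every worker with one task (or a stop if none left). */
        for (int w = 1; w < size; w++) {
            if (sent < NTASKS) {
                MPI_Send(&sent, 1, MPI_INT, w, 1, MPI_COMM_WORLD);
                sent++;
            } else
                MPI_Send(&stop, 1, MPI_INT, w, 1, MPI_COMM_WORLD);
        }
        /* Hand out the rest as results arrive, then stop the workers. */
        while (done < sent) {
            double res;
            MPI_Status st;

            MPI_Recv(&res, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 2,
                MPI_COMM_WORLD, &st);
            done++;
            if (sent < NTASKS) {
                MPI_Send(&sent, 1, MPI_INT, st.MPI_SOURCE, 1,
                    MPI_COMM_WORLD);
                sent++;
            } else
                MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE, 1,
                    MPI_COMM_WORLD);
        }
        printf("%d ranks, elapsed %.2f s\n", size, MPI_Wtime() - t0);
    } else {
        /* Worker: receive task ids until told to stop. */
        for (;;) {
            int task;
            double res;

            MPI_Recv(&task, 1, MPI_INT, 0, 1, MPI_COMM_WORLD,
                MPI_STATUS_IGNORE);
            if (task < 0)
                break;
            res = crunch(TASKLEN);
            MPI_Send(&res, 1, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return (0);
}

Run it once under SCHED_ULE and once under SCHED_4BSD and compare the
elapsed times; the interesting case is the NCPU+1 run, where the boss
and one worker compete for the same CPU.

-- Steve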