Re: Installing OpenAI's GPT-2 Ada AI Language Model
Date: Fri, 21 Apr 2023 12:30:30 UTC
If you don't want to use the GPU, the commands should be more or less the following:

sudo touch /usr/local/etc/rc.d/ubuntu && chmod +x /usr/local/etc/rc.d/ubuntu

# Make it have this content:

#!/bin/sh
#
# PROVIDE: ubuntu
# REQUIRE: archdep mountlate
# KEYWORD: nojail
#
# This is a modified version of /etc/rc.d/linux
# Based on the script by mrclksr:
# https://github.com/mrclksr/linux-browser-installer/blob/main/rc.d/ubuntu.in
#

. /etc/rc.subr

name="ubuntu"
desc="Enable Ubuntu chroot, and Linux ABI"
rcvar="ubuntu_enable"
start_cmd="${name}_start"
stop_cmd=":"

unmounted()
{
    [ `stat -f "%d" "$1"` == `stat -f "%d" "$1/.."` -a \
      `stat -f "%i" "$1"` != `stat -f "%i" "$1/.."` ]
}

ubuntu_start()
{
    local _emul_path _tmpdir

    load_kld -e 'linux(aout|elf)' linux
    case `sysctl -n hw.machine_arch` in
    amd64)
        load_kld -e 'linux64elf' linux64
        ;;
    esac

    if [ -x /compat/ubuntu/sbin/ldconfigDisabled ]; then
        _tmpdir=`mktemp -d -t linux-ldconfig`
        /compat/ubuntu/sbin/ldconfig -C ${_tmpdir}/ld.so.cache
        if ! cmp -s ${_tmpdir}/ld.so.cache /compat/ubuntu/etc/ld.so.cache; then
            cat ${_tmpdir}/ld.so.cache > /compat/ubuntu/etc/ld.so.cache
        fi
        rm -rf ${_tmpdir}
    fi

    # Linux uses the pre-pts(4) tty naming scheme.
    load_kld pty

    # Handle unbranded ELF executables by defaulting to ELFOSABI_LINUX.
    if [ `sysctl -ni kern.elf64.fallback_brand` -eq "-1" ]; then
        sysctl kern.elf64.fallback_brand=3 > /dev/null
    fi
    if [ `sysctl -ni kern.elf32.fallback_brand` -eq "-1" ]; then
        sysctl kern.elf32.fallback_brand=3 > /dev/null
    fi

    sysctl compat.linux.emul_path=/compat/ubuntu

    _emul_path="/compat/ubuntu"
    unmounted "${_emul_path}/dev"     && (mount -o nocover -t devfs devfs "${_emul_path}/dev" || exit 1)
    unmounted "${_emul_path}/dev/fd"  && (mount -o nocover,linrdlnk -t fdescfs fdescfs "${_emul_path}/dev/fd" || exit 1)
    unmounted "${_emul_path}/dev/shm" && (mount -o nocover,mode=1777 -t tmpfs tmpfs "${_emul_path}/dev/shm" || exit 1)
    unmounted "${_emul_path}/home"    && (mount -t nullfs /home "${_emul_path}/home" || exit 1)
    unmounted "${_emul_path}/proc"    && (mount -o nocover -t linprocfs linprocfs "${_emul_path}/proc" || exit 1)
    unmounted "${_emul_path}/sys"     && (mount -o nocover -t linsysfs linsysfs "${_emul_path}/sys" || exit 1)
    unmounted "${_emul_path}/tmp"     && (mount -t nullfs /tmp "${_emul_path}/tmp" || exit 1)
    unmounted /dev/fd                 && (mount -o nocover -t fdescfs fdescfs /dev/fd || exit 1)
    unmounted /proc                   && (mount -o nocover -t procfs procfs /proc || exit 1)
    true
}

load_rc_config $name
run_rc_command "$1"

sysrc ubuntu_enable=YES

# Create necessary mount points for a working Linuxulator:
mkdir -p {/compat/ubuntu/dev/fd,/compat/ubuntu/dev/shm,/compat/ubuntu/home,/compat/ubuntu/tmp,/compat/ubuntu/proc,/compat/ubuntu/sys}

# Start Ubuntu service:
service ubuntu start

# Install needed packages:
pkg install debootstrap pulseaudio

# Install Ubuntu 20.04 into /compat/ubuntu:
debootstrap --arch=amd64 --no-check-gpg focal /compat/ubuntu

# Restart Ubuntu service to make sure everything is properly mounted:
service ubuntu restart

# Fix broken symlink:
cd /compat/ubuntu/lib64/ && rm ./ld-linux-x86-64.so.2 ; ln -s ../lib/x86_64-linux-gnu/ld-2.31.so ld-linux-x86-64.so.2

# Chroot into your Linux environment:
chroot /compat/ubuntu /bin/bash

# Set correct timezone inside your chroot:
printf "%b\n" "0.0 0 0.0\n0\nUTC" > /etc/adjtime
sudo dpkg-reconfigure tzdata # For some reason sudo is necessary here, otherwise it fails.
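
A quick, optional sanity check at this point (just a sketch, nothing in it is required): from inside the chroot, the userland should already identify itself as Ubuntu running on top of the Linuxulator.

uname -a             # should report a Linux kernel version, provided by the Linuxulator
cat /etc/os-release  # should identify Ubuntu 20.04 LTS "focal", matching the debootstrap suite used above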
# Fix APT package manager:
printf "APT::Cache-Start 251658240;" > /etc/apt/apt.conf.d/00aptitude

# Enable more repositories:
printf "deb http://archive.ubuntu.com/ubuntu/ focal main restricted universe multiverse" > /etc/apt/sources.list

# Install required programs:
apt update ; apt install -y apt-transport-https curl fonts-symbola gnupg pulseaudio build-essential gcc gfortran

# Exit out of chroot:
exit

# Fix x86_64-linux-gnu libraries path between Ubuntu and FreeBSD:
cp -r /compat/ubuntu/usr/lib/x86_64-linux-gnu /lib

--> Installing PyTorch and your chatgpt github fork on FreeBSD

# fetch https://gist.githubusercontent.com/shkhln/40ef290463e78fb2b0000c60f4ad797e/raw/f640983249607e38af405c95c457ce4afc85c608/uvm_ioctl_override.c
# /compat/ubuntu/bin/gcc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c
# pkg install linux-miniconda-installer
# miniconda-installer
# bash
# source /home/marietto/miniconda3/etc/profile.d/conda.sh
# conda activate
(base) # conda activate pytorch
(pytorch) # conda activate
(base) # conda activate
(base) # git clone your chatgpt github fork

(A rough sanity check for the finished environment is sketched in the P.S. at the bottom of this mail.)

On Fri, Apr 21, 2023 at 1:34 PM Aryeh Friedman <aryeh.friedman@gmail.com> wrote:

> On Fri, Apr 21, 2023 at 6:40 AM Mario Marietto <marietto2008@gmail.com> wrote:
> >
> > You don't need bhyve for that, but only the Linuxulator. I don't think the
> > type of CPU will make any difference. I've used an Intel i9 CPU. I tried
> > FreeBSD 13.1 and I haven't found problems. For sure, using a Python env is
> > tricky, but if you have the tinkering attitude, you will have some fun.
>
> I already have a VM allocated on it (the same host has 3 other VMs on
> it). I guess one thing I missed was the Linuxulator (the
> linsuckslator is more like it), and as far as tinkering goes, it is fine
> to a point, but not when it has taken *DAYS* away from paying projects
> (I am a freelancer).
>
> BTW I will be trying your specific command lines tomorrow (I must get
> back to paid work).
>
> P.S. The point of this project is to start a new project called
> babySpock, a personal assistant for me and my programming
> partner/wife in all but legal detail. babySpock will hopefully be
> able to help with pair programming/design brainstorming and general
> clerical office tasks, and be a halfway decent conversation partner
> (roughly on or near the level of ChatGPT, which inspired this project
> in the first place), in order to increase the amount of "context" that
> it can store and to slice and dice context as needed, feeding the more
> expensive models just the relevant context, as well as using multiple
> models... basically a DIY hobbyist AM (artificial mind, with the final
> goal being artificial mature minds [AMM] instead of AGI due to the
> rule-making paradox) lab that has to "earn its own way in life" (i.e.
> if it is not useful it will likely die from misuse).
>
> > On Fri, Apr 21, 2023 at 12:26 PM Aryeh Friedman <aryeh.friedman@gmail.com> wrote:
> >>
> >> The more I am fighting with it in Linux (the only thing there are docs
> >> for) the more obvious it just doesn't work on
> >>
> >> On Fri, Apr 21, 2023 at 6:19 AM Mario Marietto <marietto2008@gmail.com> wrote:
> >> >
> >> > Can't you install pytorch using the linux miniconda installer like
> >> > below?
> >> >
> >> > # fetch https://gist.githubusercontent.com/shkhln/40ef290463e78fb2b0000c60f4ad797e/raw/f640983249607e38af405c95c457ce4afc85c608/uvm_ioctl_override.c
> >> >
> >> > # /compat/ubuntu/bin/gcc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c
> >> >
> >> > # pkg install linux-miniconda-installer
> >> > # miniconda-installer
> >> > # bash
> >> > # source /home/marietto/miniconda3/etc/profile.d/conda.sh
> >> > # conda activate
> >> >
> >> > (base) # conda activate pytorch
> >> >
> >> Will this work with bhyve on an AMD Ryzen 5 host? After playing with it
> >> in several Linux instances I always get stuck when it can't find a
> >> compatible version.
> >>
> >> --
> >> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
> >
> > --
> > Mario.
>
> --
> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org

--
Mario.
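
P.S. Once the conda environment exists, something like the following can be used to check that PyTorch and a plain GPT-2 model actually load, before wiring in the chatgpt fork. This is only a rough sketch, not part of the recipe above: it assumes the env is named "pytorch" and already has PyTorch installed (as the "conda activate pytorch" step implies), that miniconda lives under /home/marietto/miniconda3 as above, and it pulls in the Hugging Face transformers package as an extra download, which your fork may not need if it ships its own GPT-2 loader.

source /home/marietto/miniconda3/etc/profile.d/conda.sh
conda activate pytorch
python -c "import torch; print('torch', torch.__version__, 'CUDA:', torch.cuda.is_available())"  # CUDA should report False in this CPU-only setup
pip install transformers          # assumption: not installed by any step above
python - <<'EOF'
# Load the small GPT-2 checkpoint and generate a few tokens (downloads the weights on first run).
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
ids = tok.encode("FreeBSD is", return_tensors="pt")
out = model.generate(ids, max_length=20, do_sample=False)   # greedy decoding, CPU is fine
print(tok.decode(out[0]))
EOF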