From: Mario Marietto <marietto2008@gmail.com>
Date: Fri, 21 Apr 2023 12:18:45 +0200
Subject: Re: Installing openAI's GPT-2 Ada AI Language Model
To: Aryeh Friedman
Cc: FreeBSD Mailing List, FreeBSD Mailing List, Yuri Victorovich
List-Id: User questions
List-Archive: https://lists.freebsd.org/archives/freebsd-questions
Can't you install pytorch using the Linux miniconda installer, like below?

# fetch https://gist.githubusercontent.com/shkhln/40ef290463e78fb2b0000c60f4ad797e/raw/f640983249607e38af405c95c457ce4afc85c608/uvm_ioctl_override.c
# /compat/ubuntu/bin/gcc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c
# pkg install linux-miniconda-installer
# miniconda-installer
# bash
# source /home/marietto/miniconda3/etc/profile.d/conda.sh
# conda activate
(base) # conda activate pytorch

On Fri, Apr 21, 2023 at 2:38 AM Aryeh Friedman wrote:
> On Thu, Apr 20, 2023 at 12:24 PM Mario Marietto wrote:
> >
> > Try to copy and paste the commands that you have issued on pastebin... I
> > need to understand the scenario.
>
> After saving the patch from the bug report to PORT/files and running
> portmaster -P misc/pytorch (brand new machine except for installing
> portmaster):
>
> c/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp.o -c
> /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp
> In file included from
> /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp.AVX2.cpp:1:
> In file included from
> /usr/ports/misc/pytorch/work/.build/aten/src/ATen/UfuncCPUKernel_add.cpp:3:
> In file included from
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/native/ufunc/add.h:6:
> In file included from
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/functional.h:3:
> In file included from
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/functional_base.h:6:
> In file included from
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec.h:6:
> In file included from
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256.h:12:
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:253:16:
> error: cannot initialize a parameter of type 'const __m256 (*)(__m256)'
> with an lvalue of type '__m256 (__m256)': different return type
> ('const __m256' (vector of 8 'float' values) vs '__m256' (vector of 8 'float' values))
>     return map(Sleef_acosf8_u10);
>            ^~~~~~~~~~~~~~~~
> /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:209:49:
> note: passing argument to parameter 'vop' here
>   Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
>                                                 ^
> [The identical error repeats at lines 256, 259, 280, 283, 300, 303, 393,
> 396, 399, 402, 406, 409, 412, 415, 447, 450, and 460 of
> vec256_bfloat16.h, for Sleef_asinf8_u10, Sleef_atanf8_u10,
> Sleef_erff8_u10, Sleef_erfcf8_u15, Sleef_expf8_u10, Sleef_expm1f8_u10,
> Sleef_logf8_u10, Sleef_log2f8_u10, Sleef_log10f8_u10, Sleef_log1pf8_u10,
> Sleef_sinf8_u10, Sleef_sinhf8_u10, Sleef_cosf8_u10, Sleef_coshf8_u10,
> Sleef_tanf8_u10, Sleef_tanhf8_u10, and Sleef_lgammaf8_u10.]
> 18 errors generated.
> [ 80% 1035/1283] /usr/bin/c++ [long -D/-I/-W invocation elided] -o
> caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
> -c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/input-archive.cpp
> [ 80% 1035/1283] /usr/bin/c++ [same flags elided] -o
> caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
> -c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/output-archive.cpp
> ninja: build stopped: subcommand failed.
> ===> Compilation failed unexpectedly.
> Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to
> the maintainer.
> *** Error code 1
>
> Stop.
> make: stopped in /usr/ports/misc/pytorch
>
>
> > On Thu, Apr 20, 2023 at 17:51 Aryeh Friedman wrote:
> >>
> >> On Thu, Apr 20, 2023 at 7:52 AM Thierry Thomas wrote:
> >> >
> >> > On Thu, Apr 20, 2023 at 12:53:05 +0200, Aryeh Friedman wrote:
> >> >
> >> > > Running without GPU (for now) on a bhyve vm (3 CPU, 2 GB RAM and 100
> >> > > GB of disk) which I intend for determining if it is worth going out
> >> > > and getting the hardware to do GPU.
> >> > > The problem I had was getting
> >> > > pytorch to work since it appears I have to build it from source and it
> >> > > blows up in that build.
> >> >
> >> > Have you seen
> >> > ?
> >>
> >> This seems to be true for all OSes. I guess I will have to find an
> >> Intel machine... this is as bad as the motivation that led me to do
> >> PetiteCloud in the first place (OpenStack not running on AMD, period).
> >> Is there just no way to run an ANN in pytorch data format in any other
> >> way that is not Python (like Java?!!?) Note the tensorflow port
> >> required pytorch.
> >>
> >> --
> >> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
> >>
>
> --
> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org

--
Mario.
=C2=A0 =C2=A0 return map(Sleef_expf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:303:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_expm1f8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~ /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:393:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_logf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:396:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_log2f8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:399:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_log10f8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~ /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:402:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_log1pf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~ /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:406:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_sinf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:409:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_sinhf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:412:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_cosf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:415:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_coshf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:447:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_tanf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:450:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_tanhf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:460:16:
error: cannot initialize a parameter of type 'const __m256
(*)(__m256)' with an lvalue of type '__m256 (__m256)': differen= t
return type ('const __m256' (vector of 8 'float' values) vs= '__m256'
(vector of 8 'float' values))
=C2=A0 =C2=A0 return map(Sleef_lgammaf8_u10);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0^~~~~~~~~~~~~~~~~~ /usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/cpu/vec/vec256/v= ec256_bfloat16.h:209:49:
note: passing argument to parameter 'vop' here
=C2=A0 Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) co= nst {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 ^
18 errors generated.
[ 80% 1035/1283] /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS
-DCPUINFO_SUPPORTED_PLATFORM=0 -DFMT_HEADER_ONLY=1
-DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1
-DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx
-DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS
-I/usr/ports/misc/pytorch/work/.build/aten/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src
-I/usr/ports/misc/pytorch/work/.build
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/foxi
-I/usr/ports/misc/pytorch/work/.build/third_party/foxi
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2/aten/src/TH
-I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src/TH
-I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src
-I/usr/ports/misc/pytorch/work/.build/caffe2/../aten/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/miniz-2.1.0
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/../third_party/catch/single_include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/..
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/c10/..
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/cpuinfo/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/FP16/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/fmt/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/flatbuffers/include
-isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/cmake/../third_party/eigen
-isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2 -O2 -pipe
-fstack-protector-strong -isystem /usr/local/include
-fno-strict-aliasing -isystem /usr/local/include -Wno-deprecated
-fvisibility-inlines-hidden -fopenmp=libomp -DNDEBUG -DUSE_KINETO
-DLIBKINETO_NOCUPTI -DSYMBOLICATE_MOBILE_DEBUG_HANDLE
-DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra
-Werror=return-type -Werror=non-virtual-dtor
-Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds
-Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter
-Wno-unused-function -Wno-unused-result -Wno-strict-overflow
-Wno-strict-aliasing -Wno-error=deprecated-declarations
-Wvla-extension -Wno-range-loop-analysis -Wno-pass-failed
-Wno-error=pedantic -Wno-error=redundant-decls
-Wno-error=old-style-cast -Wconstant-conversion
-Wno-invalid-partial-specialization -Wno-typedef-redefinition
-Wno-unused-private-field -Wno-inconsistent-missing-override
-Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces
-Wunused-lambda-capture -Wunused-local-typedef -Qunused-arguments
-fcolor-diagnostics -fdiagnostics-color=always
-Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math
-Werror=format -Werror=cast-function-type -DHAVE_AVX512_CPU_DEFINITION
-DHAVE_AVX2_CPU_DEFINITION -O2 -pipe -fstack-protector-strong -isystem
/usr/local/include -fno-strict-aliasing -isystem /usr/local/include
-DNDEBUG -DNDEBUG -std=gnu++14 -fPIC -DTH_HAVE_THREAD -Wall -Wextra
-Wno-unused-parameter -Wno-unused-function -Wno-unused-result
-Wno-missing-field-initializers -Wno-write-strings
-Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds
-Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing
-Wno-error=deprecated-declarations -Wno-missing-braces
-Wno-range-loop-analysis -fvisibility=hidden -O2 -fopenmp=libomp
-DCAFFE2_BUILD_MAIN_LIB -pthread -MD -MT
caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
-MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o.d
-o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/input-archive.cpp.o
-c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/input-archive.cpp
[ 80% 1035/1283] /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS
-DCPUINFO_SUPPORTED_PLATFORM=0 -DFMT_HEADER_ONLY=1
-DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1
-DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx
-DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS
-I/usr/ports/misc/pytorch/work/.build/aten/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src
-I/usr/ports/misc/pytorch/work/.build
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/foxi
-I/usr/ports/misc/pytorch/work/.build/third_party/foxi
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2/aten/src/TH
-I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src/TH
-I/usr/ports/misc/pytorch/work/.build/caffe2/aten/src
-I/usr/ports/misc/pytorch/work/.build/caffe2/../aten/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/miniz-2.1.0
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/kineto/libkineto/src
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/../third_party/catch/single_include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/aten/src/ATen/..
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/c10/..
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/cpuinfo/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/FP16/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/fmt/include
-I/usr/ports/misc/pytorch/work/pytorch-v1.13.1/third_party/flatbuffers/include
-isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/cmake/../third_party/eigen
-isystem /usr/ports/misc/pytorch/work/pytorch-v1.13.1/caffe2 -O2 -pipe
-fstack-protector-strong -isystem /usr/local/include
-fno-strict-aliasing -isystem /usr/local/include -Wno-deprecated
-fvisibility-inlines-hidden -fopenmp=libomp -DNDEBUG -DUSE_KINETO
-DLIBKINETO_NOCUPTI -DSYMBOLICATE_MOBILE_DEBUG_HANDLE
-DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra
-Werror=return-type -Werror=non-virtual-dtor
-Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds
-Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter
-Wno-unused-function -Wno-unused-result -Wno-strict-overflow
-Wno-strict-aliasing -Wno-error=deprecated-declarations
-Wvla-extension -Wno-range-loop-analysis -Wno-pass-failed
-Wno-error=pedantic -Wno-error=redundant-decls
-Wno-error=old-style-cast -Wconstant-conversion
-Wno-invalid-partial-specialization -Wno-typedef-redefinition
-Wno-unused-private-field -Wno-inconsistent-missing-override
-Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces
-Wunused-lambda-capture -Wunused-local-typedef -Qunused-arguments
-fcolor-diagnostics -fdiagnostics-color=always
-Wno-unused-but-set-variable -fno-math-errno -fno-trapping-math
-Werror=format -Werror=cast-function-type -DHAVE_AVX512_CPU_DEFINITION
-DHAVE_AVX2_CPU_DEFINITION -O2 -pipe -fstack-protector-strong -isystem
/usr/local/include -fno-strict-aliasing -isystem /usr/local/include
-DNDEBUG -DNDEBUG -std=gnu++14 -fPIC -DTH_HAVE_THREAD -Wall -Wextra
-Wno-unused-parameter -Wno-unused-function -Wno-unused-result
-Wno-missing-field-initializers -Wno-write-strings
-Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds
-Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing
-Wno-error=deprecated-declarations -Wno-missing-braces
-Wno-range-loop-analysis -fvisibility=hidden -O2 -fopenmp=libomp
-DCAFFE2_BUILD_MAIN_LIB -pthread -MD -MT
caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
-MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o.d
-o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
-c /usr/ports/misc/pytorch/work/pytorch-v1.13.1/torch/csrc/api/src/serialize/output-archive.cpp
ninja: build stopped: subcommand failed.
===> Compilation failed unexpectedly.
Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to
the maintainer.
*** Error code 1

Stop.
make: stopped in /usr/ports/misc/pytorch

>
> On Thu, Apr 20, 2023 at 17:51, Aryeh Friedman <aryeh.friedman@gmail.com> wrote:
>>
>> On Thu, Apr 20, 2023 at 7:52 AM Thierry Thomas <thierry@freebsd.org> wrote:
>> >
>> > On Thu, Apr 20, 2023 at 12:53:05 +0200, Aryeh Friedman <aryeh.friedman@gmail.com>
>> > wrote:
>> >
>> > > Running without GPU (for now) on a bhyve vm (3 CPU, 2 GB RAM and 100
>> > > GB of disk), which I intend to use to determine whether it is worth going
>> > > out and getting the hardware to do GPU.  The problem I had was getting
>> > > pytorch to work, since it appears I have to build it from source, and it
>> > > blows up in that build.
>> >
>> > Have you seen
>> > <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=269739> ?
>>
>> This seems to be true for all OS's, so I guess I will have to find an
>> Intel machine... this is as bad as the motivation that led me to do
>> PetiteCloud in the first place (OpenStack not running on AMD, period).
>> Is there just no way to run an ANN in pytorch data format in any other
>> way that is not Python (like Java?!!?) Note that the tensorflow port
>> required pytorch.
>>
>>
>> --
>> Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
>>


--
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org


--
Mario.