Date: Fri, 10 Jan 2025 15:03:48 GMT
Message-Id: <202501101503.50AF3mEW056920@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
        dev-commits-src-main@FreeBSD.org
From: Robert Clausecker
Subject: git: 5e7d93a60440 - main - lib/libc/aarch64/string: add strcmp SIMD implementation
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all
X-BeenThere: dev-commits-src-all@freebsd.org
Sender: owner-dev-commits-src-all@FreeBSD.org
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: fuz
X-Git-Repository: src
X-Git-Refname: refs/heads/main
X-Git-Reftype: branch
X-Git-Commit: 5e7d93a604400ca3c9db3be1df82ce963527740c
Auto-Submitted: auto-generated

The branch main has been updated by fuz:

URL: https://cgit.FreeBSD.org/src/commit/?id=5e7d93a604400ca3c9db3be1df82ce963527740c

commit 5e7d93a604400ca3c9db3be1df82ce963527740c
Author:     Getz Mikalsen
AuthorDate: 2024-08-26 18:13:31 +0000
Commit:     Robert Clausecker
CommitDate: 2025-01-10 15:02:39 +0000

    lib/libc/aarch64/string: add strcmp SIMD implementation

    This changeset ports the amd64 SIMD implementation of strcmp to
    aarch64.  Below is a description of its method, as given in D41971.

    The basic idea is to process the bulk of the string in aligned blocks
    of 16 bytes such that one string runs ahead and the other runs
    behind.  The string that runs ahead is checked for NUL bytes; the one
    that runs behind is compared with the corresponding chunk of the
    string that runs ahead.  This trades an extra load per iteration for
    the very complicated block reassembly needed in the other
    implementations (bionic, glibc).

    On the flip side, we need two code paths depending on the relative
    alignment of the two buffers.  The initial part of the string is
    compared directly if it is known not to cross a page boundary.
    Otherwise, a more complex slow path is taken to avoid crossing into
    unmapped memory.

    Performance is better in most cases than the existing implementation
    from the Arm Optimized Routines repository.  See the differential
    revision for benchmark results.

    Tested by:	fuz (exprun)
    Reviewed by:	fuz, emaste
    Sponsored by:	Google LLC (GSoC 2024)
    PR:		281175
    Differential Revision:	https://reviews.freebsd.org/D45839
---
 lib/libc/aarch64/string/Makefile.inc |   4 +-
 lib/libc/aarch64/string/strcmp.S     | 350 +++++++++++++++++++++++++++++++++++
 2 files changed, 353 insertions(+), 1 deletion(-)

diff --git a/lib/libc/aarch64/string/Makefile.inc b/lib/libc/aarch64/string/Makefile.inc
index cabc79e4f351..ba0947511872 100644
--- a/lib/libc/aarch64/string/Makefile.inc
+++ b/lib/libc/aarch64/string/Makefile.inc
@@ -13,13 +13,15 @@ AARCH64_STRING_FUNCS= \
 	stpcpy \
 	strchr \
 	strchrnul \
-	strcmp \
 	strcpy \
 	strlen \
 	strncmp \
 	strnlen \
 	strrchr
 
+# SIMD-enhanced routines not derived from Arm's code
+MDSRCS+= \
+	strcmp.S
 #
 # Add the above functions. Generate an asm file that includes the needed
 # Arm Optimized Routines file defining the function name to the libc name.
diff --git a/lib/libc/aarch64/string/strcmp.S b/lib/libc/aarch64/string/strcmp.S
new file mode 100644
index 000000000000..e8418dfc6763
--- /dev/null
+++ b/lib/libc/aarch64/string/strcmp.S
@@ -0,0 +1,350 @@
+/*-
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2024 Getz Mikalsen
+*/
+
+#include <machine/asm.h>
+#include <machine/param.h>
+
+	.weak	strcmp
+	.set	strcmp, __strcmp
+	.text
+
+ENTRY(__strcmp)
+
+	bic	x8, x0, #0xf		// x0 aligned to the boundary
+	and	x9, x0, #0xf		// x9 is the offset
+	bic	x10, x1, #0xf		// x1 aligned to the boundary
+	and	x11, x1, #0xf		// x11 is the offset
+
+	mov	x13, #-1
+
+	/*
+	 * Check if either string is located at end of page to avoid crossing
+	 * into unmapped page. If so, we load 16 bytes from the nearest
+	 * alignment boundary and shift based on the offset.
+	 */
+
+	add	x3, x0, #16		// end of head
+	add	x4, x1, #16
+	eor	x3, x3, x0
+	eor	x4, x4, x1		// bits that changed
+	orr	x3, x3, x4		// in either str1 or str2
+	tbz	w3, #PAGE_SHIFT, .Lbegin
+
+	ldr	q0, [x8]		// load aligned head
+	ldr	q2, [x10]
+
+	lsl	x14, x9, #2
+	lsl	x15, x11, #2
+	lsl	x3, x13, x14		// string head
+	lsl	x4, x13, x15
+
+	cmeq	v5.16b, v0.16b, #0
+	cmeq	v6.16b, v2.16b, #0
+
+	shrn	v5.8b, v5.8h, #4
+	shrn	v6.8b, v6.8h, #4
+	fmov	x5, d5
+	fmov	x6, d6
+
+	adrp	x2, shift_data
+	add	x2, x2, :lo12:shift_data
+
+	/* heads may cross page boundary, avoid unmapped loads */
+	tst	x5, x3
+	b.eq	0f
+
+	ldr	q4, [x2, x9]		// load permutation table
+	tbl	v0.16b, {v0.16b}, v4.16b
+
+	b	1f
+	.p2align 4
+0:
+	ldr	q0, [x0]		// load true head
+1:
+	tst	x6, x4
+	b.eq	0f
+
+	ldr	q4, [x2, x11]
+	tbl	v4.16b, {v2.16b}, v4.16b
+
+	b	1f
+
+	.p2align 4
+.Lbegin:
+	ldr	q0, [x0]		// load true heads
+0:
+	ldr	q4, [x1]
+1:
+
+	cmeq	v2.16b, v0.16b, #0	// NUL byte present?
+	cmeq	v4.16b, v0.16b, v4.16b	// which bytes match?
+
+	orn	v2.16b, v2.16b, v4.16b	// mismatch or NUL byte?
+
+	shrn	v2.8b, v2.8h, #4
+	fmov	x5, d2
+
+	cbnz	x5, .Lhead_mismatch
+
+	ldr	q2, [x8, #16]		// load second chunk
+	ldr	q3, [x10, #16]
+	subs	x9, x9, x11		// is a&0xf >= b&0xf
+	b.lo	.Lswapped		// if not swap operands
+	sub	x12, x10, x9
+	ldr	q0, [x12, #16]!
+	sub	x10, x10, x8
+	sub	x11, x10, x9
+
+	cmeq	v1.16b, v3.16b, #0
+	cmeq	v0.16b, v0.16b, v2.16b
+	add	x8, x8, #16
+	shrn	v1.8b, v1.8h, #4
+	fmov	x6, d1
+	shrn	v0.8b, v0.8h, #4
+	fmov	x5, d0
+	cbnz	x6, .Lnulfound
+	mvn	x5, x5
+	cbnz	x5, .Lmismatch
+	add	x8, x8, #16		// advance aligned pointers
+
+	/*
+	 * During the main loop, the layout of the two strings is something like:
+	 *
+	 *          v ------1------ v ------2------ v
+	 *     X0:    AAAAAAAAAAAAABBBBBBBBBBBBBBBB...
+	 *     X1: AAAAAAAAAAAAABBBBBBBBBBBBBBBBCCC...
+	 *
+	 * where v indicates the alignment boundaries and corresponding chunks
+	 * of the strings have the same letters. Chunk A has been checked in
+	 * the previous iteration. This iteration, we first check that string
+	 * X1 doesn't end within region 2, then we compare chunk B between the
+	 * two strings. As X1 is known not to hold a NUL byte in regions 1
+	 * and 2 at this point, this also ensures that x0 has not ended yet.
+	 */
+	.p2align 4
+0:
+	ldr	q0, [x8, x11]
+	ldr	q1, [x8, x10]
+	ldr	q2, [x8]
+
+	cmeq	v1.16b, v1.16b, #0	// end of string?
+	cmeq	v0.16b, v0.16b, v2.16b	// do the chunks match?
+
+	shrn	v1.8b, v1.8h, #4
+	fmov	x6, d1
+	shrn	v0.8b, v0.8h, #4
+	fmov	x5, d0
+	cbnz	x6, .Lnulfound
+	mvn	x5, x5			// any mismatches?
+	cbnz	x5, .Lmismatch
+
+	add	x8, x8, #16
+
+	ldr	q0, [x8, x11]
+	ldr	q1, [x8, x10]
+	ldr	q2, [x8]
+
+	add	x8, x8, #16
+	cmeq	v1.16b, v1.16b, #0
+	cmeq	v0.16b, v0.16b, v2.16b
+
+	shrn	v1.8b, v1.8h, #4
+	fmov	x6, d1
+	shrn	v0.8b, v0.8h, #4
+	fmov	x5, d0
+	cbnz	x6, .Lnulfound2
+	mvn	x5, x5
+	cbz	x5, 0b
+
+	sub	x8, x8, #16		// roll back second increment
+.Lmismatch:
+	rbit	x2, x5
+	clz	x2, x2			// index of mismatch
+	lsr	x2, x2, #2
+	add	x11, x8, x11
+
+	ldrb	w4, [x8, x2]
+	ldrb	w5, [x11, x2]
+	sub	w0, w4, w5		// byte difference
+	ret
+
+	.p2align 4
+.Lnulfound2:
+	sub	x8, x8, #16
+
+.Lnulfound:
+	mov	x7, x9
+	mov	x4, x6
+
+	ubfiz	x7, x7, #2, #4		// x7 = (x7 & 0xf) << 2
+	lsl	x6, x6, x7		// adjust NUL mask to indices
+	orn	x5, x6, x5
+	cbnz	x5, .Lmismatch
+
+	/*
+	 * (x0) == (x1) and NUL is past the string.
+	 * Compare (x1) with the corresponding part
+	 * of the other string until the NUL byte.
+	 */
+	ldr	q0, [x8, x9]
+	ldr	q1, [x8, x10]
+
+	cmeq	v1.16b, v0.16b, v1.16b
+	shrn	v1.8b, v1.8h, #4
+	fmov	x5, d1
+
+	orn	x5, x4, x5
+
+	rbit	x2, x5
+	clz	x2, x2
+	lsr	x5, x2, #2
+
+	add	x10, x10, x8		// restore x10 pointer
+	add	x8, x8, x9		// point to corresponding chunk
+
+	ldrb	w4, [x8, x5]
+	ldrb	w5, [x10, x5]
+	sub	w0, w4, w5
+	ret
+
+	.p2align 4
+.Lhead_mismatch:
+	rbit	x2, x5
+	clz	x2, x2			// index of mismatch
+	lsr	x2, x2, #2
+	ldrb	w4, [x0, x2]
+	ldrb	w5, [x1, x2]
+	sub	w0, w4, w5
+	ret
+
+	/*
+	 * If (a&0xf) < (b&0xf), we do the same thing but with swapped
+	 * operands. I found that this performs slightly better than
+	 * using conditional moves to do the swap branchless.
+	 */
+	.p2align 4
+.Lswapped:
+	add	x12, x8, x9
+	ldr	q0, [x12, #16]!
+	sub	x8, x8, x10
+	add	x11, x8, x9
+	neg	x9, x9
+
+	cmeq	v1.16b, v2.16b, #0
+	cmeq	v0.16b, v0.16b, v3.16b
+	add	x10, x10, #16
+	shrn	v1.8b, v1.8h, #4
+	fmov	x6, d1
+	shrn	v0.8b, v0.8h, #4
+	fmov	x5, d0
+	cbnz	x6, .Lnulfounds
+	mvn	x5, x5
+	cbnz	x5, .Lmismatchs
+	add	x10, x10, #16
+
+	/*
+	 * During the main loop, the layout of the two strings is something like:
+	 *
+	 *          v ------1------ v ------2------ v
+	 *     X1:    AAAAAAAAAAAAABBBBBBBBBBBBBBBB...
+	 *     X0: AAAAAAAAAAAAABBBBBBBBBBBBBBBBCCC...
+	 *
+	 * where v indicates the alignment boundaries and corresponding chunks
+	 * of the strings have the same letters. Chunk A has been checked in
+	 * the previous iteration. This iteration, we first check that string
+	 * X0 doesn't end within region 2, then we compare chunk B between the
+	 * two strings. As X0 is known not to hold a NUL byte in regions 1
+	 * and 2 at this point, this also ensures that X1 has not ended yet.
+	 */
+	.p2align 4
+0:
+	ldr	q0, [x10, x11]
+	ldr	q1, [x10, x8]
+	ldr	q2, [x10]
+
+	cmeq	v1.16b, v1.16b, #0
+	cmeq	v0.16b, v0.16b, v2.16b
+
+	shrn	v1.8b, v1.8h, #4
+	fmov	x6, d1
+	shrn	v0.8b, v0.8h, #4
+	fmov	x5, d0
+	cbnz	x6, .Lnulfounds
+	mvn	x5, x5
+	cbnz	x5, .Lmismatchs
+
+	add	x10, x10, #16
+
+	ldr	q0, [x10, x11]
+	ldr	q1, [x10, x8]
+	ldr	q2, [x10]
+
+	add	x10, x10, #16
+	cmeq	v1.16b, v1.16b, #0
+	cmeq	v0.16b, v0.16b, v2.16b
+
+	shrn	v1.8b, v1.8h, #4
+	fmov	x6, d1
+	shrn	v0.8b, v0.8h, #4
+	fmov	x5, d0
+	cbnz	x6, .Lnulfound2s
+	mvn	x5, x5
+	cbz	x5, 0b
+
+	sub	x10, x10, #16
+
+.Lmismatchs:
+	rbit	x2, x5
+	clz	x2, x2
+	lsr	x2, x2, #2
+	add	x11, x10, x11
+
+	ldrb	w4, [x10, x2]
+	ldrb	w5, [x11, x2]
+	sub	w0, w5, w4
+	ret
+
+	.p2align 4
+.Lnulfound2s:
+	sub	x10, x10, #16
+.Lnulfounds:
+	mov	x7, x9
+	mov	x4, x6
+
+	ubfiz	x7, x7, #2, #4
+	lsl	x6, x6, x7
+	orn	x5, x6, x5
+	cbnz	x5, .Lmismatchs
+
+	ldr	q0, [x10, x9]
+	ldr	q1, [x10, x8]
+
+	cmeq	v1.16b, v0.16b, v1.16b
+	shrn	v1.8b, v1.8h, #4
+	fmov	x5, d1
+
+	orn	x5, x4, x5
+
+	rbit	x2, x5
+	clz	x2, x2
+	lsr	x5, x2, #2
+
+	add	x11, x10, x8
+	add	x10, x10, x9
+
+	ldrb	w4, [x10, x5]
+	ldrb	w5, [x11, x5]
+	sub	w0, w5, w4
+	ret
+
+END(__strcmp)
+
+	.section .rodata
+	.p2align 4
+shift_data:
+	.byte	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
+	.fill	16, 1, -1
+	.size	shift_data, .-shift_data
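
As a rough C model of the block scheme described in the commit message,
the sketch below processes both strings in blocks of 16 bytes, scanning
one block for a NUL byte while comparing the corresponding bytes of the
other string.  It is an illustration under simplifying assumptions, not
the committed code: the function name strcmp_model is made up here, the
alignment and page-boundary handling that the two assembly code paths
implement is omitted, and both buffers are assumed to be readable in
whole 16-byte blocks.

	#include <string.h>

	/*
	 * Illustrative sketch only, not the committed routine.  Assumes
	 * both buffers may be read in whole 16-byte blocks; the real code
	 * keeps the loads of the string that runs ahead aligned and
	 * handles page boundaries separately.
	 */
	int
	strcmp_model(const char *a, const char *b)
	{
		for (size_t off = 0;; off += 16) {
			/* does the string running ahead end in this block? */
			const char *nul = memchr(a + off, '\0', 16);
			size_t n = nul != NULL ?
			    (size_t)(nul - (a + off)) + 1 : 16;

			/* compare the corresponding bytes of the other string */
			if (memcmp(a + off, b + off, n) != 0 || nul != NULL) {
				size_t i = 0;

				/* find the first difference or the matching NUL */
				while (a[off + i] == b[off + i] &&
				    a[off + i] != '\0')
					i++;
				return ((unsigned char)a[off + i] -
				    (unsigned char)b[off + i]);
			}
		}
	}

The committed routine builds the equivalent NUL and mismatch masks with
cmeq, shrn and fmov rather than memchr and memcmp, and keeps the loads
of the string that runs ahead on aligned addresses as described above.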