cvs commit: src/sys/dev/bge if_bge.c
Bruce Evans
bde at zeta.org.au
Tue Dec 12 18:13:45 PST 2006
On Wed, 13 Dec 2006, Oleg Bulyzhin wrote:
> On Tue, Dec 12, 2006 at 06:24:43PM -0500, Jung-uk Kim wrote:
>> On Tuesday 12 December 2006 05:25 pm, Oleg Bulyzhin wrote:
>>> On Mon, Dec 11, 2006 at 06:00:35PM +0000, Jung-uk Kim wrote:
>>>> jkim 2006-12-11 18:00:35 UTC
>>>>
>>>> FreeBSD src repository
>>>>
>>>> Modified files:
>>>> sys/dev/bge if_bge.c
>>>> Log:
>>>> - Correct collision counter for BCM5705+. This register is
>>>> ...
>>>> - Correct integer casting from u_long to uint32_t. Casting is
>>>> not really needed for all supported platforms but we better do
>>>> this correctly[2].
>>> ...
>>> I didn't get the point of your u_long -> uint32_t changes.
>>> As I can see, your change will cause more frequent wraps on 64-bit
>>> archs:
Wrapping is the point of the changes. The hardware counters (the part
that is actually used) are 32 bits, so they must be subtracted in 32 bits,
so that when the counter wraps from 0xffffffff to 0 the result is 1 and
not -4294967295. Without the cast, the following would happen on machines
with > 32 bit ints:
uint32_t val_before = 0xffffffff;
uint32_t val_after = 0;
expression (val_after - val_before):
o Both terms are first promoted to int, and the promotion is a genuine
widening since uint32_t is strictly narrower than int on such machines.
o The result is -4294967295, provided there are no padding bits in
ints (the result needs 32 value bits and 1 sign bit to represent).
Also assume that this result is representable as an int.
expression if_ierrors += (val_after - val_before):
o The RHS is first converted to the type of the LHS (u_long) by the
usual arithmetic conversions: u_long is unsigned and its rank is not
smaller than that of the (already int-promoted) RHS, so the int value
is converted to u_long.
o So the addition is of u_long values. Assume 64-bit u_longs, as
actually occur on all supported machines that have u_longs with
>= 33 bits.
o u_long is an unsigned type, so it cannot really represent the
negative int -4294967295. However, it can represent values
quite a bit larger than +4294967295, so there is no benign
overflow (wrapping) at 2^32. -4294967295 is converted
to (u_long)((ULONG_MAX + 1) + (-4294967295)) where the additions
are in infinite precision. This makes it (u_long)(2^64 + 1 - 2^32).
The small positive difference of 1 has been mangled into a huge
unsigned value.
o Adding the huge unsigned value may or may not overflow. If it
doesn't overflow, then the result is bogus (a huger unsigned
value). If it overflows then the result is just bogus due to
the overflow (overflow of counters isn't benign, and with
64-bit counters can only occur due to bugs).
o Assigning the result of the addition makes no difference. It
either works right or preserves a value that is already bogus
due to overflow. However, if if_ierrors were 32 bits or we
converted to uint32_t before storing it then we would be back
to the relatively benign overflow (wrap) at 32 bits which occurs
naturally on machines with 32-bit u_longs.
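To see the effect concretely, here is a small standalone program (not
driver code; it uses int64_t as a stand-in for the hypothetical
wider-than-32-bit int, since all supported platforms have 32-bit ints
and the promotion step doesn't actually arise on them):

	#include <stdio.h>
	#include <stdint.h>

	int
	main(void)
	{
		uint32_t val_before = 0xffffffff; /* counter just before the wrap */
		uint32_t val_after = 0;		  /* counter just after the wrap */
		unsigned long if_ierrors = 0;	  /* 64-bit u_long accumulator */

		/* Step 1: promotion to a hypothetical wide int (simulated
		 * with int64_t) gives a negative difference, not a wrap. */
		int64_t diff = (int64_t)val_after - (int64_t)val_before;
		printf("promoted difference: %jd\n", (intmax_t)diff);

		/* Step 2: converting that negative value to 64-bit u_long
		 * gives (u_long)(2^64 + 1 - 2^32), a huge bogus count. */
		if_ierrors += (unsigned long)diff;
		printf("bogus if_ierrors:    %lu\n", if_ierrors);

		/* Doing the subtraction in 32 bits keeps the benign wrap. */
		if_ierrors = (uint32_t)val_after - (uint32_t)val_before;
		printf("correct difference:  %lu\n", if_ierrors);
		return (0);
	}

On a machine with 64-bit u_longs this prints -4294967295,
18446744069414584321 and 1.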
>>> In rev. 1.153 you have converted cnt to uint32_t. Since cnt
>>> is stored in sc->bge_* counters you have shortened (on 64-bit archs)
>>> them as well.
This is better for similar reasons. The hardware has 64-bit registers,
but the software only ever reads the low 32 bits and doesn't have the
locking necessary for reading 64-bit registers atomically, and anyway,
reading 64-bit registers would be just a pessimization, especially with
correct locking, since all the devices that I looked at always store 0
in the top 32 bits (0xffffffff wraps to 0). Before rev. 1.153, the bug actually
occurred on all supported 64-bit machines:
if_ierrors += ((u_long)0 - (u_long)0xffffffff);
The RHS is (u_long)(2^64 + 1 - 2^32).
Casting, as is necessary for the unsupported machines, would probably
fix this, but then 64-bit values in bge counters would be just a small
pessimization -- the top 32 bits would only be used transiently in
cases where the compiler can't figure out that they will be discarded.
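A minimal sketch of the rev. 1.153 style (the helper and its name are
mine, not from if_bge.c): keep the saved copy of the counter as
uint32_t so the subtraction wraps at 2^32 exactly like the 32-bit part
of the hardware register does:

	#include <stdio.h>
	#include <stdint.h>

	/*
	 * Hypothetical helper, not the actual driver code: return how
	 * much a 32-bit hardware counter advanced since the previous
	 * reading and remember the new reading.  The uint32_t
	 * subtraction wraps at 2^32, matching the register, so a wrap
	 * from 0xffffffff to 0 yields 1.
	 */
	static uint32_t
	stat_delta32(uint32_t hw_now, uint32_t *saved)
	{
		uint32_t delta = hw_now - *saved;

		*saved = hw_now;
		return (delta);
	}

	int
	main(void)
	{
		uint32_t saved = 0xffffffff;

		printf("%u\n", stat_delta32(0, &saved));	/* prints 1 */
		return (0);
	}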
The problem might be clearer if the hardware only maintained the low 10
bits of the registers, as almost happens for the drop count on 5705's
(the hardware actually helps by clearing the register on read, so the
negative differences which become huge unsigned ones on overflow don't
occur). The wrapping at 32 bits can't help either accidentally or
intentionally:
- if the hardware leaves garbage in the top 22 bits, then obviously we
must clear the garbage. It's easiest to clear the garbage before
using it. Otherwise we have to do a complicated analysis like the
above to see that the garbage doesn't matter. Clearing it reduces
to the next case.
- if the hardware sets the top bits to 0 then we can safely subtract
values. However, we must handle wrap at 0x400 so that we never get
negative differences:
/* Usual sign-extension/overflow bugs: */
if_ierrors += ((any_t)0 - (any_t)0x3ff);
/* Working version depending on 2's complement magic. */
if_ierrors += (0 - 0x3ff) & 0x3ff;
/* Working version depending on 2's complement non-magic. */
if_ierrors += ((u_int)0 - (u_int)0x3ff) & 0x3ff;
/* Clearer(?) working version. Depends on h/w clearing top bits */
if_ierrors += ((0x400 + 0) - 0x3ff) & 0x3ff;
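And a throwaway test (mine, not from the driver) showing that the
masked versions handle the wrap at 0x400:

	#include <stdio.h>

	int
	main(void)
	{
		/* 10-bit counter read just before and just after it wraps. */
		unsigned int prev = 0x3ff;
		unsigned int cur = 0;

		/* Unmasked: the huge bogus value 2^32 - 0x3ff. */
		printf("unmasked: %u\n", cur - prev);
		/* Masked: the wrap at 0x400 is handled, the result is 1. */
		printf("masked:   %u\n", (cur - prev) & 0x3ff);
		return (0);
	}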
Bruce