c - Why is the big O runtime of 1D parity checking log(n)?
So I was reading code from this website: http://www.geeksforgeeks.org/write-a-c-program-to-find-the-parity-of-an-unsigned-integer/
It shows how to determine whether a number has even or odd parity. However, I don't understand why the runtime efficiency is log(n). Here is the code for reference:
# include <stdio.h>
# define bool int

/* Function to get parity of number n.
   Returns 1 if n has odd parity, and returns 0 if n has even parity. */
bool getparity(unsigned int n)
{
    bool parity = 0;
    while (n)
    {
        parity = !parity;
        n = n & (n - 1);
    }
    return parity;
}
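For context, one could append a small driver like this to the snippet above (the main function and sample values are my own addition, not part of the linked article):

int main(void)
{
    /* Arbitrary test values: 0, 12 and 255 have even parity; 7 has odd parity. */
    unsigned int samples[] = { 0u, 7u, 12u, 255u };
    for (int i = 0; i < 4; i++)
        printf("%u -> %s parity\n", samples[i],
               getparity(samples[i]) ? "odd" : "even");
    return 0;
}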
The runtime efficiency is O(log(n)), where n is the value of the integer you pass in: each iteration of the loop clears the lowest set bit of n (that is what n & (n - 1) does), so the loop runs at most once per bit in n's binary representation, which is about log2(n) iterations. However, that's an unconventional way to use O notation.
More frequently, O notation is expressed in terms of the size of the input in bits (the number of bits needed to represent the input), in which case the size of the input is k = O(log2(n)) and the runtime is O(k).
(Even more accurately, the runtime is Θ(s), where s is the number of bits set in n, although that assumes bit operations are O(1).)
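To see where the Θ(s) bound comes from, here is a standalone sketch (my own illustration, not from the original answer) that traces how n = n & (n - 1) clears the lowest set bit on every pass, so the loop body runs exactly once per set bit:

# include <stdio.h>

int main(void)
{
    unsigned int n = 44;   /* binary 101100, which has three set bits */
    int iterations = 0;
    while (n)
    {
        printf("n = %u\n", n);   /* prints 44, 40, 32 */
        n = n & (n - 1);         /* drop the lowest set bit */
        iterations++;
    }
    printf("loop ran %d times\n", iterations);   /* prints 3 */
    return 0;
}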