c - Why is the big O runtime of 1D parity checking log(n)?


So I was reading code on this website: http://www.geeksforgeeks.org/write-a-c-program-to-find-the-parity-of-an-unsigned-integer/

It shows how to determine whether a number has even or odd parity. However, I don't understand why the runtime efficiency is log(n). Here is the code for reference:

#include <stdio.h>
#define bool int

/* Function to get parity of number n. It returns 1
   if n has odd parity, and returns 0 if n has
   even parity. */
bool getparity(unsigned int n)
{
    bool parity = 0;
    while (n)
    {
        parity = !parity;
        n      = n & (n - 1);
    }
    return parity;
}

The runtime efficiency is O(log(n)), where n is the value of the integer you pass in. However, that's an unconventional way to use O notation.

More frequently, O notation is expressed in terms of the size of the input in bits (the number of bits needed to represent the input). In this case the size of the input is k = O(log2(n)), and the runtime is O(k).

(Even more accurately, the runtime is Θ(s), where s is the number of bits set in n, although this assumes that bit operations are O(1).)

