matlab - Normalization of an image


I applied some operations on a grayscale image and the new values I get are a problem: some intensity values are less than 0, some are between 0 and 255, and some are greater than 255. The values between [0, 255] are no problem, but the intensity values < 0 and the intensity values > 255 are a problem because these values cannot occur in a grayscale image.

Therefore, I need to normalize the values so that all of them, whether they are negative, greater than 255 or whatever else they may be, come into the range 0 to 255 so that the image can be displayed.

As far as I know, there are 2 methods:

Method #1

newimg = ((255-0)/(max(img(:))-min(img(:))))*(img-min(img(:))) 

where min(img(:)) and max(img(:)) are the minimum and maximum values obtained after doing the operations on the input image img. The minimum can be less than 0 and the maximum can be greater than 255.

Method #2

I make all values less than 0 equal to 0 and all values greater than 255 equal to 255, so:

img(img < 0) = 0; img(img > 255) = 255; 
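
For reference, here is roughly how I apply each method before displaying the result; the cast to uint8 and the calls to imshow are just how I am viewing the output, and img is assumed to be a double matrix produced by my operations:

% Method #1: rescale the full range of img to [0, 255]
newimg1 = ((255 - 0) / (max(img(:)) - min(img(:)))) * (img - min(img(:)));

% Method #2: truncate (clip) anything outside [0, 255]
newimg2 = img;
newimg2(newimg2 < 0) = 0;
newimg2(newimg2 > 255) = 255;

% Display both results as 8-bit grayscale images
figure; imshow(uint8(newimg1)); title('Method #1 - rescaled');
figure; imshow(uint8(newimg2)); title('Method #2 - truncated');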

I tried to use both methods, but I am only getting good results when using the second method, not the first one. Can any of you please tell me what the problem is?

That totally depends on the image content itself. Both of those methods are valid ways to ensure that the range of values lies between [0, 255]. However, before you decide on which method to use, you need to ask yourself the following questions:

Question #1 - What is the image?

The first question you need to ask is what the image represents. If this is the output of an edge detector, for example, the method you choose will depend on the dynamic range of the values seen in the result (more on this below in Question #2). For example, it's preferable to use the second method if there is a good distribution of pixels and low variance. However, if the dynamic range is a bit smaller, you'll want to use the first method to push up the contrast of the result.

If this is the output of an image subtraction, it's preferable to use the first method because you want to visualize the exact differences between pixels. Truncating the result will not give you a good visualization of the differences.
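
As a rough sketch of what I mean (A and B here are hypothetical grayscale images of the same size, used purely for illustration), you could visualize a difference image like this:

% Convert to double first so the differences are allowed to be negative
D = double(A) - double(B);

% Method #1: rescale the differences so the full range maps onto [0, 255]
Dvis = 255 * (D - min(D(:))) / (max(D(:)) - min(D(:)));
figure; imshow(uint8(Dvis)); title('Rescaled difference image');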

Question #2 - What's the dynamic range of the values?

Another thing you need to take note of is how wide the dynamic range of the minimum and maximum values is. For example, if the minimum and maximum are not that far off from the limits of [0, 255], then you can use the first or second method and you won't notice much of a difference. However, if your values lie within a small range inside [0, 255], then doing the first method will increase contrast whereas the second method won't do anything. If your goal is to increase the contrast of the image and the intensities are already within the valid [0, 255] range, you should use the first method.
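
To make that concrete, here is a small synthetic sketch (the numbers are made up purely for illustration): if the intensities only span a narrow band such as [100, 150], the first method stretches them across [0, 255] while the second method leaves them untouched:

% Synthetic image whose intensities lie in a narrow band around [100, 150]
img = 100 + 50 * rand(256);

% Method #1: the narrow band is stretched over the full [0, 255] range
stretched = 255 * (img - min(img(:))) / (max(img(:)) - min(img(:)));

% Method #2: nothing is outside [0, 255], so truncation changes nothing
clipped = img;
clipped(clipped < 0) = 0;
clipped(clipped > 255) = 255;

figure; imshow(uint8(stretched)); title('Method #1 - contrast increased');
figure; imshow(uint8(clipped));   title('Method #2 - unchanged');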

However, if you have minimum and maximum values that are quite far away from the [0, 255] range, such as min = -50 and max = 350, doing the first method won't bode well, especially if the grayscale intensities have a huge variance. By huge variance I mean that you have values in the high range, values in the low range and nothing else. If you rescaled using the first method, the minimum gets pushed to 0, the maximum gets shrunk to 255 and the rest of the intensities get scaled in between, so the values that were lower end up being visualized as gray.
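
As a quick sketch of that situation (again with made-up numbers), you can compare both methods on values spanning roughly [-50, 350]:

% Synthetic data spanning roughly [-50, 350], i.e. well outside [0, 255]
img = -50 + 400 * rand(256);

% Method #1: the whole [-50, 350] span is compressed into [0, 255],
% so intensities that were already in range lose some of their contrast
rescaled = ((255 - 0) / (max(img(:)) - min(img(:)))) * (img - min(img(:)));

% Method #2: in-range intensities are untouched; out-of-range ones saturate
truncated = img;
truncated(truncated < 0) = 0;
truncated(truncated > 255) = 255;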

Question #3 - Do you have a clean or noisy image?

This is something not many people think about. Is your image clean, or are there a couple of spurious noisy spots? The first method is very bad when it comes to noisy pixels. If you only had a couple of pixels with a very large value while all the other pixels are within the range of [0, 255], this would make all of the other pixels get rescaled accordingly and would decrease the contrast of your image. You probably want to ignore the contribution made by those pixels, so the second method is preferable.
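
A tiny illustration of that effect (the numbers are invented): a single hot pixel of value 10000 drags the whole rescaling with it, while truncation only touches that one pixel:

% A well-behaved image in [0, 255] with one spurious hot pixel
img = 255 * rand(256);
img(1, 1) = 10000;   % a single noisy outlier

% Method #1: the outlier becomes the new maximum, so every other pixel
% is squeezed into roughly [0, 7] and the result looks almost black
rescaled = 255 * (img - min(img(:))) / (max(img(:)) - min(img(:)));

% Method #2: only the outlier is clipped; everything else is untouched
truncated = img;
truncated(truncated > 255) = 255;
truncated(truncated < 0) = 0;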

Conclusion

Therefore, there is nothing wrong with either of the methods you have talked about. You need to be cognizant of what the image is, the dynamic range of values you see once you examine the output, and whether or not it is a clean or noisy image. You simply have to make a smart choice keeping those factors in mind. In your case, the first method probably didn't work because you have very large negative values and very large positive values, and perhaps a few spurious values too. Doing the truncation is probably better for your application.

