hdfs - Fault Tolerance in Hadoop


We know that in Hadoop, if data gets corrupted or a node goes down, a new replica is created, and initially this works fine. But once the failed node comes back, there are 4 replicas of its blocks. How does the NameNode deal with this? Which replica does it delete: the newly created one, or some other one?

When a DataNode goes down, the NameNode sees the data blocks hosted on it as under-replicated and starts replicating them to other nodes in the cluster to bring the replication back to the expected level (default 3).
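As an illustration, here is a minimal sketch using the standard Hadoop FileSystem client API to inspect and change a file's target replication factor (the path is hypothetical, and the cluster configuration is assumed to be on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml from the classpath;
        // dfs.replication defaults to 3 when not overridden.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path, for illustration only.
        Path file = new Path("/user/example/data.txt");

        FileStatus status = fs.getFileStatus(file);
        System.out.println("Current replication: " + status.getReplication());

        // Ask the NameNode to raise the target replication to 4.
        // The extra copy is created asynchronously by the DataNodes.
        fs.setReplication(file, (short) 4);
    }
}
```

On the command line, `hdfs fsck /` reports under-replicated and over-replicated block counts cluster-wide.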

When the failed node comes back online, the blocks it hosts are seen as over-replicated, with 4 replicas each.

When a block becomes over-replicated, the NameNode chooses a replica to remove. First, it prefers not to reduce the number of racks that host replicas; second, it prefers to remove the replica from the DataNode with the least amount of available disk space. This tends to rebalance load across the cluster.
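To make that two-step preference concrete, here is a hypothetical sketch (not Hadoop's actual internal code; the Replica type and its fields are invented for illustration) of the choice described above: only consider replicas whose removal keeps rack diversity intact, then evict from the node with the least free space:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical model of a replica location; field names are illustrative.
record Replica(String dataNode, String rack, long freeBytes) {}

public class OverReplicationSketch {

    // Choose which replica to drop from an over-replicated block.
    static Replica chooseReplicaToRemove(List<Replica> replicas) {
        // Count replicas per rack.
        Map<String, Long> perRack = replicas.stream()
                .collect(Collectors.groupingBy(Replica::rack, Collectors.counting()));

        // Step 1: only consider replicas on racks that host more than one
        // replica, so removing one never reduces the number of racks used.
        List<Replica> candidates = replicas.stream()
                .filter(r -> perRack.get(r.rack()) > 1)
                .collect(Collectors.toList());
        if (candidates.isEmpty()) {
            candidates = replicas; // every rack has exactly one replica
        }

        // Step 2: among the candidates, evict from the DataNode with the
        // least available disk space.
        return candidates.stream()
                .min(Comparator.comparingLong(Replica::freeBytes))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Replica> replicas = List.of(
                new Replica("dn1", "rackA", 500L),
                new Replica("dn2", "rackA", 900L),
                new Replica("dn3", "rackB", 100L),
                new Replica("dn4", "rackC", 200L));
        // dn3 has the least space but is rackB's only replica, so dn1 goes.
        System.out.println("Remove: " + chooseReplicaToRemove(replicas));
    }
}
```

Note that the removal decision is about placement, not about which copy is "newer": all 4 replicas are identical, so the NameNode picks whichever one best preserves rack diversity and disk balance.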

