hdfs - Fault Tolerance in Hadoop


We know that in Hadoop, if data gets corrupted or a node goes down, a new replica is created. Initially this works fine, but how does the namenode deal with the 4 replicas once the downed node comes back? Does it delete one of them?

If so, which one: the newly created replica, or the one that suddenly reappeared?

In the situation when a datanode goes down, the namenode sees that node's data blocks as under-replicated and starts replicating them to other nodes in the cluster to bring replication back up to the expected level (3 by default).
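As a side note, you can observe the replication state of a file from a client through the public FileSystem API. Below is a minimal Java sketch, assuming fs.defaultFS is picked up from a core-site.xml on the classpath; the path /data/sample.txt is a hypothetical example, not from the question:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // Assumes cluster connection details come from core-site.xml / hdfs-site.xml.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file path used purely for illustration.
        Path path = new Path("/data/sample.txt");
        FileStatus status = fs.getFileStatus(path);
        System.out.println("Target replication factor: " + status.getReplication());

        // Each BlockLocation lists the datanodes currently holding a replica
        // of that block, so you can watch replicas move after a node failure.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("Block at offset " + block.getOffset()
                    + " hosted on: " + String.join(", ", block.getHosts()));
        }
        fs.close();
    }
}
```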

When the downed node comes back online, its blocks are seen as over-replicated, with 4 replicas each.

When a block becomes over-replicated, the namenode chooses a replica to remove. The namenode prefers not to reduce the number of racks that host replicas, and secondly prefers to remove the replica from the datanode with the least amount of available disk space. This may help rebalance the load across the cluster.
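The actual decision is made inside the namenode's block management code, but the stated preference order can be sketched roughly as below. This is only a simplified illustration; Replica and every other name here are hypothetical stand-ins, not real Hadoop classes:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical stand-in for a replica's placement info; not a Hadoop class.
record Replica(String datanode, String rack, long freeSpaceBytes) {}

public class ExcessReplicaChooser {

    /**
     * Pick one replica to delete from an over-replicated block, mirroring the
     * documented preferences: keep the rack count intact if possible, then
     * favor deleting from the datanode with the least free disk space.
     */
    static Replica chooseReplicaToDelete(List<Replica> replicas) {
        // Group replicas by rack; a rack holding more than one replica can
        // lose one without reducing the number of racks hosting the block.
        Map<String, List<Replica>> byRack = replicas.stream()
                .collect(Collectors.groupingBy(Replica::rack));

        List<Replica> safeCandidates = byRack.values().stream()
                .filter(group -> group.size() > 1)
                .flatMap(List::stream)
                .collect(Collectors.toList());

        // If every rack holds exactly one replica, any deletion reduces the
        // rack count, so fall back to considering all replicas.
        List<Replica> pool = safeCandidates.isEmpty() ? replicas : safeCandidates;

        // Among the candidates, delete from the node with the least free space.
        return pool.stream()
                .min(Comparator.comparingLong(Replica::freeSpaceBytes))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Replica> replicas = List.of(
                new Replica("dn1", "rackA", 500L),
                new Replica("dn2", "rackA", 900L),
                new Replica("dn3", "rackB", 300L),
                new Replica("dn4", "rackC", 100L));
        // dn1 and dn2 share rackA, so removing one keeps three racks covered;
        // dn1 has less free space than dn2, so dn1 is chosen for deletion.
        System.out.println(chooseReplicaToDelete(replicas));
    }
}
```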

