Suppose
you have a file stored in a system, and due to some technical problem that file
gets destroyed. There would then be no way to recover the data present in that
file. To avoid such situations, Hadoop introduced the feature of fault
tolerance in HDFS. When a file is stored in HDFS, it is automatically
replicated at two other locations as well, since the default replication factor
is three. So even if one or two of the systems fail, the file is still
available on another system.
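To make this concrete, the short Java sketch below uses the Hadoop FileSystem API to read the current replication factor of a file and to request a specific replication factor for it. The file path shown is hypothetical and used only for illustration; the cluster-wide default comes from the dfs.replication property, which is normally three.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // Load the cluster configuration (core-site.xml / hdfs-site.xml on the classpath).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file path, used here only as an example.
        Path file = new Path("/user/data/sample.txt");

        // Report how many copies HDFS currently keeps for this file.
        FileStatus status = fs.getFileStatus(file);
        System.out.println("Current replication factor: " + status.getReplication());

        // Request three copies of this file; HDFS re-replicates blocks as needed.
        fs.setReplication(file, (short) 3);

        fs.close();
    }
}

Because replication is handled by the NameNode and DataNodes, the application only states how many copies it wants; HDFS itself decides where the replicas are placed and recreates them if a node fails.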