Sunday, 21 August 2016

Fault Tolerance in an Apache NiFi 0.7 Cluster

FlowFile content is written to the content repository. When a node goes down, the NiFi Cluster Manager (NCM) routes new data to the other nodes in the cluster. However, NiFi does not replicate data the way Kafka does: data already queued on the failed node stays queued on that node. To recover it, either bring the failed node back up or manually move its repository data over to a live node. The amount of data queued at any moment is usually small, often around one second's worth or less.

If a node's disk is completely lost, the data in its queues is lost with it. For a production flow, we highly recommend using a RAID storage device that provides redundancy. RAID 10, also known as RAID 1+0, combines disk mirroring and disk striping to protect data. A RAID 10 configuration requires a minimum of four disks and stripes data across mirrored pairs; as long as one disk in each mirrored pair is functional, the data can be retrieved.
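As a rough sketch of the setup above, the following shows how a four-disk RAID 10 array could be built with Linux `mdadm` and how NiFi's repositories could be pointed at it. The device names (`/dev/sdb` through `/dev/sde`), array name, and mount point are assumptions for illustration; adjust them for your hardware.

```shell
# Create a 4-disk RAID 10 array (requires root; device names are hypothetical).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Format the array and mount it where NiFi can use it.
mkfs.ext4 /dev/md0
mkdir -p /mnt/nifi-repos
mount /dev/md0 /mnt/nifi-repos

# Point the NiFi repositories at the RAID-backed volume by editing
# conf/nifi.properties on each node:
#   nifi.flowfile.repository.directory=/mnt/nifi-repos/flowfile_repository
#   nifi.content.repository.directory.default=/mnt/nifi-repos/content_repository
#   nifi.provenance.repository.directory.default=/mnt/nifi-repos/provenance_repository
```

With the queues on mirrored pairs, a single-disk failure no longer loses the data that was queued on that node.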

References:
https://community.hortonworks.com/articles/8607/how-to-create-nifi-fault-tolerance-using-multiple.html
https://community.hortonworks.com/articles/8631/how-to-create-nifi-fault-tolerance-using-multiple-1.html
