Authors: Heitor Faria; Rodrigo Hagstrom; Marco Reis; Breno G. S. Costa; Edward Ribeiro; Maristela Holanda; Priscila Solis Barreto and Aletéia P. F. Araújo
Affiliation: University of Brasilia (UnB), Brazil
ISBN: 978-989-758-295-0
Keyword(s): Hadoop Backup, Cluster, Disaster Recovery.
Abstract: Backup is a traditional and critical business service facing growing challenges, such as constantly snowballing data volumes. Distributed data-intensive applications, such as Hadoop, can give the false impression that they do not need backup replicas, but most researchers agree these are still necessary for most of their components. A brief survey reveals several disasters that can cause data loss in Hadoop HDFS clusters, and previous studies propose maintaining an entire second Hadoop cluster to host a backup replica. However, this method is much more expensive than using traditional backup software and media, such as a tape library, a Network Attached Storage (NAS), or even a Cloud Object Storage. To address these problems, this paper introduces a cheaper and faster Hadoop backup and restore solution. It compares the traditional redundant-cluster replica technique with an alternative that uses Hadoop client commands to create multiple streams of data from HDFS files to Bacula, the most popular open source backup software, which can receive information from named pipes (FIFO). The new mechanism is roughly 51% faster and consumes 75% less backup storage than the previous solutions.
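The following is a minimal, illustrative sketch of the named-pipe streaming idea the abstract describes; it is not code from the paper. The HDFS path, the FIFO location, and the single-file scope are hypothetical assumptions, and it presumes a Bacula FileSet on the same host that includes the FIFO with the real Bacula option readfifo = yes, so the File Daemon reads the pipe's contents during the backup job.

    #!/usr/bin/env python3
    # Illustrative sketch (not from the paper): stream one HDFS file into a
    # named pipe (FIFO) that a Bacula FileSet with "readfifo = yes" reads.
    # The paths below are hypothetical examples.
    import os
    import subprocess

    HDFS_FILE = "/data/events/part-00000"   # hypothetical HDFS source file
    FIFO_PATH = "/backup/fifo/part-00000"   # FIFO listed in the Bacula FileSet

    def stream_hdfs_file_to_fifo(hdfs_file: str, fifo_path: str) -> None:
        # Create the FIFO if it does not exist yet.
        if not os.path.exists(fifo_path):
            os.mkfifo(fifo_path)
        # Opening the write end blocks until Bacula opens the read end,
        # so the backup job and the stream stay synchronized.
        with open(fifo_path, "wb") as pipe:
            # "hdfs dfs -cat" is a standard Hadoop client command that
            # writes the file's bytes to stdout; redirect them into the pipe.
            subprocess.run(["hdfs", "dfs", "-cat", hdfs_file],
                           stdout=pipe, check=True)

    if __name__ == "__main__":
        stream_hdfs_file_to_fifo(HDFS_FILE, FIFO_PATH)

Running several such processes in parallel, one per FIFO, would give Bacula the multiple concurrent data streams the abstract mentions, without staging HDFS files on local disk first.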