IJRCS – Volume 1 Issue 2 Paper 6


Authors: M. Akhila, R. Sujitha

Volume 01, Issue 02, Year 2014 | ISSN: 2349-3828 | Pages 27–29



Most modern distributed file systems treat the metadata service as an independent system component, separate from the data servers. The availability of the metadata service is therefore key to the availability of the overall system. Given the high failure rates observed in large-scale data centres, distributed file systems usually incorporate high-availability features. A typical approach is to design and develop the metadata service from the ground up, at significant cost in complexity and time, often leading to functional shortcomings. Our motivation in this paper is to improve on this state of affairs by defining a general-purpose architecture for highly available (HA) metadata services that can easily be incorporated and reused in new or existing file systems, reducing development time.

Existing designs based on data replication through a metadata server consume more memory, and the data are neither handled securely nor kept in a well-defined order. To overcome these drawbacks we adopt the Fragment and Shuffle (FS) algorithm. The scope of this project is a fragment-and-replicate methodology: we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each node stores only a single fragment of a particular data file, ensuring that even in the case of a successful attack, no meaningful information is revealed to the attacker.
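The fragment-and-replicate placement described above can be illustrated with a minimal sketch. The function names (`fragment_file`, `place_fragments`), the node labels, and the replica count are illustrative assumptions, not the paper's actual implementation; the sketch only shows the core invariant that each node receives at most one fragment of a given file.

```python
import random

def fragment_file(data: bytes, num_fragments: int) -> list[bytes]:
    """Split a file's bytes into roughly equal-sized fragments."""
    frag_size = -(-len(data) // num_fragments)  # ceiling division
    return [data[i:i + frag_size] for i in range(0, len(data), frag_size)]

def place_fragments(fragments: list[bytes], nodes: list[str],
                    replicas: int = 2) -> dict[int, list[str]]:
    """Shuffle the node list, then assign each fragment (and its
    replicas) to distinct nodes, so no node ever holds two fragments
    of the same file."""
    assert len(nodes) >= len(fragments) * replicas, "not enough nodes"
    order = list(nodes)
    random.shuffle(order)  # the 'shuffle' step of the FS approach
    it = iter(order)
    return {idx: [next(it) for _ in range(replicas)]
            for idx in range(len(fragments))}
```

Because the shuffled node list is consumed without repetition, a compromise of any single node exposes at most one fragment of any file.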


Keywords: Distributed File Systems, High Availability, System Recovery, Metadata Services, Fragment and Shuffle

