Big Data is one of the most in-demand technologies in the modern world of software development. In Big Data processing, distributed files are handled by the open-source software framework Hadoop, running on a cluster of commodity hardware. For Big Data storage, this framework is considered among the most powerful. The HDFS Name Node stores the metadata for all files, folders, and blocks. HDFS is specially designed to handle large files, but it does not handle a huge number of small files efficiently, because each file adds metadata that must be held in the Name Node's memory. The proposed system shows how reducing the number of small files stored in HDFS reduces the Name Node memory overhead. This approach is helpful in understanding memory consumption in the Name Node and in reducing its workload in the Hadoop distributed file system.
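To make the Name Node memory overhead concrete, the sketch below estimates its heap usage for file and block metadata. It relies on the commonly cited rule of thumb that each HDFS namespace object (file, directory, or block) costs on the order of 150 bytes of Name Node heap; the exact figure varies by Hadoop version, so treat the numbers as illustrative, and the function `namenode_bytes` is a hypothetical helper, not part of any Hadoop API.

```python
# Rough estimate of HDFS Name Node heap usage for file/block metadata.
# Assumption (rule of thumb, not an exact figure): each namespace object
# (file inode or block) costs roughly 150 bytes of Name Node heap.
BYTES_PER_OBJECT = 150
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size, 128 MiB

def namenode_bytes(num_files: int, avg_file_size: int) -> int:
    """Approximate Name Node heap for num_files files of avg_file_size bytes."""
    blocks_per_file = max(1, -(-avg_file_size // BLOCK_SIZE))  # ceiling division
    objects = num_files * (1 + blocks_per_file)  # one inode plus its blocks
    return objects * BYTES_PER_OBJECT

# Ten million 100 KiB files versus the same ~1 TB of data packed
# into full 128 MiB files (e.g. via HAR or SequenceFile packing):
small_files = namenode_bytes(10_000_000, 100 * 1024)
packed_count = 10_000_000 * 100 * 1024 // BLOCK_SIZE + 1
packed_files = namenode_bytes(packed_count, BLOCK_SIZE)
print(small_files)   # ~3 GB of Name Node heap for metadata alone
print(packed_files)  # ~2.3 MB for the same data volume
```

Under these assumptions, packing the small files into block-sized containers cuts the metadata footprint by roughly three orders of magnitude, which is the kind of saving the proposed approach targets.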