Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

Sakshi Sharma
3 min read · Mar 14, 2021

Note:

The screen with the white background is of the Master Node.

The screen with the black background is of the Slave Node.

hdfs-site.xml files for the master node and the slave node

First, I attached two hard disks of size 20 GB (/dev/sdb) and 10 GB (/dev/sdc) to my VM (RHEL 8).
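On the slave node, the newly attached disks should be visible as block devices. A quick check, assuming the device names /dev/sdb and /dev/sdc mentioned above:

```shell
# List the two new block devices and their sizes
# (/dev/sdb ~ 20 GB, /dev/sdc ~ 10 GB in this setup)
lsblk /dev/sdb /dev/sdc

# Alternatively, print their details with fdisk (requires root)
fdisk -l /dev/sdb /dev/sdc
```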

Using the pvcreate command, I created physical volumes from the attached hard disks. After creating the PVs, I used the vgcreate command to create a volume group named “hadoop_vg”.
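The two steps, sketched with the device names from above and the VG name hadoop_vg (run as root):

```shell
# Initialize both disks as LVM physical volumes
pvcreate /dev/sdb /dev/sdc

# Verify the PVs
pvdisplay /dev/sdb /dev/sdc

# Create one volume group spanning both PVs
vgcreate hadoop_vg /dev/sdb /dev/sdc
```

Because the VG spans both PVs, its total capacity is roughly 30 GB (20 GB + 10 GB, minus a little LVM metadata overhead).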

Details of this VG can be seen in the following screenshot.
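In place of the screenshot, the same details can be printed with:

```shell
# Show VG size, physical extent size, free extents, and member PVs
vgdisplay hadoop_vg
```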

Now, using the lvcreate command, I created one 12 GB logical volume named “lv1” in “hadoop_vg”.
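A sketch of that command, assuming the LV and VG names used above:

```shell
# Carve a 12 GB logical volume named lv1 out of hadoop_vg
lvcreate --size 12G --name lv1 hadoop_vg

# Confirm the LV and its size
lvdisplay /dev/hadoop_vg/lv1
```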

After creating the LV, I formatted it with the mkfs.ext4 command.
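Formatting the LV at its device-mapper path (assuming the names above):

```shell
# Create an ext4 filesystem on the logical volume (requires root)
mkfs.ext4 /dev/hadoop_vg/lv1
```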

Then I created a directory named “hadoop_lvm” and mounted the LV on it using the mount command.
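Assuming the directory lives at the filesystem root as /hadoop_lvm (the exact path isn’t shown in the post), the steps look like:

```shell
# Create the mount point and mount the LV on it
mkdir /hadoop_lvm
mount /dev/hadoop_vg/lv1 /hadoop_lvm

# Verify the mount and the usable size
df -h /hadoop_lvm
```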

Now, I updated the DataNode directory property in the hdfs-site.xml file of the slave node to point to the “hadoop_lvm” directory.
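A minimal sketch of that configuration, assuming the /hadoop_lvm mount point from above (the property is dfs.data.dir on Hadoop 1.x; on Hadoop 2.x+ it is named dfs.datanode.data.dir):

```xml
<!-- hdfs-site.xml on the slave (DataNode) -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop_lvm</value>
  </property>
</configuration>
```

After saving this, the DataNode service must be restarted for the new storage directory to take effect.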

As the Hadoop report on the slave node shows, the slave node is contributing 11.75 GB of storage as of now.
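The report itself can be printed with:

```shell
# Print the cluster storage report; the DataNode's configured capacity
# shows ~11.75 GB rather than 12 GB because of ext4 filesystem overhead
hadoop dfsadmin -report
```

(On Hadoop 2.x+ the equivalent command is `hdfs dfsadmin -report`.)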

This blog is continued in the next one, where I extend the size of this storage.

Tags:

#arthbylw #vimaldaga #righteducation #educationredefine #rightmentor
#worldrecordholder #ARTH #linuxworld #makingindiafutureready #righeudcation #docker #webserver #elasticity #lvm
