GlusterFS /etc/fstab mount options and GlusterFS/NFS testing in Ubuntu 22.04

Posted on Sunday, March 5, 2023

I got a basic setup working for GlusterFS, and now I want to make sure I am mounting it correctly and do some testing with NFS and GlusterFS in /etc/fstab.

What I currently have

 

Just to show what I currently have.

 

For details on building out the GlusterFS servers and bricks, see http://www.whiteboardcoder.com/2023/02/installing-glusterfs-103-on-ubuntu-2204.html [1]. This article just covers mounting it in /etc/fstab.

I have 3 GlusterFS servers in the Trusted Storage Pool.

 

Two servers have two bricks and the third has one brick.

I have created two volumes.  Volume-one has a replication factor of 2 and is spread across 2 bricks.  Volume-two has a replication factor of 3 and is spread over 3 bricks.
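
For reference, volumes with that layout would have been created with commands roughly like these (the hostnames and brick paths below are placeholders, not my actual ones; see [1] for the real commands):

 > sudo gluster volume create volume-one replica 2 \
     gfs1:/gluster/brick1/volume-one gfs2:/gluster/brick1/volume-one
 > sudo gluster volume create volume-two replica 3 \
     gfs1:/gluster/brick2/volume-two gfs2:/gluster/brick2/volume-two \
     gfs3:/gluster/brick1/volume-two
 > sudo gluster volume start volume-one
 > sudo gluster volume start volume-two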

I have no special firewall rules in place; everything is on the same network, and other servers can access all the GlusterFS servers.

 

Install GlusterFS 10.3 on Ubuntu 22.04

OK, you could install GlusterFS on the client server by simply running

 

  > apt install glusterfs-client

 

But that would install GlusterFS 10.1. See:

 

  > apt show glusterfs-client
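
On stock Ubuntu 22.04 the output looks something like this (trimmed; the exact version string may differ):

Package: glusterfs-client
Version: 10.1-1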

 



Which, to be honest, would be OK. But I want the latest and greatest, version 10.3. See https://www.gluster.org/release-schedule/ [2]

There is a version 11.0, but that is too new for me at this point.

So here is the procedure to download and install 10.3.

Run the following command to add the PPA:

 

 > sudo add-apt-repository ppa:gluster/glusterfs-10

 

Now update

 

 > sudo apt update

 

Check version

 

 > apt show glusterfs-client

 

That looks good

Now install glusterfs

 

 > sudo apt install glusterfs-client

 

 

 

OK, now let me see if I can query the GlusterFS storage pool and its volumes…

There is no current GlusterFS-native way to do this from a client; see https://www.gluster.org/finding-gluster-volumes-from-a-client-machine/ [3]. But you can use the NFS tool showmount to find the names of GlusterFS volumes.

First let me install some tools

 

 > sudo apt-get install nfs-common

 

Now let’s try to list volumes using the NFS tool showmount.

 

 > showmount -e 192.168.0.200

 

Hmm, looks like I need to set up some NFS stuff on the server side first.
I will attempt that later, but for now there is no simple way to do this.

As an alternative I am going to login to one of the gluster servers in the trusted storage pool and list volumes.

 

 > sudo gluster volume list

 


Or get more details

 

 > sudo gluster volume info
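
Given the setup described above, the output for each volume looks roughly like this (trimmed, with placeholder brick paths):

Volume Name: volume-one
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/gluster/brick1/volume-one
Brick2: gfs2:/gluster/brick1/volume-one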

 

 

Now let’s see if we have any clients attached (there should be none at this point).

 

 > sudo gluster volume status volume-one client-list
 > sudo gluster volume status volume-one clients
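
The client-list output looks roughly like this (column spacing trimmed):

Client connections for volume volume-one
Name          count
-----         ------
glustershd    3

total clients for volume volume-one : 3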

 

 


Here we can see that we have 3 clients connected via glustershd (the self-heal daemon),
and we can see the IPs of the other GlusterFS servers. So no real client is mounted yet, just the other servers.

 

 > sudo gluster volume status volume-two client-list
 > sudo gluster volume status volume-two clients

 

OK, looks good. Now let’s fiddle with /etc/fstab and get it mounted.

So back to the client server…

 

Create Folders

First let me make simple directories to mount to and give myself ownership of them.

 

  > sudo mkdir /volume_one_client
  > sudo mkdir /volume_two_client
 

  > sudo chown $USER:$USER /volume_one_client/
  > sudo chown $USER:$USER /volume_two_client/

   

 

My user name happens to be patman.

OK, now to edit /etc/fstab so we can auto-mount GlusterFS.

 

  > sudo vi /etc/fstab

 

And place the following lines into it (to mount both volumes)

 

#defaults = rw, suid, dev, exec, auto, nouser, and async.

192.168.0.200:/volume-one /volume_one_client glusterfs defaults,_netdev 0 0

192.168.0.200:/volume-two /volume_two_client glusterfs defaults,_netdev 0 0

 

Note I could have also used 192.168.0.201 or 192.168.0.202, since those are also part of the Gluster Trusted Storage Pool.
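
Worth noting for later: the GlusterFS FUSE client supports a backup-volfile-servers mount option listing fallback servers to fetch the volume file from if the primary is unreachable at mount time. Something like this should work here:

192.168.0.200:/volume-one /volume_one_client glusterfs defaults,_netdev,backup-volfile-servers=192.168.0.201:192.168.0.202 0 0

This only matters at mount time; once mounted, the FUSE client talks to all the bricks directly.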

OK now save that and reboot

 

  > sudo reboot

 

Then check if they are mounted properly

 

  > df -h
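
You can also list just the GlusterFS mounts by filesystem type:

 > mount -t fuse.glusterfs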

 

 

And if I run this from one of the gluster servers in the trusted storage pool:

 

 > sudo gluster volume status volume-one client-list
 > sudo gluster volume status volume-one clients

 

 

Now we can see that we have one client mounted via FUSE.

 

There we can see the client mounted from 192.168.0.170.

 

 > sudo gluster volume status volume-two client-list
 > sudo gluster volume status volume-two clients

 

 

 

 

 

 

Mount via NFS

To get this set up I do need to do a bit of work on the servers and clients.

First install nfs-common (run this on all servers and the client).

 

 > sudo apt-get install nfs-common

 

 

Let me try a quick test: unmount one of the drives and attempt to mount it via NFS using these commands.

 

 > sudo umount /volume_one_client
 > sudo mount -t nfs -o vers=3 192.168.0.200:volume-one /volume_one_client

 


 

mount.nfs: requested NFS version or transport protocol is not supported

 

 

 

Hmmm, not working yet.

I think we have a settings issue.

Run this from one of the GlusterFS servers:

 

 > sudo gluster volume info

 

 

There is the bad option: nfs.disable is set to on, and it should be nfs.disable: off.
At any rate, it needs to be flipped.

Run this to flip it

 

 > sudo gluster volume set volume-one nfs.disable off
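
When you flip this, the CLI should warn you that the built-in NFS server is on its way out; the prompt looks something like this:

Gluster NFS is being deprecated in favor of NFS-Ganesha. Enter "yes" to continue using Gluster NFS (y/n)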

 

 

Hmm, I guess I will need to go look into NFS-Ganesha.

Let’s see if that worked

 

 > sudo gluster volume info

 

 


Seems to have done the job.

 

 > sudo mount -t nfs -o vers=3 192.168.0.200:volume-one /volume_one_client

 

 

Not supported…

I don’t think it’s worth fighting this; the Ubuntu packages likely don’t ship the legacy Gluster NFS (gNFS) server at all. I am going to try the NFS-Ganesha setup instead.

First let me flip nfs.disable back:

 

 > sudo gluster volume set volume-one nfs.disable on

 

 

 


 

Mount via NFS with NFS-Ganesha

I am using this guide to start off with https://docs.gluster.org/en/main/Administrator-Guide/NFS-Ganesha-GlusterFS-Integration/#installing-nfs-ganesha  [4]

The built-in NFS server has been deprecated in GlusterFS; they now recommend using NFS-Ganesha. I am going to try to get this set up.

From the GlusterFS server, let’s install nfs-ganesha.

Run the following to install

 

 > sudo apt-get install nfs-ganesha-gluster

 

See if nfs-ganesha is running

 

 > sudo systemctl status nfs-ganesha

 

It is running. Now see what mounts it exposes:

 

 > showmount -e localhost

 

 

 

Nothing there yet.

Back up the old config files:

 

 > sudo mv /etc/ganesha/gluster.conf /etc/ganesha/gluster.conf.ORIG
 > sudo mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.ORIG

 

Now create a fresh ganesha.conf:

 

 > sudo vi /etc/ganesha/ganesha.conf

 

Found some help with the config here https://github.com/nfs-ganesha/nfs-ganesha/wiki/GLUSTER [5]

And place the following in it.

 

###################################################
#
# EXPORT
# GlusterFS setup
#
###################################################

EXPORT
{
  Export_Id = 121;              # Just a unique ID for each EXPORT
  Path = "/volume-one";         # Assuming 'volume-one' is the Gluster volume name
  Pseudo = "/volume-one";       # Required for NFS v4 

  Access_Type = RW;
  Squash = No_Root_Squash;      # Allow root access
  SecType = "sys";              # Security Flavor Supported

  FSAL {
    name = Gluster;             # Backing type is Gluster
    Hostname = "192.168.0.200"; # IP address of this node
    volume = "volume-one";      # The name of the GlusterFS Volume
  }
}

 

Now restart nfs-ganesha.

 

 > sudo systemctl restart nfs-ganesha

 

Now test from the glusterfs server

 

 > showmount -e localhost
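
If the export registered, the output should look something like:

Export list for localhost:
/volume-one (everyone)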

 

 

 

Now let me check from my client machine

 

 > showmount -e 192.168.0.200

 

 

OK, now run the following command from the client to mount this volume.

 


 

 

 

 > sudo mount -t nfs -o vers=4 192.168.0.200:volume-one /volume_one_client

 

 

 

That worked.
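
To double-check that this really is NFS this time and not the FUSE client, you can list mounts by type:

 > mount -t nfs4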

Let me add the second volume to the nfs-ganesha config.

 

 > sudo vi /etc/ganesha/ganesha.conf

 


And make it look like the following (both exports):

 

###################################################
#
# EXPORT
# GlusterFS setup
#
###################################################

EXPORT
{
  Export_Id = 121;              # Just a unique ID for each EXPORT
  Path = "/volume-one";         # Assuming 'volume-one' is the Gluster volume name
  Pseudo = "/volume-one";       # Required for NFS v4 


  Access_Type = RW;
  Squash = No_Root_Squash;      # Allow root access
  SecType = "sys";              # Security Flavor Supported 

  FSAL {
    name = Gluster;             # Backing type is Gluster
    Hostname = "192.168.0.200"; # IP address of this node
    volume = "volume-one";      # The name of the GlusterFS Volume 
  }
}

EXPORT
{
  Export_Id = 122;              # Just a unique ID for each EXPORT
  Path = "/volume-two";         # Assuming 'volume-two' is the Gluster volume name
  Pseudo = "/volume-two";       # Required for NFS v4 

  Access_Type = RW;
  Squash = No_Root_Squash;      # Allow root access
  SecType = "sys";              # Security Flavor Supported 

  FSAL {
    name = Gluster;             # Backing type is Gluster
    Hostname = "192.168.0.200"; # IP address of this node
    volume = "volume-two";      # The name of the GlusterFS Volume
  }
}


Now restart nfs-ganesha.

 

 > sudo systemctl restart nfs-ganesha

 

Now test from the glusterfs server

 

 > showmount -e localhost

 

 

OK, now from the client, unmount both drives and mount them via NFS.

 

 

 > sudo umount /volume_one_client
 > sudo umount /volume_two_client 

 > sudo mount -t nfs -o vers=4 192.168.0.200:volume-one /volume_one_client
 > sudo mount -t nfs -o vers=4 192.168.0.200:volume-two /volume_two_client

 

 

Nice 😊


Now let me fiddle with /etc/fstab

 

 > sudo vi /etc/fstab

 

And place the following in it.

 

192.168.0.200:/volume-one /volume_one_client nfs defaults,_netdev,vers=4 0 0
192.168.0.200:/volume-two /volume_two_client nfs defaults,_netdev,vers=4 0 0
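
One possible tweak, untested here: if the NFS server might be unreachable at boot, adding nofail lets the boot continue instead of hanging on the mount:

192.168.0.200:/volume-one /volume_one_client nfs defaults,_netdev,vers=4,nofail 0 0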

 

 

Now reboot

 

 

 > sudo reboot

 

 

 

Looks happy

I guess as a last test I should put a file on both mounted volumes.

 

 > echo "THIS IS FILE STUFF" >> /volume_one_client/my_file.txt
 > echo "THIS IS FILE STUFF" >> /volume_two_client/my_file.txt

 

 

 

Oops, had a failure on volume_two_client…

 

Hmm, it’s a permissions issue.
Looks like the folders are owned by root and not by my user…

Let me fix that and reboot… hopefully it was a fluke.

 

  > sudo chown $USER:$USER /volume_one_client/
  > sudo chown $USER:$USER /volume_two_client/

   

 

OK try again

 

 > echo "THIS IS FILE STUFF" >> /volume_one_client/my_file.txt
 > echo "THIS IS FILE STUFF" >> /volume_two_client/my_file.txt

 

 

 

Yep, happy now. Tried a reboot and it’s still happy.
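
As one more sanity check, you can look for the file directly on the bricks from one of the gluster servers (the brick path here is a placeholder for wherever your bricks live):

 > sudo ls -l /gluster/brick1/volume-one/

With a replica volume, the same file should show up on every brick in its replica set.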

 


 

Thoughts???

 

A few questions I would like to explore

 

1. Why use NFS vs. GlusterFS for client connections? Aside from being compatible with something that needs NFS, since I can use either GlusterFS or NFS from another Linux box, are there any advantages to using NFS?

2. For completeness, should I install NFS-Ganesha on all GlusterFS servers? Or at least on more than one, so I have a backup?

3. NFS-Ganesha has some cluster settings; would those be wise to use?

4. Since I am mounting from the first GlusterFS server… what happens if it goes down? I know I could update my /etc/fstab, but is there a better way to mount this?

 

 


 

References

 

[1]    Installing GlusterFS 10.3 on Ubuntu 22.04 and get it working
       http://www.whiteboardcoder.com/2023/02/installing-glusterfs-103-on-ubuntu-2204.html
       Accessed 03/2023

[2]    Gluster release schedule
       https://www.gluster.org/release-schedule/
       Accessed 03/2023

[3]    Finding Gluster volumes from a client machine
       https://www.gluster.org/finding-gluster-volumes-from-a-client-machine/
       Accessed 03/2023

[4]    Installing NFS-Ganesha (GlusterFS Administrator Guide)
       https://docs.gluster.org/en/main/Administrator-Guide/NFS-Ganesha-GlusterFS-Integration/#installing-nfs-ganesha
       Accessed 03/2023

[5]    Configuring the specific stuff for FSAL_GLUSTER (nfs-ganesha wiki)
       https://github.com/nfs-ganesha/nfs-ganesha/wiki/GLUSTER
       Accessed 03/2023

 

 

 
