Multiple NVMe Drives in raid0

Chudz

PG Sergeant
PG Donor
Mar 19, 2018
#1
Was asked to post this on Discord.

sudo apt-get update && sudo apt-get install mdadm --no-install-recommends

lsblk
(This will tell you the names of the NVMe drives so you can use them in the command below; they are usually nvme0n1, nvme0n2, etc.)

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme0n2
mkfs.ext4 -F /dev/md0

lsblk

(This will tell you the name of the raid0 device. It should be md0, but for some reason it can change to md127, so use the command above to check.)

mount -o discard,defaults,nobarrier /dev/md0 /mnt
OR
mount -o discard,defaults,nobarrier /dev/md127 /mnt
(Depends on what it decided to name it. Seems it has a mind of its own.)

chmod 777 /mnt
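
As a side note (my own addition, not part of the original commands): if the md0/md127 renaming bothers you, you can mount by the filesystem UUID instead, which stays the same regardless of what the kernel decides to call the array. Something like the following should work, with the placeholder swapped for the UUID that blkid actually prints:

blkid
(find the line for your md device and copy its UUID)
mount -o discard,defaults,nobarrier UUID=<uuid-from-blkid> /mnt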
 

thepj

PG Master Sergeant
Tech Lead
Nov 13, 2018
#2
@Admin9705 can you please have this added to the initial setup wiki, or a link to another wiki we can put more advanced configurations on? This is something quite a few people in Discord are asking for, myself included.

Thank you so much @Chudz, this is wonderful
 

Chudz

PG Sergeant
PG Donor
Mar 19, 2018
#3
There is probably a better way to mount it than using that command, for example a systemd service, but I'm not 100% sure how to set that up lol. With the command above you will have to run mount again to re-mount it after a reboot, and once you have done that you also have to restart unionfs.service and your docker containers.

Restart Unionfs:
systemctl restart unionfs.service

Restart all docker containers:
docker stop $(docker ps -a -q) && docker start $(docker ps -a -q)
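
For the systemd route mentioned above, a minimal sketch of a mount unit would look roughly like this. This is my assumption of how to wire it up, not a tested part of this guide; note that systemd requires the unit file to be named after the mount point, so /mnt becomes mnt.mount. Create /etc/systemd/system/mnt.mount containing:

[Unit]
Description=RAID0 NVMe array for PG processing

[Mount]
What=/dev/md0
Where=/mnt
Type=ext4
Options=discard,defaults,nobarrier

[Install]
WantedBy=multi-user.target

Then enable it so it mounts on every boot:

systemctl daemon-reload && systemctl enable --now mnt.mount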
 

Somedude

PG First Class
Tech Lead
Oct 28, 2018
#6
NOTE: This is not a normal PG setup, and it may introduce new variables not accounted for by the script. While several users have used this setup successfully, it is a use-at-your-own-risk setup and it makes things harder to troubleshoot. If you do go forward and have any useful information or issues, feel free to ping @Somedude on Discord or reply here.

Here is how I have mine set up. I will create a more in-depth setup guide to be added to the wiki as soon as I get a chance. I finally just created a GitHub account.

Benefits of RAID 0 NVMe:
GCP has significant bandwidth. Section 3 of the article HERE mentions that each core can give you up to 2 Gbps of network throughput, with a max of 16 Gbps. I have not experienced speeds of 16 Gbps, which may be due to usenet provider / Gdrive speed limitations based on file size, but I consistently get speeds in the 2 Gbps range.

With speeds this fast, you need a server powerful enough to use them. The first issue is the IO bottleneck of normal hard drive/SSD speeds. GCE solves this with an NVMe disk, but it can be pushed harder: using RAID 0 increases the available IO significantly. The speeds seem to plateau after 4x NVMe. When increasing your drive speeds you also need to increase CPU cores so that programs like nzbget have the processing power to unpack faster. RAM is not heavily used.

ROUGH recommended specs: for a 2x NVMe setup, 4 cores and 4GB RAM, such as the n1-highcpu-4 plan; for 4x NVMe, either a custom 6-CPU/6GB RAM machine or the n1-highcpu-8 (8 cores, 7.2GB RAM). While running the n1-standard-8 with 4x NVMe, I have not observed more than 75% CPU usage and averaged around 10% RAM usage while downloading over 10k files with nzbget over the last 48 hours; that is roughly 6 cores and 3GB of RAM being utilized. Having the minimum CPU setup that won't cause a bottleneck will reduce your credit usage so you can run this longer. If you observe different usage, please let me know so I can log it for better recommendations.

Here's the nitty gritty of the setup:
Create an instance to be used. Use the recommendations above to decide on a machine type. Under boot disk I recommend an SSD persistent disk with 30GB storage and the Ubuntu 18.04 LTS minimal image. The PG programs will be running off the boot disk, which is why I recommend the SSD, so that Sonarr/Radarr are not bottlenecked. Only 30GB is needed because all the downloading and processing will be done on the NVMe drives; you just need enough space for the OS, the programs, and their related files/caches. As for the OS, I had the best experience with Ubuntu, since I ran into major issues with Debian. I will do some further testing with Debian to see if I notice the same results; if you get it working with Debian successfully, please let me know.

Once you have the machine type and boot disk set up, you need to configure the firewall and NVMe drives. Under firewall, make sure you select "Allow HTTP traffic" and "Allow HTTPS traffic"; failing to do so may leave you unable to access the programs in the browser. Underneath firewall, click "Management, security, disks, networking, sole tenancy", then select "Disks", then "Add new disk". Under type, select "Local SSD scratch disk" and make sure to select "NVMe" rather than "SCSI" underneath that for the best performance. Then select how many NVMe disks you want, and don't forget to click "Done" at the bottom! Do note that (at least in the region I chose) NVMe disks cost $36/month PER disk, so a 4x NVMe setup is going to use $144/month of your credits. Once you have done all this, select "Create" and wait approximately 5 minutes while the instance is set up.
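
For reference, roughly the same instance can be created from the gcloud CLI instead of the console. Treat this as a sketch only: the instance name and zone are placeholders, and you should double-check the machine type, image family, and number of --local-ssd flags against what you actually want (one flag per NVMe disk):

gcloud compute instances create pg-nvme \
  --zone=us-central1-a \
  --machine-type=n1-highcpu-8 \
  --boot-disk-size=30GB --boot-disk-type=pd-ssd \
  --image-family=ubuntu-minimal-1804-lts --image-project=ubuntu-os-cloud \
  --local-ssd=interface=NVME --local-ssd=interface=NVME \
  --local-ssd=interface=NVME --local-ssd=interface=NVME \
  --tags=http-server,https-server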

Once you are SSH'd into the server, here are the commands I used. Note: I am using the root account (sudo su) for the setup to speed things up. I highly encourage you to follow the wiki to add a sudo user and add sudo to the commands as needed. If you do use the root account for setup, note that you will have to log back into root to make any changes, as the setup will not show up under your default user.

apt-get update && apt-get upgrade to make sure everything is up to date since it is a new server

apt-get install mdadm --no-install-recommends Installs mdadm, which is the software used to set up RAID. The --no-install-recommends flag avoids installing unneeded dependencies.

lsblk Used to view attached drives. Please note the names of the NVMe drives. You should see sda with one or more partitions such as sda1/14/15; this is your boot drive, DO NOT TOUCH IT OR USE IT IN A COMMAND. The NVMe drives will show as 375G each and will be labeled nvme0n1/2/3/4 or sdb, sdc, sdd, etc., depending on how Google set up your server.
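
For reference, on a 4x NVMe instance the output looks roughly like this (sizes and partition numbers are illustrative, yours will differ slightly):

NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   30G  0 disk
├─sda1    8:1    0 29.9G  0 part /
├─sda14   8:14   0    4M  0 part
└─sda15   8:15   0  106M  0 part /boot/efi
nvme0n1 259:0    0  375G  0 disk
nvme0n2 259:1    0  375G  0 disk
nvme0n3 259:2    0  375G  0 disk
nvme0n4 259:3    0  375G  0 disk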

mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4
or
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
This command creates the actual raid array. /dev/md0 is the name of the array; you can change this to /dev/md1, /dev/md69, etc. if you feel the need to rename it, just make sure /dev/ is in front of it. --level=0 sets the raid level: 0 does raid 0, changing it to 5 does raid 5, and so on. --raid-devices=4 is how many drives you want in the raid array; change it to 2 for 2 NVMe drives, 3 for 3, etc. Lastly, the /dev/nvme0n1 ... section lists the names of the NVMe drives. The number of /dev/* drives listed here must match what --raid-devices is set to, or you will run into problems. If it was created successfully you should get a response such as "mdadm: array /dev/md0 started."

cat /proc/mdstat This is to verify the raid setup is active
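
A healthy 4-drive array looks roughly like this (the block count and device order are illustrative, not exact):

Personalities : [raid0]
md0 : active raid0 nvme0n4[3] nvme0n3[2] nvme0n2[1] nvme0n1[0]
      1464845312 blocks super 1.2 512k chunks

unused devices: <none>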

mkfs.ext4 -F /dev/md0 This formats the raid array with an ext4 filesystem. Replace /dev/md0 with your custom name if you changed it.

mkdir -p /mnt This creates the /mnt directory that PlexGuide uses to download and process files (on most systems /mnt already exists; the -p flag just makes the command safe either way).

mount /dev/md0 /mnt This mounts the raid array to the /mnt folder. You can use /mnt like a normal folder, but instead of the files being stored on the boot drive they will be stored on your raid array. Change /dev/md0 if you have a custom name.

df -h -x devtmpfs -x tmpfs This shows all your mounted drives. If you did it right you should see something such as:
"/dev/md0 1.5T 77M 1.4T 1% /mnt", which shows your raid array, its available storage, and that it is mounted on /mnt.

You now have a working raid array to use with PG! Now, as a sanity check, let's save the raid array so that it mounts itself even when the server is rebooted, and verify that it works. Without this, if your server ever reboots the array will come back unmounted, requiring you to re-mount it after every restart; failing to re-mount it can cause a lot of issues that lead you down a rabbit hole.

mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf Used to save the raid array in mdadm
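
On Ubuntu it is probably also worth running the following after updating mdadm.conf, so the config gets baked into the initramfs and the array keeps its md0 name at boot instead of coming back as md127 (an extra step I'm adding, not strictly required):

update-initramfs -u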


nano /etc/fstab Used to open fstab in a text editor. You can use any text editor you like; I just prefer nano (you may have to run apt-get install nano if it is missing). fstab is your filesystem table, and you want to save your raid array in it. Add the following to the bottom of the file:
/dev/md0 /mnt ext4 defaults 0 0

Again, replace /dev/md0 if you used a custom name. If you are using nano, press CTRL + X to exit; when it asks if you want to save, type "Y" and then press Enter.
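
Optionally (my own suggestion rather than part of the original steps), adding nofail to the options means the server will still finish booting even if the array fails to assemble for some reason, instead of hanging at startup:

/dev/md0 /mnt ext4 defaults,nofail 0 0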

Now for some checks! I mean the kind that make sure everything worked properly.

mount -av shows an overview of fstab to verify you added the array correctly. You should see /mnt already mounted.

touch /mnt/test.txt This creates a blank .txt file in the /mnt directory to verify you can write to it

echo "This is a test" > /mnt/test.txt This simply adds text to the text file you created above. Change the text how you like.

cat /mnt/test.txt This reads what is in the file. You should get a response saying "This is a test"

ls -l /mnt This shows you what is in the /mnt directory. You should see test.txt

Now go ahead and reboot your server. Once it's back online you should be able to run mount -av, ls -l /mnt, and cat /mnt/test.txt again with the same results as above. If that all worked, your raid array will survive reboots!

Once everything is ready to go, you can install plexguide normally. When it asks if you want to change the processing disk, select "no". Once PG is set up, go ahead and start downloading, run netdata, and enjoy watching the madness!

Again, any suggestions to improve this guide or issues, let me know. I want everything nailed down before adding this to the official wiki.
 

thepj

PG Master Sergeant
Tech Lead
Nov 13, 2018
#8
@Somedude said: (the full 4x NVMe RAID 0 guide from post #6 above, quoted in full)

This is a great write up, I didn't read it all, it's a lot :p

However, I have seen the benefit of having 3x NVMe: faster extraction and repair, and a larger buffer for items that have not been uploaded to Google Drive yet. If you are going to do anything faster than 100MB/s, 3x NVMe is great. I haven't gone over the need for 2x NVMe, and even with a ton of items in post-processing I haven't maxed out the available space. Anywhere between 1MB/s and 100MB/s I can really see the benefit of 2x NVMe in RAID 0; it really has made a difference, especially from the space perspective while we wait for Sonarr/Radarr to process/move the file.
 

Somedude

PG First Class
Tech Lead
Oct 28, 2018
#9
@thepj said: (reply quoted from post #8 above)

Later on it mentions recommended specs for 2x and 4x NVMe drives. It can be changed to 3; that's why I broke down the mdadm commands, so people can adjust it to how they want to do it. Personally I'm rocking the 4-drive setup. I am getting sustained 200 MB/s speeds on most files over 1GB, and am uploading as fast as I'm downloading, so the space really isn't even for a queue anymore, just IO.
 
