
Guides - Stuck with poor performance - Local Cache for T Drive using Cachefilesd


jjnz123 · Citizen · Original poster · Donor · Apr 14, 2020
Hi folks

One of my goals is to figure out how to create a local cache drive to store frequently/recently accessed media rather than pull it from T Drive all the time. (Big assumption: this feature is not native to PG).

My instructions for completing this are below, but I seem to be stuck in two places:
1. Normally I can grab a file at about 15 MB/s (rsync from unionfs/tv to /tmp), but when I grab it from cache/tv it hovers around 500 KB/s when testing with rsync (a repeatable way I compare cold and warm reads is sketched just below).
2. I am not sure the cache is even filling up correctly.
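A minimal way I compare a cold read (pulled from Google) with a warm read (hopefully served from the local cache); the file name is hypothetical, substitute something that actually exists in the library:

# First read: comes through unionfs/rclone from T Drive, so it reflects Google throughput
dd if=/mnt/cache/tv/Some.Show/S01E01.mkv of=/dev/null bs=1M status=progress
# Drop the kernel page cache so the next read has to hit cachefilesd on disk, not RAM
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
# Second read: should now be served from /var/cache/fscache at local-disk speed
dd if=/mnt/cache/tv/Some.Show/S01E01.mkv of=/dev/null bs=1M status=progress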

I thought I would post here before I sleep on it and give it another go in a few days.

A quick why?
In New Zealand we are quite a long way from Google's servers, so download throughput is an issue even on my 1Gbps connection. Since I run a local Plex server against media stored on T Drive, it makes sense to cache the most recently/frequently used media locally and avoid downloading the same file multiple times. I was hoping to eventually use a 4TB drive as the cache.

Happy for all comments including ones telling me this is a silly idea because....

Thanks!

PS sorry for the formatting below - it didn't copy well from my Confluence page.

=======================================
Local Cache for T Drive - Confluence Wiki:

There are a few components required to ensure a local cache is available for use when your primary storage is cloud storage such as T or G Drive.

Ubuntu 18.04 LTS:

Components required:

  • NFS server installed
  • NFS shares set up
  • Cachefilesd set up
  • NFS shares mounted using FSC options for cache
  • Commit mounts to FSTAB to make them permanent
  • Restart Plex Container
  • Change Plex library locations
Other material to consider:

Architectural Design
The following is a rough design of how the system will work.

[insert pic]
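Roughly, the data path looks like this (my sketch, assuming the usual PG layout where local /mnt/move is merged with the rclone remote under /mnt/unionfs):

Plex (Docker container)
  └─ /mnt/cache/tv and /mnt/cache/movies   (NFS client mounts with the fsc option)
       │   repeat reads are served from /var/cache/fscache on local disk
       └─ localhost NFS server exporting /mnt/unionfs/tv and /mnt/unionfs/movies
             └─ unionfs merge of local /mnt/move and the rclone Google T Drive remote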

Install NFS Server
sudo apt-get update
sudo apt install nfs-kernel-server
Set up NFS Shares
We want only 'localhost' to have permission to access the NFS shares. This permission is defined in the exports file located in your system's /etc folder. Open the file in the nano editor:

sudo nano /etc/exports
Then add the following to the bottom of the exports file:

# Google T drive TV location
/mnt/unionfs/tv localhost(ro,no_subtree_check,fsid=0)
# Google T drive Movie location
/mnt/unionfs/movies localhost(ro,no_subtree_check,fsid=1)
This defines read-only NFS shares that can be mounted later on.

Now commit the changes and restart the NFS kernel server:

sudo exportfs -a
sudo systemctl restart nfs-kernel-server
Now test that the mounts have been set up properly:

sudo exportfs
Should show something like this:

/mnt/unionfs/tv        localhost
/mnt/unionfs/movies    localhost
Set Up Cachefilesd
The cachefilesd daemon manages the cache data store used by network filesystems such as AFS and NFS to cache data locally on disk.

sudo apt-get install cachefilesd
The local file caching configuration file is located at /etc/cachefilesd.conf.
The /var/cache/fscache directory will be created and used by default as cache storage.

Make sure the dir option is set to /var/cache/fscache in /etc/cachefilesd.conf.
The scope of this wiki is to use local storage; however, a separate ext3/ext4 filesystem can be created and mounted at /var/cache/fscache, in which case its size can be anything you want.
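For reference, this is roughly what the relevant part of /etc/cachefilesd.conf looks like with the stock defaults (the tag name is arbitrary; the b*/f* values are culling thresholds expressed as percentages of free blocks/files on the cache filesystem):

# /etc/cachefilesd.conf (stock defaults)
dir /var/cache/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%

One Ubuntu/Debian gotcha: the packaged daemon ships disabled, so it will refuse to start until RUN=yes is uncommented in /etc/default/cachefilesd:

# /etc/default/cachefilesd
RUN=yes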

NFS Shares Mounted
Set up the main folder where your NFS shares will be mounted for accessing your movies and tv shows, along with the sub-directories.

sudo mkdir /mnt/cache
sudo mkdir /mnt/cache/tv
sudo mkdir /mnt/cache/movies
Mount new NFS shares onto the above sub-directories:

sudo mount -t nfs -o fsc localhost:/mnt/unionfs/tv /mnt/cache/tv
sudo mount -t nfs -o fsc localhost:/mnt/unionfs/movies /mnt/cache/movies
The fsc mount option is what tells the NFS client to cache these mounts through FS-Cache.

Now start cachefilesd:

sudo systemctl start cachefilesd
Set Cachefilesd to autostart:

sudo update-rc.d cachefilesd defaults
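On Ubuntu 18.04 (a systemd system) the equivalent, arguably more idiomatic, command is:

sudo systemctl enable cachefilesd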
Now test that the mounts have taken properly:

cat /proc/fs/nfsfs/volumes
Should show something like this:

NV SERVER PORT DEV FSID FSC
v3 7f000001 801 0:123 0:0 yes
v3 7f000001 801 0:124 1:0 yes
Another test:

nfsstat -m
Shows output like the following (note the fsc option in each Flags line):

/mnt/cache/tv from localhost:/mnt/unionfs/tv
Flags: rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=35539,mountproto=udp,fsc,local_lock=none,addr=127.0.0.1

/mnt/cache/movies from localhost:/mnt/unionfs/movies
Flags: rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=60274,mountproto=udp,fsc,local_lock=none,addr=127.0.0.1

Commit Mounts to FSTAB
If the machine were rebooted, these mounts would not be re-established automatically. To make them persistent across reboots, add entries like the following to the /etc/fstab file:

sudo nano /etc/fstab
Add the following lines:

localhost:/mnt/unionfs/movies /mnt/cache/movies nfs rsize=524288,wsize=524288,hard,timeo=600,retrans=2,sec=sys,fsc,local_lock=none 0 0
localhost:/mnt/unionfs/tv /mnt/cache/tv nfs rsize=524288,wsize=524288,hard,timeo=600,retrans=2,sec=sys,fsc,local_lock=none 0 0

Now make sure the mounts mount correctly from the fstab file:

Unmount tv and movies (if they are already mounted)

sudo umount /mnt/cache/tv
sudo umount /mnt/cache/movies
Now try mounting them via fstab:

sudo mount /mnt/cache/tv
sudo mount /mnt/cache/movies
And check with

nfsstat -m
You should see:

/mnt/cache/tv from localhost:/mnt/unionfs/tv
Flags: rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=60274,mountproto=udp,fsc,local_lock=none,addr=127.0.0.1

/mnt/cache/movies from localhost:/mnt/unionfs/movies
Flags: rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=127.0.0.1,mountvers=3,mountport=60274,mountproto=udp,fsc,local_lock=none,addr=127.0.0.1

Restart Plex Container
From within Portainer, select the Plex container and restart.

Once restarted, the new cache/tv and cache/movies mount points will be available.
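If the new paths do not show up inside the container, check that the host's /mnt is mapped into it. A quick sanity check from the host (the container name plex is an assumption; use whatever yours is called):

docker exec plex ls /mnt/cache/tv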

Change Plex Library Locations
Now that the new NFS mounts are operational and using local cache, add new Plex library locations:

/mnt/cache/tv
/mnt/cache/movies
Don’t delete the old locations just yet. A library scan should start; if it doesn't, manually scan the library.

Once scanned, remove the old library locations (unionfs/tv etc.) and 'empty trash' under the library options.

Is the Cache working?
How do I verify that the NFS client is working with FS-Cache?

cd to the /var/cache/fscache directory and type the following command:
# ls -Z
Sample outputs:

drwx------. root root system_u:object_r:cachefiles_var_t:s0 cache
drwx------. root root system_u:object_r:cachefiles_var_t:s0 graveyard

When the cache is set up correctly, you will see the two directories above (the SELinux labels in this sample come from a RHEL-style system; on Ubuntu, ls -Z will usually just show '?', but the cache and graveyard directories should still be present). You can type the following commands to list the cached files and the cache size:
# find
# du -sh
Sample outputs:

142M
How do I see FS-Cache statistics?
Simply type the following command:
# cat /proc/fs/fscache/stats
Sample outputs:

FS-Cache statistics
Cookies: idx=30 dat=7895 spc=0
Objects: alc=7164 nal=0 avl=7164 ded=4261
ChkAux : non=0 ok=3727 upd=0 obs=3
Pages : mrk=59000 unc=37195
Acquire: n=7925 nul=0 noc=0 ok=7925 nbf=0 oom=0
Lookups: n=7164 neg=3429 pos=3735 crt=3429 tmo=0
Updates: n=0 nul=0 run=0
Relinqs: n=5022 nul=0 wcr=0 rtr=22
AttrChg: n=0 ok=0 nbf=0 oom=0 run=0
Allocs : n=0 ok=0 wt=0 nbf=0 int=0
Allocs : ops=0 owt=0 abt=0
Retrvls: n=7307 ok=3324 wt=1540 nod=3243 nbf=740 int=0 oom=0
Retrvls: ops=6567 owt=1672 abt=0
Stores : n=32683 ok=32683 agn=0 nbf=0 oom=0
Stores : ops=5842 run=38525 pgs=32683 rxd=32683 olm=0
VmScan : nos=181 gon=0 bsy=0 can=0
Ops : pend=1695 run=12409 enq=38525 can=0 rej=0
Ops : dfr=21 rel=12409 gc=21
CacheOp: alo=0 luo=0 luc=0 gro=0
CacheOp: upo=0 dro=0 pto=0 atc=0 syn=0
CacheOp: rap=0 ras=0 alp=0 als=0 wrp=0 ucp=0 dsp=0
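The individual counters are documented in the kernel's FS-Cache documentation. As a rough sanity check (my own assumption that ok/n on the first Retrvls line approximates the read hit rate), something like this prints a quick percentage:

awk '/^Retrvls: n=/ {for(i=1;i<=NF;i++){if($i~/^n=/)n=substr($i,3); if($i~/^ok=/)ok=substr($i,4)} if(n>0) printf "cache read hits: %s of %s (%.0f%%)\n", ok, n, 100*ok/n}' /proc/fs/fscache/stats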
 

Datamonkeh · Data Hoarding Primate · Project Manager · Donor · Jan 20, 2018
I only got as far as the objective rather than the process, but my first question is: why aren't you altering the rclone cache time? It would retain local data for as long as you want, with about 30 seconds of work and circa 20 keystrokes.
 

Edrock200 · MVP · Staff · Nov 17, 2019
If I'm understanding what you are trying to do, you basically just need to fork and modify the pgmove engine to upload only content older than x timeframe, i.e. --min-age=2d. Then everything just stays in /mnt/move, which is merged into unionfs, and stays local. Alternatively, you can tell your fork to rclone copy instead of move to gdrive, then add a cronjob that runs through /mnt/move and deletes items older than x days. That way you aren't delaying your content going up to gdrive.
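A hedged sketch of those two options (the remote name gdrive: and the retention periods are assumptions, not PG's actual engine settings):

# Option 1: only upload content older than 2 days, so recent media stays local in /mnt/move
rclone move /mnt/move gdrive: --min-age 2d

# Option 2: copy instead of move, then prune local copies older than 14 days from a cronjob
rclone copy /mnt/move gdrive:
find /mnt/move -type f -mtime +14 -delete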
 

jjnz123 · Citizen · Original poster · Donor · Apr 14, 2020
Datamonkeh said: I only got as far as the objective rather than the process, but my first question is: why aren't you altering the rclone cache time? It would retain local data for as long as you want, with about 30 seconds of work and circa 20 keystrokes.
Haha, I did take a look into rcache and it appeared not suitable. But taking another look just now I can't remember why I dismissed it.

Will take another look at rcache.

Thanks!
 
