Uploading 30TB from home via PGBlitz - how?


RiCH (Full Member, original poster)
Mount Type: Local Drives
Server Type: Local - Dedicated Server
Got 34TB locally that I want to push through PGBlitz in a week to my gdrive ... what's the best way to do it, as /mnt/move can't handle 34TB :)

Also, as Blitz sends to tcrypt, I guess the files need to be manually moved from tcrypt > gcrypt once they're uploaded?

Or? :)
 

Admin9705 (Administrator, Project Manager)
You can run a home VM and run an rclone command with a custom bandwidth limit for a local move to /mnt/gdrive. This would take some work. If you search the forums, I believe a few users have posted solutions for this in the past.
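Something like the following rclone invocation would do what's described here. The paths and the `gdrive:` remote name are placeholders for illustration, not PGBlitz defaults:

```shell
# Placeholders: SRC is your local data, DEST is whatever your rclone
# remote for Google Drive is called in your rclone.conf.
SRC="/mnt/localdata"
DEST="gdrive:Media"

# --bwlimit 8M caps upload at 8 MB/s (~690 GB/day, under the 750 GB/day quota).
UPLOAD_CMD="rclone move $SRC $DEST --bwlimit 8M --transfers 4 -v"

echo "$UPLOAD_CMD"   # print to sanity-check first; then run it with: eval "$UPLOAD_CMD"
```

Swap `move` for `copy` if you want the local files left in place afterwards.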
 

RiCH
I see. I'm already running PGBlitz on an Ubuntu VM within Windows 10.

To push as much data as possible, could I mount and/or bind each data drive in the VM (as RO) so that Blitz could push to tcrypt directly without hitting the 750GB limit?

Then it would be easy to manually drag the tcrypt folders to gcrypt without worrying about that 750GB limit.

Or is it necessary to do it data > send to /mnt/move > tcrypt?

Workable? :)
 

ONEILL (Junior Member)
I had a VM with Docker at home and wanted to migrate to Google Cloud. I installed rclone in the VM and set a limit so I wasn't banned. It worked - it took me maybe two and a half weeks to upload 26TB. rclone works!
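For reference, the rough math behind picking a limit that stays under Google's 750 GB/day upload quota:

```shell
# Max sustained rate = daily quota / seconds per day.
QUOTA_GB=750
SECS_PER_DAY=86400

# In MB/s; integer division rounds down, which is the safe direction.
MAX_MBS=$(( QUOTA_GB * 1000 / SECS_PER_DAY ))
echo "max sustained upload: ${MAX_MBS} MB/s"
```

So `--bwlimit 8M` is about the fastest you can run around the clock without tripping the quota.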
 

Derfla76 (Full Member)
I would think this is simple in theory.

Set up PGBlitz in a VM... mount your local shares as drives in the VM... then move the files to the corresponding unionfs folders... PGBlitz should take care of the rest and move them to GCloud.

The only thing I know nothing about is setting up limits to avoid the ban... though personally I would only move portions at a time, and only so many files daily, to make sure everything is going right.
 

RiCH
> then move the files to the corresponding unionfs folders...
I wondered about that ... but that is a lengthy process that would involve manually copying TBs of data:

data ---> mnt/move/ --> PGBlitz --> tcrypt

Plus, if I want to move 10TB in one go, I'd need a 10TB HDD as the 'feeder' drive for /mnt/ ... I think?

The ideal is:

data ---> PGBlitz --> tcrypt

But is that achievable?
 

Derfla76
> Plus, if I want to move 10TB in one go, I'd need a 10TB HDD as the 'feeder' drive for /mnt/ ... The ideal is: data ---> PGBlitz --> tcrypt. But is that achievable?
I don't think you have a way around having a feeder drive. Moving the files to unionfs starts with the move folder... where they will sit until PGBlitz is done moving them to the cloud... so either way they will need a place to "sit" until the move is completed.

Unless I am misunderstanding the process.

For instance I have a 2TB download drive...

The way I would do it, if it's simply movies/TV shows, is to set up my management program (Sonarr/Radarr), have it add my local drives to the library, and then have the program move the root folders and let it go... that way I'm not having to do it manually.

But either way I don't know how to get around having the feeder drive...

I don't consider myself an expert so, there may be another way I am not aware of.
 

RiCH
I guess that mounting an existing HDD as the feeder drive temporarily could work ... then unmount it after the feed is finished.

Does anyone know if that is possible?

Can an existing HDD 'pretend' to be /mnt/move ?

:)
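A bind mount does exactly that "pretending". A sketch, assuming the data lives at /media/bigdisk (a hypothetical path) and that you have root:

```shell
# Make the contents of an existing HDD appear at /mnt/move:
sudo mount --bind /media/bigdisk /mnt/move

# ...let PGBlitz drain it to the cloud, then detach the disk again:
sudo umount /mnt/move
```

Beware that anything the uploader "moves" out of /mnt/move is really deleted from /media/bigdisk, since both paths point at the same files.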
 

Derfla76

See above link ^^^

May be helpful...
 

RiCH

> See above link ^^^ May be helpful...
Thanks - that could work :)

It's a LOT easier than moving 10TB of data multiple times. Seems you can just softlink to the /mnt/move folder .. but would that sync the data, or would it send it and then delete the source?!


@Admin

Can you see any reason why that wouldn't work?

Is there a way to ensure that PGBlitz didn't delete the source data after sending it to tcrypt?
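One caveat on the softlink idea, sketched here with temp directories standing in for the real drive and /mnt/move: by default rclone skips symlinks entirely, so linked folders would likely never upload unless the rclone command is run with --copy-links (-L). And if it is, a "move" through the link still deletes the real source files.

```shell
# Temp dirs stand in for a data drive and the /mnt/move feeder.
DATA=$(mktemp -d)
MOVE=$(mktemp -d)

# Link a "Movies" folder from the data drive into the feeder.
ln -s "$DATA" "$MOVE/Movies"

# rclone will ignore this link unless invoked with --copy-links / -L.
ls -l "$MOVE"
```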
 

Admin9705
It would work, but rclone has a bandwidth flag, so you don't need a huge drive within Ubuntu. You can tell it to push a local copy transfer at the speed of your home upload. I would run it 25 percent lower to allow for pauses, etc.
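As a worked example of the "25 percent less" rule of thumb (the 40 Mbit/s figure is just an assumed home upload speed, not anything from this thread):

```shell
UPLOAD_MBIT=40   # assumed home upload speed in Mbit/s - substitute your own

# Take 75% of it, then divide by 8 to convert Mbit/s to MB/s for --bwlimit.
LIMIT_MBS=$(( UPLOAD_MBIT * 75 / 100 / 8 ))
echo "use: rclone ... --bwlimit ${LIMIT_MBS}M"
```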
 

Derfla76
> It would work but rclone has a bandwidth command so you don't need to have a huge drive within ubuntu ...
I don't need to do this myself, but could you expound on this? More of the how than the can?
 

RiCH
> I don't need to do this myself, but could you expound on this? More of the how than the can?
I know you can instruct rclone to limit a transfer with 'rclone move ........ --bwlimit 10M', but I didn't know that PGBlitz can do it.

Where is the bwlimit setting?
 

RiCH
> It would work but rclone has a bandwidth command so you don't need to have a huge drive within ubuntu ...
But would PGBlitz delete the local data after the transfer? That is crucial ...
 

RiCH
@Admin9705

PGBlitz defaults to removing anything located at /mnt/move after it's been sent to tdrive, doesn't it?

If so, is there any way to modify that behaviour so that local data is retained?
 

RiCH
@fr0sty

Happy to hard or soft link it if I can be sure that Blitz won't delete the data after sending it ... !
 
TheShadow (Guest)
> PGBlitz defaults to removing anything located at /mnt/move after it's been sent to tdrive, doesn't it? If so, is there any way to modify that behaviour so that local data is retained?
Yes. It uses the rclone moveto command. Change the command in the script to copyto instead.
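The edit described here is a one-word change: moveto uploads and then deletes the local file, while copyto leaves it in place. Demonstrated on a scratch file (the real line lives in whatever upload script your PGBlitz deploy uses; its path isn't given in this thread):

```shell
# Scratch file standing in for the PGBlitz upload script.
SCRIPT=$(mktemp)
echo 'rclone moveto "$SRC" "$DEST" --bwlimit 8M' > "$SCRIPT"

# Swap the deleting variant for the non-deleting one.
sed -i 's/rclone moveto/rclone copyto/' "$SCRIPT"
cat "$SCRIPT"
```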
 

RiCH
Thanks for the reply, TheShadow.

I'm okay using Blitz, but I'm not sure which script I should edit?
 
T

TheShadow

Guest
sudo npgblitz

Keep in mind, a redeploy will overwrite any changes you make.
 
