PGX Status 26 – May | PlexGuide.com

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
Good news: the PGX Blitz uploader is working for what's been completed so far, but more testing still has to be done:



Baseline Testing
Add encryption upload

Not ready for full testing, but will be soon. Basic (PG Move) moved much faster than PG8 and maxed out my daily upload. Uploads should be much faster for Blitz & GCE.
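For reference, an encrypted upload path with rclone's crypt backend looks roughly like the sketch below; the remote names, paths, and passphrases are placeholders, not necessarily what PGX will end up using.

# Wrap the existing Google Drive remote in a crypt remote (names are placeholders)
rclone config create gdrive-crypt crypt \
  remote "gdrive-media:encrypted" \
  password "$(rclone obscure 'your-passphrase')" \
  password2 "$(rclone obscure 'your-salt')"

# Move completed downloads up through the encrypted remote
rclone move /pg/complete gdrive-crypt: --progress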



NOTE: The raffle will be announced in 4 days! There are 919 entries! You still have time – https://plexguide.com/raffles/



Upload to a Share Drive named ~ Media



Once this is done, we can move on to everything else, which takes far less time.



It uses the same uploading system but rotates keys. The new method is also more noob-friendly, catches errors, and provides better links.



Key Interface



The new...


 
  • Like
Reactions: 6 users

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
Baseline test works.

Some notes:
  1. In the GDrive interface, exiting by pressing z while deploying a token takes you to the wrong location.
  2. Fix the cap of 10 share drives showing up (carried over from PG8).
  3. Build in safeguards so users can't execute later steps for BLITZ until they've completed the required ones.
  4. Ensure encrypted uploads for BLITZ.
  5. Basic does not upload to a share drive.
  6. Enforce a minimum 5-character name for share drives.
Use systemctl status uploader for a quick glitch check.


To check the log: cat /pg/log/uploader/primary.log | tail -n 30
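For a slightly fuller check (assuming the unit really is named uploader), something like this works:

# Quick health check of the uploader unit
systemctl status uploader --no-pager

# Last 30 lines of the uploader's own log
tail -n 30 /pg/log/uploader/primary.log

# systemd journal for the unit - useful when the service stops without logging a reason
journalctl -u uploader -n 50 --no-pager

# Restart it if it has stalled
sudo systemctl restart uploader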
 
Last edited:
  • Like
Reactions: 1 user

rcarteraz

Veteran
Donor
Nov 14, 2018
164
42
Exciting to see this come together! With the move to PGX, will the upgrade be pretty straightforward, without the need for much modification of our existing setups?
 
  • Like
Reactions: 1 user

DeadPool

Elite
Staff
May 2, 2018
227
78
Exciting to see this come together! With the move to PGX, will the upgrade be pretty straightforward, without the need for much modification of our existing setups?
Unfortunately not. It is a whole new system and (hopefully) the last major change of format for a while, as Glorious Leader understands how big changes make people nervous. There is no upgrade path.
I'm sure eventually there will be options to port data, but I am taking the opportunity to start afresh, as I think most of us are.

dP
 

Horatius369

Citizen+
Feb 5, 2020
27
4
Unfortunately not. It is a whole new system and (hopefully) the last major change of format for a while, as Glorious Leader understands how big changes make people nervous. There is no upgrade path.
I'm sure eventually there will be options to port data, but I am taking the opportunity to start afresh, as I think most of us are.

dP
It would be really nice to move the Plex, Sonarr, and Radarr databases into the new system, as those can be a royal pain in the a** to set up with a large amount of data. Is this being considered? These DBs should still be compatible, I assume.
 
  • Like
Reactions: 1 user

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
It would be really nice to move the Plex, Sonarr, and Radarr databases into the new system, as those can be a royal pain in the a** to set up with a large amount of data. Is this being considered? These DBs should still be compatible, I assume.
Sonarr and Radarr v3's import is easy to set up compared to v2. Plex will require a rescan due to path changes.
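If it helps, a rescan can also be kicked off from the command line with the standard Plex endpoint; the section ID and token below are placeholders.

# List library section IDs first
curl -s "http://localhost:32400/library/sections?X-Plex-Token=YOUR_PLEX_TOKEN"

# Trigger a rescan of one section (placeholder ID 1) after the path change
curl -s "http://localhost:32400/library/sections/1/refresh?X-Plex-Token=YOUR_PLEX_TOKEN"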
 
  • Like
Reactions: 1 user

Squid00

Citizen
Apr 18, 2020
12
13
Sonarr and Radarr v3's import is easy to set up compared to v2. Plex will require a rescan due to path changes.
I’ve found v3, of Radarr at least, to be a lot more streamlined, not least when it comes to importing libraries.

Hopefully it will make it out of testing before we’re all grey and old.

Looking forward to PGX!

One question: are there any plans to make PGmove more robust? Are there any hash checks in the current method?
 
  • Like
Reactions: 1 user

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
Fixes have been made - redeploy the uploader to pick up the new version.
I’ve found v3, of Radarr at least, to be a lot more streamlined, not least when it comes to importing libraries.

Hopefully it will make it out of testing before we’re all grey and old.

Looking forward to PGX!

One question: are there any plans to make PGmove more robust? Are there any hash checks in the current method?
Ya, it's a totally new upload system. Hash checks are not needed.
 
  • Like
Reactions: 1 user

Neon

Citizen
Feb 25, 2019
11
3
Is this making up to 100 API keys per user account? Or is it 100 user accounts you'd have to make? This is exciting, because uploading at only 9MB/s to stay under the daily limit cap right now is excruciating to watch (even though it's barely an inconvenience).
 

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
Is this making up to 100 API keys per user account? Or is it 100 user accounts you'd have to make? This is exciting, because uploading at only 9MB/s to stay under the daily limit cap right now is excruciating to watch (even though it's barely an inconvenience).
So for Basic, it will upload at max speeds (pgmove), but when the API ban hits, PG will keep attempting the upload every 15 minutes. For Blitz, yes, you can make tons of keys. Right now the max is 33, up from 20 in PG8. 100 keys would be a bit obsessive, but for now, unless there's a reason to change it, we cap it at 33 so you can create keys 3 times before wiping them (you'd hit 99 and poof, you'd have to recreate them all again).
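For the curious, the rotation idea boils down to something like this rough sketch; the key path, remote name, and timing are placeholders, not the actual PGX code.

#!/bin/bash
# Rotate through service-account keys so a daily-quota ban on one key
# does not stop uploads; /pg/keys/*.json and gdrive-media: are placeholders.
KEYS=(/pg/keys/*.json)
i=0
while true; do
  key="${KEYS[i % ${#KEYS[@]}]}"
  rclone move /pg/complete "gdrive-media:Media" \
    --drive-service-account-file "$key" \
    --drive-stop-on-upload-limit
  # On a quota hit, fall through to the next key; Basic instead just retries
  # the same upload with the same key every 15 minutes.
  i=$((i + 1))
  sleep 900
done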
 

Neon

Citizen
Feb 25, 2019
11
3
So for Basic, it will upload at max speeds (pgmove), but when the API ban hits, PG will keep attempting the upload every 15 minutes. For Blitz, yes, you can make tons of keys. Right now the max is 33, up from 20 in PG8. 100 keys would be a bit obsessive, but for now, unless there's a reason to change it, we cap it at 33 so you can create keys 3 times before wiping them (you'd hit 99 and poof, you'd have to recreate them all again).
Very exciting. I'm used to my current 750GB-a-day limit, so even just quadrupling it with 4 keys would be a nice upgrade. I usually download about 5TB a month, but mostly over a single weekend, so I always hit that upload wall.
 
  • Like
Reactions: 1 user

004a

Citizen
Apr 24, 2020
8
1
Is basic uploading working currently in PGX? Also, where would you like an issue filed if one is found?
 
  • Like
Reactions: 1 user

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
Just post here. Ya, Basic works, but I could be changing things out on the fly. Please list anything - I really want to have this solid in the next two weeks.
 

004a

Citizen
Apr 24, 2020
8
1
I'm currently using PGX Basic on Debian 10. The uploader systemd service occasionally dies and has to be manually restarted. One possible fix is to change the restart parameter to "on-abnormal". The logs do not report a reason why the service has stopped.

Also, when the daemon is running, the logs look like it's performing correctly and migrating data out of /pg/complete, yet the data never shows up in /pg/gd or /pg/gc (depending on which is enabled) and is subsequently deleted from the originating dir after a "successful" transfer, ultimately resulting in a pure deletion of data.
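In case it's useful, the change I'm suggesting would be a drop-in override along these lines (assuming the unit is named uploader.service; RestartSec is my own addition):

# Create a drop-in override for the uploader unit
sudo mkdir -p /etc/systemd/system/uploader.service.d
sudo tee /etc/systemd/system/uploader.service.d/override.conf <<'EOF'
[Service]
Restart=on-abnormal
RestartSec=30
EOF
sudo systemctl daemon-reload
sudo systemctl restart uploader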
 

Admin9705

Administrator
Original poster
Project Manager
Donor
Jan 17, 2018
5,156
2,120
I'm currently using PGX Basic on Debian 10. The uploader systemd service occasionally dies and has to be manually restarted. One possible fix is to change the restart parameter to "on-abnormal". The logs do not report a reason why the service has stopped.

Also, when the daemon is running, the logs look like it's performing correctly and migrating data out of /pg/complete, yet the data never shows up in /pg/gd or /pg/gc (depending on which is enabled) and is subsequently deleted from the originating dir after a "successful" transfer, ultimately resulting in a pure deletion of data.
Good feedback. I’ve seen it also and am working on some fixes. When is the last time you ran pgalpha?
 

004a

Citizen
Apr 24, 2020
8
1
I downloaded the latest main.sh and ran it manually yesterday at 1046 EDT.

Since I already have Docker installed and running, I do have to modify the script and comment out lines 7, 8, 51, and 52. Otherwise I must reinstall docker-ce-cli, as the /bin/docker binary gets overwritten with an empty file.
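A guard along these lines in main.sh would remove the need to comment anything out; this is just a sketch, since I don't know exactly how the script installs Docker:

# Skip the Docker install steps when docker is already present
if command -v docker >/dev/null 2>&1; then
  echo "Docker already installed; skipping install"
else
  curl -fsSL https://get.docker.com | sh   # placeholder install method
fi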
 
