Sonarr leaving shows in queue


Edrock200

MVP
Original poster
Staff
So I had posted about this before, thinking it was a Sonarr v3 bug, but I don't think it is. At least once a day, one or more shows show up yellow in Sonarr's queue with an error saying the file doesn't exist. Here's the odd part: if I go to the show and rescan, the episode shows up. So Sonarr clearly copied it, but then somehow decided the import had failed.
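For reference, the manual rescan doesn't have to go through the UI; it can also be triggered through Sonarr's command API. This is a rough sketch from memory, so treat the v3 endpoint, the RescanSeries command name, and the series id 123 as assumptions to verify against your own instance:

# hypothetical example: ask Sonarr v3 to rescan a single series from disk
# (replace the API key and the seriesId with values from your own setup)
curl -X POST "http://localhost:8989/api/v3/command" \
  -H "X-Api-Key: YOUR_SONARR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "RescanSeries", "seriesId": 123}'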

This didn't happen on Windows with Sonarr v3, so I'm thinking it has something to do with unionfs and/or the move script.

I don't upload a ton, maybe 15 shows/day at most, but it seems to happen when Sonarr is trying to process several shows at once. Is anyone else seeing this?
Post automatically merged:

This problem is now getting worse: 3 GB shows are uploading as 250 MB. I've restarted, switched from hardlink to copy, etc., to no avail. Does anyone have ideas?
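If anyone wants to double-check whether the copies on the remote really are truncated, comparing sizes between the local staging folder and the remote should show it. A quick sketch, assuming a /mnt/move staging dir and a gcrypt: remote like mine; adjust both to your layout:

# compare sizes between the local staging dir and the gdrive remote (no data is changed)
rclone check /mnt/move/tv gcrypt:tv --size-only

# or just list sizes on the remote and eyeball the suspect episodes
rclone lsl gcrypt:tv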
Post automatically merged:

Has no one run into this before? Just about every upload was truncating to ~250 MB for some reason. The odd part is that my Sonarr4K instance is fine. For now, since my Plex servers are external (not running in the PG Docker setup), I've told the Sonarr container to remap /mnt/unionfs to /mnt/gcrypt so it writes directly to the gdrive mount and bypasses the /mnt/move script. This has temporarily mitigated the situation, but I'd like to get it working properly. Any ideas, no matter how far-fetched, are welcome. :)
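For anyone who wants to try the same workaround, the remap is roughly the following. This is only a sketch: PG normally manages the container itself, and the image name, host paths, and port here are assumptions based on my setup:

# recreate the Sonarr container with the library path pointed at the rclone crypt mount
# instead of the unionfs merge, so imports skip the /mnt/move staging/upload step
docker stop sonarr && docker rm sonarr
docker run -d --name sonarr \
  -p 8989:8989 \
  -v /opt/appdata/sonarr:/config \
  -v /mnt/gcrypt:/mnt/unionfs \
  linuxserver/sonarr
# (other volumes, e.g. your downloads path, omitted for brevity)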

The odd part is that this was working fine until a week or so ago. I'm at a total loss at this point. I've rebooted the PlexGuide VM, redeployed the Sonarr container, switched from hardlink to copy, etc., with no change in symptoms.
Post automatically merged:

Bump. No one's run into this?
Post automatically merged:

Well, thanks for nothing, guys. Jk. I didn't end up solving this issue, but I did rebuild the host on Ubuntu 18.04 directly instead of the previous VMware VM running Ubuntu, and the problem hasn't resurfaced yet. It almost seems like VMware was either too slow on writes or doing something with the filesystem that made rclone think the file was done being written. However, the other containers (Radarr, Sonarr4K, etc.) did not exhibit this behavior. Really odd.
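If the theory is that the mover was picking files up before they were fully written, a minimum-age guard on the rclone move job would be one way to rule that out. A minimal sketch, assuming a /mnt/move staging dir and a gcrypt: remote; the 5m value is arbitrary:

# only move files that haven't been modified in the last few minutes,
# so anything still being written by Sonarr is left alone until the next run
rclone move /mnt/move gcrypt: --min-age 5m --transfers 4 -v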
 

insanepoet

Citizen+
Staff
Donor
Just noticed this post, but I've seen this occur in the past and now more frequently with v3 (preview). I tried to reproduce the issue so I could report it upstream, but it seems to happen at odd times. I've tried throttling the container's connection to my download server and storage servers, along with a couple of other guesses (memory limit, CPU throttle), to no avail trying to get it to rear its ugly head. It's likely a bug in whatever threading is used to track the individual "snatched" items through the download and post-processing stages. I'm just making an educated guess here, as it only seems to happen when a large number of items are in the queue, but even then I can't reproduce it because it happens so rarely.
 

Admin9705

Administrator
Project Manager
Donor
insanepoet said: "Just noticed this post, but I've seen this occur in the past and now more frequently with v3 (preview). …"
It occurs on my end also; I've seen this here and there.
 

Edrock200

MVP
Original poster
Staff
Thanks for the replies. The odd part was that once it started with that container, it was consistent: every file would do it. But my other Sonarr/Radarr containers wouldn't, same version, same settings. I tried toggling the hardlink/copy setting, removing and re-adding the container, etc., to no avail. I've since blown that server away, but the only thing I can think of is that the ioctl/inotify limits were exhausted. Not sure though, I'm fairly new to Linux.
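For anyone chasing the same inotify theory, these are the host-side checks I had in mind; 524288 is just a commonly used value, not anything PlexGuide sets by default as far as I know:

# check the current inotify limits on the host
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# raise the watch limit for the running system...
sudo sysctl fs.inotify.max_user_watches=524288

# ...and persist it across reboots
echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p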
 
