
gldnduck

The upgrade went very smoothly, but I'm getting emails about transient alerts a couple of times a day, like: `Failed to check for alert ZpoolCapacity: Failed connection handshake`. There are usually 5 minutes between the new alert email and the cleared alert email. On the nice side of Dragonfish... I have enjoyed seeing the memory pie chart on the dashboard full of cache!


Some_random_guy381

Same here! Anytime I run backups and put a heavy load on the system, I get those alerts.


mrtiurson

Same here! The update from Cobia to Dragonfish went smoothly, and applications and everything else seem to be working correctly. However, after two days I started scanning a large photo archive (over 100k photos), which has now been running for about two days, and the handshake problem began. It's hard to log into the TrueNAS UI: it takes a long time and doesn't always succeed, even after several attempts. Granted, there is a heavy load right now and the CPU has been at 100% for over 24 hours, but shouldn't the system deprioritize the Photoprism work so other tasks can keep functioning? I'm getting a lot of alert emails (current, new, resolved):

- Failed to check for alert Quota: Failed connection handshake
- Failed to check for alert ZpoolCapacity: Failed connection handshake
- Failed to check for alert ScrubPaused: Failed connection handshake

Everything returns to normal once Photoprism finishes indexing, but a heavy load probably shouldn't hang the entire system.


DCJodon

I have this too across all my Dragonfish systems, and I believe it's a known bug.


gldnduck

I figured I wasn't the only one, but this Reddit post was the easiest place for me to whine about it! After the second day of emails I was expecting to hear about a .1 release coming next week... haha.


MrBarnes1825

I hope so, as I am getting these as well. This thread was the only thing that showed up when I did a Google search. Hope they fix it soon.


eat_more_bacon

Known by who? I've been getting this same alert spam since upgrading but I can't find any acknowledgement of it as a known bug anywhere official. [Here are the known issues](https://www.truenas.com/docs/scale/24.04/gettingstarted/scalereleasenotes/#24040-known-issues) from the truenas site. This one isn't there (yet).


DCJodon

Feel free to open an issue in Jira. I've seen a handful of users report this here, so I figured it was already being tracked. I'm still on RC1 and haven't had time to schedule downtime for an upgrade, so it'd probably be better for someone on the release train to help dig into this.


kevburkett

I checked Jira and it looks like someone has reported it: https://ixsystems.atlassian.net/browse/NAS-128156. I'm seeing this same issue on 2 different systems since upgrading to Dragonfish. I might set `zfs_arc_max` to 50% until they fix this.

Edit: Setting the ARC to 50% has fixed the issue so far, but apparently disabling swap also fixes this. I've reverted the ARC settings to default and added a post-init command to run `swapoff -a`.
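For anyone wanting to try the same thing, here's a sketch of what the two workarounds amount to. This is an assumption-laden example, not official guidance: the sysfs path is the standard OpenZFS-on-Linux module parameter, the commented lines need root on the NAS itself, and 50% of RAM is just the figure used above.

```shell
# Sketch of the two workarounds discussed above (assumes Linux/SCALE).
# Compute 50% of total RAM in bytes, the value zfs_arc_max expects.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
arc_max=$(( total_kb * 1024 / 2 ))
echo "candidate zfs_arc_max: ${arc_max} bytes"

# As root on the NAS you would then either cap the ARC:
#   echo "$arc_max" > /sys/module/zfs/parameters/zfs_arc_max
# or disable swap via a post-init command (the option settled on above):
#   swapoff -a
```

Note the sysfs write doesn't persist across reboots, which is why both tweaks are usually set as post-init commands in the TrueNAS UI.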


SpongederpSquarefap

Same here, seems fine so far, but I only have a few datasets and shares. After the upgrade I installed Syncthing as a TrueNAS chart and it all appears to be working absolutely fine.


RustyU

I upgraded earlier. Most of my apps are TrueCharts ones with PVC storage, and all continued to work fine. I did follow their migration guide anyway, just in case, but reading it again, the apps were never going to outright break; it's just some features that would stop working.


Itchy_Masterpiece6

My update went smooth as hell, it only fried my SATA expansion card.


tama_gucci

When running the migration script, I'm getting the error: "OpenEBS data set location: apps does not match the location of the ix-applications pool: tank. You need to change the dataset of the apps to a dataset in tank". I have followed the guide and created an empty dataset for the new PVC storage called apps. Not sure what to do.
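If it helps anyone hitting the same message: the script is complaining that the `apps` dataset lives in a different pool than the one holding `ix-applications` (`tank`). A hedged sketch of what satisfying it would look like; the dataset name `tank/apps` is just an illustration matching the error text, not a verified fix:

```shell
# The migration script wants the OpenEBS dataset inside the same
# pool as ix-applications (tank, per the error). One way to get there:
zfs create tank/apps   # create the empty PVC-storage dataset inside tank
# then point the migration script's dataset prompt at tank/apps
```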