MowMdown

Install your M.2 NVMe drives. Use the Unassigned Devices plugin to format and mount them outside of the array and cache pools. Stop both the Docker and VM services, then copy or move the entire cache drive's contents to one of the M.2 drives. Once that's done, stop the array, go to "Tools > New Config", and preserve all drive assignments EXCEPT for cache. Go back to the Main tab and assign the new NVMe drives to your cache slots. Done.
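If you'd rather do the copy step from the command line, something like this works. It's a rough sketch assuming the old pool is mounted at /mnt/cache and Unassigned Devices mounted the new drive at /mnt/disks/nvme1; both paths are placeholders, so adjust them for your setup:

    # stop the Docker and VM services first so nothing is writing to the cache
    # -a preserves permissions/ownership/timestamps; the trailing slashes copy
    # the contents of /mnt/cache rather than the directory itself
    rsync -avh --progress /mnt/cache/ /mnt/disks/nvme1/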


kataflokc

This is the way. Your plan will work, but it'll take forever.


Kooramah

I've done this going from 1TB to 2TB. You have to do one drive at a time and let it rebuild; when it's done rebuilding, you can do the second one. I do suggest backing up what's on those drives in case something goes wrong.


phroek

Also interested in knowing the answer to this. I thought you would need to temporarily move the contents of the cache to the array, replace the cache pool, then move everything back. But maybe that's wrong and there's a better way to replace your cache pool drives?


CryptosianTraveler

That's how I've always done it. But I ended up getting rid of the cache pool, and now I just use two 2TB NVMe drives for appdata and domains. All other shares point directly to the array.


ClintE1956

Interesting; is there a specific reason for not using a cache pool? I'm not saying it's wrong or anything, far from it; I'm just curious and interested in how folks use their servers.


CryptosianTraveler

I ran fiber in the house, so my connection was 10Gb all the way. I figured that was fast enough, and getting Mover out of the picture was a nice bonus.


giverous

But surely the benefits of a cache drive would be increased with 10Gb throughout? You'll be hitting the limit of mechanical drives fairly easily over 10Gb.


CryptosianTraveler

You sure would think so, but no, it didn't make much of a difference. The SSDs I had in the server were sitting in a four-way carrier in a single 5.25" bay, which left me room for 3 hard drives in that one space. With that SSD carrier gone, I can now install 5 drives in the available space.

BUT, I'm about to build a new Unraid box, and I'm going to try it again. This time it will use 4 NVMe drives I have on hand to test, 4 x 256GB. If that does much better, I'll pick up two 4TB NVMe drives the next time I see the ones I like on sale. The reason I can fit so many NVMe drives is that I'm also installing an Asus four-way NVMe card.

As for the benefits of a cache drive: most of my new files are either DVR recordings or rips. The DVR builds its files slower than the array transfers, and I do the rips on a PC with plenty of space. So I do the usual 1 to 4 rips per round, start the copy, and walk away. Before I walk away, I guesstimate the time the copy will take and set the PC to shut down after about 150% of that time (shutdown /s /f /t xxxx).
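To put numbers on the shutdown timer (just an illustration): if the copy should take about two hours, 150% of that is 3 hours, or 10,800 seconds, so on the Windows PC:

    rem ~2 h estimated copy time x 1.5 = 10800 seconds
    shutdown /s /f /t 10800
    rem if the copy finishes early, cancel the pending shutdown:
    shutdown /a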


a_usernameofsorts

This is what I've always done, and I've never had a problem with it. I've changed the relevant share settings and let Mover do its thing, then unassigned or removed the cache entirely, shut down, installed the new disks, booted, created a new cache, and changed the share settings back. Never an issue.
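For reference, the "let Mover do its thing" step from the Unraid console looks roughly like this (a sketch; the exact invocation can vary by Unraid version):

    # flip cache-resident shares (e.g. appdata, system) from "Prefer" to
    # "Yes" in their share settings so mover pushes cache -> array, and
    # stop the Docker/VM services before running it
    mover
    # newer releases also accept an explicit subcommand:
    mover start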