• littleomid@feddit.org · 1 month ago

    For beginners here: do not run apt upgrade!! Read the documentation on how to upgrade properly.
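
    A rough sketch of what the documented path looks like (based on the official 8-to-9 upgrade guide; double-check the guide itself before running anything):

    ```
    pve8to9 --full        # preflight checklist; fix anything it flags first
    sed -i 's/bookworm/trixie/g' /etc/apt/sources.list   # point Debian repos at trixie
    # also update the Proxmox repo entries under /etc/apt/sources.list.d/
    apt update
    apt dist-upgrade      # a plain 'apt upgrade' can hold back packages and break the node
    ```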

    • beerclue@lemmy.world (OP) · 1 month ago

      It’s always good to read the docs, but I often skip them myself :)

      They have this nifty tool called pve8to9 that you could run before upgrading, to check if everything is healthy.

      I have a 3-node cluster, so I usually migrate my VMs to a different node and then do my maintenance, with minimal risk.
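
      A minimal sketch of that flow, assuming a VM with ID 100 and a target node named pve2 (both invented for the example):

      ```
      pve8to9 --full                 # health check before touching the node
      qm migrate 100 pve2 --online   # live-migrate VM 100 off this node
      # upgrade and reboot this node, then migrate the VMs back
      ```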

  • Midnight Wolf@lemmy.world · 1 month ago

    Yay, it only took 2 hours and the help of an LLM, since the upgrade corrupted my LVM metadata! A little post-upgrade cleanup and verifying everything works, and now I can go to sleep (it’s 5am).

    Wasn’t that bad, but not exactly relaxing. And when my VMs threw a useless error (‘can’t start, needs manual fix’) I might have slightly panicked…
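
    For anyone who hits the same LVM metadata corruption: LVM keeps automatic metadata backups under /etc/lvm/archive, and vgcfgrestore can roll a volume group back to one of them. A sketch, assuming the default pve volume group and an intact archive (the filename below is illustrative):

    ```
    vgcfgrestore --list pve                              # list archived metadata versions
    vgcfgrestore -f /etc/lvm/archive/pve_00042.vg pve    # restore a known-good version
    # note: VGs containing thin pools need --force; read the man page first
    vgchange -ay pve                                     # reactivate the volume group
    ```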

  • Damage@feddit.it · 1 month ago

    ZFS now supports adding new devices to existing RAIDZ pools with minimal downtime.

    Yes!!
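
    For the curious, the expansion itself is a single attach onto the existing raidz vdev (pool and disk names here are invented):

    ```
    zpool attach tank raidz1-0 /dev/sdd   # grow the raidz1 vdev by one disk
    zpool status tank                     # watch the expansion/reflow progress
    ```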

    • non_burglar@lemmy.world · 1 month ago

      Edit2: the following is no longer true, so ignore it.

      Why do you want this? There are very few valid use cases for it.

      Edit: this is a serious question. Adding a member to a vdev does not automatically move any of the parity or data distribution off the old vdev. You’ll not only have old data laid out in the old vdev geometry until you copy it back, but you’ll also have a mix of I/O requests against the old and new layouts, which will kill performance.

      Not to mention that the metadata is now stored in the new layout, which means reads from the old layout will cause read/write activity on both layouts. It’s not actually something anyone should want, unless they are really, really stuck for expansion.

      And we’re talking about a hypervisor here, so performance is likely a factor.

      Jim Salter did a couple writeups on this.
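
      One caveat from those writeups that does still apply: blocks written before the expansion keep their old data-to-parity ratio until they are rewritten, so the full space efficiency only comes back once old data is rewritten. A crude sketch, with tank/data as a made-up dataset name:

      ```
      zfs snapshot tank/data@pre-reflow
      zfs send tank/data@pre-reflow | zfs receive tank/data_rewritten   # the copy is written at the new width
      # verify the copy, then rename/destroy to swap in the rewritten dataset
      ```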

  • TheUnicornOfPerfidy@feddit.uk · 1 month ago

    As a person who just installed Proxmox for the first time a couple of weeks ago, does this allow me to fix some of my mistakes and convert VMs to LXCs?

  • coffeetastesbadlikecoffee@sh.itjust.works · 1 month ago

    This is awesome, I am going to immediately get a test cluster set up when I get to work. Snapshot support with FC was the only major thing (apart from Veeam support) holding us back from switching to Proxmox. The HA improvements also sound nice!