PCS6 --> VZ7 with VZ Storage

Discussion in 'General Questions' started by futureweb, Feb 16, 2017.

  1. futureweb

    futureweb Tera Poster

    Messages:
    394
    Hey there,
     we're just evaluating an upgrade of our PCS6 + PStorage cluster to VZ7 + VZ Storage.
     Are there any real-world experiences with in-place upgrades? Any problems, or was it seamless? (http://docs.virtuozzo.com/virtuozzo...zzo-sto/upgrading-in-place-with-vstorage.html)
     Or would it be better to start a new cluster from scratch?
     As it's a production system, we don't want to risk unexpected downtime ... ;-)
    thx
    Andreas Schnederle-Wagner

     
  2. Pavel

    Pavel A.I. Auto-Responder Staff Member

    Messages:
    475
    Hello,

     Hybrid clusters work just fine: you can add both vz6 and vz7 hosts to the same cluster.
     As long as there are any vz6 hosts in the cluster, the storage services will run at the "vz6 patch level". Once the cluster is vz7-only, it will switch to the "vz7 patch level" automatically.
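
     A minimal sketch of how to verify this from any node, assuming a cluster named cluster1 (the cluster name is only an example; on PCS6 the tool is pstorage, on VZ7 it is the renamed vstorage):

       # on a PCS6 node: show cluster status (MDS, chunk servers, clients)
       pstorage -c cluster1 stat

       # on a VZ7 node, the equivalent command:
       vstorage -c cluster1 stat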

     I do not recommend the in-place upgrade function; it's a last resort for those who cannot do it any other way (no other host to offload VEs to).
     An in-place upgrade takes much longer to complete than a simple reinstall, probably even including the time spent offloading (there is heavy scripting and analysis involved, which requires a lot of time), and it carries some risk of failure (e.g. leaving the host in a non-operable state). It was thoroughly tested, and the chances of "bricking" the server are low, but I'd recommend a clean installation anyway: it's faster, means less downtime, and is 100% reliable.

     So, the scenario I'm suggesting is simple (a sketch of the commands follows below):
     1) offload the host
     2) reinstall the host
     3) join the host to the cluster
     4) migrate the environments back
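
     For illustration only, with example names (CT 101, hosts other-vz6-node and reinstalled-node, cluster cluster1); joining a node also involves creating chunk servers and mounting the cluster, which I'm leaving out here:

       # 1) offload: live-migrate VEs off the host that will be reinstalled
       #    (run on the vz6 source node)
       vzmigrate --online other-vz6-node 101

       # 3) join: authenticate the freshly installed vz7 host in the existing cluster
       vstorage -c cluster1 auth-node

       # 4) migrate back: push the VE from the node currently holding it
       prlctl migrate 101 reinstalled-node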
     
  3. futureweb

    futureweb Tera Poster

    Messages:
    394
    alright - thx 4 info!
    Hope I will find some time to do the upgrade within the next few months ... :)
     
  4. SteveITS

    SteveITS Tera Poster

    Messages:
    251
     Hi, is it just a straight join to the VZ6/PCS6 cluster, as if you were installing a new VZ6 node?

     I understand your point about starting clean, but it's hard to do that without going to the data center, which could turn into a day-or-longer project, and the blower noise is exhausting after a while. My thought was to move all containers off each hardware node, upgrade it in place, then move the containers back on.
     
  5. Pavel

    Pavel A.I. Auto-Responder Staff Member

    Messages:
    475
    Hello Steve,

     Sorry for the very late reply; for some reason the forum did not notify me about new replies to this thread, and it got lost.
     There is nothing special about joining a vz7 host into a vz6 cluster; just join the host to the existing cluster as you normally would.

     P.S. It would be best to update the vz6 hosts to the latest version before doing so.
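
     For example, on each vz6/PCS6 host (assuming the standard yum-based updater; a sketch, not a full procedure):

       # bring the host fully up to date before adding vz7 nodes to the cluster
       yum clean all && yum update
       # reboot if a new kernel was installed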
     
  6. SteveITS

    SteveITS Tera Poster

    Messages:
    251
    Best Answer
    Hi Andreas,

     I don't know if you are still looking for information, but I've done three upgrades. My notes so far (commands for the first two items are sketched after this list):
     • the rsyslog service is left disabled after the upgrade (this is a bug, slated to be fixed soon in an update to the upgrade script). Just enable and start it.
     • on one server shamand was left disabled after the upgrade (support couldn't duplicate this, but it was the only server with shamand running that I upgraded).
     • the upgrade doesn't seem to handle VLAN adapters well; it went better when I deleted the VLAN adapters first and recreated them after the upgrade.
     • the third of three upgrade attempts ended badly: after rebooting, libvirtd and sshd would not start. They were looking for the library /usr/lib64/libsasl2.so.2, which doesn't exist in Virtuozzo 7/RHEL 7. A quick search indicated this could be due to rebooting in the middle of the upgrade.
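
     A quick sketch for the first two items (assuming stock systemd units; the shaman unit name may differ on your install, check systemctl list-unit-files):

       # re-enable and start the services the upgrade left disabled
       systemctl enable rsyslog && systemctl start rsyslog
       systemctl enable shamand && systemctl start shamand
       # verify both came up
       systemctl status rsyslog shamand
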
     So after the third attempt failed, I figured out how to boot an ISO file using the remote IPMI software, and I have since installed twice as new using that (on the failed-upgrade server and one other). Before doing so I removed the node from the shaman cluster, removed the MDS, removed the chunk servers (which takes a long while to copy the data to the other CSes; make sure you have enough space), and removed the node from Virtual Automator. The new install handles partitions differently, such as creating the boot partition on all drives and only using half of our SSD's space. On the first boot I had to change the BIOS boot order to boot off "UEFI Virtuozzo." Knowing now how to boot the hardware to an ISO from my office, and since the upgrade can apparently sometimes fail, I would do a clean install that way from now on. Also of note: the NIC names changed from eth0/eth1 to eno1/eno2.
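
     For reference, the storage-side cleanup before wiping a node was roughly the following; the cluster name and IDs are examples, and the subcommand names are the PCS6 pstorage ones, so double-check them against pstorage --help before relying on this:

       # find this node's CS and MDS IDs in the cluster status output
       pstorage -c cluster1 stat

       # remove the chunk server -- this replicates its data to the other CSes
       # and can take a long time; make sure the cluster has enough free space
       pstorage -c cluster1 rm-cs 1025

       # remove the metadata server running on this node
       pstorage -c cluster1 rm-mds 2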

     On some new installs and some upgrades, the temporary license Virtuozzo sales had provided would not activate. Apparently VZ7 can sometimes use a different hardware ID than PCS6, so it sometimes works, but if it uses the same HWID the activation fails, saying the HWID is already being used for another license. Then you have to "Cancel Activation" of the PCS6 license in order to get the new license to activate on the same hardware. Note that prlsrvctl info shows the HWID as blank in VZ7, and that is apparently intentional; vzlicview shows it, and vzlicview --show-hwid shows both (?) IDs.
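
     The way I compare what each tool reports (just how I'd check it; output formats may vary by version):

       # license view, including the hardware ID(s) the license is bound to
       vzlicview
       vzlicview --show-hwid

       # dispatcher info; on our VZ7 installs the Hardware Id field here was blank
       prlsrvctl info | grep -i hardware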

     Unrelated to the upgrades, there is an issue in VZ7 where it can't suspend containers that have 32-bit processes running (e.g. Plesk Automation). So a container can be suspended and migrated from PCS6 to VZ7, but then cannot be suspended or migrated between VZ7 nodes. The workaround is to stop/shut down the container first.
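
     A quick way to check whether a container is affected before trying to suspend it (the CT ID is an example; assumes bash and the file utility exist inside the container):

       # look for 32-bit executables among the processes running in CT 101
       prlctl exec 101 bash -c 'file -L /proc/[0-9]*/exe 2>/dev/null | grep "ELF 32-bit"'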

     Edit: also unrelated to the upgrades, we had a couple of FreeBSD-based VMs that won't migrate to VZ7, though for our purposes we can pretty easily recreate them there. The migration doesn't recognize the VM OS, since FreeBSD isn't officially supported in VZ7, even though it is supported by KVM.
     
    Last edited: Apr 17, 2017
  7. Pavel

    Pavel A.I. Auto-Responder Staff Member

    Messages:
    475
    Hi Steve,

    Thanks for a great post! That's a lot of experience.

     Regarding the blank HWID: this is not intentional; all my vz7 hosts do show it. You may want to contact support for debugging.

     Sorry for doubting you, but vz6 and vz7 have completely incompatible suspend/resume mechanisms; it shouldn't be possible to resume a vz6 container on vz7. Did you use "vzctl start" on a suspended container? When you "start" a suspended container, it will do a clean start if the resume fails. Could that have been the case?

     Although KVM supports FreeBSD, we bring a lot of changes into "stock" KVM (and contribute them back to the mainstream); if you're curious, you can always find the sources on src.openvz.org. Some Virtuozzo features are not available in plain KVM, the easiest examples being "prlctl enter" and "prlctl exec". These and many other features (such as migration and conversion, backups, etc.) must be thoroughly tested, which has not been done yet, hence "unsupported". Support is planned for future updates.
     
  8. SteveITS

    SteveITS Tera Poster

    Messages:
    251
     Re: the blank HWID, you are correct; I typed the wrong command from memory. vzlicview shows a hardware ID, "vzlicview --show-hwid" shows two HWIDs on the licenses we have, and "prlsrvctl info" does not show one. This is in Virtuozzo ticket 13982. Comment from support: "Regarding the absence of 'Hardware Id' in the 'prlsrvctl info' output on Virtuozzo 7 - I have clarified the question with development, this is indeed expected behavior which is dictated by certain differences of Virtuozzo 7 dispatcher compared to Virtuozzo 6."

    re: suspend/resume:
    http://docs.virtuozzo.com/virtuozzo...ntainers-from-virtuozzo-6-to-virtuozzo-7.html
    "To migrate a VM or a stopped or suspended container from Virtuozzo 6 to Virtuozzo 7..."

     Apparently it can resume on VZ7 but then can't be suspended again because of the 32-bit processes (per support). It sounded like that was going to be worked on in the future?
     
  9. Pavel

    Pavel A.I. Auto-Responder Staff Member

    Messages:
    475
     You can migrate a suspended container, but you won't be able to resume it, only to start it (which will drop the memory dump and perform a clean boot). I just tested this in my environment.
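
     A minimal sketch of what I tested on the vz7 side, with an example CT ID:

       # after the suspended container has been migrated to the vz7 node:
       prlctl resume 101   # refused -- the vz6 memory dump is not compatible
       prlctl start 101    # drops the dump and performs a clean boot instead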

     As for the 32-bit support: in the currently released version, CRIU is not able to suspend 32-bit applications, but "factory" should already have some support for it.
     
  10. SteveITS

    SteveITS Tera Poster

    Messages:
    251
    Okayyyy...seems like that should be in the documentation then, somewhere around "stopped or suspended container"...

     I found one other thing on the server with Virtuozzo Storage that we upgraded. Since it started as PCS 6, it had stored its VMs in /var/parallels, which was a symlink to /pstorage/cluster1/vmprivate. After the upgrade the /var/parallels link is still there, but a new VM was stored in /vz/vmprivate, which was now a local directory. I can of course make a symlink vmprivate -> /pstorage/cluster1/vmprivate, but I didn't realize it was missing, and so the VM was not being stored in the cluster storage.
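
     For anyone checking their own node, roughly what I'd do (paths from our setup; stop or move any VMs already written locally before touching the directory):

       # is /vz/vmprivate a local directory or a symlink into the cluster?
       ls -ld /vz/vmprivate

       # if it's a plain local directory, set the old contents aside and
       # replace it with a link into the cluster storage
       mv /vz/vmprivate /vz/vmprivate.local
       ln -s /pstorage/cluster1/vmprivate /vz/vmprivate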
     
  11. SteveITS

    SteveITS Tera Poster

    Messages:
    251
     A follow-up note about migrating Windows VMs from 6 to 7: it copies the hard drive(s) from /vstorage/cluster1/vmprivate/vmname to /vstorage/cluster1/vmprivate/NewGUID/ and leaves behind a /vstorage/cluster1/vmprivate/vmname.migrated directory as a backup (http://forum.odin.com/threads/leftover-migrated-directory-after-migrating-vm-to-virtuozzo-7.341923/). This of course doubles the storage requirements of the VM during the migration, so it's something to watch for if you're low on available cluster space or have large drives.
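
     A quick way to see how much space the leftovers hold and to reclaim it once the migrated VM has been verified (paths as above):

       # space used by the leftover backup directories
       du -sh /vstorage/cluster1/vmprivate/*.migrated

       # remaining space on the cluster mount
       df -h /vstorage/cluster1

       # once the migrated VM is confirmed working, the backup can be removed
       rm -rf /vstorage/cluster1/vmprivate/vmname.migrated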

     It took a while to copy the drives, and although it shows a notice that conversion may take 30 minutes, it was done in under 10 minutes when I next looked.

     Also, after my summary post above, I found that our upgraded node didn't have /vz/vmprivate created as a symlink into the cluster; it was local storage. I'm not sure if that is due to the upgrade, or due to our cluster being old enough to have used /var/parallels/ as the link (if I recall correctly). Anyway, just something to watch for.
     [Edit: as I recall, the upgrade made me remove the /var/parallels symlink before proceeding.]

     Another issue is described in this post (http://forum.odin.com/posts/799475/): the upgraded node doesn't show the shaman roles correctly. That is presumably limited to shaman, because migrated containers and VMs run fine on that node.
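
     The quick check I used (shaman is the HA service mentioned above; verify the exact syntax with shaman --help on your version):

       # show HA roles and registered resources as shaman sees them
       shaman stat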
     
    Last edited: Apr 21, 2017 at 5:48 PM
