Parallels Cloud Server Hardware calculation for a specific requirement.

Discussion in 'General Questions' started by adamjohnson, Nov 28, 2016.

  1. adamjohnson

    adamjohnson Bit Poster

    Messages:
    3
    Hello All,

    I am very new to Parallels and Parallels Cloud Server. I want to deploy PCS in my lab environment for testing. Before I start, I would like some information about the hardware requirements.

    Is there any documentation on the minimum and maximum hardware requirements for a PCS server (the number of metadata, chunk, and client servers required)? What would the difference be between different combinations of servers?

    If I have a specific requirement (say, 100 VMs/containers, each with 4 GB RAM, 2 CPUs, and 100 GB of disk), how can I calculate the required servers, hardware, and HA setup? If you have a requirement-calculation tool or document, please share it with me.

    Thank you for your understanding and time.
     
    Last edited by a moderator: Dec 21, 2016
  2. Pavel

    Pavel A.I. Auto-Responder Odin Team

    Messages:
    403
    Hello Adam,

    All requirements are listed in our documentation:
    vz6 user guide: http://docs.virtuozzo.com/legacy/vz6/Virtuozzo_Users_Guide.pdf
    storage configuration: http://docs.virtuozzo.com/legacy/vz6/Virtuozzo_Storage_Administrators_Guide.pdf

    You can calculate the hardware for the containers, and then divide it between the number of hosts you plan to add to the storage cluster.
    The documentation above should help you, but there are several points I'd like to mention explicitly:

    First of all, it's not recommended to have fewer than 4 servers running CSes in your pstorage environment; 5 is a good number. Having 4-5 servers is necessary to ensure you'll have enough space for chunk allocation when one of the nodes goes down - that's planning ahead. So you may divide the resources you plan between 4-5 servers.

    Second, some resources can be oversold (like CPU), some can be oversold/overcommitted under certain conditions, and some should not be oversold no matter what. In the end, though, it all depends on your workload patterns.

    - CPU: It's OK to oversell/overcommit CPU unless your VMs/CTs are doing CPU-intensive work. If it's just plain web hosting, you can probably overcommit CPU easily. If your workload pattern implies high CPU activity, you probably should not overcommit CPU much.

    - Memory: For CTs it's OK to overcommit memory, but for VMs you should never do it since virtual machines allocate and utilize memory in a different manner.

    - Disk space: it's not recommended to overcommit disk space at all.

    So, considering these factors, you can do some basic planning. Let's start with disks:

    E.g. you need at least 100 * 100 GB = 10 TB for CTs/VMs.
    That's 10 TB of storage space, which, taking replication into account, is at least 30 TB of actual disk space.
    Note that if one node goes down for a long time (e.g. a serious hardware failure or prolonged outage), pstorage will attempt to replicate the lost chunks, so you should account for a space margin for that.

    5 hosts: 30/(5-1) = 7.5 TB per host, or 4 hosts: 30/(4-1) = 10 TB per host. It's recommended to use 1-4 TB disks for chunks to ensure throughput is OK (e.g. if you use 10 TB disks, they most likely won't be able to handle the IOPS towards the chunks, so iowait might be high; it's better to separate the replicas, and smaller disks just don't make much sense).
    So that's 4-5 x 2 TB or 2-3 x 4 TB disks per host, depending on the number of hosts. It could be something mixed like 2x4 TB + 1x2 TB, but I'd recommend keeping disk sizes even to ensure good IOPS distribution between chunks.
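    The disk math above can be sketched in a few lines of Python. This is just a rough illustration of the reasoning (not an official Virtuozzo sizing tool): logical VE storage times the replication factor, spread over N-1 hosts so the cluster survives one node failure.

    ```python
    # Rough sizing sketch: raw disk space needed per host in a pstorage
    # cluster, assuming 3 replicas per chunk and one host's worth of
    # spare capacity for re-replication (N-1 planning).

    def disk_per_host_tb(num_ves, disk_per_ve_gb, hosts, replicas=3):
        logical_tb = num_ves * disk_per_ve_gb / 1000.0  # space the VEs consume
        raw_tb = logical_tb * replicas                  # replication overhead
        return raw_tb / (hosts - 1)                     # keep one host as margin

    # 100 VEs x 100 GB each, replication factor 3:
    print(disk_per_host_tb(100, 100, 5))  # 7.5 TB per host on 5 hosts
    print(disk_per_host_tb(100, 100, 4))  # 10.0 TB per host on 4 hosts
    ```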

    Plus you'll need a disk for the server itself. And if you want to improve performance, I'd recommend adding an SSD for journal and cache: 10 GB of journal per CS, plus some amount for the pstorage cache (make sure the SSD is not filled more than 90% - that will cause performance degradation on the SSD). A 100 GB SSD seems to fit well.
    Thus, based on a 4/5-host setup, each host gets:
    3/4 x 2 TB HDD + 1x OS HDD (it's recommended to have sufficient disk space for vmcore collection, so you might want 500 GB-1 TB here) + 1x 100 GB SSD (important note: always choose an enterprise-grade SSD with a power-loss protection mechanism)

    * When partitioning the OS disk you may omit creating a separate /vz/ partition, since VEs will be on pstorage. Note that it is not recommended to have a large swap partition - that would only create performance problems. 8-16 GB of swap is enough.

    RAM:
    If you want to be safe when a node goes down, you must ensure you'll have enough RAM on N-1 nodes, especially if you're running a VM-only environment. If you're going to use mostly CTs, it's OK to cheat a bit and overcommit the memory. That will also help you get through any maintenance without a noticeable impact on performance or stability.
    For a 5-host setup: 4 GB * 100 = 400 GB / (5-1) = ~100 GB per host; 96 GB sounds OK. // (5-1) - that's planning ahead for failover and maintenance windows
    For a 4-host setup: 4 GB * 100 = 400 GB / (4-1) = ~133 GB per host; 128 GB sounds OK.
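    The same N-1 reasoning applies to RAM; a minimal sketch (a hypothetical helper, not a vendor tool) of the figures above:

    ```python
    # Sketch: total VE memory divided across N-1 hosts, so a failover or
    # maintenance window still leaves enough RAM on the surviving nodes.

    def ram_per_host_gb(num_ves, ram_per_ve_gb, hosts):
        return num_ves * ram_per_ve_gb / (hosts - 1)

    print(ram_per_host_gb(100, 4, 5))  # 100.0 -> a 96 GB host is close enough
    print(ram_per_host_gb(100, 4, 4))  # ~133.3 -> round to 128 GB
    ```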

    CPU:
    If you're doing CPU-intensive tasks, I won't be able to recommend anything - that's too workload-specific. But since it's only 2 vCPUs per VE, I suppose you're not planning high-performance computing.

    If you are indeed not going to do any HPC, it's OK to overcommit CPUs at least twice (IMHO).
    100 VEs * 2 vCPUs / 2 (overcommit) = ~100 vCPUs per cluster, which is 25/20 per host for 4/5 hosts. Note that hyperthreading should be used only if you're expecting low utilization of individual CPUs, because hyperthreading is ineffective when a CPU is loaded. If you're not expecting high utilization of individual cores, 16 physical cores (32 with hyperthreading) sounds like a good fit; 2x 8-core Xeons with HT might do well.
    Note that since you're probably going to run the MDSes on the same hosts as the VEs, it's better to have more CPU power. 2.0 GHz might be insufficient during peak pstorage usage; I'd recommend settling on at least ~2.5 GHz.
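    The CPU arithmetic above can be sketched the same way (illustrative only; the 2x overcommit factor is just the rule of thumb from this thread):

    ```python
    import math

    # Sketch: total vCPUs shrunk by an overcommit factor, then split
    # across the hosts, rounding up to whole cores.

    def cores_per_host(num_ves, vcpus_per_ve, hosts, overcommit=2.0):
        cluster_vcpus = num_ves * vcpus_per_ve / overcommit
        return math.ceil(cluster_vcpus / hosts)

    print(cores_per_host(100, 2, 4))  # 25 cores per host on 4 hosts
    print(cores_per_host(100, 2, 5))  # 20 cores per host on 5 hosts
    ```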


    And the last resource: network. It's a _must_ to have a separate network for pstorage, including a separate switch if possible - you need to ensure that pstorage traffic is not affected by client traffic, to avoid performance degradation (e.g. a client VE is hacked and starts flooding the network with packets, the switch spends CPU on processing the flood, pstorage traffic gets delayed, and pstorage performance degrades). If a separate switch is overkill for you, at least configure QoS on the network.
    If you expect low disk usage, a 2x 1GbE network might be sufficient (pstorage supports some bonding modes). If you want to be on the safe side, 1x 10GbE should be sufficient, or 2x 10GbE if you want fault tolerance at the network level as well.

    ....
    That's pretty much all.
    ----

    What I wrote above is just an example of hardware planning. Take note of what must be considered, and do your own calculations based on your experience and the workload expectations you have. If you have any questions, please ask - I'll do my best to address them.
     
  3. Pavel

    Pavel A.I. Auto-Responder Odin Team

    Messages:
    403
    Oh, one addition.
    If you're planning to use snapshots, you must be prepared for the fact that VM files might someday grow larger than 100 GB.
    E.g. you have 50 GB of data inside a disk. You take a snapshot, remove those 50 GB, and write 100 GB of new data. Because of the snapshot, you now use 150 GB within a 100 GB VM.
    So if you're going to use snapshots, you should plan some additional space margin, or arrange some sort of monitoring for huge snapshots.
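    The snapshot example above boils down to simple arithmetic; here is a toy illustration (my own simplification) of why usage can exceed the nominal disk size:

    ```python
    # Toy model: blocks referenced by a snapshot stay allocated even after
    # the guest deletes them, so actual usage is snapshot data plus new writes.

    def usage_after_snapshot_gb(data_at_snapshot_gb, new_data_gb):
        # snapshot pins the old blocks; new writes allocate fresh blocks
        return data_at_snapshot_gb + new_data_gb

    # 50 GB snapshotted, then deleted inside the guest, then 100 GB written:
    print(usage_after_snapshot_gb(50, 100))  # 150 GB used by a "100 GB" VM
    ```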
     