Cannot create new container

Discussion in 'Containers and Virtual Machines Discussion' started by JsneedPercom, Aug 21, 2015.

  1. JsneedPercom

    JsneedPercom Bit Poster

    Messages:
    1
Today I deleted a container because it would not start. It kept giving me an error that the hardware node could not mount the container. When I deleted it, I also got an error stating that it could not delete everything because it was in use. However, the container disappeared from my list. When I tried to recreate the container, Parallels told me that the container already existed, even though it does not. How can I fix this issue?
     
  2. Pavel

    Pavel A.I. Auto-Responder Odin Team

    Messages:
    416
Deleting a container just because it couldn't start was a hasty decision; I'm 100% sure it was possible to make it work without deleting it. Now it is most likely damaged beyond repair.

Unfortunately, the question is too broad to answer properly, but I'll try to give the most reliable solution.
If it does not help, I'd suggest contacting technical support.

A container appears in the list if its "<CTID>.conf" file is present in the /etc/vz/conf/ directory. If the container is not listed, that means the config was deleted; however, it does not guarantee that the directory "/vz/private/<CTID>" was removed. When you create a new container with container ID "XXXX", the tool always checks whether the directory "/vz/private/XXXX" is present, and fails the creation if it is. According to the symptoms you have described, the container's filesystem was locked by some process before you decided to destroy it. The config file was removed, but the directory "/vz/private/<CTID>" was not, because it is still locked by that process.
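To confirm, you can check both locations yourself; for example (replace "<CTID>" with your container ID, and note that /vz/private may sit elsewhere if your node uses a non-default layout):
Code:
# ls /etc/vz/conf/<CTID>.conf
# ls -d /vz/private/<CTID>
If the first command reports "No such file or directory" while the second one succeeds, you are in exactly the situation described above.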

The easiest and "cheapest" workaround is to create a container with a different CTID.
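For example, on a vzctl-managed node (the "<NEW_CTID>" and the OS template name below are placeholders, substitute values that suit your setup):
Code:
# vzctl create <NEW_CTID> --ostemplate centos-7-x86_64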
If you want to get rid of that directory and create a container with exactly the same CTID, you have two choices.
    ==============
1) Easy and reliable, but possibly not suitable for a production environment, and it does not reveal the actual root cause: reboot the hardware node to make sure the process locking the directory is dead.
    ==============
    OR
    ==============
2) The proper solution, which reveals the root cause but requires some basic server administration skills:
2.1) use "lsof" to determine which process keeps "/vz/private/<CTID>" busy
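For example (replace "<CTID>" with your container ID; "+D" makes lsof scan the directory recursively, which may take a while on a large directory tree):
Code:
# lsof +D /vz/private/<CTID>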
2.2) once the process is found, try to understand why it was accessing "/vz/private/<CTID>" so you can make sure it never happens again
    2.3) kill the process
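For example (where "<PID>" is the process ID reported by lsof; try a plain kill first, and resort to "-9" only if the process refuses to terminate):
Code:
# kill <PID>
# kill -9 <PID>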
2.4) if it was a ploop container, check whether the ploop image is still mounted using the following command:
    Code:
    # ploop list | grep -w "<CTID>"
    where "<CTID>" should be replaced with your container ID.
If you see output similar to the following:
    Code:
    [root@benderbrau ~]# ploop list | grep -w 725
    ploop22013   /pstorage/spcs/private/725/root.hdd/root.hds  
    
that means ploop is still mounted, and you should unmount it using the following command:
    Code:
    # ploop umount -d /dev/ploopXXXXX
    Where "ploopXXXXX" is the ploop id from the previous command's output 1st column.
2.5) if PVA Agent is present and the container is still listed in the PVA MN interface, you might need to restart the agent:
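Code:
# pvaagent restart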
    ==============

After applying one of the solutions above, remove the "/vz/private/<CTID>" directory manually.
Once it is removed, you can create a container with the same ID.
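For example (double-check the path first, as this permanently deletes whatever is left of the container's data; "<CTID>" is your container ID):
Code:
# rm -rf /vz/private/<CTID>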

If the steps above did not help, I'd recommend contacting technical support.
     
