Since OpenVZ is slowly dying and new distributions aren't supporting it, we had to find a way to upgrade our kernel, which meant choosing a different virtualization technology. The Linux kernel now has some support for containers, so we've decided to stick with that. Next, we needed a distribution that we could use on nodes to serve as hypervisors, as a replacement for Scientific Linux 6 with the OpenVZ kernel. We've chosen NixOS, which allows you to declare the system and its configuration and then reproducibly build it. And since we have somewhat specific requirements, we've created our own distribution on top of NixOS.
vpsAdminOS is based on NixOS and not-os. It's a live distribution serving as a hypervisor for container virtualisation. It's as capable as OpenVZ Legacy was in its time. We have our own userspace tool to manage containers called osctl, which internally uses LXC. vpsAdminOS naturally integrates with vpsAdmin, our administration system with a web interface, which you're all using to manage your VPS. However, vpsAdminOS is meant to be fully usable even on its own, as a replacement for OpenVZ Legacy deployments. If you have some OpenVZ servers and would like a newer system, you can consider vpsAdminOS. We also have scripts to help with the migration of OpenVZ containers onto vpsAdminOS.
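As a rough sketch of what working with osctl looks like, the container lifecycle is driven from the host. The container name and distribution below are made-up example values; see the osctl documentation for the authoritative command set:

```shell
# Create a new container (the name "myct01" and the distribution
# are example values) and start it:
osctl ct new --distribution alpine myct01
osctl ct start myct01

# List containers and run a command inside one:
osctl ct ls
osctl ct exec myct01 ip addr
```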
The upgrade of our infrastructure with all VPSes to vpsAdminOS is divided into several phases.
We're trying to make the migration to vpsAdminOS as seamless as possible, so that one day your VPS will stop on the OpenVZ node and start on a vpsAdminOS node a while later, without you having to do anything. However, it depends on what programs you're running and what configuration changes you have made. That's why we recommend that everyone try a VPS on vpsAdminOS in the staging environment, so that we can find and solve problems before we start migrating production VPSes.
The Linux kernel doesn't have anything like venet from OpenVZ, so we had to find a different way. Networking is done by a pair of veth interfaces: one on the host, the other in the VPS. IP addresses are routed through an interconnecting network that is assigned to every VPS.

For example, let's say the assigned interconnecting network is 10.100.10.0/30. The veth interface on the host will have address 10.100.10.1 and the interface in the VPS will have 10.100.10.2. IP addresses are then routed via 10.100.10.2, e.g. public IPv4 1.2.3.4 would be routed as 1.2.3.4/32 via 10.100.10.2. The default gateway in the VPS would be set as default via 10.100.10.1 src 1.2.3.4. The interface on the host is configured automatically by osctl, which will also generate configuration files inside your VPS, depending on your distribution. The init system in your VPS will then read those files and set up the network interface. The first address on the interface will be the address from the interconnecting network, not the public address, as was the case on OpenVZ. If you have some custom network configuration, you need to be aware of how the networking is supposed to work.
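Using the example addresses above, the resulting setup is roughly equivalent to the following iproute2 commands. This is only an illustration of what osctl and the VPS init system configure for you; the interface names veth0 and eth0 are assumptions:

```shell
# On the host (done automatically by osctl):
ip address add 10.100.10.1/30 dev veth0           # host side of the veth pair
ip route add 1.2.3.4/32 via 10.100.10.2           # route the public IP to the VPS

# Inside the VPS (done by the init system from generated config files):
ip address add 10.100.10.2/30 dev eth0            # interconnecting address comes first
ip address add 1.2.3.4/32 dev eth0                # public address
ip route add default via 10.100.10.1 src 1.2.3.4  # default gateway
```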
VPSes on vpsAdminOS use so-called user namespaces. A user namespace means that your system user and group IDs are mapped to different values on the host. For example, the root user in your VPS has UID 0, but from the host's point of view, its UID is e.g. 666000. Every member has been assigned a unique user namespace, which ensures that your data is isolated from other users. In case an attacker manages to leave the container, they will not be able to access data from VPSes belonging to other members.
Every member is assigned a user namespace of 524288 user/group IDs. It means that you can use UIDs/GIDs from 0 to 524287. All VPSes from one member are in the same user namespace. In the future, it will be possible to define custom UID/GID maps for VPS and NAS datasets, which will let each member isolate their own VPSes and yet share some chosen range of user/group IDs.
The user namespace significantly changes how you can share data between VPS and NAS. At the moment, it is not possible to mount NAS to a VPS running on a vpsAdminOS node so that you'd have access to the data. This will become possible when custom UID/GID maps are properly implemented.
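The kernel exposes the mapping of a user namespace in /proc/&lt;pid&gt;/uid_map. As a small illustration (the 666000 offset is just the example value from the text above, and the exact map on a vpsAdminOS node may differ):

```shell
# Each line of uid_map has the format:
#   <first UID inside the namespace> <first UID on the host> <length of range>
# Inside a VPS on vpsAdminOS, the map could look like "0 666000 524288",
# i.e. in-VPS UID 0 is host UID 666000, in-VPS UID 1 is 666001, and so on.
cat /proc/self/uid_map
```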
Changes regarding VPS that are independent of the distribution used:

* /proc/loadavg shows the load average of the entire node, i.e. of processes from all VPSes on the node you're on; it does not tell you anything about your VPS
* /proc/cpuinfo and /proc/stat show all CPUs from the node, but you can't utilize more than 8 of them (800% CPU)
* /proc/meminfo
* dmesg is forbidden, as it's not virtualized in the kernel
* the network is configured using ip from iproute2; you no longer need ifconfig from net-tools
* /etc/network/interfaces.{head,tail} aren't inserted into /etc/network/interfaces, but rather included using source, i.e. they do not affect the contents of /etc/network/interfaces directly, as was the case with vzctl
* /etc/network/interfaces.d is sourced before /etc/network/interfaces.tail

In order for all members to test VPS on vpsAdminOS, we've created a so-called staging environment. It's similar to the playground, where everyone can create a VPS. When creating a VPS, just select location Staging and the VPS will be created on a vpsAdminOS node.
Its terms of use are similar to the playground VPS, only it can be a bit rougher, with unplanned outages and reboots when we need to fix something. Everyone can use up to 8 CPUs, 4 GB RAM, 120 GB of disk space, 4 public IPv4 addresses and 32 IPv6 /64 prefixes. You can split these resources among up to 4 VPSes.
It is not possible to clone or swap production VPS with VPS in the staging environment. Migration of OpenVZ VPS onto vpsAdminOS is not implemented yet. Access to the NAS is also restricted, see user namespaces.
In case your distribution isn't supported yet, you can help us make it happen, or wait until someone does it for you, see open issues.
Distribution templates installable from vpsAdmin are built using scripts at vpsadminos-templates. If your distribution isn't there, it has to be added.
When the build script is done, it is necessary to add support for your distribution to osctl, so that it can configure the hostname, network, DNS resolvers, etc.; see the documentation.
Choose at your own discretion: