![Cover photo, via [Unsplash](https://unsplash.com)](https://blog.sdfg.com.ar/posts/userns-in-kubernetes-part-iii/cover.jpg)
User Namespaces in Kubernetes, Part III: The Implementation
This blog post is part of a series on user namespaces in Kubernetes. In the previous post, we saw how idmap mounts let containers with different userns mappings share volumes. Now let's look at the other questions we needed to answer for a Kubernetes implementation:

**Who decides the mapping: the kubelet or the runtime?** Kubernetes supports running different runtimes on one node, so the kubelet needs to decide the mappings. Otherwise, runtimes have no way to know whether a range is already in use by another runtime.

**How large should the mapping be for each pod?** Most container images already use IDs up to 65535. If a UID in use is not mapped, it is shown as the overflow ID (65534 by default) and cannot be modified. Mapping the range 0-65535 is a sensible choice here, and it also divides the 32-bit UID space evenly.

**How do we choose which ID (UID/GID) range a pod will use?** Each pod gets a unique ID range, chosen on the node at pod creation time, that doesn't overlap with the ranges used by other pods. After the last post, we know we can do that without issues if we use idmap mounts. This gives pods better isolation in case of a container breakout: they can't read or write inodes owned by a different UID/GID (unless the inodes grant permissions to others), they can't send signals to processes in other pods, and so on. Furthermore, we can also reserve a separate range for the host's files and processes, extending the same isolation to the host.

**The implementation**

The UID/GID space in Linux is 32 bits. We divide the ID space into chunks of 16 bits each: ...
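From the user's side, opting a pod into a user namespace is a single field in the pod spec, `hostUsers: false` (at the time of writing the feature sits behind a Kubernetes feature gate); the kubelet then picks the pod's ID range as described above. A minimal example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false        # run this pod in its own user namespace
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
```

Inside the container, processes still see UIDs 0-65535 as usual; only on the host do they appear shifted into the pod's private range.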
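The chunking idea can be sketched as a tiny allocator: the 32-bit ID space is split into 2^16 chunks of 65536 IDs each, chunk 0 stays with the host, and each pod takes one free chunk for its lifetime. This is a simplified illustration with hypothetical names, not the kubelet's actual code:

```go
package main

import "fmt"

// chunkSize is the number of IDs each pod gets: 2^16 = 65536,
// covering the 0-65535 UIDs/GIDs that most container images use.
const chunkSize = 1 << 16

// allocator hands out non-overlapping 64Ki-ID ranges on one node.
type allocator struct {
	used map[uint32]bool // chunk index -> in use
}

func newAllocator() *allocator {
	// Chunk 0 (host IDs 0-65535) is reserved for the host's own
	// files and processes, so no pod can ever map onto it.
	return &allocator{used: map[uint32]bool{0: true}}
}

// allocate returns the starting host ID of the first free chunk.
// The 32-bit space holds 2^16 chunks, so up to 65535 pods fit
// alongside the reserved host range.
func (a *allocator) allocate() (uint32, bool) {
	for i := uint32(0); i < 1<<16; i++ {
		if !a.used[i] {
			a.used[i] = true
			return i * chunkSize, true
		}
	}
	return 0, false // ID space exhausted
}

// release frees a pod's chunk when the pod is deleted.
func (a *allocator) release(start uint32) {
	delete(a.used, start/chunkSize)
}

func main() {
	a := newAllocator()
	p1, _ := a.allocate()
	p2, _ := a.allocate()
	fmt.Println(p1, p2) // → 65536 131072
}
```

Because every pod's container UID 0 maps to a different host ID (65536, 131072, ...), a process that escapes one container still cannot touch inodes or signal processes belonging to another pod or to the host.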