The Internet of Things (IoT) requires a shift in many paradigms that are commonly used today. One of them is the move from *scaling up*, i.e. designing systems around bigger machines with higher computational power, to *scaling down*, i.e. designing systems based on many tiny devices with little computational power. Heavy virtual machines
and energy-hungry data center technology are not suited for sensor networks at the edge of the IoT.
With the latest release of Docker (August 2016), an orchestration tool is available that is lightweight and energy-efficient enough to handle heavy load spread across many small IoT devices. Docker is now available on the ARM CPU architecture, which already powers a large share of today's small devices because ARM is optimized for small-scale, energy-efficient computing.
Thus, with Docker ported to ARM, the power of container virtualization is now available for the IoT, simplifying the development, deployment, and operation of applications.
In the meantime, Kubernetes, Google's production-ready orchestration tool, has also been ported to the ARM architecture and can therefore run large sensor networks on little computational power.
In this talk I present the results of a research project that evaluated the high-availability performance of Kubernetes and Docker in an IoT-like environment: a multi-node Raspberry Pi cluster.
This talk includes a live demo if requested.