High Availability and Services with Kubernetes

In our previous blog, Getting Started with a Local Deployment, we deployed an Nginx pod to a standalone (single-node) Kubernetes cluster. That pod was bound to a specific node. If the pod failed unexpectedly, Kubernetes (specifically, the Kubelet service) would restart it: by default, pods have an ‘Always’ restart policy. However, a pod is only ever restarted on the node to which it was first bound; it will not be rebound to another node. This means, of course, that if the node itself fails, its pods will not be rescheduled elsewhere.
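As a quick illustration, here is a minimal sketch of the kind of pod manifest used in that deployment (the pod name and labels here are illustrative, not taken from the previous post). The restartPolicy field defaults to Always, so the Kubelet on the bound node restarts the container if it exits, but the pod itself stays tied to that node:

```yaml
# Minimal sketch of a bare pod spec (names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  # restartPolicy defaults to Always: the Kubelet restarts failed containers,
  # but only on the node this pod was originally scheduled to.
  restartPolicy: Always
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

With a bare pod like this, a container crash is handled in place on the same node; if that node goes down, nothing in the cluster will bring the pod back up elsewhere.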
