Key Takeaways
▪ The isolation and dependency minimization provided by containers have proved quite effective at Google, and the container has become the sole runnable entity supported by the Google infrastructure.
▪ Building management APIs around containers rather than machines shifts the "primary key" of the data center from machine to application.
  – It relieves application developers and operations teams from worrying about the specific details of machines and operating systems.
  – It gives the infrastructure team the flexibility to roll out new hardware and upgrade operating systems with minimal impact on running applications and their developers.
  – It ties telemetry collected by the management system (e.g., metrics such as CPU and memory usage) to applications rather than machines.
⇒ It lets application developers use the infrastructure as a single computer, hiding the underlying hardware and operating systems.
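As an illustration of the application-centric API described above, here is a minimal Kubernetes Deployment sketch (names and image are hypothetical, not from the talk): the application label, replica count, and resource requests are all attached to the application, and no machine is ever named.

```yaml
# Hypothetical sketch: the application, not any machine, is the "primary key".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # identifies the application, not a host
  labels:
    app: frontend
spec:
  replicas: 3               # the scheduler picks the machines
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend       # telemetry (CPU/memory) is aggregated by this label
    spec:
      containers:
      - name: web
        image: gcr.io/example/frontend:1.0   # hypothetical image
        resources:
          requests:
            cpu: 500m
            memory: 256Mi
```

Because every field is keyed by the `app` label, the infrastructure team can drain, upgrade, or replace hosts without the manifest (or its owners) changing.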
Application Deployment without System Boundaries
▪ The container scheduler places containers across multiple hosts, much as an operating system's process scheduler places processes across CPUs.
▪ Splitting an application into multiple containers by function gives additional flexibility.
  – Autoscale not the whole application, but only the component that needs it.
  – Specific functions can be replaced without disrupting the service.
[Figure: a single computer on top of a Kubernetes cluster — an application composed of microservices, each microservice running as containers spread across hosts]
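The per-component autoscaling above can be sketched with a HorizontalPodAutoscaler that targets a single microservice, leaving the rest of the application untouched (the `resizer` Deployment name is a hypothetical example):

```yaml
# Hypothetical sketch: scale only one component of the application.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: resizer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: resizer           # only this microservice scales
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas above ~70% average CPU
```

Replacing a specific function works the same way: rolling out a new image for one Deployment updates that microservice's containers while the other components keep serving.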
Combining Microservices with Front/Back-ends
[Figure: microservices — the agile, dynamic, scalable parts of the application, running on Container Engine — combined over the global network with existing backend applications on Compute Engine and common services: Object Store (Cloud Storage), RDB (Cloud SQL), NoSQL DB (Cloud Datastore), and Load Balancing]
Share Your Best Practices!
▪ Architecture Design
  – How to migrate from the existing architecture.
▪ Practical Knowledge
  – The devil is in the details, as always.
▪ Aligning the Team toward DevOps
  – Need to remap existing people to a new set of roles.