Anthos Attached Clusters

Next in our series of posts on Google Cloud Anthos functionality, we’re going to take a look at attaching Kubernetes clusters running in AKS and EKS to Anthos in Google Cloud. This builds on the multi-cloud capabilities of Anthos we saw previously with GKE on AWS moving to GA. Anthos is oriented around being the management plane for all of your enterprise workload clusters, providing a centralised, consolidated hub from which to orchestrate infrastructure and applications. Additionally, through Anthos’ add-on features the experience is enriched to facilitate cluster and application administration with Config Management, compliance at scale with Policy Controller, and multi-cluster traffic management courtesy of Anthos Service Mesh.

With GKE On-Prem, and as we saw previously with GKE on AWS, we’ve seen how the GKE experience can be extended beyond Google Cloud and brought to our own infrastructure, whether that’s the datacentre or another cloud provider. With attached clusters, Anthos provides the mechanisms to enroll Kubernetes clusters agnostic of environment. That means that regardless of where our clusters are running, we can benefit from the Anthos feature set and the centralised management plane Anthos provides through the Google Cloud console.

This enables a plethora of Anthos use-cases. Whether you’re running managed clusters in EKS or AKS, running on bare-metal with kubeadm, or leveraging Cluster API for the lifecycle of your infrastructure, you can register clusters with Anthos and gain a holistic perspective of your Kubernetes infrastructure, application deployments, traffic routing and security conformance.

We’ll be taking a look at how easy it is to register clusters with Anthos running in a variety of environments, and how its value-add features can enhance our Kubernetes experience on these platforms.

Managed Clusters

Firstly we’ll be taking a look at attaching managed clusters to Anthos. As mentioned, we’ve seen how GKE can be brought to AWS via Anthos; through attached clusters, however, existing clusters in EKS can be registered and added to the fleet of clusters under Anthos management. Consequently, no refactoring of cluster lifecycle or toolsets is needed to bring Anthos to your EKS deployment.

Due later in 2020, GKE on Azure is the counterpart to GKE on AWS, whereby the lifecycling of clusters in Azure is orchestrated through an Anthos managed pipeline. However, through attached clusters we are also able to bring existing AKS clusters under Anthos’ ownership today.

Let’s take a look at how we can take existing managed Kubernetes clusters and add them to the GKE Hub.

To demonstrate the flow of attaching managed clusters to Anthos, our use-case will be AKS and EKS cluster deployments which will be registered in the GKE Hub, with add-ons installed to enable Anthos’ features.

multi-cloud

Register Clusters

With GKE On-Prem and GKE on AWS, registration is handled automatically as part of the cluster bootstrap process. In this instance, we are manually adding existing clusters to the GKE Hub on an ad-hoc basis.

When a cluster is registered with Google Cloud, a long-lived, authenticated and encrypted connection is established between the cluster and the Google Cloud Hub via Connect. This acts as the main conduit to serve cluster and application state to Google Cloud, but also provides the connectivity to manage and deploy resources and configuration. The connection is initiated from the cluster: Google Cloud makes requests over Connect to each connected cluster, with the cluster responding back to the Google Cloud control plane. User services cannot route to Google Cloud via the link established by Connect.
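The registration commands that follow authenticate with a service account key (hub-key.json). As a rough sketch of how such a key might be provisioned — the account name anthos-connect is an assumption, while roles/gkehub.connect is the Connect role:

```shell
# Sketch: provision a service account for Connect registration
# (account name "anthos-connect" is assumed).
$ PROJECT=$(gcloud config get-value project)
$ gcloud iam service-accounts create anthos-connect
$ gcloud projects add-iam-policy-binding ${PROJECT} \
    --member="serviceAccount:anthos-connect@${PROJECT}.iam.gserviceaccount.com" \
    --role="roles/gkehub.connect"
# Export a key for use with --service-account-key-file.
$ gcloud iam service-accounts keys create hub-key.json \
    --iam-account="anthos-connect@${PROJECT}.iam.gserviceaccount.com"
```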

AKS

$ gcloud container hub memberships register aks-cluster-0 \
    --project=$(gcloud config get-value project) \
    --context=aks \
    --kubeconfig=${KUBECONFIG} \
    --service-account-key-file=hub-key.json
Waiting for membership to be created...done.
Created a new membership [projects/jetstack-anthos/locations/global/memberships/aks-cluster-0] for the cluster [aks-cluster-0]
Generating the Connect Agent manifest...
Deploying the Connect Agent on cluster [aks-cluster-0] in namespace [gke-connect]...
Deployed the Connect Agent on cluster [aks-cluster-0] in namespace [gke-connect].
Finished registering the cluster [aks-cluster-0] with the Hub.

EKS

$ gcloud container hub memberships register eks-cluster-0 \
    --project=$(gcloud config get-value project) \
    --context=eks \
    --kubeconfig=${KUBECONFIG} \
    --service-account-key-file=hub-key.json
Waiting for membership to be created...done.
Created a new membership [projects/jetstack-anthos/locations/global/memberships/eks-cluster-0] for the cluster [eks-cluster-0]
Generating the Connect Agent manifest...
Deploying the Connect Agent on cluster [eks-cluster-0] in namespace [gke-connect]...
Deployed the Connect Agent on cluster [eks-cluster-0] in namespace [gke-connect].
Finished registering the cluster [eks-cluster-0] with the Hub.

GKE

$ gcloud container hub memberships register gke-cluster-0 \
    --project=$(gcloud config get-value project) \
    --gke-cluster=europe-west2/cluster-0 \
    --service-account-key-file=hub-key.json
kubeconfig entry generated for gke-cluster-0.
Waiting for membership to be created...done.
Created a new membership [projects/jetstack-anthos/locations/global/memberships/gke-cluster-0] for the cluster [gke-cluster-0]
Generating the Connect Agent manifest...
Deploying the Connect Agent on cluster [gke-cluster-0] in namespace [gke-connect]...
Deployed the Connect Agent on cluster [gke-cluster-0] in namespace [gke-connect].
Finished registering the cluster [gke-cluster-0] with the Hub.

As you can see, the process for attaching a cluster is near identical for each of the deployments. As part of the registration, a Connect agent is deployed into each cluster, in the gke-connect namespace. After the connection is established, the Connect Agent service exchanges the account credentials, technical details, and metadata about connected infrastructure and workloads that Google Cloud needs to manage them, including details of resources, applications and compute.

$ kubectx eks
$ kubectl get deploy,po -n gke-connect
NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gke-connect-agent-20200724-01-00   1/1     1            1           4h27m

NAME                                                    READY   STATUS    RESTARTS   AGE
pod/gke-connect-agent-20200724-01-00-68d78fb54f-qpsxz   1/1     Running   0          4h27m

With our managed clusters registered and the gke-connect-agent deployed, we can see that our clusters are available in the Anthos Dashboard.
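The same memberships can also be confirmed from the CLI, either in aggregate or per cluster:

```shell
# List all clusters registered with the GKE Hub in this project.
$ gcloud container hub memberships list
# Inspect a single membership in detail.
$ gcloud container hub memberships describe aks-cluster-0
```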

managed-clusters

Once we log in to each cluster within the GCP Console, we can administer it and inspect its behaviour. Navigating through a cluster provides an overview of its infrastructure and specification, as well as utilisation and workloads.
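Logging in to an attached cluster from the console is typically done with a bearer token. A minimal sketch of provisioning one follows — the service account name is assumed, and the built-in view ClusterRole is used here for brevity where Google’s documentation defines a dedicated read-only ClusterRole:

```shell
# Create a Kubernetes service account for the console to authenticate as
# (the name "console-reader" is an assumption).
$ kubectl create serviceaccount console-reader
$ kubectl create clusterrolebinding console-reader \
    --clusterrole=view --serviceaccount=default:console-reader
# Extract the account's bearer token to paste into the console login dialog.
$ SECRET_NAME=$(kubectl get serviceaccount console-reader \
    -o jsonpath='{.secrets[0].name}')
$ kubectl get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 --decode
```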

aks-nodes

aks-node-details

managed-cluster-workloads

Unmanaged Clusters

As we’ve seen, attaching a Kubernetes cluster to Anthos is achieved through the simple process of registration and subsequent deployment of the gke-connect-agent to communicate with the GKE Hub. Whilst at the time of writing AKS and EKS are cited as the supported clusters which can be attached, we can extend this further and add external clusters from a variety of distributions and environments.

In this example, we’ll firstly use kind to show how a standalone cluster can equally be added to Anthos. Following that, we’ll see how our repertoire of hosting platforms is effectively unlimited, by leveraging Cluster API to lifecycle clusters which can also be connected to Anthos.

Create Clusters

Kind

The simplest example of an unmanaged cluster is to run kind locally. The beauty of this is that it demonstrates not only that Anthos can work with any conformant Kubernetes distribution, but also the extent to which our clusters can be disparate yet still consolidated into the single-pane-of-glass that is the Anthos Dashboard.

$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹ī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
$ kubectx kind
$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:38401
KubeDNS is running at https://127.0.0.1:38401/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   7m53s   v1.18.2

Again, attaching the cluster is the same process of registration and deploying the gke-connect-agent.

$ gcloud container hub memberships register kind-cluster-0 \
    --project=jetstack-anthos \
    --context=kind \
    --kubeconfig=${KUBECONFIG} \
    --service-account-key-file=hub-key.json
Waiting for membership to be created...done.
Created a new membership [projects/jetstack-anthos/locations/global/memberships/kind-cluster-0] for the cluster [kind-cluster-0]
Generating the Connect Agent manifest...
Deploying the Connect Agent on cluster [kind-cluster-0] in namespace [gke-connect]...
Deployed the Connect Agent on cluster [kind-cluster-0] in namespace [gke-connect].
Finished registering the cluster [kind-cluster-0] with the Hub.

Once the cluster is registered and we’ve logged in, our kind cluster is similarly viewable in the GKE Hub, and we can inspect it in the same fashion as if it were a GKE cluster, one of the managed clusters we registered earlier, or a GKE on X deployment.

kind-registered

kind-node

Cluster API

We’ve just seen how seemingly any cluster can be brought to Anthos, regardless of where it is being hosted. This unlocks powerful capabilities and compositions for how we can lifecycle our clusters and leverage Anthos and its features in our environments.

With Cluster API, we can use providers and Kubernetes CustomResourceDefinitions to orchestrate the lifecycling of Kubernetes clusters. Through a management cluster, we can create, scale, upgrade and destroy Kubernetes infrastructure in a variety of environments all through a declarative API. Ergo, we can leverage Cluster API to bring Anthos to many more Kubernetes-conformant distributions.

In this example, we’ll use our kind cluster from the previous step as our bootstrap cluster, and the Cluster API AWS provider to provision a workload Kubernetes cluster in AWS.
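Before clusterctl can initialise the AWS provider, the provider needs its IAM prerequisites and credentials in the environment. A sketch, assuming clusterawsadm is installed (its subcommand syntax varies between releases) and an existing EC2 key pair; the machine types and key name are assumptions:

```shell
# Sketch: AWS provider prerequisites (clusterawsadm syntax varies by version).
$ export AWS_REGION=eu-west-1
# Create the IAM resources the provider's controllers require.
$ clusterawsadm bootstrap iam create-cloudformation-stack
# Expose AWS credentials to the provider in the expected format.
$ export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Settings consumed by `clusterctl config cluster` (values are assumptions):
$ export AWS_SSH_KEY_NAME=default
$ export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
$ export AWS_NODE_MACHINE_TYPE=t3.large
```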

$ clusterctl init --infrastructure aws
$ clusterctl config cluster capi-quickstart --kubernetes-version v1.17.5 --control-plane-machine-count=3 --worker-machine-count=3 > capi-quickstart.yaml
$ kubectx kind
$ kubectl apply -f capi-quickstart.yaml
cluster.cluster.x-k8s.io/capi-quickstart created
awscluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created

With the Cluster API resources deployed to bootstrap our AWS hosted cluster, the requisite infrastructure is provisioned in AWS by the provider’s controllers.

$ kubectl get clusters.cluster.x-k8s.io,awsclusters.infrastructure.cluster.x-k8s.io,kubeadmcontrolplanes.controlplane.cluster.x-k8s.io,machines.cluster.x-k8s.io,awsmachines.infrastructure.cluster.x-k8s.io
NAME                                       PHASE
cluster.cluster.x-k8s.io/capi-quickstart   Provisioned

NAME                                                         CLUSTER           READY   VPC                     BASTION IP
awscluster.infrastructure.cluster.x-k8s.io/capi-quickstart   capi-quickstart   true    vpc-0f918389d58146b9c

NAME                                                                              READY   INITIALIZED   REPLICAS   READY REPLICAS   UPDATED REPLICAS   UNAVAILABLE REPLICAS
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane   true    true          3          2                3                  1

NAME                                                             PROVIDERID                              PHASE
machine.cluster.x-k8s.io/capi-quickstart-control-plane-4g8kx     aws:///eu-west-1a/i-0a88eb8936b624cb2   Running
machine.cluster.x-k8s.io/capi-quickstart-control-plane-b5djn     aws:///eu-west-1c/i-031c6d67096d31a41   Running
machine.cluster.x-k8s.io/capi-quickstart-control-plane-jl5jc     aws:///eu-west-1b/i-09de1d030507370c6   Running
machine.cluster.x-k8s.io/capi-quickstart-md-0-7cbc758dfd-48gg8   aws:///eu-west-1a/i-0dcadafad83176bd7   Running
machine.cluster.x-k8s.io/capi-quickstart-md-0-7cbc758dfd-ftqtz   aws:///eu-west-1a/i-0b4b83b36fec4bd87   Running
machine.cluster.x-k8s.io/capi-quickstart-md-0-7cbc758dfd-mk6lf   aws:///eu-west-1a/i-0079492f974cd81e1   Running

NAME                                                                             CLUSTER           STATE     READY   INSTANCEID                              MACHINE
awsmachine.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane-7m2t8   capi-quickstart   running   true    aws:///eu-west-1b/i-09de1d030507370c6   capi-quickstart-control-plane-jl5jc
awsmachine.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane-kvl6w   capi-quickstart   running   true    aws:///eu-west-1a/i-0a88eb8936b624cb2   capi-quickstart-control-plane-4g8kx
awsmachine.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane-lgvdx   capi-quickstart   running   true    aws:///eu-west-1c/i-031c6d67096d31a41   capi-quickstart-control-plane-b5djn
awsmachine.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0-9fqzk            capi-quickstart   running   true    aws:///eu-west-1a/i-0dcadafad83176bd7   capi-quickstart-md-0-7cbc758dfd-48gg8
awsmachine.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0-p8d65            capi-quickstart   running   true    aws:///eu-west-1a/i-0b4b83b36fec4bd87   capi-quickstart-md-0-7cbc758dfd-ftqtz
awsmachine.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0-wlxdc            capi-quickstart   running   true    aws:///eu-west-1a/i-0079492f974cd81e1   capi-quickstart-md-0-7cbc758dfd-mk6lf

capi-instances

Upon provisioning and bootstrapping the Cluster API workload cluster, the kubeconfig can be obtained from the management cluster in order to communicate with the workload cluster’s Kubernetes API.

$ kubectl --namespace=default get secret/capi-quickstart-kubeconfig -o jsonpath={.data.value} \
  | base64 --decode \
  > ./capi-quickstart.kubeconfig
$ kubectl --kubeconfig=./capi-quickstart.kubeconfig cluster-info
Kubernetes master is running at https://capi-quickstart-apiserver-1849317286.eu-west-1.elb.amazonaws.com:6443
KubeDNS is running at https://capi-quickstart-apiserver-1849317286.eu-west-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-106-249.eu-west-1.compute.internal   Ready    <none>   8m47s   v1.17.5
ip-10-0-118-5.eu-west-1.compute.internal     Ready    master   11m     v1.17.5
ip-10-0-133-154.eu-west-1.compute.internal   Ready    master   2m27s   v1.17.5
ip-10-0-219-210.eu-west-1.compute.internal   Ready    master   6m33s   v1.17.5
ip-10-0-77-143.eu-west-1.compute.internal    Ready    <none>   8m47s   v1.17.5
ip-10-0-88-102.eu-west-1.compute.internal    Ready    <none>   8m47s   v1.17.5

Registering the cluster once again connects it to Google Cloud, enabling Anthos in our environment.

$ gcloud container hub memberships register capi-cluster-0 \
    --project=jetstack-anthos \
    --context=capi-quickstart-admin@capi-quickstart \
    --kubeconfig=./capi-quickstart.kubeconfig \
    --service-account-key-file=hub-key.json
Waiting for membership to be created...done.
Created a new membership [projects/jetstack-anthos/locations/global/memberships/capi-cluster-0] for the cluster [capi-cluster-0]
Generating the Connect Agent manifest...
Deploying the Connect Agent on cluster [capi-cluster-0] in namespace [gke-connect]...
Deployed the Connect Agent on cluster [capi-cluster-0] in namespace [gke-connect].
Finished registering the cluster [capi-cluster-0] with the Hub.

As we can see, all of our clusters are now registered in the GKE Hub. This demonstrates the extent to which we can bring Anthos into an array of different environments, and even how we can leverage other platforms for orchestrating clusters to enable the further extension of Anthos.

This is a testament to the ethos of Anthos, being the control plane for managing Kubernetes whilst being environment and platform agnostic. This consolidates operations and provides consistency across cloud providers, whilst embracing existing infrastructure investments and unlocking new possibilities for hybrid and multi-cloud compositions. It also allows companies to modernise in place, continuing to run workloads on-prem or on their own infrastructure whilst adopting Kubernetes and cloud-native principles.

registered-clusters

Google Cloud Marketplace

A core feature of the Anthos proposition is the capability to deploy applications from the Google Cloud Marketplace to any of your Anthos registered clusters, whether they are external (attached), or in GKE (On-Prem, AWS or GCP), all through the Google Cloud Console. This catalogue of open source and licensed software simplifies the deployment and maintenance of business-critical applications, tailoring their configuration to your use case and environment.

gcp-marketplace

In this instance, we deploy a simple NGINX application to our EKS cluster which we registered with Anthos earlier. There is a vast catalogue of applications which are supported on Anthos, and as we can see they can be configured to be compatible with the native environment of the host cluster.

nginx-marketplace

Once the marketplace application is deployed to our cluster, there is comprehensive observability for the application’s health, configuration and behaviour. All the components which comprise the application can be inspected and edited if necessary, with events and raw resource YAML available.
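Marketplace deployments are grouped under the Application CRD (app.k8s.io), so the same composition can also be inspected from the CLI; here assuming the deployer named the application nginx-1:

```shell
$ kubectx eks
# List marketplace applications (Application CRD, group app.k8s.io).
$ kubectl get applications.app.k8s.io -n application-system
# Inspect the components and status of a single application
# (the name "nginx-1" is assumed).
$ kubectl describe application nginx-1 -n application-system
```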

nginx-application

All of this observability and orchestration is still possible whilst running the cluster in its native environment. Anthos in this instance is facilitating the delivery of applications to registered clusters, as well as consolidating the workloads across the GKE Hub, whether in other clouds, virtualised environments or on bare metal.

$ kubectx eks
$ kubectl get po -n application-system
NAME                     READY   STATUS      RESTARTS   AGE
nginx-1-deployer-ckn65   0/1     Completed   0          110s
nginx-1-nginx-0          1/1     Running     0          107s
nginx-1-nginx-1          1/1     Running     0          86s
nginx-1-nginx-2          1/1     Running     0          62s

Anthos Service Mesh

Anthos Service Mesh (ASM) is core to the proposition of running hybrid Kubernetes across cloud and on-premises infrastructure. Built using Istio, it enhances our experience by abstracting and automating cross-cutting concerns, such as issuing workload identities via X.509 certificates to facilitate automatic mutual TLS across our workloads and clusters, and provides mechanisms for layer 7 traffic routing within the mesh.

Additionally, ASM centralises certificate issuance and renewal, giving segregated clusters cross-boundary trust and ensuring service-to-service communications can mutually authenticate.

Anthos provides the means to deploy the Istio control plane in a variety of configurations to best suit your usage of Anthos via the use of istioctl profiles. For deployments of GKE in Google Cloud which are registered to Anthos, there is an asm-gcp profile, whilst for GKE On-Prem, GKE on AWS, EKS and AKS the asm-multicloud profile facilitates the installation of the Istio control plane and configuration of core features, as well as enabling auto mTLS and ingress gateways.

Due to the sidecar proxies that are deployed into each pod as part of ASM, there is a high degree of telemetry and metadata available about the traffic and behaviour of our applications. This is done transparently, with the proxy intercepting inbound traffic to the pod before passing it over localhost to the application container. With this added insight into services within the mesh, service level objectives can be defined in accordance with the four golden signals: latency, traffic, errors and saturation.

This consolidation of application SLOs into a unified management plane is a significant proposition for enterprises running segregated clusters and applications across multiple environments. Streamlining the administrative experience and minimising the operational overhead of managing multiple systems is at the core of Anthos’ raison d’ĂȘtre.

$ kubectx eks
$ istioctl install --set profile=asm-multicloud
! global.mtls.enabled is deprecated; use the PeerAuthentication resource instead
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Addons installed
✔ Installation complete
$ kubectl get pod -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-5dc4b4676c-k8g4f                1/1     Running   0          10m
istio-ingressgateway-749d6659ff-k2lcb   1/1     Running   0          10m
istio-ingressgateway-749d6659ff-zhbpr   1/1     Running   0          10m
istiod-69657b479-bqqk2                  1/1     Running   1          10m
istiod-69657b479-dwr5b                  1/1     Running   0          10m
kiali-6f457f5964-vmf2l                  1/1     Running   0          10m
prometheus-6b567696c5-kp7lg             2/2     Running   0          10m
promsd-6b77b75f8b-kw94z                 2/2     Running   1          10m

With ASM installed we can leverage the core features of traffic management, security and observability that Istio offers, as well as an array of additional features available within Anthos’ implementation of Service Mesh. The forte of ASM is multi-cluster deployments, where applications communicate across cluster boundaries. We have seen with replicated control planes that Istio can be configured to communicate cross-cluster; ASM, however, seeks to abstract that additional layer of configuration away from the administrator, enabling not only cross-cluster routing but also cross-boundary trust. This is an area which is still developing, but the prospect of a managed service mesh control plane to oversee certificate issuance and renewal across multiple meshes, as well as facilitate cross-cluster routing, is a significant value-add for Anthos and ASM.
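A quick way to confirm the mesh is healthy after installation is to check that every sidecar and gateway has synced its configuration from istiod:

```shell
$ kubectx eks
# Each proxy should report SYNCED for its CDS/LDS/EDS/RDS configuration.
$ istioctl proxy-status
```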

Workloads

Lastly, registered clusters within Anthos are treated as any other cluster in our Google Cloud environment. Consequently, workloads running on those clusters are available within the Google Cloud Console, again enabling a single-pane-of-glass for all of our workloads across environments.

This observability extends to application configuration and state, as well as telemetry data around usage and behaviour through pod metrics and logs.

Deploying the Online Boutique demonstrates our capability to monitor workloads running in non-Google Cloud environments from the Google Cloud Console.

$ kubectx eks
$ kubectl label namespace default istio-injection=enabled
$ kpt pkg get \
    https://github.com/GoogleCloudPlatform/microservices-demo.git/release \
    microservices-demo
$ kubectl apply -f microservices-demo
$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
adservice-687b58699c-wftf2               2/2     Running   0          22m
cartservice-778cffc8f6-hgsh2             2/2     Running   1          22m
checkoutservice-98cf4f4c-gx846           2/2     Running   0          22m
currencyservice-c69c86b7c-nf6cd          2/2     Running   0          22m
emailservice-5db6c8b59f-4sszg            2/2     Running   0          22m
frontend-8d8958c77-g76gw                 2/2     Running   0          22m
loadgenerator-6bf9fd5bc9-v6b24           2/2     Running   3          22m
paymentservice-698f684cf9-xdb92          2/2     Running   0          22m
productcatalogservice-789c77b8dc-kv96k   2/2     Running   0          22m
recommendationservice-75d7cd8d5c-dhrfj   2/2     Running   0          22m
redis-cart-5f59546cdd-ln6r9              2/2     Running   0          22m
shippingservice-7d87945947-c6hqw         2/2     Running   0          22m
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].'"$HOST_KEY"'}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ sensible-browser ${INGRESS_HOST}:${INGRESS_PORT}

online-boutique

Navigating to the Google Cloud Console, all the workloads running across the Anthos environments are viewable, spanning the control plane (where possible) and application namespaces.

gcp-workloads

Drilling down into a specific application, metadata and configuration is available depicting its state and behaviour.

frontend-pod

frontend-events

frontend-logs

Future

Anthos attached clusters brings the orchestration and administration of disparate Kubernetes clusters under a consolidated view of the world in GCP, whilst extending the Anthos feature set to a multitude of environments.

Later in 2020, GKE on Azure will accompany GKE on AWS as a fully supported GKE deployment. In the meantime attached clusters allows for existing Azure and other cluster environments to make use of Anthos’ feature set, as well as aiding enterprises in their hybrid and multi-cloud strategies, and cloud-native transformation initiatives.

Get in touch

If you want to know more about Anthos or running hybrid and multi-cloud Kubernetes, Jetstack offers consulting and subscription services which can help with your investigation and adoption in a variety of ways. Let us know if you’re interested in a workshop or in working together to dive deeper into Anthos.