# Introduction

With the Managed Kubernetes offering, ESC provides Kubernetes clusters on demand with complete integration into customer networks. Swisscom manages the lifecycle of the cluster and guarantees availability of each cluster's control plane, meaning all services running on Kubernetes masters, such as the API server or the scheduler.

The managed Kubernetes offering is based on VMware PKS and is deployable via the ESC portal. This service is composed of two blueprints:

  • Kubernetes Environment: This is the network definition for the deployment of Kubernetes clusters, and is used to automate connectivity to your network. Each Kubernetes cluster must belong to an environment, which can contain up to 20 clusters.
  • Kubernetes Cluster: This is the actual deployment description of the Kubernetes cluster within a Kubernetes environment; it requires parameters which are explained below. Day 2 actions are available for multiple cases, such as scaling the number of worker nodes in or out.

Further details about the offering can be found in the service description.

The screenshots below are taken from vRealize.

# Kubernetes Environment

Kubernetes environments are the parent instance of Kubernetes clusters, and can hold up to 20 of them. They are used to provision the network environment for your clusters and their connection to your network zone.

When you first select the Kubernetes Environment item from the catalog, you must choose the Business Group in which your environment should be created. Please note that environments are tenant-wide, meaning they can be provisioned in Business Group A and be used to deploy Kubernetes Clusters in Business Groups A, B or C. Note that you cannot delete a Kubernetes Environment until all Kubernetes Clusters within it are deleted.

Below is a screenshot of the environment form, which appears after the Business Group has been selected. All fields are required.

*Screenshot: Kubernetes Environment form*
  • Uplink Topology: Select the network to which your clusters should be connected.
  • Service: There is only one service to choose from; this field exists to accommodate future versions of the Kubernetes Environment service.
  • Plan: There is only one plan to choose from, for the same reason.
  • Description: Add here a useful description to identify the environment later on.
  • VIP Pool: Define the pool from which the service will allocate IPs. It needs to be coordinated within your environment, as these will be your egress and ingress IPs. See the Kubernetes Cluster Network section for more details on IP usage.
  • DNS Servers: Define the DNS servers that will be used by all your Kubernetes Clusters. At least one is mandatory, otherwise your containers will not be able to resolve any service outside the cluster, or even your own container image registry.

# Restrictions

The following network ranges cannot be used due to conflicts with internal communications:

  • 169.254.0.0/28
  • 172.17.0.0/16
  • 172.18.0.0/16
  • 172.19.0.0/16
  • 172.20.0.0/16
  • 172.21.0.0/16
  • 172.22.0.0/16
  • 10.100.200.0/24

# Kubernetes Cluster

The Kubernetes Cluster blueprint triggers the required workflows to provide a Kubernetes cluster with PKS. The version of Kubernetes cannot be chosen and depends on the version of PKS deployed by Swisscom. Current installed versions of each service are:

  • PKS: 1.6.1
  • Kubernetes: 1.15.5

The Kubernetes cluster deployed by PKS is a standard Kubernetes cluster, and most of the standard concepts are available. Please note that there might be some differences from what can be found on other Kubernetes offerings, such as the available annotations or pre-configured Custom Resource Definitions. Documentation regarding PKS can be found in the VMware documentation. Documentation on Kubernetes is available on its official website.

The cluster is deployed within the chosen Kubernetes environment, and its API is reachable on port 8443 through an IP randomly taken from the VIP Pool specified in your environment, for instance with the kubectl CLI tool. Please note that you should configure your DNS to point the domain selected at provisioning to this IP; alternatively, you may edit your local hosts file or disable certificate verification in your local Kubernetes CLI configuration.
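As an illustration, a minimal kubeconfig entry pointing at a cluster VIP might look like the following sketch. The IP, cluster name, and token are placeholders, and disabling certificate verification should only be a temporary workaround until DNS points at the VIP:

```yaml
# Hypothetical kubeconfig snippet -- VIP, names and token are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: my-pks-cluster
  cluster:
    server: https://172.16.200.10:8443   # VIP from the environment's VIP Pool, API on port 8443
    insecure-skip-tls-verify: true       # testing only; prefer a DNS record for the chosen domain
contexts:
- name: my-pks-cluster
  context:
    cluster: my-pks-cluster
    user: pks-user
current-context: my-pks-cluster
users:
- name: pks-user
  user:
    token: <token-obtained-at-provisioning>
```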

The plans are described within the service description. Depending on the plan selected, the cluster is deployed with 1 or 3 Master nodes, on which no containers can be scheduled, and at least 1 Worker node. A Load Balancer is provisioned as well, configured by default to handle the API service on one IP, with another IP reserved for Ingress traffic, as PKS comes with a default Ingress Controller provided by VMware NCP.
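Since the NCP-provided controller handles standard Ingress resources, HTTP routing to the reserved Ingress IP can be sketched with a plain Ingress manifest like the one below. The hostname and backing service name are illustrative, and the `v1beta1` API group is assumed because the installed Kubernetes version is 1.15:

```yaml
# Hypothetical Ingress sketch -- hostname and service name are placeholders.
apiVersion: networking.k8s.io/v1beta1   # Kubernetes 1.15 serves Ingress from this API group
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - host: app.example.com               # point DNS for this host at the reserved Ingress IP
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc         # an existing ClusterIP service in the same namespace
          servicePort: 80
```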

In this version of the service, privileged containers are enabled.

# Plan sizes

The source of truth is the Service Description. Available plans are currently:

| Plan | Master | Worker | Worker Count | Worker Storage |
|------|--------|--------|--------------|----------------|
| 2c8r.dev | 1 Master w/ 2 CPU, 8GB RAM | 2 CPU, 8GB RAM | 3 - 10 | 30GB |
| 2c8r.std | 3 Masters w/ 2 CPU, 8GB RAM | 2 CPU, 8GB RAM | 3 - 10 | 30GB |
| 4c16r.std | 3 Masters w/ 4 CPU, 16GB RAM | 4 CPU, 16GB RAM | 3 - 50 | 30GB |
| 4c32r.mem | 3 Masters w/ 4 CPU, 16GB RAM | 4 CPU, 32GB RAM | 3 - 50 | 80GB |
| 8c32r.std | 3 Masters w/ 4 CPU, 16GB RAM | 8 CPU, 32GB RAM | 3 - 50 | 80GB |
| 8c64r.mem | 3 Masters w/ 4 CPU, 16GB RAM | 8 CPU, 64GB RAM | 3 - 50 | 150GB |
| 16c64r.std | 3 Masters w/ 4 CPU, 16GB RAM | 16 CPU, 64GB RAM | 3 - 50 | 150GB |
| 16c128r.mem | 3 Masters w/ 4 CPU, 16GB RAM | 16 CPU, 128GB RAM | 3 - 50 | 200GB |

# Default Storage

The standard storage class which is applied to every cluster is pks-default-thick-storage and uses the kubernetes.io/vsphere-volume storage provider. Creating a persistent volume claim will trigger the creation of a ReadWriteOnce Persistent Volume represented as a vmdk volume in the underlying vSphere Datastore of your environment. Each cluster comes with an allowance of 1TB.
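Requesting a volume through this default class can be sketched with a standard PersistentVolumeClaim; the claim name and requested size below are illustrative:

```yaml
# Minimal PersistentVolumeClaim sketch -- name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce                             # the vSphere volume provisioner creates RWO volumes
  storageClassName: pks-default-thick-storage # the cluster's default class
  resources:
    requests:
      storage: 10Gi                           # counts against the cluster's 1TB allowance
```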

Using hostPath volumes for storage is strongly discouraged, since every update performed by Swisscom recreates your worker nodes, deleting any data stored on them.

# Kubernetes Services

Services of type LoadBalancer take an IP from the VIP Pool of your environment. Your workload will be accessible from this IP address. Please note that you can ignore the first IP (169.254.x.y), which is an internal artifact.

```
$ kubectl get svc http-lb
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP                   PORT(S)        AGE
http-lb   LoadBalancer   10.100.200.155   169.254.128.3,172.16.200.27   80:32499/TCP   28s
```
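A service like the one shown above could be created from a manifest along these lines; the selector label and ports are illustrative:

```yaml
# Hypothetical Service sketch -- labels and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: http-lb
spec:
  type: LoadBalancer    # an external IP is allocated from the environment's pool
  selector:
    app: http-app       # must match the labels of the backing pods
  ports:
  - port: 80            # port exposed on the external IP
    targetPort: 8080    # port the pods listen on
    protocol: TCP
```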

The service type NodePort is not supported by PKS.