Oracle Kubernetes Cluster
Being an Oracle Groundbreaker Ambassador, I get to use the Oracle Cloud.
They have added support for Kubernetes lately. I must say I was pleasantly surprised about it.
It works perfectly.
So, here is a little tutorial if you want to play with it.
It uses Terraform. Oracle Cloud has developed a provider for it, which makes everything easier and scriptable from the command line.
The first step is to configure your Oracle Cloud Infrastructure (OCI) account correctly. You can more or less follow this.
I will comment and summarize here.
- Install Terraform (`brew install terraform`); you probably already have brew on a Mac, so no need to install it first
- Generate an ssh key for OCI. Follow the instructions. You could use an existing key, but other scripts assume you have a key in the `.oci` directory, so it's just easier to create a new one
- Add the public key in the Oracle console. Your life will be easier if you log in before clicking on all the links
- Create an `env-vars.sh`. I haven't added it to my `.bash_profile`; I just do a `source env-vars.sh` when needed. There are 2 fun values to find: `TF_VAR_tenancy_ocid` and `TF_VAR_user_ocid`. The tenancy is here. The user is here. You can, of course, use the region you prefer.
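To make the setup concrete, here is a sketch of the key generation (the standard openssl commands from the OCI docs) followed by a possible `env-vars.sh`. All the OCID values below are placeholders, and the exact `TF_VAR_` names must match what the repository's `variables.tf` expects:

```shell
# Generate the OCI API signing key pair in the conventional ~/.oci location:
mkdir -p ~/.oci
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
# The fingerprint of the key (goes into TF_VAR_fingerprint):
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c

# env-vars.sh -- replace every placeholder with your own values:
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaa..."
export TF_VAR_user_ocid="ocid1.user.oc1..aaaa..."
export TF_VAR_fingerprint="12:34:56:..."
export TF_VAR_private_key_path="$HOME/.oci/oci_api_key.pem"
export TF_VAR_region="us-ashburn-1"
```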
Create the OKE
Now we get serious and we will create the Oracle Kubernetes Engine (OKE). This is explained here.
Again, the steps with my comments:
- Get the git repository for OKE: `git clone https://github.com/cloud-partners/oke-how-to.git`. You might want to fork and commit since you will tweak it
- Init terraform: `terraform init`
- Generate the plan to make sure it works: `terraform plan`. You might want to modify `terraform/variables.tf` first. This file contains the name of your cluster, the number of nodes per subnet you want, the server instance type and the OKE version used.
- You can then apply the plan to create your cluster: `terraform apply`. It should work magically. I had one problem on my side though. I think it's because I have an old account. My OKE limit was at 0, so I couldn't create a cluster. I had to ask support to fix it, which was done pretty quickly.
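Put together, the whole creation flow is only a handful of commands. This is a sketch: it assumes the Terraform files live in the repository's `terraform/` directory and that you saved `env-vars.sh` in your home directory:

```shell
git clone https://github.com/cloud-partners/oke-how-to.git
cd oke-how-to
source ~/env-vars.sh   # the variables from the setup section above
cd terraform
terraform init         # downloads the OCI provider
terraform plan         # dry run: shows what will be created
terraform apply        # creates the OKE cluster for real
```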
One thing I am not sure about is whether you will need to add some policies.
Just in case, here are mine (I'm in the Administrators group):
- ListandGetVCNs: Allow group Administrators to manage vcn in tenancy
- ListGetsubnets: Allow group Administrators to manage virtual-network-family in tenancy
- OKE: Allow service OKE to manage all-resources in tenancy
- PSM-root-policy: PSM managed compartment root policy
- Tenant Admin Policy: Tenant Admin Policy
I haven't mastered the policy system yet, so I'm not quite sure what does what.
Deploy on the cluster
You now have a running cluster so let’s deploy some stuff on it.
- For that you need kubectl and helm: `brew install kubectl kubernetes-helm`
- Add the kube config for the cluster. The doc will tell you to use the `config` file generated by Terraform. It works, but in general you want to keep the configuration of your other clusters (e.g. docker-for-desktop-cluster or minikube), so you will probably prefer to give it another name. You can then switch from one context to another using `kubectl config use-context oci`
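One way to merge the generated file into your existing kube config instead of replacing it. This is a sketch: `oke-kubeconfig` is a hypothetical name for the renamed Terraform output, and `oci` is the context name used above:

```shell
# KUBECONFIG can list several files; --flatten merges them into one.
export KUBECONFIG=~/.kube/config:oke-kubeconfig
kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config
unset KUBECONFIG

# List the available contexts, then switch to the OKE one:
kubectl config get-contexts
kubectl config use-context oci
```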
Then just deploy whatever you want to deploy.
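For instance, a quick smoke test with a throwaway nginx deployment (the deployment name here is arbitrary):

```shell
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer
# Wait for OCI to provision a load balancer and show its external IP:
kubectl get service hello --watch
```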
Destroy your cluster
Really important when you are done: tear everything down with `terraform destroy`.