I have used kubectl before as the CLI for accessing Kubernetes clusters, and I roughly understood how clusters, contexts, and users work in the kubectl config, but these are fairly low-level tools, and my attempt to connect to a new cluster was failing with this error.

The underlying cause was that the cluster sat behind an L4 load balancer that was offloading SSL. I already knew this, and I suspected that when I went into my Rancher control panel to grab the kubeconfig for remote access, it might not work. However, figuring out how to resolve it was confusing.

I tried pasting the top-level cert into the certificate-authority-data property in my kubeconfig, and then the CA cert, but neither worked. I suspected I needed the full chain, as you normally would, but didn't know how to assemble it correctly.
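As an aside, if you want to see what's actually sitting in that field, something like this should show you. It's only a sketch: it assumes the relevant cluster is the first entry in your kubeconfig (and base64 -d may be -D on older macOS):

    kubectl config view --raw \
        -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
        | base64 -d | openssl x509 -noout -subject -issuer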

Fortunately, I found this answer: https://stackoverflow.com/a/63518617/14401893 which was exactly what I needed.

All you have to do is:

1) Use openssl (or whatever else you prefer) to view the certificates presented by the load balancer, for example: openssl s_client -showcerts -servername rancher.example.io -connect rancher.example.io:443 and then copy and paste all of the certs (including the BEGIN CERTIFICATE and END CERTIFICATE markers) into a single text file, with the server's own cert first and the root CA last (see the first sketch after this list).

2) Instead of specifying certificate-authority-data in your kubeconfig, use certificate-authority and set its value to the path of the text file you created (see the second sketch after this list).
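To save the copy-and-paste step in 1), you can capture the whole chain in one go. This is a sketch; rancher.example.io and chain.pem are placeholders for your own host and file name. s_client prints the certs in the order the server presents them, which is already leaf first:

    # grab every PEM block from the handshake and write them to one file
    openssl s_client -showcerts -servername rancher.example.io \
        -connect rancher.example.io:443 </dev/null 2>/dev/null \
        | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > chain.pem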
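And here's roughly what the cluster entry in your kubeconfig ends up looking like for 2). The server URL and cluster name below are made up; the important part is swapping certificate-authority-data for certificate-authority:

    clusters:
    - cluster:
        # certificate-authority-data: LS0tLS1CRUdJTi...   <- remove this line
        certificate-authority: /path/to/chain.pem
        server: https://rancher.example.io/k8s/clusters/c-abc123
      name: my-cluster

After saving, something like kubectl get nodes should connect cleanly.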

Bing bang baboosh!