# kubectl
This guide shows how to authenticate kubectl against a Kubernetes cluster using Nauthera-issued OIDC tokens. There are two approaches depending on your use case:
| Method | Use case | Flow |
|---|---|---|
| kubelogin (recommended) | Developer workstations | Authorization code + PKCE, browser-based login |
| NautheraServiceAccount | CI/CD pipelines, automation | Client credentials, no browser needed |
Both methods require the Kubernetes API server to trust Nauthera as an OIDC issuer.
## Prerequisites
- A running Nauthera instance with a reachable issuer URL (e.g. `https://auth.example.com`)
- Admin access to configure kube-apiserver flags
- A `ClusterAuthPolicy` or `AuthPolicy` that includes the `groups` scope
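Before touching API server flags, it helps to confirm the issuer's discovery document is reachable; the path below is the standard OIDC discovery location (the `curl` line is commented out because it needs network access to your issuer):

```shell
# The kube-apiserver fetches Nauthera's signing keys via the OIDC discovery
# document, so confirm it is served before configuring any flags.
ISSUER_URL="https://auth.example.com"
DISCOVERY_URL="${ISSUER_URL%/}/.well-known/openid-configuration"
echo "$DISCOVERY_URL"
# curl -sf "$DISCOVERY_URL" | jq .issuer   # requires network access to the issuer
```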
## Step 1 — Configure the API server
Add the OIDC flags to your kube-apiserver. This is the same configuration used in the Kubernetes Dashboard guide:

```
--oidc-issuer-url=https://auth.example.com
--oidc-client-id=<client-id>
--oidc-username-claim=email
--oidc-groups-claim=groups
--oidc-username-prefix="oidc:"
--oidc-groups-prefix="oidc:"
```
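With these flags, a token whose `email` claim is, say, `alice@example.com` (a made-up example) authenticates as the Kubernetes user `oidc:alice@example.com`, and every entry of its `groups` claim is prefixed the same way:

```shell
# Illustration only, not a kubectl command: how the API server maps token
# claims to Kubernetes subject names under the flags above.
email="alice@example.com"
groups="developers viewers"
echo "user: oidc:${email}"
for g in $groups; do
  echo "group: oidc:${g}"
done
```

These prefixed names are exactly what the RBAC bindings in Step 2 must match.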
For managed Kubernetes providers:
| Provider | How to configure |
|---|---|
| EKS | aws eks associate-identity-provider-config with OIDC config |
| GKE | Not natively supported; use Anthos Identity Service or a webhook token reviewer |
| AKS | az aks update --enable-oidc-issuer or AAD integration |
| kubeadm | Edit /etc/kubernetes/manifests/kube-apiserver.yaml |
| k3s | Add flags to /etc/rancher/k3s/config.yaml under kube-apiserver-arg |
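For example, on k3s the flags above can be set declaratively (a sketch using this guide's issuer URL; verify the exact file location against your k3s version):

```yaml
# /etc/rancher/k3s/config.yaml
kube-apiserver-arg:
  - "oidc-issuer-url=https://auth.example.com"
  - "oidc-client-id=<client-id>"
  - "oidc-username-claim=email"
  - "oidc-groups-claim=groups"
  - "oidc-username-prefix=oidc:"
  - "oidc-groups-prefix=oidc:"
```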
## Step 2 — Create Kubernetes RBAC bindings
Bind Nauthera groups to Kubernetes roles. The `oidc:` prefix matches the `--oidc-groups-prefix` flag:

```yaml
# Cluster admins — full access
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nauthera-cluster-admins
subjects:
- kind: Group
  name: "oidc:cluster-admins"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Developers — namespace-scoped access
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nauthera-developers
  namespace: my-app
subjects:
- kind: Group
  name: "oidc:developers"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
# Read-only for everyone else
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nauthera-viewers
subjects:
- kind: Group
  name: "oidc:viewers"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```

## Option A — Interactive login with kubelogin
kubelogin is a kubectl plugin that handles the OIDC authorization code flow with PKCE. It opens a browser for authentication and caches tokens locally.
### Create an OidcClient
```yaml
apiVersion: auth.nauthera.io/v1alpha1
kind: OidcClient
metadata:
  name: kubectl
  namespace: nauthera-system
spec:
  displayName: kubectl
  redirectUris:
    - "http://localhost:8000"
    - "http://localhost:18000"
  allowedScopes:
    - openid
    - profile
    - email
    - groups
  grantTypes:
    - authorization_code
    - refresh_token
  pkce:
    required: true
  authMethod: none
```

The redirect URIs use localhost because kubelogin runs a temporary local server to receive the callback. Setting `authMethod: none` with PKCE required is the recommended pattern for public clients like CLI tools.
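kubelogin performs the PKCE exchange itself; for intuition, the S256 transform from RFC 7636 (challenge = base64url-encoded SHA-256 of the verifier, without padding) can be sketched with standard tools. The `openssl` and `base64` utilities here are assumptions about your environment, not part of any kubelogin command:

```shell
# Sketch of the S256 PKCE transform. The client sends the challenge with the
# authorization request and reveals the verifier only at token exchange.
verifier=$(openssl rand -hex 32)            # 64-char high-entropy verifier
challenge=$(printf '%s' "$verifier" \
  | openssl dgst -sha256 -binary \
  | base64 | tr '+/' '-_' | tr -d '=')
echo "$challenge"   # 43 characters of base64url
```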
Apply the manifest, then read the generated client ID from the credentials Secret:

```shell
kubectl apply -f kubectl-oidc-client.yaml

CLIENT_ID=$(kubectl get secret kubectl-credentials -n nauthera-system \
  -o jsonpath='{.data.client_id}' | base64 -d)
```

### Install kubelogin
```shell
# Homebrew (macOS/Linux)
brew install int128/kubelogin/kubelogin

# Krew (kubectl plugin manager)
kubectl krew install oidc-login

# Go install
go install github.com/int128/kubelogin/cmd/kubelogin@latest
```

### Configure kubeconfig
Add a user entry that uses kubelogin as a credential plugin:
```shell
kubectl config set-credentials nauthera-oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://auth.example.com \
  --exec-arg=--oidc-client-id=$CLIENT_ID \
  --exec-arg=--oidc-extra-scope=email \
  --exec-arg=--oidc-extra-scope=profile \
  --exec-arg=--oidc-extra-scope=groups
```

Or edit `~/.kube/config` directly:
```yaml
users:
- name: nauthera-oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
        - oidc-login
        - get-token
        - --oidc-issuer-url=https://auth.example.com
        - --oidc-client-id=<your-client-id>
        - --oidc-extra-scope=email
        - --oidc-extra-scope=profile
        - --oidc-extra-scope=groups
```

### Set the context
```shell
kubectl config set-context nauthera \
  --cluster=my-cluster \
  --user=nauthera-oidc
kubectl config use-context nauthera
```

### Test it
```shell
kubectl get pods
```

On the first run, kubelogin opens your browser for authentication. After login, the token is cached and refreshed automatically.
### Token cache
kubelogin stores tokens in `~/.kube/cache/oidc-login/`. To force re-authentication:
```shell
rm -rf ~/.kube/cache/oidc-login/
kubectl get pods  # triggers new login
```

## Option B — CI/CD with NautheraServiceAccount
For pipelines and automation that need non-interactive access, use a NautheraServiceAccount with client credentials.
### Create a NautheraServiceAccount
```yaml
apiVersion: auth.nauthera.io/v1alpha1
kind: NautheraServiceAccount
metadata:
  name: ci-deployer
  namespace: ci-cd
spec:
  displayName: CI Deployer
  allowedScopes:
    - openid
    - email
    - groups
```

```shell
kubectl apply -f ci-deployer-sa.yaml
```

The operator provisions a Secret `ci-deployer-credentials` with `client_id` and `client_secret`.
### Get a token
Exchange the client credentials for an access token:
```shell
CLIENT_ID=$(kubectl get secret ci-deployer-credentials -n ci-cd \
  -o jsonpath='{.data.client_id}' | base64 -d)
CLIENT_SECRET=$(kubectl get secret ci-deployer-credentials -n ci-cd \
  -o jsonpath='{.data.client_secret}' | base64 -d)

TOKEN=$(curl -s -X POST https://auth.example.com/oauth2/token \
  -d "grant_type=client_credentials" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" \
  -d "scope=openid email groups" \
  | jq -r '.access_token')
```

### Use the token with kubectl
```shell
kubectl --token="$TOKEN" get pods
```

Or configure it in kubeconfig:
```shell
kubectl config set-credentials ci-deployer --token="$TOKEN"
kubectl config set-context ci --cluster=my-cluster --user=ci-deployer
kubectl config use-context ci
```

### GitHub Actions example
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Get Nauthera token
        id: auth
        run: |
          TOKEN=$(curl -s -X POST https://auth.example.com/oauth2/token \
            -d "grant_type=client_credentials" \
            -d "client_id=${{ secrets.NAUTHERA_CLIENT_ID }}" \
            -d "client_secret=${{ secrets.NAUTHERA_CLIENT_SECRET }}" \
            -d "scope=openid email groups" \
            | jq -r '.access_token')
          echo "token=$TOKEN" >> "$GITHUB_OUTPUT"
      - name: Deploy
        run: |
          kubectl --token="${{ steps.auth.outputs.token }}" apply -f manifests/
```

The job assumes kubectl can already reach the cluster, for example via a kubeconfig written in an earlier step.

### RBAC for service accounts
The API server resolves service account tokens the same way as user tokens. Bind the service account's email or group to a Kubernetes role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-deployer
subjects:
- kind: User
  name: "oidc:ci-deployer@ci-cd.svc"  # email claim from the token
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

## Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `error: You must be logged in` | API server doesn't trust Nauthera | Verify the `--oidc-issuer-url` flag is set correctly |
| `Unauthorized` after successful login | RBAC binding missing or wrong prefix | Check that group names include the `oidc:` prefix |
| kubelogin: `dial tcp: connection refused` | Localhost port conflict | Try a different port with `--listen-address` |
| Token expired, no auto-refresh | `refresh_token` grant not enabled | Add `refresh_token` to the `OidcClient` `grantTypes` |
| CI token always denied | Service account not in any group | Assign the service account to a group, or bind by username |
| `invalid issuer` error | Issuer URL mismatch between token and API server | Ensure `--oidc-issuer-url` exactly matches Nauthera's `issuerUrl` |
## Related
- Kubernetes Dashboard — OIDC sign-in for the web dashboard
- `OidcClient` — Full CRD reference
- `NautheraServiceAccount` — Machine-to-machine credentials
- `AuthPolicy` — Scope and claim mapping configuration