Architecture
Nauthera is built on the Kubernetes Operator pattern. Just as ingress-nginx deploys nginx and watches Ingress resources, Nauthera deploys an auth server and watches OidcClient and AuthPolicy resources. The operator continuously reconciles the actual state of the cluster against the desired state declared in those resources.
The Operator Pattern
A Kubernetes operator extends the Kubernetes API with domain-specific logic. Rather than requiring administrators to run imperative scripts when configuration changes, an operator reacts to changes in real time and takes the actions necessary to bring the cluster into the desired state.
Nauthera's operator watches four CRDs:
- `OidcClient`
- `ServiceAccount`
- `ClusterAuthPolicy`
- `AuthPolicy`
When any of these resources is created, updated, or deleted, the operator's reconciliation loop is triggered.
CRD Relationships
The CRDs interact through the operator rather than referencing each other directly: policies shape how the operator provisions credentials for clients and service accounts.
Nauthera Operator
The operator is the central piece. It:
- Runs the OIDC/OAuth2 auth server as part of the operator deployment (all-in-one container).
- Stores users in PostgreSQL and optionally manages sessions in Redis for HA deployments (configured via Helm values).
- Applies `ClusterAuthPolicy` resources for cluster-wide defaults and `AuthPolicy` resources for namespace-scoped overrides.
- Provisions OIDC credentials for each `OidcClient` and `ServiceAccount`.
ClusterAuthPolicy & AuthPolicy
ClusterAuthPolicy resources are cluster-scoped and govern token issuance across the entire cluster. Multiple ClusterAuthPolicy resources are merged according to a defined precedence order. AuthPolicy resources are namespaced and override specific cluster-wide defaults for OidcClients within a given namespace. ClusterAuthPolicy resources are typically managed by the security team, while AuthPolicy resources can be delegated to application teams for per-namespace customisation.
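In practice the split might look like the following sketch. This is illustrative only: the API group, version, and spec field names below are assumptions, not the documented CRD schema.

```yaml
# Cluster-wide defaults, managed by the security team.
apiVersion: nauthera.io/v1alpha1   # illustrative API group/version
kind: ClusterAuthPolicy
metadata:
  name: default-policy
spec:
  accessTokenTTL: 15m              # illustrative field names
  refreshTokenTTL: 8h
---
# Namespace-scoped override, delegated to an application team.
apiVersion: nauthera.io/v1alpha1
kind: AuthPolicy
metadata:
  name: payments-overrides
  namespace: payments-prod
spec:
  accessTokenTTL: 5m               # tighter TTL than the cluster default
```

The operator merges the namespaced policy over the cluster-wide defaults for clients in that namespace.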
OidcClient
OidcClient resources are namespaced to the application they represent. An application team in the payments-prod namespace creates their own OidcClient and the operator provisions a Secret in the same namespace containing the OIDC credentials — the application team never needs cluster-admin access.
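A minimal `OidcClient` might look like the following sketch; the API group/version and spec fields are illustrative assumptions, so check the CRD reference for the real schema.

```yaml
apiVersion: nauthera.io/v1alpha1   # illustrative API group/version
kind: OidcClient
metadata:
  name: payments-frontend
  namespace: payments-prod         # same namespace as the application
spec:
  redirectUris:                    # illustrative field names
    - https://payments.example.com/callback
  grantTypes:
    - authorization_code
  scopes:
    - openid
    - profile
```

On reconciliation, the operator writes the generated credentials into a Secret in `payments-prod`.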
ServiceAccount
ServiceAccount resources provide machine-to-machine OAuth2 credentials for workloads that need to authenticate without a user context. Like OidcClient, each ServiceAccount is namespaced and the operator provisions a credentials Secret automatically. ServiceAccounts exclusively use the client_credentials grant type and support automatic credential rotation.
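A `ServiceAccount` sketch under the same assumptions (API group and field names are illustrative):

```yaml
apiVersion: nauthera.io/v1alpha1   # illustrative API group/version
kind: ServiceAccount
metadata:
  name: payments-batch
  namespace: payments-prod
spec:
  # ServiceAccounts only use the client_credentials grant,
  # so no redirect URIs are needed.
  scopes:                          # illustrative field names
    - payments.read
  rotation:
    enabled: true                  # hypothetical knob for automatic rotation
```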
The Reconciliation Loop
Each CRD has a dedicated controller with its own reconciliation loop. The loops are event-driven: Kubernetes informs the controller when a watched resource changes.
What the Operator Manages
The operator deployment runs the auth server directly — no separate Deployments or Services to manage. When you install Nauthera via Helm, the operator:
- Runs the OIDC server process within the operator pod.
- Exposes the OIDC endpoint via a `Service` and optional `Ingress` (or `HTTPRoute` for Gateway API).
- Connects to PostgreSQL for user storage and optionally Redis for session management in HA deployments (configured via Helm values).
- References cert-manager `Certificate` secrets for TLS.
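A minimal values file for such an install might look like this sketch; every key below is illustrative and should be checked against the chart's actual values.yaml before use.

```yaml
# values.yaml — illustrative keys only
operator:
  replicaCount: 2
ingress:
  enabled: true
  host: auth.example.com
database:
  host: postgres.auth-system.svc   # external PostgreSQL, HA is your responsibility
redis:
  enabled: true                    # optional, for HA session storage
tls:
  certManager:
    issuerRef:
      name: letsencrypt-prod       # cert-manager issuer providing the Certificate
```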
When you create an OidcClient or ServiceAccount, the operator:
- Generates a cryptographically random `client_id` and `client_secret`.
- Registers the client with the built-in auth server.
- Projects the credentials into a `Secret` in the resource's namespace.
- Creates a `ConfigMap` with OIDC endpoint URLs (issuer, token endpoint, JWKS URI, etc.) so applications can integrate without hardcoded values or manual discovery.
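An application can then consume the generated objects with standard `envFrom` projection. The Secret and ConfigMap names below are hypothetical; the operator's actual naming convention may differ.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-frontend
  namespace: payments-prod
spec:
  selector:
    matchLabels: { app: payments-frontend }
  template:
    metadata:
      labels: { app: payments-frontend }
    spec:
      containers:
        - name: app
          image: registry.example.com/payments-frontend:1.0   # illustrative image
          envFrom:
            - secretRef:
                name: payments-frontend-oidc            # hypothetical: client_id / client_secret
            - configMapRef:
                name: payments-frontend-oidc-endpoints  # hypothetical: issuer, token URL, JWKS URI
```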
Admission Webhook
Nauthera includes a validating admission webhook that rejects invalid CRD configurations before they are persisted to etcd. The webhook validates:
- Redirect URIs are well-formed and use HTTPS (except `localhost` for development).
- Grant types are from the supported set.
- Token TTL values are parseable and within allowed bounds.
- Scope names conform to RFC 6749 syntax.
- Claim mapping fields reference valid attributes.
The webhook is installed automatically by the Helm chart and uses a cert-manager Certificate for TLS.
Token Format & Signing
Nauthera issues JSON Web Tokens (JWT) for access tokens, ID tokens, and optionally refresh tokens.
- Signing algorithms: RS256 (default) or ES256, configurable via Helm values (`operator.signingAlgorithm`).
- JWKS endpoint: The public keys are published at `/.well-known/jwks.json` on the auth server, allowing relying parties to verify tokens without contacting the auth server directly.
- Key storage: Signing keys are stored as Kubernetes `Secret` resources in the operator's namespace. All clients within the cluster share the same signing keys (single trust domain). If you need separate trust domains, deploy separate Nauthera instances.
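Switching from the RS256 default to ES256 is then a one-line values change (the `operator.signingAlgorithm` key is stated above; the surrounding structure is as assumed):

```yaml
operator:
  signingAlgorithm: ES256   # default: RS256
```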
Token Revocation
Nauthera implements RFC 7009 token revocation via the /oauth2/revoke endpoint.
- Supports revocation of both access tokens and refresh tokens.
- Revoking a refresh token also invalidates all access tokens issued from that refresh token.
- Revocation requests are idempotent — revoking an already-revoked or expired token returns a success response.
Token Verification
Nauthera issues self-contained JWTs. Resource servers verify tokens locally using the public keys from the JWKS endpoint (/.well-known/jwks.json) — no call to the auth server is needed at request time.
Token Introspection (RFC 7662): Nauthera does not currently implement the token introspection endpoint. Since all tokens are self-contained JWTs, resource servers can verify tokens locally using the JWKS endpoint. Token introspection is planned for a future release to support opaque token formats.
For the full list of endpoints, see the Endpoint Reference.
Rate Limiting
Auth endpoints are rate-limited to protect against abuse:
| Endpoint | Default Limit | Configurable Via |
|---|---|---|
| `/oauth2/token` | 100 req/min per client | `operator.rateLimits.tokenEndpoint` |
| `/oauth2/authorize` | 60 req/min per IP | `operator.rateLimits.authorizeEndpoint` |
| `/oauth2/revoke` | 30 req/min per client | `operator.rateLimits.revokeEndpoint` |
Rate limits are configurable via Helm values. When a limit is exceeded, the server returns 429 Too Many Requests.
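Raising a limit is a values change, for example (the keys come from the table above; the plain requests-per-minute integer format is an assumption):

```yaml
operator:
  rateLimits:
    tokenEndpoint: 200       # req/min per client
    authorizeEndpoint: 120   # req/min per IP
    revokeEndpoint: 60       # req/min per client
```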
CORS
CORS (Cross-Origin Resource Sharing) headers are configured automatically based on the redirectUris of registered OidcClients — the origins are extracted from the redirect URIs. Additional allowed origins can be configured via the Helm value operator.cors.allowedOrigins.
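For example, to allow an origin that does not appear in any registered redirect URI (a sketch; the exact value shape is assumed):

```yaml
operator:
  cors:
    allowedOrigins:
      - https://admin.example.com   # not derivable from any client's redirect URIs
```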
User Management & Security
The operator manages user accounts in PostgreSQL with the following security features:
- Password hashing: All passwords are hashed using bcrypt with a configurable cost factor (default: 12).
- Password policy: Configurable minimum length and complexity requirements via Helm values (
operator.passwordPolicy). - Brute-force protection: Account lockout after a configurable number of failed login attempts (default: 5 within 15 minutes). Locked accounts unlock automatically after a cooldown period (default: 30 minutes).
- Admin API: The operator exposes a management API for user CRUD operations, scoped to the operator's namespace and protected by Kubernetes RBAC.
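These knobs might be tuned via values such as the following sketch. Only the `operator.passwordPolicy` key is documented above; the nested field names and the brute-force grouping are illustrative.

```yaml
operator:
  passwordPolicy:            # documented key; nested fields are illustrative
    minLength: 12
    requireUppercase: true
    requireDigit: true
  bruteForce:                # hypothetical grouping for the lockout settings
    maxFailedAttempts: 5     # default per the text above
    window: 15m
    lockoutDuration: 30m
```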
For detailed instructions on creating and managing users, see the User Management guide.
Login & Consent UI
The operator serves a built-in login page and consent screen. The login page collects user credentials and creates a session. The consent screen presents requested scopes for user approval.
Both pages can be customized via Helm values (logo, colors, footer). See the Login Experience guide for details.
Security Headers
The auth server sets the following security headers on all login and consent pages:
| Header | Value |
|---|---|
| `Strict-Transport-Security` | `max-age=31536000; includeSubDomains` |
| `Content-Security-Policy` | `default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' https:` |
| `X-Frame-Options` | `DENY` |
| `X-Content-Type-Options` | `nosniff` |
| `Referrer-Policy` | `strict-origin-when-cross-origin` |
| `Cache-Control` | `no-store, no-cache, must-revalidate` |
These headers protect against clickjacking, XSS, and caching of sensitive pages. They are set automatically and cannot be disabled.
Multi-Tenancy Model
Nauthera is designed for clear ownership boundaries with self-service for application teams:
| Role | Responsibility | How |
|---|---|---|
| Platform team | Install and configure the operator | Helm chart + values.yaml |
| Security team | Define cluster-wide auth policies | ClusterAuthPolicy resources |
| Application team | Register OIDC clients and service accounts, optionally override policies | OidcClient / ServiceAccount resources + optional AuthPolicy in their namespace |
Application teams can only see resources in their own namespace. They cannot read the database configuration or any other team's clients.
Note: The auth server is a single shared process within the operator deployment. Namespace isolation is logical (enforced via CRD scoping and Kubernetes RBAC), not process-level. All namespaces share the same auth server instance, signing keys, and database. If you require hard process-level isolation between tenants, deploy separate Nauthera operator instances per tenant.
Scaling & Performance
The operator deployment serves two roles in a single process:
- Controller (CRD reconciliation) — uses Kubernetes leader election (`leaderelection.k8s.io/lease`) so only one replica reconciles at a time. The controller is event-driven and not a throughput bottleneck.
- Auth server (HTTP) — all replicas serve traffic simultaneously, making the auth server horizontally scalable.
Horizontal Scaling
Scale the operator with replicaCount in the Helm values or attach a Horizontal Pod Autoscaler (HPA) targeting CPU utilisation or request rate. Auth-server requests are stateless per-request: sessions live in Redis, users live in PostgreSQL, and signing keys are cached in-memory via a Kubernetes watch on the signing-key Secret.
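A standard `autoscaling/v2` HPA works against the operator Deployment. The Deployment name and namespace below are assumptions about the chart's defaults; substitute your release's actual names.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nauthera-operator        # assumed Deployment name
  namespace: nauthera-system     # illustrative install namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nauthera-operator
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Because only the leader replica reconciles, extra replicas add auth-server capacity without duplicating controller work.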
Built-in Caching
The operator caches frequently accessed data in-memory to minimise database and API-server round-trips:
| Data | Cache Strategy | Invalidation |
|---|---|---|
| Client registry (OidcClient specs) | In-memory | CRD watch events |
| JWKS / signing keys | In-memory | Key rotation watch |
| Merged policy (ClusterAuthPolicy + AuthPolicy) | In-memory | Policy CRD watch events |
| User lookups | Not cached | — (correctness for auth decisions) |
Connection Pooling
The operator uses a built-in connection pool for PostgreSQL. Pool parameters are configurable via Helm values:
```yaml
database:
  pool:
    maxConnections: 25
    minConnections: 5
    connectionTimeout: 5s
```

Reference Sizing
| Deployment | Replicas | Clients | Users | Token req/s |
|---|---|---|---|---|
| Dev / test | 1 | < 50 | < 1 000 | < 50 |
| Small prod | 2–3 | < 200 | < 10 000 | < 500 |
| Medium prod | 3–5 | < 1 000 | < 100 000 | < 2 000 |
| Large prod | 5–10 + HPA | < 5 000 | < 1 000 000 | < 10 000 |
Infrastructure dependencies. PostgreSQL and Redis high-availability are outside Nauthera's scope. For PostgreSQL HA, consider CloudNativePG. For Redis HA, use Redis Sentinel or Redis Cluster.
Key Rotation
Nauthera automatically rotates OIDC signing keys on a configurable schedule (default: 30 days). During rotation:
- A new signing key is generated and added to the JWKS endpoint alongside the existing key.
- After a configurable overlap period (default: 24 hours, allowing existing tokens to expire naturally), the old key is removed.
This overlap ensures that tokens issued before the rotation remain valid throughout their configured lifetime.
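The schedule and overlap might be configured via values like this sketch; only the defaults are documented above, and the key names are illustrative.

```yaml
operator:
  keyRotation:        # illustrative keys
    interval: 720h    # 30 days (the documented default)
    overlap: 24h      # old key stays published in the JWKS this long after rotation
```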
Backup & Recovery: Signing key Secrets should be included in your cluster backup strategy (e.g., Velero). If signing keys are lost, all previously issued tokens become unverifiable and clients will need to re-authenticate. Consider backing up the `nauthera-signing-keys` Secret in the operator namespace as part of your disaster recovery plan.
Roadmap
Nauthera is under active development. Planned features include:
- Self-service registration & password reset: Email-based sign-up and forgot-password flows.
- Token introspection (RFC 7662): For resource servers that prefer server-side token validation.
- OpenTelemetry tracing: Distributed tracing support alongside existing Prometheus metrics.
- WebAuthn / passkey MFA: Additional MFA methods beyond TOTP.
- Alternative user storage: LDAP and external identity provider (IdP) support for organisations with existing directory infrastructure.
- OAuth2 social login / sign-up: Allow end-users to authenticate via external OAuth2 providers (e.g., Google, GitHub) in addition to the built-in PostgreSQL user store.
- Admin dashboard: Web-based UI for managing users, clients, and policies.