
design-proposal: cross-cluster mesh for tenant access to host services #7

Draft
kvaps wants to merge 5 commits into cozystack:main from kvaps:proposal/cross-cluster-tenant-mesh

Conversation


@kvaps kvaps commented May 4, 2026

Summary

Adds a design proposal for cross-cluster connectivity between Cozystack-managed tenant clusters and the host cluster.

The motivating use case: a host cluster running Ceph (managed by Rook) that should be reachable from inside tenant clusters as if it were local storage. Standard single-gateway approaches (Submariner, Kilo's default mesh-granularity=location) bottleneck Ceph traffic; this proposal uses Kilo's mesh-granularity=cross (squat/kilo#328) to build a node-to-node mesh that scales linearly with cluster size and handles Rook-driven failover without controller intervention on the data path.

The proposal covers:

  • Topology and why cross-mesh fits Ceph's traffic patterns
  • A new operator (cozystack-meshlink-operator) and TenantMeshLink CRD for managing Peer objects on both sides (a rough sketch of the CRD follows this list)
  • Trust model (one-way: Cozystack → tenant; tenants have no host-cluster API access)
  • IP allocation: no new address space; existing pod-CIDRs are sufficient
  • Failure semantics, edge cases, and alternatives considered
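A minimal sketch of what the intent object could look like, assuming hypothetical group, field, and Secret names (the real CRD shape is defined in the proposal itself and has since evolved toward ClusterMesh, per the later commits):

```yaml
# Hypothetical TenantMeshLink shape; group/version, field names, and values are illustrative.
apiVersion: cozystack.io/v1alpha1
kind: TenantMeshLink
metadata:
  name: tenant-foo
spec:
  # Kubeconfig Secret the operator uses to reach the tenant cluster
  # (trust is one-way: only the host holds tenant credentials).
  kubeconfigSecretRef:
    name: tenant-foo-kubeconfig
  # Tenant pod-CIDR; must not overlap the host pod-CIDR or any other tenant's.
  podCIDR: 10.245.0.0/16
status:
  conditions:
    - type: PodCIDRConflict
      status: "False"
```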

Looking for feedback on the open questions, especially the upstream Kilo PR #328 strategy and whether tenant-side Kilo should be a hard requirement.

Test plan

This is a design proposal; no code yet. Implementation testing is scoped in the proposal and will follow in implementation PRs:

  • Unit tests for reconciliation logic
  • Admission webhook tests for pod-CIDR overlap detection
  • Integration tests with kind (two clusters; see the config sketch below)
  • E2E with real Rook + tenant cluster + Ceph CSI
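For the two-cluster kind setup, one way to keep the address spaces disjoint from the start is to give each cluster its own config (cluster names and subnets below are illustrative, not part of the proposal):

```yaml
# host-cluster.yaml (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: host
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
```

```yaml
# tenant-cluster.yaml (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: tenant-a
networking:
  podSubnet: 10.245.0.0/16
  serviceSubnet: 10.97.0.0/16
```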

Propose a controller-driven design that wires Cozystack tenant clusters
into a node-to-node WireGuard mesh with the host cluster, using Kilo's
mesh-granularity=cross topology. The motivating use case is exposing a
Rook-managed Ceph cluster to tenant pods.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

coderabbitai Bot commented May 4, 2026

Review skipped: draft detected. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.



@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a design proposal for a cross-cluster mesh using Kilo that lets tenant clusters access host-cluster services such as Ceph. The design uses a bipartite node-to-node topology managed by a new operator. The review comments below suggest several technical improvements: accounting for WireGuard MTU overhead, analyzing the scalability limits of the N × M mesh, adding fallback logic for node endpoints, using finalizers for robust resource cleanup, and extending the IP disjointness checks to include Service CIDRs.


### Topology

Both the host cluster and every participating tenant cluster run Kilo with `--mesh-granularity=cross`. In this mode every node is a topology segment of one. Within a single logical location (e.g. all nodes inside one cluster) traffic uses the underlying CNI without WireGuard. Across logical locations every node holds a direct WireGuard tunnel to every node in the other location.
Severity: medium

The proposal should address MTU configuration for the cross-cluster mesh. Since WireGuard adds encapsulation overhead (typically 60-80 bytes), packets from pods using the default 1500 MTU will exceed the tunnel MTU, leading to fragmentation or packet loss. The design should specify how this will be handled, for example, by configuring the Kilo interface MTU and ensuring MSS clamping is active or by adjusting the CNI MTU in the tenant clusters.
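A sketch of how this could look on the Kilo side, assuming kg's --mtu flag is used and the tenant CNI MTU is lowered to match (values illustrative; the exact headroom depends on the underlay):

```yaml
# Excerpt of the Kilo DaemonSet container args (both host and tenant clusters).
args:
  - --mesh-granularity=cross   # per squat/kilo#328
  - --mtu=1420                 # leave ~80 bytes of headroom under a 1500-byte underlay
# TCP MSS clamping on the nodes (e.g. iptables TCPMSS --clamp-mss-to-pmtu) or a matching
# CNI MTU in the tenant clusters would cover the remaining paths; that sits outside this manifest.
```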



For the host ↔ tenant pair, the result is a full bipartite mesh: every tenant node has a tunnel to every host node, and vice versa. The number of tunnels is `N × M` where N is the tenant node count and M is the host node count; this is intentional and is what enables the throughput and HA properties described below.
Severity: medium

The N x M bipartite mesh topology may face scalability challenges as the number of nodes increases. For instance, a 100-node host cluster and a 100-node tenant cluster would result in 10,000 WireGuard peers per node. The proposal should include an analysis of the practical limits for the number of peers the kg-agent and the Linux kernel can manage before performance or control-plane stability is impacted.

For each `TenantMeshLink`, the operator:

1. Validates `spec.podCIDR` against all other `TenantMeshLink` objects and the host cluster's pod-CIDR; any overlap sets `PodCIDRConflict=True` and aborts further reconciliation for that tenant.
2. Lists host cluster Nodes; for each node, ensures a `Peer` exists in the tenant cluster with: `publicKey` from the `kilo.squat.ai/wireguard-public-key` annotation, `endpoint` from `kilo.squat.ai/force-endpoint`, and `allowedIPs` containing the node's per-node pod-CIDR.
Severity: medium

The operator should have a fallback strategy if the kilo.squat.ai/force-endpoint annotation is missing on a host node. Without a defined endpoint, tenant nodes will not be able to initiate the WireGuard handshake. Consider falling back to the Node's ExternalIP or InternalIP, or surfacing a specific error in the TenantMeshLink status.

Suggested change (step 2, with the fallback added):
2. Lists host cluster Nodes; for each node, ensures a `Peer` exists in the tenant cluster with: `publicKey` from the `kilo.squat.ai/wireguard-public-key` annotation, `endpoint` from `kilo.squat.ai/force-endpoint` (falling back to a Node IP if missing), and `allowedIPs` containing the node's per-node pod-CIDR.
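Roughly what the tenant-side Peer for one host node would look like under step 2, with the reviewer's fallback applied (values are illustrative; the endpoint sub-fields follow recent Kilo Peer CRD versions and should be checked against the installed schema, and the cleanup label key is hypothetical):

```yaml
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
  name: host-node-1
  labels:
    meshlink.cozystack.io/link: tenant-foo    # hypothetical label used for orphan cleanup
spec:
  publicKey: "<value of kilo.squat.ai/wireguard-public-key on the host Node>"
  endpoint:                                    # from kilo.squat.ai/force-endpoint, or a Node IP as fallback
    dnsOrIP:
      ip: 192.0.2.10
    port: 51820
  allowedIPs:
    - 10.244.3.0/24                            # that host node's per-node pod-CIDR
```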

3. Lists tenant cluster Nodes; for each node, ensures a `Peer` exists in the host cluster with: `publicKey` from the tenant Node's annotation, `allowedIPs` containing the tenant per-node pod-CIDR, no `endpoint` (the tenant initiates).
4. Removes orphaned Peer objects on either side using a label selector tied to the `TenantMeshLink` name.
Severity: medium

To ensure that Peer objects created in the tenant cluster are reliably cleaned up when a TenantMeshLink is deleted, the operator should use Kubernetes finalizers. Without finalizers, if the operator is unavailable or the tenant API is unreachable during deletion, orphaned Peer resources will remain in the tenant cluster.

Suggested change (insert a new step before the cleanup step):
4. Uses finalizers to ensure all remote Peer objects are removed from the tenant cluster before the `TenantMeshLink` is deleted.
5. Removes orphaned Peer objects on either side using a label selector tied to the `TenantMeshLink` name.
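With that suggestion applied, the intent object would simply carry a finalizer, along these lines (finalizer name hypothetical):

```yaml
apiVersion: cozystack.io/v1alpha1        # hypothetical group/version, as in the earlier sketch
kind: TenantMeshLink
metadata:
  name: tenant-foo
  finalizers:
    - meshlink.cozystack.io/peer-cleanup  # removed only after the tenant-side Peers are gone
```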


The constraints on pod-CIDRs are:

- The host pod-CIDR and every tenant pod-CIDR must be pairwise disjoint.
Severity: medium

The disjointness requirement should be extended to include the Service CIDRs of both clusters. Overlaps between a tenant's pod-CIDR and the host's Service CIDR (or vice versa) can cause routing conflicts, making it impossible for pods to reach internal services or the advertised host services.

Suggested change:
- The host pod-CIDR, host service-CIDR, and every tenant pod-CIDR must be pairwise disjoint.
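Under the extended rule, a valid address plan is one in which all of the CIDRs below are pairwise disjoint (all subnets illustrative):

```yaml
# Illustrative non-overlapping plan across pod and service CIDRs.
host:
  podCIDR: 10.244.0.0/16
  serviceCIDR: 10.96.0.0/16
tenants:
  tenant-a:
    podCIDR: 10.245.0.0/16
  tenant-b:
    podCIDR: 10.246.0.0/16
```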

kvaps and others added 4 commits May 5, 2026 19:24
Adjust the proposal to reflect that the controller will be developed as
an independent project under the kilo-io organization, per confirmed
interest from Kilo maintainer @squat. Generalize the CRD from a
tenant-specific TenantMeshLink to a tenant-agnostic ClusterMesh that
references peer clusters through a map of kubeconfig Secrets. Move all
tenant semantics into a dedicated Cozystack integration section that
also accounts for the kubernetes-nodes split (PR cozystack#8) so a single
ClusterMesh covers multi-location, multi-backend tenants.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
…oller allowlist + RBAC monopoly

Drop the planned admission webhook. Instead, harden the design with two
controls owned by the host-cluster operator:

- The controller is the only principal with write access to
  kilo.squat.ai/Peer in any participating cluster. Tenant-provisioning,
  the dashboard, and cluster admins can author ClusterMesh objects
  (intent) but never touch Peer directly.
- The controller is configured at deploy time with a subnet allowlist
  (--allowed-cidr). Any ClusterMesh whose allowedNetworks fall outside
  that list is rejected with a status condition before any Peer is
  written. The allowlist cannot be widened through the ClusterMesh API.

Collapse the per-cluster podCIDR + advertise fields into a single
allowedNetworks list, since both are now validated against the same
allowlist and can be expressed uniformly.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
…ontainment

Make the WG-IP threat model explicit. A tenant root that tampers with a
Node's kilo.squat.ai/wireguard-ip annotation must not be able to inject
a Peer with attacker-chosen allowedIPs onto the host side. Add:

- A second controller-level allowlist, --allowed-wireguard-cidr, that
  bounds where any kilo0 interface in the mesh may live. spec.clusters
  carries no WG-CIDR field; the WG address space is host-admin-owned
  infrastructure, not part of per-mesh data.
- Per-Node validation alongside the existing mesh-level checks: WG-IP
  must be /32 (or /128), in --allowed-wireguard-cidr, and unique within
  its cluster. PodCIDRs must be in allowedNetworks. Failures skip the
  offending Node only; the mesh stays Ready.
- A primary-boundary statement in Security: the host's exposure to a
  tenant peer is bounded exclusively by the host-side Peer.allowedIPs,
  so anything the tenant does to its own kilo0, routes, or kg-agent
  post-reconcile cannot widen that bound.
- Cozystack integration spelled out for both allowlists: pod-pool to
  --allowed-cidr, WG-pool to --allowed-wireguard-cidr; tenant
  provisioning allocates from each.

WG-IP is now restored to Peer.allowedIPs (standard Kilo Peer shape),
since the new allowlist makes that safe and it brings cross-cluster
diagnostics back.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
…Networks list

Drop the second --allowed-wireguard-cidr allowlist. WG-CIDR is just
another entry in the same allowedNetworks list as pod-CIDR and
service-CIDR; per-Node WG-IP containment is validated against the
cluster's own allowedNetworks rather than against a separate global
pool. A tenant root cannot widen its surface to host pod/WG/service-CIDR
because those CIDRs live in the host's allowedNetworks (a different
spec.clusters entry), and per-Node containment rejects out-of-range
annotations.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

- Pods in any peer cluster can reach selected services in another peer cluster as if they were on the local network. (Cozystack use case: tenant pods reach host Ceph monitors, OSDs and MDS daemons.)
- Nodes added to or removed from any participating cluster are wired into / detached from the mesh automatically, without per-node manual configuration.
- A compromise of a peer cluster (up to and including full root on a peer node) cannot affect routing in another peer cluster beyond the network surface that was explicitly granted, and cannot affect unrelated peers.

unless it is the cluster running the controller, in which case I guess they do get perms on peer clusters, but that's not new

# The controller's own cluster — no kubeconfig needed.
local: true
allowedNetworks:
- 10.4.0.0/16 # WG-CIDR

Should any of these be named fields in the struct rather than open entries in allowedNetworks? If the WireGuard mesh CIDR and the pod-CIDR are mandatory, maybe they deserve special treatment. Alternatively, the open list can easily be migrated to the stricter design later.


I guess these are all technically optional and it just determines which networks from the Peer resources we want to honor / validate
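For illustration, the two shapes being weighed in this thread look roughly like this (the field names in option B are hypothetical):

```yaml
# Option A: open list (current draft) — every network is just an entry.
clusters:
  cluster-a:
    local: true
    allowedNetworks:
      - 10.4.0.0/16     # WG-CIDR
      - 10.244.0.0/16   # pod-CIDR
      - 10.96.0.0/16    # service-CIDR

# Option B: dedicated fields for the "special" networks, open list for the rest.
clusters:
  cluster-a:
    local: true
    wireguardCIDR: 10.4.0.0/16
    podCIDR: 10.244.0.0/16
    extraNetworks:
      - 10.96.0.0/16
```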


**Mesh-level (halts reconciliation on failure):**

1. Every CIDR in every `spec.clusters[*].allowedNetworks` is a subset of the controller's `--allowed-cidr` allowlist; otherwise `NetworksNotAllowed=True`.

This is kind of annoying, it means that there is functionally no difference between the cluster admin and the mesh admin. If you want to create a new mesh, then you have to edit the mesh controller deployment to add the allow list. I need to think about this a bit. What are we hoping to defend against here?
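Concretely, the coupling being questioned is that the allowlist lives in the controller Deployment rather than in any API object, roughly as follows (flag shape per the proposal; whether it repeats or takes a comma-separated list is an implementation detail):

```yaml
# Controller Deployment args: onboarding a mesh whose networks fall outside this
# list means editing the Deployment, not just authoring a ClusterMesh object.
args:
  - --allowed-cidr=10.244.0.0/14   # pod-CIDR pool (illustrative)
  - --allowed-cidr=10.4.0.0/16     # WG-CIDR pool (illustrative)
```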

  namespace: kilo
spec:
  clusters:
    cluster-a:

Maybe in keeping with Kubernetes convention this should become a list of named structs, like how a Pod contains a list of named containers.
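In other words, switching from a map keyed by cluster name to a list of named entries, the same way a Pod holds a list of named containers (sketch; both forms carry the same data):

```yaml
# Map form (current draft):
spec:
  clusters:
    cluster-a:
      allowedNetworks:
        - 10.244.0.0/16

# List-of-named-structs form (Kubernetes convention):
spec:
  clusters:
    - name: cluster-a
      allowedNetworks:
        - 10.244.0.0/16
```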

- **`--allowed-cidr` allowlist** bounds what `spec.clusters[*].allowedNetworks` can ever declare. Pod-CIDRs, WG-CIDRs, and service-CIDRs all flow through the same allowlist. A user who can author `ClusterMesh` objects cannot widen the address surface beyond what the host admin pre-approved.
- **Per-Node containment** validates that every observed annotation (`Node.Spec.PodCIDRs`, `kilo.squat.ai/wireguard-ip`) lies within the cluster's own `allowedNetworks`. A tenant root forging an annotation that points at the host pod-CIDR, host WG-CIDR, or any other CIDR the tenant did not declare itself is rejected — the offending Node is skipped and never appears as a Peer on the host side.
- **Trust direction by kubeconfig placement.** Whichever cluster holds the controller and the kubeconfig Secrets is the side that drives writes; the side whose kubeconfig is held cannot write back. In Cozystack, only the host holds tenant kubeconfigs — trust flows host → tenant.
- **Cross-mesh isolation.** Each `ClusterMesh`'s Peers are labelled with the mesh name; the controller never deletes or modifies Peers belonging to a different mesh, and `allowedNetworks` overlap between meshes (not just within a single mesh) is rejected.

We should probably also add labels for the source cluster name, so that if two controllers running on different hosts are managing meshes on the same tenant (some triangle where the two hosts don't know about each other), they are less likely to compete for ownership of Peers when the mesh objects share a name.
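That would mean something along these lines on every managed Peer, with a second label identifying the writing host (label keys hypothetical):

```yaml
metadata:
  labels:
    clustermesh.kilo.squat.ai/mesh: production-mesh      # hypothetical: owning ClusterMesh name
    clustermesh.kilo.squat.ai/source-cluster: host-eu-1  # hypothetical: which controller/host wrote this Peer
```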

2. **Cluster identifier scope**: should `spec.clusters` keys be free-form strings or follow a stricter schema (e.g. DNS-1123 labels) so they can be reused as label values? Likely the latter; to confirm during implementation.
3. **Transitive routing**: with three or more clusters in the same `ClusterMesh`, the controller currently builds a full mesh. Should it support partial topologies (e.g. star)? Out of scope for v1; the CRD shape allows it later.
4. **Multi-controller scenarios**: in a deployment where two clusters each run their own controller, how should they coordinate? Likely via a "leader" cluster identified in the CRD; deferred.
5. **Per-peer opt-in for received CIDRs**: today `allowedNetworks` is a unilateral declaration on the source side, plus a global allowlist on the controller. Should there additionally be a per-peer `acceptedNetworks` field, so a peer can refuse to accept some of what another peer publishes? Likely unnecessary given the controller-level allowlist, but worth revisiting once there are multi-tenant deployments with heterogeneous policies.

The more I read about the controller allowlist, the more I find myself leaning in this direction. Maybe this needs to be a flag on Kilo, actually (or an entirely new PeerClass resource that declares what allowed IPs are permissible for every Peer in a cluster). This would allow individual clusters to guard against Peers being created by a rogue cluster-mesh controller. It's not blocking: this is orthogonal Kilo work that would be great to upstream to improve the administration of Kilo meshes.
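Purely as a sketch of that idea (no such resource exists in Kilo today; every name and field here is hypothetical):

```yaml
# Hypothetical per-cluster guard on what any Peer may declare.
apiVersion: kilo.squat.ai/v1alpha1
kind: PeerClass
metadata:
  name: default
spec:
  # A Peer whose allowedIPs fall outside these ranges would be rejected by the
  # cluster itself, independently of any cluster-mesh controller.
  permittedAllowedIPs:
    - 10.245.0.0/16
    - 10.4.0.0/24
```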
