eBPF-powered, identity-aware network security for Kubernetes — from L3 to L7, from label selectors to DNS rules.
Cilium is an open-source CNCF project that provides networking, security, and observability for cloud-native workloads. Its superpower is eBPF (extended Berkeley Packet Filter) — programs injected directly into the Linux kernel that intercept and process network packets at blazing speed without kernel modifications.
Enforces policies directly in the kernel's network stack — no iptables, no kube-proxy overhead. Dramatically lower latency and CPU overhead.
Tracks endpoints by a numeric identity derived from their labels, not their IP. Survives pod restarts and IP churn in dynamic clusters.
Real-time network flow logs, metrics, and tracing via Hubble — deeply integrated with policy enforcement for easy troubleshooting.
Enforces policies at all network layers — IP/CIDR (L3), port/protocol (L4), and application protocol (L7: HTTP, gRPC, Kafka, DNS).
Used in EKS, GKE, AKS, and bare-metal clusters. Adopted by Adobe, Datadog, GitLab, Capital One, and thousands more.
Extends policy enforcement across multiple Kubernetes clusters with global service discovery and cross-cluster connectivity.
L7 policies can, for example, deny DELETE but allow GET; restrict Kafka produce to specific topics; or filter DNS queries. Entity-based rules distinguish the public internet (world), other nodes (remote-node), or only intra-cluster peers. The union of all matching allow rules forms the effective policy.

Cilium fully implements the standard networking.k8s.io/v1 NetworkPolicy API and also offers its own cilium.io/v2 CiliumNetworkPolicy CRD for extended capabilities.

| Feature | K8s NetworkPolicy | CiliumNetworkPolicy |
|---|---|---|
| L3 — CIDR / IP block rules | ✔ Yes | ✔ Yes (richer) |
| L4 — Port / Protocol | ✔ Yes | ✔ Yes + port ranges |
| L7 — HTTP / gRPC / Kafka / DNS | ✘ No | ✔ Yes |
| Label-based endpoint selector | ✔ podSelector | ✔ endpointSelector |
| Namespace selector | ✔ Yes | ✔ Yes |
| Entity-based rules (world, host…) | ✘ No | ✔ Yes |
| DNS-aware egress rules | ✘ No | ✔ toFQDNs |
| Deny rules (explicit deny) | ✘ No (implicit only) | ✔ ingressDeny / egressDeny |
| Cluster-wide (non-namespaced) | ✘ No | ✔ CiliumClusterwideNetworkPolicy |
| Node-level policies | ✘ No | ✔ nodeSelector on CCNP |
| Required label matching | ✘ No | ✔ requires / requiresV2 |
| TLS/SNI awareness | ✘ No | ✔ Partial (via Hubble) |
| Wildcard port matching | ✘ No | ✔ Yes |
| Portability (non-Cilium CNI) | ✔ Portable | ✘ Cilium only |
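To ground the comparison, here is what the portable side of the table looks like in practice: a minimal standard networking.k8s.io/v1 NetworkPolicy (policy name is illustrative) allowing frontend pods to reach backend pods on TCP 8080. It works on any CNI that implements the API, but cannot express L7 rules, entities, FQDNs, or explicit denies.

```yaml
# Standard Kubernetes NetworkPolicy: portable across CNIs,
# but limited to L3/L4 allow rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative name
  namespace: production
spec:
  podSelector:            # K8s uses podSelector; Cilium uses endpointSelector
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```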
Cilium's policy engine operates across the kernel and userspace, using eBPF maps to share policy state efficiently between the Cilium agent and the kernel data plane.
Cilium agent: runs on every node. Watches the Kubernetes API for NetworkPolicy / CiliumNetworkPolicy objects, computes endpoint identities, and programs eBPF maps.
eBPF policy maps: hash-map structures in the kernel containing policy rules keyed by identity. Updated atomically by the agent; the data plane reads them without syscall overhead.
Security identities: each pod gets a numeric identity allocated for its unique set of security-relevant labels (e.g., k8s:app=frontend). Identities are shared across nodes via distributed state (kvstore or CRDs).
Envoy proxy: for L7 policies, Cilium spawns a managed Envoy proxy instance that performs deep packet inspection and applies HTTP/gRPC/Kafka rules before forwarding.
Cilium offers per-endpoint policy enforcement modes: default (allow-all until a policy matches), always (strict deny-all), and never. The always mode enforces zero-trust even if no policy explicitly selects the pod.

L3 policies control who can communicate with a pod based on endpoint identity (label selectors), CIDR ranges, or named entities.
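A hedged sketch of switching the agent to strict enforcement via Helm values. The policyEnforcementMode key reflects my reading of the Cilium Helm chart; verify the key name and accepted values against your chart version.

```yaml
# values.yaml fragment for the Cilium Helm chart (sketch; key name
# assumed from the chart's policy settings -- verify for your version).
policyEnforcementMode: "always"   # default | always | never
```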
L4 policies refine how traffic is allowed by restricting specific ports and protocols (TCP, UDP, SCTP). Cilium extends standard K8s with port ranges.
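As a sketch of the port-range extension: newer Cilium releases accept an endPort field alongside port (labels and policy name here are illustrative; confirm your Cilium version supports port ranges before relying on it).

```yaml
# Sketch: allow TCP 8080-8090 from monitoring pods.
# endPort is a newer Cilium extension to toPorts.ports.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-metrics-range   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: metrics-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: monitoring
      toPorts:
        - ports:
            - port: "8080"
              endPort: 8090
              protocol: TCP
```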
L7 policies are defined inside a toPorts[].rules block. Cilium directs matching traffic through Envoy for deep inspection.
HTTP rules match on method, path (regex), host header, and arbitrary headers. gRPC rules match serviceName and methodName pairs in gRPC services. Kafka rules match apiKey (Produce, Fetch, etc.) and topic name patterns. DNS rules match an exact matchName or a matchPattern glob, and resolve to IPs dynamically.

| Selector Field | Scope | Description |
|---|---|---|
| endpointSelector | Policy target | Selects which pods this policy applies to (replaces podSelector). Empty = select all. |
| fromEndpoints | Ingress L3 | Pods matching the given labels are allowed to send ingress traffic. |
| toEndpoints | Egress L3 | Pods matching the given labels are allowed to receive egress traffic. |
| fromRequires | Ingress L3 | Requires that inbound traffic also matches these labels (logical AND with fromEndpoints). |
| fromCIDR / toCIDR | L3 IP | Allows traffic from/to specific CIDR blocks (IPv4 and IPv6). |
| fromCIDRSet / toCIDRSet | L3 IP | CIDR block with exceptions (except field to carve out sub-ranges). |
| fromEntities | Ingress Entity | Named logical groups: world, cluster, host, remote-node, kube-apiserver, init, health. |
| toEntities | Egress Entity | Same named entity groups for egress direction. |
| toFQDNs | Egress DNS | Allow egress to DNS names (matchName exact, matchPattern glob like *.example.com). |
| toServices | Egress Svc | Allow egress to a named Kubernetes Service (resolves ClusterIP). |
| toPorts[].rules | L7 | HTTP / Kafka / DNS L7 rules nested under a port selector. |
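As an illustration of the less common fromRequires selector, this sketch (labels and policy name are illustrative) pins an environment boundary: no matter what other policies allow, any peer talking to env=prod pods must itself carry env=prod.

```yaml
# Sketch: fromRequires adds a mandatory condition (logical AND)
# on top of whatever other allow rules exist.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: require-prod-label   # illustrative name
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
    - fromRequires:
        - matchLabels:
            env: prod
```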
Allow pods labeled app=frontend to reach backend pods on TCP 8080.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```
Allow GET requests to /api/.* paths (plus POST /api/submit carrying an X-App-Token header) from pods with role=api-consumer. All other HTTP methods and paths are denied.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-http-filter
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: api-consumer
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/.*"
              - method: POST
                path: "/api/submit"
                headers:
                  - "X-App-Token: .*"
```
DNS-gated egress: the uploader pods may resolve *.amazonaws.com and api.example.com via kube-dns, then open HTTPS connections to the resolved IPs.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-s3-egress
  namespace: data-pipeline
spec:
  endpointSelector:
    matchLabels:
      app: uploader
  egress:
    # Allow DNS resolution first (port 53)
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*.amazonaws.com"
              - matchName: "api.example.com"
    # Then allow HTTPS to those resolved IPs
    - toFQDNs:
        - matchPattern: "*.amazonaws.com"
        - matchName: "api.example.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```
Cilium adds explicit deny rules via ingressDeny and egressDeny. Deny rules always take precedence over allow rules. Useful for carving out exceptions within broader allow policies.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deny-dev-to-prod-db
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: database
      env: production
  ingress:
    # Allow all cluster traffic on 5432
    - fromEntities:
        - cluster
      toPorts:
        - ports:
            - port: "5432"
              protocol: TCP
  ingressDeny:
    # But DENY anything from the dev namespace
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": development
```
Producers may only publish to the payments topic. Consumers are granted Fetch access to specific topics.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: kafka-topic-control
  namespace: messaging
spec:
  endpointSelector:
    matchLabels:
      app: kafka-broker
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: producer
      toPorts:
        - ports:
            - port: "9092"
              protocol: TCP
          rules:
            kafka:
              - apiKey: produce
                topic: payments
    - fromEndpoints:
        - matchLabels:
            role: consumer
      toPorts:
        - ports:
            - port: "9092"
              protocol: TCP
          rules:
            kafka:
              - apiKey: fetch
                topic: payments
              - apiKey: fetch
                topic: notifications
```
CiliumClusterwideNetworkPolicy is non-namespaced and can also target nodes via nodeSelector. Ideal for enforcing a baseline security posture across all namespaces.

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: baseline-deny-world-egress
spec:
  # Applies to ALL pods cluster-wide
  endpointSelector: {}
  egressDeny:
    # Block all internet egress by default
    - toEntities:
        - world
---
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-kube-apiserver
spec:
  # Allow all pods to reach kube-apiserver
  endpointSelector: {}
  egress:
    - toEntities:
        - kube-apiserver
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```
Use a CiliumClusterwideNetworkPolicy with an empty endpointSelector and egressDeny: world to block all internet access by default, then explicitly allow required FQDNs.
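The allow side of that pattern might look like this sketch (policy name and domain are illustrative; a DNS-visibility rule toward kube-dns, as in the toFQDNs example above, is also needed so Cilium can learn which IPs the names resolve to):

```yaml
# Sketch: clusterwide exception to a default world-egress deny,
# permitting HTTPS only to an approved external domain.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-approved-fqdns   # illustrative name
spec:
  endpointSelector: {}
  egress:
    - toFQDNs:
        - matchPattern: "*.example.com"   # illustrative domain
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```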
Meaningful labels like app, role, tier, and env power selector-based policies. Establish a labeling convention before adopting network policies.
L7 DNS policies require pods to be able to reach kube-dns on port 53. Add a toEndpoints rule for the DNS service before applying toFQDNs rules.
Run hubble observe --namespace <ns> --verdict DROPPED to see which policy is dropping traffic. Hubble shows the source identity, destination, and matched policy name.
Always combine fromEndpoints with toPorts.rules. An L7 rule without an L3 selector would allow all sources to reach the port, only filtering at the application layer.
Cilium supports a policy audit mode (configured on the agent, or toggled per endpoint) that logs-but-doesn't-drop policy violations, allowing you to validate policies before switching to full enforcement.
Explicit ingressDeny/egressDeny override allow rules. Use them for carving out exceptions (e.g., block dev namespace from prod DB despite a broader cluster allow).
Cilium assigns one identity per unique label-set. With many unique pods, watch identity cardinality via cilium identity list and consolidate labels to reduce overhead.
If you apply a CiliumNetworkPolicy with only ingress rules to a pod, egress from that pod is still unrestricted unless you also define egress rules or a separate egress policy. Both directions must be explicitly controlled.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy     # or CiliumClusterwideNetworkPolicy
metadata:
  name: <policy-name>
  namespace: <namespace>      # omit for CiliumClusterwideNetworkPolicy
spec:
  endpointSelector:           # required — which pods this policy targets
    matchLabels: {}
    matchExpressions: []
  ingress:                    # list of ingress allow rules
    - fromEndpoints: []       # label-based endpoint selectors
      fromRequires: []        # mandatory additional labels (AND logic)
      fromCIDR: []            # CIDR blocks
      fromCIDRSet: []         # CIDR with exceptions
      fromEntities: []        # world / cluster / host / etc.
      toPorts:                # port + optional L7 rules
        - ports: []
          rules:
            http: []          # method, path, headers
            kafka: []         # apiKey, topic
            dns: []           # matchName, matchPattern
  ingressDeny:                # explicit deny rules (override allows)
    - fromEndpoints: []
  egress:                     # list of egress allow rules
    - toEndpoints: []
      toCIDR: []
      toCIDRSet: []
      toEntities: []
      toFQDNs: []             # matchName / matchPattern
      toServices: []          # Kubernetes Service reference
      toPorts: []
  egressDeny:                 # explicit egress deny
    - toEntities: []
```