Research Readout · April 2026

Cilium Network Policies
Deep Dive

eBPF-powered, identity-aware network security for Kubernetes — from L3 to L7, from label selectors to DNS rules.

🐝 eBPF ☸️ Kubernetes CNI 🔐 Zero Trust 🌐 L3 / L4 / L7 🏷️ Identity-Based 🔭 Hubble
🐝

What is Cilium?

Cilium is an open-source CNCF project that provides networking, security, and observability for cloud-native workloads. Its superpower is eBPF (extended Berkeley Packet Filter) — programs injected directly into the Linux kernel that intercept and process network packets at blazing speed without kernel modifications.

eBPF-Powered

Enforces policies directly in the kernel's network stack — no iptables, no kube-proxy overhead. Dramatically lower latency and CPU overhead.

🏷️

Identity-Based Security

Tracks endpoints by a numeric identity derived from their labels, not their IP. Survives pod restarts and IP churn in dynamic clusters.

🔭

Hubble Observability

Real-time network flow logs, metrics, and tracing via Hubble — deeply integrated with policy enforcement for easy troubleshooting.

🌐

L3 → L7 Enforcement

Enforces policies at all network layers — IP/CIDR (L3), port/protocol (L4), and application protocol (L7: HTTP, gRPC, Kafka, DNS).

☁️

Multi-Cloud Native

Used in EKS, GKE, AKS, and bare-metal clusters. Adopted by Adobe, Datadog, GitLab, Capital One, and thousands more.

🔗

ClusterMesh

Extends policy enforcement across multiple Kubernetes clusters with global service discovery and cross-cluster connectivity.

🔐

Why Cilium Network Policies?

ℹ️
Zero-Trust default: Without any policy selecting a pod, all pod-to-pod communication is allowed. As soon as a policy selects a pod, that pod becomes deny-by-default in each direction (ingress, egress) for which the policy defines rules.
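A minimal sketch of this behavior (all names hypothetical): once the policy below selects pods labeled app=checkout, any ingress not matched by its single allow rule is dropped — no explicit deny rule is needed.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: isolate-checkout        # hypothetical policy/workload names
  namespace: shop
spec:
  endpointSelector:
    matchLabels:
      app: checkout
  ingress:
  - fromEndpoints:              # the only allowed ingress peer;
    - matchLabels:              # every other source is now denied
        app: frontend
```

Note that egress from these pods stays unrestricted, because the policy defines no egress rules.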
⚖️

CiliumNetworkPolicy vs. Kubernetes NetworkPolicy

💡
Both are supported simultaneously. Cilium fully implements the standard networking.k8s.io/v1 NetworkPolicy API and also offers its own cilium.io/v2 CiliumNetworkPolicy CRD for extended capabilities.
Feature | K8s NetworkPolicy | CiliumNetworkPolicy
L3 — CIDR / IP block rules | ✔ Yes | ✔ Yes (richer)
L4 — Port / Protocol | ✔ Yes | ✔ Yes, plus port ranges
L7 — HTTP / gRPC / Kafka / DNS | ✘ No | ✔ Yes
Label-based endpoint selector | ✔ podSelector | ✔ endpointSelector
Namespace selector | ✔ Yes | ✔ Yes
Entity-based rules (world, host…) | ✘ No | ✔ Yes
DNS-aware egress rules | ✘ No | ✔ toFQDNs
Deny rules (explicit deny) | ✘ No (implicit only) | ✔ ingressDeny / egressDeny
Cluster-wide (non-namespaced) | ✘ No | ✔ CiliumClusterwideNetworkPolicy
Node-level policies | ✘ No | ✔ nodeSelector on CCNP
Required label matching | ✘ No | ✔ fromRequires / toRequires
TLS SNI awareness | ✘ No | ✔ Partial
Wildcard port matching | ✘ No | ✔ Yes
Portability (non-Cilium CNI) | ✔ Portable | ✘ Cilium only
🏗️

Architecture & Enforcement Flow

Cilium's policy engine operates across the kernel and userspace, using eBPF maps to share policy state efficiently between the Cilium agent and the kernel data plane.

Packet Enforcement Flow
📦
Source
Pod / Ext
eBPF Hook
egress policy
🧠
Kernel
eBPF Maps
eBPF Hook
ingress policy
🎯
Destination
Pod
🤖

Cilium Agent (DaemonSet)

Runs on every node. Watches Kubernetes API for NetworkPolicy / CiliumNetworkPolicy objects, computes endpoint identities, and programs eBPF maps.

🗄️

eBPF Maps (Kernel)

Hash-map structures in the kernel. Contain policy rules keyed by identity. Updated atomically by the agent; the data plane reads them without syscall overhead.

🏷️

Endpoint Identity

Each pod gets a numeric identity allocated for its unique set of security-relevant labels (e.g., k8s:app=frontend); pods with identical labels share one identity. Identities are shared across nodes via distributed state (kvstore or CRDs).

🔒

Envoy Proxy (L7)

For L7 policies, Cilium redirects matching traffic to a managed, node-local Envoy proxy, which parses the application protocol and applies HTTP/gRPC/Kafka rules before forwarding.

⚠️
Policy enforcement modes: Cilium supports three per-endpoint enforcement modes — default (an endpoint allows all traffic until a policy selects it), always (default-deny for every endpoint, even ones no policy selects), and never (enforcement disabled). The always mode enforces zero trust cluster-wide.
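As a sketch, the mode can be set cluster-wide at install time; the fragment below assumes the Cilium Helm chart's policyEnforcementMode value.

```yaml
# Helm values fragment (assumes the chart exposes policyEnforcementMode)
policyEnforcementMode: "always"   # one of: default | always | never
```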
🔬

Policy Layers: L3 / L4 / L7

L3 — Network

Identity & CIDR Rules

L3 policies control who can communicate with a pod based on endpoint identity (label selectors), CIDR ranges, or named entities.

SELECTORS
fromEndpoints · toEndpoints · fromRequires · toRequires · fromCIDR · toCIDR · fromCIDRSet · toCIDRSet
ENTITIES
world · cluster · host · remote-node · kube-apiserver · init · health · all
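A short entity-based sketch (workload names hypothetical): allowing ingress only from the local node and other cluster nodes, e.g. for a metrics exporter scraped from the host network.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-from-nodes        # hypothetical
  namespace: monitoring
spec:
  endpointSelector:
    matchLabels:
      app: node-exporter        # hypothetical workload
  ingress:
  - fromEntities:               # named entities instead of labels/CIDRs
    - host
    - remote-node
```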
L4 — Transport

Port & Protocol Rules

L4 policies refine how traffic is allowed by restricting specific ports and protocols (TCP, UDP, SCTP). Cilium extends standard K8s with port ranges.

port number · port ranges (e.g. 8000-9000) · TCP · UDP · SCTP · ANY (wildcard)
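Port ranges can be sketched with the endPort field (available in recent Cilium releases; workload names hypothetical):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-media-ports       # hypothetical
spec:
  endpointSelector:
    matchLabels:
      app: media-server         # hypothetical
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: transcoder         # hypothetical
    toPorts:
    - ports:
      - port: "8000"
        endPort: 9000           # allows the whole 8000-9000 range
        protocol: UDP
```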
L7 — Application

Protocol-Aware Rules

L7 policies are defined inside a toPorts[].rules block. Cilium directs matching traffic through Envoy for deep inspection.

🌐
HTTP
Match on method, path (regex), host header, and arbitrary headers.
gRPC
Allow specific gRPC services and methods, expressed as HTTP rules matching the /Service/Method request path.
📨
Kafka
Filter by apiKey (Produce, Fetch, etc.) and topic name patterns.
🔍
DNS (toFQDNs)
Allow egress to FQDNs by matchName or matchPattern glob. Resolves to IPs dynamically.
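Since gRPC calls are HTTP/2 POSTs to /Package.Service/Method paths, gRPC enforcement is expressed with HTTP rules. A sketch (service, method, and label names hypothetical):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: grpc-checkout-only      # hypothetical
spec:
  endpointSelector:
    matchLabels:
      app: checkout-grpc        # hypothetical
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: storefront        # hypothetical
    toPorts:
    - ports:
      - port: "50051"
        protocol: TCP
      rules:
        http:                   # gRPC call = HTTP/2 POST to /Service/Method
        - method: "POST"
          path: "/shop.CheckoutService/PlaceOrder"   # hypothetical
```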
🏷️

Selectors & Entities Reference

Selector Field | Scope | Description
endpointSelector | Policy target | Selects which pods this policy applies to (replaces podSelector). Empty = select all.
fromEndpoints | Ingress L3 | Pods matching the given labels are allowed to send ingress traffic.
toEndpoints | Egress L3 | Pods matching the given labels are allowed to receive egress traffic.
fromRequires | Ingress L3 | Requires that inbound traffic also matches these labels (logical AND with fromEndpoints).
fromCIDR / toCIDR | L3 IP | Allows traffic from/to specific CIDR blocks (IPv4 and IPv6).
fromCIDRSet / toCIDRSet | L3 IP | CIDR block with exceptions (except field to carve out sub-ranges).
fromEntities | Ingress Entity | Named logical groups: world, cluster, host, remote-node, kube-apiserver, init, health.
toEntities | Egress Entity | Same named entity groups for the egress direction.
toFQDNs | Egress DNS | Allow egress to DNS names (matchName exact, matchPattern glob like *.example.com).
toServices | Egress Svc | Allow egress to a named Kubernetes Service (resolves ClusterIP).
toPorts[].rules | L7 | HTTP / Kafka / DNS L7 rules nested under a port selector.
Named Entities Explained
world — Any IP outside the cluster
cluster — All endpoints inside the cluster
host — The node's host network namespace
remote-node — Other cluster nodes (not the local one)
kube-apiserver — The Kubernetes API server entity
init — Endpoints still bootstrapping, before an identity is assigned
health — Cilium health-check endpoints
all — Matches everything (use with care)
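The fromRequires selector deserves a sketch of its own: it ANDs an extra label requirement onto every other ingress rule, which makes it useful for environment isolation (labels hypothetical).

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: require-prod-peers      # hypothetical
spec:
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
  - fromRequires:               # any ingress peer must ALSO carry env=prod,
    - matchLabels:              # regardless of what other policies allow
        env: prod
```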
📋

YAML Examples

💡
Label-based L3/L4 allow: Allows only pods with app=frontend to reach backend pods on TCP 8080.
allow-frontend-to-backend.yamlYAML
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
💡
L7 HTTP policy: Allows GET requests matching /api/.* and POST /api/submit (when an X-App-Token header is present) from pods with role=api-consumer. All other HTTP methods and paths are denied.
http-l7-policy.yamlYAML
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-http-filter
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-server
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: api-consumer
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/.*"
        - method: POST
          path: "/api/submit"
          headers:
          - "X-App-Token: .*"
💡
DNS egress control: Allows pods to reach AWS S3 and a specific API endpoint by FQDN. All other internet egress is blocked.
dns-egress-policy.yamlYAML
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-s3-egress
  namespace: data-pipeline
spec:
  endpointSelector:
    matchLabels:
      app: uploader
  egress:
  - # Allow DNS resolution first (port 53)
    toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*.amazonaws.com"
        - matchName: "api.example.com"
  - # Then allow HTTPS to those resolved IPs
    toFQDNs:
    - matchPattern: "*.amazonaws.com"
    - matchName: "api.example.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
⚠️
Explicit deny rules: Cilium supports ingressDeny and egressDeny. Deny rules always take precedence over allow rules. Useful for carving out exceptions within broader allow policies.
deny-dev-access.yamlYAML
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deny-dev-to-prod-db
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: database
      env: production
  ingress:
  - # Allow all cluster traffic on 5432
    fromEntities:
    - cluster
    toPorts:
    - ports:
      - port: "5432"
        protocol: TCP
  ingressDeny:
  - # But DENY anything from dev namespace
    fromEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": development
💡
Kafka L7 policy: Allows the payments service to produce only to the payments topic. Consumers are granted Fetch access to specific topics.
kafka-policy.yamlYAML
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: kafka-topic-control
  namespace: messaging
spec:
  endpointSelector:
    matchLabels:
      app: kafka-broker
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: producer
    toPorts:
    - ports:
      - port: "9092"
        protocol: TCP
      rules:
        kafka:
        - apiKey: produce
          topic: payments
  - fromEndpoints:
    - matchLabels:
        role: consumer
    toPorts:
    - ports:
      - port: "9092"
        protocol: TCP
      rules:
        kafka:
        - apiKey: fetch
          topic: payments
        - apiKey: fetch
          topic: notifications
ℹ️
CiliumClusterwideNetworkPolicy (CCNP): A cluster-scoped (non-namespaced) resource. Can target nodes using nodeSelector. Ideal for enforcing baseline security posture across all namespaces.
clusterwide-baseline.yamlYAML
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: baseline-deny-world-egress
spec:
  # Applies to ALL pods cluster-wide
  endpointSelector: {}
  egressDeny:
  - # Block all internet egress by default
    toEntities:
    - world
---
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-kube-apiserver
spec:
  # Allow all pods to reach kube-apiserver
  endpointSelector: {}
  egress:
  - toEntities:
    - kube-apiserver
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP

Best Practices

🚫

Start with a Deny-All Baseline

Use a CiliumClusterwideNetworkPolicy with an empty endpointSelector and an egressDeny rule with toEntities: world to block all internet access by default, then explicitly allow required FQDNs.

🏷️

Label Your Workloads Consistently

Meaningful labels like app, role, tier, and env power selector-based policies. Establish a labeling convention before adopting network policies.
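As a sketch of such a convention (all names hypothetical), the pod-template labels below are exactly what endpointSelector and fromEndpoints match against:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api            # hypothetical
spec:
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api       # primary policy selector
        role: api               # traffic role for coarse rules
        tier: backend
        env: prod               # pairs well with fromRequires
    spec:
      containers:
      - name: payments-api
        image: registry.example.com/payments-api:1.0   # hypothetical
```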

📡

Always Allow DNS First

L7 DNS policies require pods to be able to reach kube-dns on port 53. Add a toEndpoints rule for the DNS service before applying toFQDNs rules.

🔭

Use Hubble to Debug Policy

Run hubble observe --namespace <ns> --verdict DROPPED to see which policy is dropping traffic. Hubble shows the source identity, destination, and matched policy name.

🎯

Layer L3 + L7 Together

Always combine fromEndpoints with toPorts.rules. An L7 rule without an L3 selector would allow all sources to reach the port, only filtering at the application layer.

🧪

Test in Audit Mode First

Cilium supports a policy audit mode (configurable per endpoint or cluster-wide) that logs-but-doesn't-drop policy violations, letting you validate policies before switching to full enforcement.

🔒

Use Deny Rules Sparingly

Explicit ingressDeny/egressDeny override allow rules. Use them for carving out exceptions (e.g., block dev namespace from prod DB despite a broader cluster allow).

📊

Monitor Identity Counts

Cilium assigns one identity per unique label-set. With many unique pods, watch identity cardinality via cilium identity list and consolidate labels to reduce overhead.

🚨
Common Gotcha: If you apply a CiliumNetworkPolicy to a pod with only ingress rules, egress from that pod is still unrestricted unless you also define egress rules or a separate egress policy. Both directions must be explicitly controlled.
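To lock down both directions, a single policy can carry both rule lists (names hypothetical):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: worker-both-directions  # hypothetical
spec:
  endpointSelector:
    matchLabels:
      app: worker               # hypothetical
  ingress:                      # default-deny ingress, except scheduler
  - fromEndpoints:
    - matchLabels:
        app: scheduler
  egress:                       # default-deny egress, except queue
  - toEndpoints:
    - matchLabels:
        app: queue
```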
📚

Quick Reference — CRD Schema

CiliumNetworkPolicy — Top-Level StructureYAML
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy         # or CiliumClusterwideNetworkPolicy
metadata:
  name: <policy-name>
  namespace: <namespace>             # omit for CiliumClusterwideNetworkPolicy
spec:
  endpointSelector:               # required — which pods this policy targets
    matchLabels: {}
    matchExpressions: []

  ingress:                         # list of ingress allow rules
  - fromEndpoints: []             # label-based endpoint selectors
    fromRequires: []              # mandatory additional labels (AND logic)
    fromCIDR: []                  # CIDR blocks
    fromCIDRSet: []               # CIDR with exceptions
    fromEntities: []              # world / cluster / host / etc.
    toPorts:                      # port + optional L7 rules
    - ports: []
      rules:
        http: []                  # method, path, headers
        kafka: []                 # apiKey, topic
        dns: []                   # matchName, matchPattern

  ingressDeny:                     # explicit deny rules (override allows)
  - fromEndpoints: []

  egress:                          # list of egress allow rules
  - toEndpoints: []
    toCIDR: []
    toCIDRSet: []
    toEntities: []
    toFQDNs: []                   # matchName / matchPattern
    toServices: []                # Kubernetes Service reference
    toPorts: []

  egressDeny:                      # explicit egress deny
  - toEntities: []