Linux eBPF & XDP Networking Primer

A practical guide to BPF programs, XDP hooks, and kernel-bypass packet processing for network engineers.

1. What is eBPF?

eBPF (extended Berkeley Packet Filter) is a Linux kernel subsystem that lets you run sandboxed programs inside the kernel without modifying kernel source code or loading kernel modules. Programs are verified by a kernel bytecode verifier before execution, ensuring safety.

For networking, eBPF programs attach to hook points in the kernel's network stack and can inspect, modify, redirect, or drop packets. The key advantage over iptables or kernel modules is performance and programmability: eBPF programs are JIT-compiled to native code and can share state via maps (key-value stores shared between kernel and userspace).

| Hook | Location | Latency | Use Case |
|---|---|---|---|
| XDP | NIC driver, before sk_buff allocation | Lowest | DDoS drop, load balancing |
| tc ingress/egress | After sk_buff allocation | Low | Traffic shaping, marking, redirect |
| socket filter | Socket receive path | Medium | tcpdump-style filtering |
| kprobe/tracepoint | Kernel function entry/exit | Varies | Observability, tracing |

2. XDP Hook Points

XDP (eXpress Data Path) programs run at the earliest possible point in the network stack — inside the NIC driver, before the kernel allocates an sk_buff. This means:

  • Native XDP: Driver supports XDP natively (Intel i40e, Mellanox mlx5, etc.). Fastest — runs in driver context.
  • Generic XDP: Fallback for drivers without native support. Runs after sk_buff allocation — still faster than iptables but not as fast as native.
  • Offloaded XDP: Program runs on the NIC ASIC itself. Requires SmartNIC hardware (e.g. Netronome). Zero CPU cost.

An XDP program returns one of five verdicts:

| Return Code | Action |
|---|---|
| XDP_DROP | Drop packet immediately (lowest-latency discard) |
| XDP_PASS | Pass up to the normal network stack |
| XDP_TX | Transmit back out the same interface (bounce) |
| XDP_REDIRECT | Redirect to another interface or AF_XDP socket |
| XDP_ABORTED | Error path; drop with a trace event |

3. XDP Packet Drop Example

The following program drops UDP packets whose source IP appears in an eBPF map, allowing a userspace control plane to update the blocklist at runtime.

// xdp_drop_udp.c — Drop UDP from IPs in a BPF map
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>

// BPF map: src IP → drop flag (1 = drop)
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);    // source IPv4 address
    __type(value, __u32);  // 1 = block
} blocklist SEC(".maps");

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx) {
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    // Parse Ethernet header
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end) return XDP_PASS;
    if (eth->h_proto != __constant_htons(ETH_P_IP)) return XDP_PASS;

    // Parse IPv4 header
    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end) return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP) return XDP_PASS;

    // Check blocklist map
    __u32 src = ip->saddr;
    __u32 *val = bpf_map_lookup_elem(&blocklist, &src);
    if (val && *val == 1) return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Bounds checking is mandatory. The eBPF verifier rejects programs that could access memory beyond data_end, so every pointer advance must be followed by an explicit bounds check or the program will not load.

Load and attach with ip:

# Compile
clang -O2 -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o

# Attach to interface (native XDP)
ip link set eth0 xdp obj xdp_drop_udp.o sec xdp

# Add 1.2.3.4 to the blocklist via bpftool (key bytes in network byte order)
bpftool map update name blocklist key 0x01 0x02 0x03 0x04 value 0x01 0x00 0x00 0x00

# Remove XDP program
ip link set eth0 xdp off
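The map key is the raw ip->saddr, i.e. the address bytes in network byte order, which is why the bpftool key above is written byte by byte. A small host-side sketch of how a userspace control plane might build that key (blocklist_key is an illustrative helper name, not part of any library):

```c
// blocklist_key.c — sketch: build the 4-byte blocklist key for an IPv4 string.
// blocklist_key() is a hypothetical helper, not a libbpf API.
#include <string.h>
#include <arpa/inet.h>

// Convert dotted-quad text to the 4 key bytes the map (and bpftool) expect.
// Returns 0 on success, -1 on parse failure.
int blocklist_key(const char *ip, unsigned char out[4]) {
    struct in_addr addr;
    if (inet_pton(AF_INET, ip, &addr) != 1)
        return -1;
    memcpy(out, &addr.s_addr, 4);  // s_addr is already in network byte order
    return 0;
}
```

For "1.2.3.4" this yields the bytes 0x01 0x02 0x03 0x04, matching the bpftool invocation above; a real control plane would pass the same bytes to bpf_map_update_elem() on the map's file descriptor.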

4. AF_XDP: Kernel-Bypass

AF_XDP is a socket family that, combined with XDP's XDP_REDIRECT verdict, delivers packets directly to a userspace memory region (UMEM) without kernel involvement per-packet. This is the eBPF ecosystem's answer to DPDK's kernel-bypass model.

Key components:

  • UMEM: A userspace-registered memory region divided into frames. Shared between kernel and userspace via shared memory.
  • Rings: Four lock-free rings per socket: Fill (userspace → kernel with free frames), Completion (kernel → userspace with TX-done frames), RX ring (kernel → userspace with received frames), TX ring (userspace → kernel with frames to send).
  • Zero-copy mode: If the driver supports it, frames are transferred without any copy — just a pointer hand-off.

AF_XDP is ideal for custom packet processing at line rate without the operational complexity of DPDK (no hugepages, no CPU pinning required for basic use).
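The ring mechanics above can be modeled in a few lines. This is a toy single-producer/single-consumer ring, not the kernel's actual xsk ring ABI (which adds memory barriers and a shared mmap layout), but it shows the producer/consumer index scheme the Fill, Completion, RX, and TX rings all share:

```c
// spsc_ring.c — toy single-producer/single-consumer ring of frame addresses.
// Illustrative only: real xsk rings live in shared memory and use barriers.
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 8  // power of two so indices wrap with a mask

struct ring {
    uint32_t prod;                // written only by the producer side
    uint32_t cons;                // written only by the consumer side
    uint64_t frames[RING_SIZE];   // frame addresses (offsets into UMEM)
};

// Producer side: enqueue a frame address; returns false if the ring is full.
bool ring_push(struct ring *r, uint64_t frame) {
    if (r->prod - r->cons == RING_SIZE)
        return false;
    r->frames[r->prod & (RING_SIZE - 1)] = frame;
    r->prod++;  // the real implementation publishes with a memory barrier
    return true;
}

// Consumer side: dequeue a frame address; returns false if the ring is empty.
bool ring_pop(struct ring *r, uint64_t *frame) {
    if (r->cons == r->prod)
        return false;
    *frame = r->frames[r->cons & (RING_SIZE - 1)];
    r->cons++;
    return true;
}
```

Each side writes only its own index, which is what lets the kernel and userspace share a ring without locks.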

5. tc BPF: Traffic Shaping & Filtering

tc (traffic control) BPF programs attach at the clsact qdisc and can run on ingress or egress. Unlike XDP, they see the full sk_buff and can access socket metadata, VLANs, and tunnel headers.

// tc_mark.c — Mark packets with DSCP EF (46) for VoIP traffic on port 5060
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/pkt_cls.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>

SEC("classifier")
int tc_mark_voip(struct __sk_buff *skb) {
    void *data     = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end) return TC_ACT_OK;
    if (eth->h_proto != __constant_htons(ETH_P_IP)) return TC_ACT_OK;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end) return TC_ACT_OK;
    if (ip->protocol != IPPROTO_UDP) return TC_ACT_OK;

    struct udphdr *udp = (void *)(ip + 1);
    if ((void *)(udp + 1) > data_end) return TC_ACT_OK;

    // Mark SIP traffic (port 5060) with DSCP EF: 46 << 2 = 0xB8 in the TOS byte
    if (udp->dest == __constant_htons(5060) || udp->source == __constant_htons(5060)) {
        __u16 old_word = *(__u16 *)ip;  // 16-bit word holding version/IHL + TOS
        ip->tos = 0xB8;                 // direct packet write (allowed in tc BPF)
        __u16 new_word = *(__u16 *)ip;
        // BPF_F_RECOMPUTE_CSUM only maintains skb->csum; the IPv4 header
        // checksum must be patched explicitly for the changed word
        bpf_l3_csum_replace(skb, ETH_HLEN + offsetof(struct iphdr, check),
                            old_word, new_word, 2);
    }
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

# Attach tc BPF program
tc qdisc add dev eth0 clsact
tc filter add dev eth0 egress bpf da obj tc_mark.o sec classifier
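Rewriting the TOS byte changes the IPv4 header checksum, which is what helpers such as bpf_l3_csum_replace patch for you. The incremental update they apply (RFC 1624) can be checked in plain host-side C against a full recompute; csum_incr_update and csum_full are illustrative names:

```c
// csum_update.c — sketch of RFC 1624 incremental checksum update: when one
// 16-bit header word changes from `old` to `new_val`, the IPv4 checksum can
// be patched without re-summing the whole header.
#include <stdint.h>

// HC' = ~(~HC + ~m + m')  (RFC 1624, eqn. 3), in ones-complement arithmetic
uint16_t csum_incr_update(uint16_t check, uint16_t old, uint16_t new_val) {
    uint32_t sum = (uint16_t)~check;
    sum += (uint16_t)~old;
    sum += new_val;
    while (sum >> 16)                       // fold carries back into 16 bits
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}

// Reference: full ones-complement sum over the header's 16-bit words
// (checksum field treated as zero by the caller)
uint16_t csum_full(const uint16_t *words, int n) {
    uint32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += words[i];
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}
```

The same arithmetic is why the tc program above only needs to hand the helper the old and new 16-bit word containing the TOS byte.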

6. Rate Limiting with eBPF Maps

eBPF maps enable stateful processing. The following pattern implements per-source-IP rate limiting using a token bucket stored in a BPF_MAP_TYPE_LRU_HASH:

// Conceptual token bucket per source IP — checks tokens, drops if exceeded
struct ratelimit_entry {
    __u64 tokens;        // current token count
    __u64 last_update;   // nanoseconds timestamp
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 65536);
    __type(key, __u32);                     // source IP
    __type(value, struct ratelimit_entry);
} rate_map SEC(".maps");

// In XDP program:
// 1. bpf_ktime_get_ns() — get current time
// 2. Lookup entry for src IP
// 3. Refill tokens: tokens += elapsed_ns * rate_pps / 1000000000 (integer math; eBPF has no floating point)
// 4. If tokens >= 1: decrement and XDP_PASS
// 5. Else: XDP_DROP
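The five steps above map onto ordinary integer arithmetic. A host-runnable sketch of the same logic (allow_packet, rate_pps, and the burst cap are illustrative names; in the XDP program the entry comes from rate_map and `now` from bpf_ktime_get_ns()):

```c
// token_bucket.c — host-side sketch of the per-source token bucket.
// Illustrative only: the real check runs in the XDP program on map state.
#include <stdint.h>
#include <stdbool.h>

#define NSEC_PER_SEC 1000000000ULL

struct ratelimit_entry {
    uint64_t tokens;       // current token count
    uint64_t last_update;  // nanoseconds timestamp of last refill
};

// Returns true if the packet should pass (a token was consumed),
// false if it should be dropped. rate_pps and burst are policy knobs.
bool allow_packet(struct ratelimit_entry *e, uint64_t now,
                  uint64_t rate_pps, uint64_t burst) {
    uint64_t elapsed = now - e->last_update;
    // Step 3: refill, using integer math as eBPF has no floating point
    uint64_t refill = elapsed * rate_pps / NSEC_PER_SEC;
    if (refill > 0) {
        e->tokens = (e->tokens + refill > burst) ? burst : e->tokens + refill;
        e->last_update = now;
    }
    // Steps 4-5: consume one token, or signal a drop
    if (e->tokens >= 1) {
        e->tokens--;
        return true;
    }
    return false;
}
```

The burst cap bounds how many tokens an idle source can bank; without it, a long-quiet host could burst arbitrarily.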

7. bpftool & bpftrace Introspection

Two essential tools for working with live eBPF programs:

# bpftool — inspect loaded programs and maps
bpftool prog list                         # list all loaded eBPF programs
bpftool prog show id 42                   # details for program ID 42
bpftool prog dump xlated id 42            # disassemble to eBPF bytecode
bpftool prog dump jited id 42             # dump JIT-compiled native code
bpftool map list                          # list all BPF maps
bpftool map dump name blocklist           # dump all entries in map "blocklist"
bpftool map update name blocklist \
    key 192 168 1 100 value 1 0 0 0       # add entry (network byte order)

# bpftrace — DTrace-style one-liners for kernel tracing
# Count XDP exceptions (e.g. XDP_ABORTED, failed redirects) per second
bpftrace -e 'tracepoint:xdp:xdp_exception { @drops[args->act] = count(); } interval:s:1 { print(@drops); clear(@drops); }'

# Trace tcp_retransmit_skb — show retransmit events with comm name
bpftrace -e 'kprobe:tcp_retransmit_skb { printf("%s retransmit\n", comm); }'

# Histogram of packet sizes on eth0
bpftrace -e 'tracepoint:net:netif_receive_skb /str(args->name) == "eth0"/ { @size = hist(args->len); }'

8. Comparison: eBPF/XDP vs DPDK vs RDMA

| Feature | eBPF/XDP | DPDK | RDMA |
|---|---|---|---|
| Kernel involvement | Minimal (XDP in driver) | None (full bypass) | None (RDMA NIC) |
| Memory model | Standard + AF_XDP UMEM | Hugepages required | Registered memory regions |
| Max throughput | ~100 Gbps (native XDP) | >100 Gbps | 200+ Gbps (InfiniBand) |
| CPU usage | Low (event-driven) | High (busy-poll cores) | Near zero (offloaded) |
| Ops complexity | Low (standard tools) | High (dedicated cores, hugepages) | High (fabric management) |
| Use case | DDoS mitigation, LB, observability | Virtual routers, NFV, packet gen | Storage (NVMe-oF), HPC MPI |
| Language | Restricted C / Rust | C / Rust | Verbs API (C) |

Rule of thumb: start with eBPF/XDP. It integrates with existing kernel tooling, requires no special hardware or hugepages, and covers most high-performance networking use cases below 100 Gbps. Move to DPDK only when you need dedicated busy-polling cores and cannot tolerate any kernel scheduling overhead.