eBPF (extended Berkeley Packet Filter) is a highly flexible and efficient virtual-machine-like component in the Linux kernel. It runs directly in kernel mode and can process packets and system calls quickly, avoiding the overhead of kernel/user mode switching and data copying. Thanks to its strong security and stability, more and more eBPF-based projects have sprung up. Many kernel subsystems already use eBPF, most commonly in networking, load balancing, tracing, and security. In addition, widely used open-source projects such as Cilium, Falco, BCC, Katran, bpftrace, and kubectl-trace have adopted the technology. This forum will present eBPF in depth and share how to integrate it into real-world work.
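To make the programming model concrete, below is a minimal sketch of an eBPF kernel program that counts execve() system calls per process. This is an illustration only, not code from any project mentioned here; building it requires clang with the BPF target, kernel headers, and a libbpf-based loader.

```c
// Minimal eBPF sketch: count execve() calls per PID.
// Illustrative only -- requires clang -target bpf, kernel headers,
// and a user-space loader (e.g. libbpf) to actually run.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);    /* PID */
    __type(value, __u64);  /* call count */
} exec_count SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int count_execve(void *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 one = 1, *cnt;

    cnt = bpf_map_lookup_elem(&exec_count, &pid);
    if (cnt)
        __sync_fetch_and_add(cnt, 1);
    else
        bpf_map_update_elem(&exec_count, &pid, &one, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

The program runs entirely in kernel mode at the tracepoint, and user space reads the map asynchronously — this is the "no mode switching, no data copying" property described above.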
Achieving zero-intrusion cloud-native observability with eBPF
The shift to microservices and cloud-native has transformed application architecture. While the number of services has increased and the complexity of individual services has decreased, the overall complexity of distributed applications has risen sharply. In a cloud-native environment, achieving application observability and keeping the business under control has become an important challenge for developers. Leveraging the new in-kernel programmability offered by eBPF, DeepFlow innovatively implements AutoMetrics, AutoTracing, and AutoLogging capabilities without requiring developers to manually insert code or instrumentation, enabling full-stack observability for cloud-native applications.
● From cBPF to eBPF: AutoMetrics capability
● From InProcess to Distributed: AutoTracing capability
● From kprobe to uprobe: AutoLogging capability
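The kprobe-to-uprobe jump in the last bullet can be illustrated with a bpftrace one-liner that hooks a user-space function — here OpenSSL's SSL_write, so plaintext lengths are visible before encryption. This is a generic illustration of the uprobe technique, not DeepFlow's implementation; the library path is an assumption and varies by distribution, and running it requires root and bpftrace.

```
# Hypothetical library path; adjust for your system. Requires root.
bpftrace -e 'uprobe:/usr/lib/x86_64-linux-gnu/libssl.so.3:SSL_write
{
    printf("pid %d wrote %d plaintext bytes\n", pid, arg2);
}'
```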
Target audience and benefits:
● Obtain best practices for eBPF in observability field
● Understand DeepFlow's cloud-native observability platform
Xiang Yang | R&D VP of YUNSHAN Networks
BPF cold upgrade: letting low-version kernels use new features
As one of the hottest areas of kernel development in recent years, eBPF has been evolving rapidly upstream. In production environments, however, kernel stability is paramount: business users often want to stay on a stable, older kernel while still using newer BPF features. Based on the plugsched scheduler hot-upgrade technology, we have developed a modularized BPF subsystem that adapts to flexible development needs on a stable, low-version kernel, achieving both goals.
BPF cold upgrade (plugbpf) inherits the advantages of plugsched: no machine restart is required, and downtime is measured in milliseconds. By replacing internal syscall and interface functions, the upgrade is transparent to users; they simply run their own BPF programs, directly and normally, as if on a higher-version kernel.
Plugbpf works as a kernel module and currently supports Linux 4.19 and 5.10 on x86 platforms. Users can load the module after ensuring that no BPF programs are active on the original system.
Tianchen Ding | Alibaba Cloud Operating System R&D Engineer
The combination of eBPF and confidential computing ecosystem
In this talk, we will cover the basics of eBPF and confidential computing, how the two ecosystems can be combined based on open-source practices in both fields, and our thoughts on the future development of eBPF and confidential computing.
Zhenyu Zheng | Senior Software Engineer of Huawei
An operations and maintenance North Star metric system based on eBPF trace profiling
Despite the existence of trace, log, and metrics technologies, observability still struggles with root cause analysis. Because current root-cause-analysis technology is immature, Internet companies face significant challenges, with most analysis still relying on the experience of technical personnel.
Continuous profiling based on eBPF is a hot topic internationally because it promises to find root causes. According to our research, however, continuous profiling can only solve single-dimension CPU problems and has difficulty reaching the trace level, making it hard to use in production environments with many concurrent user requests.
Kindling has pioneered trace_profiling technology based on eBPF by reducing the granularity of profiling to the trace level. This lets users pinpoint a single request and, through eBPF, view the execution of the trace's code path as a resource-consumption timeline at the trace level, providing a standardized approach to root cause analysis.
This session will discuss how Kindling builds trace_profiling and its applicable scenarios.
Cheng Chang | Founder of the Kindling open source project & CTO of HarmonyCloud
In the service mesh scenario, to use a Sidecar for traffic management without modifying the application, both inbound and outbound traffic of a Pod must be forwarded to the Sidecar. The most common solution is the redirection capability of iptables (netfilter). The disadvantage of this approach is increased network latency, because iptables intercepts traffic in both directions. For example, inbound traffic that would otherwise flow directly to the application must first be forwarded by iptables to the Sidecar, and then by the Sidecar to the actual application. A path that used to involve two kernel-level processing hops now involves four, causing a significant loss of performance.
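eBPF-based meshes commonly avoid this overhead with sockops/sk_msg programs, which record established local connections in a socket map so a companion sk_msg program can redirect data socket-to-socket, bypassing the TCP/IP stack and iptables. The following is a minimal sketch of the sockops half of that idea — a generic illustration of the technique, not Merbridge's actual code — and it requires kernel headers and a BPF toolchain to build.

```c
// Sketch: record established local connections in a sockhash so that
// a companion sk_msg program can redirect data between sockets,
// bypassing iptables. Illustrative only; not the Merbridge source.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct sock_key {
    __u32 sip, dip;
    __u32 sport, dport;
};

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 65535);
    __type(key, struct sock_key);
    __type(value, __u32);
} sock_ops_map SEC(".maps");

SEC("sockops")
int record_established(struct bpf_sock_ops *skops)
{
    if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB ||
        skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) {
        struct sock_key key = {
            .sip   = skops->local_ip4,
            .dip   = skops->remote_ip4,
            .sport = skops->local_port,
            /* remote_port is in network byte order, local_port is not */
            .dport = bpf_ntohl(skops->remote_port),
        };
        bpf_sock_hash_update(skops, &sock_ops_map, &key, BPF_NOEXIST);
    }
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```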
This presentation will introduce the implementation principles of the Merbridge project and explain how it uses eBPF for network acceleration in service meshes such as Istio, Kuma, and Linkerd2.
Qijun Liu | Service Mesh expert at DaoCloud, Merbridge project initiator, and Istio maintainer