Creating a Network Function Virtualization (NFV) lab using Docker containers is an excellent way to teach networking concepts in a practical and hands-on manner. This approach allows students to explore the principles of NFV, including the virtualization of network functions, resource management, and network orchestration, all within a flexible and scalable environment. This guide provides a comprehensive overview of how to set up such a lab, covering essential components, configurations, best practices, and example setups.
NFV is a network architecture concept that decouples network functions from dedicated hardware, enabling them to run as software on commodity servers. This virtualization allows for greater flexibility, scalability, and cost-effectiveness in network deployments. Docker containers are a lightweight and efficient way to package and deploy these virtualized network functions (VNFs). Docker provides lightweight, OS-level process isolation, making it well suited to running multiple VNFs on a single host. The portability of Docker containers ensures that the lab environment can be easily replicated across different systems, and their resource efficiency allows for more VNFs to be deployed on limited hardware.
Before setting up the NFV lab, ensure the following hardware and software requirements are met:
Processor: A multi-core CPU (quad-core or higher) is recommended to handle multiple containers. Intel i5/i7 or AMD equivalents are suitable.
RAM: At least 16 GB of RAM is necessary, with 32 GB or more recommended for more complex setups. This ensures smooth operation of multiple VNFs.
Storage: A minimum of 100 GB of free disk space is required, preferably an SSD for faster performance. This space will be used for container images and configurations.
Network Interface Cards (NICs): At least two NICs are recommended for simulating network traffic and isolating different network segments. This allows for more realistic network simulations.
Operating System: A Linux distribution such as Ubuntu 22.04 LTS is recommended (CentOS 8 has reached end of life; Rocky Linux or AlmaLinux are current RHEL-compatible alternatives). Linux provides better support for Docker and networking tools.
Docker: Install the latest version of Docker Engine. This is the core component for running containers. Installation guides are available at https://docs.docker.com/engine/install/.
Docker Compose: Install Docker Compose for managing multi-container setups. This tool simplifies the deployment and management of complex lab topologies. Installation guides are available at https://docs.docker.com/compose/install/.
Git: Install Git for cloning repositories and managing configuration files. Installation guides are available at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git.
Open vSwitch (OVS): Install Open vSwitch for virtual networking. OVS is a multilayer virtual switch that enables the creation of complex network topologies. The official website is https://www.openvswitch.org/.
Python: Version 3.8 or higher is recommended for automation and orchestration scripts. Python is a versatile language for scripting and automation tasks.
Mininet: For simulating Software-Defined Networking (SDN). Mininet is a network emulator that can be used to create virtual networks. Installation guides are available at http://mininet.org/download/.
Comnetsemu VM: For hands-on NFV and SDN tutorials. Comnetsemu provides a virtual environment for learning NFV and SDN. Instructions are available at https://git.comnets.net/public-repo/comnetsemu.
An NFV lab typically consists of the following components:
VNFs are software implementations of network functions, such as routers, firewalls, and load balancers. These functions are deployed as Docker containers. Examples include:
Open vSwitch (OVS): A virtual switch for managing network traffic. The Docker image is available at https://hub.docker.com/r/openvswitch/ovs.
NGINX: A web server that can act as a load balancer. The Docker image is available at https://hub.docker.com/_/nginx.
FRRouting (FRR): A routing suite that can be used to create containerized routers. The Docker image is available at https://hub.docker.com/r/frrouting/frr.
pfSense: A FreeBSD-based firewall distribution. Because it ships its own operating system, pfSense is normally deployed as a virtual machine rather than as a Docker container; for container-based firewalling in this lab, the iptables approach below is a better fit.
iptables: A Linux firewall that can be configured inside a custom container (a minimal sketch follows this list).
HAProxy: A load balancer that can be deployed as a container.
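As a rough sketch of the iptables option, the commands below start a plain Alpine container with the NET_ADMIN capability and add a single rule inside it. The container name fw-vnf, the Alpine base image, and the rule itself are illustrative choices, not part of any particular VNF image:

# Start a minimal container that is allowed to manage its own firewall rules
docker run -d --name fw-vnf --cap-add=NET_ADMIN alpine sleep infinity
# Install iptables inside the container and add an example rule
docker exec fw-vnf apk add --no-cache iptables
docker exec fw-vnf iptables -A INPUT -p icmp -j ACCEPT
# List the resulting rule set
docker exec fw-vnf iptables -L -n

In a full lab the same container would be attached to the lab network (for example the nfv-network created later in this guide) so that its rules act on traffic between VNFs.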
An orchestrator manages the deployment and lifecycle of VNFs. For Docker-based labs, Docker Compose can serve as a lightweight orchestrator. For more complex setups, tools like Inmanta Connect can be used for service orchestration.
Prometheus and Grafana: For monitoring container performance. Prometheus collects metrics, and Grafana visualizes them. The Prometheus Docker image is available at https://hub.docker.com/r/prom/prometheus, and the Grafana Docker image is available at https://hub.docker.com/r/grafana/grafana.
Docker Networking: Use Docker’s bridge or overlay networks to connect VNFs. Docker networks allow for isolated communication between containers.
Open vSwitch (OVS): Integrate OVS for more complex virtual networking scenarios. OVS allows for the creation of virtual switches and VLANs.
VXLAN: Use VXLAN for creating overlay networks. VXLAN is a network virtualization technology that can be used to create isolated networks on top of existing infrastructure (see the tunnel sketch after this list).
SDN Controller (Optional): Tools like ONOS or OpenDaylight can be integrated for advanced SDN functionalities. These controllers allow for centralized management of network resources.
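As an illustration of the VXLAN option, Open vSwitch can terminate a VXLAN tunnel directly on a bridge. The bridge name br-vx, the remote address 203.0.113.10, and the VNI 100 below are placeholders for your own topology; the same commands would be run on the peer host with the addresses reversed:

# Create a bridge and add a VXLAN tunnel port pointing at the peer host
sudo ovs-vsctl add-br br-vx
sudo ovs-vsctl add-port br-vx vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=203.0.113.10 options:key=100

Containers attached to br-vx on either host then share a single overlay segment.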
Follow the official Docker installation guide for your operating system. Verify the installation using the following commands (on newer installations Compose ships as a Docker plugin, so docker compose version is the equivalent check):
docker --version
docker-compose --version
Create custom Docker networks for connecting VNFs. For example, create a bridge network:
docker network create --driver bridge nfv-network
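You can confirm that the network exists and check its subnet and connected containers with:

docker network ls
docker network inspect nfv-network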
Pull and run Docker images for VNFs. Here are examples for Open vSwitch and NGINX:
Open vSwitch VNF:
docker pull openvswitch/ovs
docker run --name ovs-vnf --net nfv-network -d openvswitch/ovs
NGINX VNF:
docker pull nginx
docker run --name nginx-vnf --net nfv-network -d nginx
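To confirm that containers on nfv-network can reach each other by name, a quick test from a throwaway container (busybox is used here purely for illustration) is:

docker run --rm --net nfv-network busybox ping -c 3 nginx-vnf
docker run --rm --net nfv-network busybox wget -qO- http://nginx-vnf

The second command should print the default NGINX welcome page, since Docker's embedded DNS resolves container names on user-defined networks.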
Create a docker-compose.yml file to define multi-container setups. Here's an example:
version: '3.8'
services:
  ovs:
    image: openvswitch/ovs
    container_name: ovs-vnf
    networks:
      - nfv-network
  nginx:
    image: nginx
    container_name: nginx-vnf
    networks:
      - nfv-network
networks:
  nfv-network:
    driver: bridge
Deploy the setup using:
docker-compose up -d
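Once the stack is running, its state and logs can be inspected with:

docker-compose ps
docker-compose logs nginx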
Deploy Prometheus and Grafana for monitoring. Create a docker-compose.yml file for Prometheus and Grafana:
version: '3.8'
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
Deploy the monitoring stack:
docker-compose up -d
Access the dashboards:
Prometheus: http://<host-ip>:9090
Grafana: http://<host-ip>:3000 (default login: admin/admin)
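By default the prom/prometheus image only scrapes itself. If you want to change the scrape interval or add further targets, a minimal prometheus.yml along the following lines can be mounted into the container at /etc/prometheus/prometheus.yml through a volumes entry in the compose file; this is a sketch, not a complete monitoring configuration:

# prometheus.yml (minimal example)
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

Inside the compose network, Grafana can then use http://prometheus:9090 as its Prometheus data source URL.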
Install Open vSwitch and create a virtual switch (replace ens33 in the last command with the name of the physical NIC you want to attach to the bridge):
sudo apt install -y openvswitch-switch
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 ens33
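The bridge and its attached ports can be verified with:

sudo ovs-vsctl show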
Create a Docker network for addressing the lab containers (the attachment to Open vSwitch itself is done with ovs-docker in the next step):
docker network create \
--driver=bridge \
--subnet=192.168.1.0/24 \
nfvlab-net
Attach running Docker containers to the OVS bridge. The commands below assume containers named firewall, router, and loadbalancer already exist:
sudo ovs-docker add-port br0 eth1 firewall
sudo ovs-docker add-port br0 eth1 router
sudo ovs-docker add-port br0 eth1 loadbalancer
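Depending on the Open vSwitch version, the ovs-docker utility also accepts --ipaddress (and --gateway) options, so an interface can be given an address from the 192.168.1.0/24 range as it is attached; the interface name eth2 and the address below are only examples:

sudo ovs-docker add-port br0 eth2 firewall --ipaddress=192.168.1.11/24
# Ports added this way can be removed again with:
sudo ovs-docker del-port br0 eth2 firewall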
Resource Allocation: Limit container resources using Docker's --memory and --cpus flags. This prevents any single container from consuming excessive resources (a concrete example appears after this list of practices).
Security: Use Docker’s built-in security features like user namespaces and SELinux. This enhances the security of the lab environment.
Backup: Regularly back up Docker volumes and configurations. This ensures that the lab can be restored in case of any issues.
Documentation: Maintain clear documentation for lab setups and configurations. This helps in understanding and troubleshooting the lab environment.
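As a concrete illustration of the resource-limit flags mentioned under Resource Allocation, an NGINX instance capped at 512 MB of RAM and one CPU (arbitrary example values) could be started as follows:

# Run an NGINX VNF with memory and CPU limits applied
docker run --name nginx-limited --net nfv-network --memory=512m --cpus=1.0 -d nginx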
Server Homogeneity: Ensure all servers have identical configurations to simplify operations and maintenance. This is crucial for consistent performance and easier management.
NUMA Node Alignment: Align NUMA nodes to optimize resource allocation and performance. This is important for data-plane intensive workloads.
NIC Selection: Choose NICs that support high data plane performance, including throughput, overlay offloading, and PCIe speeds. This ensures optimal network performance.
Data Plane Traffic Path: Isolate data plane traffic from control and management plane traffic to maximize performance. This prevents interference between different types of traffic.
Comprehensive Automation: Use an automation platform like CloudShell to manage resources, provision, automate tests, and integrate reporting and business intelligence. This streamlines the management of the lab.
Scalability Testing: Ensure VNFs have auto-scale features to scale resources automatically in response to varying network function performance needs. This ensures that the lab can handle varying workloads.
Compliance Testing: Follow ETSI ISG standards and guidelines for compliance testing, including analyzing current standards, preparing compliance requirements, and defining gap points. This ensures that the lab adheres to industry standards.
Basic functional testing of the finished lab can be performed with simple tools such as curl and ping to verify connectivity between VNFs.

Setting up an NFV lab using Docker containers provides a flexible, scalable, and cost-effective way to teach networking concepts. By following the steps outlined in this guide, educators can create a robust learning environment for students to explore NFV principles, experiment with different network functions, and gain hands-on experience with network virtualization technologies. The use of Docker, Docker Compose, Open vSwitch, and other tools allows for the creation of complex network topologies and the simulation of real-world scenarios. This approach not only enhances the learning experience but also prepares students for the challenges of modern network environments.