This fork of Mininet allows using Docker containers as Mininet hosts, which enables interesting functionality for building networking/cloud testbeds. The integration is done by subclassing the original Host class.
Based on: Mininet 2.2.2
- Containernet website: https://containernet.github.io/
- Mininet website: http://mininet.org
- Original Mininet repository: https://github.com/mininet/mininet
If you use Containernet for your research and/or other publications, please cite (besides the original Mininet paper) the following paper to reference our work:
- M. Peuster, H. Karl, and S. v. Rossem: MeDICINE: Rapid Prototyping of Production-Ready Network Services in Multi-PoP Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490. (2016)
There is an extension of Containernet called son-emu, a full-featured multi-PoP emulation platform for NFV scenarios developed as part of the SONATA project.
- Add, remove Docker containers to Mininet topologies
- Connect Docker containers to topology (to switches, other containers, or legacy Mininet hosts)
- Execute commands inside Docker containers by using the Mininet CLI
- Dynamic topology changes (let's behave like a small cloud ;-))
- Add Hosts/Docker containers to a running Mininet topology
- Connect Hosts/Docker containers to a running Mininet topology
- Remove Hosts/Docker containers/Links from a running Mininet topology
- Resource limitation of Docker containers
- CPU limitation with Docker CPU share option
- CPU limitation with Docker CFS period/quota options
- Memory/swap limitation
- Change CPU/mem limitations at runtime!
- Traffic control links (delay, bw, loss, jitter)
- (missing: TCLink support for dynamically added containers/hosts)
- Automated unit tests for all new features
- Automated installation based on Ansible playbook
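The runtime features listed above (dynamic topology changes and live resource-limit updates) can be sketched as follows. This is a hedged sketch, not a complete script: it assumes a started Containernet instance `net` with a switch `s1`, and the method names `updateCpuLimit`, `updateMemoryLimit`, `removeLink`, and `removeDocker` reflect Containernet's Docker host and network API, which may differ slightly between versions:

```python
# Sketch: add a container to a *running* network, adjust its limits, remove it.
# Assumes `net` is a started Containernet instance and `s1` one of its switches.
d5 = net.addDocker('d5', ip='10.0.0.254', dimage="ubuntu:trusty")
net.addLink(d5, s1)

# Change CPU and memory limits of the running container at runtime
# (method names as exposed by Containernet's Docker host class).
d5.updateCpuLimit(cpu_quota=20000, cpu_period=50000)
d5.updateMemoryLimit(mem_limit=300 * 1024 * 1024)  # limit in bytes (~300 MB)

# Tear the container down again without stopping the network.
net.removeLink(node1=d5, node2=s1)
net.removeDocker('d5')
```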
Containernet comes with three installation and deployment options.
Automatic installation is provided through an Ansible playbook.
- Requires: Ubuntu 16.04 LTS
sudo apt-get install ansible git aptitude
git clone https://github.com/containernet/containernet.git
cd containernet/ansible
sudo ansible-playbook -i "localhost," -c local install.yml
Wait (and have a coffee) ...
Containernet can be executed within a privileged Docker container (nested container deployment). There is also a pre-built Docker image available on DockerHub.
# build the container locally
docker build -t containernet .
# or pull the latest pre-built container
docker pull containernet/containernet
# run the container
docker run --name containernet -it --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock containernet /bin/bash
Using the provided Vagrantfile is another way to run and test Containernet:
vagrant up
vagrant ssh
Start the example topology, which connects some empty Docker containers to the network.
cd containernet
- run:
sudo python examples/containernet_example.py
- use:
containernet> d1 ifconfig
to see the network configuration of container d1
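Commands can also be executed programmatically from a topology script instead of the CLI. `cmd()` is the standard Mininet host method and works on Docker hosts as well; the sketch below assumes `d1` is a Docker host in a started network:

```python
# Run a command inside container d1 and capture its output as a string.
out = d1.cmd("ifconfig")
print(out)
```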
In your custom topology script you can add Docker hosts as follows:
info('*** Adding docker containers\n')
d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty")
d2 = net.addDocker('d2', ip='10.0.0.252', dimage="ubuntu:trusty", cpu_period=50000, cpu_quota=25000)
d3 = net.addHost('d3', ip='11.0.0.253', cls=Docker, dimage="ubuntu:trusty", cpu_shares=20)
d4 = net.addDocker('d4', dimage="ubuntu:trusty", volumes=["/:/mnt/vol1:rw"])
There is a set of Containernet-specific unit tests located in mininet/test/test_containernet.py. To run these, do:
sudo py.test -v mininet/test/test_containernet.py
If you have any questions, please use GitHub's issue system or Containernet's Gitter channel to get in touch.
Your contributions are very welcome! Please fork the GitHub repository and create a pull request. We use Travis-CI to automatically test new commits.
Manuel Peuster
- Mail: <manuel (dot) peuster (at) upb (dot) de>
- GitHub: @mpeuster
- Website: Paderborn University