Lee Calcote | Tony Burke
December 7th, 2016
developer-friendly and application-driven
simple to use and deploy for developers and operators
better or at least on par with their existing virtualized data center networking
but with no need for developers to actually know the details of those networks
The Container Network Model (CNM) is a specification proposed by Docker and implemented by libnetwork
Plugins built by projects such as Weave, Project Calico, PLUMgrid, and Kuryr.
The Container Network Interface (CNI) is a specification proposed by CoreOS and adopted by projects such as rkt, Kurma, Kubernetes, Cloud Foundry, and Apache Mesos
Plugins created by projects such as Weave, Project Calico, PLUMgrid, Midokura, and Contiv Networking
[Diagram: the Container Network Model — each Docker Container holds a Network Sandbox with one or more Endpoints; Endpoints attach to Frontend and Backend Networks, which are implemented by Local or Remote Drivers.]
Container runtime needs to:
allocate a network namespace to the container and assign a container ID
pass along a number of parameters (the CNI config) to the network driver
The network driver attaches the container to a network, then reports the assigned IP address back to the container runtime (as a JSON result)
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
(JSON)
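As a rough sketch of that handshake, a runtime could invoke the reference bridge plugin by hand against the config above; the container ID, netns path, and plugin location below are illustrative, while the CNI_* environment variables and the stdin/stdout contract come from the CNI spec:

# illustrative invocation of the bridge plugin, as a runtime would perform it
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-id \
CNI_NETNS=/var/run/netns/example \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < mynet.conf
# the plugin writes its result (assigned IP, gateway, routes) as JSON to stdout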
Similar in that each...
...allow multiple network drivers to be active and used concurrently
each provides a one-to-one mapping of a network to that network’s driver
...allow containers to join one or more networks.
...allow the container runtime to launch the network in its own namespace
isolate the logic of connecting the container to the network within the network driver.
Different in that...
CNM is designed around Docker and libnetwork, while CNI is runtime-agnostic and enjoys broader adoption (rkt, Kubernetes, Cloud Foundry, Mesos)
None
Links and Ambassadors
Container-mapped
Bridge
Host
Overlay
Underlay
Point-to-Point
Fan Networking
container receives a network stack, but lacks an external network interface.
it does, however, receive a loopback interface.
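One way to see this for yourself (a sketch; the image is just an example):

docker run --rm --net=none alpine ip addr
# only the loopback interface (lo) appears; there is no eth0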
[Diagram: links and ambassadors — on the web host, a PHP container links to a MySQL ambassador; on the DB host, a PHP ambassador links to the actual MySQL container; the two ambassadors carry traffic between hosts.]
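Links themselves are established with the --link flag; a minimal sketch, with names and images as illustrative examples:

docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
docker run --rm --link mysql:db alpine ping -c 1 db
# "db" resolves to the linked mysql container via an /etc/hosts entry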
one container reuses (maps to) the networking namespace of another container.
may only be invoked when running a docker container (cannot be defined in Dockerfile):
--net=container:some_container_name_or_id
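A sketch of the mapping (container names and images illustrative):

docker run -d --name web nginx
docker run --rm --net=container:web alpine ip addr
# the second container sees exactly the interfaces and IP of "web"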
Ah, yes, docker0
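That docker0 bridge is the default: each container gets a private IP on it and reaches outward through NAT and port-mapping. A minimal sketch (image and ports illustrative):

docker run -d -p 8080:80 nginx
# the container receives a private address on docker0;
# host port 8080 is NAT'd to container port 80
docker network inspect bridge
# shows the bridge's subnet and its attached containers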
in host mode, the container shares its network namespace with the host
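A sketch that makes the sharing visible (image illustrative):

docker run --rm --net=host alpine ip addr
# prints the host's own interfaces; no bridge, NAT, or port-mapping involved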
overlays use networking tunnels to deliver communication across hosts
Docker -
1.11 requires an external K/V store
built-in as of 1.12 (Raft implementation taken from etcd)
WeaveMesh - does not require K/V store
WeaveNet - limited to single network; requires K/V store
Flannel - requires K/V store
PLUMgrid - requires K/V store; built-in and not pluggable
Midokura - requires K/V store; built-in and not pluggable
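With Docker 1.12's built-in driver, creating and using an overlay looks roughly like this (subnet and names are illustrative; the host must be a swarm manager):

docker network create -d overlay --subnet 10.1.0.0/24 my-overlay
docker service create --name web --network my-overlay nginx
# containers of the service reach each other across hosts over the tunnel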
expose host interfaces (e.g., the physical network interface at eth0) directly to containers running on the host
not necessarily public cloud friendly
allows creation of multiple virtual network interfaces behind the host’s single physical interface
Each virtual interface has unique MAC and IP addresses assigned
with one restriction: the IP addresses need to be in the same broadcast domain as the physical interface
eliminates the need for the Linux bridge, NAT, and port-mapping
allowing containers to connect directly to the physical interface
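With Docker's macvlan driver that looks roughly like this; the subnet, gateway, and parent interface are assumptions about the local network:

docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet
docker run --rm --net=macnet alpine ip addr
# the container's interface sits directly on the physical network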
allows creation of multiple virtual network interfaces behind the host’s single physical interface
Each virtual interface has unique IP addresses assigned
Same MAC address used for all containers
L2 mode: containers must be on the same network as the host (similar to MACvlan)
L3 mode: containers must be on a different network than the host
network advertisement and redistribution into the network still need to be done
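Docker's ipvlan driver was still experimental at the time of writing; an L3-mode sketch, with subnet and parent interface as assumptions:

docker network create -d ipvlan \
  --subnet=10.10.10.0/24 \
  -o parent=eth0 -o ipvlan_mode=l3 ipnet
docker run --rm --net=ipnet alpine ip addr
# routes to 10.10.10.0/24 must still be advertised into the physical network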
While multiple modes of networking are supported on a given host, MACvlan and IPvlan can’t be used on the same physical interface concurrently.
for ARP and broadcast traffic, the L2 modes of these underlay drivers operate just as a server connected to a switch does: flooding and learning using 802.1D packets
IPvlan L3-mode - No multicast or broadcast traffic is allowed in.
In short, if you’re used to running trunks down to hosts, L2 mode is for you.
If scale is a primary concern, L3 has the potential for massive scale.
Benefits of pushing past L2 to L3
resonates with network engineers
leverage existing network infrastructure
use routing protocols for connectivity; easier to interoperate with the existing data center network, across VMs and bare-metal servers
Better scaling
More granular control over filtering and isolating network traffic
Easier traffic engineering for quality of service
Easier to diagnose network issues
a way of gaining access to many more IP addresses, expanding from one assigned IP address to 253 more
“address expansion” - multiplies the number of available IP addresses on the host, providing an extra 253 usable addresses for each host IP
Fan addresses are assigned as subnets on a virtual bridge on the host, and IP addresses are mathematically mapped between networks (e.g., with a 250.0.0.0/8 overlay, the host 172.16.3.4 owns the fan subnet 250.3.4.0/24)
uses IP-in-IP tunneling; high performance
particularly useful when running containers in a public cloud
where a single IP address is assigned to a host and spinning up additional networks is prohibitive or running another load-balancer instance is costly
IPAM, multicast, broadcast, IPv6, load-balancing, service discovery, policy, quality of service, advanced filtering and performance are all additional considerations to account for when selecting networking that fits your needs.
See additional research.