March 2018
Resiliency, security and observability at Layer 5
clouds, containers, functions, applications and their management
The first few services are relatively easy
Democratization of language and technology choice
Faster delivery, service teams running independently, rolling updates
The next 10 or so may introduce pain
Language- and framework-specific libraries
Distributed environments, ephemeral infrastructure, outmoded tooling
Cluster Management
Host Discovery
Host Health Monitoring
Scheduling
Orchestrator Updates and Host Maintenance
Service Discovery
Networking and Load-Balancing
Stateful services
Multi-tenant, multi-region
Application Health & Performance Monitoring
Application Deployments
Application Secrets
• Observability
• Logging
• Metrics
• Tracing
• Traffic Control
• Resiliency
• Efficiency
• Security
• Policy
a dedicated layer for managing service-to-service communication
so, a microservices platform?
obviously.
Orchestrators don't bring all that you need
and neither do service meshes,
but they do get you closer.
Missing: application lifecycle management, but not by much
partially.
Missing: distributed debugging; service meshes provide only nascent visibility (topology)
where Dev and Ops meet
Problem: too much infrastructure code in services
to avoid...
Bloated service code
Duplicating work to make services production-ready
load balancing, auto scaling, rate limiting, traffic routing, ...
Inconsistency across services
retries, TLS, failover, deadlines, cancellation, etc., reimplemented for each language and framework
siloed implementations lead to fragmented, non-uniform policy application and difficult debugging
Diffusing responsibility of service management
Can modernize your IT inventory without:
Rewriting your applications
Adopting microservices (regular services are fine)
Adopting new frameworks
Moving to the cloud
Address the long tail of IT services
Get there for free
An open platform to connect, manage, and secure microservices
Observability
Resiliency
Traffic Control
Security
Policy Enforcement
@IstioMesh
Observability is what gets people hooked on service metrics
Metrics without instrumenting apps
Consistent metrics across fleet
Trace flow of requests across services
Portable across metric backend providers
You get a metric! You get a metric! Everyone gets a metric!
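To make that concrete: Mixer derives metrics from request attributes reported by the sidecars, so no application code changes are needed. A minimal sketch in the v1alpha2 config schema Istio 0.5 used, adapted from the telemetry samples (the instance, handler and rule names here are illustrative, not part of the demo):

# Sketch: count requests per source/destination service and export via Prometheus
apiVersion: config.istio.io/v1alpha2
kind: metric               # instance: shapes request attributes into a metric value
metadata:
  name: demorequestcount   # illustrative name
  namespace: istio-system
spec:
  value: "1"               # each request increments the counter by 1
  dimensions:
    source: source.service | "unknown"
    destination: destination.service | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: config.istio.io/v1alpha2
kind: prometheus           # handler: exposes the metric to Prometheus
metadata:
  name: demohandler
  namespace: istio-system
spec:
  metrics:
  - name: demo_request_count
    instance_name: demorequestcount.metric.istio-system
    kind: COUNTER
    label_names:
    - source
    - destination
---
apiVersion: config.istio.io/v1alpha2
kind: rule                 # rule: routes the instance to the handler
metadata:
  name: demoprom
  namespace: istio-system
spec:
  actions:
  - handler: demohandler.prometheus
    instances:
    - demorequestcount.metric

Once applied with istioctl create -f, every request flowing through a sidecar shows up as demo_request_count, with no app instrumentation.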
control over chaos
Timeouts and Retries with timeout budget
Circuit breakers and Health checks
Control connection pool size and request load
content-based traffic steering
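A sketch of how these knobs surface in the v1alpha2 rules of the 0.5 era (service names and values are illustrative):

# Timeouts and retries with a per-try budget for calls to the ratings service
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-timeout
spec:
  destination:
    name: ratings
  precedence: 1
  route:
  - labels:
      version: v1
  httpReqTimeout:
    simpleTimeout:
      timeout: 10s       # overall request timeout
  httpReqRetries:
    simpleRetry:
      attempts: 3
      perTryTimeout: 2s  # timeout budget per retry attempt
---
# Circuit breaking and connection-pool limits for the same destination
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
  name: ratings-circuit-breaker
spec:
  destination:
    name: ratings
    labels:
      version: v1
  circuitBreaker:
    simpleCb:
      maxConnections: 100          # cap the connection pool size
      httpMaxPendingRequests: 10   # cap queued request load
      httpConsecutiveErrors: 7     # eject a host after 7 consecutive errors
      sleepWindow: 15m
      httpDetectionInterval: 5m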
Data Plane: Touches every packet/request in the system. Responsible for service discovery, health checking, routing, load balancing, authentication, authorization and observability.
Control Plane: Provides policy and configuration for services in the mesh. Takes a set of isolated stateless sidecar proxies and turns them into a service mesh. Does not touch any packets/requests in the system.
[Architecture diagram: the Control Plane (Pilot, Mixer, Auth) runs in the istio-system namespace; in the application namespace, a proxy sidecar runs beside the Foo container (Foo Pod, Service Foo) and the Bar container (Bar Pod, Service Bar). Pilot supplies discovery & config to the sidecars, Auth supplies TLS certs, and the sidecars call Mixer for policy checks during request processing while propagating telemetry reports out-of-band. Application traffic flows service-to-service through the sidecars.]
Pilot: the head of the ship
provides service discovery to sidecars
manages sidecar configuration
system of record for the service mesh
provides abstraction from underlying platforms
Mixer: an attribute-processing and routing machine (operator-focused)
[Diagram: proxy sidecars send out-of-band telemetry reports to Mixer and call it for checks during request processing; application traffic flows through the sidecar beside each app container in the application namespace.]
Auth: security at scale, security by default
orchestrates key & certificate generation, deployment, rotation and revocation
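In the 0.5-era install, mutual TLS between sidecars comes from choosing the auth-enabled manifest at deploy time (the stock upstream file is shown; this demo's AppOptics manifest may bundle its own variant):

# Deploy Istio with mutual TLS enabled between sidecars
kubectl apply -f install/kubernetes/istio-auth.yaml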
Envoy: the included battery
A C++-based L4/L7 proxy
Low memory footprint
In production at Lyft
Capabilities: dynamic service discovery, load balancing, TLS termination, HTTP/2 & gRPC proxying, circuit breaking, health checks and rich metrics
[Diagram: Data Plane; a proxy sidecar deployed next to the app container in each pod]
Envoy, Linkerd, Nginx, Conduit
Choose based on your operational expertise and need for a battle-tested proxy. You may be looking for caching, WAF or other functionality available in NGINX Plus.
If you're already running Linkerd and want to start adopting Istio control APIs like CheckRequest.
Conduit is not currently designed as a general-purpose proxy, but is lightweight and focused, with extensibility via gRPC plugins.
Currently: considered beta quality; see sidecar-related limitations as well as supported traffic management rules
Roadmap: soliciting feedback and participation from the community
Architecture
[Diagram: nginMesh architecture; in the Data Plane, an agent inside the "istio-proxy" container receives route rules from Pilot and writes the NGINX config file (listeners, a tcp server and http servers with dest modules), while a Mixer module issues check and report calls to Mixer in the istio-system namespace. Application traffic flows through NGINX during request processing; telemetry propagates out-of-band.]
Recording - O'Reilly: Istio & nginMesh
Mixer adapters (types: logs, metrics, access control, quota):
AppOptics, Papertrail, Prometheus, Grafana, Fluentd, Statsd, Stackdriver, Open Policy Agent
Let's look at Istio's canonical sample app.
[Diagram: Bookinfo; the Product container (Product Pod) calls the Details container (Details Pod) and the Reviews Service, which spans Reviews v1, v2 and v3 pods; the Reviews v2 and v3 versions call the Ratings container (Ratings Pod).]
[Diagram: the same Bookinfo topology with an Nginx sidecar injected into every pod (Product, Details, Ratings and each Reviews version) and an Envoy ingress in front of the mesh.]
kubectl version
kubectl get ns
kubectl apply -f ../istio-appoptics-0.5.1-solarwinds-v01.yaml
# If using admission controller and initializers
kubectl apply -f ./install/kubernetes/istio-sidecar-injector.yaml
deploy Istio
kubectl get ns
watch kubectl get po,svc -n istio-system
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
confirm deployment Istio; deploy sample app
running Istio
watch kubectl get po,svc
kubectl describe po/<a pod>
echo "http://$(kubectl get nodes -o template --template='{{range.items}}{{range.status.addresses}}{{if eq .type "InternalIP"}}{{.address}}{{end}}{{end}}{{end}}'):$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')/productpage"
See "reviews" v1, v2 and v3
# From Docker's perspective
docker ps | grep istio-proxy
# From Kubernetes' perspective
kubectl get po
kubectl describe po/<a pod>
Verify mesh deployment
# exec into 'istio-proxy'
kubectl exec -it <a pod> -c istio-proxy /bin/bash
envoy --version
Connect to proxy sidecar
istioctl get routerules
# Generate load for Mixer telemetry adapter
docker run --rm istio/fortio load -c 1 -t 10m \
`echo "http://$(kubectl get nodes -o template --template='{{range.items}}{{range.status.addresses}}{{if eq .type "InternalIP"}}{{.address}}{{end}}{{end}}{{end}}'):$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')/productpage"`
Generate load
running Istio
# Deploy new routing configuration to the proxy
cd samples/bookinfo/kube
istioctl get routerules
istioctl create -f route-rule-all-v1.yaml
istioctl delete -f route-rule-all-v1.yaml
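For reference, route-rule-all-v1.yaml pins every service to its v1 subset; its reviews entry looks roughly like this in the v1alpha2 schema (one of four similar default rules):

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1   # all reviews traffic to v1; sibling rules cover productpage, details, ratings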
# A/B testing for user "lee"
cat route-rule-reviews-test-v2.yaml
istioctl create -f route-rule-reviews-test-v2.yaml
Apply traffic routing policy
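The A/B rule cat'ed above likely resembles this sketch: a higher-precedence match on the login cookie sends only user "lee" to reviews v2 (the upstream sample targets user "jason"; the regex is from that sample):

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2                            # evaluated before the v1 default rule
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=lee)(;.*)?$"   # match the logged-in user
  route:
  - labels:
      version: v2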
See Mixer telemetry
Try it out
clouds, containers, functions, applications and their management