Switching Istio Egress Traffic to Wildcard Destinations from an Nginx SNI Proxy to Envoy
If you use an Istio egress gateway in Kubernetes (Istio version <= 1.18) to process and control outgoing traffic to wildcard destinations and plan to upgrade to the latest available version, you first need to prepare an equivalent configuration built on version-independent Istio components and Envoy features, without a separate Nginx SNI proxy.
The use case of configuring an egress gateway to handle arbitrary wildcard domains was included in the official Istio docs up until version 1.13, but was subsequently removed because the documented solution was not officially supported or recommended and was subject to breakage in future versions of Istio. Nevertheless, the old solution remained usable with Istio versions up to and including 1.18. Istio 1.19, however, dropped some Envoy functionality that the approach relied on.
To switch to the new solution in Kubernetes, we deploy the new Istio egress gateway and its supporting components as described in the documentation: https://istio.io/latest/docs/tasks/traffic-management/egress/wildcard-egress-hosts/.
Apply the following manifests:
kubectl apply -f - <<EOF
# New k8s cluster service to put egressgateway into the Service Registry,
# so application sidecars can route traffic towards it within the mesh.
apiVersion: v1
kind: Service
metadata:
  name: egressgateway
  namespace: istio-system
spec:
  type: ClusterIP
  selector:
    istio: egressgateway
  ports:
  - port: 443
    name: tls-egress
    targetPort: 8443
---
# Gateway deployment with injection method
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-egressgateway
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: egressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
      labels:
        istio: egressgateway
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        image: auto # The image will automatically update each time the pod starts.
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsUser: 1337
          runAsGroup: 1337
---
# Set up roles to allow reading credentials for TLS
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: istio-egressgateway-sds
  namespace: istio-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
- apiGroups:
  - security.openshift.io
  resourceNames:
  - anyuid
  resources:
  - securitycontextconstraints
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: istio-egressgateway-sds
  namespace: istio-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-egressgateway-sds
subjects:
- kind: ServiceAccount
  name: default
---
# Define a new listener that enforces Istio mTLS on inbound connections.
# This is where the sidecars will route the application traffic, wrapped
# into Istio mTLS.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egressgateway
  namespace: istio-system
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 8443
      name: tls-egress
      protocol: TLS
    hosts:
    - "*"
    tls:
      mode: ISTIO_MUTUAL
---
# VirtualService that will instruct sidecars in the mesh to route the outgoing
# traffic to the egress gateway Service if the SNI target hostname matches
# the wildcard (the same hosts as in the ServiceEntry below)
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-wildcard-through-egress-gateway
  namespace: istio-system
spec:
  exportTo:
  - "."
  hosts:
  - "*.wikipedia.org"
  gateways:
  - mesh
  - egressgateway
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - "*.wikipedia.org"
    route:
    - destination:
        host: egressgateway.istio-system.svc.cluster.local
        subset: wildcard
  # Dummy routing instruction. If omitted, no reference will point to the Gateway
  # definition, and istiod will optimise the whole new listener out.
  tcp:
  - match:
    - gateways:
      - egressgateway
      port: 8443
    route:
    - destination:
        host: "dummy.local"
      weight: 100
---
# Instruct sidecars to use Istio mTLS when sending traffic to the egress gateway
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway
  namespace: istio-system
spec:
  host: egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: wildcard
    trafficPolicy:
      tls:
        mode: ISTIO_MUTUAL
---
# Put the remote targets into the Service Registry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wildcard
  namespace: istio-system
spec:
  hosts:
  - "*.wikipedia.org"
  ports:
  - number: 443
    name: tls
    protocol: TLS
---
# Access logging for the gateway
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
---
# And finally, the configuration of the SNI forwarder,
# its internal listener, and the patch to the original Gateway
# listener to route everything into the SNI forwarder.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: sni-magic
  namespace: istio-system
spec:
  configPatches:
  - applyTo: CLUSTER
    match:
      context: GATEWAY
    patch:
      operation: ADD
      value:
        name: sni_cluster
        load_assignment:
          cluster_name: sni_cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  envoy_internal_address:
                    server_listener_name: sni_listener
  - applyTo: CLUSTER
    match:
      context: GATEWAY
    patch:
      operation: ADD
      value:
        name: dynamic_forward_proxy_cluster
        lb_policy: CLUSTER_PROVIDED
        cluster_type:
          name: envoy.clusters.dynamic_forward_proxy
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.clusters.dynamic_forward_proxy.v3.ClusterConfig
            dns_cache_config:
              name: dynamic_forward_proxy_cache_config
              dns_lookup_family: V4_ONLY
  - applyTo: LISTENER
    match:
      context: GATEWAY
    patch:
      operation: ADD
      value:
        name: sni_listener
        internal_listener: {}
        listener_filters:
        - name: envoy.filters.listener.tls_inspector
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.listener.tls_inspector.v3.TlsInspector
        filter_chains:
        - filter_chain_match:
            server_names: []
          filters:
          - name: envoy.filters.network.sni_dynamic_forward_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.sni_dynamic_forward_proxy.v3.FilterConfig
              port_value: 443
              dns_cache_config:
                name: dynamic_forward_proxy_cache_config
                dns_lookup_family: V4_ONLY
          - name: envoy.tcp_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
              stat_prefix: tcp
              cluster: dynamic_forward_proxy_cluster
              access_log:
              - name: envoy.access_loggers.file
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                  path: "/dev/stdout"
                  log_format:
                    text_format_source:
                      inline_string: '[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%
                        %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS%
                        "%UPSTREAM_TRANSPORT_FAILURE_REASON%" %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
                        %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
                        "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%" %UPSTREAM_CLUSTER%
                        %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS%
                        %REQUESTED_SERVER_NAME% %ROUTE_NAME%
                        '
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.tcp_proxy"
    patch:
      operation: MERGE
      value:
        name: envoy.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp
          cluster: sni_cluster
EOF
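With the manifests applied, the new path can be checked end to end before any traffic is switched over. A minimal sketch, assuming the Istio sleep sample workload runs in a sidecar-injected namespace (the deployment name sleep and namespace default are assumptions; substitute any in-mesh pod and a hostname covered by your ServiceEntry):

```shell
# Send a TLS request from an in-mesh pod to a host that matches the
# "*.wikipedia.org" ServiceEntry.
kubectl exec -n default deploy/sleep -c sleep -- \
  curl -sS https://en.wikipedia.org/wiki/Main_Page >/dev/null && echo OK

# The connection should appear in the new gateway's access log,
# with the requested SNI name in the %REQUESTED_SERVER_NAME% field.
kubectl logs -n istio-system deploy/istio-egressgateway --tail=20 | grep wikipedia.org
```

If the first command succeeds but nothing shows up in the gateway log, the sidecar is most likely still sending the traffic through the passthrough cluster rather than through the VirtualService route.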
This configuration does not conflict with the previously deployed solution, which can keep running alongside it. To switch outgoing traffic to the new configuration without downtime, replace the selectors in the istio-egressgateway-with-sni-proxy Service with the selector of the new egress gateway pods, after which outgoing traffic will be directed to the pods of the new istio-egressgateway:
kubectl edit svc istio-egressgateway-with-sni-proxy -n istio-system:
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: istio-egressgateway-with-sni-proxy
    meta.helm.sh/release-namespace: istio-system
  labels:
    app: istio-egressgateway-with-sni-proxy
    istio: egressgateway-with-sni-proxy
  name: istio-egressgateway-with-sni-proxy
  namespace: istio-system
spec:
  clusterIP: 172.20.214.225
  clusterIPs:
  - 172.20.214.225
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    istio: egressgateway ### Specify the pod labels of the new istio-egress here (the new Deployment above only sets istio: egressgateway)
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
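The same selector swap can also be done non-interactively, which is easier to script (and to revert) during a maintenance window. A sketch using kubectl patch, assuming the new gateway pods carry only the istio: egressgateway label from the Deployment above:

```shell
# Repoint the old Service at the pods of the new egress gateway.
# With a merge patch, setting a selector key to null removes it.
kubectl patch svc istio-egressgateway-with-sni-proxy -n istio-system \
  --type merge \
  -p '{"spec":{"selector":{"app":null,"istio":"egressgateway"}}}'
```

To roll back, apply the reverse patch restoring the original app and istio selector values.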
After this, the logs of the istio-egressgateway pod will show the traffic flowing through it, while the logs of the istio-egressgateway-with-sni-proxy pod will record no further traffic. At this point Istio can be upgraded to the latest available version.
Later, you can repoint the DestinationRules and VirtualServices of your applications at the new Service and Gateway objects and delete the now-unused istio-egressgateway-with-sni-proxy objects.
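Once nothing references the old gateway any more, the cleanup itself is a couple of commands. A sketch; the Deployment name is an assumption derived from the Service name, so list the actual old objects in your cluster first:

```shell
# Confirm no mesh config still points at the old gateway before deleting it.
kubectl get virtualservices,destinationrules -A -o yaml | grep sni-proxy

# Remove the legacy Service and its workload.
kubectl delete svc istio-egressgateway-with-sni-proxy -n istio-system
kubectl delete deploy istio-egressgateway-with-sni-proxy -n istio-system
```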