
Building a Service Mesh with Envoy

Arvind Thangamani

Published: Sep 30, 2019

Service mesh is the communication layer in a microservices setup. All requests to and from each of the services go through the mesh. Also known as an infrastructure layer in a microservices setup, the service mesh makes communication between services reliable and secure.

Each service has its own proxy service (a sidecar), and all the proxy services together form the service mesh. The sidecars handle communication between services, which means all the traffic goes through the mesh, and this transparent layer can now control how services interact.

The service mesh provides observability, service discovery and load balancing through components controlled by APIs.

In effect, if a service wants to call another service, it does not call the destination service directly. The request is routed to the local proxy, and the proxy routes it to the destination service. This means the service instance is not aware of the outside world and only knows about its local proxy.

Services with local proxies
The ThoughtWorks Technology Radar, a biannual document assessing the risks and rewards of existing and nascent technologies, describes it this way: “A service mesh offers consistent discovery, security, tracing, monitoring and failure handling without the need for a shared asset such as an API gateway or ESB. A typical implementation involves lightweight reverse-proxy processes deployed alongside each service process, perhaps in a separate container.”

When talking about the service mesh, one inevitably hears of the ‘sidecar’: a proxy available to every instance of a service. Each sidecar takes care of one instance of one service. We will talk about sidecars in a little more detail further into the article.

The sidecar
 

What can a service mesh deliver?

All that a service mesh can deliver
 
More and more organizations are moving to a microservices architecture, and such organizations require all of the above-mentioned capabilities that a service mesh can deliver. A decoupled approach that removes the need for a shared library or custom code in every service is a clear winner.
 

Why Envoy?

Envoy isn’t the only choice when building a service mesh; other proxies like Nginx, Traefik and more are perfectly suitable. I have chosen Envoy, a high-performance proxy written in C++, because I prefer Envoy’s light footprint, powerful routing, observability and extensibility.

Let’s begin by building a service mesh setup with three services. Here’s a graphic representation of what we are trying to build:
Services setup with sidecar proxies
 

Front Envoy

Front Envoy is the edge proxy in our setup, where we would usually carry out TLS termination, authentication, request header generation and more.

Here’s a look at the Front Envoy configuration:

---
admin:
  access_log_path: "/tmp/admin_access.log"
  address: 
    socket_address: 
      address: "127.0.0.1"
      port_value: 9901
static_resources: 
  listeners:
    - 
      name: "http_listener"
      address: 
        socket_address: 
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        - 
          filters: 
            - 
              name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                route_config: 
                  name: "local_route"
                  virtual_hosts: 
                    - 
                      name: "http-route"
                      domains: 
                        - "*"
                      routes: 
                        - 
                          match: 
                            prefix: "/"
                          route:
                            cluster: "service_a"
                http_filters:
                  - 
                    name: "envoy.router"
  clusters:
    - 
      name: "service_a"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - 
          socket_address: 
            address: "service_a_envoy"
            port_value: 8786


Envoy configuration mainly consists of:
  1. Listeners
  2. Routes
  3. Clusters
  4. Endpoints


Listeners

One or more listeners can run in a single Envoy instance. Lines 9 to 36 specify the address and port of the current listener. Each listener can also have one or more network filters; these filters enable routing, TLS termination, traffic shifting and similar activities. “envoy.http_connection_manager” is one of the inbuilt filters we employ here; Envoy has several others.
 

Routes

Lines 22 to 34 configure the route specification for the filter: the domains from which we should accept requests, and a route matcher that matches each request against the configured rules and forwards it to the appropriate cluster.
 

Clusters

Clusters are specifications for the upstream services to which Envoy routes traffic. Lines 41 to 50 define ‘Service A,’ the only upstream that Front Envoy will talk to. “connect_timeout” is the time limit for establishing a connection to the upstream service before returning a 503.

There is usually more than one instance of Service A, and Envoy supports multiple load balancing algorithms for routing the traffic. In our example, we use a simple round robin algorithm.
 

Endpoints

‘hosts’ specifies the instances of Service A to which we want to route traffic. In our case, we have only one.

Note that line 48 does not point directly at Service A. We talk to an instance of Service A’s Envoy proxy instead, which then routes to the local Service A instance.

We could also leverage a service name that returns all the instances of Service A, like a headless service in Kubernetes. Yes, we are carrying out client-side load balancing: Envoy caches all the hosts of Service A and refreshes the hosts list every 5 seconds.

Envoy also supports both active and passive health checking. If we want active health checks, we configure them in the cluster configuration section, as sketched below.
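For illustration, here is a sketch of what active health checking could look like on Front Envoy’s service_a cluster, assuming the upstream exposes a /health endpoint (the path, intervals and thresholds here are hypothetical); dns_refresh_rate is included to show where the 5-second host refresh for strict_dns clusters is tuned:

  clusters:
    - 
      name: "service_a"
      connect_timeout: "0.25s"
      type: "strict_dns"
      dns_refresh_rate: "5s"        # how often the hosts list is re-resolved
      lb_policy: "ROUND_ROBIN"
      health_checks:
        - 
          timeout: "1s"             # per-check timeout
          interval: "5s"            # time between checks
          unhealthy_threshold: 2    # failures before a host is marked unhealthy
          healthy_threshold: 2      # successes before it is reinstated
          http_health_check:
            path: "/health"         # hypothetical health endpoint
      hosts:
        - 
          socket_address:
            address: "service_a_envoy"
            port_value: 8786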
 

Others

Lines 2 to 7 configure the admin server, which helps in viewing configurations, changing log levels, viewing stats and more.

Line 8’s “static_resources” means that all the configuration is loaded manually. We will discuss how to do this dynamically a little later in this article.

There are many more configuration options than described here; our aim is not to cover them all, but to get started with a minimal configuration.
 

Service A

Here is the Envoy configuration for ‘Service A’:

admin:
  access_log_path: "/tmp/admin_access.log"
  address: 
    socket_address: 
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:

    -
      name: "service-a-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8786
      filter_chains:
        -
          filters:
            -
              name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                route_config:
                  name: "service-a-svc-http-route"
                  virtual_hosts:
                    -
                      name: "service-a-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        -
                          match:
                            prefix: "/"
                          route:
                            cluster: "service_a"
                http_filters:
                  -
                    name: "envoy.router"
    -
      name: "service-b-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8788
      filter_chains:
        -
          filters:
            -
              name: "envoy.http_connection_manager"
              config:
                stat_prefix: "egress"
                codec_type: "AUTO"
                route_config:
                  name: "service-b-svc-http-route"
                  virtual_hosts:
                    -
                      name: "service-b-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        -
                          match:
                            prefix: "/"
                          route:
                            cluster: "service_b"
                http_filters:
                  -
                    name: "envoy.router"

    -
      name: "service-c-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8791
      filter_chains:
        -
          filters:
            -
              name: "envoy.http_connection_manager"
              config:
                stat_prefix: "egress"
                codec_type: "AUTO"
                route_config:
                  name: "service-c-svc-http-route"
                  virtual_hosts:
                    -
                      name: "service-c-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        -
                          match:
                            prefix: "/"
                          route:
                            cluster: "service_c"
                http_filters:
                  -
                    name: "envoy.router"                                
  clusters:
      -
        name: "service_a"
        connect_timeout: "0.25s"
        type: "strict_dns"
        lb_policy: "ROUND_ROBIN"
        hosts:
          -
            socket_address:
              address: "service_a"
              port_value: 8081  
      -
        name: "service_b"
        connect_timeout: "0.25s"
        type: "strict_dns"
        lb_policy: "ROUND_ROBIN"
        hosts:
          -
            socket_address:
              address: "service_b_envoy"
              port_value: 8789

      -
        name: "service_c"
        connect_timeout: "0.25s"
        type: "strict_dns"
        lb_policy: "ROUND_ROBIN"
        hosts:
          -
            socket_address:
              address: "service_c_envoy"
              port_value: 8790


Lines 11 to 39 define a listener that routes traffic to the actual Service A instance; you can find the corresponding cluster definition for the service_a instance in lines 103 to 111.

Service A talks to Service B and Service C, so two more listeners and clusters are defined, respectively. In our example, we have separate listeners for each of our upstreams (localhost, Service B and Service C). An alternative approach would be a single listener that routes to any of the upstreams based on the URL or headers, as sketched below.
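For illustration, a minimal sketch of that alternative (not the configuration used in this setup): a single egress listener whose routes fan out to both clusters based on hypothetical /b and /c path prefixes:

    - 
      name: "service-a-egress-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8788
      filter_chains:
        - 
          filters:
            - 
              name: "envoy.http_connection_manager"
              config:
                stat_prefix: "egress"
                route_config:
                  name: "egress-route"
                  virtual_hosts:
                    - 
                      name: "egress-route"
                      domains:
                        - "*"
                      routes:
                        # hypothetical prefixes; header matches would work just as well
                        - 
                          match:
                            prefix: "/b"
                          route:
                            cluster: "service_b"
                        - 
                          match:
                            prefix: "/c"
                          route:
                            cluster: "service_c"
                http_filters:
                  - 
                    name: "envoy.router"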
 

Service B & Service C

Service B and Service C sit at the leaf level and do not talk to any upstream apart from the local service instance, so their configuration is going to be simple.

Here is the Envoy configuration for ‘Service B’ (Service C’s is analogous, with its own ports):

admin:
  access_log_path: "/tmp/admin_access.log"
  address: 
    socket_address: 
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:
    -
      name: "service-b-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8789
      filter_chains:
        -
          filters:
            -
              name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                route_config:
                  name: "service-b-svc-http-route"
                  virtual_hosts:
                    -
                      name: "service-b-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        -
                          match:
                            prefix: "/"
                          route:
                            cluster: "service_b"
                http_filters:
                  -
                    name: "envoy.router"
  clusters:
      -
        name: "service_b"
        connect_timeout: "0.25s"
        type: "strict_dns"
        lb_policy: "ROUND_ROBIN"
        hosts:
          -
            socket_address:
              address: "service_b"
              port_value: 8082

There is nothing complicated in the above configuration: just a single listener and a single cluster.

Once the configurations are done, we can deploy this setup to Kubernetes or test it with docker-compose. Run docker-compose build and docker-compose up, then hit localhost:8080 to see a request pass through all the services and Envoy proxies. The logs can be used to verify the same.
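For reference, a trimmed docker-compose sketch of such a setup, showing only Front Envoy and Service A with its sidecar (the build paths are hypothetical, and the full file would cover all three services); port 8080 on the host maps to Front Envoy’s listener on port 80:

version: "3"
services:
  front-envoy:
    build: ./front-envoy          # hypothetical build context with the Front Envoy config
    ports:
      - "8080:80"                 # hit localhost:8080 to enter the mesh
      - "9901:9901"               # Front Envoy admin interface
  service_a_envoy:
    build: ./service_a/envoy      # Service A's sidecar, listening on 8786
    expose:
      - "8786"
  service_a:
    build: ./service_a            # the app itself, reached only via its sidecar
    expose:
      - "8081"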
 

Envoy xDS

We provided configurations to each of the sidecars, and the configuration varied between the services. While we could handcraft and manage the sidecar configurations manually, at least for the initial two or three services, it becomes difficult as the number of services grows. Also, every time a sidecar configuration changes, you have to restart the Envoy instance for the changes to take effect.

We can avoid the manual configuration, as discussed earlier, and load all the components, Clusters (CDS), Endpoints (EDS), Listeners (LDS) and Routes (RDS), using an API server. Each sidecar then talks to the API server and receives its configuration; when new configuration is pushed to the API server, it is automatically reflected in the Envoy instances, avoiding a restart.
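As a rough sketch of that bootstrap, assuming a gRPC xDS server reachable at xds_server:18000 (a hypothetical host and port): listeners and clusters are fetched dynamically, and only the xDS cluster itself stays static:

dynamic_resources:
  lds_config:                       # listeners (and their routes) come from the API server
    api_config_source:
      api_type: "GRPC"
      grpc_services:
        - 
          envoy_grpc:
            cluster_name: "xds_cluster"
  cds_config:                       # clusters (and their endpoints) come from the API server
    api_config_source:
      api_type: "GRPC"
      grpc_services:
        - 
          envoy_grpc:
            cluster_name: "xds_cluster"
static_resources:
  clusters:
    - 
      name: "xds_cluster"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      http2_protocol_options: {}    # xDS is served over gRPC, which requires HTTP/2
      hosts:
        - 
          socket_address:
            address: "xds_server"
            port_value: 18000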

Here is a little more information about dynamic configuration, and here is a sample xDS server that you could use.
 

Kubernetes

This section evaluates a what-if situation: what if we were to implement the discussed setup in Kubernetes?

Single service with Envoy sidecar
 
So, changes will be needed in:
  1. Pod
  2. Service
     

Pod

By definition, a Pod can hold one or more containers, though our Pod specs so far have defined only one. To run a sidecar proxy with each of our service instances, we add the Envoy container to every Pod. To communicate with the outside world, the service container talks to the Envoy container over localhost.

This is what the deployment file would look like:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: servicea
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: servicea
    spec:
      containers:
      - name: servicea
        image: dnivra26/servicea:0.6
        ports:
        - containerPort: 8081
          name: svc-port
          protocol: TCP
      - name: envoy
        image: envoyproxy/envoy:latest
        ports:
          - containerPort: 9901
            protocol: TCP
            name: envoy-admin
          - containerPort: 8786
            protocol: TCP
            name: envoy-web
        volumeMounts:
          - name: envoy-config-volume
            mountPath: /etc/envoy-config/
        command: ["/usr/local/bin/envoy"]
        args: ["-c", "/etc/envoy-config/config.yaml", "--v2-config-only", "-l", "info","--service-cluster","servicea","--service-node","servicea", "--log-format", "[METADATA][%Y-%m-%d %T.%e][%t][%l][%n] %v"]
      volumes:
        - name: envoy-config-volume
          configMap:
            name: sidecar-config
            items:
              - key: envoy-config
                path: config.yaml


The containers section now has an Envoy sidecar added to it, and we mount our Envoy configuration file from a ConfigMap in lines 33 to 39.
 

Service

Kubernetes services take care of maintaining the list of Pod endpoints they can route traffic to. kube-proxy usually handles the load balancing between those Pod endpoints, but in our case we are carrying out client-side load balancing and do not want kube-proxy involved. We want to extract the list of Pod endpoints and load balance across it ourselves, which is what a “headless service” gives us: it returns the list of endpoints rather than a single cluster IP.

This is what it would look like:

apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  clusterIP: None
  ports:
    - port: 8786
      name: envoy-web
      protocol: TCP
  selector:
    app: servicea


There are two things to note here: first, line 6 (clusterIP: None) makes the service headless; second, we are not mapping the Kubernetes service port to the app’s service port, but to Envoy’s listener port, meaning the traffic goes to Envoy first. With that, the setup works perfectly on Kubernetes as well.

In this post we saw how to build a service mesh using the Envoy proxy: a setup where all of the communication between services happens through the mesh.

Now the mesh not only has a lot of data about the traffic but also control over it. Upcoming posts will delve into how one can leverage this for observability, release management and more. Also, in the current setup Envoy’s configuration is managed manually; in later posts we will look at using a control plane like Istio to manage the configurations.

You can find all the discussed configurations and code here.
