
Kubernetes + Docker + Istio container cloud practice

As technology and society develop, people have an increasingly urgent need to use resources efficiently. In recent years, with the rapid growth of the Internet and mobile Internet, microservices for large-scale applications have attracted enthusiastic attention from enterprises, and container cloud solutions based on Kubernetes + Docker have also entered the public view. Kepler is a microservice management solution based on Kubernetes + Docker + Istio.

1. Microservices

1.1 Using microservices to solve the problems of large applications

Nowadays all the major companies are talking about microservices. Under this trend, everyone in technical circles discusses microservices and the various microservice solutions.

1.2 When we talk about microservices, what are we actually discussing?

There are many good reasons to adopt a microservice architecture, but there is no free lunch: despite its many advantages, microservices also add complexity. The team must actively manage this complexity so that the application can benefit from microservices.

1.2.1 Questions to answer when adopting microservices

    How to split microservices

    API business rules

    How to ensure data consistency

    Scalability considerations for later stages

Of course, these are not the main topics of this article. I will not discuss how to split specific microservices, since every application and every business is different, and the best split is the one that fits your own situation. Instead, we will focus on solving some of the problems that microservices bring.

1.2.2 Problems that microservices bring

    Environment consistency

    How to allocate resources quickly

    How to deploy quickly

    How to do basic monitoring

    Service registration and discovery

    How to do load balancing

These are the basic problems that a large application has to address when adopting microservices. If they are solved in the traditional way with virtual machines, resource consumption will be very high. So how do we solve them? And beyond these basics, there are further issues such as:

    Traffic management

    Service degradation

    Authentication

Of course, faced with these problems, our programmers certainly have solutions.

1.3 Service governance

1.3.1 The Java ecosystem

Suppose we have a Java application. Then things are very convenient to solve: for example, we can use the Spring Cloud suite, which can be broken down into:

  • Eureka
  • Hystrix
  • Zuul
  • Spring-cloud
  • Spring-boot
  • ZipKin

Within the Java ecosystem it is very convenient to cover the basic needs of our microservices, but environment consistency is still not handled comfortably, and services written in other languages are difficult to integrate.

Let's look at how programming languages in general can combine tools to solve these basic problems.

1.3.2 Other ecosystems

  • Consul
  • Kong
  • Go-kit
  • Jaeger/Zipkin

Suppose we are using Golang. Let me put in another good word for Go here: it is simply a language born for microservices. Efficient development speed, pretty good performance, lean and simple.

Back to the point: with the tools above we can also assemble a good microservice architecture.

    Consul: service discovery and configuration center

    Kong: API gateway

    Jaeger: distributed link tracing

    Go-kit: development toolkit

However, this approach also has a problem: it is too intrusive for the services. Every service needs to embed a lot of code, and that is a headache.

2. Docker & Kubernetes

Building a platform based on Docker + k8s.

2.1 Docker

Docker is a very powerful container technology.

    Improved resource utilization

    Environment consistency and portability

    Rapid scaling up and down

    Version control

After adopting Docker, we found that things became much more flexible. Not only did resource utilization improve; environment consistency was guaranteed and version control became more convenient.

Before, we used Jenkins to build. When a rollback was needed, we had to run the Jenkins build process again, which was very cumbersome. For a Java application, the build could take a very long time.

With Docker, it all becomes simple: just pull down an image of an earlier version and start it (if there is a local cache, start that version directly). This is a big efficiency improvement.

(Image source: Internet)

Since we use Docker containers as the basis of our services, we certainly need to orchestrate the containers; without orchestration things would be very scary. For Docker orchestration we had several options: Docker Swarm, Apache Mesos, and Kubernetes. Among these tools, we chose the king of service orchestration, Kubernetes.

2.1.1 Docker vs. VM

    VM: creating a virtual machine takes 1 minute, deploying the environment takes 3 minutes, and deploying the code takes another 2 minutes.

    Docker: starting the container takes 30 seconds.

2.2 Why choose Kubernetes

Let's compare the three container orchestration tools.

2.2.1 Apache Mesos

Mesos aims to build an efficient, scalable system that can support a wide variety of frameworks, current and future. A big problem today is that frameworks such as Hadoop and MPI are developed independently, which makes fine-grained sharing between frameworks impossible.

However, it is not based on Golang, it is not in our technology stack, and it would increase our maintenance cost, so we ruled it out first.

2.2.2 Docker Swarm

Docker Swarm is a scheduling framework developed by Docker itself. The benefit of being developed by Docker is the use of the standard Docker API. The Swarm architecture consists of two parts:

(Image source: Internet)

We will not go into detail on how to use it here.

2.2.3 Kubernetes

Kubernetes is an orchestration system for Docker containers. It uses the concepts of labels and pods to group containers into logical units. Pods are collections of co-located containers that are deployed and scheduled together to form a service; this is the main difference between Kubernetes and the other two frameworks. Compared with similarity-based container scheduling (as in Swarm and Mesos), this approach simplifies cluster management.

Not only that, it also provides a very rich API, which helps us operate on it and play more tricks. Another big point is that it fits our Golang technology stack and has major vendor support.

We will not go into the specifics of using Kubernetes here; there is plenty of material on the official website.

2.3 Kubernetes

Kubernetes (k8s) is an open-source platform for automating container operations, including deployment, scheduling, and scaling across node clusters.

    Automated deployment and replication of containers

    Scale containers up or down at any time

    Organize containers into groups and provide load balancing between them

    Easily upgrade application containers to new versions

    Provide container resilience: if a container fails, replace it, and so on...
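
To make the above concrete, here is a minimal sketch (names, image, and ports are placeholders, not our platform's actual manifests) of a Deployment plus a Service: labels group the Pods into a logical unit, replicas control scaling, the image tag drives version upgrades, and the Service load-balances across the Pods.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3                      # Kubernetes keeps three replicas running
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app           # labels group the Pods into a logical unit
        spec:
          containers:
          - name: example-app
            image: registry.example.com/example-app:v1.0.0   # bump the tag to upgrade
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example-app
    spec:
      selector:
        app: example-app               # the Service load-balances across matching Pods
      ports:
      - port: 80
        targetPort: 8080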

2.4 Kubernetes alone is still not enough

So far we have solved the following problems:

    Docker: environment consistency and rapid deployment.

    Kubernetes: service registration and discovery, load balancing, and rapid resource allocation.

Of course there is also monitoring, which we will talk about later. Now let's see how to solve some of the higher-level problems.

How do we solve service authentication, link tracing, log management, circuit breaking, traffic management, fault injection, and so on without intrusive code changes to the services?

A solution that has become very popular this year: Service Mesh.

3. Service Mesh

A dedicated infrastructure layer for service-to-service communication that makes request delivery reliable through the complex topology of a cloud-native application.

    A dedicated infrastructure layer for handling inter-service communication, making request delivery through complex topologies more reliable.

    A set of lightweight, high-performance network proxies deployed alongside the application; the application does not need to know they exist.

Delivering requests reliably in a cloud-native application can be very complex. A service mesh manages this complexity with a series of powerful techniques: circuit breaking, latency-aware load balancing, service discovery, retries and timeouts, and taking failed instances offline and removing them.

There are many Service Mesh frameworks on the market; we chose Istio, which is riding the crest of the trend.

3.1 Istio

An open platform to connect, manage, and secure microservices.

    Platform support: Kubernetes, Mesos, Cloud Foundry.

    Observability: metrics, logs, traces, dependency visualization.

    Service identity & security: provide verifiable identities for services and for service-to-service authentication.

    Traffic management: dynamic control of communication between services, ingress/egress routing, and fault injection.

    Policy enforcement: precondition checks and quota management between services.

3.2 Why did we choose Istio?

Because it has major vendor support behind it, and in fact its ideas are very good.

Although it has only just reached version 1.0, we started experimenting with version 0.6, running it in our test environment. When version 0.7.1 came out, we upgraded to it; later, when 0.8.0 LTS came out, we started using the official 0.8.0 release and made an upgrade plan.

The latest version, 1.0.4, has arrived, but we are not planning to upgrade yet; I want to wait until it reaches 1.2 before starting a formal large-scale rollout. For now, 0.8.0 LTS is fine on a small scale.

3.3 Istio architecture

Let's first look at the Istio architecture.

The Istio control plane is divided into three parts: Pilot, Mixer, and Istio-Auth.

    Pilot: mainly handles service discovery and routing rules, and manages all the Envoys; its resource consumption is very high.

    Mixer: mainly responsible for policy checks and quota management, as well as tracing; all requests are reported to Mixer.

    Istio-Auth: traffic encryption, identity authentication, and related features. We have not enabled this yet because the need is not particularly strong: the cluster itself is isolated from the outside.

A Sidecar is injected into each Pod, and all container traffic is redirected through iptables to Envoy for processing.
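
For reference, one common way to get the sidecar injected is to label the namespace for automatic injection (a sketch, assuming the Istio automatic sidecar injector is installed; the namespace name is a placeholder):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: example
      labels:
        istio-injection: enabled   # Pods created in this namespace get the Envoy sidecar injected

Alternatively, manifests can be injected manually with istioctl kube-inject before they are applied.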

4. Kubernetes & Istio

Istio can be deployed independently, but it is obviously a better choice to combine it with Kubernetes. Ours is a small-scale architecture based on Kubernetes. Some people worry about Istio's performance, but in our production tests, thousands of QPS were no problem.

4.1 Kubernetes Cluster

With limited resources, what does our k8s cluster look like?

4.1.1 Master Cluster

  • Master Cluster:

    • ETCD, kube-apiserver, kubelet, Docker, kube-proxy, kube-scheduler, kube-controller-manager, Calico, keepalived, IPVS.

4.1.2 Node

  • Node:

    • Kubelet, kube-proxy, Docker, Calico, IPVS.

(Image source: Internet)

API calls to the masters are managed through keepalived: if one master fails, traffic migrates smoothly to another master's API server without affecting the operation of the whole cluster.

Of course, we have also set up two edge nodes.

4.1.3 Edge Node

    Edge node

    Traffic entry point

The main function of the edge nodes is to expose the cluster's service capabilities to the outside. Because the two edge nodes are managed by Keepalived, no single node needs to stay up on its own; our IngressGateway is deployed on both edge nodes.

4.2 External service request flow

The outermost layer is DNS: wildcard resolution points to Nginx, Nginx forwards traffic to the cluster VIP, the VIP is served by an HAProxy cluster, and the external traffic is then sent to the Gateway on our edge nodes.

Each VirtualService is bound to the Gateway. Through the VirtualService we can do load balancing, rate limiting, fault handling, routing rules, and canary deployments for a service. The request then goes through the Service to the Pods where the service ultimately runs.
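
As a rough sketch (host names, service names, and weights are placeholders, not our production configuration), binding a VirtualService to the Gateway and splitting traffic for a canary might look like this:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: example-gateway
    spec:
      selector:
        istio: ingressgateway          # the IngressGateway running on the edge nodes
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "example.mydomain.com"
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: example-service
    spec:
      hosts:
      - "example.mydomain.com"
      gateways:
      - example-gateway                # bind this VirtualService to the Gateway above
      http:
      - route:
        - destination:
            host: example-service      # the Kubernetes Service of the stable version
          weight: 90
        - destination:
            host: example-service
            subset: canary             # canary subset, defined in a DestinationRule (not shown)
          weight: 10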

This is the flow when Mixer checks and policies are not used, with only the Istio-IngressGateway in place. If all Istio components are used the flow varies a little, but the main path is the same.

4.3 Logging

For log collection we use a loosely coupled, scalable solution that is easy to maintain and upgrade.

    Filebeat on the host nodes collects the node logs.

    A Filebeat container is injected into each Pod to collect the service container's logs.

Filebeat is deployed alongside the application container; the application does not need to know it is there and only has to specify the log directory as input. Filebeat reads its configuration from a ConfigMap, so we only need to maintain the log collection rules.
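
A minimal sketch of this pattern (image tag, paths, and names are illustrative assumptions, not our actual manifests): the Filebeat sidecar shares a log volume with the application container and loads its rules from a ConfigMap.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: app
            image: registry.example.com/example-app:latest
            volumeMounts:
            - name: app-logs
              mountPath: /var/log/app            # the app only needs to write its logs here
          - name: filebeat
            image: docker.elastic.co/beats/filebeat:6.4.2
            volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true                     # the sidecar only reads the log files
            - name: filebeat-config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml              # collection rules come from the ConfigMap
          volumes:
          - name: app-logs
            emptyDir: {}
          - name: filebeat-config
            configMap:
              name: filebeat-config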

In the figure we can see the collected logs in Kibana.

4.4 Prometheus + Kubernetes

    A monitoring system based on time-series data.

    Seamless integration with Kubernetes at both the infrastructure and the application level.

    A key-value data model with powerful query functionality.

    Vendor support.

4.4.1 Grafana

4.4.2 Alerting

The alert channels we currently support are WeChat, kplcloud, Email, and IM. All alerts can be configured to be sent to various destinations.
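
As a hypothetical example (rule name, metric names, and threshold are placeholders), a Prometheus alerting rule like the one below would fire an alert that AlertManager then routes to the WeChat, Email, or IM receivers:

    groups:
    - name: example-app.rules
      rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m])) by (service)
            / sum(rate(http_requests_total[5m])) by (service) > 0.05
        for: 5m                          # only fire after the condition holds for 5 minutes
        labels:
          severity: warning
        annotations:
          summary: "High 5xx error rate on service {{ $labels.service }}"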

4.4.3 Overall architecture

The overall architecture consists of services around the cluster and services inside the cluster. The peripheral services include:

    Consul, used as the configuration center.

    Prometheus + Grafana, used to monitor the K8s cluster.

    Zipkin, providing customized link tracing.

    ELK for log collection and analysis; all logs in the cluster are pushed here.

    Gitlab as the code repository.

    Jenkins, used to build the code, package it into a Docker image, and upload it to the image repository.

    Repository, the image repository.

Inside the cluster we have:

    HAProxy + keepalived, responsible for traffic forwarding.

    The network layer is Calico. Calico has beta-level support for kube-proxy's IPVS proxy mode: if Calico detects that kube-proxy is running in this mode, it automatically activates its IPVS support, so we enable IPVS.

    The cluster-internal DNS is CoreDNS.

    We deployed two gateways, mainly using Istio's IngressGateway, with TraefikIngress as a backup. If the IngressGateway goes down we can quickly switch to TraefikIngress.

    The Istio-related components described above.

    Finally, our own APP services.

    Filebeat inside the cluster collects logs and sends them to the external ES.

  • Monitoring inside the cluster includes:

      State-Metrics: mainly provides the metrics used by the autoscaling components

      Mail & Wechat: our self-developed alerting service

      Prometheus + Grafana + AlertManager: in-cluster monitoring, mainly monitoring the services and related infrastructure components

      InfluxDB + Heapster: the flow-monitoring database that stores the metrics of all services

4.5 So how do we deploy applications with Kubernetes?

4.5.1 Developers build the image, push it to the repository, and manage versions themselves

    Developers have to learn Docker.

    They have to learn how to configure the repository; packaging and uploading manually is troublesome.

    They have to learn k8s knowledge.

4.5.2 Let Jenkins handle building, pushing the image, and updating versions

    The operations workload increases considerably: applications need to be configured, and every service change has to go through the operations team.

    A pile of YAML files needs to be managed.

Is there a foolproof solution that does not require learning too many technologies and is easy to use?

5. Kplcloud platform

5.1 Kepler cloud platform

Kepler cloud platform is a lightweight PaaS platform.

    Provides a platform for the controlled management of microservice projects.

    Each service is deployed, maintained, and scaled independently.

    Simplified workflow: no more cumbersome application processes, with maximum automation.

    Rapid release of microservices, with independent monitoring and configuration.

    Zero-intrusion implementation of microservice features such as service discovery, service gateway, and link tracing.

    Provides a configuration center for unified configuration management.

    Development, product, testing, operations, and even the boss can release their own applications.

5.2 Deploying a service on the Kepler platform

To reduce the learning cost and deployment difficulty, deploying an application on the Kepler platform is very simple: just add a Dockerfile.

Dockerfile Reference:
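
For example, a multi-stage Dockerfile for a Go service might look roughly like this (paths, base-image versions, and the binary name are illustrative assumptions, not the platform's required template):

    # Build stage: compile the Go service
    FROM golang:1.11 AS build
    WORKDIR /go/src/app
    COPY . .
    RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/server ./cmd/server

    # Run stage: copy only the binary into a small base image
    FROM alpine:3.8
    RUN apk add --no-cache ca-certificates
    COPY --from=build /bin/server /bin/server
    EXPOSE 8080
    CMD ["/bin/server"]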

The above is the normal mode: Jenkins builds the code and then Docker builds the image.

This way of deploying is relatively flexible and can be customized to your own needs, though of course it comes with some learning cost.

5.2.1 Why not generate the Dockerfile automatically?

In fact, the Dockerfile could be generated automatically, but the needs of each service may differ: some need extra files added, some need extra build parameters, and so on. We cannot expect every project to be identical, as that would hinder technical progress. So we settled for the next best thing: we provide templates, and developers adjust them to their own needs.

5.3 Tool integration

    The Kepler cloud platform integrates the APIs of Gitlab, Jenkins, repo, k8s, istio, prometheus, Email, WeChat, and more.

    It manages the entire life cycle of a service.

    Service management covers creation, publishing, version releases, monitoring, alerts, and logs, as well as surrounding extras such as a message center, a configuration center, logging in to the container, and taking services offline.

    It can also adjust the service mode and service type, scale up and down with one click, and manage service APIs, rollbacks, and storage.

5.4 Release process

Users commit their Dockerfile along with the code to Gitlab, then fill in a few parameters on the Kepler cloud platform to create their application.

After the application is created, a Job is created in Jenkins that pulls down the code and runs Docker build (if multi-stage build is not selected, go build or mvn is executed first), then pushes the packaged Docker image to the image repository, and finally calls back to the platform or calls the k8s API to notify it to pull the latest version.

Users only need to manage their own applications on the Kepler cloud platform; everything else is handled automatically.

5.5 Getting started by creating a service

Let's introduce the platform by starting with creating a service.

The platform's main interface:

Click “Create service” to enter the creation page.

Basic Information:

Fill in the details:

The basic information above uses Golang as an example; when you select another language, the required parameters will be slightly different.

If you choose to expose the service externally, you will go on to a third step: filling in the routing rules. If there are no special requirements, just submit the defaults.

5.5.1 Service Details

Build and upgrade the application version:

The service call mode can be switched between normal service and service mesh.

Whether to expose the service externally:

Scale by adjusting CPU and memory:

Adjust the number of Pods to start:

Web-based terminal:

5.5.2 Scheduled tasks

5.5.3 Persistent storage

Administrators create the StorageClass and the PersistentVolumeClaim; users only need to select the relevant PVC and bind it to their own service.

Storage uses NFS.
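
A rough sketch of what the administrator-created objects might look like (names and the provisioner are placeholders; the actual NFS provisioner is not specified here):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-storage                # created by the administrator
    provisioner: example.com/nfs       # placeholder for an NFS provisioner
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-app-data           # the user simply selects this PVC for their service
    spec:
      accessModes:
      - ReadWriteMany                  # NFS allows shared read/write across Pods
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 10Gi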

5.5.4 Tracing

5.5.5 Consul

Consul is used as the configuration center, and we provide a Golang client for it.

$ go get github.com/lattecake/consul-kv-client

 

It automatically synchronizes the Consul configuration directory into memory; you just read the configuration directly from memory.

5.5.6 Repository

  • Github: https://github.com/kplcloud/kplcloud
  • Document: https://docs.nsini.com
  • Demo: https://kplcloud.nsini.com

Author: Cong

Starter: long should TECHNOLOGY
