Trees and the Representation of Trees

Table of Contents

  • First, what is a tree

  • Second, searching
    • 2.1 Static search

        2.1.1 Method 1: sequential search

        2.1.2 Method 2: binary search (Binary Search)

  • Third, the decision tree of binary search

  • Fourth, the definition of a tree

  • Fifth, trees and non-trees

      5.1 Non-trees

      5.2 Trees

  • Sixth, some basic tree terminology

  • Seventh, representing trees

      7.1 List representation of a tree

      7.2 Child-sibling (son-brother) list representation

First, what is a tree

Many things in the objective world have a hierarchical structure:

    Human genealogy

    Social organizations

    Book information management

A human family tree, for example, is shown below:

Organizing data hierarchically, as above, lets us manage it more efficiently. So, for the most basic data-management operation, searching, how do we search efficiently?

Second, searching

Searching: given a key K, find the record whose key equals K in a collection R.

Static search: the collection of records is fixed; no insertions or deletions are performed, only searches.

Dynamic search: the collection of records is dynamic; besides searches, insertions and deletions may also occur (we do not consider dynamic search here).

2.1 Static search

2.1.1 Method 1: sequential search

/* C implementation */

int SequentialSearch (StaticTable *Tbl, ElementType K)
{   /* search Tbl->Element[1]..Tbl->Element[n] for the element with key K */
    int i;
    Tbl->Element[0] = K;  /* sentinel: if K is absent, the scan stops here and index 0 is returned */
    for (i = Tbl->Length; Tbl->Element[i] != K; i--)
        ;                 /* on success, i is the index of the match; on failure, i reaches 0 */
    return i;
}

The time complexity of sequential search is O(n).
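The same sentinel trick can be sketched in Python (a minimal sketch, not the article's own code; the list is treated as 1-indexed, with slot 0 reserved for the sentinel):

```python
def sequential_search(table, k):
    """Search table[1..n] for key k, using table[0] as a sentinel.

    Returns the index of k, or 0 if k is not present.
    """
    table[0] = k             # plant the sentinel so the loop needs no bounds check
    i = len(table) - 1
    while table[i] != k:     # guaranteed to terminate: at worst it stops at the sentinel
        i -= 1
    return i

data = [None, 34, 89, 5, 21, 56]    # slot 0 is reserved for the sentinel
print(sequential_search(data, 21))  # → 4
print(sequential_search(data, 99))  # → 0 (not found)
```

The sentinel removes the `i >= 1` test from the loop body, which is exactly why the C version above needs no explicit bounds check.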

2.1.2 Method 2: binary search (Binary Search)

Let the n data elements be ordered by key (for example, from small to large), i.e. \(k_1 < k_2 < \cdots < k_n\).

Example: suppose there are 13 data elements, stored in ascending order of key. The process of binary-searching for the data element with key 444 is shown below:

Using the same ordered table of 13 data elements, the process of binary-searching for the data element with key 43 is shown below:

/* C implementation */

int BinarySearch (StaticTable *Tbl, ElementType K)
{   /* search table Tbl for the data element with key K */
    int left, right, mid, NotFound = -1;
    left = 1;                 /* initial left boundary */
    right = Tbl->Length;      /* initial right boundary */
    while (left <= right) {
        mid = (left + right) / 2;                       /* index of the middle element */
        if (K < Tbl->Element[mid]) right = mid - 1;     /* shrink the right boundary */
        else if (K > Tbl->Element[mid]) left = mid + 1; /* shrink the left boundary */
        else return mid;      /* success: return the element's index */
    }
    return NotFound;          /* failure: return -1 */
}
# Python implementation

def binary_chop(alist, data):
    n = len(alist)
    first = 0
    last = n - 1
    while first <= last:
        mid = (first + last) // 2
        if alist[mid] > data:
            last = mid - 1
        elif alist[mid] < data:
            first = mid + 1
        else:
            return True   # found
    return False          # not found

The time complexity of binary search is O(log N).

Binary search solves the time-complexity problem of searching, but inserting and deleting data is still O(n). So, is there a data structure that reduces the complexity of searching while also reducing the complexity of insertion and deletion?

Third, the decision tree of binary search

Besides the two methods above, we can also search for a key using the binary search tree data structure.

As can be seen from the figure, if we need to find the number 8, it can be done in the following four steps (this may not be clear yet; it will be later):

    The root node 6 is smaller than 8, so go right from 6 and find 9

    Node 9 is greater than 8, so go left from 9 and find 7

    Node 7 is less than 8, so go right from 7

    Found 8
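The four steps above are exactly a binary-search-tree lookup; a minimal sketch in Python (class and function names are my own, node values taken from the example):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bst_search(node, key):
    """Walk down from the root: go right if key is larger, left if smaller."""
    while node is not None:
        if key > node.value:
            node = node.right
        elif key < node.value:
            node = node.left
        else:
            return node      # found
    return None              # not found

# the tree from the example: root 6, right child 9, 9's left child 7, 7's right child 8
root = Node(6, right=Node(9, left=Node(7, right=Node(8))))
print(bst_search(root, 8).value)   # → 8
print(bst_search(root, 5))         # → None
```

Each comparison descends one level, so the number of comparisons is bounded by the depth of the tree, which motivates the decision-tree analysis below.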

    In the decision tree, the number of comparisons needed to find a node equals the level the node is on;

    The number of comparisons for a successful search does not exceed the depth of the decision tree;

    The depth of a decision tree with n nodes is \([\log_2{n}] + 1\);

  • \(ASL = (4*4+4*3+2*2+1)/11 = 3\)
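The ASL figure above can be checked mechanically; a small sketch (the level/node counts are taken from the 11-node example in the formula, and the function name is my own):

```python
def average_search_length(levels):
    """levels maps tree level -> number of nodes on that level.

    A successful search for a node on level d costs d comparisons,
    so ASL is the level-weighted node count divided by the total node count.
    """
    total_nodes = sum(levels.values())
    total_cost = sum(level * count for level, count in levels.items())
    return total_cost / total_nodes

# 11-node decision tree: 1 node on level 1, 2 on level 2, 4 on level 3, 4 on level 4
print(average_search_length({1: 1, 2: 2, 3: 4, 4: 4}))  # → 3.0
```

This reproduces \((1 + 4 + 12 + 16)/11 = 33/11 = 3\).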

Fourth, the definition of a tree

Tree (Tree): a finite set of \(n\ (n \geq 0)\) nodes.

    When n = 0, it is called an empty tree.

  • For any non-empty tree (n > 0):

      The tree has a special node called the root (Root), denoted by r.

      The remaining nodes can be divided into m (m > 0) disjoint finite sets \(T_1, T_2, \cdots, T_m\), where each set is itself a tree, called a subtree (SubTree) of the original tree.
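This recursive definition maps directly onto a recursive data structure; a minimal sketch (class and field names are my own, not from the article):

```python
class Tree:
    """A root node together with its subtrees: the recursive definition above."""
    def __init__(self, root_value, subtrees=None):
        self.root_value = root_value     # the root r
        self.subtrees = subtrees or []   # disjoint subtrees T_1 .. T_m

    def node_count(self):
        # 1 for the root, plus the nodes of every subtree
        return 1 + sum(t.node_count() for t in self.subtrees)

# a 4-node tree: root A with subtrees rooted at B (which has child D) and C
t = Tree('A', [Tree('B', [Tree('D')]), Tree('C')])
print(t.node_count())   # → 4
```

Because the definition is recursive, most tree algorithms (counting, depth, traversal) fall out as one-line recursions over `subtrees`.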

Fifth, trees and non-trees

Bear in mind that a tree has the following three characteristics:

    The subtrees are disjoint;

    Every node except the root has exactly one parent node;

    A tree with N nodes has N-1 edges.

5.1 Non-trees

5.2 Trees

Sixth, some basic tree terminology

    Degree of a node (Degree): the number of subtrees of the node

    Degree of a tree: the maximum degree among all nodes in the tree

    Leaf node (Leaf): a node of degree 0

    Parent node (Parent): a node that has subtrees is the parent of the roots of those subtrees

    Child node (Child): if node A is the parent of node B, then node B is a child of node A; child nodes are also called son nodes

    Sibling (Sibling): nodes that share the same parent are siblings of one another

    Path and path length: the path from node \(n_1\) to \(n_k\) is a sequence of nodes \(n_1, n_2, \cdots, n_k\) in which \(n_i\) is the parent of \(n_{i+1}\); the number of edges the path contains is its length

    Ancestor: all nodes along the path from the root to a node are ancestors of that node

    Descendant: all nodes in the subtrees of a node are descendants of that node

    Level of a node: the root is defined to be on level 1; the level of any other node is its parent's level plus 1

    Depth of the tree: the maximum level among all nodes in the tree is the depth of the tree
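Several of these terms can be computed directly from a node's list of subtrees; a minimal sketch (the dictionary layout and function names are my own):

```python
def degree(node):
    """Degree of a node: the number of its subtrees."""
    return len(node["children"])

def tree_degree(node):
    """Degree of the tree: the maximum degree over all nodes."""
    return max([degree(node)] + [tree_degree(c) for c in node["children"]])

def depth(node):
    """Depth of the tree: the root is on level 1; depth is the maximum level."""
    return 1 + max((depth(c) for c in node["children"]), default=0)

# A has children B and C; B has child D
tree = {"value": "A", "children": [
    {"value": "B", "children": [{"value": "D", "children": []}]},
    {"value": "C", "children": []},
]}
print(tree_degree(tree))  # → 2  (node A has two subtrees)
print(depth(tree))        # → 3  (A on level 1, B on level 2, D on level 3)
```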

Seventh, representing trees

7.1 List representation of a tree

The list representation shown above has a serious drawback: if the depth of the tree is large and we cannot guarantee that every node has exactly three children, a great deal of space is wasted.

7.2 Child-sibling (son-brother) list representation

To avoid the wasted space of the ordinary list representation, we can give each node two link pointers: one linking to its first son (child) node, and the other linking to its next sibling (brother) node, as shown below:

The representation above is already quite good, but if we rotate the child-sibling representation by 45°, we find the following:

After rotating by 45°, we obtain a binary tree (a tree in which each node has at most two child nodes). In other words, the most common kind of tree can be represented as a binary tree, so once we have thoroughly studied binary trees, we have thoroughly studied trees.
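The child-sibling representation can be sketched with two pointers per node (a minimal sketch; class and field names are my own). Reading `first_child` as "left" and `next_sibling` as "right" is exactly the 45° rotation into a binary tree:

```python
class CSNode:
    """Child-sibling representation: each node keeps exactly two links."""
    def __init__(self, value):
        self.value = value
        self.first_child = None    # link to the first son     (binary-tree 'left')
        self.next_sibling = None   # link to the next brother  (binary-tree 'right')

# build node A with children B, C, D
a, b, c, d = CSNode("A"), CSNode("B"), CSNode("C"), CSNode("D")
a.first_child = b
b.next_sibling = c
c.next_sibling = d

def children(node):
    """Walk one first_child link, then next_sibling links, to list the children."""
    out, child = [], node.first_child
    while child:
        out.append(child.value)
        child = child.next_sibling
    return out

print(children(a))   # → ['B', 'C', 'D']
```

Note that every node uses exactly two pointers regardless of how many children it has, which is what removes the wasted space of the fixed-width list representation.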


Java Microservices (2): Building Service Consumers and Providers

The previous article, "Java Microservices (1): using the dubbo-admin console," described installing the Docker and ZooKeeper environments and demonstrated the dubbo-admin console with reference to the Dubbo official site. That article set up the ZooKeeper service registry; this article mainly builds the service consumer and the service provider. Following microservice principles, the demo is divided into three parts: the service interface, the service provider, and the service consumer.

Service interface: defines all the interfaces the system needs.

Service provider: mainly implements the interfaces.

Service consumer: uses the interfaces.

1. Dubbo Introduction





Role         Description
Provider     the service provider, which exposes services
Consumer     the service consumer, which calls remote services
Registry     the registry, for service registration and discovery
Monitor      the monitoring center, which counts service calls and call times
Container    the container in which services run


The Dubbo architecture has the following characteristics: connectivity, robustness, flexibility, and upgradeability to future architectures.

Description of the call relationships

    The service container is responsible for starting, loading, and running the service provider.

    When the service provider starts, it registers the services it provides with the registry.

    When the service consumer starts, it subscribes to the services it needs from the registry.

    The registry returns the list of service-provider addresses to the consumer; if the list changes, the registry pushes the changed data to the consumer over a long-lived connection.

    From the provider address list, the service consumer selects one provider by a soft load-balancing algorithm and calls it; if the call fails, it selects another provider to call.

    Service consumers and providers accumulate call counts and call times in memory, and send the statistics to the monitoring center once a minute.


For a more detailed description, please refer to the official site: http: //

2. Service Interface

Create a jar project with IDEA; for the project-creation process, refer to "Spring Boot Getting Started (1): quickly building a project with Spring Boot." The purpose of this project is simply to define interfaces, so a jar project is created directly; it is not a Maven project here. Once it is created, add a new interface. The following is the interface I created:


The UserService code is as follows:

package;

public interface UserService {
    String sayHi();
}

After creating the interface, install it into the local repository so that the service consumer and service provider can use it.

Run mvn clean install directly in the Terminal, or click install under Lifecycle directly; the following page indicates a successful installation:



3. Service Provider

The service provider mainly implements the interface. Create a Maven project in the same way; after creation, the project directory looks as follows:


UserServiceImpl implements the interface as follows:

package;

import;
import;
import;
import;
import org.springframework.beans.factory.annotation.Value;

/**
 * @ClassName UserServiceImpl
 * @Description TODO
 * @Author DZ
 * @Date 2019/8/31 11:20
 **/
@Service(version = "${user.service.version}")
public class UserServiceImpl implements UserService {

    @Value("${dubbo.protocol.port}")
    private String port;

    /*@HystrixCommand(commandProperties = {
            @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "10"),
            @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000")
    })*/
    @Override
    public String sayHi() {
        return "Say Hello, i am from " + port;
    }
}

The @HystrixCommand annotation, a circuit breaker, will be discussed later; it is commented out for now.

The yml configuration is as follows:

spring:
  application:
    name: hello-dubbo-service-user-provider

user:
  service:
    version: 1.0.0

dubbo:
  scan:
    basePackages:
  application:
    id: hello-dubbo-service-user-provider
    name: hello-dubbo-service-user-provider
    qos-port: 22222
    qos-enable: true
  protocol:
    id: dubbo
    name: dubbo
    port: 12346
    status: server
    serialization: kryo # fast serialization
    # optimizer:
  registry:
    id: zookeeper
    address: zookeeper://,
  provider:
    loadbalance: roundrobin # load balancing

management:
  endpoint:
    dubbo:
      enable: true
    dubbo-shutdown:
      enabled: true
    dubbo-configs:
      enabled: true
    dubbo-services:
      enabled: true
    dubbo-references:
      enabled: true
    dubbo-properties:
      enabled: true
  health:
    dubbo:
      status:
        defaults: memory
        extras: load,threadpool


The pom file is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" xmlns:xsi=""
         xsi:schemaLocation="">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.7.RELEASE</version>
        <relativePath/>
    </parent>
    <groupId>com.edu</groupId>
    <artifactId>hello-dubbo-service-user-provider</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <name>hello-dubbo-service-user-provider</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId></groupId>
            <artifactId>dubbo-spring-boot-starter</artifactId>
            <version>0.2.0</version>
        </dependency>
        <dependency>
            <groupId>com.edu</groupId>
            <artifactId>hello-dubbo-service-user-api</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>de.javakaffee</groupId>
            <artifactId>kryo-serializers</artifactId>
            <version>0.42</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
            <version>2.0.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-hystrix-dashboard</artifactId>
            <version>2.0.1.RELEASE</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>


This article's pom and yml files are mainly based on the service-provider pom dependencies from the official site; for details, see: https: // -spring-boot-samples / dubbo-registry-zookeeper-samples.



Note the basePackages configuration

4. Service Consumer

Create the service consumer in the same way as the provider; its configuration files are also similar to the provider's. The code is attached directly:



package;

import;
import;
import;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

/**
 * @ClassName UserController
 * @Description TODO
 * @Author DZ
 * @Date 2019/8/31 18:37
 **/
@RestController
public class UserController {

    @Reference(version = "${user.service.version}")
    private UserService userService;

    @HystrixCommand(fallbackMethod = "sayHiError")
    @RequestMapping(value = "hi", method = RequestMethod.GET)
    public String sayHi() {
        return userService.sayHi();
    }

    public String sayHiError() {
        return "Hystrix fallback";
    }
}



The yml and pom files are broadly similar to the provider's:

spring:
  application:
    name: hello-dubbo-service-user-consumer

user:
  service:
    version: 1.0.0

dubbo:
  scan:
    basePackages:
  application:
    id: hello-dubbo-service-user-consumer
    name: hello-dubbo-service-user-consumer
    qos-port: 22223
    qos-enable: true
  protocol:
    id: dubbo
    name: dubbo
    port: 12345
    #status: server
    serialization: kryo
  registry:
    id: zookeeper
    address: zookeeper://,

management:
  endpoint:
    dubbo:
      enable: true
    dubbo-shutdown:
      enabled: true
    dubbo-configs:
      enabled: true
    dubbo-services:
      enabled: true
    dubbo-references:
      enabled: true
    dubbo-properties:
      enabled: true
  health:
    dubbo:
      status:
        defaults: memory
        extras: load,threadpool
server:
  port: 9090


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" xmlns:xsi=""
         xsi:schemaLocation="">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.7.RELEASE</version>
        <relativePath/>
    </parent>
    <groupId>com.edu</groupId>
    <artifactId>hello-dubbo-service-user-consumer</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <name>hello-dubbo-service-user-consumer</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId></groupId>
            <artifactId>dubbo-spring-boot-starter</artifactId>
            <version>0.2.0</version>
        </dependency>
        <dependency>
            <groupId>com.edu</groupId>
            <artifactId>hello-dubbo-service-user-api</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>de.javakaffee</groupId>
            <artifactId>kryo-serializers</artifactId>
            <version>0.42</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
            <version>2.0.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-hystrix-dashboard</artifactId>
            <version>2.0.1.RELEASE</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <mainClass></mainClass>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>


The code above includes service circuit-breaking and load balancing; the circuit breaker will be discussed in detail later.


5. Results

After starting the service consumer and the service provider respectively, a successful start looks as follows:

Visit http://localhost:9090/hi

At the same time, we can start the dubbo-admin console to view the services; watch out for port conflicts:



“Ansible Automation for Operations: Techniques and Best Practices,” Chapter 1 study notes

Ansible architecture and features

Chapter 1 mainly covers Ansible's architecture and characteristics, including the following:

    Ansible software

    Ansible architectural patterns

    Ansible characteristics

Ansible software

Ansible is an orchestration engine that can handle configuration management, workflow control, resource deployment, and more. It is based on the Python language and built on two key modules, Paramiko and PyYAML.

Ansible Applications

    Configuration management

    Rapid service provisioning

    Application deployment

    Workflow orchestration

    Monitoring and alerting


Ansible architectural patterns

Ansible usually operates in a model of a control machine managing managed machines. The control machine is the server or workstation on which the Ansible tool is installed and from which maintenance commands are executed; it is the core of Ansible operations. The managed machines are the servers running business services, managed by the control machine over SSH.

Ansible management

Ansible is a model-driven configuration manager that supports multi-node distribution and remote task execution. By default it uses SSH for remote connections. No additional software needs to be installed on the managed nodes, and it can be extended in various programming languages.

An Ansible management system consists of a control host and a group of managed nodes. The control host controls the managed nodes directly over SSH, and the managed nodes are grouped and managed through Ansible's host inventory.

Example: using an Ansible playbook to configure three Ubuntu servers to run the Nginx service

The Ansible playbook webservers.yml contains the hosts involved and the task list (tasks) to be executed on those hosts in order.

hosts includes web1, web2, and web3.

The task list includes the following steps:

    Install Nginx

    Create the Nginx configuration file (/etc/nginx/nginx.conf)

    Copy the security certificates over SSH and, when the configuration file changes, restart the Nginx service

    Ensure the Nginx service is running

Running ansible-playbook webservers.yml on the control host, Ansible connects over SSH to web1, web2, and web3 in parallel and installs, configures, and runs the Nginx service.

Ansible System Architecture

    Core engine: Ansible itself.

    Core modules (core modules): the modules Ansible pushes to remote nodes to execute particular tasks or match particular states.

    Custom modules (custom modules)

    Plugins (plugins): supplement the modules' functionality; logging, e-mail, and other functions are completed by means of plugins.

    Playbook: the configuration file in which Ansible tasks are defined. Multiple tasks can be defined in one playbook, and Ansible executes them automatically on the hosts under its control, managing multiple remote hosts at once.

    Connection plugins (connection plugins): the plugins Ansible uses to connect to each host and communicate with the managed nodes; they are needed because connection methods other than SSH are also supported.

    Host inventory (host inventory): defines the hosts that Ansible manages.

Ansible uses the Paramiko library and connects to hosts via SSH or ZeroMQ. The Ansible control host pushes modules to the managed nodes over SSH, executes them, and then deletes them, fully automatically. Three connection types are supported between the control machine and the managed nodes: local, SSH, and ZeroMQ. SSH is the default; at large scale, ZeroMQ connections execute faster.

Task execution mode

The ways in which the Ansible control host operates on managed nodes fall into two categories: ad-hoc and playbook.

    ad-hoc mode uses a single module and supports batch execution of a single command.

    playbook mode is Ansible's primary management mode; a playbook assembles multiple tasks into a complete unit of functionality. (A playbook can be understood as a configuration file that combines multiple ad-hoc operations.)

Ansible characteristics

Ansible is an automation tool designed for consistency, security, high reliability, and light weight. Powerful, easy to deploy, and clearly described, it is a good solution to unified configuration, unified deployment, orchestration, and other management problems in complex IT automation.

Ansible Features

    Simple, easy-to-read syntax

    No client software needs to be installed on managed nodes

    Push-based (Push) model

    Convenient for managing small-scale scenarios

    A large number of built-in modules

    A very lightweight abstraction layer

Comparison of Ansible with other configuration management tools

                                          Puppet                  SaltStack        Ansible
Development language                      Ruby                    Python           Python
Client required                           yes                     yes              no
Supports secondary development            no                      yes              yes
Server-to-remote communication protocol   standard SSL protocol   AES encryption   OpenSSH
Configuration file format                 Ruby syntax             YAML             YAML

Compared with other automation tools, Ansible can manage and configure machines easily without installing any client.

Summary

Ansible's key idea is to treat computers as groups rather than as separate machines, i.e. "multi-tier orchestration" thinking. It avoids certificate exchange, as well as the problems of reverse DNS lookup and NTP. Its YAML configuration file format is easy to use.


A Data Warehouse Report Stress Test (1)

1. Background

I recently received a data warehouse report POC stress-testing task (why a vendor test is also called a POC is a little funny). This post records the problems encountered during testing and my analysis of them.

2. Test environment architecture diagram

Load-generation path: LR simulates business users >> BI reporting system >> PostgreSQL cluster

3. Issues and Analysis

Copying files from the PostgreSQL cluster nodes

The four servers of the PostgreSQL cluster are managed through a unified management node (the load generator cannot connect to them directly), and the nmon result files are stored on the monitored target servers. I used xshell to jump from the management node to the PostgreSQL nodes (ftp is not installed), while keeping an xftp window open on the management node to transfer files.

Workaround: use the scp command

scp nmon [email protected]: nmon (run on the management node: copy the nmon file to the specified user's directory on the target server)

scp [email protected]: baobiao1_10vu.nmon /home/admin/baobiao1_10vu_111.nmon (copy the nmon result file from the remote host to the current user's directory and rename it)

GC problems encountered during the stress test

During a single-transaction load test, garbage collection caused a stop-the-world (STW) pause. First, look at the resource-consumption graph of the xxBI server:

While the scenario ran, about nine full GCs occurred; during GC the CPU dipped and logical disk reads multiplied several times. After stopping the scenario and re-running it, the xxBI server resource graph looked like this:

Then look at the TPS trend in LR:

The report-query transactions in the action were not executed at all.

I made several attempts with different reports and the problem persisted, so what caused it?

    My first feeling was GC, for example an unsuitable garbage collector. For large heaps the G1 collector is usually recommended (exactly why G1 I will cover in a dedicated post on GC later); the common pairing is ParNew + CMS. Imagine what happens at a full GC: the total size of new-generation objects plus old-generation garbage is very large, which produces a very long STW pause.

    If the xxBI system used G1, then after a full GC occurred, re-running the scenario should not yield zero TPS. More likely the GC caused a cache miss. Our load script used anonymous login (the load generator's IP was added to xxBI's whitelist, so accessing reports requires no login), and I suspected this function had been temporarily disabled. I tried logging in to the xxBI system from a browser as a normal user and querying a report: the query worked normally. After exiting the system, Task Manager showed the CPU at about 30% even though no load test was running. Why? My guess is that this operation triggered the cache. Once the CPU came down, a retest showed a normal TPS curve, and after a long run a full GC appeared again, and then......

In the end, the xxBI vendor's engineers were needed to troubleshoot the problem. In fact, on the day the phenomenon first appeared, the people in charge of the PostgreSQL cluster had adjusted memory-related parameters; after the person in charge reverted the parameters, retesting still showed the problem.


Publishing a Spring Boot Project to Kubernetes with KubeSphere CI/CD

This example demonstrates how to create a pipeline from a Jenkinsfile in a GitHub repository on the open-source KubeSphere container platform. The pipeline consists of eight stages and finally deploys a Hello World page to different namespaces of a Kubernetes cluster.

Pipeline Overview

The flow chart below shows the complete workflow of the pipeline:

Flow description:

    Stage 1, Checkout SCM: pull the repository code from GitHub.

    Stage 2, Unit test: run the unit tests; continue with the following tasks only if the tests pass.

    Stage 3, SonarQube analysis: SonarQube checks the code quality.

    Stage 4, Build & push snapshot image: build an image according to the behavior strategy of the selected branches, tag it SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER, and push it to Harbor ($BUILD_NUMBER is the run number in the pipeline's activity list).

    Stage 5, Push latest image: tag the master branch as latest and push it to DockerHub.

    Stage 6, Deploy to dev: deploy the master branch to the Dev environment; this stage requires review.

    Stage 7, Push with tag: generate a tag, release it to GitHub, and push it to DockerHub.

    Stage 8, Deploy to production: deploy the released tag to the Production environment.

Creating credentials

Log in to KubeSphere as project-regular, enter the already-created devops-demo project, and start creating credentials.

1. The Jenkinsfile in this example's code repository needs three credentials in total: DockerHub, GitHub, and kubeconfig (kubeconfig is used to access a running Kubernetes cluster).

2. Then create a Java-Token and copy it.

3. Finally, enter the devops-demo DevOps project in KubeSphere and, similarly to the steps above, click Create under Credentials, create a credential of type secret text with credential ID sonar-token, paste the token copied in the previous step into the key field, and click OK when finished.

So far, four credentials have been created. The next step is to modify the four corresponding credential IDs in the example repository's Jenkinsfile to your own credential IDs.

Modify Jenkinsfile

Step 1: fork the project

Log in to GitHub and fork the GitHub repository devops-java-sample used in this example to your personal GitHub account.

Step 2: modify the Jenkinsfile

1. After forking to your personal GitHub account, open Jenkinsfile-online in the root directory.

2. Click the edit icon in the GitHub UI and modify the following environment variable (environment) values.

Modify Item                       Description

(DockerHub credential ID)         fill in the DockerHub credential ID created in the credentials step, used to access your DockerHub

(GitHub credential ID)            fill in the GitHub credential ID created in the credentials step, used to push tags to the GitHub repository

(kubeconfig credential ID)        the kubeconfig credential ID, used to access the running Kubernetes cluster

(image registry)                  the default is the domain name used for pushing images

DOCKERHUB_NAMESPACE               replace with your DockerHub account name (it can also be an Organization name under the account), e.g. your-dockerhub-account

GITHUB_ACCOUNT                    replace with your GitHub account name, e.g. kubesphere (it can also be an Organization name under the account), e.g. your-github-account

APP_NAME                          the application name, e.g. devops-java-sample

(SonarQube token credential ID)   fill in the SonarQube token credential ID created in the credentials step, used for code quality analysis

Note: the -o parameter in the Jenkinsfile's mvn commands turns on offline mode. This example downloads the dependencies in advance and enables offline mode by default, to adapt to network environments with interference and avoid overly long dependency downloads.

3. After modifying the environment variables above, click Commit changes to commit the update to the master branch.

Creating the projects

The CI/CD pipeline will deploy the sample, based on the yaml template files in the sample project, to two environments: the Dev and Production projects (Namespaces), namely kubesphere-sample-dev and kubesphere-sample-prod. These two projects need to be created in the console in advance; refer to the following steps.

Step 1: Create the first project

1. Log in to KubeSphere with the project manager account project-admin. In the workspace created earlier (demo-workspace), click Project → Create to create a resource project to serve as this example's development environment. Fill in the project's basic information and click Next.

    Name: fixed as kubesphere-sample-dev; if you need a different project name, also modify the namespace in the yaml template files

    Alias: can be customized, e.g. development environment

    Description: a brief introduction to the project, to help users understand it

2. This example sets no resource requests or limits, and the advanced settings can keep their default values. Click Create and the project is created.

Step 2: Invite members

After the first project is created, the project manager project-admin needs to invite the ordinary user project-regular into the kubesphere-sample-dev project. Go to "Project Settings" → "Project Members", click "Invite Members", select project-regular, and grant it the operator role.

Step 3: Create the second project

Same as above: referring to the two steps above, create a project named kubesphere-sample-prod to serve as the production environment, invite the ordinary user project-regular into kubesphere-sample-prod, and grant it the operator role.

Note: after the CI/CD pipeline runs successfully, you will see the deployments (Deployment) and services (Service) it creates in the kubesphere-sample-dev and kubesphere-sample-prod projects.

Creating the pipeline

Step 1: Basic information

1. Enter the DevOps project created earlier, select Pipelines in the left menu bar, and click Create.

2. In the pop-up window, enter the pipeline's basic information.

    Name: a clear and concise name for the pipeline, easy to understand and search

    Description: a brief introduction to the pipeline's purpose, to help understand its role

    Code repository: click to select a code repository; the repository must contain a Jenkinsfile

Step 2: Add a repository

1. Click to add a code repository; GitHub is used as the example here.

2. In the pop-up, click Get Token.

3. On the GitHub access-token page, fill in the Token description with a brief note such as DevOps demo, leave Select scopes unchanged, and click Generate token. GitHub generates a token, a string of letters and digits used to access the GitHub repos under the current account.

4. Copy the generated token, paste it into the Token box in KubeSphere, and click Save.

5. Once authentication passes, the right side lists all code repositories associated with the user's token. Select a repository containing a Jenkinsfile, e.g. the prepared example repository devops-java-sample, click to select it, and then click Next.

Step 3: Advanced settings

After completing the code repository settings, you reach the Advanced Settings page. Advanced settings support customizing build records, behavior strategies, periodic scanning, and so on. Below is a brief explanation of the settings used here.

1. In build settings, check Discard old builds. Both options, days to keep branches and maximum number of branches, keep the default value of -1 here.

The two options take effect on branches simultaneously: a branch is discarded as soon as it violates either the days-to-keep setting or the maximum-number setting. For example, with days to keep set to 2 and maximum number set to 3, a branch is discarded once it has been retained for more than 2 days or the number of retained branches exceeds 3. A value of -1 for both options means branches are never deleted automatically.

Discarding old branches determines when a branch's build records are discarded. Build records include console output, archived artifacts, and other metadata related to the branch. Keeping fewer branches saves the disk space Jenkins uses. Two options determine when old branches are discarded:

    Days to keep branches: a branch is discarded after it has existed for this many days.

    Maximum number of branches: when more than this many branches exist, the oldest branches are discarded.
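The combined discard rule can be sketched as a small predicate (a minimal illustration in Python, not actual Jenkins or KubeSphere code; the parameter names are paraphrased from the options above):

```python
def should_discard(branch_age_days, branch_rank, days_to_keep=-1, max_branches=-1):
    """Return True if a branch should be discarded.

    A value of -1 means "no limit" for that option; a branch is
    discarded as soon as it violates either configured limit.
    """
    too_old = days_to_keep != -1 and branch_age_days > days_to_keep
    too_many = max_branches != -1 and branch_rank > max_branches
    return too_old or too_many

# With days to keep = 2 and maximum number = 3 (the example in the text):
print(should_discard(branch_age_days=3, branch_rank=1, days_to_keep=2, max_branches=3))  # True: older than 2 days
print(should_discard(branch_age_days=1, branch_rank=4, days_to_keep=2, max_branches=3))  # True: 4th-oldest branch
print(should_discard(branch_age_days=1, branch_rank=1))  # False: both defaults are -1
```

Either condition alone is enough to discard a branch, which matches the "does not meet any one of the settings" behavior described above.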

2. For behavior strategies, KubeSphere adds three strategies by default. Since this example does not use the strategy of discovering PRs from forked repositories, you can delete that strategy by clicking the delete button on its right.

Three types of discovery strategy are supported. Note that when Jenkins triggers the pipeline, a PR (Pull Request) submitted by a developer is also treated as a separate branch.

Discover branches:

    Exclude branches that are also filed as PRs: the source branches of PRs (e.g. origin's master branch), i.e. branches to be merged, are not scanned by CI

    Only branches that are filed as PRs: scan only PR branches

    All branches: pull all branches from the repository (origin)

Discover PRs from the origin repository:

    The version of the PR merged with the target branch: one discovery; create and run the pipeline on the source code produced by merging the PR with the target branch

    The PR's own source code version: one discovery; create and run the pipeline on the PR's own source code

    Two pipelines when a PR is discovered: two discoveries, creating two pipelines; one pipeline uses the PR's own source code version, the other uses the source code produced by merging the PR with the target branch

3. The default script path is Jenkinsfile; change it to Jenkinsfile-online.

Note: the path is the Jenkinsfile's path inside the code repository; here it is in the repository's root directory. If you move the file, change the script path accordingly.

4. If Scan Repo Trigger is not checked, check periodic scanning and set the scan interval as you prefer; this example uses 5 minutes.

Note: periodic scanning makes the pipeline scan the remote repository on a schedule and, according to the behavior strategy, check whether the code has been updated or new PRs exist.

Webhook Push:

A webhook is an efficient way for the pipeline to detect changes in the remote repository and automatically trigger a new run. Jenkins's automatic scan triggering for GitHub and Git (e.g. GitLab) should be based primarily on webhooks, with the periodic scanning configured in KubeSphere above as a supplement. In this example the pipeline can be run manually; to trigger scans automatically on remote branch events, see Set automatic trigger scanning – GitHub SCM.
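As a sketch of how a webhook receiver trusts a delivery: GitHub signs each webhook payload with an HMAC of the request body, sent in the X-Hub-Signature-256 header as sha256=&lt;hexdigest&gt;, and the receiver recomputes it with the shared secret. A minimal, framework-free verification helper might look like this (the secret and payload below are made up for illustration):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook's X-Hub-Signature-256 header against the body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)

secret = b"my-webhook-secret"            # hypothetical shared secret
body = b'{"ref": "refs/heads/master"}'   # hypothetical delivery payload
good = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_github_signature(secret, body, good))          # True
print(verify_github_signature(secret, body, "sha256=bad"))  # False
```

Only deliveries that pass this check should be allowed to trigger a scan; everything else is dropped.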

After completing the advanced settings, click Create.

Step 4: Run the pipeline

After the pipeline is created, click the browser's refresh button; you will see a run record triggered automatically from the remote branch.

1. Click Run on the right. The system scans the code repository's branches according to the behavior strategy set earlier. In the pop-up, select the master branch to build; the system loads the Jenkinsfile-online of that branch (the default script path is Jenkinsfile in the root directory).

2. Since TAG_NAME: defaultValue has no default value in the repository's Jenkinsfile-online, enter a tag number for TAG_NAME here, e.g. v0.0.1.

3. Click OK; a new pipeline activity is generated and starts running.

Note: TAG_NAME is used to generate a release in GitHub and an image tag in DockerHub. A TAG_NAME that duplicates an existing tag in the code repository must not be used to run the pipeline again; doing so will cause the run to fail.

So far, the pipeline has finished creating and running.

Note: click Branch to switch to the branch list and see which branches the pipeline runs against; which branches appear here depends on the branch discovery strategy configured earlier.

Step 5: Review the pipeline

For ease of demonstration, the current account reviews the pipeline here. When execution reaches an input step, the pipeline pauses and waits; you must click Continue manually for it to proceed. Note that Jenkinsfile-online defines three stages for deploying to the Dev environment, pushing the tag, and deploying to the Production environment, so the pipeline requires three reviews in turn at the deploy to dev, push with tag, and deploy to production stages. If you do not review, or click Terminate, the pipeline will not continue.

Note: in real development and production scenarios, a reviewer with higher privileges, such as an operations engineer or administrator, may be required to review the pipeline and the image, deciding whether the code or image may be pushed to the repository or registry, and whether to deploy to the development or production environment. The input step in a Jenkinsfile supports specifying the users who review the pipeline. For example, to specify the user project-admin as reviewer, add a submitter field to the input function in the Jenkinsfile; multiple users are separated by commas, as follows:

input(id: 'release-image-with-tag', message: 'release image with tag?', submitter: 'project-admin,project-admin1')

Viewing the pipeline

1. Click the serial number of the currently running activity in the pipeline list. The page shows the running status of each step in the pipeline. Note that just after creation the pipeline is initializing and may show only the log window; after initialization completes (about a minute) the pipeline view appears. The black boxes mark the stage names; the example pipeline has eight stages in total, as defined in Jenkinsfile-online.

2. Click View Log at the top right of the page to see the pipeline's run logs. The page shows each step's log, running status, and timing; click a stage on the left to expand its detailed log. Logs can be downloaded locally, which makes it easier to analyze and locate problems when errors occur.

Verifying the results

1. If the pipeline runs successfully, click the quality tag of the pipeline to view the code quality results reported by SonarQube, as shown below (for reference only).

2. The Docker image built by the pipeline is also pushed to DockerHub, as configured in Jenkinsfile-online. Log in to DockerHub to view the push results: images tagged snapshot, TAG_NAME (master-1), and latest have been pushed to DockerHub, and a new tag and release have been generated in GitHub. The Hello World example page is eventually deployed, as deployments and services, to the kubesphere-sample-dev and kubesphere-sample-prod project environments in KubeSphere.



Environment | Project (Namespace) | Deployment | Service | Access address

Dev | kubesphere-sample-dev | ks-sample-dev | ks-sample-dev | http://{$Virtual IP}:{$8080} or http://{$internal/public IP}:{$30861}

Production | kubesphere-sample-prod | ks-sample | ks-sample | http://{$Virtual IP}:{$8080} or http://{$internal/public IP}:{$30961}

3. Return to the project list in KubeSphere and view the deployments and services in the two projects created earlier. For example, view the deployment in the kubesphere-sample-prod project as follows.

Enter the project and click Workloads → Deployments in the left menu bar; ks-sample has been created successfully. Under normal circumstances the deployment's status should show Running.

4. Select Network &amp; Services → Services in the menu bar to view the corresponding service. You can see the service's Virtual IP and that the externally exposed node port (NodePort) is 30961.

View Service

5. Check the image pushed to your personal DockerHub: devops-java-sample is the value of APP_NAME, and its tags are the tags defined in jenkinsfile-online.

6. Click Releases to view the v0.0.1 tag and release in your personal forked GitHub repo; they were generated by the push with tag stage in the jenkinsfile.

Accessing the example service

If your network environment allows access to the deployed HelloWorld example service, you can verify it by logging in to a cluster node via SSH, or by logging in to KubeSphere with the cluster administrator account and running the following commands in the web kubectl. The Cluster IP and node port (NodePort) can be found on the corresponding project's service page:

Verify the Dev environment sample service

# curl {$Virtual IP}:{$Port} or curl {$Internal IP}:{$NodePort}

Verify the Production environment sample service

# curl {$Virtual IP}:{$Port} or curl {$Internal IP}:{$NodePort}

KubeSphere is an open-source, application-centric container management platform that can be deployed on any infrastructure and provides an easy-to-use UI. It greatly reduces the complexity of daily development, testing, and operations, addressing pain points inherent in Kubernetes around storage, networking, security, and usability. It helps enterprises handle business scenarios such as agile development with automated monitoring and operations, end-to-end application delivery, microservice governance, multi-tenant management, multi-cluster management, service and network management, image registries, AI platforms, and edge computing.


Java security: a practical analysis of CSRF protection

Cross-Site Request Forgery (CSRF) is a class of attacks in which a malicious website exploits the login credentials a user has already obtained from a trusted site: it rides on those credentials to bypass user verification and sends unauthorized cross-site requests to the attacked site, making the user perform actions there without intending to. Below is a summary of CSRF attacks and some common protections.

The definition above is abstract, so let's explain a CSRF attack in detail with a simple example.

Suppose you log in to your bank's website on your computer to make a transfer. The transfer page is generally a form; clicking Transfer actually submits the form, sending an HTTP request to the backend roughly like this:

POST /transfer HTTP/1.1
Cookie: JSESSIONID=randomid; Secure; HttpOnly
Content-Type: application/x-www-form-urlencoded


Now suppose the transfer is done, but instead of logging out of the bank's website you immediately go browse other pages. You happen to see an eye-catching ad (say, "relaxing part-time work from home, tens of thousands a month"), click it, find nothing there, and close the page thinking nothing happened. In the background, however, a series of things may have occurred: if it is a phishing site, the page you clicked may contain another form just like the one before, as follows:

<!-- the hidden field names and values here are illustrative -->
<form action="" method="post">
  <input type="hidden" name="amount" value="100"/>
  <input type="hidden" name="routingNumber" value="attackersRoutingNumber"/>
  <input type="hidden" name="account" value="attackersAccountNumber"/>
  <input type="submit" value="Win Money!"/>
</form>

As soon as you open that page, the form is submitted automatically (this can be done with JavaScript), transferring 100 yuan to a stranger's account without your authorization. That is a CSRF attack: the attacker does not know your login information, but uses the browser's own mechanism of attaching cookies automatically to impersonate the user, bypass verification, and attack the backend.
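The mechanics can be simulated in a few lines of framework-free Python: the "bank" authenticates by session cookie only, and since the browser attaches cookies to every request to the bank's domain, a request originating from the attacker's page is indistinguishable from a legitimate one (everything here is a toy model, not real banking or browser code):

```python
# Toy model: the server knows only that the session cookie is valid.
SESSIONS = {"randomid": {"user": "alice", "balance": 500}}

def handle_transfer(cookies, form):
    """The bank's /transfer endpoint: checks the cookie, then acts."""
    session = SESSIONS.get(cookies.get("JSESSIONID"))
    if session is None:
        return "401 not logged in"
    session["balance"] -= int(form["amount"])
    return "200 transferred"

# 1) Legitimate transfer submitted from the bank's own page:
print(handle_transfer({"JSESSIONID": "randomid"}, {"amount": "100"}))  # 200 transferred

# 2) Forged form on a phishing page: the browser still sends the cookie,
#    so the server cannot tell the difference.
print(handle_transfer({"JSESSIONID": "randomid"}, {"amount": "100"}))  # 200 transferred
print(SESSIONS["randomid"]["balance"])  # 300 -- both transfers went through
```

The fix, as the rest of the article shows, is to require something the attacker's page cannot know: a CSRF token.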

CSRF is a common web attack, and existing security frameworks provide protection against it. For example, Spring Security enables CSRF protection by default since 4.0, protecting the PATCH, POST, PUT, and DELETE methods. This article presents Spring Security's protection configuration and then walks through its source code to understand the internal protection principle. The source code discussed is from Spring Security 5.1.5.

This article's outline is as follows:

    Protecting against CSRF attacks with Spring Security

    The principles of Spring Security's CSRF protection

    Summary


1. Protecting against CSRF attacks with Spring Security

To protect against CSRF attacks with Spring Security, the required configuration is summarized as follows:

    Use suitable HTTP request methods

    Configure CSRF protection

    Use the CSRF Token

1.1 Use suitable HTTP request methods

The first step is to make sure the interfaces your site exposes use suitable HTTP request methods: before enabling Spring Security's CSRF protection, ensure that every interface that modifies backend data accepts only the four methods PATCH, POST, PUT, and DELETE.

This is not a restriction of Spring Security's CSRF protection itself, but a prerequisite of any reasonable CSRF defense: passing private data via GET easily leaks it, so it is more reasonable to send sensitive data with POST.
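The method split matters because Spring Security's default matcher (DefaultRequiresCsrfMatcher) treats GET, HEAD, TRACE, and OPTIONS as safe and checks everything else. The rule reduces to a one-line predicate, sketched here in Python purely for illustration:

```python
# Methods Spring Security's default matcher treats as safe (no CSRF check).
SAFE_METHODS = {"GET", "HEAD", "TRACE", "OPTIONS"}

def requires_csrf_check(method: str) -> bool:
    """True for state-changing methods such as POST, PUT, PATCH, DELETE."""
    return method.upper() not in SAFE_METHODS

print([m for m in ("GET", "POST", "PUT", "PATCH", "DELETE") if requires_csrf_check(m)])
# ['POST', 'PUT', 'PATCH', 'DELETE']
```

If a state-changing endpoint is reachable via GET, it falls on the "safe" side of this predicate and the token check never runs, which is why step one is a hard prerequisite.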

1.2 Configuration CSRF protection

The next step is to introduce Spring Security into your backend application. Some frameworks handle an invalid CSRF Token by invalidating the user's session, but that approach is problematic. Instead, Spring Security by default rejects the request with an HTTP 403 status code when the token is invalid; you can customize the rejection logic by configuring an AccessDeniedHandler.

If the project uses XML configuration, CSRF protection must be enabled explicitly with the &lt;csrf/&gt; element; see the official documentation.

With Java configuration, CSRF protection is enabled by default. To disable it you must configure that manually; see the example below, and refer to the official documentation of the csrf() method for more detailed configuration.

public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    // disable the CSRF protection that Java configuration enables by default
    http.csrf().disable();
  }
}

1.3 Use CSRF Token

The next step is to carry the CSRF Token with every request; how it is carried differs by request type:

1.3.1 Form submission

When a form is submitted, the CSRF Token is attached as the _csrf HTTP request attribute, and the backend interface extracts the token from the request. The following is an example (JSP):

<c:url var="logoutUrl" value="/logout"/>
<form action="${logoutUrl}" method="post">
  <input type="submit" value="Log out" />
  <input type="hidden" name="${_csrf.parameterName}" value="${_csrf.token}"/>
</form>
In fact, the backend generates a CSRF Token while rendering the page and embeds it in the form. When the user submits the form, the token is carried along; the backend extracts and checks it, and rejects the request if it does not match. Because the token is generated by the backend, third-party sites cannot obtain it, which is how this approach achieves protection.
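This synchronizer-token flow fits in a dozen lines: the server generates a random token per session, embeds it in the page it renders, and rejects any submission whose token does not match. Here is a stdlib-only Python sketch (the function names are mine, not Spring's):

```python
import secrets

SESSION_TOKENS = {}  # session id -> CSRF token issued for that session

def issue_token(session_id: str) -> str:
    """Generate the token while rendering the page and remember it."""
    token = secrets.token_urlsafe(32)
    SESSION_TOKENS[session_id] = token
    return token

def check_token(session_id: str, submitted: str) -> bool:
    """Compare the submitted token with the stored one in constant time."""
    expected = SESSION_TOKENS.get(session_id)
    return expected is not None and secrets.compare_digest(expected, submitted)

token = issue_token("randomid")          # embedded into the rendered form
print(check_token("randomid", token))    # True: the real form echoes it back
print(check_token("randomid", "forged")) # False: a third-party site cannot know it
```

The phishing page from the earlier example can still make the browser send the cookie, but it has no way to read the token out of the bank's page, so its forged submission fails the check.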

1.3.2 Ajax and JSON requests

If JSON is used, the CSRF Token is not submitted as an HTTP parameter but in an HTTP request header. A typical approach is to include the token in the page's meta tags. The following is a JSP example:

    <meta name="_csrf" content="${_csrf.token}"/>
    <meta name="_csrf_header" content="${_csrf.headerName}"/>

Then all Ajax requests need to carry the CSRF Token; the following is a jQuery implementation:

$(function () {
  var token = $("meta[name='_csrf']").attr("content");
  var header = $("meta[name='_csrf_header']").attr("content");
  $(document).ajaxSend(function(e, xhr, options) {
    xhr.setRequestHeader(header, token);
  });
});

With that, all the configuration is done: the interface design, the framework configuration, and the front-end pages. Among the protection approaches discussed earlier, which one does Spring Security actually use? The most direct way to find out is to read the source code.

2. The principles of Spring Security's CSRF protection

Spring Security implements its security features with Filters. The main CSRF protection logic is in CsrfFilter, which extends OncePerRequestFilter and overrides the doFilterInternal method:

protected void doFilterInternal(HttpServletRequest request,
        HttpServletResponse response, FilterChain filterChain)
                throws ServletException, IOException {
    request.setAttribute(HttpServletResponse.class.getName(), response);

    // load the csrf token from the request via the tokenRepository
    CsrfToken csrfToken = this.tokenRepository.loadToken(request);
    final boolean missingToken = csrfToken == null;

    // if no token was found, generate a new token and save it
    if (missingToken) {
        csrfToken = this.tokenRepository.generateToken(request);
        this.tokenRepository.saveToken(csrfToken, request, response);
    }
    request.setAttribute(CsrfToken.class.getName(), csrfToken);
    request.setAttribute(csrfToken.getParameterName(), csrfToken);

    // determine whether this request needs csrf token verification
    if (!this.requireCsrfProtectionMatcher.matches(request)) {
        filterChain.doFilter(request, response);
        return;
    }

    // get the actual token passed by the front end
    String actualToken = request.getHeader(csrfToken.getHeaderName());
    if (actualToken == null) {
        actualToken = request.getParameter(csrfToken.getParameterName());
    }

    // check whether the two tokens are equal
    if (!csrfToken.getToken().equals(actualToken)) {
        if (this.logger.isDebugEnabled()) {
            this.logger.debug("Invalid CSRF token found for "
                    + UrlUtils.buildFullRequestUrl(request));
        }
        // if caused by a missing token, raise MissingCsrfTokenException
        if (missingToken) {
            this.accessDeniedHandler.handle(request, response,
                    new MissingCsrfTokenException(actualToken));
        }
        // if the tokens differ, raise InvalidCsrfTokenException
        else {
            this.accessDeniedHandler.handle(request, response,
                    new InvalidCsrfTokenException(csrfToken, actualToken));
        }
        return;
    }

    // pass on to the next filter
    filterChain.doFilter(request, response);
}

The whole process is clear; to summarize:

    Load the csrf token from the request via the tokenRepository;

    If no token is obtained, generate a new token and save it;

    Determine whether the request needs csrf token verification; if not, directly execute the next filter;

    Call the request's getHeader() or getParameter() method to get the actual token passed by the front end;

    Verify that the two tokens are equal: if not, throw an exception; if equal, the check passes and the next filter executes.

From this we know that Spring Security implements protection via a CSRF Token. As discussed above, the token can be stored in a cookie or in the session; which way does Spring Security provide? The answer lies in the tokenRepository from which the token is loaded. Its type is CsrfTokenRepository (an interface), and Spring Security provides three implementations: HttpSessionCsrfTokenRepository, CookieCsrfTokenRepository, and LazyCsrfTokenRepository. We focus on the first two; as the names suggest, one works through the session and the other through a cookie. Let's look at each implementation's loadToken() method.


The CookieCsrfTokenRepository implementation:

public CsrfToken loadToken(HttpServletRequest request) {
    Cookie cookie = WebUtils.getCookie(request, this.cookieName);
    if (cookie == null) {
        return null;
    }
    String token = cookie.getValue();
    if (!StringUtils.hasLength(token)) {
        return null;
    }
    return new DefaultCsrfToken(this.headerName, this.parameterName, token);
}

The HttpSessionCsrfTokenRepository implementation:

public CsrfToken loadToken(HttpServletRequest request) {
    HttpSession session = request.getSession(false);
    if (session == null) {
        return null;
    }
    return (CsrfToken) session.getAttribute(this.sessionAttributeName);
}

Now it is clear: Spring Security offers multiple strategies for storing the token, either in a cookie or in the session, and the choice can be specified manually. Both of the token storage approaches discussed earlier are therefore supported. Next, let's see how Spring Security generates and saves the token, taking the CookieCsrfTokenRepository implementation as the example:


Generating a token:

public CsrfToken generateToken(HttpServletRequest request) {
    return new DefaultCsrfToken(this.headerName, this.parameterName,
            createNewToken());
}

private String createNewToken() {
    return UUID.randomUUID().toString();
}

Saving the token:

public void saveToken(CsrfToken token, HttpServletRequest request,
        HttpServletResponse response) {
    String tokenValue = token == null ? "" : token.getToken();
    Cookie cookie = new Cookie(this.cookieName, tokenValue);
    cookie.setSecure(request.isSecure());
    if (this.cookiePath != null && !this.cookiePath.isEmpty()) {
        cookie.setPath(this.cookiePath);
    }
    else {
        cookie.setPath(this.getRequestContext(request));
    }
    if (token == null) {
        cookie.setMaxAge(0);
    }
    else {
        cookie.setMaxAge(-1);
    }
    if (cookieHttpOnly && setHttpOnlyMethod != null) {
        ReflectionUtils.invokeMethod(setHttpOnlyMethod, cookie, Boolean.TRUE);
    }
    response.addCookie(cookie);
}

As you can see, the generated token is essentially a UUID, and saving it means storing it in a cookie. Cookie handling involves many details that will not be elaborated here.
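The load/generate/save trio above condenses into a language-neutral sketch (Python here; plain dicts stand in for the real session and cookie objects, and the class names are mine, not Spring's):

```python
import uuid

def generate_token() -> str:
    """Like CookieCsrfTokenRepository.createNewToken(): essentially a UUID."""
    return str(uuid.uuid4())

class SessionTokenRepository:
    """Keeps the token server-side, as HttpSessionCsrfTokenRepository does."""
    ATTR = "_csrf"
    def load_token(self, session):
        return session.get(self.ATTR)
    def save_token(self, session, token):
        session[self.ATTR] = token

class CookieTokenRepository:
    """Sends the token to the client, as CookieCsrfTokenRepository does."""
    COOKIE = "XSRF-TOKEN"  # the real implementation's default cookie name
    def load_token(self, cookies):
        value = cookies.get(self.COOKIE)
        return value if value else None  # an empty cookie counts as missing
    def save_token(self, cookies, token):
        cookies[self.COOKIE] = token

repo, cookies = CookieTokenRepository(), {}
print(repo.load_token(cookies))              # None: no token stored yet
repo.save_token(cookies, generate_token())
print(repo.load_token(cookies) is not None)  # True
```

Swapping one repository class for the other changes only where the token lives, not the filter logic that compares it, which is exactly the point of the CsrfTokenRepository interface.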


3. Summary

This article first explained the basics of CSRF attacks with an example, then described the configuration needed to protect against them with Spring Security, and finally looked at how Spring Security implements CSRF protection from the source-code perspective. The basic principle is token-based, and the token can be stored in either the session or a cookie.

Note: all the source code involved in this article is from Spring Security 5.1.5.



Cross Site Request Forgery (CSRF)

Spring Security Architecture


Python Web: Flask source code interpretation (part 3) – the template rendering process

About me
    A thoughtful programmer and lifelong-learning practitioner, currently a team lead in a startup; our technology stack involves Android, Python, Java, and Go.
    Github: https://
    WeChat public account: Lifetime developer (angrycode)

The previous articles went through Flask's startup process and routing principles in the source. Today let's look at the template rendering process.

0x00 Using a template

First, look at an example of rendering with a template, from the official documentation:

from flask import render_template

def hello(name=None):
    return render_template('hello.html', name=name)

The project directory needs a templates directory containing the file hello.html.


The content of hello.html is:

<!doctype html>
<title>Hello from Flask</title>
{% if name %}
  <h1>Hello {{ name }}!</h1>
{% else %}
  <h1>Hello, World!</h1>
{% endif %}

name is a parameter of this template; calling the render_template method renders the html template file according to the parameters.
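To make the rendering flow concrete without pulling in Flask or Jinja2, here is a toy renderer that mimics just the two constructs the template above uses, {% if %} blocks and {{ var }} substitution (a deliberately tiny stand-in, not Jinja's actual algorithm):

```python
import re

def mini_render(template: str, **context) -> str:
    """Render {% if x %}...{% else %}...{% endif %} and {{ x }} only."""
    def eval_if(match):
        var, if_body = match.group(1), match.group(2)
        else_body = match.group(3) or ""
        return if_body if context.get(var) else else_body
    # Resolve conditionals first, then substitute variables.
    out = re.sub(
        r"{%\s*if\s+(\w+)\s*%}(.*?)(?:{%\s*else\s*%}(.*?))?{%\s*endif\s*%}",
        eval_if, template, flags=re.S)
    return re.sub(r"{{\s*(\w+)\s*}}",
                  lambda m: str(context.get(m.group(1), "")), out)

tpl = "{% if name %}Hello {{ name }}!{% else %}Hello, World!{% endif %}"
print(mini_render(tpl, name="Flask"))  # Hello Flask!
print(mini_render(tpl))                # Hello, World!
```

Flask's real render_template does the same job at a much larger scale: locate the template file, compile it, and evaluate it against the passed context, as the rest of this article traces.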

0x01 Flask.render_template

def render_template(template_name, **context):
    """Renders a template from the template folder with the given
    context.

    :param template_name: the name of the template to be rendered
    :param context: the variables that should be available in the
                    context of the template.
    """
    return current_app.jinja_env.get_template(template_name).render(context)

The method's docstring is clear: find the file named template_name in the templates folder and render it. current_app is initialized by the following statements:

_request_ctx_stack = LocalStack()
current_app = LocalProxy(lambda: _request_ctx_stack.top.app)

LocalStack is a stack implementation, and Flask.request_context() pushes the current context instance onto _request_ctx_stack:

def request_context(self, environ):
    """Creates a request context from the given environment and binds
    it to the current context.  This must be used in combination with
    the `with` statement because the request is only bound to the
    current context for the duration of the `with` block.

    Example usage::

        with app.request_context(environ):
            do_something_with(request)

    :params environ: a WSGI environment
    """
    return _RequestContext(self, environ)

The _RequestContext class implements the context-management protocol, so it can be used in a with statement:

class _RequestContext(object):
    """The request context contains all request relevant information.  It is
    created at the beginning of the request and pushed to the
    `_request_ctx_stack` and removed at the end of it.  It will create the
    URL adapter and request object for the WSGI environment provided.
    """

    def __init__(self, app, environ):
        self.app = app
        self.url_adapter = app.url_map.bind_to_environ(environ)
        self.request = app.request_class(environ)
        self.session = app.open_session(self.request)
        self.g = _RequestGlobals()
        self.flashes = None

    def __enter__(self):
        _request_ctx_stack.push(self)

    def __exit__(self, exc_type, exc_value, tb):
        # do not pop the request stack if we are in debug mode and an
        # exception happened.  This will allow the debugger to still
        # access the request object in the interactive shell.
        if tb is None or not self.app.debug:
            _request_ctx_stack.pop()

__enter__ performs the push operation, and __exit__ performs the pop when the with block exits.
Back to the request_context() method: it is called in wsgi_app().

def wsgi_app(self, environ, start_response):
    """The actual WSGI application.  This is not implemented in
    `__call__` so that middlewares can be applied:

        app.wsgi_app = MyMiddleware(app.wsgi_app)

    :param environ: a WSGI environment
    :param start_response: a callable accepting a status code,
                           a list of headers and an optional
                           exception context to start the response
    """
    with self.request_context(environ):
        rv = self.preprocess_request()
        if rv is None:
            rv = self.dispatch_request()
        response = self.make_response(rv)
        response = self.process_response(response)
        return response(environ, start_response)

From the routing-principles article we know that wsgi_app() is executed when the server receives a client request.
    Therefore, when a request arrives, Flask saves the current request's context instance onto the stack instance _request_ctx_stack; after the request is handled, it pops the current request's context instance off the stack.

LocalProxy is a proxy class whose constructor is passed a lambda expression: lambda: _request_ctx_stack.top.app.
    This wraps the current context's application instance in a LocalProxy; that is, current_app is a proxy for the current Flask instance.
So the expression current_app.jinja_env actually accesses the jinja_env attribute of the Flask instance; this attribute is initialized in Flask's constructor.

class Flask(object):
    #: (source too long; most of it omitted)
    #: options that are passed directly to the Jinja2 environment
    jinja_options = dict(
        extensions=['jinja2.ext.autoescape', 'jinja2.ext.with_']
    )

    def __init__(self, package_name):
        #: (source too long; part of it omitted)
        #: the Jinja2 environment.  It is created from the
        #: :attr:`jinja_options` and the loader that is returned
        #: by the :meth:`create_jinja_loader` function.
        self.jinja_env = Environment(loader=self.create_jinja_loader(),
                                     **self.jinja_options)

jinja_env is an instance of Environment, a class provided by the Jinja template engine; the Flask framework's template rendering is implemented by Jinja.
    An Environment needs a loader, which is obtained by the following method:

def create_jinja_loader(self):
    """Creates the Jinja loader.  By default just a package loader for
    the configured package is returned that looks up templates in the
    `templates` folder.  To add other loaders it's possible to
    override this method.
    """
    if pkg_resources is None:
        return FileSystemLoader(os.path.join(self.root_path, 'templates'))
    return PackageLoader(self.package_name)

By default, a FileSystemLoader is constructed for the templates directory; the role of this class is to load template files from the file system.
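A bare-bones file-system loader can be sketched as follows (illustrative only; Jinja's FileSystemLoader additionally handles search-path lists, encodings, and raises TemplateNotFound instead of FileNotFoundError):

```python
import os
import tempfile


class SimpleFileLoader:
    """Loads template source from a single directory (sketch only)."""
    def __init__(self, searchpath):
        self.searchpath = searchpath

    def get_source(self, name):
        path = os.path.join(self.searchpath, name)
        if not os.path.isfile(path):
            # Jinja would raise TemplateNotFound here
            raise FileNotFoundError(name)
        with open(path, encoding="utf-8") as f:
            return f.read()


# demo: create a throwaway "templates" directory with one file
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "hello.html"), "w", encoding="utf-8") as f:
    f.write("<h1>Hello</h1>")

loader = SimpleFileLoader(tmpdir)
assert loader.get_source("hello.html") == "<h1>Hello</h1>"
```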

0x02 Environment.get_template

def get_template(self, name, parent=None, globals=None):
    """Load a template from the loader.  If a loader is configured this
    method asks the loader for the template and returns a :class:`Template`.
    If the `parent` parameter is not `None`, :meth:`join_path` is called
    to get the real template name before loading.

    The `globals` parameter can be used to provide template wide globals.
    These variables are available in the context at render time.

    If the template does not exist a :exc:`TemplateNotFound` exception is
    raised.

    .. versionchanged:: 2.4
       If `name` is a :class:`Template` object it is returned from the
       function unchanged.
    """
    if isinstance(name, Template):
        return name
    if parent is not None:
        name = self.join_path(name, parent)
    return self._load_template(name, self.make_globals(globals))

Internally, get_template() calls the _load_template() method:

def _load_template(self, name, globals):
    if self.loader is None:
        raise TypeError('no loader for this environment specified')
    if self.cache is not None:
        template = self.cache.get(name)
        if template is not None and (not self.auto_reload or
                                     template.is_up_to_date):
            return template
    template = self.loader.load(self, name, globals)
    if self.cache is not None:
        self.cache[name] = template
    return template

_load_template() first checks the cache: if a cached template is available (and up to date), it is used. Otherwise the loader is asked to load the template; by default this loader is the FileSystemLoader instance mentioned earlier.
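The cache-then-load strategy used by _load_template() can be sketched in a few lines. `CachingEnvironment` and the loader callable are illustrative names, not Jinja's API:

```python
class CachingEnvironment:
    """Check the cache first; fall back to the loader and cache the
    result -- the same strategy _load_template() uses (sketch only)."""
    def __init__(self, loader):
        self.loader = loader      # callable: name -> "compiled" template
        self.cache = {}
        self.loads = 0            # counts real loader invocations

    def get_template(self, name):
        template = self.cache.get(name)
        if template is not None:
            return template       # cache hit: no loading, no compiling
        self.loads += 1
        template = self.loader(name)
        self.cache[name] = template
        return template


env = CachingEnvironment(lambda name: "<compiled %s>" % name)
env.get_template("index.html")
env.get_template("index.html")    # second call is served from the cache
assert env.loads == 1
```

This is why repeated render_template() calls for the same template are cheap: loading and compiling happen once, and every later request reuses the compiled Template object.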

0x03 BaseLoader.load

def load(self, environment, name, globals=None):
    # (part of the source elided)
    return environment.template_class.from_code(environment, code, globals, uptodate)

BaseLoader is the base class of FileSystemLoader. Its load() method implements the template loading logic, including compilation. Finally it calls environment.template_class.from_code(); template_class is the Template class, which represents a compiled template object.
    from_code() is a static method of the Template class that creates a Template instance. So when load() returns, we have a Template object in hand.
    Finally, back to the render_template() method:

def render_template(template_name, **context):
    return current_app.jinja_env.get_template(template_name).render(context)

It executes the Template object's render() method.

0x04 Template.render

def render(self, *args, **kwargs):
    """This function accepts either a dict or some keyword arguments which
    will then be the context the template is evaluated in.  The return
    value will be the rendered template.

    :param context: the function accepts the same arguments as the
                    :class:`dict` constructor.
    :return: the rendered template as string
    """
    ns = self.default_context.copy()
    if len(args) == 1 and isinstance(args[0], utils.MultiDict):
        ns.update(args[0])  # (branch body elided in the original excerpt)
    if kwargs:
        ns.update(kwargs)   # (branch body elided in the original excerpt)
    context = Context(ns, self.charset, self.errors)
    exec self.code in context.runtime, context
    return context.get_value(self.unicode_mode)

This method receives dict-like parameters used to pass values into the template. Its core is the exec statement: exec is built into Python and executes dynamic Python code, in this case the compiled template code.
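The idea of "compile the template into code, then exec it inside a namespace" can be demonstrated in modern Python 3 (where exec is a function; the excerpt above uses Python 2 statement syntax). This is a toy, not Jinja's actual compiler:

```python
# A toy "compiled template": it appends rendered parts to an output
# list found in the exec namespace.  A real engine generates code like
# this from template source.
compiled_source = (
    "parts.append('Hello, ')\n"
    "parts.append(name)\n"
    "parts.append('!')\n"
)
code = compile(compiled_source, "<template>", "exec")


def render(**context):
    namespace = {"parts": []}
    namespace.update(context)   # template variables, like **context
    exec(code, namespace)       # run the compiled template code
    return "".join(namespace["parts"])


assert render(name="Flask") == "Hello, Flask!"
```

The namespace plays the role of Jinja's Context: the compiled code reads template variables from it and writes output into it, and render() collects the result.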

0x05 Summary

Flask uses Jinja as its template engine. The execution path is:

Flask.render_template => Environment.get_template => Template.render => exec

0x06 learning materials


Learn PHP – study notes

Learn PHP

    Learning tool: PhpStudy, one-click setup of a PHP environment


PHP is a scripting language that can be embedded in an HTML page

  • Embedded in an HTML file:


    PHP can also be saved in a separate "*.php" file and accessed directly, but in a .php file the code must start with "<?php" and end with "?>"; only the code between these tags is parsed and executed

  • PHP identifiers are case sensitive (variable names are case sensitive; function names are not)

  • PHP supports three mainstream comment styles: double slash (//), hash (#), and multi-line comments (/* ... */)


Variable rules:

    Variables begin with the $ symbol

  • Variable names must begin with a letter or an underscore, and may contain letters, numbers and underscores

  • PHP has no command for declaring a variable; a variable is created the moment it is first assigned


  • local: accessible only inside the structure that declares it (declared inside a function)

  • global: declared outside any function, accessible throughout the PHP program outside functions

    Accessing global variables:

    The global keyword is used inside a function to access a global variable; before a function can use a global variable, it must declare it with global.

  • static: the variable keeps its value after execution and is not reset on the next call

    Static access:

    Declare the variable with the static keyword and it will not be reset between calls, e.g. for counting the number of visits


echo output:

    Can output more than one string



print output:

    Can output only one string (and returns 1)

Data types:

String:

    A string is a sequence of characters; any text inside quotation marks is a string

Integer:

    A combination of digits without a decimal point (may be negative)

Float:

    Numbers that are not integers (with a decimal part), including scientific notation

Boolean:

    true and false


Array:

  • A variable that stores one or more values

  • Create an array using the keyword "array"

        // run result
    // "array(3) { [0]=> string(6) "HUAWEI" [1]=> string(5) "China" [2]=> string(3) "GO!" }"

    Keyword: var_dump returns the array size plus the data type and value of each element


Object:

  • Use the "class" keyword to declare an object data type

    class Car {   // (the opening of this example was lost; "Car" is illustrative)
        var $color;
        function __construct($color){
            $this->color = $color;
        }
        function what_color(){
            return $this->color;
        }
    }

NULL values:

    Represents a variable with no value (setting a variable to null empties its value)

Determining data types:

var_dump():

    Prints the type and value of a variable

    Syntax: void var_dump (mixed $expression)

  • No return value

is_* functions:

    is_bool (): is the value a boolean

    is_float (): is the value a float

    is_int (): is the value an integer

    is_numeric (): is the value numeric

    is_string (): is the value a string

    is_array (): is the value an array

    is_object (): is the value an object

    is_null (): is the value null

    is_resource (): is the value a resource type


isset():

    Checks whether a variable exists

    Returns: true if the variable exists

empty():

    Checks whether a variable is empty (isset() can only detect existence)

    Returns: false for a variable that exists and is non-empty ($var = null is considered empty)

PHP system constants:

System constants (constant — meaning):

__FILE__ — the current PHP file name
__LINE__ — the current line number in the PHP program
PHP_VERSION — the PHP version number
PHP_OS — the name of the operating system

Error-level constants:

E_ERROR — the most recent error
E_WARNING — the most recent warning
E_PARSE — potential problems found when parsing the syntax
E_NOTICE — notice of something unusual


Arithmetic operators:

// + - * / % ++ --

Addition, subtraction, multiplication, division, modulo, increment, decrement

String operators:

String concatenation: . (dot)

Concatenate-and-assign: .= (dot-equals)

Assignment operators:

Assign: =

Subtract-and-assign: -=

Add-and-assign: +=

Multiply-and-assign: *=

Divide-and-assign: /=

Modulo-and-assign: %=

Comparison operators:

Greater than, less than, greater than or equal, less than or equal, equal, not equal

Identical: ===

Not identical: !==

Logical operators:

Logical AND: and, &&

Logical OR: or, ||

Logical NOT: !

Ternary operator:

Conditional operator: ? :

Error-suppression operator:

Symbol: @
    Placed before a statement that may produce an error, it suppresses the error message

Command operator:

Symbol: ` (backtick, on the same key as ~)
    This operator executes a command directly on the operating system (echo the result to output what the command returned)

Control structures:

if conditional branch:


switch conditional statement:


while loop:


do ... while loop:


for loop:


Loop control:

Omitted (break / continue)

Passing values:

Assignment by value (the two variables have different memory addresses):

    $a = $b ;

Assignment by reference (the two variables share the same address):

    $a = &$b ;


Array types:

    Indexed array: the subscripts are integers

    Associative array: the subscripts are strings

    Multidimensional array: the elements are themselves arrays

Creating an array:

  • Keyword: array() creates an array

  • Identifier syntax:
    • $arr[key] = value ;
    • $arr[] = value ;

Array functions:

print_r (): print variable information

unset (): delete an array element

foreach (): traverse the array elements

array_shift ():

    Removes the first element of the array and returns it

array_pop ():

    Removes the last element of the array and returns it

array_unshift ():

    Prepends one or more elements to the array

array_push ():

    Appends one or more elements to the end of the array

array_values ():

    Returns all the values of the array with a fresh numeric index

count (): counts the number of elements

array_sum (): sums the values

array_reverse (): returns the array in reverse order

list (): assigns array elements to variables

    <?php
    // (the opening of this example was lost in extraction;
    //  the sample array below is illustrative)
    $IntArray = array(5, 2, 9, 1, 7);
    $MAX = $IntArray[0]; $MAX_i = 0;
    $MIN = $IntArray[0]; $MIN_i = 0;
    for ($i = 1; $i < count($IntArray); $i++) {
        if ($MAX < $IntArray[$i]) {
            $MAX = $IntArray[$i];
            $MAX_i = $i;
        }
        if ($MIN > $IntArray[$i]) {
            $MIN = $IntArray[$i];
            $MIN_i = $i;
        }
    }
    echo "Array_MAX:".$MAX."<br>MAX_i=".$MAX_i;
    echo "<br>";
    echo "Array_MIN:".$MIN."<br>MIN_i=".$MIN_i;
    echo "<br>";
    // swap positions
    echo "位置交换前:".print_r($IntArray);
    $M = $IntArray[$MIN_i];
    $IntArray[$MIN_i] = $IntArray[$MAX_i];
    $IntArray[$MAX_i] = $M;
    echo "<br>";
    echo "位置交换后:".print_r($IntArray);
    echo "<br>";
    // reverse the array
    echo print_r(array_reverse($IntArray));
    ?>


$_SERVER elements (element — description):

$_SERVER['PHP_SELF'] — returns the file name of the currently executing script.
$_SERVER['GATEWAY_INTERFACE'] — returns the version of the CGI specification used by the server.
$_SERVER['SERVER_ADDR'] — returns the IP address of the server on which the current script is running.
$_SERVER['SERVER_NAME'] — returns the host name of the server on which the current script is running.
$_SERVER['SERVER_SOFTWARE'] — returns the server identification string (such as Apache/2.2.24).
$_SERVER['SERVER_PROTOCOL'] — returns the name and version of the communication protocol of the requested page (e.g. "HTTP/1.0").
$_SERVER['REQUEST_METHOD'] — returns the request method used to access the page (e.g. POST).
$_SERVER['REQUEST_TIME'] — returns the timestamp of the start of the request (e.g. 1577687494).
$_SERVER['QUERY_STRING'] — returns the query string, if the page was accessed via a query string.
$_SERVER['HTTP_ACCEPT'] — returns the Accept header of the current request.
$_SERVER['HTTP_ACCEPT_CHARSET'] — returns the Accept_Charset header of the current request (e.g. utf-8, ISO-8859-1).
$_SERVER['HTTP_HOST'] — returns the Host header of the current request.
$_SERVER['HTTP_REFERER'] — returns the full URL of the current page (unreliable, because not all user agents support it).
$_SERVER['HTTPS'] — whether the script was queried over a secure HTTP protocol.
$_SERVER['REMOTE_ADDR'] — returns the IP address of the user browsing the current page.
$_SERVER['REMOTE_HOST'] — returns the host name of the user browsing the current page.
$_SERVER['REMOTE_PORT'] — returns the port on the user's machine used to connect to the Web server.
$_SERVER['SCRIPT_FILENAME'] — returns the absolute path of the currently executing script.
$_SERVER['SERVER_ADMIN'] — this value is taken from the SERVER_ADMIN directive in the Apache server configuration file.
$_SERVER['SERVER_PORT'] — the port of the Web server. The default value is "80".
$_SERVER['SERVER_SIGNATURE'] — returns the server version and virtual host name.
$_SERVER['PATH_TRANSLATED'] — the file-system base path of the current script (not the document root).
$_SERVER['SCRIPT_NAME'] — returns the path of the current script.
$_SERVER['SCRIPT_URI'] — returns the URI of the current page.



Custom functions:

    Keyword syntax: function name () {……}

Function parameters:

  • Passing arguments by value:

    When a function is called, parameters may be passed to it, and the function is free to operate on them

  • Passing parameters by reference:

    If the function needs to be able to modify a parameter's value, the parameter can be passed by reference

    To pass by reference, just add the "&" symbol in front of the parameter

    $arr = array(1,2,3,4);
    function addElement(&$arr){
        $arr[count($arr)] = 100;
        print_r($arr);   // output $arr inside the function
    }
    addElement($arr);
    print_r($arr);  // output outside the function
  • Default parameter values:

    function hobby($who,$style='运动'){
        echo "$who 喜欢 $style";
    }

  • The global keyword:

    References variables outside the function (reference-parameter semantics)

    $name = "Mary"; // initialize the variable
    function getName(){
        global $name; // bring in the external variable
        echo "我的名字叫:$name";
    }
    getName();

Built-in statements:

    echo statement: output

    print statement: output

  • include statement: includes and runs the specified file (on failure, a warning is emitted and the script continues)

  • require statement: includes and runs the specified file (on failure, a fatal error is emitted and the script stops)


What should front-end engineers learn in 2019?

I have been working for more than three years. Business has not been very busy lately, but I feel uneasy. I recently took part in developing a UI library at my company and discovered how much I still don't know. So I am taking this opportunity to:

1. Take the knowledge accumulated over the past two years and verify it, item by item, against the standards and documentation, so I can speak with confidence.

2. Not just use my tools well, but also understand how they work.

3. Even for knowledge I cannot apply yet, build up a reserve in advance, as an engineer with a little ambition should.

Writing this, I was surprised to find that I really have grown technically over these two or three years. In 2016 I relied on what seniors shared and studied whatever everyone said was useful (listening to the older generation is never wrong). In 2019 I can plan how to learn systematically based on my own accumulated experience and understanding, and I know my own priorities.

Now back to the title: what should front-end engineers learn? I wrote an outline, though the goals in it are addressed to myself. I am publishing it, first, to share it and learn together with you; second, so you can help me check whether anything is missing.

I have not yet settled on a specialization; the front-end field is too broad. Which direction suits me best and deserves deeper study? I don't know. But first I will study the whole field once more; the overall direction, at least, should be right.

I feel that I am, and always will be, a front-end student.

This article is not meant to discourage anyone, so please read on with an easy mind. If you are a fresh graduate, find your own position and take your time.


Front-end engineers

Part 1: Language basics

1. HTML-related

    The HTML standard; follow the latest updates to it.

    Semantic HTML tags and nesting rules.


Learning objectives: re-sort the relationships between tags; write internationalized, future-proof page structure that complies with accessibility standards.

2. CSS-related

    The CSS standard; follow the latest updates to it.

    CSS properties, including the newest ones.

    CSS programming, Houdini.

  • Web Fonts

Learning objectives: sort out the relationships between CSS properties; try to implement more effects in pure CSS; master and follow the latest developments in CSS.

3. JavaScript related

    The ECMAScript standard and the latest proposals; the browser DOM and BOM.

Learning objectives: be familiar with the basic JavaScript APIs and their parameters; keep abreast of the latest trends in the language.

4. Node.js related

    Node.js global APIs and native modules; follow the latest Node.js developments.

Learning objectives: be familiar with the role and usage of the basic native Node.js APIs; lay the foundation for learning server-side development.

5. TypeScript(TS)

    TS is hot, and it is the future trend.

    How to use TypeScript, and its differences from JavaScript.

Learning objectives: be familiar with TypeScript.

6. AssemblyScript(AS)

    Besides high-level languages such as C/C++, Rust, Kotlin and Golang, which can be compiled to WebAssembly bytecode, there is a new option: AS. AS is a strict subset of TS, so the two can be learned together, laying a foundation for WebAssembly development.

    AssemblyScript grammar and usage.

Learning objectives: learn the basic grammar and be able to compile AssemblyScript files into the .wasm format. With it, there is no need to go back and review C/C++.

7. Dart

    Flutter remains hot, and Dart, as the language it is built on, should be mastered.

    Dart syntax and how it differs from JavaScript.

Learning objectives: be familiar with the Dart language.

8. Markdown

    Syntax and usage.

    A must for writing articles and documentation.

Learning objectives: use Markdown proficiently to write articles and project documentation.

9. Shell Scripts

    Grammar and commonly used functions.

Learning objectives: be able to write fairly common programs in shell.

10. SQL language

    Common syntax and functions.

Learning objectives: be able to write the common SQL statements for create/read/update/delete queries.

Part 2: Computer science basics

1. Data Structures and Algorithms

    Classical algorithmic ideas

    Common data structures

Learning objectives: master classical algorithmic ideas and apply them to business code; choose the best data structure for a given scenario.

2. Computer Network

    The HTTP, TCP and UDP protocols

  • DNS
  • WebSocket

Learning objectives: master and understand how these network protocols work, and put them into practice.

3. The computer composition principle


    Unicode, ASCII, UTF-8 and other encodings

    How computers work

Learning objectives: understand the machine we work with; build the foundation for understanding cloud hosts and virtual hosts.

4. Operating Systems

    Computer operating systems

    The Linux operating system

Learning objectives: understand how operating systems work; be able to use a Linux system independently and master the common commands.

Part 3: Advanced topics

1. engineering

  • webpack, rollup
  • babel: usage and principles, so the latest ECMAScript syntax can be verified one by one.

    Code-review tooling for style and grammar: eslint, stylelint, prettier, etc.

    Unit-testing tools and libraries

    sass preprocessor syntax

    postcss post-processor

    Principles and implementation of uglify

    git workflows for multi-person collaboration

    Building and using gitlab

  • CI/CD
  • git hooks, husky, commitlint
  • Documentation output: StoryBook, gitDoc, gitbook, etc.

  • npm, lerna
  • yarn
  • markdown rendering, e.g. markdown examples that can be run online

    Modularity: ECMAScript and Node.js modules were studied earlier; here this mainly means the several approaches to CSS modularity

    Data mocking

Learning objectives: be able to quickly build a modern, multi-person collaborative front-end project from scratch; choose the right tools to improve development efficiency; keep the team's coding style consistent; and make the most of tooling to safeguard code quality.

2. Components of

  • Vue
  • React
  • WebComponents
  • Browser compatibility, canIUse

Learning objectives: be familiar with Vue and React development, and understand WebComponents, the future trend of componentization. Master data-driven thinking and the classic implementations of two-way binding; read the source code for deeper understanding. Master the derived front-end pieces: the design and implementation of routing and state management.

3. Web-based services development of Node.js

  • koa
  • express
  • pm2
  • RESTFul style

    Process management

    Data persistence: MongoDB, MySQL, etc.

    Data caching: Redis, etc.

    Long-lived connection services

  • SSR
  • Docker
  • Nginx configuration, openresty

    Cloud hosting, shared hosting, etc.

Learning objectives: independently complete the building and deployment of a Web service.

4. Based on the CLI development Node.js

    Common CLI development libraries and their principles

    The design and implementation ideas of popular CLI libraries

Learning objectives: be able to develop a CLI independently, and quickly identify this kind of solution when the need arises.

5. Desktop Application Development

  • Electron
  • NW.JS

Learning objectives: understand JavaScript-based desktop application development, and be able to quickly identify this technical solution when necessary.

6. Mobile Application Development

    Flutter and its related technology and derivatives

    React Native and its related technology and derivatives

  • PWA
  • WEEX

Learning objectives: understand and master these; be able to develop a mobile APP with Flutter or RN; learn PWA.

7. Third Party platform

    WeChat mini programs

    Alipay mini programs

    Baidu mini programs

    Quick App

  • wepy
  • mpvue
  • taro

Learning objectives: be able to get started quickly on any kind of mini program; understand how mini programs are implemented; understand the ideas behind the popular mini-program development libraries.

8. Plug-in Development

    The chrome extension APIs

    DevTools extensions

    Plug-in development for VSCode and other IDEs

Learning objectives: understand what plug-ins can do, and quickly identify this solution when necessary.

9. The browser works

    Layout engines and browser rendering principles

    Script engines and how scripts are executed; v8

    Headless browsers; puppeteer

Learning objectives: master how browsers work, and apply it to automated testing and performance optimization.

10. Performance Optimization

    The RAIL model

    Hardware basics: frames, frame rate, how displays draw

    Progressive Web Metrics (PWMs)

    Commonly used performance-optimization tools

Learning objectives: understand performance-optimization tooling and write Web applications with excellent performance.

11. Web browser security

    Browser security policies: same-origin policy, content security policy, sandboxing

    Attack techniques: XSS, CSRF

    Others: CRLF attacks, DNS hijacking and DNS poisoning, clickjacking, browser plug-in exploits, etc.

    Understand common symmetric and asymmetric encryption algorithms

Learning objectives: understand common attacks against Web browsers, and avoid writing websites with these security risks.

12. Web server security

    Common attacks: directory traversal, DDoS, replay, password cracking, SQL injection

    Other attacks: CC attacks, port penetration

Learning objectives: understand common server attacks and their principles, and avoid writing Web services with obvious vulnerabilities.

13. Monitoring statistics

    Front-end script error monitoring: error stack format, real-time monitoring implementation

    Front-end performance monitoring: performance metrics and their implementation

    Server monitoring: hardware, system, application, network, traffic analysis, log, security and API monitoring (availability, correctness, response time), performance monitoring, service monitoring

Learning objectives: learn how to build a monitoring platform yourself or from open-source parts; understand the meaning of common monitoring metrics, e.g. performance metrics such as TTLB and QPS, and business metrics such as PV, UV and CTR.

14. Visualization

    Advanced canvas

    Advanced svg

    WebGL fundamentals

    Computer graphics

    Common libraries: ECharts, D3, etc.

Learning objectives: this is a future-facing front-end skill. Understand the common visualization solutions so a suitable one can be identified quickly when needed; follow and apply the latest techniques to build cool data visualizations.

15. SEO

    How search engine crawlers work

    Weighting in search engine ranking algorithms

    SEO-related page tags

Learning objectives: understand how search engines work and how results are ranked. For a public-facing website, be able to perform simple SEO independently so the site ranks as high as possible.

16. development and debugging

    Browser script debugging

    Node.js debugging

    Advanced use of Chrome DevTools (environment simulation, rendering performance, memory usage, breakpoints, packet capture, console built-in functions, and so on)

  • IDEs
  • Plug-ins that assist development, such as spell checkers.

Learning objectives: mastering the Chrome debugging tools is of great benefit to script development and performance optimization; master the methods for debugging Node.js services.

17. UI library

    UX basics

    Color theory; additive primary colors; how pages represent color

    Principles of web color schemes; color psychology

    Implementations of common UI components

    Usage and source code of common UI component libraries

Learning objectives: develop some aesthetic sense and attention to user experience, so that when no designer is involved you can design passable interactions independently; understand what the common UI components mean, so the right component is used for the right need; understand how common UI components are designed and implemented, and be able to develop a UI component library independently.

18. WebAssembly



    Simple application development

Learning objectives: understand the WebAssembly bytecode development workflow and how it runs in the browser, and quickly identify this solution when necessary.

19. WebRTC

    A solution for real-time communication

    Simple understanding and development

Learning objectives: understand and follow WebRTC; know the standard and the implementation principles, and quickly identify this solution when necessary.

20. WebXR

    VR and AR for JavaScript developers

    Learn the WebXR APIs and follow the progress of the draft

Learning objectives: understand and follow WebXR; know the standard and the implementation principles, and quickly identify this solution when necessary.

21. WebAuthn

    Biometric authentication in the browser

    Learn the WebAuthn API and its simple usage

Learning objectives: understand and follow WebAuthn; know the standard and the implementation principles, and quickly identify this solution when necessary.


Here are some additional thoughts:

What is certain: in the next few years 5G will be fully commercial; network latency and transmission rates will no longer limit human imagination, and the era of the Internet of Everything will be within reach.

What is uncertain: whether we will master controlled nuclear fusion.

Imagine: even if the Internet of Everything arrives and we can interact in real time through AR and VR, without advances in phone battery technology these visions are castles in the air, unable to land. We all know how power-hungry technologies like WebGL are in today's browsers. But if humanity masters controlled nuclear fusion and electricity becomes as ubiquitous as air, that will truly be an age where imagination can fly.

A bold prediction of future front-end trends: with WebAuthn-based biometric authentication, we shed the shackles of password authentication; with 5G transmission rates, Web applications open instantly, with no distinction between installing an application and using it locally; with WebAssembly, traditional desktop clients can migrate to the Web with very high performance. If phone battery technology makes real progress, then WebRTC, WebXR and WebGL will be widely applied. Websites will change qualitatively: the traditional DOM-structured web page will become history, and AR, VR and immersive real-time communication will replace text and images. A phone will only need a browser installed to do anything you want.

But I do not know when that future will arrive. For now, look to the future and keep learning.

This article was published on "a schoolboy's blog"; please indicate the source when reprinting.

The End.


volatile principle underlying implementation


When a shared variable is declared volatile, reads and writes of that variable become special. Below we uncover the mystery of volatile.

1. The memory semantics of volatile

1.1 Characteristics of volatile

A volatile variable has the following three characteristics:

  1. Visibility: when one thread modifies the value of a variable declared volatile, the new value is immediately visible to any other thread that reads the variable. Ordinary variables cannot do this; their values must be propagated between threads through main memory.

  2. Ordering: code in the critical region around a volatile variable executes in order; instruction reordering across volatile accesses is prohibited.

  3. Limited atomicity: the atomicity here differs from that of synchronized. synchronized guarantees that the code block or method it declares executes as an atomic operation. volatile does not modify methods or code blocks; it modifies variables: a single read or write of a volatile variable is atomic, but compound operations such as volatile++ are not. So the atomicity of volatile is limited, and in a multithreaded environment volatile does not guarantee atomicity.

1.2 The memory semantics of volatile writes and reads

Memory semantics of a volatile write: when a writer thread writes a volatile variable, the JMM flushes the shared variable's value in that thread's local memory to main memory.

Memory semantics of a volatile read: when a reader thread reads a volatile variable, the JMM invalidates that thread's local memory, and the thread then reads the shared variable from main memory.

2. How the volatile semantics are implemented

Before introducing how the volatile semantics are realized, let's look at two CPU-related terms:

    Memory barrier: a set of processor instructions used to restrict the order of memory operations.

    Cache line: the smallest unit of storage that can be allocated in a CPU cache. When the processor fills a cache line, it loads the entire line.

2.1 How volatile achieves visibility

How is the memory-visibility semantics of volatile implemented? Let's look at a piece of code, print the assembly instructions the processor generates for it (how to print the assembly is explained at the end of the article), and see what the CPU does when writing to a volatile variable:

public class VolatileTest {

    private static volatile VolatileTest instance = null;

    private VolatileTest(){}

    public static VolatileTest getInstance(){
        if(instance == null){
            instance = new VolatileTest();
        }
        return instance;
    }

    public static void main(String[] args) {
        VolatileTest.getInstance();
    }
}

The code above is the familiar singleton pattern (the lazy variant that, on its own, is not thread-safe in a multithreaded environment). The special part is that I added volatile to the instance variable. Now look at the printed assembly below:

In the screenshot above, the line I annotated at the end reads: putstatic instance. Readers who know JVM bytecode will recognize that putstatic sets the value of a static variable; here it assigns to the static variable instance, corresponding to instance = new VolatileTest(); in the getInstance method. Since instance is declared volatile, setting the static variable instance is a volatile write.

The discussion above mixes assembly instructions and bytecode instructions. To keep the two from being confused, here is the difference between them:

We all know that Java is a cross-platform language. How does Java achieve this platform independence? To answer that we need to understand the JVM and Java bytecode files. First, a shared premise: any programming language must ultimately be converted into platform-specific machine instructions before the hardware can execute it. C and C++, for example, compile source code directly into CPU-specific instructions. Different CPU architectures have different instruction sets: x86 CPUs execute x86 instructions, ARM CPUs execute ARM instructions. If a program's source code were compiled directly into hardware-specific instructions, execution performance would be relatively high, but cross-platform portability would be greatly reduced. To achieve platform independence, the javac compiler does not translate Java source directly into platform-specific instructions; instead it compiles it into an intermediate language, the Java bytecode stored in class files. As the name suggests, a bytecode file stores bytes, one after another. Readers who have opened a class file may have noticed that it is displayed not in binary but in hexadecimal: a byte is eight binary digits, which is too long to display, so two hexadecimal digits are used to represent one byte. A class file compiled from Java source cannot be executed by the CPU directly; so how is it executed? The answer is the JVM. To let Java programs run on different platforms, a platform-specific Java virtual machine is provided for each platform; the JVM runs above the hardware layer and shields the differences between platforms. Class files produced by javac are loaded by the JVM, which ultimately converts them back into hardware-specific machine instructions for the CPU to execute.
Knowing that the JVM loads bytecode files, the next question is how each byte of the class file relates to the Java source we wrote - that is, how we can tell which hexadecimal sequence in the class file corresponds to which piece of source code, and what function it performs. A pile of raw hexadecimal is unreadable, so a JVM-level specification defines a set of human-recognizable mnemonics at the JVM's level of abstraction; those mnemonics are the Java bytecode instructions.

Look at the screenshot above again: for the write to the volatile instance variable, an instruction with a lock prefix is added in front (boxed in the screenshot). Without the volatile modifier, there is no lock prefix.

On a multi-core processor, the lock-prefixed instruction causes the following:

The data in the current processor's cache line is written back to system memory, and at the same time the data for that memory address cached in the other CPUs is invalidated.

To increase processing speed, the processor generally does not communicate with memory directly: it first reads data from system memory into its internal cache and operates on it there, but it is not known when the modified cache data will be written back to memory. If the write is to a volatile variable, however, the JVM sends a lock-prefixed instruction to the processor, which writes the cache line containing that variable back to system memory. Writing it back is not enough, though: the copies of that cache line held by other processors are now stale. To make the other processors refresh their data from the newly written memory, a cache coherency protocol is needed. After one processor writes its cache line back to system memory, each other processor checks whether its own cached data has expired by sniffing the traffic traveling on the bus; when a processor finds that the memory address corresponding to one of its cache lines has been modified, it marks that cache line invalid, and the next time it operates on that data it re-reads it from system memory into its own cache line.

To summarize: volatile visibility is implemented by means of the CPU lock instruction. By adding the lock prefix to the machine instruction for the volatile write, a volatile write follows two principles:

    Writing a volatile variable causes the processor's cache to be written back to main memory.

    One processor's cache being written back to memory invalidates the other processors' caches of that data.

2.2 How volatile implements ordering

volatile guarantees ordering by prohibiting instruction reordering. Instruction reordering includes both compiler reordering and processor reordering, and the JMM restricts both kinds.

How is instruction reordering prohibited? The answer is memory barriers. The JMM inserts memory barriers for volatile in the following four cases:

    Insert a StoreStore barrier before each volatile write, to prevent the volatile write from being reordered with earlier ordinary writes.

    Insert a StoreLoad barrier after each volatile write, to prevent the volatile write from being reordered with later reads.

    Insert a LoadLoad barrier after each volatile read, to prevent the volatile read from being reordered with later reads.

    Insert a LoadStore barrier after each volatile read, to prevent the volatile read from being reordered with later writes.

The barrier-insertion strategy above is very conservative. For example, a volatile write gets both a StoreStore barrier before it and a StoreLoad barrier after it, but the write may not actually be followed by a read, so in theory the StoreLoad barrier could sometimes be omitted - and indeed some processors do exactly that. But with this conservative barrier-insertion strategy the JMM can guarantee that volatile variables are ordered on any processor platform.
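The four insertion rules can be annotated on a small sketch (the class and field names are my own; the comments mark where the JMM's conservative strategy places barriers, which are not visible in Java source):

```java
public class BarrierSketch {
    int a;              // ordinary field
    volatile int v;     // volatile field

    void writer() {
        a = 1;          // ordinary write
                        // StoreStore barrier: a = 1 commits before v = 2
        v = 2;          // volatile write
                        // StoreLoad barrier: v = 2 commits before later reads
    }

    void reader() {
        int r1 = v;     // volatile read
                        // LoadLoad barrier: no later read moves before this read
                        // LoadStore barrier: no later write moves before this read
        int r2 = a;     // if r1 == 2, then r2 is guaranteed to be 1
    }
}
```

The comments in reader() restate the happens-before consequence: a thread that observes v == 2 must also observe a == 1.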

3. JSR-133's enhancement of volatile memory semantics

In the old Java memory model before JSR-133, reordering between operations on volatile variables was not allowed, but reordering between a volatile variable and an ordinary variable was. For example, when the earlier operation is a volatile write and the later operation is an ordinary write, reordering the two could break the volatile write's memory semantics, yet the old JMM allowed this reordering.

JSR-133 and the Java memory models that followed it enhance the volatile memory semantics: whenever reordering between a volatile variable and an ordinary variable could break the volatile memory semantics, that reordering is prohibited by the compiler's reordering rules and by the processor memory-barrier insertion policy.
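The strengthened guarantee is exactly what makes the common safe-publication idiom work. A minimal sketch (names are my own): because the ordinary write to data may not be reordered past the volatile write to ready, a reader that sees ready == true must also see data == 42.

```java
public class SafePublication {
    private int data = 0;                   // ordinary field
    private volatile boolean ready = false; // volatile flag

    public void publish() {
        data = 42;      // ordinary write; may NOT move after the volatile write
        ready = true;   // volatile write publishes everything before it
    }

    public int consume() {
        if (ready) {    // volatile read
            return data; // guaranteed to observe 42, per JSR-133
        }
        return -1;       // not yet published
    }
}
```

Under the pre-JSR-133 model, the two writes in publish() could be reordered, and a reader could see ready == true while data was still 0.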

Appendix: configuring IDEA to print assembly instructions

Toolkit download link: https: //
    Extraction code: gn8z

After downloading and extracting the toolkit, copy it into the bin directory under the jre path of your JDK installation, as shown:

Then configure IDEA. In the VM options field, enter: -server -Xcomp -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly -XX:CompileCommand=compileonly,*ClassName.methodName (replacing ClassName and methodName with your own class and method names).

For the JRE option, select the jre path where the toolkit was placed.

Below is my IDEA configuration:

After the configuration above, running the program will print the assembly instructions.