In-Depth Analysis of Kubebuilder: Making It Easier to Write CRDs

Author | Liu Yang, Senior Development Engineer at Alibaba Cloud

Abstract: Custom resources defined via CRD (Custom Resource Definition) can extend the Kubernetes API, and mastering CRDs is an essential skill for advanced Kubernetes players. This article introduces the concepts of CRDs and Controllers and gives an in-depth analysis of Kubebuilder, a framework for writing CRDs, so that you can truly understand CRDs and develop them quickly.

Overview

Controller pattern and declarative API

Before formally introducing Kubebuilder, we first need to understand the controller pattern and the declarative API, which are used extensively in the underlying implementation of K8s and exposed to users. They are the foundation for understanding CRDs and Kubebuilder.

Controller pattern

K8s, as a "container orchestration" platform, has orchestration as its core capability. The minimum scheduling unit in K8s is the Pod, which has many attributes and fields; K8s orchestration is implemented by controllers that act on the controlled objects according to those attributes and fields.
    Let's look at an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 2
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

The K8s cluster ships a set of built-in controllers: each built-in resource type (Deployment, StatefulSet, CronJob, ...) has a corresponding Controller, in roughly a 1:1 relationship. In the example above, after the Deployment resource is created, the scheduling logic of the corresponding Deployment Controller is very simple: make sure the number of Pods with app=test always equals 2, where the Pods are defined by the template section. Concretely, this work is done by the kube-controller-manager component; you can look at the pkg/controller directory of the K8s project, which contains all the controllers. Each one is responsible for some orchestration capability in its own way, but they all follow the same pattern, the reconcile loop, whose pseudocode logic is:

for {
    actualState := GetResourceActualState(rsvc)
    expectState := GetResourceExpectState(rsvc)
    if actualState == expectState {
        // do nothing
    } else {
        Reconcile(rsvc)
    }
}

It is an infinite loop (in practice implemented with event-driven triggers plus periodic resync, not a brainless busy loop) that continuously compares the actual state with the desired state; if they differ, Reconcile is executed to adjust the actual state towards the desired state. The desired state is our object definition (usually a YAML file); the actual state is the current running state in the cluster (usually aggregated from the cluster's resource status). The controller's main orchestration logic happens in the third step, the adjustment operation, which is called reconciliation (Reconcile); the whole controller process is therefore called the "Reconcile Loop", and the outcome of a reconciliation is generally some write operation on the controlled target, such as adding/deleting/updating Pods.
    The definition of the object being controlled is completed through a "template" in the controller. For example, the template field inside a Deployment defines a standard Pod object with the API; all Pod instances managed by this Deployment are created from this template field, the PodTemplate. In general, a control definition is composed of an upper half describing the desired state of the controller itself, plus a lower half that is the template of the controlled target.

Declarative API

"Declarative" means "tell K8s what you want, instead of commanding it how to do it". A familiar analogy is SQL: you tell the DB which data to return, using conditions and operators, rather than telling it how to traverse, filter, and aggregate. In K8s, the declarative style shows up in the kubectl apply command: you always use the same command for both creating an object and updating it later, telling K8s the desired final state of the object. Under the hood this is implemented as a PATCH operation on the original API object, so multiple write operations on the same object can be handled together and merged into a final diff, whereas imperative commands can only handle one write request at a time.

The declarative API makes the K8s "container orchestration" world look gentle and beautiful, while the controllers (together with the container runtime, storage, networking model, etc.) are the unsung heroes behind it. At this point, people naturally hope to build their own custom resources (CRD, Custom Resource Definition) just like the built-in resources, write a controller for the custom resource, and launch their own declarative API. K8s provides the CRD extension mechanism to meet this need, and because this extension mechanism is so flexible, version 1.15 brought considerable enhancements to CRDs. For users, implementing a CRD extension mainly involves two things:

    Write the CRD and deploy it to the K8s cluster;

The effect of this step is to make K8s aware of the resource and its structural properties: when the user submits a definition of the custom resource (usually a YAML file), K8s can validate the resource, create the corresponding Go struct, persist it, and at the same time trigger the controller's reconciliation logic.

    Write the Controller and deploy it to the K8s cluster.

The effect of this step is to implement the reconciliation logic.
    Kubebuilder is a tool that helps us simplify these two things; now let's introduce the protagonist.

What is Kubebuilder?

Summary

Kubebuilder is an SDK for building K8s APIs with CRDs. It mainly:

    provides scaffolding tools to initialize a CRD project and automatically generate boilerplate code and configuration;

    provides library packages that wrap the underlying K8s go-client;

so that users can develop CRDs, Controllers, and Admission Webhooks from scratch to extend K8s.

Key concepts

GVKs&GVRs

GVK = GroupVersionKind, GVR = GroupVersionResource.

API Group & Versions(GV)

An API Group is a collection of related API functionality; each Group has one or more Versions, which are used to evolve the interface over time.

Kinds & Resources

Each GV contains multiple API types, called Kinds; the definition of the same Kind may differ between Versions. A Resource is the object identifier of a Kind (the resource type), and Kinds and Resources are generally 1:1: for example, the pods Resource corresponds to the Pod Kind. Sometimes the same Kind corresponds to multiple Resources; for example the Scale Kind corresponds to many Resources such as deployments/scale and replicasets/scale. For CRDs, the relationship is always 1:1. Each GVK is associated with a root Go type in a given package; for example apps/v1/Deployment is associated with the Deployment struct in the k8s.io/api/apps/v1 package of the K8s source. The YAML files we submit for all kinds of resource definitions need to specify:

    apiVersion: this is the GV;

    kind: this is the K.

With the GVK, K8s knows exactly which kind of resource you want to create; once the resource is created according to the Spec you define, it becomes a Resource, i.e. a GVR. GVK/GVR is the coordinate system of K8s resources and the basis on which we create/delete/modify/read resources.
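
As a concrete illustration, the schema package in apimachinery models these coordinates directly. The following minimal Go snippet, using the built-in Deployment type purely as an example, shows what a GVK and its corresponding GVR look like:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // GVK identifies a kind within an API group/version.
    gvk := schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}
    // GVR identifies the REST resource the API Server exposes for that kind.
    gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
    fmt.Println(gvk.String()) // apps/v1, Kind=Deployment
    fmt.Println(gvr.String()) // apps/v1, Resource=deployments
}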

Scheme

Each set of Controllers needs a Scheme, which provides the mapping between Kinds and Go types: given a Go type it knows the GVK, and given a GVK it knows the Go type. For example, suppose we tell a Scheme that the Go type "tutorial.kubebuilder.io/api/v1".CronJob{} maps to the GVK batch.tutorial.kubebuilder.io/v1 CronJob; then when the API Server returns the following JSON:

{
    "kind": "CronJob",
    "apiVersion": "batch.tutorial.kubebuilder.io/v1",
    ...
}

the corresponding Go type can be constructed; through this Go type the correct GVR information can also be obtained, and the controller can fetch the desired state of the resource and the other auxiliary information its reconciliation logic needs.
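
A minimal sketch of this two-way mapping, using the built-in batch/v1 Job type instead of the tutorial's CronJob (the client-go scheme package already registers all built-in types):

package main

import (
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func main() {
    s := runtime.NewScheme()
    _ = clientgoscheme.AddToScheme(s) // register built-in types, just like AddToScheme in main.go

    // Go type -> GVK
    gvks, _, _ := s.ObjectKinds(&batchv1.Job{})
    fmt.Println(gvks) // [batch/v1, Kind=Job]

    // GVK -> Go type
    obj, _ := s.New(schema.GroupVersionKind{Group: "batch", Version: "v1", Kind: "Job"})
    fmt.Printf("%T\n", obj) // *v1.Job
}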

Manager

The Manager is Kubebuilder's core component and has three responsibilities:

    running all the Controllers;

    initializing the shared Cache, which provides the list-and-watch capability;

    initializing the Clients used to communicate with the API Server.

Cache

The Cache is another Kubebuilder core component. It is responsible for synchronizing from the API Server, according to the Scheme, the GVRs of all the GVKs that the Controllers care about. Its core is a GVK -> Informer mapping; each Informer watches the create/delete/update operations on the corresponding GVR and triggers the Controller's Reconcile logic.

Controller

Kubebuilder generates the scaffolding files for us; we only need to implement the Reconcile method.

Clients

When implementing a Controller we inevitably need to create/delete/update certain resources, which is done through the Clients. Queries actually read from the local Cache, while write operations go directly to the API Server.

Index

Because the Controller queries the Cache frequently, Kubebuilder provides an Index utility to add indexes to the Cache and improve query efficiency.
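
For example, a hedged sketch of how an index might be registered through the Manager's FieldIndexer. Application and its Spec.AppName field are hypothetical names, and the exact IndexField signature differs across controller-runtime versions (newer releases also take a context.Context and client.Object); this follows the older API from around the time of this article:

// RegisterIndex adds a cache index on the hypothetical spec.appName field,
// so that List calls can filter on it via client.MatchingFields.
func RegisterIndex(mgr ctrl.Manager) error {
    return mgr.GetFieldIndexer().IndexField(&appsv1alpha1.Application{}, "spec.appName",
        func(obj runtime.Object) []string {
            app := obj.(*appsv1alpha1.Application)
            return []string{app.Spec.AppName}
        })
}

// Inside a Reconciler, the indexed field can then be queried from the Cache:
//   var list appsv1alpha1.ApplicationList
//   err := r.Client.List(ctx, &list, client.MatchingFields{"spec.appName": "demo"})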

Finalizer

In general, once a resource has been deleted, even though we still receive the delete event, we can no longer read any information about the deleted object from the Cache, which means a lot of cleanup work cannot be done for lack of information. The K8s Finalizer field is used to handle this situation. In K8s, as long as the Finalizers field in an object's ObjectMeta is not empty, a delete operation on the object is turned into an update operation, specifically an update of the deletionTimestamp field. Its meaning is to tell the K8s GC: "once deletionTimestamp has passed and Finalizers is empty, delete the object immediately."
    So the usual pattern is to set Finalizers (to any string) when the object is created, and then, on update operations where DeletionTimestamp is not empty (which are actually deletes), execute all the pre-delete hooks according to the Finalizers values (at this point the information of the object being deleted can still be read from the Cache), and finally set Finalizers to empty.
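
A minimal sketch of this pattern as a helper called from Reconcile, assuming the Application type scaffolded in the usage section below and a hypothetical cleanupExternalResources hook; imports are omitted as in the other excerpts:

const appFinalizer = "application.finalizers.edas.io" // any string works as a finalizer

func (r *ApplicationReconciler) handleFinalizer(ctx context.Context, app *appsv1alpha1.Application) error {
    if app.ObjectMeta.DeletionTimestamp.IsZero() {
        // Not being deleted: make sure our finalizer is present so deletes become updates.
        if !containsString(app.ObjectMeta.Finalizers, appFinalizer) {
            app.ObjectMeta.Finalizers = append(app.ObjectMeta.Finalizers, appFinalizer)
            return r.Client.Update(ctx, app)
        }
        return nil
    }
    // Being deleted (DeletionTimestamp set): the object is still readable, so run the
    // pre-delete hooks, then clear the finalizer so the GC can remove the object.
    if containsString(app.ObjectMeta.Finalizers, appFinalizer) {
        if err := cleanupExternalResources(ctx, app); err != nil { // hypothetical cleanup hook
            return err
        }
        app.ObjectMeta.Finalizers = removeString(app.ObjectMeta.Finalizers, appFinalizer)
        return r.Client.Update(ctx, app)
    }
    return nil
}

func containsString(list []string, s string) bool {
    for _, item := range list {
        if item == s {
            return true
        }
    }
    return false
}

func removeString(list []string, s string) []string {
    var out []string
    for _, item := range list {
        if item != s {
            out = append(out, item)
        }
    }
    return out
}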

OwnerReference

When the K8s GC deletes an object, any object whose ownerReference points to that object is also cleaned up. At the same time, Kubebuilder supports triggering the Reconcile method of the Owner object's controller whenever any of the owned objects changes.
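
A hedged sketch of how an ownerReference is typically set when the controller creates a child resource; app and r.Scheme follow the scaffold from the usage section, appsv1 is k8s.io/api/apps/v1, and controllerutil is sigs.k8s.io/controller-runtime/pkg/controller/controllerutil:

// Inside Reconcile: create a Deployment owned by the Application so that deleting
// the Application cascades, and Deployment changes requeue the Application.
deploy := &appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{Name: app.Name, Namespace: app.Namespace},
    // Spec elided
}
if err := controllerutil.SetControllerReference(app, deploy, r.Scheme); err != nil {
    return ctrl.Result{}, err
}
if err := r.Client.Create(ctx, deploy); err != nil {
    return ctrl.Result{}, err
}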

All these concepts put together are shown in Figure 1:

Figure 1 - Kubebuilder core concepts

How to use Kubebuilder?

1. Create the project scaffold

kubebuilder init --domain edas.io

This step creates a Go module project, pulls in the necessary dependencies, and creates some template files.

2. Create API

kubebuilder create api --group apps --version v1alpha1 --kind Application

This step creates the template files for the CRD and the Controller. After these two steps, the typical project structure is shown in Figure 2:

    Figure 2 - Project structure generated by Kubebuilder

3. Define the CRD

Define the Spec and Status in the corresponding file shown in Figure 2.
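
For example, the scaffolded api/v1alpha1/application_types.go might be filled in roughly like this; the Spec/Status fields are hypothetical, and the kubebuilder markers drive code and CRD manifest generation:

// ApplicationSpec defines the desired state of Application (example fields only).
type ApplicationSpec struct {
    Image    string `json:"image"`
    Replicas int32  `json:"replicas"`
}

// ApplicationStatus defines the observed state of Application.
type ApplicationStatus struct {
    AvailableReplicas int32 `json:"availableReplicas"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Application is the Schema for the applications API.
type Application struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   ApplicationSpec   `json:"spec,omitempty"`
    Status ApplicationStatus `json:"status,omitempty"`
}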

4. Write the Controller logic

Implement the Reconcile logic in the corresponding controller file.
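
A minimal Reconcile skeleton under the same assumptions; the single-argument signature matches the Kubebuilder version analyzed below, while newer versions also receive a context.Context:

func (r *ApplicationReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("application", req.NamespacedName)

    // 1. Fetch the desired state (the Application object) from the Cache.
    var app appsv1alpha1.Application
    if err := r.Client.Get(ctx, req.NamespacedName, &app); err != nil {
        // NotFound usually means the object was deleted; other errors are retried.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // 2. Compare with the actual cluster state and converge towards app.Spec,
    //    e.g. create or update owned Deployments/Services here.

    // 3. Write back what was observed into app.Status.
    log.Info("reconciled")
    return ctrl.Result{}, nil
}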

5. Test and publish

After testing, use the Makefile generated by Kubebuilder to build the image locally and deploy our CRDs and Controller; the typical targets are shown below.
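
The exact targets depend on the Kubebuilder version, but the generated Makefile typically supports something like:

make install                                   # install the CRDs into the cluster
make docker-build docker-push IMG=<some-registry>/<image>:<tag>
make deploy IMG=<some-registry>/<image>:<tag>  # deploy the Controller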

Why does Kubebuilder exist?

It makes extending K8s simpler. There are many ways to extend K8s, and Kubebuilder currently focuses on the CRD approach.

Digging deeper

While using Kubebuilder, a few questions kept bothering me:

    How are custom resources and K8s built-in resources synchronized?

    How is the Controller's Reconcile method triggered?

    How does the Cache work?

With these questions in mind, let's look at the source code :D

Source Reading

Starting from main.go

main.go is the entry point of a project created by Kubebuilder, and its logic is quite simple:

var (
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)
func init() {
    appsv1alpha1.AddToScheme(scheme)
    // +kubebuilder:scaffold:scheme
}
func main() {
    ...
        // 1、init Manager
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme, MetricsBindAddress: metricsAddr})
    if err != nil {
        setupLog.Error(err, "unable to start manager")
        os.Exit(1)
    }
        // 2、init Reconciler(Controller)
    err = (&controllers.ApplicationReconciler{
        Client: mgr.GetClient(),
        Log:    ctrl.Log.WithName("controllers").WithName("Application"),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr)
    if err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "EDASApplication")
        os.Exit(1)
    }
    // +kubebuilder:scaffold:builder
    setupLog.Info("starting manager")
        // 3、start Manager
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}

You can see that in the init method we register appsv1alpha1 into the Scheme, so that the Cache knows what to watch; the main method is basically all about the Manager:

    initialize a Manager;

    pass the Manager's Client to the Controller, and call the Controller's SetupWithManager method with the Manager to initialize the Controller;

    start the Manager.

These three processes are the core of what we need to look at.

Manager initialization

The Manager initialization code is as follows:

// New returns a new Manager for creating Controllers.
func New(config *rest.Config, options Options) (Manager, error) {
    ...
    // Create the cache for the cached read client and registering informers
    cache, err := options.NewCache(config, cache.Options{Scheme: options.Scheme, Mapper: mapper, Resync: options.SyncPeriod, Namespace: options.Namespace})
    if err != nil {
        return nil, err
    }
    apiReader, err := client.New(config, client.Options{Scheme: options.Scheme, Mapper: mapper})
    if err != nil {
        return nil, err
    }
    writeObj, err := options.NewClient(cache, config, client.Options{Scheme: options.Scheme, Mapper: mapper})
    if err != nil {
        return nil, err
    }
    ...
    return &controllerManager{
        config:           config,
        scheme:           options.Scheme,
        errChan:          make(chan error),
        cache:            cache,
        fieldIndexes:     cache,
        client:           writeObj,
        apiReader:        apiReader,
        recorderProvider: recorderProvider,
        resourceLock:     resourceLock,
        mapper:           mapper,
        metricsListener:  metricsListener,
        internalStop:     stop,
        internalStopper:  stop,
        port:             options.Port,
        host:             options.Host,
        leaseDuration:    *options.LeaseDuration,
        renewDeadline:    *options.RenewDeadline,
        retryPeriod:      *options.RetryPeriod,
    }, nil
}

We can see that it mainly creates the Cache and the Clients:

Creating the Cache

The Cache initialization code is as follows:

// New initializes and returns a new Cache.
func New(config *rest.Config, opts Options) (Cache, error) {
    opts, err := defaultOpts(config, opts)
    if err != nil {
        return nil, err
    }
    im := internal.NewInformersMap(config, opts.Scheme, opts.Mapper, *opts.Resync, opts.Namespace)
    return &informerCache{InformersMap: im}, nil
}
// newSpecificInformersMap returns a new specificInformersMap (like
// the generical InformersMap, except that it doesn't implement WaitForCacheSync).
func newSpecificInformersMap(...) *specificInformersMap {
    ip := &specificInformersMap{
        Scheme:            scheme,
        mapper:            mapper,
        informersByGVK:    make(map[schema.GroupVersionKind]*MapEntry),
        codecs:            serializer.NewCodecFactory(scheme),
        resync:            resync,
        createListWatcher: createListWatcher,
        namespace:         namespace,
    }
    return ip
}
// MapEntry contains the cached data for an Informer
type MapEntry struct {
    // Informer is the cached informer
    Informer cache.SharedIndexInformer
    // CacheReader wraps Informer and implements the CacheReader interface for a single type
    Reader CacheReader
}
func createUnstructuredListWatch(gvk schema.GroupVersionKind, ip *specificInformersMap) (*cache.ListWatch, error) {
        ...
    // Create a new ListWatch for the obj
    return &cache.ListWatch{
        ListFunc: func(opts metav1.ListOptions) (runtime.Object, error) {
            if ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot {
                return dynamicClient.Resource(mapping.Resource).Namespace(ip.namespace).List(opts)
            }
            return dynamicClient.Resource(mapping.Resource).List(opts)
        },
        // Setup the watch function
        WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
            // Watch needs to be set to true separately
            opts.Watch = true
            if ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot {
                return dynamicClient.Resource(mapping.Resource).Namespace(ip.namespace).Watch(opts)
            }
            return dynamicClient.Resource(mapping.Resource).Watch(opts)
        },
    }, nil
}

We can see that creating the Cache mainly creates an InformersMap: every GVK in the Scheme gets a corresponding Informer, the informersByGVK map records the GVK -> Informer mapping, and each Informer performs List and Watch according to its ListWatch functions.

Creating the Clients

Creating the Clients is very simple:

// defaultNewClient creates the default caching client
func defaultNewClient(cache cache.Cache, config *rest.Config, options client.Options) (client.Client, error) {
    // Create the Client for Write operations.
    c, err := client.New(config, options)
    if err != nil {
        return nil, err
    }
    return &client.DelegatingClient{
        Reader: &client.DelegatingReader{
            CacheReader:  cache,
            ClientReader: c,
        },
        Writer:       c,
        StatusClient: c,
    }, nil
}

Read operations use the Cache created above; write operations go directly to the API Server via the K8s go-client.
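
In other words, from inside a Reconciler the same r.Client behaves differently depending on the verb. A small illustrative sketch (corev1 is k8s.io/api/core/v1, the object names are arbitrary, and built-in types must be registered in the Scheme for the cached read to work):

var pod corev1.Pod
// Get and List are served from the local informer Cache.
if err := r.Client.Get(ctx, client.ObjectKey{Namespace: "default", Name: "test"}, &pod); err != nil {
    return ctrl.Result{}, err
}
// Create/Update/Delete (and Status().Update()) go directly to the API Server.
if pod.Labels == nil {
    pod.Labels = map[string]string{}
}
pod.Labels["reconciled"] = "true"
if err := r.Client.Update(ctx, &pod); err != nil {
    return ctrl.Result{}, err
}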

Controller initialization

Let's look at how the Controller is initialized:

func (r *EDASApplicationReconciler) SetupWithManager(mgr ctrl.Manager) error {
    err := ctrl.NewControllerManagedBy(mgr).
        For(&appsv1alpha1.EDASApplication{}).
        Complete(r)
    return err
}

The Builder pattern is used here: NewControllerManagedBy and For pass parameters to the Builder, and the most important call is the final Complete, whose logic is:

func (blder *Builder) Build(r reconcile.Reconciler) (manager.Manager, error) {
...
    // Set the Manager
    if err := blder.doManager(); err != nil {
        return nil, err
    }
    // Set the ControllerManagedBy
    if err := blder.doController(r); err != nil {
        return nil, err
    }
    // Set the Watch
    if err := blder.doWatch(); err != nil {
        return nil, err
    }
...
    return blder.mgr, nil
}

We mainly need to look at the doController and doWatch methods:

doController method

func New(name string, mgr manager.Manager, options Options) (Controller, error) {
    if options.Reconciler == nil {
        return nil, fmt.Errorf("must specify Reconciler")
    }
    if len(name) == 0 {
        return nil, fmt.Errorf("must specify Name for Controller")
    }
    if options.MaxConcurrentReconciles <= 0 {
        options.MaxConcurrentReconciles = 1
    }
    // Inject dependencies into Reconciler
    if err := mgr.SetFields(options.Reconciler); err != nil {
        return nil, err
    }
    // Create controller with dependencies set
    c := &controller.Controller{
        Do:                      options.Reconciler,
        Cache:                   mgr.GetCache(),
        Config:                  mgr.GetConfig(),
        Scheme:                  mgr.GetScheme(),
        Client:                  mgr.GetClient(),
        Recorder:                mgr.GetEventRecorderFor(name),
        Queue:                   workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), name),
        MaxConcurrentReconciles: options.MaxConcurrentReconciles,
        Name:                    name,
    }
    // Add the controller as a Manager components
    return c, mgr.Add(c)
}

This method initializes a Controller, passing in some important parameters:

    Do: the Reconcile logic;

    Cache: used to find Informers and register Watches on them;

    Client: used for CRUD operations on K8s resources;

    Queue: the event queue caching the CUD events of watched resources;

    Recorder: used for collecting events.

doWatch method

func (blder *Builder) doWatch() error {
    // Reconcile type
    src := &source.Kind{Type: blder.apiType}
    hdler := &handler.EnqueueRequestForObject{}
    err := blder.ctrl.Watch(src, hdler, blder.predicates...)
    if err != nil {
        return err
    }
    // Watches the managed types
    for _, obj := range blder.managedObjects {
        src := &source.Kind{Type: obj}
        hdler := &handler.EnqueueRequestForOwner{
            OwnerType:    blder.apiType,
            IsController: true,
        }
        if err := blder.ctrl.Watch(src, hdler, blder.predicates...); err != nil {
            return err
        }
    }
    // Do the watch requests
    for _, w := range blder.watchRequest {
        if err := blder.ctrl.Watch(w.src, w.eventhandler, blder.predicates...); err != nil {
            return err
        }
    }
    return nil
}

We can see that this method makes the Controller watch its own CRD, and it also watches the other resources managed by this CRD; these managedObjects are passed in via the Owns method of the Builder during Controller initialization. Two pieces of logic in Watch are of interest to us:

    The registered handler

type EnqueueRequestForObject struct{}
// Create implements EventHandler
func (e *EnqueueRequestForObject) Create(evt event.CreateEvent, q workqueue.RateLimitingInterface) {
        ...
    q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
        Name:      evt.Meta.GetName(),
        Namespace: evt.Meta.GetNamespace(),
    }})
}
// Update implements EventHandler
func (e *EnqueueRequestForObject) Update(evt event.UpdateEvent, q workqueue.RateLimitingInterface) {
    if evt.MetaOld != nil {
        q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
            Name:      evt.MetaOld.GetName(),
            Namespace: evt.MetaOld.GetNamespace(),
        }})
    } else {
        enqueueLog.Error(nil, "UpdateEvent received with no old metadata", "event", evt)
    }
    if evt.MetaNew != nil {
        q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
            Name:      evt.MetaNew.GetName(),
            Namespace: evt.MetaNew.GetNamespace(),
        }})
    } else {
        enqueueLog.Error(nil, "UpdateEvent received with no new metadata", "event", evt)
    }
}
// Delete implements EventHandler
func (e *EnqueueRequestForObject) Delete(evt event.DeleteEvent, q workqueue.RateLimitingInterface) {
        ...
    q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
        Name:      evt.Meta.GetName(),
        Namespace: evt.Meta.GetNamespace(),
    }})
}

We can see that the handler Kubebuilder registers for us simply puts the NamespacedName of the changed object into the queue. If the Reconcile logic needs to distinguish create/update/delete, it has to make that decision itself, for example as sketched below.
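
A common way to make that decision inside Reconcile is to Get the object and inspect the error and DeletionTimestamp; apierrors is k8s.io/apimachinery/pkg/api/errors, and the Application type follows the earlier scaffold:

var app appsv1alpha1.Application
err := r.Client.Get(ctx, req.NamespacedName, &app)
switch {
case apierrors.IsNotFound(err):
    // Already gone from the Cache: the request came from a delete event
    // (without a Finalizer there is nothing left to read).
    return ctrl.Result{}, nil
case err != nil:
    return ctrl.Result{}, err
case !app.ObjectMeta.DeletionTimestamp.IsZero():
    // Being deleted (Finalizers not yet empty): run pre-delete cleanup here.
    return ctrl.Result{}, nil
default:
    // Create or update: converge the cluster towards app.Spec.
    return ctrl.Result{}, nil
}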

    The registration process

// Watch implements controller.Controller
func (c *Controller) Watch(src source.Source, evthdler handler.EventHandler, prct ...predicate.Predicate) error {
    ...
    log.Info("Starting EventSource", "controller", c.Name, "source", src)
    return src.Start(evthdler, c.Queue, prct...)
}
// Start is internal and should be called only by the Controller to register an EventHandler with the Informer
// to enqueue reconcile.Requests.
func (is *Informer) Start(handler handler.EventHandler, queue workqueue.RateLimitingInterface,
    prct ...predicate.Predicate) error {
    ...
    is.Informer.AddEventHandler(internal.EventHandler{Queue: queue, EventHandler: handler, Predicates: prct})
    return nil
}

Our handler is actually registered on the Informer. This strings the whole logic together: through the Cache we create Informers for all the GVKs in the Scheme, and the Controller registers its Watch Handler on the Informer of the corresponding GVK. As a result, changes to resources of that GVK trigger the Handler, which writes the change event into the Controller's event queue, which in turn triggers our Reconcile method.

Manager Start

func (cm *controllerManager) Start(stop <-chan struct{}) error {
    ...
    go cm.startNonLeaderElectionRunnables()
    ...
}
func (cm *controllerManager) startNonLeaderElectionRunnables() {
    ...
    // Start the Cache. Allow the function to start the cache to be mocked out for testing
    if cm.startCache == nil {
        cm.startCache = cm.cache.Start
    }
    go func() {
        if err := cm.startCache(cm.internalStop); err != nil {
            cm.errChan <- err
        }
    }()
        ...
        // Start Controllers
    for _, c := range cm.nonLeaderElectionRunnables {
        ctrl := c
        go func() {
            cm.errChan <- ctrl.Start(cm.internalStop)
        }()
    }
    cm.started = true
}

This mainly starts the Cache and the Controllers so that the whole event flow starts running. Let's look at the startup logic of each.

Cache start

func (ip *specificInformersMap) Start(stop <-chan struct{}) {
    func() {
        ...
        // Start each informer
        for _, informer := range ip.informersByGVK {
            go informer.Informer.Run(stop)
        }
    }()
}
func (s *sharedIndexInformer) Run(stopCh <-chan struct{}) {
        ...
        // informer push resource obj CUD delta to this fifo queue
    fifo := NewDeltaFIFO(MetaNamespaceKeyFunc, s.indexer)
    cfg := &Config{
        Queue:            fifo,
        ListerWatcher:    s.listerWatcher,
        ObjectType:       s.objectType,
        FullResyncPeriod: s.resyncCheckPeriod,
        RetryOnError:     false,
        ShouldResync:     s.processor.shouldResync,
                // handler to process delta
        Process: s.HandleDeltas,
    }
    func() {
        s.startedLock.Lock()
        defer s.startedLock.Unlock()
                // this is internal controller process delta generate by reflector
        s.controller = New(cfg)
        s.controller.(*controller).clock = s.clock
        s.started = true
    }()
        ...
    wg.StartWithChannel(processorStopCh, s.processor.run)
    s.controller.Run(stopCh)
}
func (c *controller) Run(stopCh <-chan struct{}) {
    ...
    r := NewReflector(
        c.config.ListerWatcher,
        c.config.ObjectType,
        c.config.Queue,
        c.config.FullResyncPeriod,
    )
    ...
        // reflector is delta producer
    wg.StartWithChannel(stopCh, r.Run)
        // internal controller's processLoop is comsume logic
    wait.Until(c.processLoop, time.Second, stopCh)
}

The core of starting the Cache is starting all of its Informers. The core of an Informer is its reflector and internal controller: the reflector watches the specified GVK on the API Server and writes the changes into the delta FIFO queue (you can think of these as change events), while the internal controller is the consumer of the change events. It is responsible for updating the local indexer and computing the CUD events that are delivered to the Watch Handler we registered earlier.

Controller startup

// Start implements controller.Controller
func (c *Controller) Start(stop <-chan struct{}) error {
    ...
    for i := 0; i < c.MaxConcurrentReconciles; i++ {
        // Process work items
        go wait.Until(func() {
            for c.processNextWorkItem() {
            }
        }, c.JitterPeriod, stop)
    }
    ...
}
func (c *Controller) processNextWorkItem() bool {
    ...
    obj, shutdown := c.Queue.Get()
    ...
    var req reconcile.Request
    var ok bool
    if req, ok = obj.(reconcile.Request); !ok {
        ...
    }
    // RunInformersAndControllers the syncHandler, passing it the namespace/Name string of the
    // resource to be synced.
    if result, err := c.Do.Reconcile(req); err != nil {
        c.Queue.AddRateLimited(req)
        ...
    } 
        ...
}

Starting the Controller essentially starts goroutines that keep pulling from the queue; whenever a change message arrives, our custom Reconcile logic is triggered.

Putting the overall logic together

After reading through the source code above, the whole process should already be quite clear; but as the saying goes, a picture is worth a thousand words, so I have drawn a diagram of the overall logic (Figure 3) to help you understand it:

    Figure 3 - Kubebuilder overall logic
    Kubebuilder as a scaffolding tool has already done a lot for us; in the end we only need to implement the Reconcile method, which will not be repeated here.

Revisiting the earlier questions

When I first started using Kubebuilder, its high level of encapsulation made many things feel like a black box. After the analysis above, the questions raised at the beginning become very clear:

    How are custom resources and K8s built-in resources synchronized?

The GVKs of the custom resources and of the K8s built-in resources that you want to Watch need to be registered into the Scheme, and the Cache automatically synchronizes them for us.

    How is the Controller's Reconcile method triggered?

Resource change events are obtained through the Informers inside the Cache, passed along in producer-consumer fashion through two internal controllers, and finally trigger the Reconcile method.

    How does the Cache work?

It is a GVK -> Informer mapping; each Informer contains a Reflector and an Indexer that do the event listening and local caching.
    There are many smaller questions that I won't go through one by one; by now Kubebuilder is no longer a black box.

Comparison of similar tools

The Operator Framework is similar to Kubebuilder; due to space constraints it is not expanded on here.

Best Practices

Patterns

    Use OwnerReference to associate resources; this brings two features (see the sketch after this list):

    when the Owner resource is deleted, the owned resources are cascade-deleted, which leverages the K8s GC;

    change events of the owned resources trigger the Reconcile method of the Owner object;

    Use Finalizer to clean up resources.
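
A hedged sketch of wiring the OwnerReference pattern up in SetupWithManager, assuming the Application CRD owns Deployments (appsv1 is k8s.io/api/apps/v1):

func (r *ApplicationReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&appsv1alpha1.Application{}).
        // Owns registers a Watch on Deployments whose controller OwnerReference
        // is an Application, so their changes requeue the owning Application.
        Owns(&appsv1.Deployment{}).
        Complete(r)
}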

Caveats

    Without a Finalizer, no information about a resource can be read after it has been deleted;

    Changes to an object's Status field also trigger the Reconcile method;

    The Reconcile logic needs to be idempotent;

Optimization

Use an IndexFunc to improve the efficiency of resource queries (see the Index example earlier).

Conclusion

Through this in-depth analysis, we can see that the functionality Kubebuilder provides is very helpful for quickly writing CRDs and Controllers. Whether it is Istio, Knative, or the various other well-known custom Operators, they all make heavy use of CRDs, abstracting their components as CRDs and letting Kubernetes become their control plane; this is becoming a trend, and I hope this article helps you understand and take advantage of it.

"Alibaba Cloud native micro-channel public number (ID: Alicloudnative) focus on micro service, Serverless, container, Service Mesh and other technical fields, focusing popular technology trends in cloud native, cloud native large-scale landing practice, do most understand cloud native developers technology public number. "
