2 - Getting Started

Introduction & Resources on Operators

Operators manage both cluster and non-cluster resources on behalf of Kubernetes. This Java Operator SDK (JOSDK) aims at making it as easy as possible to write Kubernetes operators in Java using an API that should feel natural to Java developers and without having to worry about many low-level details that the SDK handles automatically.

For an introduction to operators, please see this blog post or this talk.

You can read about the common problems JOSDK is solving for you here.

You can also refer to the Writing Kubernetes operators using JOSDK blog series.

Generating Project Skeleton

The project includes a Maven plugin to generate a skeleton project:

mvn io.javaoperatorsdk:bootstrapper:[version]:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started

Getting Started

The easiest way to get started with the SDK is to start minikube and execute one of our examples. There is a dedicated page describing how to use the samples.

Here are the main steps to develop the code and deploy the operator to a Kubernetes cluster. A more detailed and specific version can be found under samples/mysql-schema/README.md.

  1. Setup kubectl to work with your Kubernetes cluster of choice.
  2. Apply Custom Resource Definition
  3. Compile the whole project (framework + samples) using mvn install in the root directory
  4. Run the main class of the sample you picked and check out the sample’s README to see what it does. When run locally, the framework will use your Kubernetes client configuration (in ~/.kube/config) to establish a connection to the cluster. This is why it was important to set up kubectl up front.
  5. You can work in this local development mode to play with the code.
  6. Build the Docker image and push it to the registry
  7. Apply RBAC configuration
  8. Apply deployment configuration
  9. Verify that the operator is up and running. Don’t run it locally anymore to avoid conflicts in processing events from the cluster’s API server.

3 - Patterns and Best Practices

This document describes patterns and best practices for building and running operators, and how to implement them in terms of the Java Operator SDK (JOSDK).

See also best practices in Operator SDK.

Implementing a Reconciler

Reconcile All The Resources All the Time

The reconciliation can be triggered by events from multiple sources. It could be tempting to check the events and reconcile just the related resource or the subset of resources that the controller manages. However, this is considered an anti-pattern for operators because the distributed nature of Kubernetes makes it difficult to ensure that all events are always received. If, for some reason, your operator doesn’t receive some events and you do not reconcile the whole state, you might be operating with improper assumptions about the state of the cluster. This is why it is important to always reconcile all the resources, no matter how tempting it might be to only consider a subset. Luckily, JOSDK tries to make this as easy and efficient as possible by providing smart caches to avoid unduly accessing the Kubernetes API server and by making sure your reconciler is only triggered when needed.

Since there is a consensus regarding this topic in the industry, JOSDK does not provide event access from Reconciler implementations anymore starting with version 2 of the framework.

EventSources and Caching

As mentioned above, the best practice during a reconciliation is to reconcile all the dependent resources managed by the controller. This means that we want to compare a desired state with the actual state of the cluster. Reading the actual state of a resource from the Kubernetes API Server directly all the time would mean a significant load. Therefore, it’s a common practice to instead create a watch for the dependent resources and cache their latest state. This is done following the Informer pattern. In the Java Operator SDK, informers are wrapped into an EventSource to integrate them with the eventing system of the framework. This is implemented by the InformerEventSource class.

A new event that triggers the reconciliation is only propagated to the Reconciler when the actual resource is already in cache. Reconciler implementations therefore only need to compare the desired state with the observed one provided by the cached resource. If the resource cannot be found in the cache, it therefore needs to be created. If the actual state doesn’t match the desired state, the resource needs to be updated.
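
To illustrate, here is a minimal sketch of this compare-and-act pattern, assuming a hypothetical WebPage primary resource whose dependent resource is a ConfigMap, and a hypothetical desiredConfigMap helper that builds the desired state from the primary resource:

@Override
public UpdateControl<WebPage> reconcile(WebPage webPage, Context<WebPage> context) {
    // desiredConfigMap is a hypothetical helper computing the desired state
    ConfigMap desired = desiredConfigMap(webPage);
    // the actual state comes from the InformerEventSource cache, not from the API server
    Optional<ConfigMap> actual = context.getSecondaryResource(ConfigMap.class);
    if (actual.isEmpty()) {
        // not found in the cache: the resource doesn't exist yet, so create it
        context.getClient().configMaps().resource(desired).create();
    } else if (!Objects.equals(desired.getData(), actual.get().getData())) {
        // the cached state differs from the desired state, so update it
        context.getClient().configMaps().resource(desired).update();
    }
    return UpdateControl.noUpdate();
}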

Idempotency

Since all resources should be reconciled when your Reconciler is triggered, and reconciliations can be triggered multiple times for any given resource, especially when retry policies are in place, it is crucial that Reconciler implementations be idempotent, meaning that the same observed state should result in exactly the same outcome. This also means that operators should generally operate in a stateless fashion. Luckily, since operators usually manage declarative resources, ensuring idempotency is usually not difficult.

Sync or Async Way of Resource Handling

Depending on your use case, it’s possible that your reconciliation logic needs to wait a non-negligible amount of time for resources to reach their desired state. For example, your Reconciler might need to wait for a Pod to get ready before performing additional actions. This problem can be approached either synchronously or asynchronously.

The asynchronous way is to just exit the reconciliation logic as soon as the Reconciler determines that it cannot complete its full logic at this point in time. This frees resources to process other primary resource events. However, this requires that adequate event sources are put in place to monitor state changes of all the resources the operator waits for. When this is done properly, any state change will trigger the Reconciler again and it will get the opportunity to finish its processing.

The synchronous way would be to periodically poll the resources’ state until they reach their desired state. If this is done in the context of the reconcile method of your Reconciler implementation, this would block the current thread for possibly a long time. It’s therefore usually recommended to use the asynchronous processing fashion.
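
A minimal sketch of the asynchronous approach, assuming an InformerEventSource is registered for Pods and a hypothetical isReady helper encapsulates the readiness check:

@Override
public UpdateControl<MyCustomResource> reconcile(MyCustomResource resource,
                                                 Context<MyCustomResource> context) {
    Optional<Pod> pod = context.getSecondaryResource(Pod.class);
    if (pod.isEmpty() || !isReady(pod.get())) {
        // exit early: the registered event source will trigger a new
        // reconciliation when the Pod's state changes, no polling needed
        return UpdateControl.noUpdate();
    }
    // ... perform the additional actions that require a ready Pod ...
    return UpdateControl.noUpdate();
}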

Why have Automatic Retries?

Automatic retries are in place by default and can be configured to your needs. It is also possible to completely deactivate the feature, though we advise against it. The main reason to configure automatic retries for your Reconciler is that errors occur quite often due to the distributed nature of Kubernetes: transient network errors can be easily dealt with by automatic retries. Similarly, resources can be modified by different actors at the same time, so it’s not unheard of to get conflicts when working with Kubernetes resources. Such conflicts can usually be quite naturally resolved by reconciling the resource again. If it’s done automatically, the whole process can be completely transparent.

Managing State

Thanks to the declarative nature of Kubernetes resources, operators that deal only with Kubernetes resources can operate in a stateless fashion, i.e. they do not need to maintain information about the state of these resources, as it should be possible to completely rebuild the resource state from its representation (that’s what declarative means, after all). However, this usually doesn’t hold true anymore when dealing with external resources, and it might be necessary for the operator to keep track of this external state so that it is available when another reconciliation occurs. While such state could be put in the primary resource’s status sub-resource, this could quickly become difficult to manage if a lot of state needs to be tracked. It also goes against the best practice that a resource’s status should represent the actual resource state, while its spec represents the desired state. Putting state that doesn’t strictly represent the resource’s actual state is therefore discouraged. Instead, it’s advised to put such state into a separate resource meant for this purpose, such as a Kubernetes Secret or ConfigMap, or even a dedicated Custom Resource, whose structure can be more easily validated.
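
For example, an operator tracking a resource created in an external system might persist the external identifier in a dedicated ConfigMap; a sketch, where the naming convention and variable names are illustrative:

// after creating the external resource, persist its ID next to the primary resource
ConfigMap stateMap = new ConfigMapBuilder()
    .withNewMetadata()
        .withName(primary.getMetadata().getName() + "-state") // illustrative naming convention
        .withNamespace(primary.getMetadata().getNamespace())
    .endMetadata()
    .addToData("externalResourceId", externalId)
    .build();
context.getClient().configMaps().resource(stateMap).serverSideApply();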

Stopping (or not) Operator in case of Informer Errors and Cache Sync Timeouts

The operator can be configured to stop if any informer error happens on startup. By default, if there is an error on startup, for example because an informer has no permission to list the target resources (either the primary or a secondary resource), the operator will stop instantly. This behavior can be altered by setting the mentioned flag to false, so the operator will start even if some informers could not be started. In that case - as in the case where an informer starts successfully but experiences problems later - the informer will continuously retry the connection indefinitely with an exponential backoff. The operator will only stop if there is a fatal error; currently, that is when a resource cannot be deserialized. The typical use case for changing this flag is when a controller watches a list of namespaces: it is better to start up the operator so it can handle other namespaces while there might be a permission issue for some resources in another namespace.

The stopOnInformerErrorDuringStartup setting also has implications for cache sync timeout behavior: if true, the operator will stop on a cache sync timeout; if false, after the timeout the controller will start to reconcile resources even if one or more event source caches did not sync yet.
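
Both settings can be tuned via ConfigurationServiceOverrider when creating the operator; a minimal sketch, where the timeout value is illustrative:

Operator operator = new Operator(override -> override
    .withStopOnInformerErrorDuringStartup(false)
    .withCacheSyncTimeout(Duration.ofMinutes(2)));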

Graceful Shutdown

You can provide sufficient time for the reconciler to process and complete the currently ongoing events before shutting down. The configuration is simple. You just need to set an appropriate duration value for reconciliationTerminationTimeout using ConfigurationServiceOverrider.

final var overridden = new ConfigurationServiceOverrider(config)
    .withReconciliationTerminationTimeout(Duration.ofSeconds(5));

final var operator = new Operator(overridden);

4 - Using sample Operators

We have examples under the sample-operators directory that are intended to demonstrate the usage of different components in different scenarios, but are mainly more real-world examples:

  • webpage: Simple example creating an NGINX webserver from a Custom Resource containing HTML code.
  • mysql-schema: Operator managing schemas in a MySQL database. Shows how to manage non-Kubernetes resources.
  • tomcat: Operator with two controllers, managing Tomcat instances and Webapps running in Tomcat. The intention with this example is to show how to manage multiple related custom resources and/or more controllers.

Implementing a Sample Operator

Add the dependency to your project with Maven:


<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework</artifactId>
    <version>{see https://search.maven.org/search?q=a:operator-framework%20AND%20g:io.javaoperatorsdk for latest version}</version>
</dependency>

Or alternatively with Gradle, which also requires declaring the SDK as an annotation processor to generate the mappings between controllers and custom resource classes:

dependencies {
    implementation "io.javaoperatorsdk:operator-framework:${javaOperatorVersion}"
    annotationProcessor "io.javaoperatorsdk:operator-framework:${javaOperatorVersion}"
}

Once you’ve added the dependency, define a main method initializing the Operator and registering a controller.

public class Runner {

    public static void main(String[] args) {
        Operator operator = new Operator();
        operator.register(new WebPageReconciler());
        operator.start();
    }
}

The Controller implements the business logic and describes all the classes needed to handle the CRD.


@ControllerConfiguration
public class WebPageReconciler implements Reconciler<WebPage> {

    // Return the changed resource, so it gets updated. See javadoc for details.
    @Override
    public UpdateControl<WebPage> reconcile(WebPage resource,
                                            Context<WebPage> context) {
        // ... your logic ...
        return UpdateControl.patchStatus(resource);
    }
}

A sample custom resource POJO representation:


@Group("sample.javaoperatorsdk")
@Version("v1")
public class WebPage extends CustomResource<WebPageSpec, WebPageStatus> implements
        Namespaced {
}

public class WebPageSpec {

    private String html;

    public String getHtml() {
        return html;
    }

    public void setHtml(String html) {
        this.html = html;
    }
}

Deactivating CustomResource implementations validation

The operator will, by default, query the deployed CRDs to check that the CustomResource implementations match what is known to the cluster. This requires an additional query to the cluster and, sometimes, elevated privileges for the operator to be able to read the CRDs from the cluster. This validation is mostly meant to help users new to operator development get started and avoid common mistakes. Advanced users or production deployments might want to skip this step. This is done by setting the CHECK_CRD_ENV_KEY environment variable to false.

Automatic generation of CRDs

To automatically generate CRD manifests from your annotated Custom Resource classes, you only need to add the following dependencies to your project (in the background an annotation processor is used), with Maven:


<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>crd-generator-apt</artifactId>
    <scope>provided</scope>
</dependency>

or with Gradle:

dependencies {
    annotationProcessor 'io.fabric8:crd-generator-apt:<version>'
    ...
}

The CRD will be generated in target/classes/META-INF/fabric8 (or in target/test-classes/META-INF/fabric8, if you use the test scope) with the CRD name suffixed by the generated spec version. For example, a CR using the java-operator-sdk.io group with a mycrs plural form will result in 2 files:

  • mycrs.java-operator-sdk.io-v1.yml
  • mycrs.java-operator-sdk.io-v1beta1.yml

NOTE:

Quarkus users using the quarkus-operator-sdk extension do not need to add any extra dependency to get their CRD generated as this is handled by the extension itself.

Quarkus

A Quarkus extension is also provided to ease the development of Quarkus-based operators.

Add this dependency to your project:


<dependency>
    <groupId>io.quarkiverse.operatorsdk</groupId>
    <artifactId>quarkus-operator-sdk</artifactId>
    <version>{see https://search.maven.org/search?q=a:quarkus-operator-sdk for latest version}</version>
</dependency>

Create an Application; Quarkus will automatically create and inject KubernetesClient (or OpenShiftClient), Operator, ConfigurationService and ResourceController instances that your application can use. Below, you can see the minimal code you need to write to get your operator and controllers up and running:


@QuarkusMain
public class QuarkusOperator implements QuarkusApplication {

    @Inject
    Operator operator;

    public static void main(String... args) {
        Quarkus.run(QuarkusOperator.class, args);
    }

    @Override
    public int run(String... args) throws Exception {
        operator.start();
        Quarkus.waitForExit();
        return 0;
    }
}

Spring Boot

You can also let Spring Boot wire your application together and automatically register the controllers.

Add this dependency to your project:


<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework-spring-boot-starter</artifactId>
    <version>{see https://search.maven.org/search?q=a:operator-framework-spring-boot-starter%20AND%20g:io.javaoperatorsdk for latest version}</version>
</dependency>

Create an Application:


@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

You will also need a @Configuration to make sure that your reconciler is registered:


@Configuration
public class Config {

    @Bean
    public WebPageReconciler customServiceController() {
        return new WebPageReconciler();
    }

    @Bean(initMethod = "start", destroyMethod = "stop")
    @SuppressWarnings("rawtypes")
    public Operator operator(List<Reconciler> controllers) {
        Operator operator = new Operator();
        controllers.forEach(operator::register);
        return operator;
    }
}

Spring Boot test support

Adding the following dependency lets you mock the operator for tests where loading the Spring container is necessary but real access to a Kubernetes cluster is not:


<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework-spring-boot-starter-test</artifactId>
    <version>{see https://search.maven.org/search?q=a:operator-framework-spring-boot-starter%20AND%20g:io.javaoperatorsdk for latest version}</version>
</dependency>

Mock the operator:


@SpringBootTest
@EnableMockOperator
public class SpringBootStarterSampleApplicationTest {

    @Test
    void contextLoads() {
    }
}

5 - Glossary

  • Primary Resource - the resource that represents the desired state that the controller is working to achieve. While this is often a Custom Resource, it can also be a Kubernetes native resource (Deployment, ConfigMap,…).
  • Secondary Resource - any resource that the controller needs to manage to reach the desired state represented by the primary resource. These resources can be created, updated, deleted or simply read depending on the use case. For example, the Deployment controller manages ReplicaSet instances when trying to realize the state represented by the Deployment. In this scenario, the Deployment is the primary resource while ReplicaSet is one of the secondary resources managed by the Deployment controller.
  • Dependent Resource - a feature of JOSDK, to make it easier to manage secondary resources. A dependent resource represents a secondary resource with related reconciliation logic.
  • Low-level API - refers to the SDK APIs that don’t use any features (such as Dependent Resources or Workflows) outside of the core Reconciler interface. See the WebPage sample. The same logic is also implemented using Dependent Resources and Workflows.

6 - Features

Features

The Java Operator SDK (JOSDK) is a high-level framework and related tooling aimed at facilitating the implementation of Kubernetes operators. By default, its features follow best practices in an opinionated way. However, feature flags and other configuration options are provided to fine-tune or turn off these features.

Reconciliation Execution in a Nutshell

Reconciliation execution is always triggered by an event. Events typically come from a primary resource, most of the time a custom resource, triggered by changes made to that resource on the server (e.g. a resource is created, updated or deleted). Reconciler implementations are associated with a given resource type and listen for such events from the Kubernetes API server so that they can appropriately react to them. It is, however, possible for secondary sources to trigger the reconciliation process. This usually occurs via the event source mechanism.

When an event is received, reconciliation is executed, unless a reconciliation is already underway for this particular resource. In other words, the framework guarantees that no concurrent reconciliation happens for any given resource.

Once the reconciliation is done, the framework checks if:

  • an exception was thrown during execution; if yes, schedules a retry.
  • new events were received during the controller execution; if yes, schedules a new reconciliation.
  • the reconciler instructed the SDK to re-schedule a reconciliation at a later date; if yes, schedules a timer event with the specified delay.
  • none of the above: the reconciliation is finished.

In summary, the core of the SDK is implemented as an eventing system, where events trigger reconciliation requests.

Implementing a Reconciler and/or Cleaner

The lifecycle of a Kubernetes resource can be clearly separated into two phases from the perspective of an operator depending on whether a resource is created or updated, or on the other hand if it is marked for deletion.

This separation-related logic is automatically handled by the framework. The framework will always call the reconcile method, unless the custom resource is marked for deletion. On the other hand, if the resource is marked for deletion and the Reconciler implements the Cleaner interface, only the cleanup method will be called. Implementing the Cleaner interface allows developers to let the SDK know that they are interested in cleaning related state (e.g. out-of-cluster resources). The SDK will therefore automatically add a finalizer associated with your Reconciler so that the Kubernetes server doesn’t delete your resources before your Reconciler gets a chance to clean things up. See Finalizer support for more details.
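
A minimal sketch of a Reconciler that also implements Cleaner, for example to release out-of-cluster state:

public class MyReconciler implements Reconciler<MyCustomResource>, Cleaner<MyCustomResource> {

    @Override
    public UpdateControl<MyCustomResource> reconcile(MyCustomResource resource,
                                                     Context<MyCustomResource> context) {
        // called as long as the resource is not marked for deletion
        return UpdateControl.noUpdate();
    }

    @Override
    public DeleteControl cleanup(MyCustomResource resource, Context<MyCustomResource> context) {
        // called once the resource is marked for deletion; clean up external state here.
        // Returning defaultDelete() lets the framework remove the automatically added finalizer.
        return DeleteControl.defaultDelete();
    }
}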

Using UpdateControl and DeleteControl

These two classes are used to control the outcome or the desired behaviour after the reconciliation.

The UpdateControl can instruct the framework to update the status sub-resource of the resource and/or re-schedule a reconciliation with a desired time delay:

  @Override
  public UpdateControl<MyCustomResource> reconcile(
     MyCustomResource resource, Context<MyCustomResource> context) {
    // omitted code
    
    return UpdateControl.patchStatus(resource).rescheduleAfter(10, TimeUnit.SECONDS);
  }

without an update:

  @Override
  public UpdateControl<MyCustomResource> reconcile(
     MyCustomResource resource, Context<MyCustomResource> context) {
    // omitted code
    
    return UpdateControl.<MyCustomResource>noUpdate().rescheduleAfter(10, TimeUnit.SECONDS);
  }

Note, though, that using EventSources should be preferred to rescheduling, since the reconciliation will then be triggered only when needed instead of on a timed basis.

Those are the typical use cases of resource updates; however, in some cases the controller may want to update the resource itself (for example, to add annotations) or not perform any updates, which is also supported.

It is also possible to update both the status and the resource with the patchResourceAndStatus method. In this case, the resource is updated first followed by the status, using two separate requests to the Kubernetes API.

From v5, UpdateControl only supports patching the resources, by default using Server-Side Apply (SSA). It is important to understand how SSA works in Kubernetes. Mainly, resources applied using SSA should contain only the fields identifying the resource and those the user is interested in (a ‘fully specified intent’ in Kubernetes parlance), thus usually using a resource created from scratch, see sample. To contrast, see the same sample, this time without SSA.
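
As a sketch of such a ‘fully specified intent’, the status patch can be built from a fresh object carrying only the identifying fields and the fields to be updated (here using the WebPage resource from the samples; updatedStatus is a hypothetical, freshly computed status):

WebPage patch = new WebPage();
patch.setMetadata(new ObjectMetaBuilder()
    .withName(resource.getMetadata().getName())
    .withNamespace(resource.getMetadata().getNamespace())
    .build());
// only the fields this controller owns and wants to set are present on the patch
patch.setStatus(updatedStatus);
return UpdateControl.patchStatus(patch);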

Non-SSA based patching is still supported. You can control whether to use SSA using ConfigurationService.useSSAToPatchPrimaryResource() and the related ConfigurationServiceOverrider.withUseSSAToPatchPrimaryResource method. A related integration test can be found here.

Handling resources directly using the client, instead of delegating these update operations to JOSDK by returning an UpdateControl at the end of your reconciliation, should work appropriately. However, we do recommend using UpdateControl instead, since JOSDK makes sure that the operations are handled properly and there are subtleties to be aware of. For example, if you are using a finalizer, JOSDK makes sure to include it in your fully specified intent so that it is not unintentionally removed from the resource (which would happen if you omitted it, since your controller is the designated manager for that field and Kubernetes interprets the finalizer being gone from the specified intent as a request for removal).

DeleteControl typically instructs the framework to remove the finalizer after the dependent resources are cleaned up in the cleanup implementation.


public DeleteControl cleanup(MyCustomResource customResource, Context<MyCustomResource> context) {
    // omitted code

    return DeleteControl.defaultDelete();
}

However, it is possible to instruct the SDK to not remove the finalizer. This allows cleaning up the resources in a more asynchronous way, mostly for cases when there is a long waiting period after a delete operation is initiated. Note that in this case you might want to either schedule a timed event to make sure cleanup is executed again, or use event sources to get notified about the state changes of the deleted resource.

Finalizer Support

Kubernetes finalizers make sure that your Reconciler gets a chance to act before a resource is actually deleted after it’s been marked for deletion. Without finalizers, the resource would be deleted directly by the Kubernetes server.

Depending on your use case, you might or might not need to use finalizers. In particular, if your operator doesn’t need to clean up any state that is not automatically managed by the Kubernetes cluster (e.g. external resources), you might not need to use finalizers. You should use the Kubernetes garbage collection mechanism as much as possible by setting owner references for your secondary resources so that the cluster can automatically delete them for you whenever the associated primary resource is deleted. Note that setting owner references is the responsibility of the Reconciler implementation, though dependent resources make that process easier.

If you do need to clean up such state, you need to use finalizers so that their presence will prevent the Kubernetes server from deleting the resource before your operator is ready to allow it. This allows clean-up to still occur even if your operator was down when the resource was “deleted” by a user.

JOSDK makes cleaning resources in this fashion easier by taking care of managing finalizers automatically for you when needed. The only thing you need to do is let the SDK know that your operator is interested in cleaning state associated with your primary resources by having it implement the Cleaner<P> interface. If your Reconciler doesn’t implement the Cleaner interface, the SDK will consider that you don’t need to perform any clean-up when resources are deleted and will therefore not activate finalizer support. In other words, finalizer support is added only if your Reconciler implements the Cleaner interface.

Finalizers are automatically added by the framework as the first step, thus after a resource is created, but before the first reconciliation. The finalizer is added via a separate Kubernetes API call. As a result of this update, the finalizer will then be present on the resource. The reconciliation can then proceed as normal.

The finalizer that is automatically added will be also removed after the cleanup is executed on the reconciler. This behavior is customizable as explained above when we addressed the use of DeleteControl.

You can specify the name of the finalizer to use for your Reconciler using the @ControllerConfiguration annotation. If you do not specify a finalizer name, one will be automatically generated for you.
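
For example (the finalizer name below is illustrative; it must be a valid, domain-qualified finalizer identifier):

@ControllerConfiguration(finalizerName = "sample.javaoperatorsdk/finalizer")
public class WebPageReconciler implements Reconciler<WebPage>, Cleaner<WebPage> {
    // omitted code
}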

From v5, the finalizer is added by default using Server-Side Apply. See also UpdateControl in docs.

Generation Awareness and Event Filtering

A best practice when an operator starts up is to reconcile all the associated resources because changes might have occurred to the resources while the operator was not running.

When this first reconciliation is done successfully, the next reconciliation is triggered if either dependent resources are changed or the primary resource’s .spec field is changed. If other fields like .metadata are changed on the primary resource, the reconciliation can be skipped. This behavior is supported out of the box: reconciliation is by default not triggered if changes to the primary resource do not increase the .metadata.generation field. Note that changes to .metadata.generation are automatically handled by Kubernetes.

To turn off this feature, set generationAwareEventProcessing to false for the Reconciler.
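
For example:

@ControllerConfiguration(generationAwareEventProcessing = false)
public class MyReconciler implements Reconciler<MyCustomResource> {
    // every change to the primary resource, including metadata-only changes,
    // will now trigger a reconciliation
}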

Support for Well Known (non-custom) Kubernetes Resources

A controller can be registered for a non-custom resource, i.e. well-known Kubernetes resources such as Ingress, Deployment, etc.

See the integration test for reconciling deployments.

public class DeploymentReconciler
    implements Reconciler<Deployment>, TestExecutionInfoProvider {

    @Override
    public UpdateControl<Deployment> reconcile(
            Deployment resource, Context<Deployment> context) {
        // omitted code
    }
}

Max Interval Between Reconciliations

When informers / event sources are properly set up, and the Reconciler implementation is correct, no additional reconciliation triggers should be needed. However, it’s a common practice to have a failsafe periodic trigger in place, just to make sure resources are nevertheless reconciled after a certain amount of time. This functionality is in place by default, with a rather high time interval (currently 10 hours) after which a reconciliation will be automatically triggered even in the absence of other events. See how to override this using the standard annotation:

@ControllerConfiguration(maxReconciliationInterval = @MaxReconciliationInterval(
                interval = 50,
                timeUnit = TimeUnit.MILLISECONDS))
public class MyReconciler implements Reconciler<HasMetadata> {}

The event is not propagated at a fixed rate, rather it’s scheduled after each reconciliation. So the next reconciliation will occur at most within the specified interval after the last reconciliation.

This feature can be turned off by setting maxReconciliationInterval to Constants.NO_MAX_RECONCILIATION_INTERVAL or any non-positive number.

The automatic retries are not affected by this feature so a reconciliation will be re-triggered on error, according to the specified retry policy, regardless of this maximum interval setting.

Automatic Retries on Error

JOSDK will schedule an automatic retry of the reconciliation whenever an exception is thrown by your Reconciler. The retry behavior is configurable, but a default implementation is provided covering most of the typical use cases, see GenericRetry.

    GenericRetry.defaultLimitedExponentialRetry()
        .setInitialInterval(5000)
        .setIntervalMultiplier(1.5D)
        .setMaxAttempts(5);

You can also configure the default retry behavior using the @GradualRetry annotation.

It is possible to provide a custom implementation using the retry field of the @ControllerConfiguration annotation and specifying the class of your custom implementation. Note that this class will need to provide an accessible no-arg constructor for automated instantiation. Additionally, your implementation can be automatically configured from an annotation that you can provide by having your Retry implementation implement the AnnotationConfigurable interface, parameterized with your annotation type. See the GenericRetry implementation for more details.

Information about the current retry state is accessible from the Context object. Of particular note is the isLastAttempt method, which allows your Reconciler to implement a different behavior based on this status, for example by setting an error message in your resource’s status when attempting a last retry.
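
A sketch of reading the retry state from the Context; the error message handling is illustrative:

boolean lastAttempt = context.getRetryInfo()
    .map(RetryInfo::isLastAttempt)
    .orElse(false);
if (lastAttempt) {
    // e.g. surface a permanent error condition in the resource's status
    resource.getStatus().setErrorMessage("Giving up after the last retry attempt");
}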

Note, though, that reaching the retry limit won’t prevent new events from being processed. New reconciliations will happen for new events as usual. However, if an error also occurs that would normally trigger a retry, the SDK won’t schedule one at this point since the retry limit is already reached.

A successful execution resets the retry state.

Setting Error Status After Last Retry Attempt

To facilitate error reporting, a Reconciler can implement the ErrorStatusHandler interface:

public interface ErrorStatusHandler<P extends HasMetadata> {

   ErrorStatusUpdateControl<P> updateErrorStatus(P resource, Context<P> context, Exception e);

}

The updateErrorStatus method is called in case an exception is thrown from the Reconciler. It is also called even if no retry policy is configured, just after the reconciler execution. RetryInfo.getAttemptCount() is zero after the first reconciliation attempt, since it is not a result of a retry (regardless of whether a retry policy is configured or not).

ErrorStatusUpdateControl is used to tell the SDK what to do and how to perform the status update on the primary resource, always performed as a status sub-resource request. Note that this update request will also produce an event, and will result in a reconciliation if the controller is not generation aware.

This feature is only available for the reconcile method of the Reconciler interface, since there should not be updates to resources that have been marked for deletion.

Retry can be skipped in cases of unrecoverable errors:

 ErrorStatusUpdateControl.patchStatus(customResource).withNoRetry();

Correctness and Automatic Retries

While it is possible to deactivate automatic retries, this is not desirable, unless for very specific reasons. Errors naturally occur, whether it be transient network errors or conflicts when a given resource is handled by a Reconciler but is modified at the same time by a user in a different process. Automatic retries handle these cases nicely and will usually result in a successful reconciliation.

Retry and Rescheduling and Event Handling Common Behavior

Retry, reschedule and standard event processing form a relatively complex system, each of these functionalities interacting with the others. In the following, we describe the interplay of these features:

  1. A successful execution resets a retry and any rescheduled executions that were present before the reconciliation. However, a new rescheduling can be instructed from the reconciliation outcome (UpdateControl or DeleteControl).

    For example, if a reconciliation had previously been re-scheduled to occur after some amount of time, but an event triggered the reconciliation (or cleanup) in the meantime, the scheduled execution would be automatically cancelled, i.e. re-scheduling a reconciliation does not guarantee that one will occur exactly at that time, it simply guarantees that one reconciliation will occur at that time at the latest, triggering one if no event from the cluster triggered one. Of course, it’s always possible to re-schedule a new reconciliation at the end of that “automatic” reconciliation.

    Similarly, if a retry was scheduled, any event from the cluster triggering a successful execution in the meantime would cancel the scheduled retry (because there’s now no point in retrying something that already succeeded).

  2. In case an exception happened, a retry is initiated. However, if an event is received meanwhile, it will be reconciled instantly, and this execution won’t count as a retry attempt.

  3. If the retry limit is reached (so no more automatic retry would happen), but a new event is received, the reconciliation will still happen, but it won’t reset the retry and will still be marked as the last attempt in the retry info. Point (1) still holds, but in case of an error, no retry will happen.

The thing to keep in mind when it comes to retrying or rescheduling is that JOSDK tries to avoid unnecessary work. When you reschedule an operation, you instruct JOSDK to perform that operation at the latest by the end of the rescheduling delay. If something occurred on the cluster that triggers that particular operation (reconciliation or cleanup) in the meantime, then JOSDK considers that there’s no point in attempting that operation again at the end of the specified delay. The same idea also applies to retries.

Rate Limiting

It is possible to rate limit reconciliation on a per-resource basis. The rate limit also takes precedence over retry/re-schedule configurations: for example, even if a retry was scheduled for the next second but this request would make the resource go over its rate limit, the next reconciliation will be postponed according to the rate limiting rules. Note that the reconciliation is never cancelled, it will just be executed as early as possible based on rate limitations.

Rate limiting is by default turned off, since correct configuration depends on the reconciler implementation, in particular on how long a typical reconciliation takes. (The parallelism of reconciliation itself can be limited by configuring the ExecutorService appropriately via ConfigurationService.)
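
For example, reconciliation parallelism can be capped via ConfigurationServiceOverrider; the thread count below is illustrative:

Operator operator = new Operator(override -> override
    .withConcurrentReconciliationThreads(10));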

A default rate limiter implementation is provided, see PeriodRateLimiter. Users can override it by implementing their own RateLimiter and specifying this custom implementation using the rateLimiter field of the @ControllerConfiguration annotation. Similarly to the Retry implementations, RateLimiter implementations must provide an accessible, no-arg constructor for instantiation purposes, and can further be automatically configured from your own annotation, provided your RateLimiter implementation also implements the AnnotationConfigurable interface, parameterized by your custom annotation type.

To configure the default rate limiter use the @RateLimited annotation on your Reconciler class. The following configuration limits each resource to reconcile at most twice within a 3 second interval:


@RateLimited(maxReconciliations = 2, within = 3, unit = TimeUnit.SECONDS)
@ControllerConfiguration
public class MyReconciler implements Reconciler<MyCR> {

}

Thus, if a given resource was reconciled twice in one second, no further reconciliation for this resource will happen before two seconds have elapsed. Note that, since rate is limited on a per-resource basis, other resources can still be reconciled at the same time, as long, of course, as they stay within their own rate limits.

See also this blog post.

Handling Related Events with Event Sources

Event sources are a relatively simple yet powerful and extensible concept to trigger controller executions, usually based on changes to dependent resources. You typically need an event source when you want your Reconciler to be triggered when something occurs to secondary resources that might affect the state of your primary resource. This is needed because a given Reconciler will, by default, only listen to events affecting the primary resource type it is configured for. Event sources act as listeners for events affecting these secondary resources, so that a reconciliation of the associated primary resource can be triggered when needed. Note that these secondary resources need not be Kubernetes resources. Typically, when dealing with non-Kubernetes objects or services, we can extend our operator to handle webhooks or websockets or to react to any event coming from a service we interact with. This allows for very efficient controller implementations because reconciliations are then only triggered when something occurs on resources affecting our primary resources, thus doing away with the need to periodically reschedule reconciliations.

Event Sources architecture diagram

There are a few interesting points here:

The CustomResourceEventSource event source is a special one, responsible for handling events pertaining to changes affecting our primary resources. This EventSource is always registered for every controller automatically by the SDK. It is important to note that events always relate to a given primary resource. Concurrency is still handled for you, even in the presence of EventSource implementations, and the SDK still guarantees that there is no concurrent execution of the controller for any given primary resource (though, of course, concurrent/parallel executions of events pertaining to other primary resources still occur as expected).

Caching and Event Sources

Kubernetes resources are handled in a declarative manner. The same also holds true for event sources. For example, if we define an event source to watch for changes of a Kubernetes Deployment object using an InformerEventSource, we always receive the whole associated object from the Kubernetes API. This object might be needed at any point during our reconciliation process and it’s best to retrieve it from the event source directly when possible instead of fetching it from the Kubernetes API since the event source guarantees that it will provide the latest version. Not only that, but many event source implementations also cache resources they handle so that it’s possible to retrieve the latest version of resources without needing to make any calls to the Kubernetes API, thus allowing for very efficient controller implementations.

Note that after an operator starts, caches are already populated by the time the first reconciliation is processed for the InformerEventSource implementation. However, this does not necessarily hold true for all event source implementations (PerResourcePollingEventSource for example). The SDK provides methods to handle this situation elegantly, allowing you to check if an object is cached and retrieve it from a provided supplier if not. See the related method.

Registering Event Sources

To register event sources, your Reconciler has to override the prepareEventSources method and return a list of event sources to register. One way to see this in action is to look at the tomcat example (irrelevant details omitted):


@ControllerConfiguration
public class WebappReconciler
        implements Reconciler<Webapp>, Cleaner<Webapp> {
    // omitted code

    @Override
    public List<EventSource<?, Webapp>> prepareEventSources(EventSourceContext<Webapp> context) {
        InformerEventSourceConfiguration<Tomcat> configuration =
                InformerEventSourceConfiguration.from(Tomcat.class, Webapp.class)
                        .withSecondaryToPrimaryMapper(webappsMatchingTomcatName)
                        .withPrimaryToSecondaryMapper(
                                (Webapp primary) -> Set.of(new ResourceID(primary.getSpec().getTomcat(),
                                        primary.getMetadata().getNamespace())))
                        .build();
        return List.of(new InformerEventSource<>(configuration, context));
    }
}

In the example above an InformerEventSource is configured and registered. InformerEventSource is one of the bundled EventSource implementations that JOSDK provides to cover common use cases.

Managing Relation between Primary and Secondary Resources

Event sources let your operator know when a secondary resource has changed and that your operator might need to reconcile this new information. However, in order to do so, the SDK needs to somehow retrieve the primary resource associated with whichever secondary resource triggered the event. In the Tomcat example above, when an event occurs on a tracked Deployment, the SDK needs to be able to identify which Tomcat resource is impacted by that change.

Seasoned Kubernetes users already know one way to track this parent-child kind of relationship: using owner references. Indeed, that’s how the SDK deals with this situation by default as well; that is, if your controller properly sets owner references on your secondary resources, the SDK will be able to follow that reference back to your primary resource automatically without you having to worry about it.

However, owner references cannot always be used as they are restricted to operating within a single namespace (i.e. you cannot have an owner reference to a resource in a different namespace) and are, by essence, limited to Kubernetes resources so you’re out of luck if your secondary resources live outside of a cluster.

This is why JOSDK provides the SecondaryToPrimaryMapper interface so that you can provide alternative ways for the SDK to identify which primary resource needs to be reconciled when something occurs to your secondary resources. We even provide some of these alternatives in the Mappers class.

Note that, while a set of ResourceID is returned, this set usually consists only of one element. It is however possible to return multiple values or even no value at all to cover some rare corner cases. Returning an empty set means that the mapper considered the secondary resource event as irrelevant and the SDK will thus not trigger a reconciliation of the primary resource in that situation.
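
A minimal sketch of a custom SecondaryToPrimaryMapper that resolves the primary resource from a label on the secondary resource (the label name is illustrative):

SecondaryToPrimaryMapper<ConfigMap> mapper = secondary -> {
    // hypothetical label pointing back to the owning primary resource
    var labels = secondary.getMetadata().getLabels();
    String primaryName = labels == null ? null : labels.get("app.example.com/owner");
    if (primaryName == null) {
        return Set.of(); // irrelevant event: no reconciliation will be triggered
    }
    return Set.of(new ResourceID(primaryName, secondary.getMetadata().getNamespace()));
};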

Adding a SecondaryToPrimaryMapper is typically sufficient when there is a one-to-many relationship between primary and secondary resources. Each secondary resource can be mapped to its primary owner, and this is enough information to also get these secondary resources from the Context object that’s passed to your Reconciler.

There are however cases when this isn’t sufficient and you need to provide an explicit mapping between a primary resource and its associated secondary resources using an implementation of the PrimaryToSecondaryMapper interface. This is typically needed when there are many-to-one or many-to-many relationships between primary and secondary resources, e.g. when the primary resource is referencing secondary resources. See PrimaryToSecondaryIT integration test for a sample.

Built-in EventSources

Multiple event sources are provided out of the box; the following are some of the more central ones:

InformerEventSource

InformerEventSource is probably the most important EventSource implementation to know about. When you create an InformerEventSource, JOSDK will automatically create and register a SharedIndexInformer, a fabric8 Kubernetes client class, that will listen for events associated with the resource type you configured your InformerEventSource with. If you want to listen to Kubernetes resource events, InformerEventSource is probably the only thing you need to use. It’s highly configurable, so you can tune it to your needs. Take a look at InformerConfiguration and associated classes for more details; one interesting feature we can mention here is the ability to filter events so that you only get notified about events you care about. A particularly interesting feature of the InformerEventSource, as opposed to using your own informer-based listening mechanism, is that its caches are particularly well optimized, preventing reconciliations from being triggered when not needed and allowing efficient operators to be written.

PerResourcePollingEventSource

PerResourcePollingEventSource is used to poll external APIs, which don’t support webhooks or other event notifications. It extends the abstract ExternalResourceCachingEventSource to support caching. See MySQL Schema sample for usage.

PollingEventSource

PollingEventSource is similar to PerResourcePollingEventSource except that, contrary to that event source, it doesn’t poll a specific API separately per resource, but periodically and independently of the actually observed primary resources.

Inbound event sources

SimpleInboundEventSource and CachingInboundEventSource are used to handle incoming events from webhooks and messaging systems.

ControllerResourceEventSource

ControllerResourceEventSource is a special EventSource implementation that you will never have to deal with directly. It is, however, at the core of the SDK and is automatically added for you: this is the main event source that listens for changes to your primary resources and triggers your Reconciler when needed. It features smart caching and is really optimized to minimize Kubernetes API accesses and avoid unduly triggering your Reconciler.

For more on the philosophy of event sources not related to the Kubernetes API, see issue #729.

Contextual Info for Logging with MDC

Logging is enhanced with additional contextual information using MDC. The following attributes are available in most parts of reconciliation logic and during the execution of the controller:

MDC Key                   | Value added from primary resource
resource.apiVersion       | .apiVersion
resource.kind             | .kind
resource.name             | .metadata.name
resource.namespace        | .metadata.namespace
resource.resourceVersion  | .metadata.resourceVersion
resource.generation       | .metadata.generation
resource.uid              | .metadata.uid

For more information about MDC see this link.
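
These keys can be referenced from your logging configuration (e.g. %X{resource.name} in a Logback pattern) or read programmatically through the SLF4J MDC API, as in this sketch:

// inside the reconciliation call stack: read the coordinates of the primary resource
// (MDC is org.slf4j.MDC; log is an SLF4J logger)
String kind = MDC.get("resource.kind");
String namespace = MDC.get("resource.namespace");
String name = MDC.get("resource.name");
log.info("Working on {} {}/{}", kind, namespace, name);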

InformerEventSource Multi-Cluster Support

It is possible to handle resources of a remote cluster with InformerEventSource. To do so, simply set a client that connects to the remote cluster:


InformerEventSourceConfiguration<SecondaryResource> configuration =
        InformerEventSourceConfiguration.from(SecondaryResource.class, PrimaryResource.class)
            .withKubernetesClient(remoteClusterClient)
            .withSecondaryToPrimaryMapper(Mappers.fromDefaultAnnotations())
            .build();

You will also need to specify a SecondaryToPrimaryMapper, since the default one is based on owner references and won’t work across cluster instances. You could, for example, use the provided implementation that relies on annotations added to the secondary resources to identify the associated primary resource.

See related integration test.

Dynamically Changing Target Namespaces

A controller can be configured to watch a specific set of namespaces, in addition to the namespace in which it is currently deployed, or the whole cluster. The framework supports dynamically changing the list of these namespaces while the operator is running. When a reconciler is registered, an instance of RegisteredController is returned, providing access to the methods allowing users to change watched namespaces while the operator is running.

A typical scenario would probably involve extracting the list of target namespaces from a ConfigMap or some other input but this part is out of the scope of the framework since this is use-case specific. For example, reacting to changes to a ConfigMap would probably involve registering an associated Informer and then calling the changeNamespaces method on RegisteredController.


public static void main(String[] args) {
    KubernetesClient client = new DefaultKubernetesClient();
    Operator operator = new Operator(client);
    RegisteredController registeredController = operator.register(new WebPageReconciler(client));
    operator.installShutdownHook();
    operator.start();

    // call registeredController further while operator is running
}
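
For example, the watched namespaces can later be adjusted at runtime like this (the namespace names are illustrative):

// switch the controller to watch a different set of namespaces while running
registeredController.changeNamespaces(Set.of("team-a", "team-b"));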

If watched namespaces change for a controller, it might be desirable to propagate these changes to InformerEventSources associated with the controller. In order to express this, InformerEventSource implementations interested in following such changes need to be configured appropriately so that the followControllerNamespaceChanges method returns true:


@ControllerConfiguration
public class MyReconciler implements Reconciler<TestCustomResource> {

    @Override
    public List<EventSource<?, TestCustomResource>> prepareEventSources(
            EventSourceContext<TestCustomResource> context) {

        InformerEventSource<ConfigMap, TestCustomResource> configMapES =
            new InformerEventSource<>(InformerEventSourceConfiguration.from(ConfigMap.class, TestCustomResource.class)
                .withNamespacesInheritedFromController(context)
                .build(), context);

        return List.of(configMapES);
    }
}

As seen in the above code snippet, the informer will initially inherit the namespaces from the controller, but it will also adjust the target namespaces if they change for the controller.

See also the integration test for this feature.

Leader Election

Operators are generally deployed with a single running or active instance. However, it is possible to deploy multiple instances in such a way that only one, called the “leader”, processes the events. This is achieved via a mechanism called “leader election”. While all the instances are running, and even start their event sources to populate the caches, only the leader will process the events. This means that should the leader change for any reason, for example because it crashed, the other instances are already warmed up and ready to pick up where the previous leader left off should one of them become elected leader.

See sample configuration in the E2E test .

Runtime Info

RuntimeInfo is mainly used to check the actual health of event sources. Based on this information, it is easy to implement custom liveness probes. This is related to the stopOnInformerErrorDuringStartup setting, which usually needs to be set to false in order to control the exact liveness properties.

See also an example implementation in the WebPage sample.
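
A sketch of a liveness check based on RuntimeInfo:

RuntimeInfo runtimeInfo = operator.getRuntimeInfo();
// a liveness probe could report healthy only when the operator has started
// and all registered event sources are healthy
boolean healthy = runtimeInfo.isStarted() && runtimeInfo.allEventSourcesAreHealthy();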

Automatic Generation of CRDs

Note that this feature is provided by the Fabric8 Kubernetes Client, not JOSDK itself.

To automatically generate CRD manifests from your annotated Custom Resource classes, you only need to add the following dependencies to your project:


<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>crd-generator-apt</artifactId>
    <scope>provided</scope>
</dependency>

The CRD will be generated in target/classes/META-INF/fabric8 (or in target/test-classes/META-INF/fabric8, if you use the test scope) with the CRD name suffixed by the generated spec version. For example, a CR using the java-operator-sdk.io group with a mycrs plural form will result in 2 files:

  • mycrs.java-operator-sdk.io-v1.yml
  • mycrs.java-operator-sdk.io-v1beta1.yml

NOTE:

Quarkus users using the quarkus-operator-sdk extension do not need to add any extra dependency to get their CRD generated as this is handled by the extension itself.

Metrics

JOSDK provides built-in support for metrics reporting on what is happening with your reconcilers in the form of the Metrics interface, which can be implemented to connect to your metrics provider of choice, with JOSDK calling its methods as it goes about reconciling resources. By default, a no-operation implementation is provided, thus providing a no-cost sane default. A micrometer-based implementation is also provided.

You can use a different implementation by overriding the default one provided by the default ConfigurationService, as follows:

Metrics metrics; // initialize your metrics implementation
Operator operator = new Operator(client, o -> o.withMetrics(metrics));

Micrometer implementation

The micrometer implementation is typically created using one of the provided factory methods which, depending on which is used, will return either a ready-to-use instance or a builder allowing users to customize how the implementation behaves, in particular when it comes to the granularity of collected metrics. It is, for example, possible to collect metrics on a per-resource basis via tags that are associated with meters. This is the default, historical behavior, but this will change in a future version of JOSDK because it dramatically increases the cardinality of metrics, which could lead to performance issues.

To create a MicrometerMetrics implementation that behaves how it has historically behaved, you can just create an instance via:

MeterRegistry registry; // initialize your registry implementation
Metrics metrics = new MicrometerMetrics(registry);

Note, however, that this constructor is deprecated and we encourage you to use the factory methods instead, which either return a fully pre-configured instance or a builder object that will allow you to configure more easily how the instance will behave. You can, for example, configure whether or not the implementation should collect metrics on a per-resource basis, whether or not associated meters should be removed when a resource is deleted and how the clean-up is performed. See the relevant classes documentation for more details.

For example, the following will create a MicrometerMetrics instance configured to collect metrics on a per-resource basis, deleting the associated meters after 5 seconds when a resource is deleted, using up to 2 threads to do so.

MicrometerMetrics.newPerResourceCollectingMicrometerMetricsBuilder(registry)
        .withCleanUpDelayInSeconds(5)
        .withCleaningThreadNumber(2)
        .build();

Operator SDK metrics

The micrometer implementation records the following metrics:

Meter name | Type | Tag names | Description
operator.sdk.reconciliations.executions.<reconciler name> | gauge | group, version, kind | Number of executions of the named reconciler
operator.sdk.reconciliations.queue.size.<reconciler name> | gauge | group, version, kind | How many resources are queued to get reconciled by the named reconciler
operator.sdk.<map name>.size | gauge | | Gauge tracking the size of a specified map (currently unused but could be used to monitor caches size)
operator.sdk.events.received | counter | <resource metadata>, event, action | Number of received Kubernetes events
operator.sdk.events.delete | counter | <resource metadata> | Number of received Kubernetes delete events
operator.sdk.reconciliations.started | counter | <resource metadata>, reconciliations.retries.last, reconciliations.retries.number | Number of started reconciliations per resource type
operator.sdk.reconciliations.failed | counter | <resource metadata>, exception | Number of failed reconciliations per resource type
operator.sdk.reconciliations.success | counter | <resource metadata> | Number of successful reconciliations per resource type
operator.sdk.controllers.execution.reconcile | timer | <resource metadata>, controller | Time taken for reconciliations per controller
operator.sdk.controllers.execution.cleanup | timer | <resource metadata>, controller | Time taken for cleanups per controller
operator.sdk.controllers.execution.reconcile.success | counter | controller, type | Number of successful reconciliations per controller
operator.sdk.controllers.execution.reconcile.failure | counter | controller, exception | Number of failed reconciliations per controller
operator.sdk.controllers.execution.cleanup.success | counter | controller, type | Number of successful cleanups per controller
operator.sdk.controllers.execution.cleanup.failure | counter | controller, exception | Number of failed cleanups per controller

As you can see, all the recorded metrics start with the operator.sdk prefix. In the table above, <resource metadata> refers to resource-specific metadata that depends on the considered metric and on how the implementation is configured. It can be summed up as: group?, version, kind, [name, namespace?], scope, where the tags in square brackets ([]) are not present when per-resource collection is disabled and tags followed by a question mark are omitted if the associated value is empty. Note that, in the context of controllers’ execution metrics, these tag names are prefixed with resource.. This prefix might be removed in a future version for greater consistency.

Optimizing Caches

One of the ideas behind the operator pattern is that all the relevant resources are cached, so reconciliation is usually very fast (especially if no resources are updated in the process) since the operator mostly works with in-memory state. However, for large clusters, caching huge amounts of primary and secondary resources might consume lots of memory. JOSDK provides ways to mitigate this issue and optimize the memory usage of controllers. While these features work and are tested, we need feedback from real production usage.

Bounded Caches for Informers

Limiting caches for informers - thus for Kubernetes resources - is supported by ensuring that resources are in the cache for a limited time, via a cache eviction of least recently used resources. This means that when resources are created and frequently reconciled, they stay “hot” in the cache. However, if, over time, a given resource “cools” down, i.e. it becomes less and less used to the point that it might not be reconciled anymore, it will eventually get evicted from the cache to free up memory. If such an evicted resource were to become reconciled again, the bounded cache implementation would then fetch it from the API server and the “hot/cold” cycle would start anew.

Since all resources need to be reconciled when a controller starts, it is not practical to set a maximal cache size: it is desirable that all resources be cached as soon as possible to make the initial reconciliation process on start as fast and efficient as possible, avoiding undue load on the API server. It is therefore more interesting to gradually evict cold resources than to try to limit cache sizes.

See the usage of the related Caffeine cache-based implementation in the integration tests for primary resources.

See also CaffeineBoundedItemStores for more details.
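A minimal sketch of creating such a bounded cache for a primary resource is shown below. Note that the helper method’s name and signature are assumptions based on the CaffeineBoundedItemStores class and the integration tests referenced above; check your JOSDK version for the exact API:

var client = new KubernetesClientBuilder().build();
// Keep resources cached while they are "hot", evicting those not accessed for
// 1 minute; evicted resources are re-fetched from the API server when needed.
var boundedItemStore = CaffeineBoundedItemStores.boundedItemStore(
        client, WebPage.class, Duration.ofMinutes(1));
// The store is then wired into the informer configuration for the primary
// resource, as demonstrated in the linked integration tests.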

7 - Dependent Resources

Dependent Resources

Motivations and Goals

Most operators need to deal with secondary resources when trying to realize the desired state described by the primary resource they are in charge of. For example, the Kubernetes-native Deployment controller needs to manage ReplicaSet instances as part of a Deployment’s reconciliation process. In this instance, ReplicaSet is considered a secondary resource for the Deployment controller.

Controllers that deal with secondary resources typically need to perform the following steps, for each secondary resource:

flowchart TD

compute[Compute desired secondary resource based on primary state] --> A
A{Secondary resource exists?}
A -- Yes --> match
A -- No --> Create --> Done

match{Matches desired state?}
match -- Yes --> Done
match -- No --> Update --> Done

While these steps are not difficult in and of themselves, there are some subtleties that can lead to bugs or sub-optimal code if not done right. As this process is pretty much the same for each dependent resource, it makes sense for the SDK to offer some level of support to remove the boilerplate code associated with encoding these repetitive actions. It should be possible to handle common cases (such as dealing with Kubernetes-native secondary resources) in a semi-declarative way with only a minimal amount of code, JOSDK taking care of wiring everything accordingly.

Moreover, in order for your reconciler to get informed of events on these secondary resources, you need to configure, create and maintain event sources. JOSDK already makes it rather easy to deal with these, but dependent resources make it even simpler.

Finally, there are also opportunities for the SDK to transparently add features that are even trickier to get right, such as immediate caching of updated or created resources (so that your reconciler doesn’t need to wait for a cluster roundtrip to continue its work) and associated event filtering (so that something your reconciler just changed doesn’t re-trigger a reconciliation, for example).

Design

DependentResource vs. AbstractDependentResource

The new DependentResource interface lies at the core of the design and strives to encapsulate the logic that is required to reconcile the state of the associated secondary resource based on the state of the primary one. For most cases, this logic will follow the flow expressed above and JOSDK provides a very convenient implementation of this logic in the form of the AbstractDependentResource class. If your logic doesn’t fit this pattern, though, you can still provide your own reconcile method implementation. While the benefits of using dependent resources are less obvious in that case, this allows you to separate the logic necessary to deal with each secondary resource in its own class that can then be tested in isolation via unit tests. You can also use the declarative support with your own implementations as we shall see later on.

AbstractDependentResource is designed so that classes extending it specify which functionality they support by implementing trait interfaces. This design was selected to express the fact that not all secondary resources are completely under the control of the primary reconciler: some dependent resources are only ever created or updated, for example, and we needed a way to let JOSDK know when that is the case. We therefore provide the trait interfaces Creator, Updater and Deleter to express that the DependentResource implementation will provide custom functionality to create, update and delete its associated secondary resources, respectively. If these traits are not implemented, then the corresponding parts of the logic described above are never triggered: if your implementation doesn’t implement Creator, for example, AbstractDependentResource will never try to create the associated secondary resource, even if it doesn’t exist. It is even possible to implement none of these traits and thereby create read-only dependent resources that will trigger your reconciler whenever a user interacts with them but that are never modified by your reconciler itself. Note, however, that read-only dependent resources rarely make sense, as it is usually simpler to register an event source for the target resource.

All subclasses of AbstractDependentResource can also implement the Matcher interface to customize how the SDK decides whether or not the actual state of the dependent matches the desired state. This makes it convenient to use these abstract base classes for your implementation, only customizing the matching logic. Note that in many cases there is no need to customize that logic, as the SDK already provides convenient default implementations: DesiredEqualsMatcher and GenericKubernetesResourceMatcher. If you want to provide custom logic, your DependentResource implementation only needs to implement the Matcher interface as below, which shows how to customize the default matching logic for Kubernetes resources to also consider annotations and labels, which are ignored by default:

public class MyDependentResource extends KubernetesDependentResource<MyDependent, MyPrimary>
        implements Matcher<MyDependent, MyPrimary> {
    // your implementation

    public Result<MyDependent> match(MyDependent actualResource, MyPrimary primary,
                                     Context<MyPrimary> context) {
        return GenericKubernetesResourceMatcher.match(this, actualResource, primary, context, true);
    }
}

Batteries included: convenient DependentResource implementations!

JOSDK also offers several other convenient implementations building on top of AbstractDependentResource that you can use as starting points for your own implementations.

One such implementation is the KubernetesDependentResource class that makes it really easy to work with Kubernetes-native resources. In this case, you usually only need to provide an implementation for the desired method to tell JOSDK what the desired state of your secondary resource should be based on the specified primary resource state.

JOSDK takes care of everything else using default implementations that you can override in case you need more precise control of what’s going on.

We also provide implementations that make it easy to cache (AbstractExternalDependentResource) or poll for changes in external resources (PollingDependentResource, PerResourcePollingDependentResource). All the provided implementations can be found in the io/javaoperatorsdk/operator/processing/dependent package of the operator-framework-core module.

Sample Kubernetes Dependent Resource

A typical use case is when a Kubernetes resource is fully managed: created, read, updated and deleted (or set to be garbage collected). The following example shows how to create a Deployment dependent resource:


@KubernetesDependent(labelSelector = WebPageManagedDependentsReconciler.SELECTOR)
class DeploymentDependentResource extends CRUDKubernetesDependentResource<Deployment, WebPage> {

    public DeploymentDependentResource() {
        super(Deployment.class);
    }

    @Override
    protected Deployment desired(WebPage webPage, Context<WebPage> context) {
        var deploymentName = deploymentName(webPage);
        Deployment deployment = loadYaml(Deployment.class, getClass(), "deployment.yaml");
        deployment.getMetadata().setName(deploymentName);
        deployment.getMetadata().setNamespace(webPage.getMetadata().getNamespace());
        deployment.getSpec().getSelector().getMatchLabels().put("app", deploymentName);

        deployment.getSpec().getTemplate().getMetadata().getLabels()
                .put("app", deploymentName);
        deployment.getSpec().getTemplate().getSpec().getVolumes().get(0)
                .setConfigMap(new ConfigMapVolumeSourceBuilder().withName(configMapName(webPage)).build());
        return deployment;
    }
}

The only thing that you need to do is to extend the CRUDKubernetesDependentResource and specify the desired state for your secondary resources based on the state of the primary one. In the example above, we’re handling the state of a Deployment secondary resource associated with a WebPage custom (primary) resource.

The @KubernetesDependent annotation can be used to further configure managed dependent resources that extend KubernetesDependentResource.

See the full source code here.

Managed Dependent Resources

As mentioned previously, one goal of this implementation is to make it possible to declaratively create and wire dependent resources. You can annotate your reconciler with @Dependent annotations that specify which DependentResource implementations it depends upon. JOSDK will take the appropriate steps to wire everything together and call your DependentResource implementations’ reconcile method before your primary resource is reconciled. This makes sense in most use cases, where the logic associated with the primary resource is usually limited to status handling based on the state of the secondary resources, and the resources are not dependent on each other.

See Workflows for more details on how the dependent resources are reconciled.

This behavior and automated handling is referred to as “managed” because the DependentResource instances are managed by JOSDK, an example of which can be seen below:


@ControllerConfiguration(
        labelSelector = SELECTOR,
        dependents = {
                @Dependent(type = ConfigMapDependentResource.class),
                @Dependent(type = DeploymentDependentResource.class),
                @Dependent(type = ServiceDependentResource.class)
        })
public class WebPageManagedDependentsReconciler
        implements Reconciler<WebPage>, ErrorStatusHandler<WebPage> {

    // omitted code

    @Override
    public UpdateControl<WebPage> reconcile(WebPage webPage, Context<WebPage> context) {

        final var name = context.getSecondaryResource(ConfigMap.class).orElseThrow()
                .getMetadata().getName();
        webPage.setStatus(createStatus(name));
        return UpdateControl.patchStatus(webPage);
    }

}

See the full source code of the sample here.

Standalone Dependent Resources

It is also possible to wire dependent resources programmatically. In practice this means that the developer is responsible for initializing and managing the dependent resources as well as calling their reconcile method. However, this makes it possible for developers to fully customize the reconciliation process. Standalone dependent resources should be used in cases when the managed use case does not fit. You can, of course, also use Workflows when managing resources programmatically.

You can see a commented example of how to do so here.
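A minimal sketch of the standalone approach described above, where the reconciler owns the dependent resource instance and invokes its reconcile method itself. MyPrimary and ConfigMapDependentResource are placeholders, and the dependent’s event source also needs to be registered, as shown in the standalone workflow example later in this document:

public class StandaloneReconciler implements Reconciler<MyPrimary> {

    // the developer is responsible for the dependent resource's lifecycle
    private final ConfigMapDependentResource configMapDR = new ConfigMapDependentResource();

    @Override
    public UpdateControl<MyPrimary> reconcile(MyPrimary primary, Context<MyPrimary> context) {
        // explicitly invoke the dependent's reconciliation
        configMapDR.reconcile(primary, context);
        return UpdateControl.noUpdate();
    }
}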

Creating/Updating Kubernetes Resources

Since version 4.4 of the framework, resources are created and updated using Server Side Apply (SSA): the desired state is simply sent to the cluster using this approach to update the actual resource.

Comparing desired and actual state (matching)

During the reconciliation of a dependent resource, the desired state is matched with the actual state from the caches. The dependent resource only gets updated on the server if the actual, observed state differs from the desired one. Comparing these two states is a complex problem when dealing with Kubernetes resources, because a strict equality check is usually not what is wanted: multiple fields might be automatically updated or added by the platform (by dynamic admission controllers or validation webhooks, for example). Solving this problem in a generic way is therefore a tricky proposition.

JOSDK provides such a generic matching implementation, which is used by default: SSABasedGenericKubernetesResourceMatcher. This implementation relies on the managed fields used by the Server Side Apply feature to compare only the values of the fields that the controller manages. This ensures that only semantically relevant fields are compared. See the javadoc for further details.

JOSDK versions prior to 4.4 used a different matching algorithm, implemented in GenericKubernetesResourceMatcher.

Since SSA is a complex feature, JOSDK provides a feature flag allowing users to switch between these implementations. See ConfigurationService.

It is, however, important to note that these are default, generic implementations, provided so that the framework can offer expected behavior out of the box. In many situations these will work just fine, but it is also possible to provide matching algorithms optimized for specific use cases. This is easily done by simply overriding the match(...) method.

It is also possible to bypass the matching logic altogether and simply rely on the server-side apply mechanism, if always sending potentially unchanged resources to the cluster is not an issue. JOSDK’s matching mechanism does, however, make it possible to spare some potentially useless calls to the Kubernetes API server. To bypass the matching feature completely, simply override the match method to always return false, thus telling JOSDK that the actual state never matches the desired one, making it always update the resources using SSA.
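A sketch of such a bypass is shown below, reusing the MyDependent and MyPrimary placeholders from the earlier example; Result.nonComputed is assumed to be the factory method of the Matcher.Result interface mentioned above:

public class AlwaysApplyDependentResource
        extends KubernetesDependentResource<MyDependent, MyPrimary>
        implements Matcher<MyDependent, MyPrimary> {

    public AlwaysApplyDependentResource() {
        super(MyDependent.class);
    }
    // desired() omitted for brevity

    @Override
    public Result<MyDependent> match(MyDependent actualResource, MyPrimary primary,
                                     Context<MyPrimary> context) {
        // always report a mismatch so the desired state is unconditionally applied via SSA
        return Result.nonComputed(false);
    }
}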

WARNING: Older versions of Kubernetes before 1.25 would create an additional resource version for every SSA update performed with certain resources - even though there were no actual changes in the stored resource - leading to infinite reconciliations. This behavior was seen with Secrets using stringData, Ingresses using empty string fields, and StatefulSets using volume claim templates. The operator framework has added built-in handling for the StatefulSet issue. If you encounter this issue on an older Kubernetes version, consider changing your desired state, turning off SSA for that resource, or even upgrading your Kubernetes version. If you encounter it on a newer Kubernetes version, please log an issue with the JOSDK and with upstream Kubernetes.

Telling JOSDK how to find which secondary resources are associated with a given primary resource

KubernetesDependentResource automatically maps secondary resources to their primary via owner references. This behavior can be customized by having the dependent resource implement SecondaryToPrimaryMapper.

See a sample in one of the integration tests here.
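For illustration, the following sketch maps a secondary ConfigMap back to its primary using labels, assuming the primary’s name was stored in a (hypothetical) label when the ConfigMap was created:

public class MappedConfigMapDependentResource
        extends CRUDKubernetesDependentResource<ConfigMap, WebPage>
        implements SecondaryToPrimaryMapper<ConfigMap> {

    public MappedConfigMapDependentResource() {
        super(ConfigMap.class);
    }
    // desired() omitted for brevity; it would set the "example.com/primary-name" label

    @Override
    public Set<ResourceID> toPrimaryResourceIDs(ConfigMap configMap) {
        var labels = configMap.getMetadata().getLabels();
        var primaryName = labels == null ? null : labels.get("example.com/primary-name");
        return primaryName == null
                ? Set.of()
                : Set.of(new ResourceID(primaryName, configMap.getMetadata().getNamespace()));
    }
}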

Multiple Dependent Resources of Same Type

When dealing with multiple dependent resources of the same type, the dependent resource implementation needs to know which specific resource should be targeted when reconciling a given dependent resource, since there could be multiple instances of that type associated with the same primary resource. In this situation, JOSDK automatically selects the appropriate secondary resource matching the desired state associated with the primary resource. This makes sense because the desired state computation already needs to be able to discriminate among multiple related secondary resources to tell JOSDK how they should be reconciled.

There might be cases, though, where it might be problematic to call the desired method several times (for example, because it is costly to do so). In such cases, it is always possible to override this automated discrimination using several means (considered in this priority order):

  • Override the targetSecondaryResourceID method, if your DependentResource extends KubernetesDependentResource, where it’s very often possible to easily determine the ResourceID of the secondary resource. This would probably be the easiest solution if you’re working with Kubernetes resources.
  • Override the selectTargetSecondaryResource method, if your DependentResource extends AbstractDependentResource. It should be relatively simple to override this method to optimize the matching to your needs. You can see an example of such an implementation in the ExternalWithStateDependentResource class.
  • As a last resort, you can implement your own getSecondaryResource method on your DependentResource implementation from scratch.

Sharing an Event Source Between Dependent Resources

Dependent resources usually also provide event sources. When dealing with multiple dependents of the same type, one needs to decide whether these dependent resources should track the same resources and therefore share a common event source, or, to the contrary, track completely separate resources, in which case using separate event sources is advised.

Dependents can therefore reuse existing, named event sources by referring to their name. In the declarative case, assuming a configMapSource EventSource has already been declared, this would look as follows:

 @Dependent(type = MultipleManagedDependentResourceConfigMap1.class, 
   useEventSourceWithName = "configMapSource")

Samples are provided as integration tests, both for the managed case and for standalone cases.

Bulk Dependent Resources

So far, all the cases we’ve considered were dealing with situations where the number of dependent resources needed to reconcile the state expressed by the primary resource is known when writing the code for the operator. There are, however, cases where the number of dependent resources to be created depends on information found in the primary resource.

These cases are covered by the “bulk” dependent resources feature. To create such dependent resources, your implementation should extend AbstractDependentResource (at least indirectly) and implement the BulkDependentResource interface.
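A minimal sketch of a bulk dependent resource is shown below. It assumes the name-keyed map contract of BulkDependentResource (desiredResources/getSecondaryResources) and the getSecondaryResourcesAsStream context accessor from recent JOSDK versions; MyPrimary and its getConfigMapCount() accessor are placeholders:

public class BulkConfigMapDependentResource
        extends CRUDKubernetesDependentResource<ConfigMap, MyPrimary>
        implements BulkDependentResource<ConfigMap, MyPrimary> {

    public BulkConfigMapDependentResource() {
        super(ConfigMap.class);
    }

    @Override
    public Map<String, ConfigMap> desiredResources(MyPrimary primary, Context<MyPrimary> context) {
        // the number of ConfigMaps to manage is read from the primary resource
        int count = primary.getSpec().getConfigMapCount();
        Map<String, ConfigMap> desired = new HashMap<>(count);
        for (int i = 0; i < count; i++) {
            var name = primary.getMetadata().getName() + "-" + i;
            desired.put(name, new ConfigMapBuilder()
                    .withNewMetadata()
                    .withName(name)
                    .withNamespace(primary.getMetadata().getNamespace())
                    .endMetadata()
                    .addToData("index", String.valueOf(i))
                    .build());
        }
        return desired;
    }

    @Override
    public Map<String, ConfigMap> getSecondaryResources(MyPrimary primary,
                                                        Context<MyPrimary> context) {
        // index the actual resources by the same keys used in desiredResources
        return context.getSecondaryResourcesAsStream(ConfigMap.class)
                .collect(Collectors.toMap(cm -> cm.getMetadata().getName(), Function.identity()));
    }
}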

Various examples are provided as integration tests.

To see how bulk dependent resources interact with workflow conditions, please refer to this integration test.

External State Tracking Dependent Resources

It is sometimes necessary for a controller to track external (i.e. non-Kubernetes) state to properly manage some dependent resources. For example, your controller might need to track the state of a REST API resource which, after being created, would be referred to by its identifier. Such an identifier would need to be tracked by your controller to properly retrieve the state of the associated resource and/or assess whether such a resource exists. While there are several ways to support this use case, we recommend storing such information in a dedicated Kubernetes resource (usually a ConfigMap or a Secret), so that it can be manipulated with common Kubernetes mechanisms.

This particular use case is supported by the AbstractExternalDependentResource class, which you can extend to suit your needs, implementing the DependentResourceWithExplicitState interface as well. Note that most of the JOSDK-provided dependent resource implementations, such as PollingDependentResource or PerResourcePollingDependentResource, already extend AbstractExternalDependentResource, thus supporting external state tracking out of the box.

See integration test as a sample.

For a better understanding, it might be worth studying a sample implementation without dependent resources.

Please also refer to the docs for managing state in general.

Combining Bulk and External State Tracking Dependent Resources

Both bulk and external state tracking features can be combined. In that case, a separate, state-tracking resource will be created for each bulk dependent resource created. For example, if three bulk dependent resources associated with external state are created, three associated ConfigMaps (assuming ConfigMaps are used as a state-tracking resource) will also be created, one per dependent resource.

See integration test as a sample.

GenericKubernetesResource based Dependent Resources

In rare circumstances, you might need to handle resources for which there is no Java class representation, i.e. in a typeless way. The Fabric8 client provides GenericKubernetesResource to support that.

For dependent resources, this is supported by GenericKubernetesDependentResource. See samples here.
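A sketch of such a dependent resource follows, assuming GenericKubernetesDependentResource is configured via a GroupVersionKind; MyPrimary and the example.com/v1 MyExternalKind type are placeholders:

public class MyGenericDependentResource
        extends GenericKubernetesDependentResource<MyPrimary> {

    public MyGenericDependentResource() {
        // identify the typeless resource by its group, version and kind
        super(new GroupVersionKind("example.com", "v1", "MyExternalKind"));
    }

    @Override
    protected GenericKubernetesResource desired(MyPrimary primary, Context<MyPrimary> context) {
        var resource = new GenericKubernetesResource();
        resource.setApiVersion("example.com/v1");
        resource.setKind("MyExternalKind");
        resource.setMetadata(new ObjectMetaBuilder()
                .withName(primary.getMetadata().getName())
                .withNamespace(primary.getMetadata().getNamespace())
                .build());
        // untyped payload: the spec is just a map of properties
        resource.setAdditionalProperty("spec", Map.of("size", 1));
        return resource;
    }
}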

Other Dependent Resource Features

Caching and Event Handling in KubernetesDependentResource

  1. When a Kubernetes resource is created or updated, the related informer (more precisely, the InformerEventSource) will eventually receive an event and cache the up-to-date resource. There might, however, be a small time window during which calling getResource() on the dependent resource, or getting the resource from the EventSource itself, won’t return the just-updated resource, because the associated event hasn’t been received from the Kubernetes API yet. The KubernetesDependentResource implementation addresses this issue, so you don’t have to worry about it, by making sure that it and the related InformerEventSource always return the up-to-date resource.

  2. Another feature of KubernetesDependentResource is to make sure that if a resource is created or updated during the reconciliation, this particular change, which would normally trigger the reconciliation again (since the resource has changed on the server), will in fact not re-trigger it, since we already know the state is as expected. This is a small optimization. For example, if a ConfigMap is updated using dependent resources during a reconciliation, this won’t trigger a new reconciliation; such a reconciliation is indeed not needed since the change originated from our reconciler. For this system to work properly, though, changes must be received by only one event source (a best practice in general): if there are, for example, two ConfigMap dependents, there should either be a shared event source between them, or a label selector on each event source to select only the relevant events. See the related integration test.

“Read-only” Dependent Resources vs. Event Source

See Integration test for a read-only dependent here.

Some secondary resources only exist as input for the reconciliation process and are never updated by a controller (they might, and actually usually do, get updated by users interacting with the resources directly, however). This might be the case, for example, for a ConfigMap used to configure common characteristics of multiple resources in one convenient place.

In such situations, one might wonder whether it makes sense to create a dependent resource or to simply use an EventSource so that the primary resource gets reconciled whenever a user changes the resource. Typical dependent resources provide a desired state that the reconciliation process attempts to match. In the case of so-called read-only dependents, though, there is no such desired state, because the operator / controller will never update the resource itself, just react to external changes to it. An EventSource would achieve the same result.

Using a dependent resource for that purpose instead of a simple EventSource, however, provides several benefits:

  • dependents can be created declaratively, while an event source would need to be manually created
  • if dependents are already used in a controller, it makes sense to unify the handling of all secondary resources as dependents from a code organization perspective
  • dependent resources can also interact with the workflow feature, allowing the read-only resource to participate in conditions: deciding whether the primary resource needs or can be reconciled via reconcile pre-conditions, blocking the progression of the workflow altogether via ready post-conditions, or having other dependents depend on them. In essence, read-only dependents can participate in workflows just as any other dependents.
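A read-only dependent can therefore be as simple as the following sketch, which watches ConfigMaps without ever mutating them, since none of the Creator, Updater or Deleter traits are implemented (MyPrimary is a placeholder):

public class SharedConfigDependentResource
        extends KubernetesDependentResource<ConfigMap, MyPrimary> {

    public SharedConfigDependentResource() {
        super(ConfigMap.class);
    }
    // no Creator/Updater/Deleter traits: JOSDK only watches the resource and
    // triggers reconciliations on changes, never modifying it
}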

8 - Workflows

Overview

Kubernetes (k8s) does not have the notion of a resource “depending on” another k8s resource, at least not in terms of the order in which these resources should be reconciled. Kubernetes operators, however, typically need to reconcile resources in order, because resources’ state often depends on that of other resources, or they cannot be processed until those other resources reach a given state or some condition holds true for them. Dealing with such scenarios is therefore rather common for operators, and the purpose of the workflow feature of the Java Operator SDK (JOSDK) is to simplify supporting such cases in a declarative way. Workflows build on top of the dependent resources feature. While dependent resources focus on how a given secondary resource should be reconciled, workflows focus on orchestrating how these dependent resources should be reconciled.

Workflows describe how a set of dependent resources (DR) depend on one another, along with the conditions that need to hold true at certain stages of the reconciliation process.

Elements of Workflow

  • Dependent resource (DR) - the resources being managed as part of a given reconciliation logic.

  • Depends-on relation - a B DR depends on another A DR if B needs to be reconciled after A.

  • Reconcile precondition - is a condition on a given DR that needs to become true before the DR is reconciled. This also makes it possible to define optional resources that would, for example, only be created if a flag in a custom resource .spec has some specific value.

  • Ready postcondition - is a condition on a given DR to prevent the workflow from proceeding until the condition checking whether the DR is ready holds true (see the sketch after this list)

  • Delete postcondition - is a condition on a given DR to check if the reconciliation of dependents can proceed after the DR is supposed to have been deleted

  • Activation condition - is a special condition meant to specify under which condition the DR is used in the workflow. A typical use-case for this feature is to only activate some dependents depending on the presence of optional resources / features on the target cluster. Without this activation condition, JOSDK would attempt to register an informer for these optional resources, which would cause an error in the case where the resource is missing. With this activation condition, you can now conditionally register informers depending on whether the condition holds or not. This is a very useful feature when your operator needs to handle different flavors of the platform (e.g. OpenShift vs plain Kubernetes) and/or change its behavior based on the availability of optional resources / features (e.g. CertManager, a specific Ingress controller, etc.).

    A generic activation condition is provided out of the box, called CRDPresentActivationCondition, that prevents the associated dependent resource from being activated if the Custom Resource Definition associated with the dependent’s resource type is not present on the cluster. See the related integration test.

    Having multiple resources of the same type with an activation condition is a bit tricky: since you don’t want multiple InformerEventSources for the same type, you have to explicitly name the informer of the Dependent Resource (@KubernetesDependent(informerConfig = @InformerConfig(name = "configMapInformer"))) for all dependents of the same type that have an activation condition. This makes sure that only one informer is registered. See details in the low-level API documentation.
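For illustration, here is a sketch of a ready post-condition for a Deployment, in the spirit of the DeploymentReadyCondition used in the managed workflow sample below; the Condition#isMet signature receiving the dependent resource is assumed from recent JOSDK versions:

public class DeploymentReadyCondition implements Condition<Deployment, WebPage> {

    @Override
    public boolean isMet(DependentResource<Deployment, WebPage> dependentResource,
                         WebPage primary, Context<WebPage> context) {
        return context.getSecondaryResource(Deployment.class)
                .map(deployment -> {
                    var status = deployment.getStatus();
                    // ready once the observed ready replicas match the desired count
                    return status != null
                            && Objects.equals(status.getReadyReplicas(),
                                deployment.getSpec().getReplicas());
                })
                .orElse(false);
    }
}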

Result conditions

While simple conditions are usually enough, it might happen that you want to convey extra information as a result of the evaluation of a condition (e.g., to report error messages, or because the result of the condition evaluation might be interesting for other purposes). In this situation, you should implement DetailedCondition instead of Condition and provide an implementation of the detailedIsMet method, which allows you to return a more detailed Result object via which you can provide extra information. The DetailedCondition.Result interface provides factory methods for your convenience, but you can also provide your own implementation if required.

You can access the results for conditions from the WorkflowResult instance that is returned whenever a workflow is evaluated. You can access that result from the ManagedWorkflowAndDependentResourceContext accessible from the reconciliation Context. You can then access individual condition results using the getDependentConditionResult methods. You can see an example of this in this integration test.

Defining Workflows

Similarly to dependent resources, there are two ways to define workflows: in a managed or in a standalone manner.

Managed

Annotations can be used to declaratively define a workflow for a Reconciler. Similarly to how things are done for dependent resources, managed workflows execute before the reconcile method is called. The result of the reconciliation can be accessed via the Context object that is passed to the reconcile method.

The following sample shows a hypothetical use case showcasing all the elements: the primary TestCustomResource resource handled by our Reconciler defines two dependent resources, a Deployment and a ConfigMap. The ConfigMap depends on the Deployment, so it will be reconciled after it. Moreover, the Deployment dependent resource defines a ready post-condition, meaning that the ConfigMap will not be reconciled until the condition defined by the Deployment becomes true. Additionally, the ConfigMap dependent also defines a reconcile pre-condition, so it also won’t be reconciled until that condition becomes true. The ConfigMap also defines a delete post-condition, which means that the workflow implementation will only consider the ConfigMap deleted once that post-condition becomes true.


@Workflow(dependents = {
        @Dependent(name = DEPLOYMENT_NAME, type = DeploymentDependentResource.class,
                readyPostcondition = DeploymentReadyCondition.class),
        @Dependent(type = ConfigMapDependentResource.class,
                reconcilePrecondition = ConfigMapReconcileCondition.class,
                deletePostcondition = ConfigMapDeletePostCondition.class,
                activationCondition = ConfigMapActivationCondition.class,
                dependsOn = DEPLOYMENT_NAME)
})
@ControllerConfiguration
public class SampleWorkflowReconciler implements Reconciler<WorkflowAllFeatureCustomResource>,
    Cleaner<WorkflowAllFeatureCustomResource> {

  public static final String DEPLOYMENT_NAME = "deployment";

  @Override
  public UpdateControl<WorkflowAllFeatureCustomResource> reconcile(
      WorkflowAllFeatureCustomResource resource,
      Context<WorkflowAllFeatureCustomResource> context) {

    resource.getStatus()
        .setReady(
            context.managedWorkflowAndDependentResourceContext()  // accessing workflow reconciliation results
                .getWorkflowReconcileResult()
                .allDependentResourcesReady());
    return UpdateControl.patchStatus(resource);
  }

  @Override
  public DeleteControl cleanup(WorkflowAllFeatureCustomResource resource,
      Context<WorkflowAllFeatureCustomResource> context) {
    // omitted code

    return DeleteControl.defaultDelete();
  }
}

Standalone

In this mode, the workflow is built manually using standalone dependent resources. The workflow is created using a builder that is explicitly called in the reconciler (from the WebPage sample):


@ControllerConfiguration(
    labelSelector = WebPageDependentsWorkflowReconciler.DEPENDENT_RESOURCE_LABEL_SELECTOR)
public class WebPageDependentsWorkflowReconciler
    implements Reconciler<WebPage>, ErrorStatusHandler<WebPage> {

  public static final String DEPENDENT_RESOURCE_LABEL_SELECTOR = "!low-level";
  private static final Logger log =
      LoggerFactory.getLogger(WebPageDependentsWorkflowReconciler.class);

  private KubernetesDependentResource<ConfigMap, WebPage> configMapDR;
  private KubernetesDependentResource<Deployment, WebPage> deploymentDR;
  private KubernetesDependentResource<Service, WebPage> serviceDR;
  private KubernetesDependentResource<Ingress, WebPage> ingressDR;

  private final Workflow<WebPage> workflow;

  public WebPageDependentsWorkflowReconciler(KubernetesClient kubernetesClient) {
    initDependentResources(kubernetesClient);
    workflow = new WorkflowBuilder<WebPage>()
        .addDependentResource(configMapDR)
        .addDependentResource(deploymentDR)
        .addDependentResource(serviceDR)
        .addDependentResource(ingressDR).withReconcilePrecondition(new ExposedIngressCondition())
        .build();
  }

  @Override
  public Map<String, EventSource> prepareEventSources(EventSourceContext<WebPage> context) {
    return EventSourceUtils.nameEventSources(
        configMapDR.initEventSource(context),
        deploymentDR.initEventSource(context),
        serviceDR.initEventSource(context),
        ingressDR.initEventSource(context));
  }

  @Override
  public UpdateControl<WebPage> reconcile(WebPage webPage, Context<WebPage> context) {

    var result = workflow.reconcile(webPage, context);

    webPage.setStatus(createStatus(result));
    return UpdateControl.patchStatus(webPage);
  }
  // omitted code
}

Workflow Execution

This section describes in detail how a workflow is executed: how the ordering is determined and how conditions and errors affect the behavior. Workflow execution is divided into two parts, similarly to how Reconciler and Cleaner behaviors are separated. Cleanup is executed if a resource is marked for deletion.

Common Principles

  • As complete as possible execution - when a workflow is reconciled, it tries to reconcile as many resources as possible. Thus, if an error happens or a ready condition is not met for a resource, all the other independent resources will still be reconciled. This is the opposite of a fail-fast approach. The assumption is that this way the overall state will eventually converge faster towards the desired state than if the reconciliation were aborted as soon as an error occurred.
  • Concurrent reconciliation of independent resources - resources that don’t depend on each other are processed concurrently. The level of concurrency is customizable and could be set to one if required. By default, workflows use the executor service from ConfigurationService.

Reconciliation

This section describes how a workflow is executed, first listing the rules that apply, then demonstrating them with examples:

Rules

  1. A workflow is a Directed Acyclic Graph (DAG) built from the DRs and their associated depends-on relations.
  2. Root nodes, i.e. nodes in the graph that do not depend on other nodes, are reconciled first, in a parallel manner.
  3. A DR is reconciled if it does not depend on any other DRs, or if ALL the DRs it depends on are reconciled and ready. If a DR defines a reconcile pre-condition and/or an activation condition, then these conditions must become true before the DR is reconciled.
  4. A DR is considered ready if it got successfully reconciled and any ready post-condition it might define is true.
  5. If a DR’s reconcile pre-condition is not met, this DR is deleted, along with, recursively, all the DRs that depend on it. This implies that DRs are deleted in the reverse order compared to the one in which they are reconciled. The reasoning behind this behavior is as follows: a DR with a reconcile pre-condition is only reconciled if the condition holds true. This means that if the condition is false and the resource didn’t exist already, the associated resource would not be created. To ensure idempotency (i.e. with the same input state, we should have the same output state), it follows that if the condition doesn’t hold true anymore, the associated resource needs to be deleted, because the resource shouldn’t exist/have been created.
  6. If a DR’s activation condition is not met, it won’t be reconciled or deleted. If other DRs depend on it, those will be recursively deleted in a way similar to reconcile pre-conditions. Event sources for a dependent resource with an activation condition are registered/de-registered dynamically, i.e. during the reconciliation.
  7. For a DR to be deleted by a workflow, it needs to implement the Deleter interface, in which case its delete method will be called, unless it also implements the GarbageCollected interface. If a DR doesn’t implement Deleter it is considered as automatically deleted. If a delete post-condition exists for this DR, it needs to become true for the workflow to consider the DR as successfully deleted.

Samples

Notation: The arrows depict reconciliation ordering, thus following the reverse direction of the depends-on relation: 1 --> 2 means DR 2 depends on DR 1.

Reconcile Sample

stateDiagram-v2
1 --> 2
1 --> 3
2 --> 4
3 --> 4
  • Root nodes (i.e. nodes that don’t depend on any others) are reconciled first. In this example, DR 1 is reconciled first since it doesn’t depend on others. After that, both DR 2 and 3 are reconciled concurrently, then DR 4 once both are reconciled successfully.
  • If DR 2 had a ready condition and it evaluated to false, DR 4 would not be reconciled. However, 1, 2 and 3 would be.
  • If 1 had a false ready condition, neither 2, 3 nor 4 would be reconciled.
  • If 2’s reconciliation resulted in an error, 4 would not be reconciled, but 3 would be (and 1 as well, of course).

Sample with Reconcile Precondition

stateDiagram-v2
1 --> 2
1 --> 3
3 --> 4
3 --> 5
  • If 3 has a reconcile pre-condition that is not met, 1 and 2 would be reconciled. However, DRs 3, 4 and 5 would be deleted: 4 and 5 would be deleted concurrently, but 3 would only be deleted if 4 and 5 were deleted successfully (i.e. without error) and all existing delete post-conditions were met.
  • If 5 had a delete post-condition that was false, 3 would not be deleted but 4 would still be because they don’t depend on one another.
  • Similarly, if 5’s deletion resulted in an error, 3 would not be deleted but 4 would be.

Cleanup

Cleanup works identically to the deletion triggered during reconciliation when a reconcile pre-condition is not met, except that it applies to the whole workflow.

Rules

  1. Delete is called on a DR if no other DR depends on it.
  2. If other DRs depend on a DR, it will only be deleted if all those DRs are successfully deleted, without error, and any delete post-condition is true.
  3. A DR is “manually” deleted (i.e. its Deleter.delete method is called) if it implements the Deleter interface but does not implement GarbageCollected. If a DR does not implement the Deleter interface, it is considered automatically deleted.

Sample

stateDiagram-v2
1 --> 2
1 --> 3
2 --> 4
3 --> 4
  • The DRs are deleted in the following order: 4 is deleted first, then 2 and 3 are deleted concurrently, and, only after both are successfully deleted, 1 is deleted.
  • If 2 had a delete post-condition that was false, 1 would not be deleted. 4 and 3 would be deleted.
  • If 2 was in error, DR 1 would not be deleted. DR 4 and 3 would be deleted.
  • If 4 was in error, no other DR would be deleted.

Error Handling

As mentioned before, if an error happens during a reconciliation, the reconciliation of other dependent resources will still happen, assuming they don’t depend on the one that failed. In case multiple DRs fail, the workflow throws an AggregatedOperatorException containing all the related exceptions.

The exceptions can be handled via ErrorStatusHandler.

Waiting for the actual deletion of Kubernetes Dependent Resources

Let’s consider a case where a Kubernetes Dependent Resource (KDR) depends on another resource. On cleanup, the resources will be deleted in reverse order, so the KDR will be deleted first. However, the workflow implementation currently simply asks the Kubernetes API server to delete the resource. This is an asynchronous process, meaning that the deletion might not occur immediately, in particular if the resource uses finalizers that block the deletion or if the deletion itself takes some time. From the SDK’s perspective, though, the deletion has been requested and it moves on to other tasks without waiting for the resource to actually be deleted from the server (which might never occur if it uses finalizers that are not removed). In situations like these, if your logic depends on resources actually being removed from the cluster before a cleanup workflow can proceed correctly, you need to block the workflow’s progression using a delete post-condition that checks that the resource is actually removed, or at least that it no longer has any finalizers. JOSDK provides such a delete post-condition implementation in the form of KubernetesResourceDeletedCondition.

Also, check usage in an integration test.

In such cases, the Kubernetes Dependent Resource should extend CRUDNoGCKubernetesDependentResource and NOT CRUDKubernetesDependentResource, since otherwise the Kubernetes garbage collector would delete the resources. In other words, if a Kubernetes Dependent Resource depends on another dependent resource, it should not implement the GarbageCollected interface, otherwise the deletion order won’t be guaranteed.

Explicit Managed Workflow Invocation

Managed workflows, i.e. ones that are declared via annotations and therefore completely managed by JOSDK, are reconciled before the primary resource. Each dependent resource that can be reconciled (according to the workflow configuration) will therefore be reconciled before the primary reconciler is called to reconcile the primary resource. There are, however, situations where it would be useful to perform additional steps before the workflow is reconciled, for example to validate the current state, execute arbitrary logic or even skip reconciliation altogether. Explicit invocation of managed workflows was therefore introduced to solve these issues.

To use this feature, you need to set the explicitInvocation field to true on the @Workflow annotation and then call the reconcileManagedWorkflow method from the ManagedWorkflowAndDependentResourceContext retrieved from the reconciliation Context provided as part of your primary resource reconciler reconcile method arguments.
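A sketch of the resulting reconciler (MyPrimary and ConfigMapDependentResource are placeholders):

@Workflow(explicitInvocation = true,
        dependents = @Dependent(type = ConfigMapDependentResource.class))
@ControllerConfiguration
public class ExplicitInvocationReconciler implements Reconciler<MyPrimary> {

    @Override
    public UpdateControl<MyPrimary> reconcile(MyPrimary resource, Context<MyPrimary> context) {
        // validate state or run arbitrary logic first; skipping the call below
        // skips the workflow reconciliation altogether
        context.managedWorkflowAndDependentResourceContext().reconcileManagedWorkflow();
        return UpdateControl.noUpdate();
    }
}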

See related integration test for more details.

For cleanup, if the Cleaner interface is implemented, cleanupManageWorkflow() needs to be called explicitly. However, if the Cleaner interface is not implemented, the workflow cleanup will be invoked implicitly. See the related integration test.

While nothing prevents calling the workflow multiple times in a reconciler, it isn’t typical or even recommended to do so. Conversely, if explicit invocation is requested but reconcileManagedWorkflow is not called in the primary resource reconciler, the workflow won’t be reconciled at all.

Notes and Caveats

  • Delete is almost always called on every resource during cleanup. However, it might be the case that the resources were already deleted in a previous run, or never even created. This is not a problem, since dependent resources usually cache the state of the resource, so they are already aware that the resource does not exist and that nothing needs to be done if delete is called.
  • If a resource has owner references, it will be automatically deleted by the Kubernetes garbage collector when the owner resource is marked for deletion. This might not be desirable; to make sure that deletion is handled by the workflow, don’t use a garbage-collected Kubernetes dependent resource, use for example CRUDNoGCKubernetesDependentResource.
  • No state is persisted regarding the workflow execution. Every reconciliation causes all the resources to be reconciled again, in other words the whole workflow is evaluated again.

9 - FAQ

Q: How can I access the events which triggered the Reconciliation?

In v1.* versions, events were exposed to the Reconciler (which was called ResourceController then). This included events (Create, Update) on the custom resource, but also events produced by event sources. After long discussions, also with developers of the golang version (controller-runtime), we decided to remove access to these events. We had already advocated not using events in the reconciliation logic, since events can be lost; instead, reconcile all the resources on every execution of the reconciliation. At first this might sound a little opinionated, but there is sound agreement between the developers that this is the way to go.

Note that this is also consistent with Kubernetes’ level-based reconciliation approach.

Q: Can I re-schedule a reconciliation, possibly with a specific delay?

Yes, this can be done using UpdateControl and DeleteControl, see:

  @Override
  public UpdateControl<MyCustomResource> reconcile(
     MyCustomResource resource, Context<MyCustomResource> context) {
    ...
    return UpdateControl.patchStatus(resource).rescheduleAfter(10, TimeUnit.SECONDS);
  }

without an update:

  @Override
  public UpdateControl<MyCustomResource> reconcile(
     MyCustomResource resource, Context<MyCustomResource> context) {
    ...
    return UpdateControl.<MyCustomResource>noUpdate().rescheduleAfter(10, TimeUnit.SECONDS);
  }

You might, however, consider using EventSources to handle reconciliation triggering in a smarter way.

Q: How can I run an operator without cluster scope rights?

By default, JOSDK requires access to CRs at cluster scope. You may not be granted such rights and will see an error at startup that looks like:

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.local.svc/apis/mygroup/v1alpha1/mycr. Message: Forbidden! Configured service account doesn't have access. Service account may have been revoked. mycrs.mygroup is forbidden: User "system:serviceaccount:ns:sa" cannot list resource "mycrs" in API group "mygroup" at the cluster scope.

To restrict the operator to a set of namespaces, you may override which namespaces are watched by a reconciler at Reconciler-level configuration:

Operator operator;
Reconciler reconciler;
...
operator.register(reconciler, configOverrider ->
        configOverrider.settingNamespace("mynamespace"));

Note that configuring the watched namespaces can also be done using the @ControllerConfiguration annotation.
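For example, a declarative sketch (the exact attribute name depends on your JOSDK version; the namespaces attribute is assumed here):

// watch a single namespace instead of the whole cluster (attribute name assumed)
@ControllerConfiguration(namespaces = "mynamespace")
public class MyReconciler implements Reconciler<MyCustomResource> {
    // omitted code
}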

Furthermore, you may not be able to list CRDs at startup, which is required when checkingCRDAndValidateLocalModel is true (it is false by default). To disable this check, set it to false at the Operator-level configuration:

Operator operator = new Operator( override -> override.checkingCRDAndValidateLocalModel(false));

Q: I’m managing an external resource that has a generated ID, where should I store that?

It is common to manage non-Kubernetes, external resources from a controller. Such external resources might have a generated ID, so they are not simply addressable based on the spec of a custom resource. Therefore, the generated ID needs to be stored somewhere in order to address the resource during subsequent reconciliations.

Usually there are two options you can consider to store the ID:

  1. Create a separate resource (usually ConfigMap, Secret or dedicated CustomResource) where you store the ID.
  2. Store the ID in the status of the custom resource.

Note that both approaches are a bit tricky, since you have to guarantee that the resources are cached for the next reconciliation. For example, if you patch the status at the end of the reconciliation (UpdateControl.patchStatus(...)), it is not guaranteed that you will see the fresh resource during the next reconciliation. Therefore, controllers which do this usually cache the updated status in memory to make sure it is present for the next reconciliation.

The Dependent Resources feature supports the first approach.

Q: How to fix sun.security.provider.certpath.SunCertPathBuilderException on Rancher Desktop and k3d/k3s Kubernetes

It’s a common issue when using k3d: when the fabric8 client tries to connect to the cluster, an exception is thrown:

Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
	at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
	at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:352)
	at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:295)

The cause is that the fabric8 Kubernetes client does not handle elliptic curve encryption by default. To fix this, add the following dependency to the classpath:

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
</dependency>

10 - Architecture and Internals

This document gives an overview of the internal structure and components of the Java Operator SDK core, in order to make it easier for developers to understand and contribute to it. This document is not intended to be a comprehensive reference; rather, it is an introduction to the core concepts, and we hope that the other parts will then be fairly easy to understand. We will evolve this document based on the community’s feedback.

The Big Picture and Core Components

JOSDK architecture

An Operator is a set of independent controllers. The Controller class, however, is an internal class managed by the framework itself and usually shouldn’t be interacted with directly by end users. It manages all the processing units involved in reconciling a single type of Kubernetes resource.

Other components include:

  • Reconciler is the primary entry-point for the developers of the framework to implement the reconciliation logic.
  • EventSource represents a source of events that might eventually trigger a reconciliation.
  • EventSourceManager aggregates all the event sources associated with a controller. Manages the event sources' lifecycle.
  • ControllerResourceEventSource is a central event source that watches the resources associated with the controller (also called primary resources) for changes, propagates events and caches the related state.
  • EventProcessor processes the incoming events and makes sure they are executed in a sequential manner, i.e. that the events are processed in the order they are received for a given resource, despite requests being processed concurrently overall. The EventProcessor also takes care of re-scheduling or retrying requests as needed.
  • ReconcilerDispatcher is responsible for dispatching requests to the appropriate Reconciler method and handling the reconciliation results, making the instructed Kubernetes API calls.

Typical Workflow

A typical workflow looks like the following:

  1. An EventSource produces an event, that is propagated to the EventProcessor.
  2. The resource associated with the event is read from the internal cache.
  3. If the resource is not already being processed, a reconciliation request is submitted to the executor service to be executed in a different thread, encapsulated in a ControllerExecution instance.
  4. This, in turn, calls the ReconcilerDispatcher which dispatches the call to the appropriate Reconciler method, passing along all the required information.
  5. Once the Reconciler is done, what happens depends on the result returned by the Reconciler. If needed, the ReconcilerDispatcher will make the appropriate calls to the Kubernetes API server.
  6. Once the Reconciler is done, the EventProcessor is called back to finalize the execution and update the controller’s state.
  7. The EventProcessor checks if the request needs to be rescheduled or retried and if there are no subsequent events received for the same resource.
  8. If none of this is needed, the processing of the event is finished.

11 - Contributing To Java Operator SDK

First of all, we’d like to thank you for considering contributing to the project! We really hope to create a vibrant community around this project but this won’t happen without help from people like you!

Code of Conduct

We are serious about making this a welcoming, happy project. We will not tolerate discrimination, aggressive or insulting behaviour.

To this end, the project and everyone participating in it is bound by the Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behaviour to any of the project admins.

Bugs

If you find a bug, please open an issue! Do try to include all the details needed to recreate your problem. This is likely to include:

  • The version of the Operator SDK being used
  • The exact platform and version of the platform that you’re running on
  • The steps taken to cause the bug
  • Reproducer code is also very welcome to help us diagnose the issue and fix it quickly

Building Features and Documentation

If you’re looking for something to work on, take a look at the issue tracker, in particular any items labelled good first issue. Please leave a comment on the issue to mention that you have started work, in order to avoid multiple people working on the same issue.

If you have an idea for a feature - whether or not you have time to work on it - please also open an issue describing your feature and label it “enhancement”. We can then discuss it as a community and see what can be done. Please be aware that some features may not align with the project goals and might therefore be closed. In particular, please don’t start work on a new feature without discussing it first to avoid wasting effort. We do commit to listening to all proposals and will do our best to work something out!

Once you’ve got the go-ahead to work on a feature, you can start work. Feel free to communicate with the team via updates on the issue tracker or the Discord channel and ask for feedback, pointers, etc. Once you’re happy with your code, go ahead and open a Pull Request.

Pull Request Process

First, please format your commit messages so that they follow the conventional commit format.

On opening a PR, a GitHub action will execute the test suite against the new code. All code is required to pass the tests, and new code must be accompanied by new tests.

All PRs have to be reviewed and signed off by another developer before being merged. This review will likely ask for some changes to the code - please don’t be alarmed or upset at this; it is expected that all PRs will need tweaks, and that is a normal part of the process.

PRs are checked for compliance with the Google Java code style.

Be aware that all Operator SDK code is released under the Apache 2.0 licence.

Development environment setup

Code style

The SDK modules and samples are formatted to follow the Google Java code style. The code gets formatted automatically on every compile; however, to make things simpler (i.e. to avoid getting a PR rejected simply because of code style issues), you can import a code style scheme for the IDE you use.

Thanks

These guidelines were based on several sources, including Atom, PurpleBooth’s advice and the Contributor Covenant.

12 - Configuring JOSDK

Configuration options

The Java Operator SDK (JOSDK) provides several abstractions that work great out of the box. However, while we strive to cover the most common cases with the default behavior, we also recognize that this default behavior is not always what a given user might want for their operator. Numerous configuration options are therefore provided to help people tailor the framework to their needs.

Configuration options act at several levels, depending on which behavior you wish to act upon:

  • Operator-level using ConfigurationService
  • Reconciler-level using ControllerConfiguration
  • DependentResource-level using the DependentResourceConfigurator interface
  • EventSource-level: some event sources, such as InformerEventSource, might need to be fine-tuned to properly identify which events will trigger the associated reconciler.

Operator-level configuration

Configuration that impacts the whole operator is performed via the ConfigurationService class. ConfigurationService is an abstract class, and the implementation can differ based on which flavor of the framework is used. For example, the Quarkus Operator SDK replaces the default implementation. Configurations are initialized with sensible defaults but can be changed during initialization.

For instance, if you wish to not validate that the CRDs are present on your cluster when the operator starts and configure leader election, you would do something similar to:

Operator operator = new Operator(override -> override
        .checkingCRDAndValidateLocalModel(false)
        .withLeaderElectionConfiguration(new LeaderElectionConfiguration("bar", "barNS")));

Reconciler-level configuration

While reconcilers are typically configured using the @ControllerConfiguration annotation, it is also possible to override the configuration at runtime, when the reconciler is registered with the operator instance, either by passing it a completely new ControllerConfiguration instance or, preferably, by overriding some aspects of the current configuration using a ControllerConfigurationOverrider Consumer:

Operator operator;
Reconciler reconciler;
...
operator.register(reconciler, configOverrider ->
        configOverrider.withFinalizer("my-nifty-operator/finalizer").withLabelSelector("foo=bar"));

DependentResource-level configuration

DependentResource implementations can implement the DependentResourceConfigurator interface to pass information to the implementation. For example, the SDK provides specific support for the KubernetesDependentResource, which can be configured via the @KubernetesDependent annotation. This annotation is, in turn, converted into a KubernetesDependentResourceConfig instance, which is then passed to the configureWith method implementation.
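
As an illustration, here is a minimal sketch of a KubernetesDependentResource configured via @KubernetesDependent (ConfigMapDependent and MyCustomResource are hypothetical names):

import io.fabric8.kubernetes.api.model.ConfigMap;
import io.javaoperatorsdk.operator.processing.dependent.kubernetes.KubernetesDependent;
import io.javaoperatorsdk.operator.processing.dependent.kubernetes.KubernetesDependentResource;

// The annotation below is turned into a KubernetesDependentResourceConfig
// instance that the framework passes to configureWith.
@KubernetesDependent(labelSelector = "app.kubernetes.io/managed-by=my-operator")
public class ConfigMapDependent
    extends KubernetesDependentResource<ConfigMap, MyCustomResource> {

  public ConfigMapDependent() {
    super(ConfigMap.class);
  }
}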

TODO: still subject to change / uniformization

EventSource-level configuration

TODO

13 - Migrations

13.1 - Migrating from v1 to v2

Version 2 of the framework introduces improvements, features and breaking changes to both internal and user-facing APIs. However, the migration should be trivial in most cases. For a detailed overview of all major issues until the release of v2.0.0, see the milestone on GitHub. For a summary of, and the reasoning behind, some naming changes, see this issue.

User Facing API Changes

The following items are renamed and slightly changed:

  • ResourceController interface is renamed to Reconciler (see the sketch after this list). In addition, its methods are renamed:
    • createOrUpdateResource is renamed to reconcile
    • deleteResource is renamed to cleanup
  • Events are removed from the Context of Reconciler methods. The rationale is that there is now a consensus that events should not be used to implement reconciliation logic.
  • The init method is extracted from ResourceController / Reconciler to a separate interface called EventSourceInitializer, which Reconciler implementations should implement in order to register event sources. The method has been renamed to prepareEventSources and should now return a list of EventSource implementations that the Controller will automatically register. See also the sample for usage.
  • EventSourceManager is now an internal class that users shouldn’t need to interact with.
  • @Controller annotation is renamed to @ControllerConfiguration
  • The metrics use reconcile, cleanup and resource labels instead of createOrUpdate, delete and cr, respectively, to match the new logic.
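
A rough sketch of the v2 shape of the renamed interface (MyCustomResource is a hypothetical custom resource type; note that Context was not yet parameterized in v2):

// v1: ResourceController with createOrUpdateResource / deleteResource
// v2: Reconciler with reconcile / cleanup
public class MyReconciler implements Reconciler<MyCustomResource> {

  @Override
  public UpdateControl<MyCustomResource> reconcile(MyCustomResource resource, Context context) {
    return UpdateControl.noUpdate();
  }

  @Override
  public DeleteControl cleanup(MyCustomResource resource, Context context) {
    return DeleteControl.defaultDelete();
  }
}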

Event Sources

  • Addressing resources within event sources (and internally in the framework) has changed from .metadata.uid to the pair of .metadata.name and optional .metadata.namespace of the resource, represented by the ResourceID class.

The Event API is simplified: if an event source produces an event, it now just needs to produce an instance of the Event class (see the sketch below).

  • EventSource is refactored, but the changes are trivial.
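
For instance, a custom event source might now propagate an event along these lines (a minimal sketch; eventHandler stands for the handler the framework registers with the event source):

import io.javaoperatorsdk.operator.processing.event.Event;
import io.javaoperatorsdk.operator.processing.event.ResourceID;

// Resources are now addressed by name and optional namespace instead of UID:
ResourceID id = new ResourceID("my-resource", "my-namespace");
// Producing an event is just handing an Event instance to the handler:
eventHandler.handleEvent(new Event(id));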

13.2 - Migrating from v2 to v3

Version 3 introduces some breaking changes to APIs; however, migrating to these changes should be trivial.

Reconciler

  • Reconciler implementations can now throw checked exceptions (not just runtime exceptions), and those can also be handled by ErrorStatusHandler.
  • The cleanup method is extracted from the Reconciler interface into a separate Cleaner interface. Since finalizers only make sense when cleanup logic is implemented, a finalizer is from now on only added if the Reconciler implements this interface (or has managed dependent resources implementing the Deleter interface; see the dependent resource docs). See the sketch after this list.
  • The Context object of Reconciler now takes the primary resource as a type parameter: Context<MyCustomResource>.
  • The ErrorStatusHandler result changed; it has been functionally extended so that it can now prevent an Exception from being retried, and it handles checked exceptions as mentioned above.
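
As referenced in the list above, a minimal sketch of the new split between Reconciler and Cleaner (MyCustomResource is hypothetical):

public class MyReconciler implements Reconciler<MyCustomResource>, Cleaner<MyCustomResource> {

  @Override
  public UpdateControl<MyCustomResource> reconcile(MyCustomResource resource,
      Context<MyCustomResource> context) throws Exception { // checked exceptions are now allowed
    return UpdateControl.noUpdate();
  }

  // Implementing Cleaner is what makes the framework add a finalizer
  @Override
  public DeleteControl cleanup(MyCustomResource resource, Context<MyCustomResource> context) {
    // release any external, non garbage-collected resources here
    return DeleteControl.defaultDelete();
  }
}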

Event Sources

  • Event sources are now registered with a name, but a utility method is available to make it easy to migrate to a default name.
  • The InformerEventSource constructor changed to reflect additional functionality in a non-backwards-compatible way. All the configuration options from the constructor were moved to InformerConfiguration (see the sketch after this list). See sample usage in WebPageReconciler.
  • PrimaryResourcesRetriever was renamed to SecondaryToPrimaryMapper
  • AssociatedSecondaryResourceIdentifier was renamed to PrimaryToSecondaryMapper
  • getAssociatedResource is renamed to getSecondaryResource in multiple places
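
As referenced above, a sketch of how an InformerEventSource is now constructed (assuming a ConfigMap secondary resource, a hypothetical MyCustomResource primary, and context being the EventSourceContext passed to prepareEventSources; the exact builder methods may vary slightly between versions):

InformerEventSource<ConfigMap, MyCustomResource> eventSource =
    new InformerEventSource<>(
        InformerConfiguration.from(ConfigMap.class, context)
            // maps an event on a secondary resource back to its primary resource(s)
            .withSecondaryToPrimaryMapper(Mappers.fromOwnerReference())
            .build(),
        context);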

13.3 - Migrating from v3 to v3.1

ReconciliationMaxInterval Annotation has been renamed to MaxReconciliationInterval

Associated methods on both the ControllerConfiguration class and annotation have also been renamed accordingly.

Workflows Impact on Managed Dependent Resources Behavior

Version 3.1 comes with a workflow engine that replaces the previous behavior of managed dependent resources. See the Workflows documentation for further details. The primary impact after upgrading is a change in the order in which managed dependent resources are reconciled: they are now reconciled in parallel, with optional ordering defined using the ‘depends_on’ relation where needed. In v3, managed dependent resources were implicitly reconciled in the order they were defined in.
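
A sketch of how such ordering can be expressed (all class names are hypothetical):

@ControllerConfiguration(dependents = {
    @Dependent(name = "config-map", type = ConfigMapDependent.class),
    // reconciled only after the "config-map" dependent, thanks to dependsOn
    @Dependent(type = DeploymentDependent.class, dependsOn = "config-map")
})
public class MyReconciler implements Reconciler<MyCustomResource> {
  // ...
}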

Garbage Collected Kubernetes Dependent Resources

In version 3, all Kubernetes dependent resources implementing the Deleter interface were also meant to use owner references (and thus be garbage collected by Kubernetes). In 3.1 there is a dedicated GarbageCollected interface to distinguish between Kubernetes resources meant to be garbage collected and those meant to be explicitly deleted. Please refer also to the GarbageCollected javadoc for more details on how this impacts the way owner references are managed.

The supporting classes were also updated. Instead of CRUKubernetesDependentResource there are now two:

  • CRUDKubernetesDependentResource, which also implements GarbageCollected
  • CRUDNoGCKubernetesDependentResource, which implements Deleter but not GarbageCollected

Use the one matching your use case. We anticipate that most people will want to use CRUDKubernetesDependentResource whenever they have to work with Kubernetes dependent resources.
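
For example, a dependent resource meant to be garbage collected via owner references might look like this (hypothetical names):

public class DeploymentDependent
    extends CRUDKubernetesDependentResource<Deployment, MyCustomResource> {

  public DeploymentDependent() {
    super(Deployment.class);
  }
}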

13.4 - Migrating from v4.2 to v4.3

Condition API Change

In Workflows, the target of a condition used to be the managed resource itself, not the target dependent resource. This has changed: the API now passes the dependent resource.

New API:

public interface Condition<R, P extends HasMetadata> {

    boolean isMet(DependentResource<R, P> dependentResource, P primary, Context<P> context);

}

Former API:

public interface Condition<R, P extends HasMetadata> {

    boolean isMet(P primary, R secondary, Context<P> context);

}

Migration is trivial, since the secondary resource can be accessed from the dependent resource: to access the secondary resource, just use dependentResource.getSecondaryResource(primary, context).
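
For illustration, a condition written against the new API using getSecondaryResource (a sketch; the readiness check itself is merely illustrative):

Condition<Deployment, MyCustomResource> deploymentReady =
    (dependentResource, primary, context) ->
        dependentResource.getSecondaryResource(primary, context)
            .map(deployment -> deployment.getStatus() != null
                && deployment.getStatus().getReadyReplicas() != null)
            .orElse(false);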

HTTP client choice

It is now possible to change the HTTP client used by the Fabric8 client to communicate with the Kubernetes API server. By default, the SDK uses the historical default HTTP client, which relies on OkHttp, and nothing should be needed to keep using this implementation. The tomcat-operator sample has been migrated to use the Vert.x based implementation. You can see how to change the client by looking at that sample’s POM file:

  • You need to exclude the default implementation (in this case okhttp) from the operator-framework dependency
  • You need to add the appropriate implementation dependency, kubernetes-httpclient-vertx in this case. HTTP client implementations provided as part of the Fabric8 client all follow the kubernetes-httpclient-<implementation name> pattern for their artifact identifiers.
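
A sketch of the relevant POM fragments (version elements omitted; coordinates as described above):

<dependency>
  <groupId>io.javaoperatorsdk</groupId>
  <artifactId>operator-framework</artifactId>
  <exclusions>
    <exclusion>
      <!-- exclude the default OkHttp-based client -->
      <groupId>io.fabric8</groupId>
      <artifactId>kubernetes-httpclient-okhttp</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <!-- add the Vert.x-based client instead -->
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-httpclient-vertx</artifactId>
</dependency>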

13.5 - Migrating from v4.3 to v4.4

API changes

ConfigurationService

We have simplified how to deal with the Kubernetes client. Previous versions provided direct access to underlying aspects of the client’s configuration or serialization mechanism. However, the link between these aspects wasn’t as explicit as it should have been. Moreover, the Fabric8 client framework has also revised its serialization architecture in version 6.7 (see this fabric8 pull request for a discussion of that change), moving from statically configured serialization to a per-client configuration (though it’s still possible to share the serialization mechanism between client instances). As a consequence, we made the following changes to the ConfigurationService API:

  • Replaced the getClientConfiguration and getObjectMapper methods with a new getKubernetesClient method: instead of providing the configuration and mapper, you now provide a client instance configured according to your needs, and the SDK will extract the needed information from it

If you had previously configured a custom configuration or ObjectMapper, it is now recommended that you do so when creating your client instance, as follows, usually using ConfigurationServiceOverrider.withKubernetesClient:


import com.fasterxml.jackson.databind.ObjectMapper;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.utils.KubernetesSerialization;
import io.javaoperatorsdk.operator.Operator;

class Example {

  public static void main(String[] args) {
    Config config = new ConfigBuilder().build(); // your configuration
    ObjectMapper mapper = new ObjectMapper(); // your mapper
    final var operator = new Operator(overrider -> overrider.withKubernetesClient(
        new KubernetesClientBuilder()
            .withConfig(config)
            .withKubernetesSerialization(new KubernetesSerialization(mapper, true))
            .build()));
  }
}

Consequently, it is now recommended to get the client instance from the ConfigurationService.

Operator

It is now recommended to configure your Operator instance by using a ConfigurationServiceOverrider when creating it. This allows you to change the default configuration values as needed. In particular, instead of passing a Kubernetes client instance explicitly to the Operator constructor, it is now recommended to provide that value using ConfigurationServiceOverrider.withKubernetesClient as shown above.

Using Server-Side Apply in Dependent Resources

From this version on, Dependent Resources use Server-Side Apply (SSA) by default to create and update Kubernetes resources. A new default matching algorithm is provided for KubernetesDependentResource, based on the managedFields of SSA. For details, see SSABasedGenericKubernetesResourceMatcher.

Since those features are hard to test exhaustively, feature flags are provided to revert to the legacy behavior if needed; see ConfigurationService.

Note that it is possible to override the related methods/behavior at the class level when extending KubernetesDependentResource.

The SSA-based create/update can be combined with the legacy matcher: simply override the match method and use GenericKubernetesResourceMatcher directly, as the sketch below shows. See also the related sample.
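
A sketch of such an override (hypothetical names; the exact GenericKubernetesResourceMatcher.match overload may differ between versions, so check the related sample for the precise form):

public class DeploymentDependent
    extends CRUDKubernetesDependentResource<Deployment, MyCustomResource> {

  public DeploymentDependent() {
    super(Deployment.class);
  }

  @Override
  public Result<Deployment> match(Deployment actualResource, MyCustomResource primary,
      Context<MyCustomResource> context) {
    // delegate matching to the legacy, desired-state-based matcher
    return GenericKubernetesResourceMatcher.match(this, actualResource, primary, context, true);
  }
}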

Migration from plain Update/Create to SSA Based Patch

Migration to SSA might not be trivial depending on the use cases and the types of managed resources. In general, this is not a solved problem in Kubernetes. The Java Operator SDK team tries to follow the related issues, but in terms of implementation this is not something that the framework explicitly supports. Thus, no code is added that tries to mitigate related issues. Users should thoroughly test the migration, and even consider not migrating in some cases (see the feature flags above).

See some related issues in Kubernetes or here. Please create an issue in JOSDK if you run into any.

13.6 - Migrating from v4.4 to v4.5

Version 4.5 introduces improvements related to event handling for Dependent Resources, more precisely to the caching and event handling features. As a result, Kubernetes resources managed using KubernetesDependentResource or its subclasses will receive an annotation recording the resource’s version whenever JOSDK creates or updates such resources. This can be turned off using a feature flag if it causes issues in your use case.

Using this feature, JOSDK now tracks the versions of cached resources. It also uses, by default, that information to prevent unneeded reconciliations that could occur when, depending on the timing of operations, an outdated resource happens to be in the cache. This relies on the fact that versions (as recorded by the metadata.resourceVersion field) are currently implemented as monotonically increasing integers (even though they should be considered opaque and their interpretation is discouraged). Note that, while this helps prevent unneeded reconciliations, things would eventually reach consistency even without this feature. Also, if this interpretation of resource versions causes issues, it can likewise be turned off with a feature flag.
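
Both flags live on ConfigurationService and can be set via the overrider when creating the Operator. The method names below are assumptions from memory, so double-check them against ConfigurationService in your version:

// Assumed flag names; verify against ConfigurationService in your JOSDK version.
var operator = new Operator(override -> override
    // turn off the annotation recording resource versions on managed resources
    .withPreviousAnnotationForDependentResourcesEventFiltering(false)
    // turn off interpreting resourceVersion values as monotonically increasing integers
    .withParseResourceVersionsForEventFilteringAndCaching(false));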

13.7 - Migrating from v4.7 to v5.0

For migration to v5, see this blog post.