Releases
Version 5.2 Released!
We’re pleased to announce the release of Java Operator SDK v5.2! This minor version brings several powerful new features and improvements that enhance the framework’s capabilities for building Kubernetes operators. This release focuses on flexibility, external resource management, and advanced reconciliation patterns.
Key Features
ResourceIDMapper for External Resources
One of the most significant improvements in 5.2 is the introduction of a unified approach to working with custom ID types
across the framework through ResourceIDMapper
and ResourceIDProvider.
Previously, when working with external resources (non-Kubernetes resources), the framework assumed resource IDs could always be represented as strings. This limitation made it challenging to work with external systems that use complex ID types.
Now, you can define custom ID types for your external resources by implementing the ResourceIDProvider interface:
public class MyExternalResource implements ResourceIDProvider<MyCustomID> {

  @Override
  public MyCustomID getResourceID() {
    return new MyCustomID(this.id);
  }
}
This capability is integrated across multiple components:
- ExternalResourceCachingEventSource
- ExternalBulkDependentResource
- AbstractExternalDependentResource and its subclasses
If you cannot modify the external resource class (e.g., it’s generated or final), you can provide a custom
ResourceIDMapper to the components above.
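As a hedged illustration (assuming ResourceIDMapper is a functional interface that maps a resource instance to its ID; MyGeneratedResource and getExternalId are hypothetical names):
// Sketch: maps a generated, unmodifiable resource class to a custom ID type.
ResourceIDMapper<MyGeneratedResource, MyCustomID> idMapper =
    resource -> new MyCustomID(resource.getExternalId());
// The mapper is then passed to the event source or dependent resource that manages
// the external resources.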
See the migration guide for detailed migration instructions.
Trigger Reconciliation on All Events
Version 5.2 introduces a new execution mode that provides finer control over when reconciliation occurs. By setting
triggerReconcilerOnAllEvent
to true, your reconcile method will be called for every event, including Delete events.
This is particularly useful when:
- Only some primary resources need finalizers (e.g., some resources create external resources, others don’t)
- You maintain custom in-memory caches that need cleanup without using finalizers
- You need fine-grained control over resource lifecycle
When enabled:
- The reconcile method receives the last known state even if the resource is deleted
- Check deletion status using Context.isPrimaryResourceDeleted()
- Retry, rate limiting, and rescheduling work normally
- You manage finalizers explicitly using PrimaryUpdateAndCacheUtils
Example:
@ControllerConfiguration(triggerReconcilerOnAllEvent = true)
public class MyReconciler implements Reconciler<MyResource> {

  @Override
  public UpdateControl<MyResource> reconcile(MyResource resource, Context<MyResource> context) {
    if (context.isPrimaryResourceDeleted()) {
      // Handle deletion
      cleanupCache(resource);
      return UpdateControl.noUpdate();
    }
    // Normal reconciliation
    return UpdateControl.patchStatus(resource);
  }
}
See the detailed documentation and integration test.
Expectation Pattern Support (Experimental)
The framework now provides built-in support for the expectations pattern, a common Kubernetes controller design pattern that ensures secondary resources are in an expected state before proceeding.
The expectation pattern helps avoid race conditions and ensures your controller makes decisions based on the most current
state of your resources. The implementation is available in the
io.javaoperatorsdk.operator.processing.expectation
package.
Example usage:
public class MyReconciler implements Reconciler<MyResource> {

  private final ExpectationManager<MyResource> expectationManager = new ExpectationManager<>();

  @Override
  public UpdateControl<MyResource> reconcile(MyResource primary, Context<MyResource> context) {
    // Exit early if expectation is not yet fulfilled or timed out
    if (expectationManager.ongoingExpectationPresent(primary, context)) {
      return UpdateControl.noUpdate();
    }

    var deployment = context.getSecondaryResource(Deployment.class);
    if (deployment.isEmpty()) {
      createDeployment(primary, context);
      expectationManager.setExpectation(
          primary, Duration.ofSeconds(30), deploymentReadyExpectation(context));
      return UpdateControl.noUpdate();
    }

    // Check if expectation is fulfilled
    var result = expectationManager.checkExpectation("deploymentReady", primary, context);
    if (result.isFulfilled()) {
      return updateStatusReady(primary);
    } else if (result.isTimedOut()) {
      return updateStatusTimeout(primary);
    }
    return UpdateControl.noUpdate();
  }
}
This feature is marked as @Experimental as we gather feedback and may refine the API based on user experience. Future
versions may integrate this pattern directly into Dependent Resources and Workflows.
See the documentation and integration test.
Field Selectors for InformerEventSource
You can now use field selectors when configuring InformerEventSource, allowing you to filter resources at the server
side before they’re cached locally. This reduces memory usage and network traffic by only watching resources that match
your criteria.
Field selectors work similarly to label selectors but filter on resource fields like metadata.name or status.phase:
@Informer(
    fieldSelector = @FieldSelector(
        fields = @Field(key = "status.phase", value = "Running")
    )
)
This is particularly useful when:
- You only care about resources in specific states
- You want to reduce the memory footprint of your operator
- You’re watching cluster-scoped resources and only need a subset
See the integration test for examples.
AggregatedMetrics for Multiple Metrics Providers
The new AggregatedMetrics class implements the composite pattern, allowing you to combine multiple metrics
implementations. This is useful when you need to send metrics to different monitoring systems simultaneously.
// Create individual metrics instances
Metrics micrometerMetrics = MicrometerMetrics.withoutPerResourceMetrics(registry);
Metrics customMetrics = new MyCustomMetrics();
Metrics loggingMetrics = new LoggingMetrics();

// Combine them into a single aggregated instance
Metrics aggregatedMetrics = new AggregatedMetrics(List.of(
    micrometerMetrics,
    customMetrics,
    loggingMetrics));

// Use with your operator
Operator operator = new Operator(client, o -> o.withMetrics(aggregatedMetrics));
This enables hybrid monitoring strategies, such as sending metrics to both Prometheus and a custom logging system.
See the observability documentation for more details.
Additional Improvements
GenericRetry Enhancements
- GenericRetry no longer provides a mutable singleton instance, improving thread safety
- Configurable duration for the initial retry interval
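For illustration, configuring a retry with a custom initial interval might look roughly like this (a sketch; the builder-style setters are assumed from earlier GenericRetry versions and may differ slightly in 5.2):
// Sketch: limited exponential retry with a custom initial interval (milliseconds).
GenericRetry retry = GenericRetry.defaultLimitedExponentialRetry()
    .setInitialInterval(2_000L)
    .setIntervalMultiplier(1.5)
    .setMaxAttempts(5);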
Test Infrastructure Improvements
- Ability to override test infrastructure Kubernetes client separately, providing more flexibility in testing scenarios
Fabric8 Client Update
Updated to Fabric8 Kubernetes Client 7.4.0, bringing the latest features and bug fixes from the client library.
Experimental Annotations
Starting with this release, new features marked as experimental will be annotated with @Experimental. This annotation
indicates that while we intend to support the feature, the API may evolve based on user feedback.
Migration Notes
For most users, upgrading to 5.2 should be straightforward. The main breaking change involves the introduction of
ResourceIDMapper for external resources. If you’re using external dependent resources or bulk dependents with custom
ID types, please refer to the migration guide.
Getting Started
Update your dependency to version 5.2.0:
<dependency>
  <groupId>io.javaoperatorsdk</groupId>
  <artifactId>operator-framework</artifactId>
  <version>5.2.0</version>
</dependency>
All Changes
You can see all changes in the comparison view.
Feedback
As always, we welcome your feedback! Please report issues or suggest improvements on our GitHub repository.
Happy operator building! 🚀
Version 5 Released!
We are excited to announce that Java Operator SDK v5 has been released. This significant effort contains various features and enhancements accumulated since the last major release, along with required changes in our APIs. In this post, we will go through the main changes, help you upgrade to this new version, and provide the rationale behind the changes where necessary.
We will omit descriptions of changes that should only require simple code updates; please do contact us if you encounter issues anyway.
You can see an introduction to some of the most important changes, and the rationale behind them, in our talk from KubeCon.
Various Changes
- From this release, the minimum Java version is 17.
- Various deprecated APIs are removed. Migration should be easy.
All Changes
You can see all changes here.
Changes in low-level APIs
Server Side Apply (SSA)
Server Side Apply is now a first-class citizen in
the framework and
the default approach for patching the status resource. This means that patching a resource or its status through
UpdateControl and adding
the finalizer in the background will both use SSA.
Migration from non-SSA-based patching to an SSA-based one can be problematic. Make sure you test the transition when you migrate from an older version of the framework.
To continue using a non-SSA-based approach, set ConfigurationService.useSSAToPatchPrimaryResource to false.
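For example, when constructing the operator, this could be done via the configuration overrider (a sketch; the overrider method name is an assumption based on the ConfigurationService property above):
// Sketch: disables SSA-based patching of the primary resource globally.
Operator operator = new Operator(o -> o.withUseSSAToPatchPrimaryResource(false));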
See some identified problematic migration cases and how to handle them in StatusPatchSSAMigrationIT.
For more detailed description, see our blog post on SSA.
Event Sources related changes
Multi-cluster support in InformerEventSource
InformerEventSource now supports watching remote clusters. You can simply pass a KubernetesClient instance
initialized to connect to a different cluster from the one where the controller runs when configuring your event source.
See InformerEventSourceConfiguration.withKubernetesClient
Such an informer behaves exactly as a regular one. Owner references won’t work in this situation, though, so you have to
specify a SecondaryToPrimaryMapper (probably based on labels or annotations).
See related integration test here
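A rough sketch of such a configuration (apart from withKubernetesClient, the builder methods and the annotation keys below are illustrative assumptions; see the linked integration test for the canonical version):
// Sketch: watch ConfigMaps in a remote cluster; remoteClient is a KubernetesClient
// initialized against that cluster.
var configuration = InformerEventSourceConfiguration
    .from(ConfigMap.class, MyCustomResource.class)
    .withKubernetesClient(remoteClient)
    // owner references don't work across clusters, so map secondaries to primaries explicitly,
    // e.g. based on annotations set when the resource was created (hypothetical keys):
    .withSecondaryToPrimaryMapper(cm -> Set.of(new ResourceID(
        cm.getMetadata().getAnnotations().get("myoperator.io/owner-name"),
        cm.getMetadata().getAnnotations().get("myoperator.io/owner-namespace"))))
    .build();
var remoteConfigMapEventSource = new InformerEventSource<>(configuration, context);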
SecondaryToPrimaryMapper now checks resource types
The owner-reference-based mappers now check the type (kind and apiVersion) of the resource when resolving the mapping. This is important since a resource may have owner references to a different resource type with the same name.
See implementation details here
InformerEventSource-related changes
There are multiple smaller changes to InformerEventSource and related classes:
- InformerConfiguration is renamed to InformerEventSourceConfiguration
- InformerEventSourceConfiguration doesn't require EventSourceContext to be initialized anymore.
All EventSources are now ResourceEventSources
The EventSource
abstraction is now always aware of the resources and
handles accessing (the cached) resources, filtering, and additional capabilities. Before v5, such capabilities were
present only in a sub-class called ResourceEventSource,
but we decided to merge the two and remove ResourceEventSource, since this has a beneficial impact on the architecture of other parts of the system.
If you still need to create an EventSource that only supports triggering of your reconciler,
see TimerEventSource
for an example of how this can be accomplished.
Naming event sources
EventSources are now named. This reduces the ambiguity that might have existed when trying to refer to an EventSource.
ControllerConfiguration annotation related changes
You no longer have to annotate the reconciler with @ControllerConfiguration annotation.
This annotation is (one) way to override the default properties of a controller.
If the annotation is not present, the default values from the annotation are used.
PR: https://github.com/operator-framework/java-operator-sdk/pull/2203
In addition to that, the informer-related configurations are now extracted into
a separate @Informer
annotation within @ControllerConfiguration.
Hopefully this makes it clearer which part of the configuration affects the informer associated with the primary resource.
Similarly, the same @Informer annotation is used when configuring the informer associated with a managed
KubernetesDependentResource via the
KubernetesDependent
annotation.
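For example (a sketch; the attribute names are assumed from the @Informer annotation and may differ slightly):
// Sketch: informer settings for the primary resource are grouped under @Informer.
@ControllerConfiguration(
    informer = @Informer(namespaces = "my-namespace", labelSelector = "app=my-operator"))
public class MyReconciler implements Reconciler<MyCustomResource> { /* ... */ }

// The same annotation configures the informer of a managed KubernetesDependentResource:
@KubernetesDependent(informer = @Informer(labelSelector = "app=my-operator"))
public class ConfigMapDependent
    extends CRUDKubernetesDependentResource<ConfigMap, MyCustomResource> { /* ... */ }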
EventSourceInitializer and ErrorStatusHandler are removed
Both the EventSourceInitializer and ErrorStatusHandler interfaces are removed, and their methods moved directly
under Reconciler.
If possible, we try to avoid such marker interfaces since it is hard to deduce related usage just by looking at the
source code.
You can now simply override those methods when implementing the Reconciler interface.
Cloning when accessing secondary resources
When accessing the secondary resources using Context.getSecondaryResource(s)(...),
the resources are no longer cloned by default, since
cloning could have an impact on performance. This means that any changes you make are now applied directly to the underlying cached resource. This should be avoided, since the same resource instance may be present in other reconciliation cycles and would then no longer represent the state on the server.
If you want to still clone resources by default,
set ConfigurationService.cloneSecondaryResourcesWhenGettingFromCache
to true.
Removed automated observed generation handling
The automatic observed generation handling feature was removed: it is easy to implement inside the reconciler, but keeping it in the framework made the implementation much more complex, especially since the framework would have to support it for both server-side apply and client-side apply.
You can check a sample implementation of how to do it manually in this integration test.
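A minimal sketch of the manual approach (the resource type and status field names are illustrative):
@Override
public UpdateControl<MyCustomResource> reconcile(
    MyCustomResource resource, Context<MyCustomResource> context) {
  // ... reconcile secondary resources ...
  // record the generation observed by this reconciliation in the status
  resource.getStatus().setObservedGeneration(resource.getMetadata().getGeneration());
  return UpdateControl.patchStatus(resource);
}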
Dependent Resource related changes
ResourceDiscriminator is removed and related changes
The primary reason ResourceDiscriminator was introduced was to cover the case when there is more than one dependent resource of a given type associated with a given primary resource. In this situation, JOSDK
needed a generic mechanism to
identify which resources on the cluster should be associated with which dependent resource implementation.
We improved this association mechanism, thus rendering ResourceDiscriminator obsolete.
As a replacement, the dependent resource will select the target resource based on the desired state.
See the generic implementation in AbstractDependentResource.
Calculating the desired state can be costly and might depend on other resources. For KubernetesDependentResource
it is usually enough to provide the name and namespace (if namespace-scoped) of the target resource, which is what the
KubernetesDependentResource implementation does by default. If you can determine which secondary resource to target via its associated ResourceID without computing the desired state, then we encourage you to override the ResourceID targetSecondaryResourceID() method, as shown in this example
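A sketch of such an override (the exact signature may differ slightly; the naming convention for the secondary resource is illustrative):
// Sketch: selects the target ConfigMap by a name derived from the primary,
// without computing the full desired state.
@Override
protected ResourceID targetSecondaryResourceID(
    MyCustomResource primary, Context<MyCustomResource> context) {
  return new ResourceID(
      primary.getMetadata().getName() + "-config",
      primary.getMetadata().getNamespace());
}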
Read-only bulk dependent resources
Read-only bulk dependent resources are now supported; this was a request from multiple users, but it required changes to the underlying APIs. Please check the documentation for further details.
See also the related integration test.
Multiple Dependents with Activation Condition
Until now, activation conditions had a limitation that only one condition was allowed for a specific resource type.
For example, two ConfigMap dependent resources, both with activation conditions, were not allowed. The underlying issue
was with the informer registration process. When an activation condition is evaluated as “met” in the background,
the informer is registered dynamically for the target resource type. However, we need to avoid registering multiple
informers of the same kind. To prevent this, the dependent resource must specify the name of the informer.
See the complete example here.
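As a hedged illustration (assuming the informer name can be set via the @Informer annotation on @KubernetesDependent; see the linked example for the canonical wiring):
// Sketch: two ConfigMap dependents, both behind activation conditions, referencing
// the same named informer so that it is only registered once.
@KubernetesDependent(informer = @Informer(name = "configMapInformer"))
public class ConfigMapDependentA
    extends CRUDKubernetesDependentResource<ConfigMap, MyCustomResource> { /* ... */ }

@KubernetesDependent(informer = @Informer(name = "configMapInformer"))
public class ConfigMapDependentB
    extends CRUDKubernetesDependentResource<ConfigMap, MyCustomResource> { /* ... */ }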
getSecondaryResource is Activation condition aware
When an activation condition for a resource type is not met, no associated informer might be registered for that
resource type. However, in this situation, calling Context.getSecondaryResource
and its alternatives would previously throw an exception. This was rather confusing; a better user experience is to return an empty value instead of throwing an error. We changed this behavior in v5: attempting to retrieve a secondary resource that is gated by an activation condition now returns an empty value, as if the associated informer existed.
See related issue for details.
Workflow related changes
@Workflow annotation
The managed workflow definition is now a separate @Workflow annotation; it is no longer part of
@ControllerConfiguration.
See sample usage here
Explicit workflow invocation
Before v5, the managed dependents part of a workflow would always be reconciled before the primary Reconciler
reconcile or cleanup methods were called. It is now possible to explicitly ask for a workflow reconciliation in your primary Reconciler, thus allowing you to control when the workflow is reconciled. This means you can perform all kinds of operations - typically validations - before executing the workflow, as shown in the sample below:
@Workflow(explicitInvocation = true,
    dependents = @Dependent(type = ConfigMapDependent.class))
@ControllerConfiguration
public class WorkflowExplicitCleanupReconciler
    implements Reconciler<WorkflowExplicitCleanupCustomResource>,
        Cleaner<WorkflowExplicitCleanupCustomResource> {

  @Override
  public UpdateControl<WorkflowExplicitCleanupCustomResource> reconcile(
      WorkflowExplicitCleanupCustomResource resource,
      Context<WorkflowExplicitCleanupCustomResource> context) {
    context.managedWorkflowAndDependentResourceContext().reconcileManagedWorkflow();
    return UpdateControl.noUpdate();
  }

  @Override
  public DeleteControl cleanup(WorkflowExplicitCleanupCustomResource resource,
      Context<WorkflowExplicitCleanupCustomResource> context) {
    context.managedWorkflowAndDependentResourceContext().cleanupManageWorkflow();
    // this can be checked
    // context.managedWorkflowAndDependentResourceContext().getWorkflowCleanupResult()
    return DeleteControl.defaultDelete();
  }
}
To turn on this mode of execution, set explicitInvocation
flag to true in the managed workflow definition.
See the following integration tests
for invocation
and cleanup.
Explicit exception handling
If an exception happens during a workflow reconciliation, the framework automatically propagates it further.
You can now set handleExceptionsInReconciler
to true for a workflow and check the thrown exceptions explicitly
in the execution results.
@Workflow(handleExceptionsInReconciler = true,
    dependents = @Dependent(type = ConfigMapDependent.class))
@ControllerConfiguration
public class HandleWorkflowExceptionsInReconcilerReconciler
    implements Reconciler<HandleWorkflowExceptionsInReconcilerCustomResource>,
        Cleaner<HandleWorkflowExceptionsInReconcilerCustomResource> {

  private volatile boolean errorsFoundInReconcilerResult = false;
  private volatile boolean errorsFoundInCleanupResult = false;

  @Override
  public UpdateControl<HandleWorkflowExceptionsInReconcilerCustomResource> reconcile(
      HandleWorkflowExceptionsInReconcilerCustomResource resource,
      Context<HandleWorkflowExceptionsInReconcilerCustomResource> context) {

    errorsFoundInReconcilerResult = context.managedWorkflowAndDependentResourceContext()
        .getWorkflowReconcileResult().erroredDependentsExist();

    // check errors here:
    Map<DependentResource, Exception> errors = context.getErroredDependents();

    return UpdateControl.noUpdate();
  }
}
See integration test here.
CRDPresentActivationCondition
Activation conditions are typically used to check if the cluster has specific capabilities (e.g., is cert-manager
available).
Such a check can be done by verifying if a particular custom resource definition (CRD) is present on the cluster. You
can now use the generic CRDPresentActivationCondition
for this purpose; it will check whether the CRD of the target resource type of a dependent resource exists on the cluster.
See usage in integration test here.
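For example, a dependent could be gated on the presence of its CRD like this (a sketch; CertificateDependentResource is a hypothetical dependent for a cert-manager Certificate):
// Sketch: the dependent is only activated when the Certificate CRD exists on the cluster.
@Workflow(dependents = @Dependent(
    type = CertificateDependentResource.class,
    activationCondition = CRDPresentActivationCondition.class))
@ControllerConfiguration
public class MyReconciler implements Reconciler<MyCustomResource> { /* ... */ }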
Fabric8 client updated to 7.0
The Fabric8 client has been updated to version 7.0.0. This is a new major version, which implies that some APIs might have changed. Please take a look at the Fabric8 client 7.0.0 migration guide.
CRD generator changes
Starting with v5.0 (in accordance with changes made to the Fabric8 client in version 7.0.0), the CRD generator will use the maven plugin instead of the annotation processor as was previously the case. In many instances, you can simply configure the plugin by adding the following stanza to your project’s POM build configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>crd-generator-maven-plugin</artifactId>
  <version>${fabric8-client.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
NOTE: If you use the SDK’s JUnit extension for your tests, you might also need to configure the CRD generator plugin to access your test CustomResource implementations as follows:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>crd-generator-maven-plugin</artifactId>
  <version>${fabric8-client.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
      <phase>process-test-classes</phase>
      <configuration>
        <classesToScan>${project.build.testOutputDirectory}</classesToScan>
        <classpath>WITH_ALL_DEPENDENCIES_AND_TESTS</classpath>
      </configuration>
    </execution>
  </executions>
</plugin>
Please refer to the CRD generator documentation for more details.
Experimental
Check if the following reconciliation is imminent
You can now check whether the subsequent reconciliation will happen right after the current one, because the SDK has already received an event that will trigger a new reconciliation.
This information is available from
the Context.
This could be useful, for example, in situations where a heavy task would be repeated in the follow-up reconciliation: in the current reconciliation, you can check this flag and return early to avoid unneeded processing. Note that this is a semi-experimental feature, so please let us know if you find it helpful.
@Override
public UpdateControl<MyCustomResource> reconcile(MyCustomResource resource, Context<MyCustomResource> context) {
  if (context.isNextReconciliationImminent()) {
    // your logic, maybe return early and skip the heavy processing?
    return UpdateControl.noUpdate();
  }
  // ... heavy processing ...
  return UpdateControl.patchStatus(resource);
}
See related integration test.
Version 5 Released! (beta1)
See release notes here.