Releases
Version 5.3 Released!
We’re pleased to announce the release of Java Operator SDK v5.3.0! This minor version brings two headline features — read-cache-after-write consistency and a new metrics implementation — along with a configuration adapter system, MDC improvements, and a number of smaller improvements and cleanups.
Key Features
Read-cache-after-write Consistency and Event Filtering
This is the headline feature of 5.3. Informer caches are inherently eventually consistent: after your reconciler updates a resource, there is a window of time before the change is visible in the cache. This can cause subtle bugs, particularly when storing allocated values in the status sub-resource and reading them back in the next reconciliation.
From 5.3.0, the framework provides two guarantees when you use
ResourceOperations
(accessible from Context):
- Read-after-write: Reading from the cache after your update — even within the same reconciliation — returns at least the version of the resource from your update response.
- Event filtering: Events produced by your own writes no longer trigger a redundant reconciliation.
UpdateControl and ErrorStatusUpdateControl use this automatically. Secondary resources benefit
via context.resourceOperations():
public UpdateControl<WebPage> reconcile(WebPage webPage, Context<WebPage> context) {
ConfigMap managedConfigMap = prepareConfigMap(webPage);
// update is cached and will suppress the resulting event
context.resourceOperations().serverSideApply(managedConfigMap);
// fresh resource instantly available from the cache
var upToDateResource = context.getSecondaryResource(ConfigMap.class);
makeStatusChanges(webPage);
// UpdateControl also uses this by default
return UpdateControl.patchStatus(webPage);
}
If your reconciler relied on being re-triggered by its own writes, a new reschedule() method on
UpdateControl lets you explicitly request an immediate re-queue.
Note:
InformerEventSource.list(..) bypasses the additional caches and will not reflect in-flight updates. Use context.getSecondaryResources(..) or InformerEventSource.get(ResourceID) instead.
See the related blog post and reconciler docs for details.
MicrometerMetricsV2
A new micrometer-based Metrics implementation designed with low cardinality in mind. All meters
are scoped to the controller, not to individual resources, avoiding unbounded cardinality growth as
resources come and go.
MeterRegistry registry; // initialize your registry
Metrics metrics = MicrometerMetricsV2.newBuilder(registry).build();
Operator operator = new Operator(client, o -> o.withMetrics(metrics));
Optionally attach a namespace tag to per-reconciliation counters (disabled by default):
Metrics metrics = MicrometerMetricsV2.newBuilder(registry)
.withNamespaceAsTag()
.build();
The full list of meters:
| Meter | Type | Description |
|---|---|---|
| reconciliations.active | gauge | Reconciler executions currently running |
| reconciliations.queue | gauge | Resources queued for reconciliation |
| custom_resources | gauge | Resources tracked by the controller |
| reconciliations.execution.duration | timer | Execution duration with explicit histogram buckets |
| reconciliations.started.total | counter | Reconciliations started |
| reconciliations.success.total | counter | Successful reconciliations |
| reconciliations.failure.total | counter | Failed reconciliations |
| reconciliations.retries.total | counter | Retry attempts |
| events.received | counter | Kubernetes events received |
The execution timer uses explicit bucket boundaries (10ms–30s) to ensure compatibility with
histogram_quantile() in both PrometheusMeterRegistry and OtlpMeterRegistry.
A ready-to-use Grafana dashboard is included at
observability/josdk-operator-metrics-dashboard.json.
The
metrics-processing sample operator
provides a complete end-to-end setup with Prometheus, Grafana, and an OpenTelemetry Collector,
installable via observability/install-observability.sh. This is a good starting point for
verifying metrics in a real cluster.
Deprecated: The original
MicrometerMetrics (V1) is deprecated as of 5.3.0. It attaches resource-specific metadata as tags to every meter, causing unbounded cardinality. Migrate to MicrometerMetricsV2.
See the observability docs for the full reference.
Configuration Adapters
A new ConfigLoader bridges any key-value configuration source to the JOSDK operator and
controller configuration APIs. This lets you drive operator behaviour from environment variables,
system properties, YAML files, or any config library without writing glue code by hand.
The default instance stacks environment variables over system properties out of the box:
Operator operator = new Operator(ConfigLoader.getDefault().applyConfigs());
Built-in providers: EnvVarConfigProvider, PropertiesConfigProvider, YamlConfigProvider,
and AggregatePriorityListConfigProvider for explicit priority ordering.
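The priority-ordering idea behind AggregatePriorityListConfigProvider can be sketched as follows. This is a minimal, self-contained illustration — the SimpleProvider interface and the "first provider wins" semantics are assumptions standing in for the actual JOSDK types:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of priority-stacked configuration lookup: providers are
// consulted in order and the first one that yields a value wins.
public class PriorityLookupSketch {

  interface SimpleProvider {
    Optional<String> getValue(String key);
  }

  static Optional<String> lookup(List<SimpleProvider> ordered, String key) {
    return ordered.stream()
        .map(p -> p.getValue(key))
        .flatMap(Optional::stream) // drop empty results
        .findFirst();
  }

  public static void main(String[] args) {
    SimpleProvider env = key -> Optional.ofNullable(Map.of("a", "fromEnv").get(key));
    SimpleProvider props =
        key -> Optional.ofNullable(Map.of("a", "fromProps", "b", "fromProps").get(key));
    // env has higher priority, so it shadows props for key "a"
    System.out.println(lookup(List.of(env, props), "a")); // Optional[fromEnv]
    System.out.println(lookup(List.of(env, props), "b")); // Optional[fromProps]
  }
}
```

This mirrors how the default ConfigLoader stacks environment variables over system properties.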
ConfigProvider is a single-method interface, so adapting any config library (MicroProfile Config,
SmallRye Config, etc.) takes only a few lines:
public class SmallRyeConfigProvider implements ConfigProvider {
private final SmallRyeConfig config;
@Override
public <T> Optional<T> getValue(String key, Class<T> type) {
return config.getOptionalValue(key, type);
}
}
Pass the results when constructing the operator and registering reconcilers:
var configLoader = new ConfigLoader(new SmallRyeConfigProvider(smallRyeConfig));
Operator operator = new Operator(configLoader.applyConfigs());
operator.register(new MyReconciler(), configLoader.applyControllerConfigs(MyReconciler.NAME));
See the configuration docs for the full list of supported keys.
Note: This configuration mechanism is mainly useful when using the SDK standalone. Framework integrations (Spring Boot, Quarkus, …) usually provide their own configuration mechanisms, which should be used instead.
MDC Improvements
MDC in workflow execution: MDC context is now propagated through workflow (dependent resource graph) execution threads, not just the top-level reconciler thread. Logging from dependent resources now carries the same contextual fields as the primary reconciliation.
NO_NAMESPACE for cluster-scoped resources: Instead of omitting the resource.namespace MDC
key for cluster-scoped resources, the framework now emits MDCUtils.NO_NAMESPACE. This makes log
queries for cluster-scoped resources reliable.
De-duplicated Secondary Resources from Context
When multiple event sources manage the same resource type, context.getSecondaryResources(..) now
returns a de-duplicated stream. When the same resource appears from more than one source, only the
copy with the highest resource version is returned.
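The selection rule can be sketched as below. This is an illustrative model, not the JOSDK implementation: a plain record stands in for the Kubernetes resource, and resourceVersions (which are opaque strings in Kubernetes) are compared as longs for simplicity:

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the de-duplication rule: when the same resource is reported by
// several event sources, keep only the copy with the highest resourceVersion.
public class DedupSketch {

  record Res(String name, long resourceVersion) {}

  static Collection<Res> dedup(List<Res> fromAllSources) {
    return fromAllSources.stream()
        .collect(Collectors.toMap(
            Res::name,                                        // identity key (a ResourceID in JOSDK)
            r -> r,
            (a, b) -> a.resourceVersion() >= b.resourceVersion() ? a : b))
        .values();
  }

  public static void main(String[] args) {
    var result = dedup(List.of(new Res("cm", 3), new Res("cm", 5), new Res("other", 1)));
    System.out.println(result.size()); // 2 distinct resources, "cm" kept at version 5
  }
}
```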
Record Desired State in Context
Dependent resources now record their desired state in the Context during reconciliation. This allows reconcilers and
downstream dependents in a workflow to inspect what a dependent resource computed as its desired state and guarantees
that the desired state is computed only once per reconciliation.
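The "compute once, read many" behavior can be sketched with a per-reconciliation map: the first caller computes and records the desired state, later callers (such as downstream dependents in a workflow) read the recorded copy. The class and method names below are illustrative assumptions, not the actual Context API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical sketch of recording a computed desired state so it is only
// computed once per reconciliation.
public class DesiredStateCacheSketch {

  private final Map<String, Object> perReconciliationContext = new ConcurrentHashMap<>();

  @SuppressWarnings("unchecked")
  <T> T desiredState(String key, Supplier<T> compute) {
    // compute on first access, then serve the recorded value
    return (T) perReconciliationContext.computeIfAbsent(key, k -> compute.get());
  }

  public static void main(String[] args) {
    var cache = new DesiredStateCacheSketch();
    var computations = new AtomicInteger();
    Supplier<String> expensive = () -> {
      computations.incrementAndGet();
      return "desired-config";
    };
    cache.desiredState("configMapDependent", expensive);
    cache.desiredState("configMapDependent", expensive); // served from the record
    System.out.println(computations.get()); // 1
  }
}
```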
Informer Health Checks
Informer health checks no longer rely on isWatching. For readiness and startup probes, you should
primarily use hasSynced. Once an informer has started, isWatching is not suitable for liveness
checks.
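A readiness check built on hasSynced, per the guidance above, can be sketched as follows. The InformerLike interface is a stand-in for the framework's informer health API, not the actual JOSDK type:

```java
import java.util.List;

// Sketch: report ready only once every informer has completed its initial sync.
public class ReadinessSketch {

  interface InformerLike {
    boolean hasSynced();
  }

  static boolean ready(List<InformerLike> informers) {
    return informers.stream().allMatch(InformerLike::hasSynced);
  }

  public static void main(String[] args) {
    InformerLike synced = () -> true;
    InformerLike syncing = () -> false;
    System.out.println(ready(List.of(synced, syncing))); // false
    System.out.println(ready(List.of(synced)));          // true
  }
}
```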
Additional Improvements
- Annotation removal using locking: Finalizer and annotation management no longer uses
createOrReplace; a locking-basedcreateOrUpdateavoids conflicts under concurrent updates. KubernetesDependentResourceusesResourceOperationsdirectly, removing an indirection layer and automatically benefiting from the read-after-write guarantees.- Skip namespace deletion in JUnit extension: The JUnit extension now supports a flag to skip namespace deletion after a test run, useful for debugging CI failures.
ManagedInformerEventSource.getCachedValue()deprecated: Usecontext.getSecondaryResource(..)instead.- Improved event filtering for multiple parallel updates: The filtering algorithm now handles cases where multiple parallel updates are in flight for the same resource.
exitOnStopLeadingis being prepared for removal from the public API.
Migration Notes
JUnit module rename
<!-- before -->
<artifactId>operator-framework-junit-5</artifactId>
<!-- after -->
<artifactId>operator-framework-junit</artifactId>
Metrics interface renames
| v5.2 | v5.3 |
|---|---|
| reconcileCustomResource | reconciliationSubmitted |
| reconciliationExecutionStarted | reconciliationStarted |
| reconciliationExecutionFinished | reconciliationSucceeded |
| failedReconciliation | reconciliationFailed |
| finishedReconciliation | reconciliationFinished |
| cleanupDoneFor | cleanupDone |
| receivedEvent | eventReceived |
reconciliationFinished(..) is extended with RetryInfo. monitorSizeOf(..) is removed.
ResourceAction relocated
ResourceAction in io.javaoperatorsdk.operator.processing.event.source.controller has been
removed. Use io.javaoperatorsdk.operator.processing.event.source.ResourceAction instead.
See the full migration guide for details.
Getting Started
<dependency>
<groupId>io.javaoperatorsdk</groupId>
<artifactId>operator-framework</artifactId>
<version>5.3.0</version>
</dependency>
All Changes
See the comparison view for the full list of changes.
Feedback
Please report issues or suggest improvements on our GitHub repository.
Happy operator building! 🚀
Version 5.2 Released!
We’re pleased to announce the release of Java Operator SDK v5.2! This minor version brings several powerful new features and improvements that enhance the framework’s capabilities for building Kubernetes operators. This release focuses on flexibility, external resource management, and advanced reconciliation patterns.
Key Features
ResourceIDMapper for External Resources
One of the most significant improvements in 5.2 is the introduction of a unified approach to working with custom ID types
across the framework through ResourceIDMapper
and ResourceIDProvider.
Previously, when working with external resources (non-Kubernetes resources), the framework assumed resource IDs could always be represented as strings. This limitation made it challenging to work with external systems that use complex ID types.
Now, you can define custom ID types for your external resources by implementing the ResourceIDProvider interface:
public class MyExternalResource implements ResourceIDProvider<MyCustomID> {
@Override
public MyCustomID getResourceID() {
return new MyCustomID(this.id);
}
}
This capability is integrated across multiple components:
- ExternalResourceCachingEventSource
- ExternalBulkDependentResource
- AbstractExternalDependentResource and its subclasses
If you cannot modify the external resource class (e.g., it’s generated or final), you can provide a custom
ResourceIDMapper to the components above.
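The idea behind such a mapper can be sketched as a function from a resource to its (possibly complex) ID type, supplied externally when the resource class cannot implement ResourceIDProvider itself. All types below are illustrative stand-ins, not the JOSDK interfaces:

```java
import java.util.Objects;

// Hypothetical sketch of the ResourceIDMapper concept for an external resource.
public class IdMapperSketch {

  // e.g. a generated/final external-resource class we cannot modify
  record CloudDatabase(String account, String region, String dbName) {}

  // a composite ID type used by the external system
  record DatabaseId(String account, String region, String dbName) {}

  @FunctionalInterface
  interface IdMapper<R, ID> {
    ID idFor(R resource);
  }

  public static void main(String[] args) {
    IdMapper<CloudDatabase, DatabaseId> mapper =
        db -> new DatabaseId(db.account(), db.region(), db.dbName());
    var id = mapper.idFor(new CloudDatabase("acme", "eu-west-1", "orders"));
    System.out.println(Objects.equals(id, new DatabaseId("acme", "eu-west-1", "orders"))); // true
  }
}
```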
See the migration guide for detailed migration instructions.
Trigger Reconciliation on All Events
Version 5.2 introduces a new execution mode that provides finer control over when reconciliation occurs. By setting
triggerReconcilerOnAllEvents
to true, your reconcile method will be called for every event, including Delete events.
This is particularly useful when:
- Only some primary resources need finalizers (e.g., some resources create external resources, others don’t)
- You maintain custom in-memory caches that need cleanup without using finalizers
- You need fine-grained control over resource lifecycle
When enabled:
- The reconcile method receives the last known state even if the resource is deleted
- Check deletion status using Context.isPrimaryResourceDeleted()
- Retry, rate limiting, and rescheduling work normally
- You manage finalizers explicitly using PrimaryUpdateAndCacheUtils
Example:
@ControllerConfiguration(triggerReconcilerOnAllEvents = true)
public class MyReconciler implements Reconciler<MyResource> {
@Override
public UpdateControl<MyResource> reconcile(MyResource resource, Context<MyResource> context) {
if (context.isPrimaryResourceDeleted()) {
// Handle deletion
cleanupCache(resource);
return UpdateControl.noUpdate();
}
// Normal reconciliation
return UpdateControl.patchStatus(resource);
}
}
See the detailed documentation and integration test.
Expectation Pattern Support (Experimental)
The framework now provides built-in support for the expectations pattern, a common Kubernetes controller design pattern that ensures secondary resources are in an expected state before proceeding.
The expectation pattern helps avoid race conditions and ensures your controller makes decisions based on the most current
state of your resources. The implementation is available in the
io.javaoperatorsdk.operator.processing.expectation
package.
Example usage:
public class MyReconciler implements Reconciler<MyResource> {
private final ExpectationManager<MyResource> expectationManager = new ExpectationManager<>();
@Override
public UpdateControl<MyResource> reconcile(MyResource primary, Context<MyResource> context) {
// Exit early if expectation is not yet fulfilled or timed out
if (expectationManager.ongoingExpectationPresent(primary, context)) {
return UpdateControl.noUpdate();
}
var deployment = context.getSecondaryResource(Deployment.class);
if (deployment.isEmpty()) {
createDeployment(primary, context);
expectationManager.setExpectation(
primary, Duration.ofSeconds(30), deploymentReadyExpectation());
return UpdateControl.noUpdate();
}
// Check if expectation is fulfilled
var result = expectationManager.checkExpectation("deploymentReady", primary, context);
if (result.isFulfilled()) {
return updateStatusReady(primary);
} else if (result.isTimedOut()) {
return updateStatusTimeout(primary);
}
return UpdateControl.noUpdate();
}
}
This feature is marked as @Experimental as we gather feedback and may refine the API based on user experience. Future
versions may integrate this pattern directly into Dependent Resources and Workflows.
See the documentation and integration test.
Field Selectors for InformerEventSource
You can now use field selectors when configuring InformerEventSource, allowing you to filter resources at the server
side before they’re cached locally. This reduces memory usage and network traffic by only watching resources that match
your criteria.
Field selectors work similarly to label selectors but filter on resource fields like metadata.name or status.phase:
@Informer(
fieldSelector = @FieldSelector(
fields = @Field(key = "status.phase", value = "Running")
)
)
This is particularly useful when:
- You only care about resources in specific states
- You want to reduce the memory footprint of your operator
- You’re watching cluster-scoped resources and only need a subset
See the integration test for examples.
AggregatedMetrics for Multiple Metrics Providers
The new AggregatedMetrics class implements the composite pattern, allowing you to combine multiple metrics
implementations. This is useful when you need to send metrics to different monitoring systems simultaneously.
// Create individual metrics instances
Metrics micrometerMetrics = MicrometerMetrics.withoutPerResourceMetrics(registry);
Metrics customMetrics = new MyCustomMetrics();
Metrics loggingMetrics = new LoggingMetrics();
// Combine them into a single aggregated instance
Metrics aggregatedMetrics = new AggregatedMetrics(List.of(
micrometerMetrics,
customMetrics,
loggingMetrics
));
// Use with your operator
Operator operator = new Operator(client, o -> o.withMetrics(aggregatedMetrics));
This enables hybrid monitoring strategies, such as sending metrics to both Prometheus and a custom logging system.
See the observability documentation for more details.
Additional Improvements
GenericRetry Enhancements
- GenericRetry no longer provides a mutable singleton instance, improving thread safety
- Configurable duration for initial retry interval
Test Infrastructure Improvements
- Ability to override test infrastructure Kubernetes client separately, providing more flexibility in testing scenarios
Fabric8 Client Update
Updated to Fabric8 Kubernetes Client 7.4.0, bringing the latest features and bug fixes from the client library.
Experimental Annotations
Starting with this release, new features marked as experimental will be annotated with @Experimental. This annotation
indicates that while we intend to support the feature, the API may evolve based on user feedback.
Migration Notes
For most users, upgrading to 5.2 should be straightforward. The main breaking change involves the introduction of
ResourceIDMapper for external resources. If you’re using external dependent resources or bulk dependents with custom
ID types, please refer to the migration guide.
Getting Started
Update your dependency to version 5.2.0:
<dependency>
<groupId>io.javaoperatorsdk</groupId>
<artifactId>operator-framework</artifactId>
<version>5.2.0</version>
</dependency>
All Changes
You can see all changes in the comparison view.
Feedback
As always, we welcome your feedback! Please report issues or suggest improvements on our GitHub repository.
Happy operator building! 🚀
Version 5 Released!
We are excited to announce that Java Operator SDK v5 has been released. This significant effort contains various features and enhancements accumulated since the last major release, along with required changes in our APIs. In this post, we will go through the main changes, help you upgrade to the new version, and provide the rationale behind the changes where necessary.
We will omit descriptions of changes that should only require simple code updates; please do contact us if you encounter issues anyway.
You can find an introduction to the main changes, and the rationale behind them, in our talk from KubeCon.
Various Changes
- From this release, the minimal Java version is 17.
- Various deprecated APIs are removed. Migration should be easy.
All Changes
You can see all changes here.
Changes in low-level APIs
Server Side Apply (SSA)
Server Side Apply is now a first-class citizen in
the framework and
the default approach for patching the status resource. This means that patching a resource or its status through
UpdateControl and adding
the finalizer in the background will both use SSA.
Migration from non-SSA-based patching to an SSA-based one can be problematic. Make sure you test the transition when you migrate from an older version of the framework.
To continue to use the non-SSA-based approach, set ConfigurationService.useSSAToPatchPrimaryResource to false.
See some identified problematic migration cases and how to handle them in StatusPatchSSAMigrationIT.
For more detailed description, see our blog post on SSA.
Event Sources related changes
Multi-cluster support in InformerEventSource
InformerEventSource now supports watching remote clusters. You can simply pass a KubernetesClient instance
initialized to connect to a different cluster from the one where the controller runs when configuring your event source.
See InformerEventSourceConfiguration.withKubernetesClient
Such an informer behaves exactly as a regular one. Owner references won’t work in this situation, though, so you have to
specify a SecondaryToPrimaryMapper (probably based on labels or annotations).
See related integration test here
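A label-based SecondaryToPrimaryMapper, as suggested above for the multi-cluster case, can be sketched like this. The ResourceID record and the label keys are illustrative stand-ins for the framework types, not the actual JOSDK API:

```java
import java.util.Map;
import java.util.Set;

// Sketch: resolve the primary resource from labels stamped onto the secondary,
// since owner references do not work across clusters.
public class LabelMapperSketch {

  record ResourceID(String name, String namespace) {}

  static Set<ResourceID> toPrimaryIDs(Map<String, String> secondaryLabels) {
    var name = secondaryLabels.get("example.com/owner-name");           // hypothetical label keys
    var namespace = secondaryLabels.get("example.com/owner-namespace");
    return name == null ? Set.of() : Set.of(new ResourceID(name, namespace));
  }

  public static void main(String[] args) {
    var ids = toPrimaryIDs(Map.of(
        "example.com/owner-name", "my-primary",
        "example.com/owner-namespace", "prod"));
    System.out.println(ids); // [ResourceID[name=my-primary, namespace=prod]]
  }
}
```

The controller stamps these labels onto the secondary resources it creates, so events from the remote cluster can be mapped back to the owning primary.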
SecondaryToPrimaryMapper now checks resource types
The owner-reference-based mappers now check the type (kind and apiVersion) of the resource when resolving the mapping. This is important since a resource may have owner references to a different resource type with the same name.
See implementation details here
InformerEventSource-related changes
There are multiple smaller changes to InformerEventSource and related classes:
- InformerConfiguration is renamed to InformerEventSourceConfiguration
- InformerEventSourceConfiguration doesn't require EventSourceContext to be initialized anymore.
All EventSources are now ResourceEventSources
The EventSource
abstraction is now always aware of the resources and
handles accessing (cached) resources, filtering, and additional capabilities. Before v5, such capabilities were present only in a sub-class called ResourceEventSource, but we decided to merge the two and remove ResourceEventSource, since doing so simplifies other parts of the system architecturally.
If you still need to create an EventSource that only supports triggering of your reconciler,
see TimerEventSource
for an example of how this can be accomplished.
Naming event sources
EventSources are now named. This reduces the ambiguity that might have existed when trying to refer to an EventSource.
ControllerConfiguration annotation related changes
You no longer have to annotate the reconciler with the @ControllerConfiguration annotation.
This annotation is one way to override the default properties of a controller.
If the annotation is not present, the default values are used.
PR: https://github.com/operator-framework/java-operator-sdk/pull/2203
In addition to that, the informer-related configurations are now extracted into
a separate @Informer
annotation within @ControllerConfiguration.
Hopefully, this makes it explicit which part of the configuration affects the informer associated with the primary resource.
Similarly, the same @Informer annotation is used when configuring the informer associated with a managed
KubernetesDependentResource via the
KubernetesDependent
annotation.
EventSourceInitializer and ErrorStatusHandler are removed
Both the EventSourceInitializer and ErrorStatusHandler interfaces are removed, and their methods moved directly
under Reconciler.
If possible, we try to avoid such marker interfaces since it is hard to deduce related usage just by looking at the
source code.
You can now simply override those methods when implementing the Reconciler interface.
Cloning when accessing secondary resources
When accessing the secondary resources using Context.getSecondaryResource(s)(...),
the resources are no longer cloned by default, since
cloning could have an impact on performance. This means that any changes you make are now applied directly to the underlying cached resource. Avoid modifying cached resources: the same instance may be used in other reconciliation cycles and would no longer represent the state on the server.
If you want to still clone resources by default,
set ConfigurationService.cloneSecondaryResourcesWhenGettingFromCache
to true.
Removed automated observed generation handling
The automatic observed generation handling feature was removed: it is easy to implement inside the reconciler, but it made the framework implementation much more complex, especially since it would have had to be supported for both server-side apply and client-side apply.
You can check a sample implementation of how to do it manually in this integration test.
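The core of a manual implementation is small, as the following sketch shows. Plain records stand in for the resource's metadata and status; field names mirror the Kubernetes conventions but the types are illustrative:

```java
// Sketch of manual observed-generation handling now that the automatic feature
// is removed: skip work when the spec generation was already observed.
public class ObservedGenerationSketch {

  record Meta(long generation) {}
  record Status(Long observedGeneration) {}

  // The spec changed if its generation differs from the last observed one.
  static boolean specChanged(Meta meta, Status status) {
    return status.observedGeneration() == null
        || meta.generation() != status.observedGeneration();
  }

  public static void main(String[] args) {
    System.out.println(specChanged(new Meta(2), new Status(1L))); // true: reconcile
    System.out.println(specChanged(new Meta(2), new Status(2L))); // false: up to date
    // After a successful reconciliation, the reconciler would patch
    // status.observedGeneration to metadata.generation.
  }
}
```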
Dependent Resource related changes
ResourceDiscriminator is removed and related changes
The primary reason ResourceDiscriminator was introduced was to cover the case where more than one dependent resource of a given type is associated with a given primary resource. In this situation, JOSDK
needed a generic mechanism to
identify which resources on the cluster should be associated with which dependent resource implementation.
We improved this association mechanism, thus rendering ResourceDiscriminator obsolete.
As a replacement, the dependent resource will select the target resource based on the desired state.
See the generic implementation in AbstractDependentResource.
Calculating the desired state can be costly and might depend on other resources. For KubernetesDependentResource
it is usually enough to provide the name and namespace (if namespace-scoped) of the target resource, which is what the
KubernetesDependentResource implementation does by default. If you can determine which secondary to target without
computing the desired state via its associated ResourceID, then we encourage you to override the
ResourceID targetSecondaryResourceID()
method as shown
in this example.
Read-only bulk dependent resources
Read-only bulk dependent resources are now supported; this was a request from multiple users, but it required changes to the underlying APIs. Please check the documentation for further details.
See also the related integration test.
Multiple Dependents with Activation Condition
Until now, activation conditions had a limitation: only one condition was allowed per resource type. For example, two ConfigMap dependent resources, both with activation conditions, were not allowed. The underlying issue was with the informer registration process: when an activation condition is evaluated as "met", the informer for the target resource type is registered dynamically in the background. However, the framework must avoid registering multiple informers of the same kind. To prevent this, the dependent resource must specify the name of the informer.
See the complete example here.
getSecondaryResource is Activation condition aware
When an activation condition for a resource type is not met, no associated informer might be registered for that
resource type. However, in this situation, calling Context.getSecondaryResource
and its alternatives would previously throw an exception. This was rather confusing; a better user experience is to return an empty value instead of throwing an error. We changed this behavior in v5: attempting to retrieve a secondary resource that is gated by an unmet activation condition now returns an empty value, as if the associated informer existed.
See related issue for details.
Workflow related changes
@Workflow annotation
The managed workflow definition is now a separate @Workflow annotation; it is no longer part of
@ControllerConfiguration.
See sample usage here
Explicit workflow invocation
Before v5, the managed dependents part of a workflow would always be reconciled before the primary Reconciler
reconcile or cleanup methods were called. It is now possible to explicitly ask for a workflow reconciliation in your primary Reconciler, thus allowing you to control when the workflow is reconciled. This means you can perform all kinds of operations - typically validations - before executing the workflow, as shown in the sample below:
@Workflow(explicitInvocation = true,
dependents = @Dependent(type = ConfigMapDependent.class))
@ControllerConfiguration
public class WorkflowExplicitCleanupReconciler
implements Reconciler<WorkflowExplicitCleanupCustomResource>,
Cleaner<WorkflowExplicitCleanupCustomResource> {
@Override
public UpdateControl<WorkflowExplicitCleanupCustomResource> reconcile(
WorkflowExplicitCleanupCustomResource resource,
Context<WorkflowExplicitCleanupCustomResource> context) {
context.managedWorkflowAndDependentResourceContext().reconcileManagedWorkflow();
return UpdateControl.noUpdate();
}
@Override
public DeleteControl cleanup(WorkflowExplicitCleanupCustomResource resource,
Context<WorkflowExplicitCleanupCustomResource> context) {
context.managedWorkflowAndDependentResourceContext().cleanupManageWorkflow();
// this can be checked
// context.managedWorkflowAndDependentResourceContext().getWorkflowCleanupResult()
return DeleteControl.defaultDelete();
}
}
To turn on this mode of execution, set the explicitInvocation flag to true in the managed workflow definition.
See the following integration tests
for invocation
and cleanup.
Explicit exception handling
If an exception happens during a workflow reconciliation, the framework rethrows it by default.
You can now set handleExceptionsInReconciler
to true for a workflow and check the thrown exceptions explicitly
in the execution results.
@Workflow(handleExceptionsInReconciler = true,
dependents = @Dependent(type = ConfigMapDependent.class))
@ControllerConfiguration
public class HandleWorkflowExceptionsInReconcilerReconciler
implements Reconciler<HandleWorkflowExceptionsInReconcilerCustomResource>,
Cleaner<HandleWorkflowExceptionsInReconcilerCustomResource> {
private volatile boolean errorsFoundInReconcilerResult = false;
private volatile boolean errorsFoundInCleanupResult = false;
@Override
public UpdateControl<HandleWorkflowExceptionsInReconcilerCustomResource> reconcile(
HandleWorkflowExceptionsInReconcilerCustomResource resource,
Context<HandleWorkflowExceptionsInReconcilerCustomResource> context) {
errorsFoundInReconcilerResult = context.managedWorkflowAndDependentResourceContext()
.getWorkflowReconcileResult().erroredDependentsExist();
// check errors here:
Map<DependentResource, Exception> errors = context.getErroredDependents();
return UpdateControl.noUpdate();
}
}
See integration test here.
CRDPresentActivationCondition
Activation conditions are typically used to check if the cluster has specific capabilities (e.g., is cert-manager
available).
Such a check can be done by verifying if a particular custom resource definition (CRD) is present on the cluster. You
can now use the generic CRDPresentActivationCondition
for this purpose; it will check whether the CRD for the target resource type of a dependent resource exists on the cluster.
See usage in integration test here.
Fabric8 client updated to 7.0
The Fabric8 client has been updated to version 7.0.0. This is a new major version, which implies that some APIs might have changed. Please take a look at the Fabric8 client 7.0.0 migration guide.
CRD generator changes
Starting with v5.0 (in accordance with changes made to the Fabric8 client in version 7.0.0), the CRD generator will use the maven plugin instead of the annotation processor as was previously the case. In many instances, you can simply configure the plugin by adding the following stanza to your project’s POM build configuration:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>crd-generator-maven-plugin</artifactId>
<version>${fabric8-client.version}</version>
<executions>
<execution>
<goals>
<goal>generate</goal>
</goals>
</execution>
</executions>
</plugin>
NOTE: If you use the SDK’s JUnit extension for your tests, you might also need to configure the CRD generator plugin to access your test CustomResource implementations as follows:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>crd-generator-maven-plugin</artifactId>
<version>${fabric8-client.version}</version>
<executions>
<execution>
<goals>
<goal>generate</goal>
</goals>
<phase>process-test-classes</phase>
<configuration>
<classesToScan>${project.build.testOutputDirectory}</classesToScan>
<classpath>WITH_ALL_DEPENDENCIES_AND_TESTS</classpath>
</configuration>
</execution>
</executions>
</plugin>
Please refer to the CRD generator documentation for more details.
Experimental
Check if the following reconciliation is imminent
You can now check whether the next reconciliation will happen right after the current one because the SDK has already received an event that will trigger it. This information is available from the Context.
This could be useful, for example, when a heavy task would be repeated in the follow-up reconciliation: in the current reconciliation, you can check this flag and return early to avoid unneeded processing. Note that this is a semi-experimental feature, so please let us know if you find it helpful.
@Override
public UpdateControl<NextReconciliationImminentCustomResource> reconcile(
    NextReconciliationImminentCustomResource resource,
    Context<NextReconciliationImminentCustomResource> context) {
  if (context.isNextReconciliationImminent()) {
    // a follow-up reconciliation is already queued; skip heavy work and return early
    return UpdateControl.noUpdate();
  }
  // normal reconciliation logic here
  return UpdateControl.noUpdate();
}
See related integration test.
Version 5 Released! (beta1)
See release notes here.