diff --git a/README.md b/README.md index a70dbc4..b94ba7d 100644 --- a/README.md +++ b/README.md @@ -1,1135 +1,116 @@ # Operator Component Framework -A Go framework for building highly maintainable Kubernetes operators using a behavioral component model and version- -gated feature mutations. This framework provides feature-level reconciliation, shared lifecycle handling, and -version-aware resource customization for operator authors who need to manage complex resource lifecycles and evolving -feature sets with consistency. +A Go framework for building highly maintainable Kubernetes operators using a behavioral component model and version-gated feature mutations. -The framework is most useful once an operator has multiple logical features, non-trivial lifecycle handling, or -growing version-specific resource logic. +## Introduction -## Why this package exists +The Operator Component Framework is a structured approach to operator development that simplifies the management of complex resource lifecycles and evolving feature sets. It provides: -Kubernetes operators often grow into bloated controllers with repetitive reconciliation logic, inconsistent status -reporting, and complex version-specific conditionals. This framework provides a structured behavioral model to solve -these problems by: -* **Grouping resources into Components**: Manage logical features (like "Web Interface") as a single unit with - aggregated health and shared lifecycle behavior. -* **Using Feature Mutations**: Define a clean baseline for resources and apply optional behavior or version-specific - compatibility as explicit, composable mutations. +* **Behavioral Component Model**: Group related resources into logical features with aggregated health and shared lifecycle behavior. +* **Structured Reconciliation**: Predictable resource management with built-in support for progression, degradation, and suspension. 
+* **Version-Gated Feature Mutations**: Define a clean baseline for resources and apply optional behavior or version-specific compatibility as explicit, composable mutations. +* **Reusable Kubernetes Resource Primitives**: Powerful abstractions for managing Kubernetes objects with built-in mutation safety. -## In this guide -- [Component Framework](#component-framework): Structured resource management, status aggregation, and lifecycle - orchestration. -- [Feature Mutations](#feature-mutations): Composable, version-gated resource modifications using an additive-first - planner model. +## Why this framework exists -**Suggested reading path:** [Mental Model](#mental-model) → [Minimal Example](#minimal-example) → [Feature Mutations](#feature-mutations) +Kubernetes operators often grow into bloated controllers with: -## Component Framework +- **Large reconciliation functions** coordinating many resources. +- **Inconsistent lifecycle logic** (rollouts, suspension, degradation) implemented repeatedly. +- **Scattered status reporting** that varies across different features. +- **Complex version compatibility** logic mixed with orchestration. -The `component` package provides a structured way to manage logical features in a Kubernetes operator by grouping -related resources into **Components**. +The framework addresses these problems through: -### Logic Fragmentation - -In many Kubernetes operators, the controller ends up owning most of the system's behavior. Resource creation, -lifecycle management, status reporting, and feature-specific logic are all implemented inside the controller's -reconciliation loop. 
- -As the operator grows, this often leads to: - -- large reconciliation functions coordinating many resources -- lifecycle logic (rollouts, suspension, degradation) implemented repeatedly -- status reporting handled differently across features -- resource configuration mixed together with orchestration logic - -Over time, controllers become harder to reason about and changes to lifecycle or status behavior must be replicated -across multiple parts of the codebase. - -### Behavioral Units - -This framework introduces a **Component** as a first-class abstraction for managing a logical feature of an operator. - -Instead of implementing all orchestration inside the controller reconciliation loop, related resources are grouped -into a behavioral unit responsible for managing their lifecycle and reporting their combined health. - -A Component becomes responsible for: - -- Reconciling resources in a consistent, predictable way. -- Aggregating resource health into a single user-facing condition. -- Applying shared lifecycle semantics (progression, degradation, suspension). -- Centralizing error handling and condition updates. - -Controllers then focus on **deciding which components should exist** based on the desired state of the custom resource, -while the framework provides the **shared mechanics for managing those resources**. - -> A **Component** manages a set of resources as one logical feature and reports exactly one **Condition Type** on the owning CRD. +- **Components**: Manage logical features as a single unit. +- **Reusable Primitives**: Encapsulate the desired state and behavior of individual Kubernetes objects. +- **Mutation-based Customization**: Decouple version and feature logic from the baseline resource definition. 
## Mental Model -A good way to think about the framework is as a hierarchy of responsibility: +The framework uses a hierarchy of responsibility to maintain thin controllers and consistent behavior: ```text Controller - └── Component - └── Resource wrappers - └── Kubernetes objects -``` - -Each layer has a different role: - -1. **Controller** - Decides which components should exist based on the owner spec and orchestrates reconciliation at a high level. - -2. **Component** - Represents one logical feature, reconciles its resources, and reports one user-facing condition such as - `WebInterfaceReady`. - -3. **Resource wrapper** - Encapsulates the desired state and lifecycle behavior of a Kubernetes object. This is where you define how a - Deployment, Service, or custom resource behaves inside the framework. - -4. **Kubernetes object** - The raw `client.Object` that is eventually persisted to the cluster. - -This separation maintains thin controllers, consistent status handling, and reusable resource-specific behavior. - -## Core Concepts - -### Component - -A `Component` is the top-level coordinator for a single logical feature. - -It is responsible for: - -* reconciling all of its registered resources -* aggregating their status into one condition -* applying grace-period behavior -* handling suspension and deletion behavior -* reporting failures consistently - -A `Component` can be initialized using a builder: - -```go -comp, err := component.NewComponentBuilder(owner.Spec.Suspended). - WithName("web-interface"). - WithConditionType("WebInterfaceReady"). - WithResource(res, false, false). - WithGracePeriod(5 * time.Minute). - Build() -``` - -### Resource - -A `Resource` wraps a Kubernetes object and defines how the framework should manage it. 
- -The key responsibilities are: - -* applying all fields from the core resource to the cluster object during reconciliation (`Mutate(current client.Object)`) -* providing a stable identity for logging and error reporting (`Identity() string`) -* exposing a fresh copy of the baseline resource object (`Object()`) - -This abstraction separates **how an object should look** from **how the framework reconciles it**. - -### Alive - -`Alive` is an optional interface for resources with observable runtime health. - -Implement it when a resource has meaningful readiness semantics beyond “the object exists.” - -Implementation of the `ConvergingStatus` method: - -```go -func (r *DeploymentResource) ConvergingStatus(op component.ConvergingOperation) (component.ConvergingStatusWithReason, error) { - desiredReplicas := int32(1) - if r.desired.Spec.Replicas != nil { - desiredReplicas = *r.desired.Spec.Replicas - } - - if r.desired.Status.ReadyReplicas == desiredReplicas { - return component.ConvergingStatusWithReason{ - Status: component.ConvergingStatusReady, - Reason: "All replicas are ready", - }, nil - } - - var status component.ConvergingStatus - switch op { - case component.ConvergingOperationCreated: - status = component.ConvergingStatusCreating - case component.ConvergingOperationUpdated: - status = component.ConvergingStatusUpdating - default: - status = component.ConvergingStatusScaling - } - - return component.ConvergingStatusWithReason{ - Status: status, - Reason: fmt.Sprintf("Waiting for replicas: %d/%d ready", r.desired.Status.ReadyReplicas, desiredReplicas), - }, nil -} + └─ Component + └─ Resource Primitive + └─ Kubernetes Object ``` -`Alive` enables two related status models: - -* **Converging status**: how the resource is progressing toward readiness -* **Grace status**: how unhealthy the resource is after a grace period has expired - -This allows the framework to distinguish between: - -* “the resource is still rolling out” -* “the resource has had enough time 
and is now degraded or down” - -### Suspendable +* **Controller**: Decides which components should exist and orchestrates reconciliation at a high level. +* **Component**: Represents one logical feature (e.g., "Web Interface"), reconciles its resources, and reports a single user-facing condition. +* **Resource Primitive**: Encapsulates the desired state and lifecycle behavior of a single Kubernetes object. +* **Kubernetes Object**: The raw `client.Object` (e.g., a `Deployment`) persisted to the cluster. -`Suspendable` is an optional interface for resources that support being paused, scaled down, or otherwise hibernated. +## Quick Start Example -Defining a suspension strategy: +A minimal example showing how to build and reconcile a component with a single resource primitive: ```go -func (r *DeploymentResource) Suspend() error { - r.suspender = func(obj *appsv1.Deployment) error { - defer func() { r.suspender = nil }() - obj.Spec.Replicas = ptr.To(int32(0)) - return nil - } - return nil -} -``` - -The framework treats suspension as a first-class lifecycle, ensuring it is not a controller-specific afterthought. - -### DataExtractable - -`DataExtractable` is an optional interface for resources that need to expose internal data after they have been -synchronized with the cluster. - -Registering a data extractor in a builder: - -```go -func (b *DeploymentBuilder) WithDataExtractor( - extractor func(appsv1.Deployment) error, -) *DeploymentBuilder { - if extractor == nil { - return b - } - b.res.dataExtractors = append(b.res.dataExtractors, extractor) - return b -} -``` - -By using `DataExtractable`, you avoid the need to retain concrete resource types elsewhere just to pull data back out. -The framework handles the extraction automatically during reconciliation. - -Data extraction is: - -* **Observational**: It should be a read-only operation on the resource's underlying object. 
-* **Automatic**: It is triggered during `Reconcile()` after all creation and read-only resources are updated from the cluster. -* **Safe**: Extraction only happens during normal reconciliation and is skipped when the component is suspended. - -### ReconcileContext - -`ReconcileContext` carries the shared dependencies a component needs during reconciliation, such as: - -* Kubernetes `Client` -* `Scheme` -* event recorder -* metrics recorder -* the owning CRD - -It is intentionally small and explicit. The component receives everything it needs to reconcile without reaching into -controller state directly. - -## Status Model - -One of the biggest advantages of the framework is that it reduces the state of many resources into one meaningful -condition on the owner. - -This is done through a prioritized state model. - -### Converging states - -Converging states represent progress toward readiness: - -* `Creating` -* `Updating` -* `Scaling` -* `Ready` - -These are used while a resource is still actively converging. - -Within converging states, the framework uses priority so the most meaningful non-ready state wins. - -### Grace states - -If a component remains non-ready after its configured grace period, the framework switches from reporting **progress** -to reporting **health**. - -Grace states are: - -* `Ready` -* `Degraded` -* `Down` - -This is what prevents components from appearing permanently “Creating” or “Scaling” even when something is actually -broken. - -### Suspension states - -When suspension is requested, the component moves through a separate lifecycle: - -* `PendingSuspension` -* `Suspending` -* `Suspended` - -This makes suspension visible and explicit in status instead of burying it inside custom controller logic. - -### Condition priority - -At the component level, statuses are aggregated using priority so that the most important explanation wins. - -Conceptually, the order is: - -1. **Error** -2. **Down** -3. **Degraded** -4. **Suspension states** -5. 
**Progression states** -6. **Ready** - -This means: - -* real failures dominate everything -* suspension dominates ordinary rollout progress -* progress dominates steady-state ready - -The result is a top-level condition that communicates the real state of the feature. - -## Design Philosophy - -### Reconcile logical features, not raw objects - -Users think in terms of features: *“Is the web UI ready?”* not *“Did the Service and Deployment reconcile separately?”* - -The framework encourages modeling reconciliation around logical units that map to how humans reason about the system. - -### Keep lifecycle behavior consistent - -Without a shared framework, every controller tends to invent its own: - -* readiness rules -* error handling style -* status transitions -* suspension behavior - -That leads to drift and confusion. The component package centralizes these rules so features behave consistently. - -### Separate object logic from orchestration - -The controller should decide **what** to reconcile. -The component should decide **how the feature behaves**. -The resource wrapper should decide **how a specific object is configured**. - -That separation improves readability, reuse, and testability. - -### Treat status as stateful, not just reactive - -Condition progression is intentionally stateful. The framework uses previous condition state and timestamps to avoid -flapping and to distinguish between: - -* a normal in-progress rollout -* a prolonged unhealthy state -* a deliberate suspension flow - -This is one of the main reasons the resulting conditions tend to be more useful than naïve “latest observation only” -reporting. - -## Benefits - -Using the framework provides several concrete advantages: - -* **Consistent reconciliation behavior** across features. -* **Cleaner controllers** that focus on assembling components and calling `Reconcile`. -* **Richer status conditions** with improved reasons and lifecycle semantics. 
-* **Built-in suspension and grace handling**. -* **Reusable resource abstractions**. -* **Enhanced testability** at both resource and component levels. - -In practice, this allows controllers to focus on business logic while the framework handles the repetitive status and -lifecycle machinery. - -## Flexibility and Extension Points - -The framework is intentionally flexible. - -You can adapt it by: - -* wrapping any `client.Object` in your own `Resource` -* implementing `Alive` only where health is meaningful -* implementing `Suspendable` only where suspension is meaningful -* combining managed, read-only, and delete-only resources in one component -* injecting custom policy into generic wrappers through builders or handler functions - -A particularly useful pattern is to build reusable resource wrappers with optional behavior injection. That lets you -keep Kubernetes mechanics in one place while still allowing feature-specific policies. - -## Example: Custom Deployment Resource - -The example implementation shows a custom `Deployment` resource wrapper that implements: - -* `Resource` -* `Alive` -* `Suspendable` -* `DataExtractable` - -It also uses a `DeploymentBuilder` to allow optional injection of custom behavior for status handling and suspension. - -This demonstrates one of the framework’s biggest strengths: the wrapper can stay generic while the feature-specific -rules remain configurable. - -### Resource construction - -The following snippet illustrates how to construct the core resource baseline and then apply version-gated feature mutations: - -```go -func (r *ExampleController) Reconcile(ctx context.Context, owner *ExamplePlatform) error { - // Build the resource with features based on owner version. - deployment := resources.NewCoreDeployment(owner.Name + "-web-ui", owner.Namespace) - res, err := resources.NewDeploymentBuilder(deployment). - WithMutation(features.NewTracingFeature(owner.Spec.Version, owner.Spec.EnableTracing)). - Build() - // ... 
-} -``` - -### Resource Implementation - -The following snippet illustrates how `Mutate` applies the core desired state and then uses a restricted mutator to -apply version-gated feature mutations: - -```go -func (r *DeploymentResource) Mutate(current client.Object) error { - currentDeployment, ok := current.(*appsv1.Deployment) - if !ok { - return fmt.Errorf("expected *appsv1.Deployment, got %T", current) - } - - // 1. Apply core desired state to the current object. - // This ensures that the base fields are always correct before features apply their changes. - r.applyCoreDesiredState(currentDeployment) - - // 2. Apply feature mutations via a restricted mutator interface - // We've applied all desired fields from the core object and can now continue working - // on mutations against the current object exclusively. - mutator := NewDeploymentResourceMutator(currentDeployment) - - for _, m := range r.mutations { - // Apply the **intent** of each mutator - if err := m.ApplyIntent(mutator); err != nil { - return fmt.Errorf("failed to apply mutation intent for %s: %w", m.Name, err) - } - } - - // Apply all gathered mutations using the mutator - if err := mutator.Apply(); err != nil { - return fmt.Errorf("failed to apply planned mutations: %w", err) - } - - // 3. Apply a deferred suspension mutation if one was requested. - if r.suspender != nil { - if err := r.suspender(currentDeployment); err != nil { - return err - } - } - - // 4. Update internal desired state with the mutated current object. - // This ensures that subsequent calls to ConvergingStatus and ExtractData - // use the fully mutated state, including status. - r.desired = currentDeployment - - return nil -} -``` -For the full implementation of this custom resource and its builder, see the [example resources directory](/examples/component-architecture-basics/resources/). - -## Minimal Example - -If you do not need custom behavior injection, usage can stay very small. 
- -A minimal implementation of a component and its reconciliation: - -```go -deployment := resources.NewCoreDeployment("web", owner.Namespace) +// 1. Create a resource primitive (using a builder) +deployment := resources.NewCoreDeployment("web-server", owner.Namespace) res, err := resources.NewDeploymentBuilder(deployment).Build() if err != nil { return err } -component, err := component.NewComponentBuilder(owner.Spec.Suspended). - WithName("WebInterface"). - WithConditionType("WebInterfaceReady"). - WithResource(res, false, false). - WithGracePeriod(5 * time.Minute). - Build() -if err != nil { - return err -} - -err = component.Reconcile(ctx, recCtx) -if err != nil { - return err -} -``` - -This keeps the “happy path” simple while still allowing more advanced customization later. - -## Component assembly - -A typical controller flow has three steps: - -1. construct resource wrappers -2. assemble them into a component -3. call `Reconcile` - -A full assembly example within a controller: - -```go -// 1. Construct resources using builders -deployment := resources.NewCoreDeployment("web", owner.Namespace) -res, err := resources.NewDeploymentBuilder(deployment). - WithMutation(features.NewTracingFeature(version, owner.Spec.TracingEnabled)). - Build() -if err != nil { - return err -} - -// 2. Assemble the component +// 2. Build a component comp, err := component.NewComponentBuilder(owner.Spec.Suspended). WithName("web-interface"). WithConditionType("WebInterfaceReady"). WithResource(res, false, false). + WithGracePeriod(5 * time.Minute). Build() - if err != nil { return err } -// 3. Reconcile +// 3. Reconcile the component recCtx := component.ReconcileContext{ Client: r.Client, Scheme: r.Scheme, Recorder: r.Recorder, - Metrics: r.Metrics, Owner: owner, } err = comp.Reconcile(ctx, recCtx) ``` -This style keeps the controller focused on composition and leaves lifecycle orchestration to the framework. 
- -## Practical Guidance - -### When should I create a new `Resource` implementation? - -Create a new wrapper when: - -* the object needs non-trivial desired-state logic -* readiness or lifecycle semantics matter -* you want to reuse the wrapper across multiple components -* a built-in or existing wrapper would mix too much feature-specific policy into one place - -### When should I implement `Alive`? - -Implement `Alive` for resources where existence is not enough. - -Good candidates: - -* `Deployment` -* `StatefulSet` -* `Job` -* custom resources with meaningful status - -Usually skip it for: - -* `ConfigMap` -* `Secret` -* `ServiceAccount` -* RBAC resources - -Those resources are usually either present or absent; they do not normally have their own convergence lifecycle. - -### When should I implement `DataExtractable`? - -Implement `DataExtractable` when you need to "read back" information from a resource after it has been reconciled. - -Good candidates: - -* `Secret` or `ConfigMap` with auto-generated values -* `Service` with a dynamically assigned `LoadBalancer` IP -* Any resource where the cluster-side state contains data needed by other parts of the operator - -This pattern allows your controller to remain decoupled from the specific resource implementation while still being -able to access the data it needs. - -### When should I implement `Suspendable`? - -Implement `Suspendable` when the resource: - -* represents active workload -* consumes meaningful cost or compute -* can be safely paused or scaled down -* needs explicit lifecycle behavior when suspension is requested - -Examples: - -* scaling a Deployment to zero -* mutating retention or deletion settings before shutdown -* choosing whether a resource should remain present while suspended - -### When should I use read-only resources? - -Use read-only resources when the component depends on something it does not own. 
- -Examples: - -* a frontend that depends on a database managed elsewhere -* a feature that should only become ready after another controller has reconciled a shared dependency - -This allows a component to observe health without taking ownership of the object. - -### When should I split one feature into multiple components? - -Use multiple components when: - -* they need separate status conditions -* they can fail independently -* they can be suspended independently -* they represent different user-facing features - -Keep resources in one component when they share a common lifecycle and should be understood as one feature. - -A good rule of thumb is: - -> if users would expect separate readiness or suspension semantics, use separate components. - -## Testing - -The framework improves testing in two ways. - -### Resource-level testing - -You can test a wrapper in isolation: - -* does `Mutate()` produce the desired object spec? -* does `ConvergingStatus()` report the right rollout state? -* does `Suspend()` apply the expected mutation? - -This can often be done without a running cluster. - -### Component-level testing - -You can test component orchestration separately from individual resource logic: - -* does the component aggregate statuses correctly? -* does it transition through grace states correctly? -* does suspension take precedence? -* does it set error conditions consistently? - -This lets you test lifecycle behavior once and then focus feature-specific tests on your resource wrappers. - -## Summary - -The component framework gives operator authors a consistent way to model: - -* logical features instead of scattered objects -* resource lifecycle and readiness -* suspension and grace periods -* aggregated conditions on the owner - -It is especially useful once an operator grows beyond a handful of simple objects and needs clearer status reporting, -more reusable reconciliation patterns, and more predictable lifecycle behavior. 
- -## Connecting Components and Feature Mutations - -While **Components** define how a logical feature is reconciled and reported to the user, **Feature Mutations** define -how individual resources inside those features remain readable and composable as optional behavior and version -constraints accumulate. - -Together, they allow you to build operators where the high-level orchestration is consistent and the low-level resource -configuration is modular and version-aware. - -## Feature Mutations - -### The problem with historical layering - -A recurring problem in operator development is that resource construction gradually becomes a mix of: -- application-version compatibility logic -- feature-specific behavior -- incremental mutations layered on top of older resource definitions -- one-off conditional branches for special cases - -This usually starts out innocently. A resource begins with a “core” implementation, and support for a new application -version is added with a few conditionals. Later, another feature is introduced; then another version changes behavior -again. - -Over time, the resource stops expressing a clear **desired state** and instead becomes a record of **how it evolved**. - -Historical layering often creates several structural problems: - -1. **Desired state is obscured**: The core question—"What should this resource look like now?"—is buried under years of `if/else` logic. -2. **Logic coupling**: Version compatibility, optional features, and baseline config are tightly mixed, making changes risky. -3. **Fragmented patterns**: Each resource evolves its own style for version checks and slice manipulation (env vars, args). -4. **Implicit interactions**: Multiple features modifying the same fields (like `Env` or `Args`) result in fragile, order-dependent code. 
- -### The shift to baseline + mutations - -To solve this, we invert the model: - -> The core resource expresses the **baseline desired state for the current version**, and optional behavior is applied -> through explicit, composable **feature mutations**. - -This move shifts focus away from "patches on old logic" toward: -1. **Core Resource**: Defines the current baseline desired state. -2. **Feature Gates**: Decide if a mutation applies based on version or custom logic. -3. **Feature Mutations**: Apply small, focused modifications via a controlled planner interface. - -### Feature mutation design principles - -When using feature mutations, follow these principles: - -#### 1. The core resource must define the current baseline - -The core should describe the default desired state for the latest implementation. It should not represent historical -compatibility logic; it should represent the **baseline desired state**. - -#### 2. Feature logic must not be embedded in the baseline - -Feature-specific behavior should be expressed as explicit mutations registered on the resource rather than inlined -into core construction logic. - -#### 3. Feature mutations are additive-first -A mutation should only change the fields necessary for that feature. Use the planner/mutator interface to add -environment variables or arguments rather than replacing entire slices. - -*Note: While additive behavior is preferred, compatibility may sometimes require narrowly scoped removal or override -behavior, which should still be performed through the mutator interface.* - -#### 4. Feature mutations should be idempotent -Applying the same mutation more than once should not produce duplicates or corrupt the object. - -#### 5. Composition must be intentional -Mutations should assume they may run alongside other features. Maintain narrow scope and minimal mutation surface. - -### Feature gates - -Feature mutations are controlled by version-aware feature gates. 
- -A `ResourceFeature` manages the conditions under which a mutation is enabled. It uses a **logical AND** model: a -feature is enabled only when **all** registered semver constraints match the current version and **all** additional -truth conditions are true. - -```go -type ResourceFeature struct { - current string - versionConstraints []VersionConstraint - requiredTruths []bool -} - -func NewResourceFeature(currentVersion string, versionConstraints []VersionConstraint) *ResourceFeature { - return &ResourceFeature{ - current: currentVersion, - versionConstraints: versionConstraints, - } -} - -// When adds a boolean condition that must be true for the feature to be enabled. -// All values passed through When must be true for Enabled() to return true. -func (f *ResourceFeature) When(truth bool) *ResourceFeature { - f.requiredTruths = append(f.requiredTruths, truth) - return f -} - -func (f *ResourceFeature) Enabled() (bool, error) { - for _, truth := range f.requiredTruths { - if !truth { - return false, nil - } - } - - for _, constraint := range f.versionConstraints { - enabled, err := constraint.Enabled(f.current) - if err != nil { - return false, err - } - if !enabled { - return false, nil - } - } - - return true, nil -} -``` - -Defining a feature with specific version constraints: - -```go -feature := NewResourceFeature(currentVersion, []feature.VersionConstraint{ - feats.FromSemver(">=8.0.0"), - feats.FromSemver("<9.0.0"), -}).When(enableSomething) -``` - -This ensures that a feature only applies when all specified conditions (version and custom logic) are satisfied, -keeping version-gating logic small and explicit. - -### Feature mutation model - -A feature mutation represents a small, self-contained modification intent for a resource. - -```go -type Mutation[T any] struct { - Name string - Feature *ResourceFeature - Mutate func(T) error -} -``` - -Feature mutations express **intent**. 
When `ApplyIntent` is called, the mutation records what it wants to change in -a **mutation planner** (the mutator). The actual modification of the Kubernetes resource happens later in a final -`Apply()` phase. - -A resource evaluates and applies enabled mutations during `Mutate()`: - -```go -func (r *DeploymentResource) Mutate(current client.Object) error { - currentDeployment, ok := current.(*appsv1.Deployment) - if !ok { - return fmt.Errorf("expected *appsv1.Deployment, got %T", current) - } - - // 1. Apply core desired state to the current object. - // This ensures that the base fields are always correct before features apply their changes. - r.applyCoreDesiredState(currentDeployment) - - // 2. Apply feature mutations via a restricted mutator interface - // We've applied all desired fields from the core object and can now continue working - // on mutations against the current object exclusively. - mutator := NewDeploymentResourceMutator(currentDeployment) - - for _, m := range r.mutations { - // Apply the **intent** of each mutator - if err := m.ApplyIntent(mutator); err != nil { - return fmt.Errorf("failed to apply mutation intent for %s: %w", m.Name, err) - } - } - - // Apply all gathered mutations using the mutator - if err := mutator.Apply(); err != nil { - return fmt.Errorf("failed to apply planned mutations: %w", err) - } - - // 3. Apply a deferred suspension mutation if one was requested. 
- if r.suspender != nil { - if err := r.suspender(currentDeployment); err != nil { - return err - } - } - - r.deployment = currentDeployment - - return nil -} -``` - -### Example: Deployment with additive feature mutations - -The [example implementation](/examples/component-architecture-basics/) demonstrates a resource that: - -* supports version-gated feature mutations -* applies mutations through a **restricted mutator interface** that acts as a **mutation planner** -* avoids repeated slice scanning using internal maps - -### Resource wrapper - -The `DeploymentResource` holds the underlying Deployment and the list of mutations: - -```go -type DeploymentResource struct { - deployment *appsv1.Deployment - mutations []feature.Mutation[*DeploymentResourceMutator] -} -``` - -### Feature Mutations - -Feature mutations express **intent**, which the mutator gathers as a plan. The final resource state is applied once -via a final `Apply()` call. - -```go -func (r *DeploymentResource) SetMutable() error { - mutator := NewDeploymentResourceMutator(r) - - for _, m := range r.mutations { - if err := m.ApplyIntent(mutator); err != nil { - return fmt.Errorf("failed to apply mutation %s: %w", m.Name, err) - } - } - - return mutator.Apply() -} -``` - -### Why use a mutator interface - -Feature mutations are intentionally **not applied directly to the resource wrapper or the underlying Kubernetes object**. - -Instead, feature mutations operate on a **restricted mutator** (for example `DeploymentResourceMutator`) that acts as -a **mutation planner**. - -At first glance this may seem unnecessary — after all, a feature mutation could simply receive `*appsv1.Deployment`. - -However, the mutator interface exists for several important reasons. - -### 1. It prevents uncontrolled mutation of the resource - -If feature mutations receive the full resource object, they can modify anything.
- -That makes it very easy to accidentally introduce: - -* destructive changes -* non-additive mutations -* logic that bypasses shared conventions - -For example, a feature could accidentally replace an entire environment variable list instead of adding a single entry. - -```go -// dangerous mutation style -container.Env = []corev1.EnvVar{ ... } -``` - -Once patterns like this spread across features, it becomes difficult to reason about how multiple features interact. - -By restricting mutations to a **restricted mutator**, the resource controls **how mutations happen**, not just -**when they happen**. - -### 2. It enforces additive mutation patterns - -The mutator interface acts as a **controlled mutation surface** that ensures mutations follow the design principles: -- **Additive-first**: Use `EnsureContainerEnvVar` to add environment variables or arguments instead of replacing slices. -- **Idempotent**: Applying the same mutation twice has no ill effect. -- **Composable**: Multiple features can modify the same resource safely via a shared planner. -- **Predictable**: The resource wrapper controls exactly how and when fields are updated. - -### 3. It keeps mutation semantics consistent across features - -Without the mutator interface, every feature would implement its own logic for modifying Kubernetes objects. - -This often leads to: - -* repeated slice scanning -* inconsistent mutation styles -* duplicate env vars or args -* unpredictable ordering behavior - -The mutator interface centralizes these patterns inside the resource implementation. - -This provides two major benefits: - -* feature authors do not need to reimplement low-level mutation logic -* mutation behavior stays consistent across the operator - -### 4. It allows the resource to implement efficient mutation helpers - -Another benefit of the mutator abstraction is that the resource can maintain **internal indexes** or caches to make -repeated mutations efficient. 
- -For example, when multiple features add environment variables, repeatedly scanning the container’s env list becomes inefficient. - -A mutator implementation can build lookup maps once and reuse them across all feature mutations. - -This keeps feature logic simple while allowing the resource wrapper to implement efficient mutation internally. - -### 5. It preserves clear ownership boundaries - -Feature mutations should describe **what to add or change**, not **how the Kubernetes object is structured internally**. - -By introducing a mutator interface, the resource wrapper retains control over: - -* how containers are located -* how slices are modified -* how duplicates are avoided -* how internal indexes are maintained - -Feature mutations remain focused on **feature intent**, not Kubernetes implementation details. - -### Recommended implementation style - -When writing feature mutations, developers should treat the mutator interface as the **only supported way to modify the -resource**. The goal is not simply to provide convenience helpers, but to enforce a **disciplined mutation model**. - -Feature mutations should express a clear and narrow intent, modifying only the fields necessary for the feature while -relying on the resource wrapper to handle mutation details. - -A tracing feature implemented as a version-gated mutation: - -```go -func TracingFeature(version string, enabled bool) feature.Mutation[*resources.DeploymentResourceMutator] { - return feature.Mutation[*resources.DeploymentResourceMutator]{ - Name: "tracing", - Feature: feature.NewResourceFeature(version, []feature.VersionConstraint{ - feats.FromSemver(">=8.1.0"), - }).When(enabled), - Mutate: func(m *resources.DeploymentResourceMutator) error { - // Narrow intent: only ensure the environment variable is present - m.EnsureContainerEnvVar("ENABLE_TRACING", "true") - return nil - }, - } -} -``` - -This keeps feature logic clear, safe, and composable without directly manipulating Kubernetes slices. 
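As context for the `feats.FromSemver(">=8.1.0")` gate used above: a version constraint ultimately reduces to comparing the running application version against a bound. The sketch below illustrates that idea with a simplified numeric comparison only. It is not the framework's actual semver handling (a real implementation would use a proper semver library with pre-release rules), and `atLeast`/`parseVersion` are illustrative names:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a version like "8.1.0" into integer parts.
// Pre-release tags and build metadata are ignored in this simplified sketch.
func parseVersion(v string) []int {
	parts := strings.Split(v, ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// atLeast reports whether version v satisfies a ">= min" constraint,
// comparing part by part from the most significant component.
func atLeast(v, min string) bool {
	a, b := parseVersion(v), parseVersion(min)
	for i := 0; i < len(a) && i < len(b); i++ {
		if a[i] != b[i] {
			return a[i] > b[i]
		}
	}
	return len(a) >= len(b)
}

func main() {
	fmt.Println(atLeast("8.2.3", "8.1.0")) // true: tracing gate passes
	fmt.Println(atLeast("8.0.9", "8.1.0")) // false: mutation stays inactive
}
```

In the framework, this decision additionally combines with `.When(enabled)`, so a mutation applies only if both the version gate and the feature flag pass.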
- -### Separation of responsibilities - -The mutator interface intentionally separates two concerns. - -#### Feature logic - -Feature mutations define **what behavior should be enabled**. - -Examples: - -* enable tracing -* enable legacy compatibility -* inject a sidecar -* configure a probe - -Feature mutations should remain small and declarative. - -#### Resource mutation mechanics - -The resource wrapper defines **how the Kubernetes object is modified safely**. - -Examples: - -* ensuring env vars are unique -* avoiding duplicate args -* maintaining lookup indexes -* managing slice mutations safely - -These responsibilities belong to the resource wrapper, not individual features. - -### Why this improves long-term maintainability - -This helps ensure that as an operator grows: - -* resource definitions remain readable -* feature logic remains modular -* mutation behavior remains consistent -* feature interactions remain predictable - -### The restricted mutator - -Feature mutations operate through a restricted mutator interface rather than modifying the resource directly. The -mutator acts as a **mutation planner**, gathering operations and applying them in a final phase. - -This allows the wrapper to provide **safe helper methods** for additive mutations. - -For a complete example of a mutator implementation, see the [deployment_mutator.go](/examples/component-architecture-basics/resources/deployment_mutator.go) file. 
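The linked file is the authoritative example. As a compact, framework-free illustration of the plan-and-apply idea behind such a mutator, the sketch below uses simplified stand-in types; this `Mutator`, `EnvVar`, and their methods are illustrative only, not the framework's API:

```go
package main

import "fmt"

// EnvVar is a simplified stand-in for corev1.EnvVar.
type EnvVar struct{ Name, Value string }

// Mutator plans env-var changes and applies them once in Apply().
type Mutator struct {
	env     []EnvVar          // state being mutated
	planned map[string]EnvVar // recorded intent, keyed by name
	order   []string          // registration order for appends
}

func NewMutator(env []EnvVar) *Mutator {
	return &Mutator{env: env, planned: map[string]EnvVar{}}
}

// EnsureEnvVar records intent; nothing is modified until Apply runs.
func (m *Mutator) EnsureEnvVar(name, value string) {
	if _, seen := m.planned[name]; !seen {
		m.order = append(m.order, name)
	}
	m.planned[name] = EnvVar{Name: name, Value: value}
}

// Apply resolves the plan additively: existing vars are updated in place,
// new vars are appended once, and duplicates cannot be introduced.
func (m *Mutator) Apply() []EnvVar {
	seen := map[string]bool{}
	out := make([]EnvVar, 0, len(m.env))
	for _, ev := range m.env {
		if p, ok := m.planned[ev.Name]; ok {
			ev = p
		}
		seen[ev.Name] = true
		out = append(out, ev)
	}
	for _, name := range m.order {
		if !seen[name] {
			out = append(out, m.planned[name])
		}
	}
	m.env = out
	return out
}

func main() {
	m := NewMutator([]EnvVar{{Name: "LOG_LEVEL", Value: "info"}})
	m.EnsureEnvVar("ENABLE_TRACING", "true") // one feature's intent
	m.EnsureEnvVar("LOG_LEVEL", "debug")     // another feature updates in place
	fmt.Println(m.Apply())                   // [{LOG_LEVEL debug} {ENABLE_TRACING true}]
}
```

Because the plan is keyed by name, applying the same intent twice is naturally idempotent, which is exactly the property the restricted mutator enforces for feature authors.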
- -### Registering feature mutations - -A builder can register mutations like this: - -```go -func (b *DeploymentBuilder) WithMutation( - mutation feature.Mutation[*DeploymentResourceMutator], -) *DeploymentBuilder { - b.res.mutations = append(b.res.mutations, mutation) - return b -} -``` - -#### Example feature mutations - -An example feature mutation for tracing: - -```go -func TracingFeature(version string, enabled bool) feature.Mutation[*resources.DeploymentResourceMutator] { - return feature.Mutation[*resources.DeploymentResourceMutator]{ - Name: "tracing", - Feature: feature.NewResourceFeature( - version, - []feature.VersionConstraint{ - feats.FromSemver(">=8.1.0"), - }, - ).When(enabled), - Mutate: func(mutator *resources.DeploymentResourceMutator) error { - mutator.EnsureContainerEnvVar("ENABLE_TRACING", "true") - return nil - }, - } -} -``` - -### Using the builder - -```go -deploymentObject := &appsv1.Deployment{ - ObjectMeta: metav1.ObjectMeta{ - Name: "my-app", - Namespace: namespace, - }, -} -deployment, err := resources.NewDeploymentBuilder(deploymentObject). - WithMutation(features.TracingFeature(version, owner.Spec.TracingEnabled)). - WithMutation(features.LegacyCompatibilityFeature(version)). - Build() -if err != nil { - return err -} -``` - -### Why this model works - -#### The baseline stays readable - -The resource clearly expresses the intended modern desired state. +## Architecture Overview -#### Feature behavior becomes explicit +The framework is divided into two main subsystems: -Optional behavior is registered as isolated mutations rather than hidden conditionals. +### Component Layer +Responsible for high-level feature orchestration, lifecycle management, and condition aggregation. It ensures that features behave consistently across the operator. -#### Version gating is localized +Detailed documentation: [Component Framework](docs/component.md) -Semver logic no longer spreads across resource construction. 
+### Primitive Layer +Responsible for Kubernetes resource abstractions, the mutation system, and safe field application. It handles the low-level details of how objects are constructed and modified. -#### Composition becomes the default +Detailed documentation: [Resource Primitives](docs/primitives.md) -Feature logic becomes small, composable building blocks. +## Feature Mutations (high-level) -#### The code reflects the present, not the history +Feature mutations allow you to evolve resources safely as features and versions accumulate: -The resource implementation describes the **baseline desired state**, not the path that led there. +1. **Baseline Desired State**: The core resource defines the current, modern desired state. +2. **Optional Version-Gated Mutations**: Explicit modifications are applied only when specific version constraints or conditions are met. +3. **Composable Customization**: Multiple mutations can be layered onto a single resource without conflicting or producing inconsistent results. -### Summary +This approach keeps your code focused on the present state while explicitly managing its history and optional behaviors. -Feature mutations solve a practical problem: resource construction becomes difficult to understand when version -compatibility and optional behavior are layered directly into the core resource. +## Custom Resources -The model is simple: -1. Define the baseline desired state. -2. Register version-aware feature mutations. -3. Apply enabled mutations in a controlled sequence. -4. Keep mutations additive, idempotent, and composable. +Users can implement their own resource wrappers by fulfilling the `Resource` interface. This is appropriate when: +- An object has unusual lifecycle behavior. +- You are managing custom CRDs with specialized status or readiness logic. +- You need specialized mutation semantics not covered by the built-in primitives. 
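The exact `Resource` interface is defined by the framework. As a hypothetical sketch of what a custom wrapper involves, the example below invents a minimal interface shape (`Identity`, `Reconcile`, and `Ready` are placeholders, not the real method set) together with a toy CRD wrapper whose readiness logic is specialized:

```go
package main

import "fmt"

// Resource is a hypothetical shape of the framework's resource interface;
// the real one lives in the component package and will differ.
type Resource interface {
	Identity() string // stable name used for status reporting
	Reconcile() error // bring the cluster object toward the desired state
	Ready() bool      // readiness as understood by this resource kind
}

// CacheWarmupJob wraps an imaginary CRD whose readiness means "completed",
// not "running": the kind of specialized logic that justifies a custom wrapper.
type CacheWarmupJob struct {
	name      string
	completed bool
}

func (j *CacheWarmupJob) Identity() string { return "cachewarmupjob/" + j.name }

// Reconcile is stubbed here; a real wrapper would talk to the API server.
func (j *CacheWarmupJob) Reconcile() error { j.completed = true; return nil }

func (j *CacheWarmupJob) Ready() bool { return j.completed }

func main() {
	var r Resource = &CacheWarmupJob{name: "startup"}
	if err := r.Reconcile(); err != nil {
		panic(err)
	}
	fmt.Println(r.Identity(), "ready:", r.Ready())
}
```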
-This allows an operator to evolve safely as features and versions accumulate, without turning resource construction -into an unmaintainable set of conditionals. +For examples of how to implement custom resource wrappers, see the [examples directory](examples/). -### Conclusion +## Examples -By combining **Components** for high-level orchestration with **Feature Mutations** for modular resource configuration, -this framework provides a complete architecture for modern Kubernetes operators. The result is a codebase that remains -readable, testable, and maintainable as it evolves to support new features and application versions. \ No newline at end of file +The [examples directory](examples/component-architecture-basics/) provides complete implementations demonstrating: +- **Custom resource implementation**: How to wrap standard or custom Kubernetes objects. +- **Component assembly**: How to group resources into functional units. +- **Feature mutation usage**: How to apply version-gated logic to resources. \ No newline at end of file diff --git a/docs/component.md b/docs/component.md new file mode 100644 index 0000000..48a8498 --- /dev/null +++ b/docs/component.md @@ -0,0 +1,125 @@ +# Component System + +The `component` package provides a structured way to manage logical features in a Kubernetes operator by grouping related resources into **Components**. + +A Component acts as a behavioral unit responsible for reconciling multiple resources, managing their shared lifecycle, and reporting their aggregate health through a single condition on the owner CRD. + +## Purpose + +In complex operators, reconciliation logic often becomes fragmented across large controller loops. This leads to: +* **Controller Logic Fragmentation**: Reconcilers coordinating dozens of unrelated resources in a single function. +* **Inconsistent Lifecycle Handling**: Manual implementation of rollouts, suspension, and degradation for every feature. 
+* **Scattered Status Reporting**: Inconsistent ways of determining if a feature is truly "Ready" or "Degraded". + +Components solve these problems by providing: +* **Structured Reconciliation**: A clear, repeatable pattern for resource synchronization. +* **Lifecycle Orchestration**: Built-in support for progression, grace periods, and suspension. +* **Consistent Status Aggregation**: Automated calculation of a single, meaningful status condition from multiple underlying resources. + +## Component Responsibilities + +A Component is responsible for: +* **Resource Reconciliation**: Ensuring all registered resources (Deployments, Services, ConfigMaps, etc.) match their desired state. +* **Health Aggregation**: Monitoring the status of each resource and determining the overall health of the logical feature. +* **Lifecycle Semantics**: Applying high-level behaviors like "waiting for readiness" (grace periods) or "scaling down" (suspension). +* **Status Exposure**: Maintaining exactly one `Condition` on the owner object's status that represents the component's state. + +## Resource Registration + +Resources are registered to a component using the `Builder`. The registration defines how the component interacts with each resource during reconciliation. + +```go +builder := component.NewComponentBuilder(false). + WithName("web-interface"). + WithConditionType("WebInterfaceReady"). + WithResource(deployment, false, false). // Managed (Create/Update) + WithResource(configMap, false, true). // Read-only + WithResource(oldService, true, false) // Delete-only +``` + +### Resource Flags + +* **Managed (Default)**: The component ensures the resource exists and matches the desired state. Its health contributes to the aggregate status. +* **Read-only**: The component only reads the resource's state (e.g., to extract data or check health) but never modifies it in the cluster. +* **Delete-only**: The component ensures the resource is removed from the cluster. 
+ +These flags dictate the reconciliation phase: managed resources are updated, read-only resources are only fetched, and delete-only resources are removed. + +## Reconciliation Lifecycle + +The `Reconcile` method follows a conceptual four-phase process: + +1. **Resource Synchronization**: All registered resources are processed. Managed resources are created or updated, delete-only resources are removed, and read-only resources are fetched. +2. **Lifecycle Evaluation**: The component determines the current lifecycle mode (Normal or Suspended) and evaluates the progress of resources (e.g., checking if a Deployment is still rolling out). +3. **Status Aggregation**: The individual states of all resources are collected and compared. +4. **Condition Update**: A single aggregate `Condition` is calculated and applied to the owner CRD's status. + +## Status Model + +The framework categorizes component states into three functional groups: + +### Converging States +These states occur during normal operation as the component moves toward a steady state. +* **Creating**: Resources are being provisioned for the first time. +* **Updating**: Existing resources are being modified. +* **Scaling**: Resources (like Deployments) are changing their replica counts. +* **Ready**: All resources are healthy and match the desired state. + +### Grace States +These states are triggered when a component fails to reach "Ready" within its configured grace period. +* **Ready**: All resources are healthy. +* **Degraded**: The component is functional but some non-critical resources are unhealthy or it's taking longer than expected to converge. +* **Down**: Critical resources are failing or the component is completely non-functional. + +### Suspension States +These states manage the intentional deactivation of a component. +* **PendingSuspension**: The suspension request is acknowledged, but work hasn't started. +* **Suspending**: Resources are actively being scaled down or cleaned up. 
+* **Suspended**: All resources have reached their suspended state (e.g., scaled to 0). + +## Grace Period + +A **Grace Period** defines how long a component is allowed to remain in "progressing" states (Creating, Updating, Scaling) before it is considered unhealthy. + +* During the grace period, the component reports its actual converging state (e.g., `Updating`). +* After the grace period expires, if the component is still not `Ready`, the framework transitions the condition to **Degraded** or **Down** based on the resource health. + +This prevents premature "False" readiness reports during normal operations like rolling updates. + +## Suspension Lifecycle + +Suspension allows an operator to intentionally "turn off" a component without deleting its configuration. + +When a component is marked as suspended: +1. It calls `Suspend()` on all `Suspendable` resources. +2. Resources may scale down (e.g., Deployments to 0 replicas) or perform cleanup. +3. The component tracks the `SuspensionStatus` of each resource. +4. Once all resources report `Suspended`, the component condition transitions to `Suspended`. + +## Condition Priority + +When aggregating multiple resources, the framework uses a priority system to ensure the most critical information is reported. Failure states take precedence over progressing states, which take precedence over "Ready". + +Conceptual priority (highest to lowest): +1. **Error / Down / Degraded**: Something is wrong. +2. **Suspension States**: The component is intentionally inactive. +3. **Converging States**: The component is working toward readiness. +4. **Ready**: Everything is healthy. + +## ReconcileContext + +The `ReconcileContext` is passed to the `Reconcile` method and provides all dependencies required for reconciliation: +* **Kubernetes Client**: For interacting with the API server. +* **Scheme**: For resource GVK lookups. +* **Event Recorder**: For emitting Kubernetes Events. 
+* **Metrics**: For recording component-level health metrics. +* **Owner Object**: The CRD that owns the component. + +Dependencies are passed explicitly to ensure the component remains testable and decoupled from global state or specific controller-runtime implementation details. + +## Best Practices + +* **Keep Controllers Thin**: The controller should only be responsible for fetching the owner CRD and invoking component reconciliation. +* **Model Logical Features**: Create one component per user-visible feature (e.g., "API", "UI", "Database"). +* **Group by Lifecycle**: Put resources that must live and die together into the same component. +* **Split for Granularity**: If two features should report separate "Ready" conditions in the CRD status, they should be separate components. diff --git a/docs/primitives.md b/docs/primitives.md new file mode 100644 index 0000000..fc5095f --- /dev/null +++ b/docs/primitives.md @@ -0,0 +1,205 @@ +# Resource Primitives + +The `primitives` system provides a resource-centric abstraction layer for Kubernetes objects. It acts as the bridge +between high-level **Components** and raw Kubernetes resources, handling the complexities of state synchronization, +mutation, and lifecycle management. + +## 1. What primitives are + +Primitives are reusable, type-safe resource wrappers for Kubernetes objects. They encapsulate the logic required to +reconcile a specific kind of resource (like a `Deployment` or `ConfigMap`) within the framework's behavioral model. + +Each primitive encapsulates: + +- **Desired state baseline**: A template or builder for the resource's "ideal" configuration. +- **Lifecycle integration**: Built-in support for readiness detection, grace periods, and suspension. +- **Mutation surfaces**: Controlled APIs for modifying resources based on active features or versions. +- **Field application behavior**: Precise rules for how fields are merged or preserved during reconciliation. 
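To make the last bullet concrete, here is a framework-free sketch of a field application pipeline: a baseline applicator merges desired fields, then "flavor" policies preserve externally managed fields. All type and function names here are illustrative stand-ins, not the framework's API:

```go
package main

import (
	"fmt"
	"strings"
)

// Object is a toy stand-in for a Kubernetes object's metadata.
type Object struct {
	Labels map[string]string
}

// FieldApplicator is the assumed baseline shape: desired merged onto current.
type FieldApplicator func(desired, current *Object)

// Flavor is a post-baseline policy that can preserve externally managed fields.
type Flavor func(original, applied *Object)

// overwriteLabels is a naive baseline: desired labels fully replace current ones.
func overwriteLabels(desired, current *Object) {
	current.Labels = map[string]string{}
	for k, v := range desired.Labels {
		current.Labels[k] = v
	}
}

// preserveLabelPrefix restores labels with a given prefix from the pre-apply
// state, the way a flavor might protect sidecar-injector metadata.
func preserveLabelPrefix(prefix string) Flavor {
	return func(original, applied *Object) {
		for k, v := range original.Labels {
			if strings.HasPrefix(k, prefix) {
				applied.Labels[k] = v
			}
		}
	}
}

// apply runs the pipeline: snapshot, baseline applicator, then flavors in order.
func apply(desired, current *Object, base FieldApplicator, flavors ...Flavor) {
	original := &Object{Labels: map[string]string{}}
	for k, v := range current.Labels {
		original.Labels[k] = v
	}
	base(desired, current)
	for _, f := range flavors {
		f(original, current)
	}
}

func main() {
	current := &Object{Labels: map[string]string{
		"app":                  "old",
		"sidecar.example/mode": "injected", // owned by another controller
	}}
	desired := &Object{Labels: map[string]string{"app": "new"}}

	apply(desired, current, overwriteLabels, preserveLabelPrefix("sidecar.example/"))
	fmt.Println(current.Labels["app"], current.Labels["sidecar.example/mode"]) // new injected
}
```

The ordering mirrors the pipeline described in these docs: mutations would run after this stage, on a fully formed baseline.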
+ +By using primitives, operator authors can avoid writing repetitive "create-or-update" boilerplate and instead focus on +defining how their resources should behave. + +--- + +## 2. Primitive categories + +The framework distinguishes between three primary categories of primitives based on their operational characteristics. + +### Static Primitives +Examples: `ConfigMap`, `Secret`, `ServiceAccount`, RBAC objects (`Role`, `RoleBinding`). + +- **Characteristics**: These resources have a mostly static desired state. They are typically created once or updated + based on configuration changes but do not have complex runtime convergence or scaling behaviors. +- **Lifecycle**: Usually considered "Ready" as soon as they are successfully applied to the API server. + +### Workload Primitives +Examples: `Deployment`, `StatefulSet`, `DaemonSet`. + +- **Characteristics**: These resources represent long-running processes that require runtime convergence (e.g., + pods being scheduled and becoming ready). +- **Behavior**: They support advanced features like suspension (scaling to zero), grace handling for slow rollouts, and + complex feature-based mutations. + +### Batch Primitives + +TBD + +--- + +## 3. Field application model + +Primitives use a structured pipeline to synchronize the desired state with the current state in the cluster. This +process is managed by a **Field Applicator**. + +### The Pipeline Order +When a primitive is reconciled, it follows a strict order of operations: + +1. **Baseline field application**: The `FieldApplicator` merges the "baseline" desired state onto the current object. +2. **Flavor adjustments**: Post-baseline merge policies (Flavors) are applied to preserve specific fields. +3. **Mutation edits**: Feature-specific or version-specific edits are applied (Workload primitives only). + +This ensures that mutations always operate on a predictable, fully-formed baseline. + +--- + +## 4. 
Field application flavors
+
+**Flavors** are reusable merge policies that run after the baseline application but before mutations. Their primary
+purpose is to preserve fields that may be managed by other controllers or external systems (like sidecar injectors
+or autoscalers).
+
+### Examples of Flavors:
+- **Preserving Labels/Annotations**: Ensuring that metadata added by external tools is not wiped out during
+  reconciliation.
+- **Preserving Pod Template Metadata**: Keeping sidecar-related annotations on a Deployment's pod template.
+
+Flavors allow the framework to be a "good citizen" in a cluster where multiple controllers might be touching the same
+resources.
+
+---
+
+## 5. Mutation system
+
+Workload primitives employ a **plan-and-apply pattern** for modifications. Instead of mutating the Kubernetes object
+directly and repeatedly, the framework records "edit intent" through a series of planned mutations.
+
+### Why this pattern exists:
+- **Prevents uncontrolled mutation**: Changes are staged and applied in a single, controlled pass.
+- **Improves composability**: Multiple independent features can contribute edits without knowing about each other.
+- **Predictable Ordering**: Features are applied in the order they are registered. Later features observe the resource state after earlier features have already applied their changes.
+- **Efficiency**: Avoids expensive and error-prone manual slice manipulations (like searching for a container by name
+  multiple times).
+
+### Internal Ordering within a Feature:
+While features apply in registration order, the internal operations within a single feature follow a fixed category-based sequence to ensure consistency:
+1. Deployment metadata edits
+2. DeploymentSpec edits
+3. Pod template metadata edits
+4. Pod spec edits
+5. Regular container presence operations
+6. Regular container edits (using a snapshot taken after presence operations)
+7. Init container presence operations
+8.
Init container edits (using a snapshot taken after presence operations) + +--- + +## 6. Mutation editors + +**Editors** provide a scoped, typed API for making changes to specific parts of a resource. They ensure that mutations +are safe and follow Kubernetes best practices. + +Common editors include: +- `ContainerEditor`: For modifying environment variables, arguments, and resource limits. +- `PodSpecEditor`: For managing volumes, affinity, or service account names. +- `DeploymentSpecEditor`: For controlling replicas, strategy, and selectors. +- `ObjectMetaEditor`: For manipulating labels and annotations. + +Editors act as a protective layer, offering helper methods like `EnsureEnvVar` or `RemoveArg`. + +--- + +## 7. Selectors + +**Selectors** determine which parts of a resource an editor should target. This is particularly important for +multi-container pods. + +For example, a `ContainerSelector` can be used to: +- Target all containers (`AllContainers()`). +- Target a specific container by name (`ContainerNamed("sidecar")`). +- Target containers at specific indices (`ContainerAtIndex(0)`). + +Selectors allow mutations to be precise and reusable across different resource configurations. + +--- + +## 8. Raw mutation escape hatch + +While editors provide safe wrappers, there are times when you need to perform advanced customizations that the +framework doesn't explicitly support. For these cases, every editor provides a `Raw()` method. + +- **Purpose**: Gives direct access to the underlying Kubernetes struct (e.g., `*corev1.Container`). +- **Safety**: The mutation remains scoped to the editor's target (e.g., you can't accidentally delete the entire PodSpec from a ContainerEditor). +- **Flexibility**: Ensures that the framework never blocks you from using new Kubernetes features or edge-case configurations. + +--- + +## 9. 
Default lifecycle behavior
+
+Workload primitives come with "sane defaults" for lifecycle management, integrated directly into the Component status model:
+
+- **Convergence detection**: Automatically determines if a Deployment is "Ready", "Scaling", or "Updating" based on its status fields.
+- **Grace handling**: Monitors how long a resource has been non-ready and reports "Degraded" or "Down" if it exceeds a grace period.
+- **Suspension behavior**: Provides the logic for scaling resources down to zero and reporting the "Suspended" state.
+
+These defaults can be overridden via the primitive's `Builder` if specialized behavior is required.
+
+---
+
+## 10. When to implement a custom resource
+
+While the provided primitives cover the most common Kubernetes objects, you may need to implement a custom resource
+wrapper when:
+
+- You are managing **custom CRDs** that require specific health checks.
+- You have **unusual lifecycle semantics** (e.g., a resource that must be deleted and recreated instead of updated).
+- You need **highly specialized mutation behavior** not covered by standard editors.
+
+Custom resource wrappers can still leverage the framework's core interfaces (`component.Resource`, `component.Alive`,
+`component.Suspendable`). See the `examples/` directory for patterns on implementing custom resource wrappers.
+
+---
+
+## Examples
+
+### Creating a primitive resource
+```go
+// Define a baseline Deployment. The variable is named deploymentObject so it
+// does not shadow the deployment package used below.
+deploymentObject := &appsv1.Deployment{ ... }
+
+// Use the builder to create a primitive
+resource, err := deployment.NewBuilder(deploymentObject).
+	WithFieldApplicationFlavor(deployment.PreserveCurrentLabels).
+ Build() +``` + +### Adding mutation edits +```go +// Mutations are typically defined within Feature objects +mutation := deployment.Mutation{ + Name: "add-proxy-sidecar", + ApplyIntent: func(m *deployment.Mutator) error { + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) { + e.EnsureEnvVar(corev1.EnvVar{Name: "PROXY_ENABLED", Value: "true"}) + }) + return nil + }, +} +``` + +### Selecting containers for mutation +```go +// Targeting multiple specific containers +m.EditContainers(selectors.ContainersNamed("web", "api"), func(e *editors.ContainerEditor) { + e.EnsureArg("--verbose") +}) +``` diff --git a/examples/component-architecture-basics/features/compatibility_cleanup_feature.go b/examples/component-architecture-basics/features/compatibility_cleanup_feature.go index b560b39..5f73d47 100644 --- a/examples/component-architecture-basics/features/compatibility_cleanup_feature.go +++ b/examples/component-architecture-basics/features/compatibility_cleanup_feature.go @@ -4,6 +4,7 @@ package features import ( "github.com/sourcehawk/operator-component-framework/examples/component-architecture-basics/resources" "github.com/sourcehawk/operator-component-framework/pkg/feature" + corev1 "k8s.io/api/core/v1" ) var legacyBehaviorFeature = MustRegister("LegacyBehaviorExample", "< 8.0.0") @@ -19,7 +20,7 @@ func NewLegacyBehaviorFeature(version string) feature.Mutation[*resources.Deploy ), Mutate: func(m *resources.DeploymentResourceMutator) error { // Set a deprecated env var - m.EnsureContainerEnvVar("DEPRECATED_SETTING", "legacy-value") + m.EnsureContainerEnvVar(corev1.EnvVar{Name: "DEPRECATED_SETTING", Value: "legacy-value"}) // Remove the new one as it's not supported in legacy versions m.RemoveContainerEnvVar("NEW_MANDATORY_SETTING") return nil diff --git a/examples/component-architecture-basics/features/tracing_feature.go b/examples/component-architecture-basics/features/tracing_feature.go index 6d82f11..acde8a2 100644 --- 
a/examples/component-architecture-basics/features/tracing_feature.go +++ b/examples/component-architecture-basics/features/tracing_feature.go @@ -3,6 +3,7 @@ package features import ( "github.com/sourcehawk/operator-component-framework/examples/component-architecture-basics/resources" "github.com/sourcehawk/operator-component-framework/pkg/feature" + corev1 "k8s.io/api/core/v1" ) var tracingFeature = MustRegister("TracingExample", ">= 8.1.0") @@ -18,7 +19,7 @@ func NewTracingFeature(version string, enabled bool) feature.Mutation[*resources version, []feature.VersionConstraint{tracingFeature}, ).When(enabled), Mutate: func(m *resources.DeploymentResourceMutator) error { - m.EnsureContainerEnvVar("ENABLE_TRACING", "true") + m.EnsureContainerEnvVar(corev1.EnvVar{Name: "ENABLE_TRACING", Value: "true"}) return nil }, } diff --git a/examples/component-architecture-basics/resources/deployment_mutator.go b/examples/component-architecture-basics/resources/deployment_mutator.go index d33874e..5439d42 100644 --- a/examples/component-architecture-basics/resources/deployment_mutator.go +++ b/examples/component-architecture-basics/resources/deployment_mutator.go @@ -24,7 +24,7 @@ type DeploymentResourceMutator struct { type envOp struct { remove bool - value string + ev corev1.EnvVar } type argOp struct { @@ -45,17 +45,17 @@ func NewDeploymentResourceMutator(current *appsv1.Deployment) *DeploymentResourc // EnsureContainerEnvVar records that an environment variable should exist in all containers. // -// If the env var already exists when Apply() runs, its value will be updated. +// If the env var already exists when Apply() runs, it will be replaced. // If it does not exist, it will be appended. // If the same env var was previously marked for removal, this overrides that removal. 
-func (m *DeploymentResourceMutator) EnsureContainerEnvVar(name, value string) { +func (m *DeploymentResourceMutator) EnsureContainerEnvVar(ev corev1.EnvVar) { for i := range m.current.Spec.Template.Spec.Containers { if _, ok := m.envOps[i]; !ok { m.envOps[i] = map[string]envOp{} } - m.envOps[i][name] = envOp{ + m.envOps[i][ev.Name] = envOp{ remove: false, - value: value, + ev: ev, } } } @@ -143,8 +143,7 @@ func (m *DeploymentResourceMutator) applyEnvOps(containerIndex int) { continue } - env.Value = op.value - newEnv = append(newEnv, env) + newEnv = append(newEnv, op.ev) continue } @@ -152,15 +151,12 @@ func (m *DeploymentResourceMutator) applyEnvOps(containerIndex int) { } // Append ensured env vars that were not already present. - for name, op := range ops { - if op.remove || seen[name] { + for _, op := range ops { + if op.remove || seen[op.ev.Name] { continue } - newEnv = append(newEnv, corev1.EnvVar{ - Name: name, - Value: op.value, - }) + newEnv = append(newEnv, op.ev) } container.Env = newEnv diff --git a/internal/generic/builder_static.go b/internal/generic/builder_static.go new file mode 100644 index 0000000..cc4002a --- /dev/null +++ b/internal/generic/builder_static.go @@ -0,0 +1,95 @@ +// Package generic provides generic builders and resources for operator components. +package generic + +import ( + "errors" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// StaticBuilder configures a generic internal static resource for Kubernetes objects +// such as ConfigMaps and Secrets. +// +// It captures the common framework concepts for static desired-state resources while +// leaving concrete identity and default field application behavior to the caller. +type StaticBuilder[T client.Object] struct { + res *StaticResource[T] +} + +// NewStaticBuilder creates a new generic static builder. +// +// The provided object is treated as the desired base state. The identity function +// must return a stable framework identity for the object. 
The default applicator +// defines the baseline field application strategy when no custom applicator is set. +func NewStaticBuilder[T client.Object]( + obj T, + identityFunc func(T) string, + defaultApplicator FieldApplicator[T], +) *StaticBuilder[T] { + return &StaticBuilder[T]{ + res: &StaticResource[T]{ + Object: obj, + IdentityFunc: identityFunc, + DefaultFieldApplicator: defaultApplicator, + }, + } +} + +// WithCustomFieldApplicator overrides the default baseline field applicator. +func (b *StaticBuilder[T]) WithCustomFieldApplicator( + applicator FieldApplicator[T], +) *StaticBuilder[T] { + b.res.CustomFieldApplicator = applicator + return b +} + +// WithFieldApplicationFlavor registers a post-baseline field application flavor. +// +// Flavors are applied after the default or custom field applicator has produced the +// baseline applied object. Flavors run in registration order. +func (b *StaticBuilder[T]) WithFieldApplicationFlavor( + flavor FieldApplicationFlavor[T], +) *StaticBuilder[T] { + if flavor != nil { + b.res.FieldFlavors = append(b.res.FieldFlavors, flavor) + } + + return b +} + +// WithDataExtractor registers a typed data extractor to run after successful +// reconciliation. +func (b *StaticBuilder[T]) WithDataExtractor( + extractor func(T) error, +) *StaticBuilder[T] { + if extractor != nil { + b.res.DataExtractors = append(b.res.DataExtractors, extractor) + } + + return b +} + +// Build validates the static builder configuration and returns the initialized resource. 
+func (b *StaticBuilder[T]) Build() (*StaticResource[T], error) { + if isNil(b.res.Object) { + return nil, errors.New("object cannot be nil") + } + + if b.res.Object.GetName() == "" { + return nil, errors.New("object name cannot be empty") + } + + if b.res.Object.GetNamespace() == "" { + return nil, errors.New("object namespace cannot be empty") + } + + if b.res.IdentityFunc == nil { + return nil, errors.New("identity function cannot be nil") + } + + if b.res.DefaultFieldApplicator == nil { + return nil, errors.New("default field applicator cannot be nil") + } + + return b.res, nil +} diff --git a/internal/generic/builder_static_test.go b/internal/generic/builder_static_test.go new file mode 100644 index 0000000..b6d3b73 --- /dev/null +++ b/internal/generic/builder_static_test.go @@ -0,0 +1,82 @@ +package generic + +import ( + "testing" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func TestStaticBuilder(t *testing.T) { + obj := &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-cm", + Namespace: "default", + }, + } + identityFunc := func(cm *corev1.ConfigMap) string { return cm.Name } + defaultApp := func(_, _ *corev1.ConfigMap) error { return nil } + + t.Run("successful build", func(t *testing.T) { + builder := NewStaticBuilder(obj, identityFunc, defaultApp) + res, err := builder.Build() + if err != nil { + t.Fatalf("Build() error = %v", err) + } + if res.Object != obj { + t.Errorf("expected object %v, got %v", obj, res.Object) + } + }) + + t.Run("with custom applicator", func(t *testing.T) { + customApp := func(_, _ *corev1.ConfigMap) error { return nil } + builder := NewStaticBuilder(obj, identityFunc, defaultApp).WithCustomFieldApplicator(customApp) + res, _ := builder.Build() + if reflectValueOf(res.CustomFieldApplicator).Pointer() != reflectValueOf(customApp).Pointer() { + t.Errorf("custom applicator not set correctly") + } + }) + + t.Run("with field application flavor", func(t *testing.T) { + flavor := 
func(_, _, _ *corev1.ConfigMap) error { return nil } + builder := NewStaticBuilder(obj, identityFunc, defaultApp).WithFieldApplicationFlavor(flavor) + res, _ := builder.Build() + if len(res.FieldFlavors) != 1 { + t.Errorf("expected 1 flavor, got %d", len(res.FieldFlavors)) + } + }) + + t.Run("with data extractor", func(t *testing.T) { + extractor := func(_ *corev1.ConfigMap) error { return nil } + builder := NewStaticBuilder(obj, identityFunc, defaultApp).WithDataExtractor(extractor) + res, _ := builder.Build() + if len(res.DataExtractors) != 1 { + t.Errorf("expected 1 extractor, got %d", len(res.DataExtractors)) + } + }) + + t.Run("validation errors", func(t *testing.T) { + tests := []struct { + name string + obj *corev1.ConfigMap + idFunc func(*corev1.ConfigMap) string + defApp FieldApplicator[*corev1.ConfigMap] + wantErr string + }{ + {"nil object", nil, identityFunc, defaultApp, "object cannot be nil"}, + {"empty name", &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Namespace: "default"}}, identityFunc, defaultApp, "object name cannot be empty"}, + {"empty namespace", &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "test"}}, identityFunc, defaultApp, "object namespace cannot be empty"}, + {"nil identity", obj, nil, defaultApp, "identity function cannot be nil"}, + {"nil applicator", obj, identityFunc, nil, "default field applicator cannot be nil"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + _, err := NewStaticBuilder(tt.obj, tt.idFunc, tt.defApp).Build() + if err == nil || err.Error() != tt.wantErr { + t.Errorf("expected error %q, got %v", tt.wantErr, err) + } + }) + } + }) +} diff --git a/internal/generic/builder_workload.go b/internal/generic/builder_workload.go new file mode 100644 index 0000000..27f593c --- /dev/null +++ b/internal/generic/builder_workload.go @@ -0,0 +1,143 @@ +package generic + +import ( + "errors" + + "github.com/sourcehawk/operator-component-framework/pkg/component" + 
"github.com/sourcehawk/operator-component-framework/pkg/feature" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// WorkloadBuilder configures a generic internal workload resource for Kubernetes primitives +// such as Deployments, StatefulSets, and DaemonSets. +// +// It captures the common framework concepts while leaving kind-specific defaults and wrappers +// to the concrete workload packages. +type WorkloadBuilder[T client.Object, M MutatorApplier] struct { + res *WorkloadResource[T, M] +} + +// NewWorkloadBuilder creates a new generic workload builder. +// +// The provided object is treated as the desired base state. The mutator factory is used to +// construct the typed mutator during Mutate. +func NewWorkloadBuilder[T client.Object, M MutatorApplier]( + obj T, + identityFunc func(T) string, + defaultApplicator FieldApplicator[T], + newMutator func(T) M, +) *WorkloadBuilder[T, M] { + return &WorkloadBuilder[T, M]{ + res: &WorkloadResource[T, M]{ + Object: obj, + IdentityFunc: identityFunc, + DefaultFieldApplicator: defaultApplicator, + NewMutator: newMutator, + }, + } +} + +// WithMutation registers a typed feature mutation for the workload. +func (b *WorkloadBuilder[T, M]) WithMutation( + m feature.Mutation[M], +) *WorkloadBuilder[T, M] { + b.res.Mutations = append(b.res.Mutations, m) + return b +} + +// WithCustomFieldApplicator overrides the default baseline field applicator. +func (b *WorkloadBuilder[T, M]) WithCustomFieldApplicator( + applicator FieldApplicator[T], +) *WorkloadBuilder[T, M] { + b.res.CustomFieldApplicator = applicator + return b +} + +// WithFieldApplicationFlavor registers a post-baseline field application flavor. +func (b *WorkloadBuilder[T, M]) WithFieldApplicationFlavor( + flavor FieldApplicationFlavor[T], +) *WorkloadBuilder[T, M] { + if flavor != nil { + b.res.FieldFlavors = append(b.res.FieldFlavors, flavor) + } + return b +} + +// WithDataExtractor registers a typed data extractor to run after successful reconciliation. 
+func (b *WorkloadBuilder[T, M]) WithDataExtractor( + extractor func(T) error, +) *WorkloadBuilder[T, M] { + if extractor != nil { + b.res.DataExtractors = append(b.res.DataExtractors, extractor) + } + return b +} + +// WithCustomConvergeStatus overrides the workload convergence status handler. +func (b *WorkloadBuilder[T, M]) WithCustomConvergeStatus( + handler func(component.ConvergingOperation, T) (component.ConvergingStatusWithReason, error), +) *WorkloadBuilder[T, M] { + b.res.ConvergingStatusHandler = handler + return b +} + +// WithCustomGraceStatus overrides the workload grace status handler. +func (b *WorkloadBuilder[T, M]) WithCustomGraceStatus( + handler func(T) (component.GraceStatusWithReason, error), +) *WorkloadBuilder[T, M] { + b.res.GraceStatusHandler = handler + return b +} + +// WithCustomSuspendStatus overrides the workload suspension status handler. +func (b *WorkloadBuilder[T, M]) WithCustomSuspendStatus( + handler func(T) (component.SuspensionStatusWithReason, error), +) *WorkloadBuilder[T, M] { + b.res.SuspendStatusHandler = handler + return b +} + +// WithCustomSuspendMutation overrides the workload suspension mutation handler. +func (b *WorkloadBuilder[T, M]) WithCustomSuspendMutation( + handler func(M) error, +) *WorkloadBuilder[T, M] { + b.res.SuspendMutationHandler = handler + return b +} + +// WithCustomSuspendDeletionDecision overrides the workload delete-on-suspend decision handler. +func (b *WorkloadBuilder[T, M]) WithCustomSuspendDeletionDecision( + handler func(T) bool, +) *WorkloadBuilder[T, M] { + b.res.DeleteOnSuspendHandler = handler + return b +} + +// Build validates the workload builder configuration and returns the initialized resource. 
+func (b *WorkloadBuilder[T, M]) Build() (*WorkloadResource[T, M], error) { + if isNil(b.res.Object) { + return nil, errors.New("object cannot be nil") + } + + if b.res.Object.GetName() == "" { + return nil, errors.New("object name cannot be empty") + } + + if b.res.Object.GetNamespace() == "" { + return nil, errors.New("object namespace cannot be empty") + } + + if b.res.IdentityFunc == nil { + return nil, errors.New("identity function cannot be nil") + } + + if b.res.DefaultFieldApplicator == nil { + return nil, errors.New("default field applicator cannot be nil") + } + + if b.res.NewMutator == nil { + return nil, errors.New("mutator factory cannot be nil") + } + + return b.res, nil +} diff --git a/internal/generic/builder_workload_test.go b/internal/generic/builder_workload_test.go new file mode 100644 index 0000000..9e97e73 --- /dev/null +++ b/internal/generic/builder_workload_test.go @@ -0,0 +1,99 @@ +package generic + +import ( + "testing" + + "github.com/sourcehawk/operator-component-framework/pkg/component" + "github.com/sourcehawk/operator-component-framework/pkg/feature" + appsv1 "k8s.io/api/apps/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func TestWorkloadBuilder(t *testing.T) { + obj := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-deploy", + Namespace: "default", + }, + } + identityFunc := func(d *appsv1.Deployment) string { return d.Name } + defaultApp := func(_, _ *appsv1.Deployment) error { return nil } + newMutator := func(d *appsv1.Deployment) *mockMutator { return &mockMutator{deployment: d} } + + t.Run("successful build", func(t *testing.T) { + builder := NewWorkloadBuilder(obj, identityFunc, defaultApp, newMutator) + res, err := builder.Build() + if err != nil { + t.Fatalf("Build() error = %v", err) + } + if res.Object != obj { + t.Errorf("expected object %v, got %v", obj, res.Object) + } + }) + + t.Run("with mutation", func(t *testing.T) { + mut := feature.Mutation[*mockMutator]{ + Name: "test-mutation", + 
Feature: feature.NewResourceFeature("1.0.0", nil), + Mutate: func(_ *mockMutator) error { + return nil + }, + } + builder := NewWorkloadBuilder(obj, identityFunc, defaultApp, newMutator).WithMutation(mut) + res, _ := builder.Build() + if len(res.Mutations) != 1 { + t.Errorf("expected 1 mutation, got %d", len(res.Mutations)) + } + }) + + t.Run("with handlers", func(t *testing.T) { + builder := NewWorkloadBuilder(obj, identityFunc, defaultApp, newMutator). + WithCustomConvergeStatus(func(_ component.ConvergingOperation, _ *appsv1.Deployment) (component.ConvergingStatusWithReason, error) { + return component.ConvergingStatusWithReason{}, nil + }). + WithCustomGraceStatus(func(_ *appsv1.Deployment) (component.GraceStatusWithReason, error) { + return component.GraceStatusWithReason{}, nil + }). + WithCustomSuspendStatus(func(_ *appsv1.Deployment) (component.SuspensionStatusWithReason, error) { + return component.SuspensionStatusWithReason{}, nil + }). + WithCustomSuspendMutation(func(_ *mockMutator) error { + return nil + }). 
+ WithCustomSuspendDeletionDecision(func(_ *appsv1.Deployment) bool { + return true + }) + + res, _ := builder.Build() + if res.ConvergingStatusHandler == nil || res.GraceStatusHandler == nil || res.SuspendStatusHandler == nil || res.SuspendMutationHandler == nil || res.DeleteOnSuspendHandler == nil { + t.Errorf("one or more handlers not set") + } + }) + + t.Run("validation errors", func(t *testing.T) { + tests := []struct { + name string + obj *appsv1.Deployment + idFunc func(*appsv1.Deployment) string + defApp FieldApplicator[*appsv1.Deployment] + newMut func(*appsv1.Deployment) *mockMutator + wantErr string + }{ + {"nil object", nil, identityFunc, defaultApp, newMutator, "object cannot be nil"}, + {"empty name", &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Namespace: "default"}}, identityFunc, defaultApp, newMutator, "object name cannot be empty"}, + {"empty namespace", &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Name: "test"}}, identityFunc, defaultApp, newMutator, "object namespace cannot be empty"}, + {"nil identity", obj, nil, defaultApp, newMutator, "identity function cannot be nil"}, + {"nil applicator", obj, identityFunc, nil, newMutator, "default field applicator cannot be nil"}, + {"nil mutator factory", obj, identityFunc, defaultApp, nil, "mutator factory cannot be nil"}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + _, err := NewWorkloadBuilder(tt.obj, tt.idFunc, tt.defApp, tt.newMut).Build() + if err == nil || err.Error() != tt.wantErr { + t.Errorf("expected error %q, got %v", tt.wantErr, err) + } + }) + } + }) +} diff --git a/internal/generic/common_test.go b/internal/generic/common_test.go new file mode 100644 index 0000000..641c2d7 --- /dev/null +++ b/internal/generic/common_test.go @@ -0,0 +1,22 @@ +package generic + +import ( + "reflect" + + appsv1 "k8s.io/api/apps/v1" +) + +// reflectValueOf is a helper for testing function equality. 
+func reflectValueOf(i any) reflect.Value { + return reflect.ValueOf(i) +} + +type mockMutator struct { + deployment *appsv1.Deployment + applied bool +} + +func (m *mockMutator) Apply() error { + m.applied = true + return nil +} diff --git a/internal/generic/field_applicator.go b/internal/generic/field_applicator.go new file mode 100644 index 0000000..25b6e57 --- /dev/null +++ b/internal/generic/field_applicator.go @@ -0,0 +1,74 @@ +package generic + +import ( + "fmt" + "reflect" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +func isNil(i any) bool { + if i == nil { + return true + } + v := reflect.ValueOf(i) + switch v.Kind() { + case reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr, reflect.UnsafePointer, reflect.Interface, reflect.Slice: + return v.IsNil() + default: + return false + } +} + +// FieldApplicator applies desired state onto the current object to create +// the baseline applied state before field flavors and mutations run. +type FieldApplicator[T client.Object] func(current, desired T) error + +// applyBaselineAndFlavors runs the standard field application pipeline: +// +// 1. snapshot the original current object +// 2. run the custom applicator if present, otherwise the default applicator +// 3. run all field application flavors in registration order +// +// The current object is mutated in place and returned. 
+func applyBaselineAndFlavors[T client.Object]( + current T, + desired T, + defaultApplicator FieldApplicator[T], + customApplicator FieldApplicator[T], + flavors []FieldApplicationFlavor[T], +) (T, error) { + originalCurrent, ok := current.DeepCopyObject().(T) + if !ok { + var zero T + return zero, fmt.Errorf("failed to deep copy current object of type %T", current) + } + + applicator := defaultApplicator + if customApplicator != nil { + applicator = customApplicator + } + + if applicator == nil { + var zero T + return zero, fmt.Errorf("no field applicator configured") + } + + if err := applicator(current, desired); err != nil { + var zero T + return zero, fmt.Errorf("failed to apply baseline fields: %w", err) + } + + for _, flavor := range flavors { + if flavor == nil { + continue + } + + if err := flavor(current, originalCurrent, desired); err != nil { + var zero T + return zero, fmt.Errorf("failed to apply field application flavor: %w", err) + } + } + + return current, nil +} diff --git a/internal/generic/field_applicator_test.go b/internal/generic/field_applicator_test.go new file mode 100644 index 0000000..16fe68e --- /dev/null +++ b/internal/generic/field_applicator_test.go @@ -0,0 +1,140 @@ +package generic + +import ( + "testing" + + corev1 "k8s.io/api/core/v1" +) + +func TestIsNil(t *testing.T) { + var ( + ptr *corev1.ConfigMap + slice []string + m map[string]string + ch chan int + f func() + ) + + tests := []struct { + name string + input any + expected bool + }{ + {"nil any", nil, true}, + {"nil pointer", ptr, true}, + {"nil slice", slice, true}, + {"nil map", m, true}, + {"nil chan", ch, true}, + {"nil func", f, true}, + {"non-nil pointer", &corev1.ConfigMap{}, false}, + {"non-nil slice", []string{}, false}, + {"non-nil map", map[string]string{}, false}, + {"non-nil chan", make(chan int), false}, + {"non-nil func", func() {}, false}, + {"int", 1, false}, + {"string", "test", false}, + {"struct", corev1.ConfigMap{}, false}, + } + + for _, tt := range 
tests { + t.Run(tt.name, func(t *testing.T) { + if got := isNil(tt.input); got != tt.expected { + t.Errorf("isNil() = %v, want %v", got, tt.expected) + } + }) + } +} + +func TestApplyBaselineAndFlavors(t *testing.T) { + defaultApp := func(current, desired *corev1.ConfigMap) error { + current.Data = desired.Data + return nil + } + + tests := []struct { + name string + current *corev1.ConfigMap + desired *corev1.ConfigMap + defaultApp FieldApplicator[*corev1.ConfigMap] + customApp FieldApplicator[*corev1.ConfigMap] + flavors []FieldApplicationFlavor[*corev1.ConfigMap] + wantErr bool + validate func(*testing.T, *corev1.ConfigMap) + }{ + { + name: "use default applicator", + current: &corev1.ConfigMap{}, + desired: &corev1.ConfigMap{Data: map[string]string{"foo": "bar"}}, + defaultApp: defaultApp, + validate: func(t *testing.T, cm *corev1.ConfigMap) { + if cm.Data["foo"] != "bar" { + t.Errorf("expected foo=bar, got %v", cm.Data["foo"]) + } + }, + }, + { + name: "use custom applicator", + current: &corev1.ConfigMap{}, + desired: &corev1.ConfigMap{Data: map[string]string{"foo": "bar"}}, + defaultApp: defaultApp, + customApp: func(current, _ *corev1.ConfigMap) error { + current.Data = map[string]string{"custom": "value"} + return nil + }, + validate: func(t *testing.T, cm *corev1.ConfigMap) { + if cm.Data["custom"] != "value" { + t.Errorf("expected custom=value, got %v", cm.Data["custom"]) + } + if _, ok := cm.Data["foo"]; ok { + t.Errorf("did not expect foo to be set") + } + }, + }, + { + name: "run flavors", + current: &corev1.ConfigMap{ + Data: map[string]string{"preserved": "old"}, + }, + desired: &corev1.ConfigMap{Data: map[string]string{"foo": "bar"}}, + defaultApp: defaultApp, + flavors: []FieldApplicationFlavor[*corev1.ConfigMap]{ + func(applied, current, _ *corev1.ConfigMap) error { + if val, ok := current.Data["preserved"]; ok { + if applied.Data == nil { + applied.Data = make(map[string]string) + } + applied.Data["preserved"] = val + } + return nil + }, + }, + 
validate: func(t *testing.T, cm *corev1.ConfigMap) { + if cm.Data["foo"] != "bar" { + t.Errorf("expected foo=bar") + } + if cm.Data["preserved"] != "old" { + t.Errorf("expected preserved=old") + } + }, + }, + { + name: "no applicator", + current: &corev1.ConfigMap{}, + desired: &corev1.ConfigMap{}, + wantErr: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := applyBaselineAndFlavors(tt.current, tt.desired, tt.defaultApp, tt.customApp, tt.flavors) + if (err != nil) != tt.wantErr { + t.Errorf("applyBaselineAndFlavors() error = %v, wantErr %v", err, tt.wantErr) + return + } + if !tt.wantErr && tt.validate != nil { + tt.validate(t, got) + } + }) + } +} diff --git a/internal/generic/flavor.go b/internal/generic/flavor.go new file mode 100644 index 0000000..11b0dd6 --- /dev/null +++ b/internal/generic/flavor.go @@ -0,0 +1,8 @@ +package generic + +import "sigs.k8s.io/controller-runtime/pkg/client" + +// FieldApplicationFlavor defines a function signature for applying "flavors" to a resource. +// A flavor typically preserves certain fields from the current (live) object after the +// baseline field application has occurred. +type FieldApplicationFlavor[T client.Object] func(applied, current, desired T) error diff --git a/internal/generic/resource_static.go b/internal/generic/resource_static.go new file mode 100644 index 0000000..3b8146c --- /dev/null +++ b/internal/generic/resource_static.go @@ -0,0 +1,88 @@ +package generic + +import ( + "fmt" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// StaticResource is a generic internal resource implementation for Kubernetes objects +// that are treated as static desired-state resources, such as ConfigMaps and Secrets. 
+// +// It supports: +// - default or custom baseline field application +// - post-baseline field application flavors +// - data extraction after reconciliation +// +// Unlike workload resources, it does not support feature mutations, convergence status, +// or suspension behavior. +type StaticResource[T client.Object] struct { + Object T + + IdentityFunc func(T) string + + DefaultFieldApplicator FieldApplicator[T] + CustomFieldApplicator FieldApplicator[T] + FieldFlavors []FieldApplicationFlavor[T] + + DataExtractors []func(T) error +} + +// Identity returns the stable framework identity for the resource. +func (r *StaticResource[T]) Identity() string { + return r.IdentityFunc(r.Object) +} + +// GetObject returns a deep copy of the desired resource object. +func (r *StaticResource[T]) GetObject() (client.Object, error) { + obj, ok := r.Object.DeepCopyObject().(client.Object) + if !ok { + return nil, fmt.Errorf("failed to deep copy object of type %T", r.Object) + } + + return obj, nil +} + +// Mutate applies the baseline field applicator and all registered field application +// flavors to the provided current object. +func (r *StaticResource[T]) Mutate(current client.Object) error { + currentTyped, ok := current.(T) + if !ok { + return fmt.Errorf("expected %T, got %T", r.Object, current) + } + + applied, err := applyBaselineAndFlavors( + currentTyped, + r.Object, + r.DefaultFieldApplicator, + r.CustomFieldApplicator, + r.FieldFlavors, + ) + if err != nil { + return err + } + + r.Object = applied + return nil +} + +// ExtractData executes all registered data extractors against a deep copy of the +// reconciled object. 
+func (r *StaticResource[T]) ExtractData() error { + copyObj, ok := r.Object.DeepCopyObject().(T) + if !ok { + return fmt.Errorf("failed to deep copy object of type %T", r.Object) + } + + for _, extractor := range r.DataExtractors { + if extractor == nil { + continue + } + + if err := extractor(copyObj); err != nil { + return err + } + } + + return nil +} diff --git a/internal/generic/resource_static_test.go b/internal/generic/resource_static_test.go new file mode 100644 index 0000000..89f41f6 --- /dev/null +++ b/internal/generic/resource_static_test.go @@ -0,0 +1,93 @@ +package generic + +import ( + "errors" + "testing" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func TestStaticResource(t *testing.T) { + const testVal = "bar" + obj := &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-cm", + Namespace: "default", + }, + Data: map[string]string{"foo": testVal}, + } + identityFunc := func(cm *corev1.ConfigMap) string { return cm.Name } + defaultApp := func(current, desired *corev1.ConfigMap) error { + current.Data = desired.Data + return nil + } + + res := &StaticResource[*corev1.ConfigMap]{ + Object: obj, + IdentityFunc: identityFunc, + DefaultFieldApplicator: defaultApp, + } + + t.Run("Identity", func(t *testing.T) { + if res.Identity() != "test-cm" { + t.Errorf("expected identity test-cm, got %s", res.Identity()) + } + }) + + t.Run("GetObject", func(t *testing.T) { + got, err := res.GetObject() + if err != nil { + t.Fatalf("GetObject() error = %v", err) + } + if got.GetName() != "test-cm" { + t.Errorf("expected name test-cm, got %s", got.GetName()) + } + if got == res.Object { + t.Errorf("GetObject should return a copy, not the same object") + } + }) + + t.Run("Mutate", func(t *testing.T) { + current := &corev1.ConfigMap{} + err := res.Mutate(current) + if err != nil { + t.Fatalf("Mutate() error = %v", err) + } + if current.Data["foo"] != testVal { + t.Errorf("expected foo=%s, got %v", testVal, 
current.Data["foo"]) + } + }) + + t.Run("ExtractData", func(t *testing.T) { + extracted := false + res.DataExtractors = []func(*corev1.ConfigMap) error{ + func(cm *corev1.ConfigMap) error { + extracted = true + if cm.Data["foo"] != testVal { + t.Errorf("expected foo=%s in extractor", testVal) + } + return nil + }, + } + err := res.ExtractData() + if err != nil { + t.Fatalf("ExtractData() error = %v", err) + } + if !extracted { + t.Errorf("extractor was not called") + } + }) + + t.Run("ExtractData error", func(t *testing.T) { + res.DataExtractors = []func(*corev1.ConfigMap) error{ + func(_ *corev1.ConfigMap) error { + return errors.New("extract error") + }, + } + err := res.ExtractData() + if err == nil || err.Error() != "extract error" { + t.Errorf("expected extract error, got %v", err) + } + }) +} diff --git a/internal/generic/resource_workload.go b/internal/generic/resource_workload.go new file mode 100644 index 0000000..541526b --- /dev/null +++ b/internal/generic/resource_workload.go @@ -0,0 +1,193 @@ +package generic + +import ( + "fmt" + + "github.com/sourcehawk/operator-component-framework/pkg/component" + "github.com/sourcehawk/operator-component-framework/pkg/feature" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// MutatorApplier is implemented by workload mutators that can apply their planned changes +// to the underlying Kubernetes object. +type MutatorApplier interface { + Apply() error +} + +// FeatureMutator is implemented by workload mutators that support defining feature boundaries. +type FeatureMutator interface { + MutatorApplier + BeginFeature() +} + +// WorkloadResource is a generic internal resource implementation for long-running Kubernetes +// workload objects such as Deployments, StatefulSets, and DaemonSets. 
+
+// It provides shared behavior for:
+// - baseline field application
+// - field application flavors
+// - feature mutations
+// - suspension mutations
+// - data extraction
+//
+// Concrete workload packages are expected to wrap this type and provide kind-specific
+// identity and status logic.
+type WorkloadResource[T client.Object, M MutatorApplier] struct {
+	Object T
+
+	IdentityFunc func(T) string
+
+	DefaultFieldApplicator FieldApplicator[T]
+	CustomFieldApplicator  FieldApplicator[T]
+	FieldFlavors           []FieldApplicationFlavor[T]
+
+	DataExtractors []func(T) error
+
+	NewMutator func(T) M
+	Mutations  []feature.Mutation[M]
+
+	Suspender func(M) error
+
+	ConvergingStatusHandler func(component.ConvergingOperation, T) (component.ConvergingStatusWithReason, error)
+	GraceStatusHandler      func(T) (component.GraceStatusWithReason, error)
+	SuspendStatusHandler    func(T) (component.SuspensionStatusWithReason, error)
+	SuspendMutationHandler  func(M) error
+	DeleteOnSuspendHandler  func(T) bool
+}
+
+// Identity returns the stable framework identity for the workload.
+func (r *WorkloadResource[T, M]) Identity() string {
+	return r.IdentityFunc(r.Object)
+}
+
+// GetObject returns a deep copy of the desired workload object.
+func (r *WorkloadResource[T, M]) GetObject() (client.Object, error) {
+	obj, ok := r.Object.DeepCopyObject().(client.Object)
+	if !ok {
+		return nil, fmt.Errorf("failed to deep copy object of type %T", r.Object)
+	}
+
+	return obj, nil
+}
+
+// Mutate applies the baseline field applicator, field application flavors, feature mutations,
+// and any active suspension mutation to the provided current object.
+func (r *WorkloadResource[T, M]) Mutate(current client.Object) error { + currentTyped, ok := current.(T) + if !ok { + return fmt.Errorf("expected %T, got %T", r.Object, current) + } + + applied, err := r.ApplyBaselineAndFlavors(currentTyped) + if err != nil { + return err + } + + mutator := r.NewMutator(applied) + fm, isFeatureMutator := any(mutator).(FeatureMutator) + + for _, mutation := range r.Mutations { + if isFeatureMutator { + fm.BeginFeature() + } + + if err := mutation.ApplyIntent(mutator); err != nil { + return fmt.Errorf("failed to apply mutation intent for %s: %w", mutation.Name, err) + } + } + + if err := mutator.Apply(); err != nil { + return fmt.Errorf("failed to apply planned mutations: %w", err) + } + + if r.Suspender != nil { + if isFeatureMutator { + fm.BeginFeature() + } + + if err := r.Suspender(mutator); err != nil { + return err + } + + if err := mutator.Apply(); err != nil { + return fmt.Errorf("failed to apply suspension mutations: %w", err) + } + } + + r.Object = applied + + return nil +} + +// ApplyBaselineAndFlavors runs the standard field application pipeline on the provided current object. +func (r *WorkloadResource[T, M]) ApplyBaselineAndFlavors(current T) (T, error) { + return applyBaselineAndFlavors( + current, + r.Object, + r.DefaultFieldApplicator, + r.CustomFieldApplicator, + r.FieldFlavors, + ) +} + +// ExtractData runs all registered data extractors against a deep copy of the reconciled object. +func (r *WorkloadResource[T, M]) ExtractData() error { + copyObj, ok := r.Object.DeepCopyObject().(T) + if !ok { + return fmt.Errorf("failed to deep copy object of type %T", r.Object) + } + + for _, extractor := range r.DataExtractors { + if extractor == nil { + continue + } + if err := extractor(copyObj); err != nil { + return err + } + } + + return nil +} + +// ConvergingStatus reports the workload's convergence status using the configured handler. 
+func (r *WorkloadResource[T, M]) ConvergingStatus( + op component.ConvergingOperation, +) (component.ConvergingStatusWithReason, error) { + if r.ConvergingStatusHandler == nil { + return component.ConvergingStatusWithReason{}, fmt.Errorf("converging status handler is not configured") + } + return r.ConvergingStatusHandler(op, r.Object) +} + +// GraceStatus reports the workload's grace status using the configured handler. +func (r *WorkloadResource[T, M]) GraceStatus() (component.GraceStatusWithReason, error) { + if r.GraceStatusHandler == nil { + return component.GraceStatusWithReason{}, fmt.Errorf("grace status handler is not configured") + } + return r.GraceStatusHandler(r.Object) +} + +// DeleteOnSuspend reports whether the workload should be deleted when suspended. +func (r *WorkloadResource[T, M]) DeleteOnSuspend() bool { + if r.DeleteOnSuspendHandler == nil { + return false + } + return r.DeleteOnSuspendHandler(r.Object) +} + +// Suspend registers the configured suspension mutation for the next mutate cycle. +func (r *WorkloadResource[T, M]) Suspend() error { + if r.SuspendMutationHandler == nil { + return fmt.Errorf("suspend mutation handler is not configured") + } + + r.Suspender = func(m M) error { + defer func() { r.Suspender = nil }() + return r.SuspendMutationHandler(m) + } + + return nil +} + +// SuspensionStatus reports the workload's suspension status using the configured handler. 
+func (r *WorkloadResource[T, M]) SuspensionStatus() (component.SuspensionStatusWithReason, error) { + if r.SuspendStatusHandler == nil { + return component.SuspensionStatusWithReason{}, fmt.Errorf("suspend status handler is not configured") + } + return r.SuspendStatusHandler(r.Object) +} diff --git a/internal/generic/resource_workload_test.go b/internal/generic/resource_workload_test.go new file mode 100644 index 0000000..3b7d32c --- /dev/null +++ b/internal/generic/resource_workload_test.go @@ -0,0 +1,131 @@ +package generic + +import ( + "testing" + + "github.com/sourcehawk/operator-component-framework/pkg/component" + "github.com/sourcehawk/operator-component-framework/pkg/feature" + appsv1 "k8s.io/api/apps/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func TestWorkloadResource(t *testing.T) { + obj := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-deploy", + Namespace: "default", + }, + } + identityFunc := func(d *appsv1.Deployment) string { return d.Name } + defaultApp := func(current, desired *appsv1.Deployment) error { + current.Spec = desired.Spec + return nil + } + newMutator := func(d *appsv1.Deployment) *mockMutator { return &mockMutator{deployment: d} } + + res := &WorkloadResource[*appsv1.Deployment, *mockMutator]{ + Object: obj, + IdentityFunc: identityFunc, + DefaultFieldApplicator: defaultApp, + NewMutator: newMutator, + } + + t.Run("Identity", func(t *testing.T) { + if res.Identity() != "test-deploy" { + t.Errorf("expected identity test-deploy, got %s", res.Identity()) + } + }) + + t.Run("GetObject", func(t *testing.T) { + got, err := res.GetObject() + if err != nil { + t.Fatalf("GetObject() error = %v", err) + } + if got.GetName() != "test-deploy" { + t.Errorf("expected name test-deploy, got %s", got.GetName()) + } + }) + + t.Run("Mutate", func(t *testing.T) { + current := &appsv1.Deployment{} + mutCalled := false + res.Mutations = []feature.Mutation[*mockMutator]{ + { + Name: "test-mut", + Feature: 
feature.NewResourceFeature("1.0.0", nil), + Mutate: func(_ *mockMutator) error { + mutCalled = true + return nil + }, + }, + } + + err := res.Mutate(current) + if err != nil { + t.Fatalf("Mutate() error = %v", err) + } + if !mutCalled { + t.Errorf("mutation was not called") + } + }) + + t.Run("Suspend", func(t *testing.T) { + suspendMutCalled := false + res.SuspendMutationHandler = func(_ *mockMutator) error { + suspendMutCalled = true + return nil + } + + err := res.Suspend() + if err != nil { + t.Fatalf("Suspend() error = %v", err) + } + + current := &appsv1.Deployment{} + err = res.Mutate(current) + if err != nil { + t.Fatalf("Mutate() error = %v", err) + } + + if !suspendMutCalled { + t.Errorf("suspend mutation was not called") + } + if res.Suspender != nil { + t.Errorf("suspender should be nil after use") + } + }) + + t.Run("Status handlers", func(t *testing.T) { + res.ConvergingStatusHandler = func(_ component.ConvergingOperation, _ *appsv1.Deployment) (component.ConvergingStatusWithReason, error) { + return component.ConvergingStatusWithReason{Status: component.ConvergingStatusReady}, nil + } + res.GraceStatusHandler = func(_ *appsv1.Deployment) (component.GraceStatusWithReason, error) { + return component.GraceStatusWithReason{Status: component.GraceStatusReady}, nil + } + res.SuspendStatusHandler = func(_ *appsv1.Deployment) (component.SuspensionStatusWithReason, error) { + return component.SuspensionStatusWithReason{Status: component.SuspensionStatusSuspended}, nil + } + res.DeleteOnSuspendHandler = func(_ *appsv1.Deployment) bool { + return true + } + + cs, _ := res.ConvergingStatus(component.ConvergingOperationCreated) + if cs.Status != component.ConvergingStatusReady { + t.Errorf("expected ready") + } + + gs, _ := res.GraceStatus() + if gs.Status != component.GraceStatusReady { + t.Errorf("expected ready") + } + + ss, _ := res.SuspensionStatus() + if ss.Status != component.SuspensionStatusSuspended { + t.Errorf("expected suspended") + } + + if 
!res.DeleteOnSuspend() { + t.Errorf("expected delete on suspend true") + } + }) +} diff --git a/pkg/flavors/flavors.go b/pkg/flavors/flavors.go new file mode 100644 index 0000000..3dd055c --- /dev/null +++ b/pkg/flavors/flavors.go @@ -0,0 +1,32 @@ +// Package flavors provides utilities for managing flavors of component configurations. +package flavors + +import ( + "github.com/sourcehawk/operator-component-framework/pkg/flavors/utils" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// FieldApplicationFlavor defines a function signature for applying "flavors" to a resource. +// A flavor typically preserves certain fields from the current (live) object after the +// baseline field application has occurred. +type FieldApplicationFlavor[T client.Object] func(applied, current, desired T) error + +// PreserveCurrentLabels ensures that any labels present on the current live +// resource but missing from the applied (desired) object are preserved. +// If a label exists in both, the applied value wins. +func PreserveCurrentLabels[T client.Object]() FieldApplicationFlavor[T] { + return func(applied, current, _ T) error { + applied.SetLabels(utils.PreserveMap(applied.GetLabels(), current.GetLabels())) + return nil + } +} + +// PreserveCurrentAnnotations ensures that any annotations present on the current +// live resource but missing from the applied (desired) object are preserved. +// If an annotation exists in both, the applied value wins. 
+func PreserveCurrentAnnotations[T client.Object]() FieldApplicationFlavor[T] { + return func(applied, current, _ T) error { + applied.SetAnnotations(utils.PreserveMap(applied.GetAnnotations(), current.GetAnnotations())) + return nil + } +} diff --git a/pkg/flavors/flavors_test.go b/pkg/flavors/flavors_test.go new file mode 100644 index 0000000..0702a6d --- /dev/null +++ b/pkg/flavors/flavors_test.go @@ -0,0 +1,78 @@ +package flavors + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func TestPreserveCurrentLabels(t *testing.T) { + t.Run("preserves current-only keys", func(t *testing.T) { + applied := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"keep": "applied"}}} + current := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"extra": "current"}}} + + err := PreserveCurrentLabels[*corev1.ConfigMap]()(applied, current, nil) + require.NoError(t, err) + assert.Equal(t, "applied", applied.Labels["keep"]) + assert.Equal(t, "current", applied.Labels["extra"]) + }) + + t.Run("does not overwrite applied keys", func(t *testing.T) { + applied := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"overlap": "applied"}}} + current := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"overlap": "current"}}} + + err := PreserveCurrentLabels[*corev1.ConfigMap]()(applied, current, nil) + require.NoError(t, err) + assert.Equal(t, "applied", applied.Labels["overlap"]) + }) + + t.Run("handles nil maps", func(t *testing.T) { + applied := &corev1.ConfigMap{} + current := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"extra": "current"}}} + + err := PreserveCurrentLabels[*corev1.ConfigMap]()(applied, current, nil) + require.NoError(t, err) + assert.Equal(t, "current", applied.Labels["extra"]) + 
+ appliedEmpty := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"keep": "applied"}}} + currentNil := &corev1.ConfigMap{} + err = PreserveCurrentLabels[*corev1.ConfigMap]()(appliedEmpty, currentNil, nil) + require.NoError(t, err) + assert.Equal(t, "applied", appliedEmpty.Labels["keep"]) + assert.Len(t, appliedEmpty.Labels, 1) + }) +} + +func TestPreserveCurrentAnnotations(t *testing.T) { + t.Run("preserves current-only keys", func(t *testing.T) { + applied := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"keep": "applied"}}} + current := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"extra": "current"}}} + + err := PreserveCurrentAnnotations[*appsv1.Deployment]()(applied, current, nil) + require.NoError(t, err) + assert.Equal(t, "applied", applied.Annotations["keep"]) + assert.Equal(t, "current", applied.Annotations["extra"]) + }) + + t.Run("does not overwrite applied keys", func(t *testing.T) { + applied := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"overlap": "applied"}}} + current := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"overlap": "current"}}} + + err := PreserveCurrentAnnotations[*appsv1.Deployment]()(applied, current, nil) + require.NoError(t, err) + assert.Equal(t, "applied", applied.Annotations["overlap"]) + }) + + t.Run("handles nil maps", func(t *testing.T) { + applied := &appsv1.Deployment{} + current := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"extra": "current"}}} + + err := PreserveCurrentAnnotations[*appsv1.Deployment]()(applied, current, nil) + require.NoError(t, err) + assert.Equal(t, "current", applied.Annotations["extra"]) + }) +} diff --git a/pkg/flavors/utils/maps.go b/pkg/flavors/utils/maps.go new file mode 100644 index 0000000..315073e --- /dev/null +++ b/pkg/flavors/utils/maps.go @@ -0,0 +1,18 @@ +// Package utils provides common 
utility functions for working with Go collections. +package utils + +// PreserveMap merges keys from current into applied only if they are missing in applied. +func PreserveMap(applied, current map[string]string) map[string]string { + if len(current) == 0 { + return applied + } + if applied == nil { + applied = make(map[string]string) + } + for k, v := range current { + if _, exists := applied[k]; !exists { + applied[k] = v + } + } + return applied +} diff --git a/pkg/mutation/editors/container.go b/pkg/mutation/editors/container.go new file mode 100644 index 0000000..9bbc560 --- /dev/null +++ b/pkg/mutation/editors/container.go @@ -0,0 +1,110 @@ +// Package editors provides editors for mutating Kubernetes objects. +package editors + +import ( + "slices" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" +) + +// ContainerEditor provides a typed API for mutating a Kubernetes Container. +type ContainerEditor struct { + container *corev1.Container +} + +// NewContainerEditor creates a new ContainerEditor for the given container. +func NewContainerEditor(container *corev1.Container) *ContainerEditor { + return &ContainerEditor{container: container} +} + +// Raw returns the underlying *corev1.Container. +func (e *ContainerEditor) Raw() *corev1.Container { + return e.container +} + +// EnsureEnvVar ensures an environment variable exists in the container. +// +// Behavior: +// - If an environment variable with the same name exists, it is replaced with the provided EnvVar. +// - If it does not exist, the new EnvVar is appended. +func (e *ContainerEditor) EnsureEnvVar(ev corev1.EnvVar) { + for i := range e.container.Env { + if e.container.Env[i].Name == ev.Name { + e.container.Env[i] = ev + return + } + } + e.container.Env = append(e.container.Env, ev) +} + +// EnsureEnvVars ensures multiple environment variables exist in the container. 
+func (e *ContainerEditor) EnsureEnvVars(vars []corev1.EnvVar) { + for _, ev := range vars { + e.EnsureEnvVar(ev) + } +} + +// RemoveEnvVar removes all environment variables with the specified name. +func (e *ContainerEditor) RemoveEnvVar(name string) { + e.container.Env = slices.DeleteFunc(e.container.Env, func(ev corev1.EnvVar) bool { + return ev.Name == name + }) +} + +// RemoveEnvVars removes all environment variables with any of the specified names. +func (e *ContainerEditor) RemoveEnvVars(names []string) { + for _, name := range names { + e.RemoveEnvVar(name) + } +} + +// EnsureArg ensures the specified argument exists in the container's args list. +// If the argument is already present, it is not added again. +func (e *ContainerEditor) EnsureArg(arg string) { + if !slices.Contains(e.container.Args, arg) { + e.container.Args = append(e.container.Args, arg) + } +} + +// EnsureArgs ensures multiple arguments exist in the container's args list. +func (e *ContainerEditor) EnsureArgs(args []string) { + for _, arg := range args { + e.EnsureArg(arg) + } +} + +// RemoveArg removes all occurrences of the specified argument from the container's args list. +func (e *ContainerEditor) RemoveArg(arg string) { + e.container.Args = slices.DeleteFunc(e.container.Args, func(a string) bool { + return a == arg + }) +} + +// RemoveArgs removes all occurrences of the specified arguments from the container's args list. +func (e *ContainerEditor) RemoveArgs(args []string) { + for _, arg := range args { + e.RemoveArg(arg) + } +} + +// SetResourceLimit sets the resource limit for the container. +func (e *ContainerEditor) SetResourceLimit(name corev1.ResourceName, quantity resource.Quantity) { + if e.container.Resources.Limits == nil { + e.container.Resources.Limits = make(corev1.ResourceList) + } + e.container.Resources.Limits[name] = quantity +} + +// SetResourceRequest sets the resource request for the container. 
+func (e *ContainerEditor) SetResourceRequest(name corev1.ResourceName, quantity resource.Quantity) { + if e.container.Resources.Requests == nil { + e.container.Resources.Requests = make(corev1.ResourceList) + } + e.container.Resources.Requests[name] = quantity +} + +// SetResources sets the resource requirements for the container. +func (e *ContainerEditor) SetResources(res corev1.ResourceRequirements) { + e.container.Resources = res +} diff --git a/pkg/mutation/editors/container_test.go b/pkg/mutation/editors/container_test.go new file mode 100644 index 0000000..4f940aa --- /dev/null +++ b/pkg/mutation/editors/container_test.go @@ -0,0 +1,191 @@ +package editors + +import ( + "testing" + + "github.com/stretchr/testify/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" +) + +func TestContainerEditor_Raw(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + assert.Equal(t, c, e.Raw()) +} + +func TestContainerEditor_EnsureEnvVar(t *testing.T) { + c := &corev1.Container{ + Env: []corev1.EnvVar{ + {Name: "FOO", Value: "BAR"}, + { + Name: "EXISTING_VAL_FROM", + ValueFrom: &corev1.EnvVarSource{ + SecretKeyRef: &corev1.SecretKeySelector{ + Key: "key", + }, + }, + }, + }, + } + e := NewContainerEditor(c) + + e.EnsureEnvVar(corev1.EnvVar{Name: "FOO", Value: "NEW"}) + assert.Equal(t, "NEW", c.Env[0].Value) + assert.Nil(t, c.Env[0].ValueFrom) + + e.EnsureEnvVar(corev1.EnvVar{Name: "EXISTING_VAL_FROM", Value: "LITERAL"}) + assert.Equal(t, "LITERAL", c.Env[1].Value) + assert.Nil(t, c.Env[1].ValueFrom) + + e.EnsureEnvVar(corev1.EnvVar{Name: "BAR", Value: "VAL"}) + assert.Len(t, c.Env, 3) + assert.Equal(t, "BAR", c.Env[2].Name) + assert.Equal(t, "VAL", c.Env[2].Value) +} + +func TestContainerEditor_EnsureEnvVar_ValueFrom(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + + ev := corev1.EnvVar{ + Name: "SECRET_VAR", + ValueFrom: &corev1.EnvVarSource{ + SecretKeyRef: &corev1.SecretKeySelector{ + 
LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"}, + Key: "my-key", + }, + }, + } + e.EnsureEnvVar(ev) + assert.Len(t, c.Env, 1) + assert.Equal(t, "SECRET_VAR", c.Env[0].Name) + assert.Equal(t, "", c.Env[0].Value) + assert.NotNil(t, c.Env[0].ValueFrom) + assert.Equal(t, "my-secret", c.Env[0].ValueFrom.SecretKeyRef.Name) +} + +func TestContainerEditor_EnsureEnvVars(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + + e.EnsureEnvVars([]corev1.EnvVar{ + {Name: "MULTI1", Value: "VAL1"}, + {Name: "MULTI2", Value: "VAL2"}, + }) + assert.Len(t, c.Env, 2) + assert.Equal(t, "MULTI1", c.Env[0].Name) + assert.Equal(t, "VAL1", c.Env[0].Value) + assert.Equal(t, "MULTI2", c.Env[1].Name) + assert.Equal(t, "VAL2", c.Env[1].Value) +} + +func TestContainerEditor_RemoveEnvVar(t *testing.T) { + c := &corev1.Container{ + Env: []corev1.EnvVar{ + {Name: "FOO", Value: "BAR"}, + {Name: "BAZ", Value: "QUX"}, + }, + } + e := NewContainerEditor(c) + + e.RemoveEnvVar("FOO") + assert.Len(t, c.Env, 1) + assert.Equal(t, "BAZ", c.Env[0].Name) +} + +func TestContainerEditor_RemoveEnvVars(t *testing.T) { + c := &corev1.Container{ + Env: []corev1.EnvVar{ + {Name: "FOO", Value: "BAR"}, + {Name: "BAZ", Value: "QUX"}, + {Name: "KEEP", Value: "STAY"}, + }, + } + e := NewContainerEditor(c) + + e.RemoveEnvVars([]string{"FOO", "BAZ"}) + assert.Len(t, c.Env, 1) + assert.Equal(t, "KEEP", c.Env[0].Name) +} + +func TestContainerEditor_EnsureArg(t *testing.T) { + c := &corev1.Container{ + Args: []string{"--arg1"}, + } + e := NewContainerEditor(c) + + e.EnsureArg("--arg2") + assert.Contains(t, c.Args, "--arg2") + e.EnsureArg("--arg1") + assert.Len(t, c.Args, 2) +} + +func TestContainerEditor_EnsureArgs(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + + e.EnsureArgs([]string{"--arg3", "--arg4"}) + assert.Len(t, c.Args, 2) + assert.Contains(t, c.Args, "--arg3") + assert.Contains(t, c.Args, "--arg4") +} + +func TestContainerEditor_RemoveArg(t 
*testing.T) { + c := &corev1.Container{ + Args: []string{"--arg1", "--arg2"}, + } + e := NewContainerEditor(c) + + e.RemoveArg("--arg1") + assert.Len(t, c.Args, 1) + assert.Equal(t, "--arg2", c.Args[0]) +} + +func TestContainerEditor_RemoveArgs(t *testing.T) { + c := &corev1.Container{ + Args: []string{"--arg1", "--arg2", "--arg3"}, + } + e := NewContainerEditor(c) + + e.RemoveArgs([]string{"--arg1", "--arg3"}) + assert.Len(t, c.Args, 1) + assert.Equal(t, "--arg2", c.Args[0]) +} + +func TestContainerEditor_SetResourceLimit(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + + q := resource.MustParse("100m") + e.SetResourceLimit(corev1.ResourceCPU, q) + + assert.Equal(t, q, c.Resources.Limits[corev1.ResourceCPU]) +} + +func TestContainerEditor_SetResourceRequest(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + + q := resource.MustParse("128Mi") + e.SetResourceRequest(corev1.ResourceMemory, q) + + assert.Equal(t, q, c.Resources.Requests[corev1.ResourceMemory]) +} + +func TestContainerEditor_SetResources(t *testing.T) { + c := &corev1.Container{} + e := NewContainerEditor(c) + + res := corev1.ResourceRequirements{ + Limits: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("100m"), + }, + Requests: corev1.ResourceList{ + corev1.ResourceMemory: resource.MustParse("128Mi"), + }, + } + e.SetResources(res) + + assert.Equal(t, res, c.Resources) +} diff --git a/pkg/mutation/editors/deploymentspec.go b/pkg/mutation/editors/deploymentspec.go new file mode 100644 index 0000000..a5684b8 --- /dev/null +++ b/pkg/mutation/editors/deploymentspec.go @@ -0,0 +1,49 @@ +package editors + +import ( + appsv1 "k8s.io/api/apps/v1" +) + +// DeploymentSpecEditor provides a typed API for mutating a Kubernetes DeploymentSpec. +type DeploymentSpecEditor struct { + spec *appsv1.DeploymentSpec +} + +// NewDeploymentSpecEditor creates a new DeploymentSpecEditor for the given DeploymentSpec. 
+func NewDeploymentSpecEditor(spec *appsv1.DeploymentSpec) *DeploymentSpecEditor { + return &DeploymentSpecEditor{spec: spec} +} + +// Raw returns the underlying *appsv1.DeploymentSpec. +// +// This is an escape hatch for cases where the typed API is insufficient. +func (e *DeploymentSpecEditor) Raw() *appsv1.DeploymentSpec { + return e.spec +} + +// SetReplicas sets the number of replicas for the deployment. +func (e *DeploymentSpecEditor) SetReplicas(replicas int32) { + e.spec.Replicas = &replicas +} + +// SetPaused sets the paused field of the deployment. +func (e *DeploymentSpecEditor) SetPaused(paused bool) { + e.spec.Paused = paused +} + +// SetMinReadySeconds sets the minimum number of seconds for which a newly created pod should be ready +// without any of its containers crashing, for it to be considered available. +func (e *DeploymentSpecEditor) SetMinReadySeconds(seconds int32) { + e.spec.MinReadySeconds = seconds +} + +// SetRevisionHistoryLimit sets the number of old ReplicaSets to retain to allow rollback. +func (e *DeploymentSpecEditor) SetRevisionHistoryLimit(limit int32) { + e.spec.RevisionHistoryLimit = &limit +} + +// SetProgressDeadlineSeconds sets the maximum time in seconds for a deployment to make progress before it +// is considered to be failed.
+func (e *DeploymentSpecEditor) SetProgressDeadlineSeconds(seconds int32) { + e.spec.ProgressDeadlineSeconds = &seconds +} diff --git a/pkg/mutation/editors/deploymentspec_test.go b/pkg/mutation/editors/deploymentspec_test.go new file mode 100644 index 0000000..8564a52 --- /dev/null +++ b/pkg/mutation/editors/deploymentspec_test.go @@ -0,0 +1,51 @@ +package editors + +import ( + "testing" + + "github.com/stretchr/testify/assert" + appsv1 "k8s.io/api/apps/v1" +) + +func TestDeploymentSpecEditor(t *testing.T) { + t.Run("SetReplicas", func(t *testing.T) { + spec := &appsv1.DeploymentSpec{} + editor := NewDeploymentSpecEditor(spec) + editor.SetReplicas(3) + assert.Equal(t, int32(3), *spec.Replicas) + }) + + t.Run("SetPaused", func(t *testing.T) { + spec := &appsv1.DeploymentSpec{} + editor := NewDeploymentSpecEditor(spec) + editor.SetPaused(true) + assert.True(t, spec.Paused) + }) + + t.Run("SetMinReadySeconds", func(t *testing.T) { + spec := &appsv1.DeploymentSpec{} + editor := NewDeploymentSpecEditor(spec) + editor.SetMinReadySeconds(10) + assert.Equal(t, int32(10), spec.MinReadySeconds) + }) + + t.Run("SetRevisionHistoryLimit", func(t *testing.T) { + spec := &appsv1.DeploymentSpec{} + editor := NewDeploymentSpecEditor(spec) + editor.SetRevisionHistoryLimit(5) + assert.Equal(t, int32(5), *spec.RevisionHistoryLimit) + }) + + t.Run("SetProgressDeadlineSeconds", func(t *testing.T) { + spec := &appsv1.DeploymentSpec{} + editor := NewDeploymentSpecEditor(spec) + editor.SetProgressDeadlineSeconds(600) + assert.Equal(t, int32(600), *spec.ProgressDeadlineSeconds) + }) + + t.Run("Raw", func(t *testing.T) { + spec := &appsv1.DeploymentSpec{} + editor := NewDeploymentSpecEditor(spec) + assert.Equal(t, spec, editor.Raw()) + }) +} diff --git a/pkg/mutation/editors/editor.go b/pkg/mutation/editors/editor.go new file mode 100644 index 0000000..6d3e8e6 --- /dev/null +++ b/pkg/mutation/editors/editor.go @@ -0,0 +1,10 @@ +package editors + +// RawEditor provides access to the raw 
underlying Kubernetes object. +// +// This interface allows users to perform unconstrained modifications to the +// object when the provided typed helpers are insufficient. +type RawEditor[T any] interface { + // Raw returns a pointer to the raw Kubernetes object. + Raw() *T +} diff --git a/pkg/mutation/editors/objectmeta.go b/pkg/mutation/editors/objectmeta.go new file mode 100644 index 0000000..835a9b9 --- /dev/null +++ b/pkg/mutation/editors/objectmeta.go @@ -0,0 +1,50 @@ +package editors + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// ObjectMetaEditor provides a typed API for mutating Kubernetes ObjectMeta. +type ObjectMetaEditor struct { + meta *metav1.ObjectMeta +} + +// NewObjectMetaEditor creates a new ObjectMetaEditor for the given ObjectMeta. +func NewObjectMetaEditor(meta *metav1.ObjectMeta) *ObjectMetaEditor { + return &ObjectMetaEditor{meta: meta} +} + +// Raw returns the underlying *metav1.ObjectMeta. +func (e *ObjectMetaEditor) Raw() *metav1.ObjectMeta { + return e.meta +} + +// EnsureLabel ensures a label with the given key and value exists. +func (e *ObjectMetaEditor) EnsureLabel(key, value string) { + if e.meta.Labels == nil { + e.meta.Labels = make(map[string]string) + } + e.meta.Labels[key] = value +} + +// RemoveLabel removes a label with the given key. +func (e *ObjectMetaEditor) RemoveLabel(key string) { + if e.meta.Labels != nil { + delete(e.meta.Labels, key) + } +} + +// EnsureAnnotation ensures an annotation with the given key and value exists. +func (e *ObjectMetaEditor) EnsureAnnotation(key, value string) { + if e.meta.Annotations == nil { + e.meta.Annotations = make(map[string]string) + } + e.meta.Annotations[key] = value +} + +// RemoveAnnotation removes an annotation with the given key. 
+func (e *ObjectMetaEditor) RemoveAnnotation(key string) { + if e.meta.Annotations != nil { + delete(e.meta.Annotations, key) + } +} diff --git a/pkg/mutation/editors/objectmeta_test.go b/pkg/mutation/editors/objectmeta_test.go new file mode 100644 index 0000000..33e4078 --- /dev/null +++ b/pkg/mutation/editors/objectmeta_test.go @@ -0,0 +1,68 @@ +package editors + +import ( + "testing" + + "github.com/stretchr/testify/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func TestObjectMetaEditor_Raw(t *testing.T) { + meta := &metav1.ObjectMeta{} + e := NewObjectMetaEditor(meta) + assert.Equal(t, meta, e.Raw()) +} + +func TestObjectMetaEditor_EnsureLabel(t *testing.T) { + meta := &metav1.ObjectMeta{} + e := NewObjectMetaEditor(meta) + + e.EnsureLabel("foo", "bar") + assert.Equal(t, "bar", meta.Labels["foo"]) + e.EnsureLabel("foo", "updated") + assert.Equal(t, "updated", meta.Labels["foo"]) +} + +func TestObjectMetaEditor_RemoveLabel(t *testing.T) { + meta := &metav1.ObjectMeta{ + Labels: map[string]string{"foo": "bar"}, + } + e := NewObjectMetaEditor(meta) + + e.RemoveLabel("foo") + assert.NotContains(t, meta.Labels, "foo") + e.RemoveLabel("nonexistent") // safe + + // Nil safety + meta2 := &metav1.ObjectMeta{} + e2 := NewObjectMetaEditor(meta2) + e2.RemoveLabel("any") + assert.Nil(t, meta2.Labels) +} + +func TestObjectMetaEditor_EnsureAnnotation(t *testing.T) { + meta := &metav1.ObjectMeta{} + e := NewObjectMetaEditor(meta) + + e.EnsureAnnotation("ann", "val") + assert.Equal(t, "val", meta.Annotations["ann"]) + e.EnsureAnnotation("ann", "updated") + assert.Equal(t, "updated", meta.Annotations["ann"]) +} + +func TestObjectMetaEditor_RemoveAnnotation(t *testing.T) { + meta := &metav1.ObjectMeta{ + Annotations: map[string]string{"ann": "val"}, + } + e := NewObjectMetaEditor(meta) + + e.RemoveAnnotation("ann") + assert.NotContains(t, meta.Annotations, "ann") + e.RemoveAnnotation("nonexistent") // safe + + // Nil safety + meta2 := &metav1.ObjectMeta{} + e2 := 
NewObjectMetaEditor(meta2) + e2.RemoveAnnotation("any") + assert.Nil(t, meta2.Annotations) +} diff --git a/pkg/mutation/editors/podspec.go b/pkg/mutation/editors/podspec.go new file mode 100644 index 0000000..a3f4926 --- /dev/null +++ b/pkg/mutation/editors/podspec.go @@ -0,0 +1,143 @@ +package editors + +import ( + "reflect" + + corev1 "k8s.io/api/core/v1" +) + +// PodSpecEditor provides a typed API for mutating a Kubernetes PodSpec. +// It remains a scoped editor, and Raw() can be used for less common changes. +type PodSpecEditor struct { + spec *corev1.PodSpec +} + +// NewPodSpecEditor creates a new PodSpecEditor for the given PodSpec. +func NewPodSpecEditor(spec *corev1.PodSpec) *PodSpecEditor { + return &PodSpecEditor{spec: spec} +} + +// Raw returns the underlying *corev1.PodSpec. +func (e *PodSpecEditor) Raw() *corev1.PodSpec { + return e.spec +} + +// SetServiceAccountName sets the spec.ServiceAccountName field. +func (e *PodSpecEditor) SetServiceAccountName(name string) { + e.spec.ServiceAccountName = name +} + +// EnsureImagePullSecret appends the given secret name to spec.ImagePullSecrets if it's not already present. +// It identifies existing secrets by name. +func (e *PodSpecEditor) EnsureImagePullSecret(name string) { + for _, s := range e.spec.ImagePullSecrets { + if s.Name == name { + return + } + } + e.spec.ImagePullSecrets = append(e.spec.ImagePullSecrets, corev1.LocalObjectReference{Name: name}) +} + +// RemoveImagePullSecret removes all entries from spec.ImagePullSecrets that match the given name. +// It's a no-op if no matching entries are present. +func (e *PodSpecEditor) RemoveImagePullSecret(name string) { + newSecrets := make([]corev1.LocalObjectReference, 0, len(e.spec.ImagePullSecrets)) + for _, s := range e.spec.ImagePullSecrets { + if s.Name != name { + newSecrets = append(newSecrets, s) + } + } + e.spec.ImagePullSecrets = newSecrets +} + +// EnsureNodeSelector initializes spec.NodeSelector if nil and upserts the given key/value pair. 
+func (e *PodSpecEditor) EnsureNodeSelector(key, value string) { + if e.spec.NodeSelector == nil { + e.spec.NodeSelector = make(map[string]string) + } + e.spec.NodeSelector[key] = value +} + +// RemoveNodeSelector removes the key from spec.NodeSelector. +// It's a no-op if NodeSelector is nil or the key is not present. +func (e *PodSpecEditor) RemoveNodeSelector(key string) { + if e.spec.NodeSelector == nil { + return + } + delete(e.spec.NodeSelector, key) +} + +// EnsureToleration appends the given toleration to spec.Tolerations if an equal toleration is not already present. +// It uses exact struct equality for comparison. +func (e *PodSpecEditor) EnsureToleration(t corev1.Toleration) { + for _, existing := range e.spec.Tolerations { + if reflect.DeepEqual(existing, t) { + return + } + } + e.spec.Tolerations = append(e.spec.Tolerations, t) +} + +// RemoveTolerations removes all tolerations for which the match function returns true. +// It's a no-op if the match function is nil. +func (e *PodSpecEditor) RemoveTolerations(match func(corev1.Toleration) bool) { + if match == nil { + return + } + newTolerations := make([]corev1.Toleration, 0, len(e.spec.Tolerations)) + for _, t := range e.spec.Tolerations { + if !match(t) { + newTolerations = append(newTolerations, t) + } + } + e.spec.Tolerations = newTolerations +} + +// EnsureVolume replaces an existing volume with the same name in spec.Volumes or appends it if missing. +// It identifies volumes by their Name field. +func (e *PodSpecEditor) EnsureVolume(v corev1.Volume) { + for i, existing := range e.spec.Volumes { + if existing.Name == v.Name { + e.spec.Volumes[i] = v + return + } + } + e.spec.Volumes = append(e.spec.Volumes, v) +} + +// RemoveVolume removes all volumes with the given name from spec.Volumes. +// It's a no-op if no volume with the given name is present.
+func (e *PodSpecEditor) RemoveVolume(name string) {
+	newVolumes := make([]corev1.Volume, 0, len(e.spec.Volumes))
+	for _, v := range e.spec.Volumes {
+		if v.Name != name {
+			newVolumes = append(newVolumes, v)
+		}
+	}
+	e.spec.Volumes = newVolumes
+}
+
+// SetPriorityClassName sets the spec.PriorityClassName field.
+func (e *PodSpecEditor) SetPriorityClassName(name string) {
+	e.spec.PriorityClassName = name
+}
+
+// SetHostNetwork sets the spec.HostNetwork field.
+func (e *PodSpecEditor) SetHostNetwork(enabled bool) {
+	e.spec.HostNetwork = enabled
+}
+
+// SetHostPID sets the spec.HostPID field.
+func (e *PodSpecEditor) SetHostPID(enabled bool) {
+	e.spec.HostPID = enabled
+}
+
+// SetHostIPC sets the spec.HostIPC field.
+func (e *PodSpecEditor) SetHostIPC(enabled bool) {
+	e.spec.HostIPC = enabled
+}
+
+// SetSecurityContext sets the spec.SecurityContext field.
+func (e *PodSpecEditor) SetSecurityContext(ctx *corev1.PodSecurityContext) {
+	e.spec.SecurityContext = ctx
+}
diff --git a/pkg/mutation/editors/podspec_test.go b/pkg/mutation/editors/podspec_test.go
new file mode 100644
index 0000000..fbd0cd9
--- /dev/null
+++ b/pkg/mutation/editors/podspec_test.go
@@ -0,0 +1,150 @@
+package editors
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	corev1 "k8s.io/api/core/v1"
+)
+
+func TestPodSpecEditor_Raw(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+
+	assert.Equal(t, spec, e.Raw())
+
+	e.Raw().ServiceAccountName = "test-sa"
+	assert.Equal(t, "test-sa", spec.ServiceAccountName)
+}
+
+func TestPodSpecEditor_SetServiceAccountName(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+	e.SetServiceAccountName("my-sa")
+	assert.Equal(t, "my-sa", spec.ServiceAccountName)
+}
+
+func TestPodSpecEditor_ImagePullSecrets(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+
+	// EnsureImagePullSecret
+	e.EnsureImagePullSecret("secret1")
+	assert.Equal(t, []corev1.LocalObjectReference{{Name: "secret1"}}, spec.ImagePullSecrets)
+
+	e.EnsureImagePullSecret("secret1") // no-op
+	assert.Equal(t, []corev1.LocalObjectReference{{Name: "secret1"}}, spec.ImagePullSecrets)
+
+	e.EnsureImagePullSecret("secret2")
+	assert.Equal(t, []corev1.LocalObjectReference{{Name: "secret1"}, {Name: "secret2"}}, spec.ImagePullSecrets)
+
+	// RemoveImagePullSecret
+	spec.ImagePullSecrets = append(spec.ImagePullSecrets, corev1.LocalObjectReference{Name: "secret1"})
+	e.RemoveImagePullSecret("secret1")
+	assert.Equal(t, []corev1.LocalObjectReference{{Name: "secret2"}}, spec.ImagePullSecrets)
+
+	e.RemoveImagePullSecret("non-existent") // safe
+	assert.Equal(t, []corev1.LocalObjectReference{{Name: "secret2"}}, spec.ImagePullSecrets)
+}
+
+func TestPodSpecEditor_NodeSelector(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+
+	// EnsureNodeSelector
+	e.EnsureNodeSelector("key1", "val1")
+	assert.Equal(t, map[string]string{"key1": "val1"}, spec.NodeSelector)
+
+	e.EnsureNodeSelector("key1", "val2") // update
+	assert.Equal(t, map[string]string{"key1": "val2"}, spec.NodeSelector)
+
+	e.EnsureNodeSelector("key2", "val2")
+	assert.Equal(t, map[string]string{"key1": "val2", "key2": "val2"}, spec.NodeSelector)
+
+	// RemoveNodeSelector
+	e.RemoveNodeSelector("key1")
+	assert.Equal(t, map[string]string{"key2": "val2"}, spec.NodeSelector)
+
+	e.RemoveNodeSelector("non-existent") // safe
+	assert.Equal(t, map[string]string{"key2": "val2"}, spec.NodeSelector)
+
+	spec.NodeSelector = nil
+	e.RemoveNodeSelector("key2") // safe on nil map
+	assert.Nil(t, spec.NodeSelector)
+}
+
+func TestPodSpecEditor_Tolerations(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+	tol1 := corev1.Toleration{Key: "key1", Operator: corev1.TolerationOpEqual, Value: "val1"}
+
+	// EnsureToleration
+	e.EnsureToleration(tol1)
+	assert.Equal(t, []corev1.Toleration{tol1}, spec.Tolerations)
+
+	e.EnsureToleration(tol1) // no-op
+	assert.Equal(t, []corev1.Toleration{tol1}, spec.Tolerations)
+
+	// RemoveTolerations
+	tol2 := corev1.Toleration{Key: "key2", Operator: corev1.TolerationOpEqual, Value: "val2"}
+	spec.Tolerations = append(spec.Tolerations, tol2, tol1)
+
+	e.RemoveTolerations(func(t corev1.Toleration) bool { return t.Key == "key1" })
+	assert.Equal(t, []corev1.Toleration{tol2}, spec.Tolerations)
+
+	e.RemoveTolerations(nil) // no-op
+	assert.Equal(t, []corev1.Toleration{tol2}, spec.Tolerations)
+}
+
+func TestPodSpecEditor_Volumes(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+	vol1 := corev1.Volume{Name: "vol1", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}}
+
+	// EnsureVolume
+	e.EnsureVolume(vol1)
+	assert.Equal(t, []corev1.Volume{vol1}, spec.Volumes)
+
+	vol1Updated := corev1.Volume{Name: "vol1", VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/foo"}}}
+	e.EnsureVolume(vol1Updated) // replace
+	assert.Equal(t, []corev1.Volume{vol1Updated}, spec.Volumes)
+
+	// RemoveVolume
+	spec.Volumes = append(spec.Volumes, corev1.Volume{Name: "vol2"}, vol1Updated)
+	e.RemoveVolume("vol1")
+	assert.Equal(t, []corev1.Volume{{Name: "vol2"}}, spec.Volumes)
+
+	e.RemoveVolume("non-existent") // safe
+	assert.Equal(t, []corev1.Volume{{Name: "vol2"}}, spec.Volumes)
+}
+
+func TestPodSpecEditor_SetPriorityClassName(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+	e.SetPriorityClassName("high-priority")
+	assert.Equal(t, "high-priority", spec.PriorityClassName)
+}
+
+func TestPodSpecEditor_HostToggles(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+
+	e.SetHostNetwork(true)
+	assert.True(t, spec.HostNetwork)
+	e.SetHostPID(true)
+	assert.True(t, spec.HostPID)
+	e.SetHostIPC(true)
+	assert.True(t, spec.HostIPC)
+}
+
+func TestPodSpecEditor_SetSecurityContext(t *testing.T) {
+	spec := &corev1.PodSpec{}
+	e := NewPodSpecEditor(spec)
+	sc := &corev1.PodSecurityContext{RunAsUser: ptr(int64(1000))}
+	e.SetSecurityContext(sc)
+	assert.Equal(t, sc, spec.SecurityContext)
+}
+
+func ptr[T any](v T) *T {
+	return &v
+}
diff --git a/pkg/mutation/selectors/container.go b/pkg/mutation/selectors/container.go
new file mode 100644
index 0000000..8c834b3
--- /dev/null
+++ b/pkg/mutation/selectors/container.go
@@ -0,0 +1,44 @@
+// Package selectors provides selectors for filtering Kubernetes objects.
+package selectors
+
+import (
+	"slices"
+
+	corev1 "k8s.io/api/core/v1"
+)
+
+// ContainerSelector is a function that determines if a container matches
+// a specific criterion for mutation.
+//
+// In the context of a Mutator Apply() pass, the selector is evaluated against
+// the original container snapshot before any edits are applied. This ensures
+// stable matching even if an earlier edit renames the container.
+type ContainerSelector func(index int, c *corev1.Container) bool
+
+// AllContainers returns a ContainerSelector that matches all containers.
+func AllContainers() ContainerSelector {
+	return func(int, *corev1.Container) bool {
+		return true
+	}
+}
+
+// ContainerNamed returns a ContainerSelector that matches a container with the given name.
+func ContainerNamed(name string) ContainerSelector {
+	return func(_ int, c *corev1.Container) bool {
+		return c.Name == name
+	}
+}
+
+// ContainersNamed returns a ContainerSelector that matches containers with any of the given names.
+func ContainersNamed(names ...string) ContainerSelector {
+	return func(_ int, c *corev1.Container) bool {
+		return slices.Contains(names, c.Name)
+	}
+}
+
+// ContainerAtIndex returns a ContainerSelector that matches a container at the given index.
+func ContainerAtIndex(index int) ContainerSelector {
+	return func(i int, _ *corev1.Container) bool {
+		return i == index
+	}
+}
diff --git a/pkg/mutation/selectors/container_test.go b/pkg/mutation/selectors/container_test.go
new file mode 100644
index 0000000..a07d2ff
--- /dev/null
+++ b/pkg/mutation/selectors/container_test.go
@@ -0,0 +1,41 @@
+package selectors
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	corev1 "k8s.io/api/core/v1"
+)
+
+func TestAllContainers(t *testing.T) {
+	c1 := &corev1.Container{Name: "c1"}
+	c2 := &corev1.Container{Name: "c2"}
+
+	assert.True(t, AllContainers()(0, c1))
+	assert.True(t, AllContainers()(1, c2))
+}
+
+func TestContainerNamed(t *testing.T) {
+	c1 := &corev1.Container{Name: "c1"}
+	c2 := &corev1.Container{Name: "c2"}
+
+	assert.True(t, ContainerNamed("c1")(0, c1))
+	assert.False(t, ContainerNamed("c1")(1, c2))
+}
+
+func TestContainersNamed(t *testing.T) {
+	c1 := &corev1.Container{Name: "c1"}
+	c2 := &corev1.Container{Name: "c2"}
+
+	assert.True(t, ContainersNamed("c1", "c3")(0, c1))
+	assert.False(t, ContainersNamed("c1", "c3")(1, c2))
+}
+
+func TestContainerAtIndex(t *testing.T) {
+	c1 := &corev1.Container{Name: "c1"}
+	c2 := &corev1.Container{Name: "c2"}
+
+	assert.True(t, ContainerAtIndex(0)(0, c1))
+	assert.False(t, ContainerAtIndex(0)(1, c2))
+	assert.True(t, ContainerAtIndex(1)(1, c2))
+}
diff --git a/pkg/primitives/deployment/builder.go b/pkg/primitives/deployment/builder.go
new file mode 100644
index 0000000..9100fa8
--- /dev/null
+++ b/pkg/primitives/deployment/builder.go
@@ -0,0 +1,218 @@
+// Package deployment provides a builder and resource for managing Kubernetes Deployments.
+package deployment
+
+import (
+	"fmt"
+
+	"github.com/sourcehawk/operator-component-framework/internal/generic"
+	"github.com/sourcehawk/operator-component-framework/pkg/component"
+	"github.com/sourcehawk/operator-component-framework/pkg/feature"
+	appsv1 "k8s.io/api/apps/v1"
+)
+
+// Builder is a configuration helper for creating and customizing a Deployment Resource.
+//
+// It provides a fluent API for registering mutations, status handlers, and
+// data extractors. This builder ensures that the resulting Resource is
+// properly initialized and validated before use in a reconciliation loop.
+type Builder struct {
+	base *generic.WorkloadBuilder[*appsv1.Deployment, *Mutator]
+}
+
+// NewBuilder initializes a new Builder with the provided Deployment object.
+//
+// The Deployment object passed here serves as the "desired base state". During
+// reconciliation, the Resource will attempt to make the cluster's state match
+// this base state, modified by any registered mutations.
+//
+// The provided deployment must have at least a Name and Namespace set, which
+// is validated during the Build() call.
+func NewBuilder(deployment *appsv1.Deployment) *Builder {
+	identityFunc := func(d *appsv1.Deployment) string {
+		return fmt.Sprintf("apps/v1/Deployment/%s/%s", d.Namespace, d.Name)
+	}
+
+	base := generic.NewWorkloadBuilder[*appsv1.Deployment, *Mutator](
+		deployment,
+		identityFunc,
+		DefaultFieldApplicator,
+		NewMutator,
+	)
+
+	base.
+		WithCustomConvergeStatus(DefaultConvergingStatusHandler).
+		WithCustomGraceStatus(DefaultGraceStatusHandler).
+		WithCustomSuspendStatus(DefaultSuspensionStatusHandler).
+		WithCustomSuspendMutation(DefaultSuspendMutationHandler).
+		WithCustomSuspendDeletionDecision(DefaultDeleteOnSuspendHandler)
+
+	return &Builder{
+		base: base,
+	}
+}
+
+// WithMutation registers a feature-based mutation for the Deployment.
+//
+// Mutations are applied sequentially during the Mutate() phase of reconciliation.
+// They are typically used by Features to inject environment variables,
+// arguments, or other configuration into the Deployment's containers.
+//
+// Since mutations are often version-gated, the provided feature.Mutation
+// should contain the logic to determine if and how the mutation is applied
+// based on the component's current version or configuration.
+func (b *Builder) WithMutation(m feature.Mutation[*Mutator]) *Builder {
+	b.base.WithMutation(m)
+	return b
+}
+
+// WithCustomFieldApplicator sets a custom strategy for applying the desired
+// state to the existing Deployment in the cluster.
+//
+// There is a default field applicator (DefaultFieldApplicator) that overwrites
+// the entire spec of the current object with the desired state. Using a custom
+// applicator is necessary when:
+// - Other controllers (e.g., HPA) manage specific fields like 'replicas'.
+// - Sidecar injectors add containers or volumes that should be preserved.
+// - Defaulting webhooks add fields that would otherwise cause perpetual diffs.
+//
+// The applicator function receives both the 'current' object from the API
+// server and the 'desired' object from the Resource. It is responsible for
+// merging the desired changes into the current object.
+//
+// If a custom applicator is set, it overrides the default baseline application
+// logic. Post-application flavors and mutations are still applied afterward.
+func (b *Builder) WithCustomFieldApplicator(
+	applicator func(current *appsv1.Deployment, desired *appsv1.Deployment) error,
+) *Builder {
+	b.base.WithCustomFieldApplicator(applicator)
+	return b
+}
+
+// WithFieldApplicationFlavor registers a reusable post-application "flavor" for
+// the Deployment.
+//
+// Flavors are applied in the order they are registered, after the baseline field
+// applicator (default or custom) has already run. They are typically used to
+// preserve selected live fields from the current object that should not be
+// overwritten by the desired state.
+//
+// If the provided flavor is nil, it is ignored.
+func (b *Builder) WithFieldApplicationFlavor(flavor FieldApplicationFlavor) *Builder {
+	b.base.WithFieldApplicationFlavor(generic.FieldApplicationFlavor[*appsv1.Deployment](flavor))
+	return b
+}
+
+// WithCustomConvergeStatus overrides the default logic for determining if the
+// Deployment has reached its desired state.
+//
+// The default behavior uses DefaultConvergingStatusHandler, which considers a
+// Deployment ready when its ReadyReplicas count matches the desired replica count.
+// Use this method if your Deployment requires more complex health checks, such
+// as waiting for specific annotations, status conditions, or external signals.
+//
+// If you want to augment the default behavior, you can call DefaultConvergingStatusHandler
+// within your custom handler.
+func (b *Builder) WithCustomConvergeStatus(
+	handler func(component.ConvergingOperation, *appsv1.Deployment) (component.ConvergingStatusWithReason, error),
+) *Builder {
+	b.base.WithCustomConvergeStatus(handler)
+	return b
+}
+
+// WithCustomGraceStatus overrides how the Deployment reports its health while
+// it is still converging (e.g., during a rollout).
+//
+// The default behavior uses DefaultGraceStatusHandler.
+//
+// This is used to provide more granular feedback in the component's status
+// about the severity of a rollout's progress or failure.
+//
+// If you want to augment the default behavior, you can call DefaultGraceStatusHandler
+// within your custom handler.
+func (b *Builder) WithCustomGraceStatus(
+	handler func(*appsv1.Deployment) (component.GraceStatusWithReason, error),
+) *Builder {
+	b.base.WithCustomGraceStatus(handler)
+	return b
+}
+
+// WithCustomSuspendStatus overrides how the progress of suspension is reported.
+//
+// The default behavior uses DefaultSuspensionStatusHandler, which reports the
+// progress of scaling down to zero replicas. Use this if your custom suspension
+// strategy involves other measurable states.
+//
+// If you want to augment the default behavior, you can call DefaultSuspensionStatusHandler
+// within your custom handler.
+func (b *Builder) WithCustomSuspendStatus(
+	handler func(*appsv1.Deployment) (component.SuspensionStatusWithReason, error),
+) *Builder {
+	b.base.WithCustomSuspendStatus(handler)
+	return b
+}
+
+// WithCustomSuspendMutation defines how the Deployment should be modified when
+// the component is suspended.
+//
+// The default behavior uses DefaultSuspendMutationHandler, which scales the
+// Deployment to zero replicas. You might override this if you want to suspend
+// the workload by other means, such as changing labels, annotations, or
+// updating a 'suspended' field in a custom controller.
+//
+// If you want to augment the default behavior, you can call DefaultSuspendMutationHandler
+// within your custom handler.
+func (b *Builder) WithCustomSuspendMutation(
+	handler func(*Mutator) error,
+) *Builder {
+	b.base.WithCustomSuspendMutation(handler)
+	return b
+}
+
+// WithCustomSuspendDeletionDecision overrides the decision of whether to delete
+// the Deployment when the component is suspended.
+//
+// The default behavior uses DefaultDeleteOnSuspendHandler, which does not
+// delete Deployments during suspension (they are only scaled down). Return true
+// from this handler if you want the Deployment to be completely removed from
+// the cluster when suspended.
+//
+// If you want to augment the default behavior, you can call DefaultDeleteOnSuspendHandler
+// within your custom handler.
+func (b *Builder) WithCustomSuspendDeletionDecision(
+	handler func(*appsv1.Deployment) bool,
+) *Builder {
+	b.base.WithCustomSuspendDeletionDecision(handler)
+	return b
+}
+
+// WithDataExtractor registers a function to harvest information from the
+// Deployment after it has been successfully reconciled.
+//
+// This is useful for capturing auto-generated fields (like names or assigned
+// IPs) and making them available to other components or resources via the
+// framework's data extraction mechanism.
+func (b *Builder) WithDataExtractor(
+	extractor func(appsv1.Deployment) error,
+) *Builder {
+	if extractor != nil {
+		b.base.WithDataExtractor(func(d *appsv1.Deployment) error {
+			return extractor(*d)
+		})
+	}
+	return b
+}
+
+// Build validates the configuration and returns the initialized Resource.
+//
+// It ensures that:
+// - A base Deployment object was provided.
+// - The Deployment has both a name and a namespace set.
+//
+// If validation fails, an error is returned and the Resource should not be used.
+func (b *Builder) Build() (*Resource, error) {
+	genericRes, err := b.base.Build()
+	if err != nil {
+		return nil, err
+	}
+	return &Resource{base: genericRes}, nil
+}
diff --git a/pkg/primitives/deployment/builder_test.go b/pkg/primitives/deployment/builder_test.go
new file mode 100644
index 0000000..2e9dcf9
--- /dev/null
+++ b/pkg/primitives/deployment/builder_test.go
@@ -0,0 +1,273 @@
+package deployment
+
+import (
+	"errors"
+	"testing"
+
+	"github.com/sourcehawk/operator-component-framework/pkg/component"
+	"github.com/sourcehawk/operator-component-framework/pkg/feature"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	appsv1 "k8s.io/api/apps/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func TestBuilder(t *testing.T) {
+	t.Parallel()
+
+	t.Run("Build validation", func(t *testing.T) {
+		t.Parallel()
+
+		tests := []struct {
+			name        string
+			deployment  *appsv1.Deployment
+			expectedErr string
+		}{
+			{
+				name:        "nil deployment",
+				deployment:  nil,
+				expectedErr: "object cannot be nil",
+			},
+			{
+				name: "empty name",
+				deployment: &appsv1.Deployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Namespace: "test-ns",
+					},
+				},
+				expectedErr: "object name cannot be empty",
+			},
+			{
+				name: "empty namespace",
+				deployment: &appsv1.Deployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "test-deploy",
+					},
+				},
+				expectedErr: "object namespace cannot be empty",
+			},
+			{
+				name: "valid deployment",
+				deployment: &appsv1.Deployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      "test-deploy",
+						Namespace: "test-ns",
+					},
+				},
+				expectedErr: "",
+			},
+		}
+
+		for _, tt := range tests {
+			t.Run(tt.name, func(t *testing.T) {
+				res, err := NewBuilder(tt.deployment).Build()
+				if tt.expectedErr != "" {
+					require.Error(t, err)
+					assert.Contains(t, err.Error(), tt.expectedErr)
+					assert.Nil(t, res)
+				} else {
+					require.NoError(t, err)
+					require.NotNil(t, res)
+					assert.Equal(t, "apps/v1/Deployment/test-ns/test-deploy", res.Identity())
+				}
+			})
+		}
+	})
+
+	t.Run("WithMutation", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		m := feature.Mutation[*Mutator]{
+			Name: "test-mutation",
+		}
+		res, err := NewBuilder(deploy).
+			WithMutation(m).
+			Build()
+		require.NoError(t, err)
+		assert.Len(t, res.base.Mutations, 1)
+		assert.Equal(t, "test-mutation", res.base.Mutations[0].Name)
+	})
+
+	t.Run("WithCustomFieldApplicator", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		applied := false
+		applicator := func(_ *appsv1.Deployment, _ *appsv1.Deployment) error {
+			applied = true
+			return nil
+		}
+		res, err := NewBuilder(deploy).
+			WithCustomFieldApplicator(applicator).
+			Build()
+		require.NoError(t, err)
+		require.NotNil(t, res.base.CustomFieldApplicator)
+		_ = res.base.CustomFieldApplicator(nil, nil)
+		assert.True(t, applied)
+	})
+
+	t.Run("WithFieldApplicationFlavor", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		res, err := NewBuilder(deploy).
+			WithFieldApplicationFlavor(PreserveCurrentLabels).
+			WithFieldApplicationFlavor(nil).
+			Build()
+		require.NoError(t, err)
+		assert.Len(t, res.base.FieldFlavors, 1)
+	})
+
+	t.Run("WithCustomConvergeStatus", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		handler := func(_ component.ConvergingOperation, _ *appsv1.Deployment) (component.ConvergingStatusWithReason, error) {
+			return component.ConvergingStatusWithReason{Status: component.ConvergingStatusUpdating}, nil
+		}
+		res, err := NewBuilder(deploy).
+			WithCustomConvergeStatus(handler).
+			Build()
+		require.NoError(t, err)
+		require.NotNil(t, res.base.ConvergingStatusHandler)
+		status, err := res.base.ConvergingStatusHandler(component.ConvergingOperationUpdated, nil)
+		require.NoError(t, err)
+		assert.Equal(t, component.ConvergingStatusUpdating, status.Status)
+	})
+
+	t.Run("WithCustomGraceStatus", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		handler := func(_ *appsv1.Deployment) (component.GraceStatusWithReason, error) {
+			return component.GraceStatusWithReason{Status: component.GraceStatusReady}, nil
+		}
+		res, err := NewBuilder(deploy).
+			WithCustomGraceStatus(handler).
+			Build()
+		require.NoError(t, err)
+		require.NotNil(t, res.base.GraceStatusHandler)
+		status, err := res.base.GraceStatusHandler(nil)
+		require.NoError(t, err)
+		assert.Equal(t, component.GraceStatusReady, status.Status)
+	})
+
+	t.Run("WithCustomSuspendStatus", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		handler := func(_ *appsv1.Deployment) (component.SuspensionStatusWithReason, error) {
+			return component.SuspensionStatusWithReason{Status: component.SuspensionStatusSuspended}, nil
+		}
+		res, err := NewBuilder(deploy).
+			WithCustomSuspendStatus(handler).
+			Build()
+		require.NoError(t, err)
+		require.NotNil(t, res.base.SuspendStatusHandler)
+		status, err := res.base.SuspendStatusHandler(nil)
+		require.NoError(t, err)
+		assert.Equal(t, component.SuspensionStatusSuspended, status.Status)
+	})
+
+	t.Run("WithCustomSuspendMutation", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		handler := func(_ *Mutator) error {
+			return errors.New("suspend error")
+		}
+		res, err := NewBuilder(deploy).
+			WithCustomSuspendMutation(handler).
+			Build()
+		require.NoError(t, err)
+		require.NotNil(t, res.base.SuspendMutationHandler)
+		err = res.base.SuspendMutationHandler(nil)
+		assert.EqualError(t, err, "suspend error")
+	})
+
+	t.Run("WithCustomSuspendDeletionDecision", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		handler := func(_ *appsv1.Deployment) bool {
+			return true
+		}
+		res, err := NewBuilder(deploy).
+			WithCustomSuspendDeletionDecision(handler).
+			Build()
+		require.NoError(t, err)
+		require.NotNil(t, res.base.DeleteOnSuspendHandler)
+		assert.True(t, res.base.DeleteOnSuspendHandler(nil))
+	})
+
+	t.Run("WithDataExtractor", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		called := false
+		extractor := func(_ appsv1.Deployment) error {
+			called = true
+			return nil
+		}
+		res, err := NewBuilder(deploy).
+			WithDataExtractor(extractor).
+			Build()
+		require.NoError(t, err)
+		assert.Len(t, res.base.DataExtractors, 1)
+		err = res.base.DataExtractors[0](&appsv1.Deployment{})
+		require.NoError(t, err)
+		assert.True(t, called)
+	})
+
+	t.Run("WithDataExtractor nil", func(t *testing.T) {
+		t.Parallel()
+		deploy := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+		res, err := NewBuilder(deploy).
+			WithDataExtractor(nil).
+			Build()
+		require.NoError(t, err)
+		assert.Len(t, res.base.DataExtractors, 0)
+	})
+}
diff --git a/pkg/primitives/deployment/flavors.go b/pkg/primitives/deployment/flavors.go
new file mode 100644
index 0000000..ea79a3c
--- /dev/null
+++ b/pkg/primitives/deployment/flavors.go
@@ -0,0 +1,48 @@
+package deployment
+
+import (
+	"github.com/sourcehawk/operator-component-framework/pkg/flavors"
+	"github.com/sourcehawk/operator-component-framework/pkg/flavors/utils"
+	appsv1 "k8s.io/api/apps/v1"
+)
+
+// FieldApplicationFlavor defines a function signature for applying "flavors" to a resource.
+// A flavor typically preserves certain fields from the current (live) object after the
+// baseline field application has occurred.
+type FieldApplicationFlavor flavors.FieldApplicationFlavor[*appsv1.Deployment]
+
+// PreserveCurrentLabels ensures that any labels present on the current live
+// Deployment but missing from the applied (desired) object are preserved.
+// If a label exists in both, the applied value wins.
+func PreserveCurrentLabels(applied, current, desired *appsv1.Deployment) error {
+	return flavors.PreserveCurrentLabels[*appsv1.Deployment]()(applied, current, desired)
+}
+
+// PreserveCurrentAnnotations ensures that any annotations present on the current
+// live Deployment but missing from the applied (desired) object are preserved.
+// If an annotation exists in both, the applied value wins.
+func PreserveCurrentAnnotations(applied, current, desired *appsv1.Deployment) error {
+	return flavors.PreserveCurrentAnnotations[*appsv1.Deployment]()(applied, current, desired)
+}
+
+// PreserveCurrentPodTemplateLabels ensures that any labels present on the
+// current live Deployment's pod template but missing from the applied
+// (desired) object's pod template are preserved.
+// If a label exists in both, the applied value wins.
+//
+// Note: pod template metadata changes may affect the rollout hash of the Deployment.
+func PreserveCurrentPodTemplateLabels(applied, current, _ *appsv1.Deployment) error {
+	applied.Spec.Template.Labels = utils.PreserveMap(applied.Spec.Template.Labels, current.Spec.Template.Labels)
+	return nil
+}
+
+// PreserveCurrentPodTemplateAnnotations ensures that any annotations present
+// on the current live Deployment's pod template but missing from the applied
+// (desired) object's pod template are preserved.
+// If an annotation exists in both, the applied value wins.
+//
+// Note: pod template metadata changes may affect the rollout hash of the Deployment.
+func PreserveCurrentPodTemplateAnnotations(applied, current, _ *appsv1.Deployment) error {
+	applied.Spec.Template.Annotations = utils.PreserveMap(applied.Spec.Template.Annotations, current.Spec.Template.Annotations)
+	return nil
+}
diff --git a/pkg/primitives/deployment/flavors_test.go b/pkg/primitives/deployment/flavors_test.go
new file mode 100644
index 0000000..0e1985e
--- /dev/null
+++ b/pkg/primitives/deployment/flavors_test.go
@@ -0,0 +1,146 @@
+package deployment
+
+import (
+	"errors"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func TestMutate_OrderingAndFlavors(t *testing.T) {
+	desired := &appsv1.Deployment{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-deploy",
+			Namespace: "test-ns",
+			Labels:    map[string]string{"app": "desired"},
+		},
+		Spec: appsv1.DeploymentSpec{
+			Replicas: ptrInt32(3),
+		},
+	}
+
+	t.Run("flavors run after baseline applicator", func(t *testing.T) {
+		current := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+				Labels:    map[string]string{"extra": "preserved"},
+			},
+		}
+
+		res, _ := NewBuilder(desired).
+			WithFieldApplicationFlavor(PreserveCurrentLabels).
+			Build()
+
+		err := res.Mutate(current)
+		require.NoError(t, err)
+
+		assert.Equal(t, "desired", current.Labels["app"])
+		assert.Equal(t, "preserved", current.Labels["extra"])
+	})
+
+	t.Run("flavors run in registration order", func(t *testing.T) {
+		current := &appsv1.Deployment{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-deploy",
+				Namespace: "test-ns",
+			},
+		}
+
+		var order []string
+		flavor1 := func(_, _, _ *appsv1.Deployment) error {
+			order = append(order, "flavor1")
+			return nil
+		}
+		flavor2 := func(_, _, _ *appsv1.Deployment) error {
+			order = append(order, "flavor2")
+			return nil
+		}
+
+		res, _ := NewBuilder(desired).
+			WithFieldApplicationFlavor(flavor1).
+			WithFieldApplicationFlavor(flavor2).
+			Build()
+
+		err := res.Mutate(current)
+		require.NoError(t, err)
+		assert.Equal(t, []string{"flavor1", "flavor2"}, order)
+	})
+
+	t.Run("flavor error is returned with context", func(t *testing.T) {
+		current := &appsv1.Deployment{}
+		flavorErr := errors.New("boom")
+		flavor := func(_, _, _ *appsv1.Deployment) error {
+			return flavorErr
+		}
+
+		res, _ := NewBuilder(desired).
+			WithFieldApplicationFlavor(flavor).
+			Build()
+
+		err := res.Mutate(current)
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "failed to apply field application flavor")
+		assert.True(t, errors.Is(err, flavorErr))
+	})
+}
+
+func TestDefaultFlavors(t *testing.T) {
+	t.Run("PreserveCurrentLabels", func(t *testing.T) {
+		applied := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"keep": "applied", "overlap": "applied"}}}
+		current := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"extra": "current", "overlap": "current"}}}
+
+		err := PreserveCurrentLabels(applied, current, nil)
+		require.NoError(t, err)
+		assert.Equal(t, "applied", applied.Labels["keep"])
+		assert.Equal(t, "applied", applied.Labels["overlap"])
+		assert.Equal(t, "current", applied.Labels["extra"])
+	})
+
+	t.Run("PreserveCurrentAnnotations", func(t *testing.T) {
+		applied := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"keep": "applied"}}}
+		current := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"extra": "current"}}}
+
+		err := PreserveCurrentAnnotations(applied, current, nil)
+		require.NoError(t, err)
+		assert.Equal(t, "applied", applied.Annotations["keep"])
+		assert.Equal(t, "current", applied.Annotations["extra"])
+	})
+
+	t.Run("PreserveCurrentPodTemplateLabels", func(t *testing.T) {
+		applied := &appsv1.Deployment{Spec: appsv1.DeploymentSpec{Template: corev1.PodTemplateSpec{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"keep": "applied"}}}}}
+		current := &appsv1.Deployment{Spec: appsv1.DeploymentSpec{Template: corev1.PodTemplateSpec{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"extra": "current"}}}}}
+
+		err := PreserveCurrentPodTemplateLabels(applied, current, nil)
+		require.NoError(t, err)
+		assert.Equal(t, "applied", applied.Spec.Template.Labels["keep"])
+		assert.Equal(t, "current", applied.Spec.Template.Labels["extra"])
+	})
+
+	t.Run("PreserveCurrentPodTemplateAnnotations", func(t *testing.T) {
+		applied := &appsv1.Deployment{Spec: appsv1.DeploymentSpec{Template: corev1.PodTemplateSpec{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"keep": "applied"}}}}}
+		current := &appsv1.Deployment{Spec: appsv1.DeploymentSpec{Template: corev1.PodTemplateSpec{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{"extra": "current"}}}}}
+
+		err := PreserveCurrentPodTemplateAnnotations(applied, current, nil)
+		require.NoError(t, err)
+		assert.Equal(t, "applied", applied.Spec.Template.Annotations["keep"])
+		assert.Equal(t, "current", applied.Spec.Template.Annotations["extra"])
+	})
+
+	t.Run("handles nil maps safely", func(t *testing.T) {
+		applied := &appsv1.Deployment{}
+		current := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"extra": "current"}}}
+
+		err := PreserveCurrentLabels(applied, current, nil)
+		require.NoError(t, err)
+		assert.Equal(t, "current", applied.Labels["extra"])
+	})
+}
+
+func ptrInt32(i int32) *int32 {
+	return &i
+}
diff --git a/pkg/primitives/deployment/handlers.go b/pkg/primitives/deployment/handlers.go
new file mode 100644
index 0000000..be00c92
--- /dev/null
+++ b/pkg/primitives/deployment/handlers.go
@@ -0,0 +1,111 @@
+package deployment
+
+import (
+	"fmt"
+
+	"github.com/sourcehawk/operator-component-framework/pkg/component"
+	appsv1 "k8s.io/api/apps/v1"
+)
+
+// DefaultConvergingStatusHandler is the default logic for determining if a Deployment has reached its desired state.
+//
+// It considers a Deployment ready when its Status.ReadyReplicas matches the Spec.Replicas (defaulting to 1 if nil).
+//
+// This function is used as the default handler by the Resource if no custom handler is registered via
+// Builder.WithCustomConvergeStatus. It can be reused within custom handlers to augment the default behavior.
+func DefaultConvergingStatusHandler(
+	op component.ConvergingOperation, deployment *appsv1.Deployment,
+) (component.ConvergingStatusWithReason, error) {
+	desiredReplicas := int32(1)
+	if deployment.Spec.Replicas != nil {
+		desiredReplicas = *deployment.Spec.Replicas
+	}
+
+	if deployment.Status.ReadyReplicas == desiredReplicas {
+		return component.ConvergingStatusWithReason{
+			Status: component.ConvergingStatusReady,
+			Reason: "All replicas are ready",
+		}, nil
+	}
+
+	var status component.ConvergingStatus
+	switch op {
+	case component.ConvergingOperationCreated:
+		status = component.ConvergingStatusCreating
+	case component.ConvergingOperationUpdated:
+		status = component.ConvergingStatusUpdating
+	default:
+		status = component.ConvergingStatusScaling
+	}
+
+	return component.ConvergingStatusWithReason{
+		Status: status,
+		Reason: fmt.Sprintf("Waiting for replicas: %d/%d ready", deployment.Status.ReadyReplicas, desiredReplicas),
+	}, nil
+}
+
+// DefaultGraceStatusHandler provides a default health assessment of the Deployment when it has not yet
+// reached full readiness.
+//
+// It categorizes the current state into:
+// - GraceStatusDegraded: At least one replica is ready, but the desired count is not met.
+// - GraceStatusDown: No replicas are ready.
+//
+// This function is used as the default handler by the Resource if no custom handler is registered via
+// Builder.WithCustomGraceStatus. It can be reused within custom handlers to augment the default behavior.
+func DefaultGraceStatusHandler(deployment *appsv1.Deployment) (component.GraceStatusWithReason, error) {
+	if deployment.Status.ReadyReplicas > 0 {
+		return component.GraceStatusWithReason{
+			Status: component.GraceStatusDegraded,
+			Reason: "Deployment partially available",
+		}, nil
+	}
+
+	return component.GraceStatusWithReason{
+		Status: component.GraceStatusDown,
+		Reason: "No replicas are ready",
+	}, nil
+}
+
+// DefaultDeleteOnSuspendHandler provides the default decision of whether to delete the Deployment
+// when the parent component is suspended.
+//
+// It always returns false, meaning the Deployment is kept in the cluster but scaled to zero replicas.
+//
+// This function is used as the default handler by the Resource if no custom handler is registered via
+// Builder.WithCustomSuspendDeletionDecision. It can be reused within custom handlers.
+func DefaultDeleteOnSuspendHandler(_ *appsv1.Deployment) bool {
+	return false
+}
+
+// DefaultSuspendMutationHandler provides the default mutation applied to a Deployment when the component is suspended.
+//
+// It scales the Deployment to zero replicas by setting Spec.Replicas to 0.
+//
+// This function is used as the default handler by the Resource if no custom handler is registered via
+// Builder.WithCustomSuspendMutation. It can be reused within custom handlers.
+func DefaultSuspendMutationHandler(mutator *Mutator) error {
+	mutator.EnsureReplicas(0)
+	return nil
+}
+
+// DefaultSuspensionStatusHandler monitors the progress of the suspension process.
+//
+// It reports whether the Deployment has successfully scaled down to zero replicas
+// by checking if Status.Replicas is 0.
+//
+// This function is used as the default handler by the Resource if no custom handler is registered via
+// Builder.WithCustomSuspendStatus. It can be reused within custom handlers.
+func DefaultSuspensionStatusHandler(deployment *appsv1.Deployment) (component.SuspensionStatusWithReason, error) { + if deployment.Status.Replicas == 0 { + return component.SuspensionStatusWithReason{ + Status: component.SuspensionStatusSuspended, + Reason: "Deployment scaled to zero", + }, nil + } + + return component.SuspensionStatusWithReason{ + Status: component.SuspensionStatusSuspending, + Reason: fmt.Sprintf("Waiting for replicas to scale down, %d replicas still running.", deployment.Status.Replicas), + }, nil +} diff --git a/pkg/primitives/deployment/handlers_test.go b/pkg/primitives/deployment/handlers_test.go new file mode 100644 index 0000000..66ced38 --- /dev/null +++ b/pkg/primitives/deployment/handlers_test.go @@ -0,0 +1,183 @@ +package deployment + +import ( + "testing" + + "github.com/sourcehawk/operator-component-framework/pkg/component" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + appsv1 "k8s.io/api/apps/v1" + "k8s.io/utils/ptr" +) + +func TestDefaultConvergingStatusHandler(t *testing.T) { + tests := []struct { + name string + op component.ConvergingOperation + deployment *appsv1.Deployment + wantStatus component.ConvergingStatus + wantReason string + }{ + { + name: "ready with 1 replica (default)", + op: component.ConvergingOperationUpdated, + deployment: &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{}, + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 1, + }, + }, + wantStatus: component.ConvergingStatusReady, + wantReason: "All replicas are ready", + }, + { + name: "ready with custom replicas", + op: component.ConvergingOperationUpdated, + deployment: &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 3, + }, + }, + wantStatus: component.ConvergingStatusReady, + wantReason: "All replicas are ready", + }, + { + name: "creating", + op: component.ConvergingOperationCreated, + deployment: &appsv1.Deployment{ + Spec: 
appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 1, + }, + }, + wantStatus: component.ConvergingStatusCreating, + wantReason: "Waiting for replicas: 1/3 ready", + }, + { + name: "updating", + op: component.ConvergingOperationUpdated, + deployment: &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 1, + }, + }, + wantStatus: component.ConvergingStatusUpdating, + wantReason: "Waiting for replicas: 1/3 ready", + }, + { + name: "scaling", + op: component.ConvergingOperation("Scaling"), + deployment: &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 1, + }, + }, + wantStatus: component.ConvergingStatusScaling, + wantReason: "Waiting for replicas: 1/3 ready", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := DefaultConvergingStatusHandler(tt.op, tt.deployment) + require.NoError(t, err) + assert.Equal(t, tt.wantStatus, got.Status) + assert.Equal(t, tt.wantReason, got.Reason) + }) + } +} + +func TestDefaultGraceStatusHandler(t *testing.T) { + t.Run("degraded (some ready)", func(t *testing.T) { + deployment := &appsv1.Deployment{ + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 1, + }, + } + got, err := DefaultGraceStatusHandler(deployment) + require.NoError(t, err) + assert.Equal(t, component.GraceStatusDegraded, got.Status) + assert.Equal(t, "Deployment partially available", got.Reason) + }) + + t.Run("down (none ready)", func(t *testing.T) { + deployment := &appsv1.Deployment{ + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 0, + }, + } + got, err := DefaultGraceStatusHandler(deployment) + require.NoError(t, err) + assert.Equal(t, component.GraceStatusDown, got.Status) + assert.Equal(t, "No replicas are ready", got.Reason) + }) +} + +func TestDefaultDeleteOnSuspendHandler(t *testing.T) { + 
deploy := &appsv1.Deployment{} + assert.False(t, DefaultDeleteOnSuspendHandler(deploy)) +} + +func TestDefaultSuspendMutationHandler(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + } + mutator := NewMutator(deploy) + err := DefaultSuspendMutationHandler(mutator) + require.NoError(t, err) + err = mutator.Apply() + require.NoError(t, err) + assert.Equal(t, int32(0), *deploy.Spec.Replicas) +} + +func TestDefaultSuspensionStatusHandler(t *testing.T) { + tests := []struct { + name string + deployment *appsv1.Deployment + wantStatus component.SuspensionStatus + wantReason string + }{ + { + name: "suspended", + deployment: &appsv1.Deployment{ + Status: appsv1.DeploymentStatus{ + Replicas: 0, + }, + }, + wantStatus: component.SuspensionStatusSuspended, + wantReason: "Deployment scaled to zero", + }, + { + name: "suspending", + deployment: &appsv1.Deployment{ + Status: appsv1.DeploymentStatus{ + Replicas: 2, + }, + }, + wantStatus: component.SuspensionStatusSuspending, + wantReason: "Waiting for replicas to scale down, 2 replicas still running.", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := DefaultSuspensionStatusHandler(tt.deployment) + require.NoError(t, err) + assert.Equal(t, tt.wantStatus, got.Status) + assert.Equal(t, tt.wantReason, got.Reason) + }) + } +} diff --git a/pkg/primitives/deployment/mutator.go b/pkg/primitives/deployment/mutator.go new file mode 100644 index 0000000..b3bc07e --- /dev/null +++ b/pkg/primitives/deployment/mutator.go @@ -0,0 +1,465 @@ +package deployment + +import ( + "github.com/sourcehawk/operator-component-framework/pkg/feature" + "github.com/sourcehawk/operator-component-framework/pkg/mutation/editors" + "github.com/sourcehawk/operator-component-framework/pkg/mutation/selectors" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" +) + +// Mutation defines a mutation that is applied to a deployment Mutator +// only if 
its associated feature.ResourceFeature is enabled.
+type Mutation feature.Mutation[*Mutator]
+
+type containerEdit struct {
+	selector selectors.ContainerSelector
+	edit     func(*editors.ContainerEditor) error
+}
+
+type containerPresenceOp struct {
+	name      string
+	container *corev1.Container // nil for remove
+}
+
+type featurePlan struct {
+	deploymentMetadataEdits  []func(*editors.ObjectMetaEditor) error
+	deploymentSpecEdits      []func(*editors.DeploymentSpecEditor) error
+	podTemplateMetadataEdits []func(*editors.ObjectMetaEditor) error
+	podSpecEdits             []func(*editors.PodSpecEditor) error
+	containerPresence        []containerPresenceOp
+	containerEdits           []containerEdit
+	initContainerPresence    []containerPresenceOp
+	initContainerEdits       []containerEdit
+}
+
+// Mutator is a high-level helper for modifying a Kubernetes Deployment.
+//
+// It uses a "plan-and-apply" pattern: mutations are recorded first, and then
+// applied to the Deployment in a single controlled pass when Apply() is called.
+//
+// This approach ensures that mutations are applied consistently and minimizes
+// repeated scans of the underlying Kubernetes structures.
+//
+// The Mutator maintains feature boundaries: each feature's mutations are planned
+// together and applied in the order the features were registered.
+type Mutator struct {
+	current *appsv1.Deployment
+
+	plans  []featurePlan
+	active *featurePlan
+}
+
+// NewMutator creates a new Mutator for the given Deployment.
+//
+// It is typically used within a Feature's Mutation logic to express desired
+// changes to the Deployment.
+func NewMutator(current *appsv1.Deployment) *Mutator {
+	m := &Mutator{
+		current: current,
+	}
+	m.BeginFeature()
+	return m
+}
+
+// BeginFeature starts a new feature planning scope. All subsequent mutation
+// registrations will be grouped into this feature's plan until another
+// BeginFeature is called.
+// +// This is used to ensure that mutations from different features are applied +// in registration order while maintaining internal category ordering within +// each feature. +func (m *Mutator) BeginFeature() { + m.plans = append(m.plans, featurePlan{}) + m.active = &m.plans[len(m.plans)-1] +} + +// EditContainers records a mutation for containers matching the given selector. +// +// Planning: +// All container edits are stored and executed during Apply(). +// +// Execution Order: +// - Within a feature, edits are applied in registration order. +// - Overall, container edits are executed AFTER container presence operations within the same feature. +// +// Selection: +// - The selector determines which containers the edit function will be called for. +// - If either selector or edit function is nil, the registration is ignored. +// - Selectors are intended to target containers defined by the baseline resource structure or added by earlier presence operations. +// - Selector matching is evaluated against a snapshot taken after the current feature's container presence operations are applied. +// - Mutations should not rely on earlier edits in the SAME feature phase changing which selectors match. +func (m *Mutator) EditContainers(selector selectors.ContainerSelector, edit func(*editors.ContainerEditor) error) { + if selector == nil || edit == nil { + return + } + m.active.containerEdits = append(m.active.containerEdits, containerEdit{ + selector: selector, + edit: edit, + }) +} + +// EditInitContainers records a mutation for init containers matching the given selector. +// +// Planning: +// All init container edits are stored and executed during Apply(). +// +// Execution Order: +// - Within a feature, edits are applied in registration order. +// - Overall, init container edits apply only to spec.template.spec.initContainers. +// - They run in their own category during Apply(), after init container presence operations within the same feature. 
+// +// Selection: +// - The selector determines which init containers the edit function will be called for. +// - If either selector or edit function is nil, the registration is ignored. +// - Selector matching is evaluated against a snapshot taken after the current feature's init container presence operations are applied. +func (m *Mutator) EditInitContainers(selector selectors.ContainerSelector, edit func(*editors.ContainerEditor) error) { + if selector == nil || edit == nil { + return + } + m.active.initContainerEdits = append(m.active.initContainerEdits, containerEdit{ + selector: selector, + edit: edit, + }) +} + +// EnsureContainer records that a regular container must be present in the Deployment. +// If a container with the same name exists, it is replaced; otherwise, it is appended. +func (m *Mutator) EnsureContainer(container corev1.Container) { + m.active.containerPresence = append(m.active.containerPresence, containerPresenceOp{ + name: container.Name, + container: &container, + }) +} + +// RemoveContainer records that a regular container should be removed by name. +func (m *Mutator) RemoveContainer(name string) { + m.active.containerPresence = append(m.active.containerPresence, containerPresenceOp{ + name: name, + container: nil, + }) +} + +// RemoveContainers records that multiple regular containers should be removed by name. +func (m *Mutator) RemoveContainers(names []string) { + for _, name := range names { + m.RemoveContainer(name) + } +} + +// EnsureInitContainer records that an init container must be present in the Deployment. +// If an init container with the same name exists, it is replaced; otherwise, it is appended. +func (m *Mutator) EnsureInitContainer(container corev1.Container) { + m.active.initContainerPresence = append(m.active.initContainerPresence, containerPresenceOp{ + name: container.Name, + container: &container, + }) +} + +// RemoveInitContainer records that an init container should be removed by name. 
+func (m *Mutator) RemoveInitContainer(name string) {
+	m.active.initContainerPresence = append(m.active.initContainerPresence, containerPresenceOp{
+		name:      name,
+		container: nil,
+	})
+}
+
+// RemoveInitContainers records that multiple init containers should be removed by name.
+func (m *Mutator) RemoveInitContainers(names []string) {
+	for _, name := range names {
+		m.RemoveInitContainer(name)
+	}
+}
+
+// EditDeploymentSpec records a mutation for the Deployment's top-level spec.
+//
+// Planning:
+// All deployment spec edits are stored and executed during Apply().
+//
+// Execution Order:
+// - Within a feature, edits are applied in registration order.
+// - Overall, deployment spec edits are executed AFTER deployment-metadata edits but BEFORE pod template/spec/container edits within the same feature.
+//
+// If the edit function is nil, the registration is ignored.
+func (m *Mutator) EditDeploymentSpec(edit func(*editors.DeploymentSpecEditor) error) {
+	if edit == nil {
+		return
+	}
+	m.active.deploymentSpecEdits = append(m.active.deploymentSpecEdits, edit)
+}
+
+// EditPodSpec records a mutation for the Deployment's pod spec.
+//
+// Planning:
+// All pod spec edits are stored and executed during Apply().
+//
+// Execution Order:
+// - Within a feature, edits are applied in registration order.
+// - Overall, pod spec edits are executed AFTER metadata and deployment spec edits but BEFORE container edits within the same feature.
+//
+// If the edit function is nil, the registration is ignored.
+func (m *Mutator) EditPodSpec(edit func(*editors.PodSpecEditor) error) {
+	if edit == nil {
+		return
+	}
+	m.active.podSpecEdits = append(m.active.podSpecEdits, edit)
+}
+
+// EditPodTemplateMetadata records a mutation for the Deployment's pod template metadata.
+//
+// Planning:
+// All pod template metadata edits are stored and executed during Apply().
+//
+// Execution Order:
+// - Within a feature, edits are applied in registration order.
+// - Overall, pod template metadata edits are executed AFTER deployment metadata and deployment spec edits but BEFORE pod spec/container edits within the same feature.
+//
+// If the edit function is nil, the registration is ignored.
+func (m *Mutator) EditPodTemplateMetadata(edit func(*editors.ObjectMetaEditor) error) {
+	if edit == nil {
+		return
+	}
+	m.active.podTemplateMetadataEdits = append(m.active.podTemplateMetadataEdits, edit)
+}
+
+// EditDeploymentMetadata records a mutation for the Deployment's own metadata.
+//
+// Planning:
+// All deployment metadata edits are stored and executed during Apply().
+//
+// Execution Order:
+// - Within a feature, edits are applied in registration order.
+// - Overall, deployment metadata edits are executed FIRST within the feature, BEFORE deployment spec, pod template, pod spec, and container edits.
+//
+// If the edit function is nil, the registration is ignored.
+func (m *Mutator) EditDeploymentMetadata(edit func(*editors.ObjectMetaEditor) error) {
+	if edit == nil {
+		return
+	}
+	m.active.deploymentMetadataEdits = append(m.active.deploymentMetadataEdits, edit)
+}
+
+// EnsureReplicas records the desired number of replicas for the Deployment.
+//
+// This is a convenience wrapper over EditDeploymentSpec.
+func (m *Mutator) EnsureReplicas(replicas int32) {
+	m.EditDeploymentSpec(func(e *editors.DeploymentSpecEditor) error {
+		e.SetReplicas(replicas)
+		return nil
+	})
+}
+
+// EnsureContainerEnvVar records that an environment variable must be present
+// in all containers of the Deployment.
+//
+// This is a convenience wrapper over EditContainers.
+func (m *Mutator) EnsureContainerEnvVar(ev corev1.EnvVar) {
+	m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error {
+		e.EnsureEnvVar(ev)
+		return nil
+	})
+}
+
+// RemoveContainerEnvVar records that an environment variable should be
+// removed from all containers of the Deployment.
+//
+// This is a convenience wrapper over EditContainers.
+func (m *Mutator) RemoveContainerEnvVar(name string) { + m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error { + e.RemoveEnvVar(name) + return nil + }) +} + +// RemoveContainerEnvVars records that multiple environment variables should be +// removed from all containers of the Deployment. +// +// This is a convenience wrapper over EditContainers. +func (m *Mutator) RemoveContainerEnvVars(names []string) { + m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error { + e.RemoveEnvVars(names) + return nil + }) +} + +// EnsureContainerArg records that a command-line argument must be present +// in all containers of the Deployment. +// +// This is a convenience wrapper over EditContainers. +func (m *Mutator) EnsureContainerArg(arg string) { + m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error { + e.EnsureArg(arg) + return nil + }) +} + +// RemoveContainerArg records that a command-line argument should be +// removed from all containers of the Deployment. +// +// This is a convenience wrapper over EditContainers. +func (m *Mutator) RemoveContainerArg(arg string) { + m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error { + e.RemoveArg(arg) + return nil + }) +} + +// RemoveContainerArgs records that multiple command-line arguments should be +// removed from all containers of the Deployment. +// +// This is a convenience wrapper over EditContainers. +func (m *Mutator) RemoveContainerArgs(args []string) { + m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error { + e.RemoveArgs(args) + return nil + }) +} + +// Apply executes all recorded mutation intents on the underlying Deployment. +// +// Execution Order: +// Features are applied in the order they were registered. +// Within each feature, mutations are applied in this fixed category order: +// 1. Deployment metadata edits +// 2. DeploymentSpec edits +// 3. 
Pod template metadata edits +// 4. Pod spec edits +// 5. Regular container presence operations +// 6. Regular container edits +// 7. Init container presence operations +// 8. Init container edits +// +// Within each category of a single feature, edits are applied in their registration order. +// +// Selection & Identity: +// - Container selectors target containers in the state they are in at the start of that feature's +// container phase (after presence operations of the SAME feature have been applied). +// - Selector matching within a phase is evaluated against a snapshot of containers at the start +// of that phase, not the progressively mutated live containers. +// - Later features observe the Deployment as modified by all previous features. +// +// Timing: +// No changes are made to the Deployment until Apply() is called. +// Selectors and edit functions are executed during this pass. +func (m *Mutator) Apply() error { + for _, plan := range m.plans { + // 1. Deployment metadata + if len(plan.deploymentMetadataEdits) > 0 { + editor := editors.NewObjectMetaEditor(&m.current.ObjectMeta) + for _, edit := range plan.deploymentMetadataEdits { + if err := edit(editor); err != nil { + return err + } + } + } + + // 2. DeploymentSpec + if len(plan.deploymentSpecEdits) > 0 { + editor := editors.NewDeploymentSpecEditor(&m.current.Spec) + for _, edit := range plan.deploymentSpecEdits { + if err := edit(editor); err != nil { + return err + } + } + } + + // 3. Pod template metadata + if len(plan.podTemplateMetadataEdits) > 0 { + editor := editors.NewObjectMetaEditor(&m.current.Spec.Template.ObjectMeta) + for _, edit := range plan.podTemplateMetadataEdits { + if err := edit(editor); err != nil { + return err + } + } + } + + // 4. Pod spec + if len(plan.podSpecEdits) > 0 { + editor := editors.NewPodSpecEditor(&m.current.Spec.Template.Spec) + for _, edit := range plan.podSpecEdits { + if err := edit(editor); err != nil { + return err + } + } + } + + // 5. 
Regular container presence + for _, op := range plan.containerPresence { + applyPresenceOp(&m.current.Spec.Template.Spec.Containers, op) + } + + // 6. Regular container edits + if len(plan.containerEdits) > 0 { + // Take snapshot of containers AFTER presence ops but BEFORE applying any edits for stable selector matching + snapshots := make([]corev1.Container, len(m.current.Spec.Template.Spec.Containers)) + for i := range m.current.Spec.Template.Spec.Containers { + m.current.Spec.Template.Spec.Containers[i].DeepCopyInto(&snapshots[i]) + } + + for i := range m.current.Spec.Template.Spec.Containers { + container := &m.current.Spec.Template.Spec.Containers[i] + snapshot := &snapshots[i] + editor := editors.NewContainerEditor(container) + for _, ce := range plan.containerEdits { + if ce.selector(i, snapshot) { + if err := ce.edit(editor); err != nil { + return err + } + } + } + } + } + + // 7. Init container presence + for _, op := range plan.initContainerPresence { + applyPresenceOp(&m.current.Spec.Template.Spec.InitContainers, op) + } + + // 8. 
Init container edits + if len(plan.initContainerEdits) > 0 { + // Take snapshot of init containers AFTER presence ops but BEFORE applying any edits + snapshots := make([]corev1.Container, len(m.current.Spec.Template.Spec.InitContainers)) + for i := range m.current.Spec.Template.Spec.InitContainers { + m.current.Spec.Template.Spec.InitContainers[i].DeepCopyInto(&snapshots[i]) + } + + for i := range m.current.Spec.Template.Spec.InitContainers { + container := &m.current.Spec.Template.Spec.InitContainers[i] + snapshot := &snapshots[i] + editor := editors.NewContainerEditor(container) + for _, ce := range plan.initContainerEdits { + if ce.selector(i, snapshot) { + if err := ce.edit(editor); err != nil { + return err + } + } + } + } + } + } + + return nil +} + +func applyPresenceOp(containers *[]corev1.Container, op containerPresenceOp) { + found := -1 + for i, c := range *containers { + if c.Name == op.name { + found = i + break + } + } + + if op.container == nil { + // Remove + if found != -1 { + *containers = append((*containers)[:found], (*containers)[found+1:]...) 
+ } + return + } + + // Ensure + if found != -1 { + (*containers)[found] = *op.container + } else { + *containers = append(*containers, *op.container) + } +} diff --git a/pkg/primitives/deployment/mutator_test.go b/pkg/primitives/deployment/mutator_test.go new file mode 100644 index 0000000..eec2f18 --- /dev/null +++ b/pkg/primitives/deployment/mutator_test.go @@ -0,0 +1,639 @@ +package deployment + +import ( + "errors" + "testing" + + "github.com/sourcehawk/operator-component-framework/pkg/mutation/editors" + "github.com/sourcehawk/operator-component-framework/pkg/mutation/selectors" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/utils/ptr" +) + +func TestMutator_EnvVars(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "main", + Env: []corev1.EnvVar{ + {Name: "KEEP", Value: "stay"}, + {Name: "CHANGE", Value: "old"}, + {Name: "REMOVE", Value: "gone"}, + }, + }, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + m.EnsureContainerEnvVar(corev1.EnvVar{Name: "CHANGE", Value: "new"}) + m.EnsureContainerEnvVar(corev1.EnvVar{Name: "ADD", Value: "added"}) + m.RemoveContainerEnvVars([]string{"REMOVE", "NONEXISTENT"}) + + err := m.Apply() + require.NoError(t, err) + + env := deploy.Spec.Template.Spec.Containers[0].Env + assert.Len(t, env, 3) + + findEnv := func(name string) *corev1.EnvVar { + for _, e := range env { + if e.Name == name { + return &e + } + } + return nil + } + + assert.NotNil(t, findEnv("KEEP")) + assert.Equal(t, "stay", findEnv("KEEP").Value) + assert.Equal(t, "new", findEnv("CHANGE").Value) + assert.Equal(t, "added", findEnv("ADD").Value) + assert.Nil(t, findEnv("REMOVE")) +} + +func TestMutator_Args(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: 
appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "main", + Args: []string{"--keep", "--change=old", "--remove"}, + }, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + m.EnsureContainerArg("--change=new") + m.EnsureContainerArg("--add") + m.RemoveContainerArgs([]string{"--remove", "--nonexistent"}) + + err := m.Apply() + require.NoError(t, err) + + args := deploy.Spec.Template.Spec.Containers[0].Args + assert.Contains(t, args, "--keep") + assert.Contains(t, args, "--change=old") + assert.Contains(t, args, "--change=new") + assert.Contains(t, args, "--add") + assert.NotContains(t, args, "--remove") +} + +func TestMutator_Replicas(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + } + + m := NewMutator(deploy) + m.EnsureReplicas(5) + + err := m.Apply() + require.NoError(t, err) + + assert.Equal(t, int32(5), *deploy.Spec.Replicas) +} + +func TestNewMutator(t *testing.T) { + deploy := &appsv1.Deployment{} + m := NewMutator(deploy) + assert.NotNil(t, m) + assert.Equal(t, deploy, m.current) +} + +func TestMutator_EditContainers(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "c1"}, + {Name: "c2"}, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + m.EditContainers(selectors.ContainerNamed("c1"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "c1-image" + return nil + }) + m.EditContainers(selectors.AllContainers(), func(e *editors.ContainerEditor) error { + e.EnsureEnvVar(corev1.EnvVar{Name: "GLOBAL", Value: "true"}) + return nil + }) + + err := m.Apply() + require.NoError(t, err) + + assert.Equal(t, "c1-image", deploy.Spec.Template.Spec.Containers[0].Image) + assert.Equal(t, "", deploy.Spec.Template.Spec.Containers[1].Image) + assert.Equal(t, "GLOBAL", 
deploy.Spec.Template.Spec.Containers[0].Env[0].Name) + assert.Equal(t, "GLOBAL", deploy.Spec.Template.Spec.Containers[1].Env[0].Name) +} + +func TestMutator_EditPodSpec(t *testing.T) { + deploy := &appsv1.Deployment{} + m := NewMutator(deploy) + m.EditPodSpec(func(e *editors.PodSpecEditor) error { + e.Raw().ServiceAccountName = "my-sa" + return nil + }) + + err := m.Apply() + require.NoError(t, err) + assert.Equal(t, "my-sa", deploy.Spec.Template.Spec.ServiceAccountName) +} + +func TestMutator_EditDeploymentSpec(t *testing.T) { + deploy := &appsv1.Deployment{} + m := NewMutator(deploy) + m.EditDeploymentSpec(func(e *editors.DeploymentSpecEditor) error { + e.SetPaused(true) + e.SetMinReadySeconds(10) + return nil + }) + + err := m.Apply() + require.NoError(t, err) + assert.True(t, deploy.Spec.Paused) + assert.Equal(t, int32(10), deploy.Spec.MinReadySeconds) +} + +func TestMutator_EditMetadata(t *testing.T) { + deploy := &appsv1.Deployment{} + m := NewMutator(deploy) + m.EditDeploymentMetadata(func(e *editors.ObjectMetaEditor) error { + e.Raw().Labels = map[string]string{"deploy": "label"} + return nil + }) + m.EditPodTemplateMetadata(func(e *editors.ObjectMetaEditor) error { + e.Raw().Annotations = map[string]string{"pod": "ann"} + return nil + }) + + err := m.Apply() + require.NoError(t, err) + assert.Equal(t, "label", deploy.Labels["deploy"]) + assert.Equal(t, "ann", deploy.Spec.Template.Annotations["pod"]) +} + +func TestMutator_Errors(t *testing.T) { + deploy := &appsv1.Deployment{} + m := NewMutator(deploy) + m.EditPodSpec(func(_ *editors.PodSpecEditor) error { + return errors.New("boom") + }) + + err := m.Apply() + assert.Error(t, err) + assert.Equal(t, "boom", err.Error()) +} + +func TestMutator_Order(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{"orig": "label"}, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: 
"main"}},
+				},
+			},
+		},
+	}
+
+	var order []string
+
+	m := NewMutator(deploy)
+	// 6. Container edits
+	m.EditContainers(selectors.AllContainers(), func(_ *editors.ContainerEditor) error {
+		order = append(order, "container")
+		return nil
+	})
+	// 5. Pod spec edits
+	m.EditPodSpec(func(_ *editors.PodSpecEditor) error {
+		order = append(order, "podspec")
+		return nil
+	})
+	// 4. Pod template metadata edits
+	m.EditPodTemplateMetadata(func(_ *editors.ObjectMetaEditor) error {
+		order = append(order, "podmeta")
+		return nil
+	})
+	// 3. Deployment spec edits
+	m.EditDeploymentSpec(func(_ *editors.DeploymentSpecEditor) error {
+		order = append(order, "depspec")
+		return nil
+	})
+	// 2. Deployment metadata edits
+	m.EditDeploymentMetadata(func(_ *editors.ObjectMetaEditor) error {
+		order = append(order, "depmeta")
+		return nil
+	})
+	// 1. Replicas (now via depspec)
+	m.EnsureReplicas(3)
+
+	err := m.Apply()
+	require.NoError(t, err)
+
+	// Verify order: depmeta -> depspec -> podmeta -> podspec -> container.
+	// EnsureReplicas registers a second depspec edit, but it does not append to
+	// order, so only the five explicit callbacks are recorded.
+ expected := []string{"depmeta", "depspec", "podmeta", "podspec", "container"} + assert.Equal(t, expected, order) + assert.Equal(t, int32(3), *deploy.Spec.Replicas) +} + +func TestMutator_InitContainers(t *testing.T) { + const newImage = "new-image" + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + InitContainers: []corev1.Container{ + {Name: "init-1", Image: "old-image"}, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + m.EditInitContainers(selectors.ContainerNamed("init-1"), func(e *editors.ContainerEditor) error { + e.Raw().Image = newImage + return nil + }) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + if deploy.Spec.Template.Spec.InitContainers[0].Image != newImage { + t.Errorf("expected image %s, got %s", newImage, deploy.Spec.Template.Spec.InitContainers[0].Image) + } +} + +func TestMutator_ContainerPresence(t *testing.T) { + const newImage = "new-image" + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "app", Image: "app-image"}, + {Name: "sidecar", Image: "sidecar-image"}, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + // Replace + m.EnsureContainer(corev1.Container{Name: "app", Image: "app-new-image"}) + // Remove + m.RemoveContainer("sidecar") + // Append + m.EnsureContainer(corev1.Container{Name: "new-container", Image: newImage}) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + if len(deploy.Spec.Template.Spec.Containers) != 2 { + t.Fatalf("expected 2 containers, got %d", len(deploy.Spec.Template.Spec.Containers)) + } + + if deploy.Spec.Template.Spec.Containers[0].Name != "app" || deploy.Spec.Template.Spec.Containers[0].Image != "app-new-image" { + t.Errorf("unexpected container at index 0: %+v", deploy.Spec.Template.Spec.Containers[0]) + } + + if 
deploy.Spec.Template.Spec.Containers[1].Name != "new-container" || deploy.Spec.Template.Spec.Containers[1].Image != newImage { + t.Errorf("unexpected container at index 1: %+v", deploy.Spec.Template.Spec.Containers[1]) + } +} + +func TestMutator_InitContainerPresence(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + InitContainers: []corev1.Container{ + {Name: "init-1", Image: "init-1-image"}, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + m.EnsureInitContainer(corev1.Container{Name: "init-2", Image: "init-2-image"}) + m.RemoveInitContainers([]string{"init-1"}) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + if len(deploy.Spec.Template.Spec.InitContainers) != 1 { + t.Fatalf("expected 1 init container, got %d", len(deploy.Spec.Template.Spec.InitContainers)) + } + + if deploy.Spec.Template.Spec.InitContainers[0].Name != "init-2" { + t.Errorf("expected init-2, got %s", deploy.Spec.Template.Spec.InitContainers[0].Name) + } +} + +func TestMutator_SelectorSnapshotSemantics(t *testing.T) { + const appV2 = "app-v2" + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "app", Image: "app-image"}, + }, + }, + }, + }, + } + + m := NewMutator(deploy) + + // First edit renames the container + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + e.Raw().Name = appV2 + return nil + }) + + // Second edit should still match using "app" selector because of snapshot + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "app-image-updated" + return nil + }) + + // Third edit targeting "app-v2" should NOT match in this apply pass + m.EditContainers(selectors.ContainerNamed(appV2), func(e *editors.ContainerEditor) error { + e.Raw().Image = 
"should-not-be-set" + return nil + }) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + if deploy.Spec.Template.Spec.Containers[0].Name != appV2 { + t.Errorf("expected name %s, got %s", appV2, deploy.Spec.Template.Spec.Containers[0].Name) + } + + if deploy.Spec.Template.Spec.Containers[0].Image != "app-image-updated" { + t.Errorf("expected image app-image-updated, got %s", deploy.Spec.Template.Spec.Containers[0].Image) + } +} + +func TestMutator_Ordering_PresenceBeforeEdit(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{}, + }, + }, + }, + } + + m := NewMutator(deploy) + + // Register edit first + m.EditContainers(selectors.ContainerNamed("new-app"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "edited-image" + return nil + }) + + // Register presence later + m.EnsureContainer(corev1.Container{Name: "new-app", Image: "original-image"}) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + // It should work because presence happens before edits in Apply() + if len(deploy.Spec.Template.Spec.Containers) != 1 { + t.Fatalf("expected 1 container, got %d", len(deploy.Spec.Template.Spec.Containers)) + } + + if deploy.Spec.Template.Spec.Containers[0].Image != "edited-image" { + t.Errorf("expected edited-image, got %s", deploy.Spec.Template.Spec.Containers[0].Image) + } +} + +func TestMutator_NilSafety(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "main"}}, + }, + }, + }, + } + m := NewMutator(deploy) + + // These should all be no-ops and not panic + m.EditContainers(nil, func(_ *editors.ContainerEditor) error { return nil }) + m.EditContainers(selectors.AllContainers(), nil) + m.EditPodSpec(nil) + m.EditPodTemplateMetadata(nil) + 
m.EditDeploymentMetadata(nil) + m.EditDeploymentSpec(nil) + + err := m.Apply() + assert.NoError(t, err) +} + +func TestMutator_CrossFeatureOrdering(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To[int32](1), + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "app", Image: "v1"}}, + }, + }, + }, + } + + m := NewMutator(deploy) + + // Feature A: sets replicas to 2, image to v2 + m.BeginFeature() + m.EnsureReplicas(2) + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "v2" + return nil + }) + + // Feature B: sets replicas to 3, image to v3 + m.BeginFeature() + m.EnsureReplicas(3) + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "v3" + return nil + }) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + // Feature B should win + assert.Equal(t, int32(3), *deploy.Spec.Replicas) + assert.Equal(t, "v3", deploy.Spec.Template.Spec.Containers[0].Image) +} + +func TestMutator_WithinFeatureCategoryOrdering(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{Name: "original-name"}, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "app"}}, + }, + }, + }, + } + + m := NewMutator(deploy) + + var executionOrder []string + + // We register them in reverse order of expected execution + m.EditContainers(selectors.AllContainers(), func(_ *editors.ContainerEditor) error { + executionOrder = append(executionOrder, "container") + return nil + }) + m.EditPodSpec(func(_ *editors.PodSpecEditor) error { + executionOrder = append(executionOrder, "podspec") + return nil + }) + m.EditPodTemplateMetadata(func(_ *editors.ObjectMetaEditor) error { + executionOrder = append(executionOrder, "podmeta") + return nil + }) + m.EditDeploymentSpec(func(_ 
*editors.DeploymentSpecEditor) error { + executionOrder = append(executionOrder, "deploymentspec") + return nil + }) + m.EditDeploymentMetadata(func(_ *editors.ObjectMetaEditor) error { + executionOrder = append(executionOrder, "deploymentmeta") + return nil + }) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + expectedOrder := []string{ + "deploymentmeta", + "deploymentspec", + "podmeta", + "podspec", + "container", + } + assert.Equal(t, expectedOrder, executionOrder) +} + +func TestMutator_CrossFeatureVisibility(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "app"}}, + }, + }, + }, + } + + m := NewMutator(deploy) + + // Feature A renames container + m.BeginFeature() + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + e.Raw().Name = "app-v2" + return nil + }) + + // Feature B selects by the new name - this should work! + m.BeginFeature() + m.EditContainers(selectors.ContainerNamed("app-v2"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "v2-image" + return nil + }) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + assert.Equal(t, "app-v2", deploy.Spec.Template.Spec.Containers[0].Name) + assert.Equal(t, "v2-image", deploy.Spec.Template.Spec.Containers[0].Image) +} + +func TestMutator_InitContainer_OrderingAndSnapshots(t *testing.T) { + deploy := &appsv1.Deployment{ + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + InitContainers: []corev1.Container{}, + }, + }, + }, + } + + m := NewMutator(deploy) + + // 1. Add init-1 + m.EnsureInitContainer(corev1.Container{Name: "init-1", Image: "v1"}) + + // 2. 
Edit init-1 (it's present in the same feature's phase) + m.EditInitContainers(selectors.ContainerNamed("init-1"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "v1-edited" + return nil + }) + + // 3. Rename it inside the edit phase + m.EditInitContainers(selectors.ContainerNamed("init-1"), func(e *editors.ContainerEditor) error { + e.Raw().Name = "init-1-renamed" + return nil + }) + + // 4. Selector targeting "init-1" should still match because of snapshot in same phase + m.EditInitContainers(selectors.ContainerNamed("init-1"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "v1-final" + return nil + }) + + if err := m.Apply(); err != nil { + t.Fatalf("Apply failed: %v", err) + } + + require.Len(t, deploy.Spec.Template.Spec.InitContainers, 1) + assert.Equal(t, "init-1-renamed", deploy.Spec.Template.Spec.InitContainers[0].Name) + assert.Equal(t, "v1-final", deploy.Spec.Template.Spec.InitContainers[0].Image) +} diff --git a/pkg/primitives/deployment/resource.go b/pkg/primitives/deployment/resource.go new file mode 100644 index 0000000..b20a890 --- /dev/null +++ b/pkg/primitives/deployment/resource.go @@ -0,0 +1,146 @@ +package deployment + +import ( + "github.com/sourcehawk/operator-component-framework/internal/generic" + "github.com/sourcehawk/operator-component-framework/pkg/component" + appsv1 "k8s.io/api/apps/v1" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// DefaultFieldApplicator replaces current with a deep copy of desired. +func DefaultFieldApplicator(current, desired *appsv1.Deployment) error { + *current = *desired.DeepCopy() + return nil +} + +// Resource is a high-level abstraction for managing a Kubernetes Deployment within a controller's +// reconciliation loop. +// +// It implements several component interfaces to integrate with the operator-component-framework: +// - component.Resource: for basic identity and mutation behavior. +// - component.Alive: for health and readiness tracking. 
+// - component.Suspendable: for graceful scale-down or temporary deactivation.
+// - component.DataExtractable: for exporting information after successful reconciliation.
+//
+// This resource handles the lifecycle of a Deployment, including initial creation,
+// updates via feature mutations, and status monitoring.
+type Resource struct {
+	base *generic.WorkloadResource[*appsv1.Deployment, *Mutator]
+}
+
+// Identity returns a unique identifier for the Deployment in the format
+// "apps/v1/Deployment/<namespace>/<name>".
+//
+// This identifier is used by the framework's internal tracking and recording
+// mechanisms to distinguish this specific Deployment from other resources
+// managed by the same component.
+func (r *Resource) Identity() string {
+	return r.base.Identity()
+}
+
+// Object returns a copy of the underlying Kubernetes Deployment object.
+//
+// The returned object implements the client.Object interface, making it
+// fully compatible with controller-runtime's Client for operations like
+// Get, Create, Update, and Patch.
+//
+// This method is called by the framework to obtain the current state
+// of the resource before applying mutations.
+func (r *Resource) Object() (client.Object, error) {
+	return r.base.GetObject()
+}
+
+// Mutate transforms the current state of a Kubernetes Deployment into the desired state.
+//
+// The mutation process follows a specific order:
+// 1. Core State: The current object is reset to the desired base state, or
+// modified via a custom field applicator if one is configured.
+// 2. Feature Mutations: All registered feature-based mutations are applied,
+// allowing for granular, version-gated changes to the Deployment.
+// 3. Suspension: If the resource is in a suspending state, the suspension
+// logic (e.g., scaling to zero) is applied.
+//
+// This method is invoked by the framework during the "Update" phase of
+// reconciliation. It ensures that the in-memory object reflects all
+// configuration and feature requirements before it is sent to the API server.
+func (r *Resource) Mutate(current client.Object) error {
+	return r.base.Mutate(current)
+}
+
+// ConvergingStatus evaluates whether the Deployment has successfully reached its desired state.
+//
+// By default, it uses DefaultConvergingStatusHandler, which checks whether the number of ReadyReplicas
+// matches the desired replica count.
+//
+// The return value includes a descriptive status (Ready, Creating, Updating, or Scaling)
+// and a human-readable reason, which are used to update the component's conditions.
+//
+// When to use:
+// This is used by the framework after an Apply operation to determine whether the
+// reconciliation of this specific resource is complete or whether further waiting is required.
+func (r *Resource) ConvergingStatus(op component.ConvergingOperation) (component.ConvergingStatusWithReason, error) {
+	return r.base.ConvergingStatus(op)
+}
+
+// GraceStatus provides a health assessment of the Deployment when it has not yet
+// reached full readiness.
+//
+// By default, it uses DefaultGraceStatusHandler, which categorizes the current state into:
+// - GraceStatusDegraded: At least one replica is ready, but the desired count is not met.
+// - GraceStatusDown: No replicas are ready.
+//
+// This information is surfaced through the component's health reporting, allowing
+// operators to understand the severity of a rollout delay or failure.
+func (r *Resource) GraceStatus() (component.GraceStatusWithReason, error) {
+	return r.base.GraceStatus()
+}
+
+// DeleteOnSuspend determines whether the Deployment should be deleted from the
+// cluster when the parent component is suspended.
+//
+// By default, it uses DefaultDeleteOnSuspendHandler, which returns false, meaning
+// the Deployment is kept in the cluster but scaled to zero replicas. This preserves
+// the resource definition and history while stopping the workload.
+// +// A custom decision handler can be registered via the Builder to change this +// behavior based on the current state of the Deployment. +func (r *Resource) DeleteOnSuspend() bool { + return r.base.DeleteOnSuspend() +} + +// Suspend triggers the deactivation of the Deployment. +// +// It registers a mutation that will be executed during the next Mutate call. +// The default behavior uses DefaultSuspendMutationHandler to scale the Deployment +// to zero replicas, which effectively stops the application while keeping the +// Kubernetes resource intact. +// +// This is typically called by the framework when a component's .spec.suspended +// field is set to true. +func (r *Resource) Suspend() error { + return r.base.Suspend() +} + +// SuspensionStatus monitors the progress of the suspension process. +// +// By default, it uses DefaultSuspensionStatusHandler, which reports whether the +// Deployment has successfully scaled down to zero replicas or is still in the +// process of doing so. The framework uses this to determine when the component +// has reached a fully suspended state. +func (r *Resource) SuspensionStatus() (component.SuspensionStatusWithReason, error) { + return r.base.SuspensionStatus() +} + +// ExtractData executes registered data extraction functions to harvest information +// from the reconciled Deployment. +// +// This is called by the framework after a successful reconciliation of the +// resource. It allows the component to export details (like a generated name, +// assigned IP, or status fields) that might be needed by other resources or +// higher-level controllers. +// +// Data extractors are provided with a deep copy of the current Deployment to +// prevent accidental mutations during the extraction process. 
+func (r *Resource) ExtractData() error { + return r.base.ExtractData() +} diff --git a/pkg/primitives/deployment/resource_test.go b/pkg/primitives/deployment/resource_test.go new file mode 100644 index 0000000..064faee --- /dev/null +++ b/pkg/primitives/deployment/resource_test.go @@ -0,0 +1,416 @@ +package deployment + +import ( + "errors" + "testing" + + "github.com/sourcehawk/operator-component-framework/pkg/component" + "github.com/sourcehawk/operator-component-framework/pkg/feature" + "github.com/sourcehawk/operator-component-framework/pkg/mutation/editors" + "github.com/sourcehawk/operator-component-framework/pkg/mutation/selectors" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/mock" + "github.com/stretchr/testify/require" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/utils/ptr" +) + +func TestResource_Identity(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-deploy", + Namespace: "test-ns", + }, + } + res, _ := NewBuilder(deploy).Build() + + assert.Equal(t, "apps/v1/Deployment/test-ns/test-deploy", res.Identity()) +} + +func TestResource_Object(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-deploy", + Namespace: "test-ns", + }, + } + res, _ := NewBuilder(deploy).Build() + + obj, err := res.Object() + require.NoError(t, err) + + got, ok := obj.(*appsv1.Deployment) + require.True(t, ok) + assert.Equal(t, deploy.Name, got.Name) + assert.Equal(t, deploy.Namespace, got.Namespace) + + // Ensure it's a deep copy + got.Name = "changed" + assert.Equal(t, "test-deploy", deploy.Name) +} + +func TestResource_Mutate(t *testing.T) { + desired := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + Labels: map[string]string{"app": "test"}, + }, + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + Selector: &metav1.LabelSelector{ + 
MatchLabels: map[string]string{"app": "test"}, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{"app": "test"}, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "web", Image: "nginx"}, + }, + }, + }, + }, + } + + res, _ := NewBuilder(desired). + WithMutation(feature.Mutation[*Mutator]{ + Name: "test-mutation", + Feature: feature.NewResourceFeature("v1", nil).When(true), + Mutate: func(m *Mutator) error { + m.EnsureContainerEnvVar(corev1.EnvVar{Name: "FOO", Value: "BAR"}) + return nil + }, + }). + Build() + + current := &appsv1.Deployment{} + err := res.Mutate(current) + require.NoError(t, err) + + assert.Equal(t, int32(3), *current.Spec.Replicas) + assert.Equal(t, "test", current.Labels["app"]) + assert.Equal(t, "BAR", current.Spec.Template.Spec.Containers[0].Env[0].Value) +} + +func TestResource_Mutate_FeatureOrdering(t *testing.T) { + desired := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + }, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "app", Image: "v1"}, + }, + }, + }, + }, + } + + res, _ := NewBuilder(desired). + WithMutation(feature.Mutation[*Mutator]{ + Name: "feature-a", + Feature: feature.NewResourceFeature("v1", nil).When(true), + Mutate: func(m *Mutator) error { + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + e.Raw().Image = "v2" + return nil + }) + return nil + }, + }). 
+ WithMutation(feature.Mutation[*Mutator]{ + Name: "feature-b", + Feature: feature.NewResourceFeature("v1", nil).When(true), + Mutate: func(m *Mutator) error { + // This should see image "v2" if BeginFeature() is working correctly between mutations + m.EditContainers(selectors.ContainerNamed("app"), func(e *editors.ContainerEditor) error { + if e.Raw().Image == "v2" { + e.Raw().Image = "v3" + } + return nil + }) + return nil + }, + }). + Build() + + current := &appsv1.Deployment{} + err := res.Mutate(current) + require.NoError(t, err) + + assert.Equal(t, "v3", current.Spec.Template.Spec.Containers[0].Image) +} + +type mockHandlers struct { + mock.Mock +} + +func (m *mockHandlers) ConvergingStatus(op component.ConvergingOperation, d *appsv1.Deployment) (component.ConvergingStatusWithReason, error) { + args := m.Called(op, d) + return args.Get(0).(component.ConvergingStatusWithReason), args.Error(1) +} + +func (m *mockHandlers) GraceStatus(d *appsv1.Deployment) (component.GraceStatusWithReason, error) { + args := m.Called(d) + return args.Get(0).(component.GraceStatusWithReason), args.Error(1) +} + +func (m *mockHandlers) SuspensionStatus(d *appsv1.Deployment) (component.SuspensionStatusWithReason, error) { + args := m.Called(d) + return args.Get(0).(component.SuspensionStatusWithReason), args.Error(1) +} + +func (m *mockHandlers) Suspend(mut *Mutator) error { + args := m.Called(mut) + return args.Error(0) +} + +func (m *mockHandlers) DeleteOnSuspend(d *appsv1.Deployment) bool { + args := m.Called(d) + return args.Bool(0) +} + +func TestResource_Status(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + }, + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + Status: appsv1.DeploymentStatus{ + ReadyReplicas: 2, + Replicas: 3, + }, + } + + t.Run("ConvergingStatus calls handler", func(t *testing.T) { + m := &mockHandlers{} + statusReady := component.ConvergingStatusWithReason{Status: 
component.ConvergingStatusReady} + m.On("ConvergingStatus", component.ConvergingOperationUpdated, deploy).Return(statusReady, nil) + + res, _ := NewBuilder(deploy). + WithCustomConvergeStatus(m.ConvergingStatus). + Build() + + status, err := res.ConvergingStatus(component.ConvergingOperationUpdated) + require.NoError(t, err) + m.AssertExpectations(t) + assert.Equal(t, component.ConvergingStatusReady, status.Status) + }) + + t.Run("ConvergingStatus uses default", func(t *testing.T) { + res, err := NewBuilder(deploy).Build() + require.NoError(t, err) + status, err := res.ConvergingStatus(component.ConvergingOperationUpdated) + require.NoError(t, err) + assert.Equal(t, component.ConvergingStatusUpdating, status.Status) + }) + + t.Run("GraceStatus calls handler", func(t *testing.T) { + m := &mockHandlers{} + statusReady := component.GraceStatusWithReason{Status: component.GraceStatusReady} + m.On("GraceStatus", deploy).Return(statusReady, nil) + + res, _ := NewBuilder(deploy). + WithCustomGraceStatus(m.GraceStatus). + Build() + + status, err := res.GraceStatus() + require.NoError(t, err) + m.AssertExpectations(t) + assert.Equal(t, component.GraceStatusReady, status.Status) + }) + + t.Run("GraceStatus uses default", func(t *testing.T) { + res, err := NewBuilder(deploy).Build() + require.NoError(t, err) + status, err := res.GraceStatus() + require.NoError(t, err) + assert.Equal(t, component.GraceStatusDegraded, status.Status) + }) +} + +func TestResource_DeleteOnSuspend(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"}, + } + + t.Run("calls handler", func(t *testing.T) { + m := &mockHandlers{} + m.On("DeleteOnSuspend", deploy).Return(true) + + res, err := NewBuilder(deploy). + WithCustomSuspendDeletionDecision(m.DeleteOnSuspend). 
+ Build() + require.NoError(t, err) + assert.True(t, res.DeleteOnSuspend()) + m.AssertExpectations(t) + }) + + t.Run("uses default", func(t *testing.T) { + res, err := NewBuilder(deploy).Build() + require.NoError(t, err) + assert.False(t, res.DeleteOnSuspend()) + }) +} + +func TestResource_Suspend(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"}, + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + } + + t.Run("Suspend registers mutation and Mutate applies it using default handler", func(t *testing.T) { + res, err := NewBuilder(deploy).Build() + require.NoError(t, err) + err = res.Suspend() + require.NoError(t, err) + + current := deploy.DeepCopy() + err = res.Mutate(current) + require.NoError(t, err) + + assert.Equal(t, int32(0), *current.Spec.Replicas) + }) + + t.Run("Suspend uses custom mutation handler", func(t *testing.T) { + m := &mockHandlers{} + m.On("Suspend", mock.Anything).Return(nil).Run(func(args mock.Arguments) { + mut := args.Get(0).(*Mutator) + mut.EnsureReplicas(1) + }) + + res, err := NewBuilder(deploy). + WithCustomSuspendMutation(m.Suspend). + Build() + require.NoError(t, err) + err = res.Suspend() + require.NoError(t, err) + + current := deploy.DeepCopy() + err = res.Mutate(current) + require.NoError(t, err) + + m.AssertExpectations(t) + assert.Equal(t, int32(1), *current.Spec.Replicas) + }) +} + +func TestResource_SuspensionStatus(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"}, + Status: appsv1.DeploymentStatus{ + Replicas: 0, + }, + } + + t.Run("calls handler", func(t *testing.T) { + m := &mockHandlers{} + statusSuspended := component.SuspensionStatusWithReason{Status: component.SuspensionStatusSuspended} + m.On("SuspensionStatus", deploy).Return(statusSuspended, nil) + + res, err := NewBuilder(deploy). + WithCustomSuspendStatus(m.SuspensionStatus). 
+ Build() + require.NoError(t, err) + status, err := res.SuspensionStatus() + require.NoError(t, err) + m.AssertExpectations(t) + assert.Equal(t, component.SuspensionStatusSuspended, status.Status) + }) + + t.Run("uses default", func(t *testing.T) { + res, err := NewBuilder(deploy).Build() + require.NoError(t, err) + status, err := res.SuspensionStatus() + require.NoError(t, err) + assert.Equal(t, component.SuspensionStatusSuspended, status.Status) + }) +} + +func TestResource_ExtractData(t *testing.T) { + deploy := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"}, + Spec: appsv1.DeploymentSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "web", Image: "nginx:latest"}}, + }, + }, + }, + } + + extractedImage := "" + res, err := NewBuilder(deploy). + WithDataExtractor(func(d appsv1.Deployment) error { + extractedImage = d.Spec.Template.Spec.Containers[0].Image + return nil + }). + Build() + require.NoError(t, err) + + err = res.ExtractData() + require.NoError(t, err) + assert.Equal(t, "nginx:latest", extractedImage) +} + +func TestResource_CustomFieldApplicator(t *testing.T) { + desired := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + Namespace: "default", + Labels: map[string]string{"app": "test"}, + }, + Spec: appsv1.DeploymentSpec{ + Replicas: ptr.To(int32(3)), + }, + } + + applicatorCalled := false + res, _ := NewBuilder(desired). + WithCustomFieldApplicator(func(current *appsv1.Deployment, desired *appsv1.Deployment) error { + applicatorCalled = true + current.Name = desired.Name + current.Namespace = desired.Namespace + // Only apply replicas, ignore labels + current.Spec.Replicas = desired.Spec.Replicas + return nil + }). 
+ Build() + + current := &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{"external": "label"}, + }, + } + err := res.Mutate(current) + require.NoError(t, err) + + assert.True(t, applicatorCalled) + assert.Equal(t, int32(3), *current.Spec.Replicas) + assert.Equal(t, "label", current.Labels["external"], "External label should be preserved") + assert.NotContains(t, current.Labels, "app", "Desired label should NOT be applied by custom applicator") + + t.Run("returns error", func(t *testing.T) { + res, _ := NewBuilder(desired). + WithCustomFieldApplicator(func(_ *appsv1.Deployment, _ *appsv1.Deployment) error { + return errors.New("applicator error") + }). + Build() + + err := res.Mutate(&appsv1.Deployment{}) + require.Error(t, err) + assert.Contains(t, err.Error(), "applicator error") + }) +}