EKS Declarative Automation
EKS Capabilities (announced November 2025) delivers popular open-source tools as AWS-managed services. It serves as the core infrastructure for declaratively deploying AIDLC Construction-phase outputs and continuously managing them through the Operations phase.
⚡ EKS Capabilities (2025.11) — AWS-managed K8s native tools

| Tool | Status | Description |
|---|---|---|
| Managed Argo CD | GA | AWS-managed GitOps |
| ACK (AWS Controllers for K8s) | GA | 50+ AWS service CRD management |
| KRO (K8s Resource Orchestrator) | Preview | ResourceGroup CRD composite resources |
| LBC v3 | GA | Gateway API GA support |
1. EKS Capabilities Overview
EKS Capabilities consists of five managed services:
- Managed Argo CD — GitOps-based continuous deployment
- ACK (AWS Controllers for Kubernetes) — Manage AWS resources as K8s CRDs
- KRO (Kubernetes Resource Orchestrator) — Orchestrate composite resources as single deployment units
- Gateway API (LBC v3) — L4/L7 traffic routing and advanced networking
- Node Readiness Controller — Declarative node readiness state management
These tools form the complete pipeline where Kiro-generated code is automatically deployed to EKS when pushed to Git, and AI Agents monitor and automatically respond in the Operations phase.
2. Managed Argo CD — GitOps Pattern
Managed Argo CD operates GitOps as a managed service on AWS infrastructure. When Kiro-generated code is pushed to Git, it's automatically deployed to EKS.
Core Concepts
- Application CRD: Declares single environment (e.g., production) deployment
- ApplicationSet: Automatically generates Applications for multiple environments (dev/staging/production)
- Self-healing: Automatically syncs when Git state and cluster state diverge
- Progressive Delivery: Automates canary/blue-green deployments
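As a minimal sketch of the Application CRD described above (the repo URL, path, and resource names are illustrative placeholders, not values from this project):

```yaml
# Illustrative Argo CD Application — repoURL, path, and names are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/payment-service.git  # placeholder repo
    targetRevision: main
    path: deploy/helm
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true  # self-healing: re-sync when cluster state drifts from Git
```

`syncPolicy.automated.selfHeal` is what implements the self-healing behavior listed above: Argo CD re-applies the Git state whenever the cluster drifts.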
AIDLC Integration
| Phase | Role |
|---|---|
| Construction | Kiro-generated Helm chart/Kustomize Git commit → Argo CD automatic deployment |
| Operations | AI Agent monitors deployment status, triggers automatic rollback on SLO violations |
3. ACK — AWS Resource CRD Management
ACK declaratively manages 50+ AWS services as K8s CRDs. Kiro-generated Domain Design infrastructure elements (DynamoDB, SQS, S3, etc.) are deployed with kubectl apply and naturally integrate into Argo CD's GitOps workflow.
Core Value
With ACK, AWS resources outside the cluster can also be managed with the K8s declarative model. Creating/modifying/deleting DynamoDB, SQS, S3, RDS, etc. as K8s CRDs is the strategy to "declaratively manage all infrastructure centered around K8s."
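For example, a DynamoDB table becomes just another manifest alongside Deployments and Services. A sketch using the ACK DynamoDB controller's Table CRD (the table and attribute names are illustrative):

```yaml
# Illustrative ACK DynamoDB Table — table and attribute names are placeholders
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: payment-events
spec:
  tableName: payment-events
  billingMode: PAY_PER_REQUEST
  attributeDefinitions:
    - attributeName: paymentId
      attributeType: S
  keySchema:
    - attributeName: paymentId
      keyType: HASH
```

Applied with `kubectl apply` (or synced by Argo CD), the controller creates the actual DynamoDB table and reconciles it against this spec.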
AIDLC Integration
- Inception: Analyze domain boundaries in DDD Integration → Identify ACK resource needs
- Construction: Kiro automatically generates ACK CRD manifests
- Operations: Monitor ACK resource status in Observability Stack
4. KRO — ResourceGroup Orchestration
KRO bundles multiple K8s resources into a single deployment unit (ResourceGroup). It directly maps to AIDLC's Deployment Unit concept, creating Deployment + Service + HPA + ACK resources as one Custom Resource.
Core Concepts
- ResourceGroup: Defines logical deployment unit (e.g., Payment Service = Deployment + Service + DynamoDB Table)
- Dependencies: Automatically manages resource dependencies (e.g., Deployment starts after DynamoDB Table creation)
- Rollback: Atomic rollback by ResourceGroup unit
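A hedged sketch of how a ResourceGroup might bundle the Payment Service example above. The `kro.run` API group and schema fields follow KRO's early preview shape and may differ across versions; all names and the container image are placeholders:

```yaml
# Illustrative KRO ResourceGroup — preview API, field names may change across versions
apiVersion: kro.run/v1alpha1
kind: ResourceGroup
metadata:
  name: payment-service
spec:
  schema:
    apiVersion: v1alpha1
    kind: PaymentService          # the composite CRD exposed to users
    spec:
      name: string
      replicas: integer
  resources:
    - id: table                   # ACK DynamoDB Table
      template:
        apiVersion: dynamodb.services.k8s.aws/v1alpha1
        kind: Table
        metadata:
          name: ${schema.spec.name}-table
        spec:
          tableName: ${schema.spec.name}-table
          billingMode: PAY_PER_REQUEST
          attributeDefinitions:
            - attributeName: id
              attributeType: S
          keySchema:
            - attributeName: id
              keyType: HASH
    - id: deployment              # references the table, so KRO orders it after table creation
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: example/payment:latest        # placeholder image
                  env:
                    - name: TABLE_NAME
                      value: ${table.spec.tableName}   # dependency expressed via reference
```

The `${table.spec.tableName}` reference is what lets KRO infer the dependency: the Deployment is only created after the DynamoDB Table exists.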
Mapping with DDD Aggregate
| DDD Concept | KRO Implementation |
|---|---|
| Aggregate Root | ResourceGroup CRD |
| Entity | Deployment, StatefulSet |
| Value Object | ConfigMap, Secret |
| Repository | ACK DynamoDB/RDS CRD |
5. Gateway API — L4/L7 Traffic Routing
AWS Load Balancer Controller v3 brings Gateway API support to GA, providing L4 (NLB) and L7 (ALB) routing, QUIC/HTTP3, JWT validation, and header transformation.
Gateway API Design Philosophy
Gateway API is designed role-oriented, allowing infrastructure operators, cluster operators, and application developers to manage traffic within their respective responsibilities.
| Resource | Owner | Responsibility |
|---|---|---|
| GatewayClass | Infrastructure Operator | Define load balancer type (ALB/NLB) |
| Gateway | Cluster Operator | Define listeners (port, TLS), namespace access control |
| HTTPRoute/GRPCRoute | Application Developer | Path-based routing, canary deployment, header transformation |
Supported Features (LBC v2.14+)
- L4 Routes (NLB, v2.13.3+)
  - TCPRoute, UDPRoute, TLSRoute
  - SNI-based TLS routing, QUIC/HTTP3 support
- L7 Routes (ALB, v2.14.0+)
  - HTTPRoute: path/header/query-based routing
  - GRPCRoute: gRPC method-based routing
- Advanced Features (Gateway API v1.4)
  - JWT validation (Gateway level)
  - Header transformation (RequestHeaderModifier, ResponseHeaderModifier)
  - Weight-based canary deployment
YAML Example (3-resource separation pattern)

```yaml
# GatewayClass — defined by the infrastructure operator
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: aws-alb
spec:
  controllerName: gateway.alb.aws.amazon.com/controller
---
# Gateway — defined by the cluster operator
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: payment-gateway
  namespace: production
spec:
  gatewayClassName: aws-alb
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
---
# HTTPRoute — defined by the application developer
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: payment-api-route
  namespace: production
spec:
  parentRefs:
    - name: payment-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api/v1/payments
      backendRefs:
        - name: payment-service-v1
          port: 8080
          weight: 90  # Canary deployment: v1 90%
        - name: payment-service-v2
          port: 8080
          weight: 10  # v2 10%
```
AIDLC Construction Phase Utilization
- Define API Routing Requirements in Kiro Spec
  - Specify requirements like "route 10% of traffic to v2 with a canary deployment" in requirements.md
  - Kiro automatically generates the HTTPRoute manifest
- Declarative Deployment via GitOps Workflow
  - Deploy Gateway and HTTPRoute with a single Git commit
  - Argo CD automatically syncs changes to EKS
  - LBC provisions the ALB/NLB and applies routing rules
- Integration with Operations Phase
  - Monitor each version's SLO with CloudWatch Application Signals
  - On SLO violations, the AI Agent automatically adjusts HTTPRoute weights to roll back
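Such a rollback needs nothing more than a Git commit that flips the canary weights back to v1. A sketch of the changed `backendRefs` fragment, reusing the service names from the earlier HTTPRoute example:

```yaml
# Rollback sketch: shift all traffic back to v1 by editing the HTTPRoute weights
backendRefs:
  - name: payment-service-v1
    port: 8080
    weight: 100  # rollback: v1 takes all traffic
  - name: payment-service-v2
    port: 8080
    weight: 0    # v2 drained
```

Because the change flows through Git, the rollback is auditable and Argo CD applies it like any other sync.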
Gateway API vs Ingress
Ingress defines all routing rules in a single resource, mixing infrastructure operator and developer responsibilities. Gateway API separates roles into GatewayClass (infrastructure), Gateway (cluster), and HTTPRoute (application), allowing each team to work independently. This aligns with AIDLC's Loss Function concept — validate at each layer to prevent error propagation.
References
- Kubernetes Gateway API v1.4 Release (2025-11-06)
- AWS Load Balancer Controller — Gateway API Docs
- Kubernetes Gateway API in Action (AWS Blog)
6. Node Readiness Controller — Declarative Node Readiness Management
Node Readiness Controller (NRC) is a controller that declaratively defines conditions a Kubernetes node must meet before accepting workloads. It's a key tool for expressing infrastructure requirements as code in the AIDLC Construction phase and automatically applying them via GitOps.
Core Concepts
NRC defines conditions that nodes must satisfy before transitioning to "Ready" state through the NodeReadinessRule CRD. Traditionally, node readiness was automatically determined by kubelet, but with NRC, you can declaratively inject application-specific requirements into the infrastructure layer.
- Declarative Policy: Define node readiness conditions in YAML as NodeReadinessRule
- GitOps Compatible: Version-control node readiness policies and deploy them automatically via Argo CD
- Workload Protection: Block scheduling until essential daemonsets (CNI, CSI, security agents) are ready
AIDLC Phase Utilization
| Phase | NRC Role | Example |
|---|---|---|
| Inception | AI analyzes workload requirements → Automatically defines necessary NodeReadinessRule | "GPU workloads schedule only after NVIDIA device plugin is ready" |
| Construction | Include NRC rules in Helm chart, deploy via Terraform EKS Blueprints AddOn | Kiro automatically generates NodeReadinessRule manifest |
| Operations | NRC automatically manages node readiness at runtime, AI analyzes rule effects | Track node readiness delay time with CloudWatch Application Signals |
Infrastructure as Code Perspective
NRC extends AIDLC's "infrastructure as code, test infrastructure too" principle to the node level.
- GitOps-Based Policy Management
  - Store NodeReadinessRule CRDs in a Git repository
  - Argo CD automatically syncs them to the EKS cluster
  - Apply policy changes to the entire cluster with a single Git commit
- Kiro + MCP Automation
  - Kiro parses workload requirements from design.md in the Inception phase
  - AI Coding Agent checks the current cluster daemonset status
  - Automatically generates the necessary NodeReadinessRule and adds it to the IaC repository
YAML Example: GPU Workload NodeReadinessRule

```yaml
apiVersion: node.k8s.io/v1alpha1
kind: NodeReadinessRule
metadata:
  name: gpu-node-readiness
  namespace: kube-system
spec:
  # Apply only to GPU nodes
  nodeSelector:
    matchLabels:
      node.kubernetes.io/instance-type: p4d.24xlarge
  # Don't transition the node to Ready until all of the following daemonsets are Ready
  requiredDaemonSets:
    - name: nvidia-device-plugin-daemonset
      namespace: kube-system
    - name: gpu-feature-discovery
      namespace: kube-system
    - name: dcgm-exporter
      namespace: monitoring
  # Timeout: keep the node NotReady if conditions are not met within 10 minutes
  timeout: 10m
```
Practical Use Cases
| Scenario | NRC Rule | Effect |
|---|---|---|
| Cilium CNI Cluster | Wait until Cilium agent is Ready | Prevent Pod scheduling before network initialization |
| GPU Cluster | Wait for NVIDIA device plugin + DCGM exporter readiness | Block workload scheduling before GPU resource exposure |
| Security-Hardened Environment | Wait for Falco, OPA Gatekeeper readiness | Prevent workload execution before security policy application |
| Storage Workload | Wait for EBS CSI driver + snapshot controller readiness | Prevent volume mount failures |
References
- Kubernetes Blog: Introducing Node Readiness Controller (2026-02-03)
- Node Readiness Controller GitHub Repository
7. MCP-Based IaC Automation
AWS announced the AWS Infrastructure as Code (IaC) MCP Server on November 28, 2025. It is a programmatic interface through which AI tools such as Kiro CLI can search CloudFormation and CDK documentation, automatically validate templates, and get AI-assisted deployment troubleshooting.
AWS IaC MCP Server Overview
AWS IaC MCP Server provides the following capabilities via Model Context Protocol:
- Documentation Search: Real-time search of CloudFormation resource types, CDK syntax, and best practices
- Template Validation: Automatically detect and suggest fixes for IaC template syntax errors
- Deployment Troubleshooting: Analyze root causes of stack deployment failures and provide solutions
- Programmatic Access: Native integration with AI tools like Kiro and Amazon Q Developer
AIDLC Construction Phase Integration
- Kiro Spec → IaC Code Generation Validation
  - Kiro generates CDK/Terraform/Helm code based on design.md from the Inception phase
  - IaC MCP Server automatically validates the generated code's syntax, resource constraints, and security policy compliance
  - For CloudFormation templates, pre-detects resource type typos, circular dependencies, and incorrect properties
- Pre-validate Compatibility with Existing Infrastructure
  - Integrates with EKS MCP Server and Cost Analysis MCP to analyze the current cluster state
  - Validates that new IaC code doesn't conflict with existing resources (VPC, subnets, security groups)
- Role as Loss Function
  - Blocks incorrect IaC code before production deployment
  - Verifies consistency between domain boundaries defined in DDD Integration and infrastructure requirements
References
- AWS DevOps Blog: Introducing the AWS IaC MCP Server (2025-11-28)
8. AIDLC Pipeline Integration
When EKS Capabilities are combined, all outputs generated by Kiro from Spec can be deployed across the entire stack with a single Git push. This is the core of the Construction → Operations transition.
🔧 IaC Automation Pipeline: Kiro → MCP → IaC → Argo CD

- Kiro Spec: requirements.md, design.md, tasks.md
- MCP Servers: EKS MCP, Cost MCP, AWS Docs MCP
- IaC Outputs: Terraform, Helm Chart, ACK CRD, KRO ResourceGroup
- Deployment: Git Repository → Managed Argo CD
🚀 AI/CD Pipeline Conceptual Diagram: Inception → Construction → Deploy
Core Principles
- Declarative: Define all infrastructure, application, and networking configurations in YAML/HCL
- GitOps: Use Git as Single Source of Truth
- Automation: Minimize manual intervention with Kiro + MCP + Argo CD
- Validation: Loss Function catches errors early at each stage
Summary
EKS Capabilities are the core infrastructure for declaratively automating AIDLC's Construction/Operations phases:
- Managed Argo CD: GitOps-based continuous deployment
- ACK: Manage AWS resources as K8s CRDs
- KRO: Orchestrate composite resources as single deployment units
- Gateway API: Role-separated traffic routing, aligns with AIDLC Loss Function
- Node Readiness Controller: Declarative node readiness state management
- IaC MCP Server: AI-based IaC code validation and troubleshooting
These tools form the complete pipeline where Kiro-generated code deploys the entire stack with a single Git push, and AI Agents automatically monitor and respond in the Operations phase.