The relentless pursuit of operational efficiency often collides with the imperative for scalable, resilient infrastructure. In 2026, enterprises leveraging Kubernetes on AWS confront a critical nexus: optimizing cloud spend while maintaining peak application performance. The legacy approaches to autoscaling, while foundational, frequently leave significant capital on the table, particularly as cloud costs continue their upward trajectory. This article delves into a comprehensive, multi-faceted strategy for Kubernetes autoscaling on AWS, designed to achieve maximum cost savings by 2026 standards, moving beyond reactive scaling to proactive, intelligent resource management. Readers will gain actionable insights into advanced tooling, configuration patterns, and strategic considerations essential for architects and lead engineers aiming to build truly cost-optimized Kubernetes environments.
Technical Fundamentals: Mastering Kubernetes Autoscaling for Cost Efficiency in 2026
Effective Kubernetes autoscaling on AWS in 2026 demands a sophisticated understanding of how various components interact, not just how they function in isolation. The goal is to ensure that compute resources precisely match demand, minimizing idle capacity while preventing performance degradation. This involves a delicate balance across three primary dimensions: pod scaling, node scaling, and event-driven scaling.
Pod-Level Autoscaling: Horizontal and Vertical Resource Management
- Horizontal Pod Autoscaler (HPA): The HPA remains the cornerstone of reactive pod scaling. It adjusts the number of pod replicas based on observed CPU utilization, memory utilization (supported since Kubernetes v1.8 and requiring the metrics-server), or custom/external metrics. For cost optimization, the key is not just whether to scale, but how aggressively and based on what metrics. In 2026, relying solely on CPU utilization is often insufficient. Custom metrics from application-specific queues (e.g., SQS queue depth via KEDA), request latency, or even database connection pools provide a more accurate signal for scaling business-critical workloads. This allows for proactive scaling before traditional resource metrics peak, reducing the need for over-provisioning baseline resources. Analogy: think of HPA as adding more cashiers (pods) to a supermarket checkout (workload) when the queue length (a custom metric like SQS queue depth) gets too long, rather than waiting for the existing cashiers to be overwhelmed (CPU utilization).
- Vertical Pod Autoscaler (VPA): VPA addresses the common problem of developers requesting generous, often exaggerated, CPU and memory limits for their pods. While HPA scales out, VPA scales a pod's resource requests and limits up or down. Its primary cost-saving utility comes from its "Recommender" mode, which observes pod usage patterns over time and suggests optimal resource requests and limits. Applying these recommendations can drastically reduce the amount of reserved but unused CPU and memory, thereby allowing more pods to fit on fewer, appropriately sized nodes. In 2026, VPA is often run in a non-enforcing "Recommender" mode, with operators or CI/CD pipelines applying the recommendations after review, to prevent unexpected restarts or resource reconfigurations during critical periods. The interaction between VPA and HPA is crucial: VPA should inform HPA about ideal per-pod resources, allowing HPA to make more accurate decisions on the number of pods.
Node-Level Autoscaling: Advanced Provisioners and Instance Optimization
The true frontier for cost savings in 2026 lies in optimizing the underlying compute infrastructure. Traditional Cluster Autoscaler (CAS) has served well, but it often struggles with heterogeneous node groups, rapid scaling events, and the full utilization of AWS-specific cost-saving mechanisms like Spot Instances and Graviton processors.
- Cluster Autoscaler (CAS): CAS watches for unschedulable pods and adds nodes to the cluster, or removes nodes that have been underutilized for a specified period. While reliable, CAS can be slower to react and less intelligent about node type selection, often provisioning from pre-defined ASG configurations. Its limitation in 2026 is its lack of inherent awareness of instance types beyond the configured Auto Scaling Group (ASG), and its less aggressive stance on consolidating workloads onto fewer nodes.
- Karpenter: Karpenter, an open-source, high-performance node provisioner built by AWS, has matured significantly by 2026 and is arguably the most impactful tool for cost optimization in this domain. Unlike CAS, Karpenter directly interfaces with the AWS EC2 API, provisioning the exact EC2 instance types required by pending pods, considering factors like architecture (x86, ARM64/Graviton), resource requests, and instance lifecycle (On-Demand, Spot).
- Cost Efficiency with Karpenter:
- Just-in-Time Provisioning: Launches nodes precisely when needed, minimizing idle time.
- Optimal Instance Selection: Dynamically selects the cheapest available instance type that satisfies pod requirements, including Spot Instances and Graviton processors.
- Aggressive Consolidation: Actively monitors nodes for underutilization and attempts to consolidate pods onto fewer nodes, terminating expensive, partially utilized instances. This is a game-changer compared to CAS's more conservative consolidation.
- Multi-Architecture Support: Seamlessly provisions ARM64 Graviton instances, which offer significant price/performance advantages, alongside x86 instances, simplifying the management of mixed-architecture clusters.
- AWS Compute Optimizer Integration: A newer trend in 2026 is the deeper integration of AWS Compute Optimizer with Kubernetes autoscaling. While not a direct autoscaling tool, it provides valuable recommendations on instance types based on historical workload performance. You can leverage these recommendations within your Karpenter NodePools to further fine-tune instance selection for optimal cost and performance. By feeding Compute Optimizer's insights into Karpenter, you ensure that even as your workloads evolve, your infrastructure remains rightsized.
- Predictive Scaling with Machine Learning: By late 2025 and early 2026, more sophisticated predictive scaling mechanisms have emerged. These leverage machine learning models to forecast future resource demands based on historical patterns, seasonality, and even external factors like marketing campaigns or economic indicators. Tools like StormForge Optimize Live or Dynatrace Davis can learn your application's behavior and predict resource needs, allowing for proactive scaling that avoids performance bottlenecks and minimizes over-provisioning. Integrating these predictions into your HPA or KEDA configurations ensures your cluster is always one step ahead of demand.
Event-Driven Autoscaling (KEDA)
Kubernetes Event-Driven Autoscaling (KEDA) extends HPA functionality to support a vast array of event sources beyond standard CPU/memory metrics. In 2026, KEDA is indispensable for applications with intermittent or asynchronous workloads, such as message queue processors, stream consumers, or cron jobs. By scaling to zero pods when no events are present and rapidly scaling up when demand surges, KEDA dramatically reduces operational costs for burstable or idle-prone services. Integrating KEDA with custom metrics and leveraging its ability to scale to zero (minReplicaCount: 0) provides a highly granular and cost-efficient scaling strategy.
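As a concrete illustration, here is a minimal ScaledObject sketch that scales a worker Deployment on SQS queue depth and down to zero when the queue is empty. It assumes KEDA is installed and that the operator has AWS credentials via IRSA; the queue URL, names, and thresholds are placeholders.
# keda-sqs-scaledobject.yaml (illustrative; names and queue URL are placeholders)
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-worker-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: sqs-worker # The Deployment consuming the queue (placeholder)
  minReplicaCount: 0 # Scale to zero when no messages are waiting
  maxReplicaCount: 50
  cooldownPeriod: 120 # Seconds to wait after the last event before scaling to zero
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/orders # Placeholder
        queueLength: "5" # Target messages per replica
        awsRegion: "us-east-1"
        identityOwner: operator # Use the KEDA operator's IAM role (IRSA)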
Practical Implementation: A Cost-Optimized Kubernetes Autoscaling Blueprint for 2026
This section outlines a practical blueprint for deploying and configuring an advanced autoscaling solution on AWS using Karpenter, HPA, and VPA, focusing on maximizing cost savings. We'll assume an existing EKS cluster.
1. Installing Karpenter (1.x)
First, install Karpenter and configure its IAM roles and policies. This setup grants Karpenter permissions to launch and manage EC2 instances.
# Set environment variables
export CLUSTER_NAME="your-eks-cluster-name"
export AWS_REGION="your-aws-region"
export KARPENTER_VERSION="1.0.0" # Pin to the current stable release; Karpenter has been GA at 1.x since late 2024
# Create the Karpenter service account with IRSA (assumes the
# KarpenterControllerPolicy-$CLUSTER_NAME IAM policy from the Karpenter
# getting-started guide already exists)
eksctl create iamserviceaccount \
--cluster $CLUSTER_NAME \
--namespace karpenter \
--name karpenter \
--role-name "$CLUSTER_NAME-karpenter" \
--attach-policy-arn "arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/KarpenterControllerPolicy-$CLUSTER_NAME" \
--override-existing-serviceaccounts \
--approve
# Install the Karpenter Helm chart
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version $KARPENTER_VERSION \
  --namespace karpenter --create-namespace \
  --set serviceAccount.create=false \
  --set serviceAccount.name=karpenter \
  --set settings.clusterName=$CLUSTER_NAME \
  --set settings.clusterEndpoint="$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.endpoint" --output text)" \
  --wait # Wait for the Karpenter deployment to be ready
Why this matters for cost savings: Karpenter requires granular IAM permissions to interact directly with EC2 and other AWS services. This allows it to make intelligent, cost-aware decisions about instance provisioning (e.g., requesting Spot instances, Graviton types).
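Before moving on, it is worth confirming the controller came up cleanly. A quick check with standard kubectl (the label selector assumes the Helm chart's default labels):
# Verify the Karpenter controller is running
kubectl get pods -n karpenter
# Tail the controller logs to catch configuration errors early
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter --tail=20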
2. Configuring Karpenter NodePools for Cost Savings
Karpenter uses NodePool resources (the successor to the legacy Provisioner API) to define how it should provision nodes, paired with an EC2NodeClass for AWS-specific settings. Here, we create a NodePool that prioritizes Spot Instances and Graviton (ARM64) architecture, and aggressively consolidates nodes.
# karpenter-nodepool-cost-optimized.yaml
# Uses the stable karpenter.sh/v1 APIs: NodePool replaced the legacy Provisioner,
# and EC2NodeClass replaced AWSNodeTemplate.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: cost-optimized
spec:
  template:
    spec:
      # For more fine-grained control, pods can target this NodePool via
      # nodeSelector or affinity on the karpenter.sh/nodepool label.
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64", "amd64"] # Prioritize Graviton (arm64) but allow amd64 fallback
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"] # Prefer Spot; fall back to On-Demand
        # Constrain to specific, cost-effective instance types, Graviton first
        - key: node.kubernetes.io/instance-type
          operator: In
          values:
            - c7g.medium # Graviton3 (arm64)
            - m7g.medium # Graviton3 (arm64)
            - r7g.medium # Graviton3 (arm64)
            - c6g.medium # Graviton2 (arm64)
            - m6g.medium # Graviton2 (arm64)
            - r6g.medium # Graviton2 (arm64)
            - c6i.large  # Intel (amd64)
            - m6i.large  # Intel (amd64)
            - r6i.large  # Intel (amd64)
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default # Refers to the EC2NodeClass below for AWS-specific configuration
      expireAfter: 720h # Recycle nodes after 30 days to pick up fresh AMIs
  # Node consolidation settings: aggressive termination of underutilized nodes
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 60s # Consider empty or underutilized nodes for termination quickly
  weight: 10 # Higher weight means higher preference if multiple NodePools match
  limits:
    cpu: "1000" # Max 1000 CPU cores from this NodePool
    memory: 2Ti # Max 2 TiB memory from this NodePool
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  # AMI selection: EKS-optimized AL2023 for cost and performance in 2026
  amiSelectorTerms:
    - alias: al2023@latest
  role: "KarpenterNodeRole-${CLUSTER_NAME}" # Node IAM role created during Karpenter setup
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # Auto-discover subnets tagged for the cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # Auto-discover security groups tagged for the cluster
  # Optionally, add tags for cost allocation and visibility
  # (Karpenter tags instances with karpenter.sh/nodepool automatically)
  tags:
    environment: "production"
    team: "devops"
kubectl apply -f karpenter-nodepool-cost-optimized.yaml
Why this matters for cost savings: This configuration explicitly tells Karpenter to:
- Prioritize arm64 (Graviton) instances due to their superior price/performance ratio.
- Prefer spot capacity for significant discounts, falling back to on-demand only when Spot capacity isn't available or suitable.
- Aggressively consolidate and terminate underutilized nodes, ensuring you only pay for what you absolutely need, even if nodes are only partially utilized for a short period.
- Target specific, cost-effective instance types (e.g., the c7g, m7g, and r7g families, which offer better price/performance than previous generations).
- Use AL2023 AMIs, which are lighter and more secure, improving boot times and reducing attack surface.
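If certain workloads should run only on Spot Graviton capacity, you can steer them with the well-known node labels Karpenter applies. A rough sketch (names and image are placeholders):
# spot-arm64-worker.yaml (illustrative; an interruption-tolerant workload pinned to Spot/arm64)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        karpenter.sh/capacity-type: spot # Only schedule onto Spot capacity
        kubernetes.io/arch: arm64        # Only schedule onto Graviton nodes
      containers:
        - name: worker
          image: busybox # Placeholder image
          command: ["sh", "-c", "sleep 3600"]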
3. Implementing Vertical Pod Autoscaler (VPA) in Recommender Mode
Deploy VPA if not already present. Then, for a specific deployment, define a VPA resource.
# vpa-recommender-mode.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
  namespace: default
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app-deployment # Target your specific deployment
  updatePolicy:
    # Set to "Off" to only get recommendations, not apply them automatically.
    # This is crucial for stability and allows manual review/CI/CD integration.
    updateMode: "Off"
  resourcePolicy:
    containerPolicies:
      - containerName: '*' # Apply to all containers in the target deployment
        minAllowed:
          cpu: "100m"
          memory: "100Mi"
        maxAllowed:
          cpu: "4"
          memory: "8Gi"
        # Control what resources VPA can recommend
        controlledResources: ["cpu", "memory"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  namespace: default
spec:
  replicas: 1 # Start with a baseline; HPA will scale this
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Initial resource requests - VPA will recommend changes over time
      containers:
        - name: my-app-container
          image: busybox # Replace with your actual application image
          command: ["sh", "-c", "echo 'Hello from my-app' && sleep 3600"] # Example command
          resources:
            requests:
              cpu: "200m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
kubectl apply -f vpa-recommender-mode.yaml
Why this matters for cost savings: VPA in Off mode provides data-driven insights into actual pod resource consumption. Over-provisioned requests lead to fewer pods fitting on a node, or to larger, more expensive nodes being provisioned. By continually observing VPA recommendations and adjusting requests and limits, you ensure pods only reserve what they genuinely need, reducing waste and allowing Karpenter to pack nodes more efficiently with smaller, cheaper instances.
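To inspect what the Recommender has produced for the VPA defined above, standard kubectl is enough (no extra tooling assumed):
# View the current recommendations for my-app-vpa
kubectl describe vpa my-app-vpa -n default
# Or extract just the recommendation block as JSON
kubectl get vpa my-app-vpa -n default -o jsonpath='{.status.recommendation}'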
4. Configuring Horizontal Pod Autoscaler (HPA) with Custom Metrics
For a highly dynamic application, combine HPA with custom metrics for proactive scaling. Here, we'll demonstrate a simple HPA targeting CPU, but in a real-world scenario, you'd integrate KEDA with external metrics (e.g., SQS queue length).
# hpa-custom-metrics.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-deployment # Target the same deployment as VPA
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60 # Target 60% CPU utilization
    # For custom metrics with KEDA:
    # - type: External
    #   external:
    #     metric:
    #       name: s3_event_queue_length
    #     target:
    #       type: AverageValue
    #       averageValue: 5 # Scale if average S3 event queue length exceeds 5
kubectl apply -f hpa-custom-metrics.yaml
Why this matters for cost savings: HPA dynamically adjusts pod counts to meet demand. Using custom metrics, especially with KEDA, allows for scaling based on actual business load rather than just reactive resource consumption. This prevents over-provisioning during idle times and ensures rapid scaling during peak, balancing performance with cost.
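One lever the manifest above omits is the autoscaling/v2 behavior stanza, which controls how quickly HPA adds and removes replicas. A sketch of the same HPA extended with a conservative scale-down window (the values are illustrative) helps avoid replica flapping that churns nodes:
# hpa-with-behavior.yaml (illustrative scale-down stabilization for the HPA above)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0 # React to spikes immediately
    scaleDown:
      stabilizationWindowSeconds: 300 # Wait 5 minutes before removing replicas
      policies:
        - type: Percent
          value: 50 # Remove at most half the replicas per minute
          periodSeconds: 60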
💡 Expert Tips: Advanced Strategies for Kubernetes Cost Optimization in 2026
Having deployed and managed production Kubernetes clusters across various industries, these are the nuanced strategies that differentiate truly cost-optimized environments.
- Embrace Graviton Natively (ARM64): By 2026, Graviton processors are no longer an experimental feature but a mature, first-class citizen on AWS. If your workloads support the ARM64 architecture (most modern applications with multi-arch Docker images do), migrating to Graviton instances can yield 20-40% price/performance improvements over comparable x86 instances. Configure your Karpenter NodePools and EC2NodeClasses to explicitly prioritize arm64 instance types, and test your applications thoroughly on ARM64 during development and CI/CD.
- Aggressive Spot Instance Utilization (90%+ Target): Don't just use Spot Instances for "non-critical" workloads. With robust Pod Disruption Budgets (PDBs), graceful termination hooks, and Karpenter's intelligent Spot instance selection and replacement, aim to run 90% or more of your worker nodes on Spot. Karpenter's consolidation and replacement logic significantly mitigates the risk of Spot interruptions. Ensure your applications are fault-tolerant and can gracefully handle pod evictions.
- Implement Cost Visibility and Allocation Tools: You can't optimize what you can't measure. Tools like OpenCost (the CNCF standard for Kubernetes cost monitoring) or Kubecost are essential; integrate them early. Tag your AWS resources (via the tags field in your EC2NodeClass) with environment and team tags (Karpenter adds a karpenter.sh/nodepool tag automatically). This provides granular cost allocation back to specific teams, services, or environments, driving accountability and identifying waste.
- Continuous VPA Recommendation Adoption: Don't just enable VPA Recommender mode and forget it. Integrate a process (manual review, or an automated CI/CD job) to regularly apply VPA recommendations for CPU and memory requests/limits. Even small adjustments across hundreds of pods add up to significant savings. Consider tools that automatically review and propose VPA changes in pull requests.
- Right-Sizing Pod Disruption Budgets (PDBs): PDBs are critical for balancing availability and cost. Set too strictly, they hinder Karpenter's ability to consolidate nodes and terminate expensive instances; set too loosely, they can lead to application downtime. Regularly review and fine-tune PDBs so they allow efficient node cycling while maintaining acceptable service levels. For mission-critical workloads on Spot instances, ensure your PDB keeps at least one replica available (see the sketch after this list).
- Leverage Cluster Shutdown and Scale-to-Zero for Non-Prod: For development and staging environments, use KEDA to scale deployments to zero when idle. Additionally, consider custom automation that scales down or even completely shuts down non-production EKS clusters during off-hours, spinning them up on demand. This can dramatically reduce costs for environments that are only active during business hours.
- Watch Out for "Zombie" Resources: Periodically audit your AWS account for unattached EBS volumes, unused load balancers (especially if you're using the Gateway API for ingress), and old AMIs. While not directly related to autoscaling, these are common hidden costs in Kubernetes environments that accumulate silently.
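As referenced in the PDB tip above, here is a minimal PodDisruptionBudget sketch for the my-app deployment from the implementation section (the values are illustrative):
# my-app-pdb.yaml (illustrative; keeps one replica up during voluntary disruptions)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: default
spec:
  minAvailable: 1 # Keep at least one replica running while nodes are drained
  selector:
    matchLabels:
      app: my-app # Matches the Deployment's pod labels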
Comparison: AWS Kubernetes Autoscaling Components (2026 Perspective)
This comparison highlights the role and cost implications of each key component in a modern, cost-optimized Kubernetes setup on AWS.
Horizontal Pod Autoscaler (HPA)
✅ Strengths
- Reactive Scaling: Scales pod replicas based on CPU, memory, or custom metrics, responding to immediate demand fluctuations.
- Integration: Seamlessly integrates with KEDA for event-driven scaling, broadening its applicability to asynchronous workloads.
- Maturity: A fundamental and well-understood component of Kubernetes autoscaling since early versions.
⚠️ Considerations
- Node Underutilization: Can lead to nodes being underutilized if pod resource requests are inflated and VPA is not in play.
- Slow Reactivity: Pure resource-based scaling (CPU/memory) is reactive rather than proactive, potentially leading to brief performance dips when metrics spike rapidly.
- Cost Blind: HPA itself has no awareness of underlying node costs or instance types.
Vertical Pod Autoscaler (VPA)
✅ Strengths
- Resource Rightsizing: Recommends optimal CPU and memory requests/limits for pods, eliminating guesswork and over-provisioning.
- Cost Reduction Potential: By accurately sizing pods, it allows more pods to fit onto fewer or smaller, cheaper nodes, reducing node count and increasing density.
- Data-Driven Decisions: Provides evidence-based recommendations, removing subjective developer estimations.
⚠️ Considerations
- Compatibility with HPA: Running VPA in Auto mode (which modifies resource requests and limits) can conflict with HPA's scaling decisions; it is often run in Off (Recommender) mode for this reason.
- Restart Impact: In Auto mode, VPA may restart pods to apply new resource allocations, which can impact application availability if not managed carefully.
- Initial Observation Period: Requires a period of observation to generate accurate recommendations.
Cluster Autoscaler (CAS)
✅ Strengths
- Node Provisioning: Scales the number of nodes in an Auto Scaling Group (ASG) based on unschedulable pods.
- Simplicity: Easier to set up for basic node scaling than Karpenter, especially when homogeneous node groups suffice.
- Battle-Tested: A mature and widely adopted component for traditional node autoscaling.
⚠️ Considerations
- Limited Intelligence: Relies on pre-defined ASGs; less dynamic in instance type selection, and often unable to select the cheapest available instance.
- Slow Consolidation: Generally less aggressive in consolidating nodes and terminating underutilized ones, potentially leaving idle capacity.
- Spot Instance Limitations: While ASGs can use Spot, CAS doesn't react to Spot interruptions by finding alternative instance types as efficiently as Karpenter.
Karpenter
✅ Strengths
- Cost-Optimized Provisioning: Dynamically selects the cheapest available EC2 instance type (including Spot and Graviton) that matches pod requirements.
- Rapid Scaling: Directly provisions EC2 instances, bypassing ASG warm-up times and leading to faster node launches.
- Aggressive Consolidation: Proactively identifies and terminates underutilized nodes, consolidating workloads onto fewer, cheaper instances, a major cost-saving feature.
- Heterogeneous Clusters: Effortlessly manages diverse node groups, including mixed architectures (x86/ARM64) and instance types.
⚠️ Considerations
- Complexity: Requires more detailed configuration of NodePools and EC2NodeClasses than CAS, especially for advanced use cases.
- Newer Technology: While mature by 2026, it requires understanding a provisioning model that differs significantly from traditional ASGs.
- Vendor Lock-In: Tightly coupled to AWS EC2 APIs, making migration to other clouds more involved if Karpenter-specific configurations are deeply embedded.
Kubernetes Event-Driven Autoscaling (KEDA)
✅ Strengths
- Event-Driven Scaling: Scales workloads based on custom metrics from external systems (queues, databases, message brokers, etc.), ideal for asynchronous tasks.
- Scale to Zero: Can scale deployments down to zero pods when no events are present, dramatically reducing costs for intermittent workloads.
- Extensibility: Supports a vast array of "scalers" for different event sources, allowing for highly specific and intelligent scaling policies.
⚠️ Considerations
- Dependency on External Metrics: Requires reliable access to, and configuration of, external monitoring systems.
- Increased Complexity: Adds another layer of abstraction and configuration to the autoscaling setup.
- Latency for Cold Starts: Scaling from zero can introduce a cold-start delay, which may be unacceptable for very low-latency requirements.
Frequently Asked Questions (FAQ)
Q1: Is VPA always compatible with HPA? How should I use them together for cost savings?
A1: VPA in "Auto" mode (which modifies resource requests) can conflict with HPA, as HPA relies on stable resource requests to calculate replica counts. For optimal cost savings and stability, run VPA in "Recommender" mode (updateMode: "Off"). This allows VPA to suggest optimal resource settings, which you then manually or via CI/CD pipelines apply to your deployments. HPA can then accurately scale based on these correctly sized pods, while Karpenter ensures optimal node provisioning.
Q2: How can I guarantee cost savings when using Spot Instances with Karpenter? What are the risks?
A2: Karpenter significantly improves Spot instance utilization by rapidly replacing interrupted instances and finding alternative capacity. To guarantee savings and minimize risk, focus on application resilience: ensure your pods have PodDisruptionBudgets (PDBs), implement graceful shutdown hooks, and design your applications to be stateless or capable of handling transient failures. While Spot instances offer substantial savings, the risk of interruption is inherent. Karpenter mitigates this but does not eliminate it.
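Graceful termination for Spot-friendly pods is mostly standard Kubernetes machinery. A rough sketch (the image, timings, and names are placeholders):
# spot-graceful-shutdown.yaml (illustrative preStop hook and grace period)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spot-tolerant-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spot-tolerant-worker
  template:
    metadata:
      labels:
        app: spot-tolerant-worker
    spec:
      terminationGracePeriodSeconds: 60 # Allow in-flight work to finish on eviction
      containers:
        - name: worker
          image: busybox # Placeholder image
          command: ["sh", "-c", "trap 'exit 0' TERM; while true; do sleep 1; done"] # Exit cleanly on SIGTERM
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"] # Give load balancers time to deregister the pod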
Q3: What's the best strategy for handling burstable workloads that spike suddenly and then become idle?
A3: For burstable workloads, combine KEDA's event-driven autoscaling with Karpenter. KEDA can scale your application pods from zero to peak replicas very quickly based on queue depth or other external signals. Karpenter, in turn, will provision the necessary nodes (prioritizing cheap Spot instances) on demand to accommodate these new pods. As demand subsides, KEDA scales pods back down (potentially to zero), and Karpenter rapidly consolidates and terminates the unneeded nodes, ensuring you only pay for compute when it's actively needed.
Conclusion and Next Steps
The landscape of Kubernetes autoscaling on AWS in 2026 demands a strategic, multi-layered approach to unlock maximum cost savings without compromising performance or reliability. Moving beyond basic scaling, the integration of intelligent node provisioners like Karpenter, combined with the precision of VPA (in recommender mode) and the agility of HPA/KEDA, forms a powerful synergy. Prioritizing Graviton instances, aggressively leveraging Spot capacity, and maintaining rigorous cost visibility are no longer optional but essential practices for any organization serious about cloud financial optimization.
We encourage you to experiment with the configurations detailed in this article. Implement Karpenter, embrace VPA recommendations, and integrate KEDA for your asynchronous workloads. Start small, monitor your costs meticulously with tools like OpenCost, and iterate. The savings potential is substantial, and the tools are mature. What specific challenges are you facing in optimizing your EKS costs? Share your thoughts and experiences in the comments below.