Nauman Munir
Case Study · E-commerce · Managed Kubernetes · Cloud Networking & DNS Management

EKS Fargate Profiles - Basics for BrightBuy Ventures

AMJ Cloud Technologies deployed EKS Fargate Profiles for BrightBuy Ventures, enabling serverless workloads with an ALB Ingress and automated Route 53 DNS for secure e-commerce microservices.

4 min read
1 month

Technologies

AWS EKS · EKS Fargate · AWS Load Balancer Controller · Kubernetes Ingress · External DNS · Application Load Balancer · AWS Route 53 · AWS Certificate Manager

Challenges

EC2 Node Management Overhead · Manual DNS Configuration · Resource Allocation Inefficiency

Solutions

Fargate Serverless Workloads · Automated Route 53 DNS · SSL-Enabled ALB Ingress

Key Results

  • Scalability achievement: Scaled e-commerce microservices with serverless Fargate
  • Automation level: Fully automated Fargate profile and Ingress setup
  • Security improvement: Secured access with HTTPS and optimized resource allocation

EKS Fargate Profiles for BrightBuy Ventures

AMJ Cloud Technologies partnered with BrightBuy Ventures, an e-commerce company, to enhance their AWS Elastic Kubernetes Service (EKS) cluster by implementing Fargate Profiles. This project enabled serverless workloads for BrightBuy’s frontend microservice, using an Application Load Balancer (ALB) Ingress with automated Route 53 DNS registration (fargate-demo.brightbuyventures.com). The solution eliminated EC2 node management overhead, optimized resource allocation, and ensured secure access with HTTPS, streamlining their e-commerce platform operations.

Situation

BrightBuy Ventures sought to modernize their EKS cluster by adopting serverless computing to reduce the operational burden of managing EC2 worker nodes. Their existing setup included a managed node group, but they needed a more efficient way to deploy microservices. AMJ was tasked with creating a Fargate Profile to run a frontend microservice, configuring an ALB Ingress with IP-based targeting for Fargate pods, and automating DNS management to simplify access and enhance security.

Task

The objectives were to:

  • Create a Fargate Profile on the existing EKS cluster (ecommerce-cluster) for the fargate-dev namespace.
  • Deploy a frontend microservice with defined resource requests and limits.
  • Configure an ALB Ingress with alb.ingress.kubernetes.io/target-type: ip for Fargate workloads.
  • Automate Route 53 DNS record creation for fargate-demo.brightbuyventures.com using External DNS.
  • Verify application access via HTTPS and ensure health checks.
  • Complete the project within one month.

Action

Our team executed the following steps, adhering to AWS and Kubernetes best practices:

Prerequisites

  • Used BrightBuy’s existing EKS cluster (ecommerce-cluster, version 1.31) with a managed node group.
  • Ensured the latest eksctl version:
    eksctl version
    brew upgrade eksctl && brew link --overwrite eksctl
  • Installed the AWS Load Balancer Controller (v2.8.0) and External DNS, then confirmed their pods were running:
    helm install load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=ecommerce-cluster --set image.tag=v2.8.0
    helm install external-dns external-dns/external-dns -n kube-system --set provider=aws --set aws.region=us-east-1
    kubectl get pods -n kube-system
  • Checked existing worker nodes:
    kubectl get nodes -o wide
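Beyond region and provider, External DNS is usually scoped to the hosted zone it is allowed to manage. The following Helm values are a sketch of what would typically accompany the install command above — the `domainFilters`, `policy`, and `txtOwnerId` values are assumptions for illustration, not taken from the project, and key names should be checked against the chart version in use:

```yaml
# values-external-dns.yaml — illustrative values, not the project's actual file
provider: aws
aws:
  region: us-east-1
domainFilters:
  - brightbuyventures.com    # only manage records in the BrightBuy zone
policy: upsert-only          # never delete records External DNS did not create
txtOwnerId: ecommerce-cluster
```

Restricting the domain filter and using `upsert-only` keeps External DNS from touching Route 53 records owned by anything else in the account.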

Create Fargate Profile

  • Created a Fargate Profile for the fargate-dev namespace:
    eksctl create fargateprofile --cluster ecommerce-cluster --name fp-demo --namespace fargate-dev
  • Verified the Fargate Profile:
    eksctl get fargateprofile --cluster ecommerce-cluster
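The same profile can be expressed declaratively in an eksctl `ClusterConfig`, which is handy for keeping the setup in version control. A sketch of the equivalent file (the project used the imperative command above):

```yaml
# cluster.yaml — declarative equivalent of the eksctl command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ecommerce-cluster
  region: us-east-1
fargateProfiles:
  - name: fp-demo
    selectors:
      - namespace: fargate-dev    # pods in this namespace run on Fargate
```

With this file, `eksctl create fargateprofile -f cluster.yaml` produces the same profile.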

Configure Namespace

  • Created the fargate-dev namespace:
    apiVersion: v1
    kind: Namespace
    metadata:
      name: fargate-dev

Deploy Frontend Microservice

  • Deployed the frontend microservice with resource limits:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend-deployment
      namespace: fargate-dev
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: frontend
              image: nginx:latest
              ports:
                - containerPort: 80
              resources:
                requests:
                  memory: "128Mi"
                  cpu: "500m"
                limits:
                  memory: "500Mi"
                  cpu: "1000m"
  • Configured a NodePort Service:
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend-service
      namespace: fargate-dev
      annotations:
        alb.ingress.kubernetes.io/healthcheck-path: /frontend/index.html
    spec:
      type: NodePort
      selector:
        app: frontend
      ports:
        - port: 80
          targetPort: 80
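Worth noting: because the Ingress in this setup uses `alb.ingress.kubernetes.io/target-type: ip`, the ALB registers pod IPs directly, so a plain `ClusterIP` Service also works on Fargate (NodePort is only required for instance-mode targeting, which Fargate does not support). A minimal variant, as a sketch:

```yaml
# ClusterIP variant of the Service — sufficient when the ALB targets pod IPs
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: fargate-dev
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
```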
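Fargate sizes (and bills) each pod by rounding its aggregate container requests up to the nearest supported vCPU/memory combination, which is why the explicit 500m request above matters for cost. A rough local sketch of the CPU rounding, assuming the commonly published Fargate tiers (verify against current AWS documentation):

```shell
# Sketch: round a pod's total CPU request (in millicores) up to a Fargate
# vCPU tier. The tier list (0.25, 0.5, 1, 2, 4 vCPU) is an assumption based
# on commonly published Fargate configurations, not an AWS API.
fargate_vcpu_tier() {
  m=$1
  if   [ "$m" -le 250 ];  then echo "0.25"
  elif [ "$m" -le 500 ];  then echo "0.5"
  elif [ "$m" -le 1000 ]; then echo "1"
  elif [ "$m" -le 2000 ]; then echo "2"
  else                         echo "4"
  fi
}

fargate_vcpu_tier 500   # the frontend pod's 500m request fits the 0.5 vCPU tier
```

Under this rounding, requesting 501m would jump the pod to a full vCPU, so keeping requests at tier boundaries avoids paying for unused headroom.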

Configure ALB Ingress for Fargate

  • Configured the Ingress with IP-based targeting:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: fargate-ingress
      namespace: fargate-dev
      annotations:
        alb.ingress.kubernetes.io/load-balancer-name: ecommerce-ingress
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
        alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
        alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
        alb.ingress.kubernetes.io/success-codes: "200"
        alb.ingress.kubernetes.io/healthy-threshold-count: "2"
        alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:<account-id>:certificate/<certificate-id>
        alb.ingress.kubernetes.io/ssl-redirect: "443"
        external-dns.alpha.kubernetes.io/hostname: fargate-demo.brightbuyventures.com
        alb.ingress.kubernetes.io/target-type: ip
    spec:
      ingressClassName: alb-ingress-class
      rules:
        - http:
            paths:
              - path: /frontend
                pathType: Prefix
                backend:
                  service:
                    name: frontend-service
                    port:
                      number: 80
  • Applied manifests:
    kubectl apply -f manifests/
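The `ingressClassName: alb-ingress-class` in the spec assumes an IngressClass of that name exists in the cluster; a minimal definition as a sketch (the project's actual class may be named differently — controller installs often create one simply called `alb`):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-ingress-class
spec:
  controller: ingress.k8s.aws/alb   # handled by the AWS Load Balancer Controller
```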

Verify Deployment

  • Verified Kubernetes resources:
    kubectl get ns
    kubectl get pods -n fargate-dev -o wide
    kubectl get ingress -n fargate-dev
  • Confirmed ALB settings and Route 53 record for fargate-demo.brightbuyventures.com in the AWS Console.
  • Checked External DNS logs:
    kubectl logs -f -n kube-system $(kubectl get po -n kube-system | egrep -o 'external-dns[A-Za-z0-9-]+')

Test Application Access

  • Verified HTTPS access to the application:
    curl -I https://fargate-demo.brightbuyventures.com/frontend/index.html

Result

The project delivered a serverless, scalable solution for BrightBuy Ventures:

  • Scalability Achievement: Scaled e-commerce microservices using Fargate’s serverless compute model.
  • Automation Level: Fully automated Fargate Profile, ALB Ingress, and DNS setup.
  • Security Improvement: Secured access with HTTPS and optimized resource allocation for Fargate pods.

Technologies Used

  • AWS EKS
  • EKS Fargate
  • AWS Load Balancer Controller
  • Kubernetes Ingress
  • External DNS
  • Application Load Balancer
  • AWS Route 53
  • AWS Certificate Manager

Key Takeaways

This case study highlights AMJ Cloud Technologies’ expertise in deploying serverless workloads for BrightBuy Ventures’ e-commerce platform. EKS Fargate Profiles, combined with an IP-based ALB and External DNS, reduced infrastructure overhead and ensured secure access, offering a scalable model for similar industries.

Architectural Diagram

Illustrates BrightBuy’s EKS cluster with a managed node group, Fargate Profile, ALB Ingress, External DNS, Route 53, and frontend microservice running on Fargate pods.

Need a Similar Solution?

I can help you design and implement similar cloud infrastructure and DevOps solutions for your organization.