HEAVY.AI Migrates to Cloud to Better Serve SMBs
HEAVY.AI partnered with AWS and Loka to migrate to the cloud, leveraging Kubernetes, GitOps, IaC and CI/CD for scalable big-data analysis. This transition improved deployment efficiency, reduced costs and sped up time to market.
HEAVY.AI is an advanced analytics company that helps enterprise businesses and public sector organizations make time-sensitive, high-impact decisions using big data.
HEAVY.AI aspired to be the go-to solution for enterprise-level customers to analyze and store enormous amounts of data while providing a visual representation of the geographic imprint for each data set. To achieve their goal, they needed to improve their computing power to quickly process massive calculations and reveal hidden opportunities and risks.
HEAVY.AI was concerned that their existing approach of individualized, per-customer systems would hurt customer service. A small team of engineers could not keep up with so many distinct deployment scenarios, and problem resolution times would stretch, leaving customers frustrated.
For their existing product to scale to small- and medium-sized businesses (SMBs) without creating expensive, high-maintenance, one-off customized systems, the solution needed to provide secure authentication and fast, accessible data storage.
In short, HEAVY.AI needed to migrate to the cloud. They reached out to AWS for assistance, and AWS connected them to Loka.
Loka's DevOps team deployed scalable infrastructure, authentication services and highly available storage for customer data.
To manage customer applications and enable horizontal and vertical autoscaling, we configured and deployed Amazon EKS clusters. Both an EKS management cluster and an EKS shared cluster were deployed and are production-ready. The management cluster runs management applications such as Grafana, ArgoCD and Prometheus; the shared cluster runs client applications.
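As a rough illustration of that layout, the shared workload cluster and an autoscaling node group could be provisioned as sketched below with boto3. The cluster names, role ARNs and subnet IDs are placeholders, and the actual deployment was driven by IaC rather than ad-hoc API calls.

```python
# Minimal sketch: provision the shared EKS cluster and a node group whose
# min/max bounds let cluster autoscaling add or remove nodes with load.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Shared workload cluster that runs client applications.
eks.create_cluster(
    name="heavyai-shared",                                            # placeholder
    version="1.29",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",        # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaa", "subnet-bbb"],                    # placeholders
        "endpointPrivateAccess": True,
    },
)

# Managed node group with scaling bounds for horizontal node autoscaling.
eks.create_nodegroup(
    clusterName="heavyai-shared",
    nodegroupName="workloads",
    scalingConfig={"minSize": 2, "maxSize": 20, "desiredSize": 3},
    subnets=["subnet-aaa", "subnet-bbb"],
    instanceTypes=["m5.2xlarge"],                                     # placeholder size
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",          # placeholder
)
```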
We utilized the GitOps methodology, leveraging ArgoCD, to deploy resources onto the workload cluster. A secure connection between ArgoCD and the workload cluster was established, and IAM roles were used for authentication.
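In a GitOps setup like this, each deployment is described declaratively as an ArgoCD Application that points at a Git repository, and ArgoCD keeps the workload cluster in sync with whatever is committed there. Applications are normally written as YAML manifests; the sketch below uses the Kubernetes Python client purely for illustration, and the repository URL, paths, namespaces and cluster endpoint are assumptions.

```python
from kubernetes import client, config

# Assumes kubeconfig access to the management cluster where ArgoCD runs.
config.load_kube_config(context="management-cluster")          # placeholder context
api = client.CustomObjectsApi()

# Declarative ArgoCD Application: source repo in Git, destination on the
# registered workload cluster, with automated sync (GitOps).
app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "customer-workloads", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/heavyai/workloads.git",  # placeholder
            "path": "clusters/shared",
            "targetRevision": "main",
        },
        "destination": {
            "server": "https://shared-cluster.example.com",              # placeholder
            "namespace": "customers",
        },
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

api.create_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="argocd",
    plural="applications",
    body=app,
)
```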
Kubernetes served as the main orchestration tool for customer applications, and Amazon FSx was configured to store customer data.
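The case study does not state which FSx flavor was used, so the sketch below assumes FSx for Lustre with a persistent deployment type; storage capacity, subnet and security group IDs are placeholders. On the Kubernetes side, such a file system is typically exposed to pods as a PersistentVolume through the FSx CSI driver.

```python
# Hypothetical FSx provisioning sketch (assumes FSx for Lustre).
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB, placeholder
    SubnetIds=["subnet-aaa"],                  # placeholder
    SecurityGroupIds=["sg-12345"],             # placeholder
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",      # persistent (durable) deployment type
        "PerUnitStorageThroughput": 125,       # MB/s per TiB
    },
    Tags=[{"Key": "Purpose", "Value": "customer-data"}],
)

# The DNS name is what the CSI driver / mount configuration needs in order
# to expose the file system to customer pods.
print(fs["FileSystem"]["DNSName"])
```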
AWS Step Functions was used to orchestrate user registration and to provision resources for newly registered users.
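A minimal sketch of such a workflow, expressed in Amazon States Language and registered with boto3, is shown below. The state names, Lambda ARNs and role ARN are hypothetical, and the real workflow likely includes more steps (validation, notification, error handling).

```python
# Sketch: a two-step registration workflow chaining Lambda tasks.
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

definition = {
    "Comment": "Provision resources for a newly registered user",
    "StartAt": "GenerateCustomerConfig",
    "States": {
        "GenerateCustomerConfig": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:generate-config",
            "Next": "ProvisionResources",
        },
        "ProvisionResources": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:provision-resources",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="user-registration",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-registration-role",   # placeholder
)
```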
To generate customer configurations, we used Lambda functions with reusable execution environments. These functions were deployed in the Lambda service VPC, since they only needed to communicate with non-VPC services such as CodeCommit, DynamoDB, SSM and Secrets Manager. In other words, the Lambda functions were deployed without VPC configuration, making them automatically available in all Availability Zones.
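A hedged sketch of what such a config-generation handler could look like follows. The table, repository, parameter and secret names are hypothetical, as is the secret layout; the point is that every dependency is a regional AWS endpoint, so no VpcConfig is attached to the function.

```python
# Illustrative config-generation Lambda handler. Clients created outside the
# handler are reused across warm invocations of the execution environment.
import json
import boto3

ssm = boto3.client("ssm")
secrets = boto3.client("secretsmanager")
dynamodb = boto3.resource("dynamodb")
codecommit = boto3.client("codecommit")

TABLE = dynamodb.Table("customer-configs")                    # hypothetical table


def handler(event, context):
    customer_id = event["customer_id"]

    # Shared defaults from SSM Parameter Store; credentials from Secrets Manager.
    defaults = ssm.get_parameter(Name="/heavyai/config/defaults")["Parameter"]["Value"]
    creds = json.loads(
        secrets.get_secret_value(SecretId="heavyai/customer-db")["SecretString"]
    )

    config = {
        "customer_id": customer_id,
        "defaults": json.loads(defaults),
        "db_endpoint": creds.get("host"),     # assumed secret layout
    }

    # Record the generated config for lookup by later workflow steps.
    TABLE.put_item(Item=config)

    # Commit the rendered config to Git (CodeCommit) so GitOps tooling picks it up.
    head = codecommit.get_branch(repositoryName="customer-configs", branchName="main")
    codecommit.put_file(
        repositoryName="customer-configs",
        branchName="main",
        parentCommitId=head["branch"]["commitId"],
        filePath=f"customers/{customer_id}.json",
        fileContent=json.dumps(config).encode(),
        commitMessage=f"Add config for {customer_id}",
    )
    return {"status": "created", "customer_id": customer_id}
```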