The core idea of the Performance Efficiency pillar is the optimal use of computing resources to meet specific business requirements in the most effective manner. It also covers strategies for sustaining this efficiency as demand fluctuates and technologies evolve.
Best practices for extracting the most out of your workloads on AWS
These practices focus on achieving optimal utilization of computing resources while fulfilling specific performance requirements. By following them, organizations can maximize the value of their cloud-based systems and enhance overall cost-effectiveness.
Avoiding reinventing the wheel and delegating repeatable tasks
With AWS, you can let the cloud vendor handle repeatable tasks that don’t provide any differentiated value for your software teams. For example, maintaining a database on EC2 or hosting a self-managed load balancer rarely pays off in the long run, unless strong business reasons demand it. So, whenever possible, it’s recommended to adopt AWS-managed services, which reduce the time and effort you would otherwise invest in maintaining this stack, as the sketch below illustrates. This effort-saving directly lowers your total cost of ownership (TCO) when operating in the cloud.
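As a minimal sketch of the managed route, the boto3 call below asks RDS to own provisioning, patching, backups, and failover for a PostgreSQL database instead of you operating it on EC2. The identifier, sizing, and credentials are illustrative assumptions, not values from a real deployment.

```python
import boto3

rds = boto3.client("rds")

# Instead of installing, patching, and backing up PostgreSQL yourself on
# EC2, let RDS handle those repeatable tasks. Values are illustrative.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=20,  # GiB
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",  # use Secrets Manager in practice
    BackupRetentionPeriod=7,  # automated backups, managed by RDS
    MultiAZ=True,  # managed standby in a second Availability Zone
)
```

A single API call replaces what would otherwise be ongoing operational work on a self-managed EC2 stack, which is exactly the effort-saving that feeds into TCO.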
Adopting serverless technologies to reduce cost and operations
Serverless technologies such as AWS Lambda, DynamoDB, and Step Functions abstract away intricate details under the hood, removing much of the overhead of developing and maintaining complicated technology stacks. By letting teams focus on building the actual application, they shorten the time to business value, which benefits software teams whether they are just getting started with the cloud or already have a mature cloud strategy.
Deploying applications closer to end users for increased performance and reliability
By leveraging AWS services such as CloudFront edge locations and AWS Global Accelerator, application teams can serve their applications from locations closer to end users, which reduces response latency, increases performance, and extracts more value out of the same set of compute resources.
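To make this concrete, here is a minimal sketch of fronting a regional application with AWS Global Accelerator using boto3. The accelerator name, Region, and load balancer ARN are illustrative assumptions, not values from a real deployment.

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Create an accelerator, which provides two static anycast IP addresses
# announced from AWS edge locations worldwide.
accelerator = ga.create_accelerator(Name="web-app-accelerator", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Accept HTTPS traffic at the edge.
listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Route traffic to a regional endpoint, e.g. an ALB (ARN is illustrative).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:"
                      "123456789012:loadbalancer/app/my-alb/abc123",
        "Weight": 128,
    }],
)
```

Because user traffic enters the AWS network at the nearest edge location and travels the AWS backbone to the endpoint group, latency improves without changing the application itself.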
Experimenting with different service configurations
Whether you run applications’ business logic in AWS Lambda functions or CI/CD build jobs in AWS CodeBuild, it’s important to right-size your compute configurations for the use case at hand. For example, because Lambda allocates CPU proportionally to memory, increasing the memory size of your Lambda function might allow it to process the same request in less time, eventually leading to lower costs as the billed-duration metric decreases substantially.
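As a sketch of this experiment, the snippet below raises the memory of a hypothetical function with boto3 and then pulls its average Duration metric from CloudWatch so the before/after timings can be compared. Tools such as AWS Lambda Power Tuning automate this kind of sweep across multiple memory settings.

```python
from datetime import datetime, timedelta

import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

FUNCTION_NAME = "order-processing"  # hypothetical function name

# Lambda allocates CPU proportionally to memory, so raising MemorySize
# can shorten execution time and reduce the billed duration.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=1024,  # e.g., up from 512 MB
)

# Once the function has handled traffic at the new setting, compare the
# average Duration (in milliseconds) over the last hour against a
# baseline captured before the change.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "ms")
```

If the average duration drops faster than the per-millisecond price rises with the larger memory size, the change is a net cost reduction.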
Before starting with any optimizations in the application code or the AWS components used in the technology stack, it’s important to measure key metrics to surface bottlenecks and identify areas for optimization. To support this, the Amazon CloudWatch observability platform offers a cross-account observability feature that can be used to analyze metrics from across your organization in one central AWS account. Let’s look at an example implementation of how this works in practice.
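At a high level, the setup comes down to two calls against CloudWatch’s Observability Access Manager (OAM) API: creating a sink in the central monitoring account and linking each source account to it. The boto3 sketch below previews those calls; the sink name, account ID, and policy scope are illustrative assumptions, and each client must run with credentials for the respective account.

```python
import json

import boto3

# In the monitoring (central) account: create a sink that source
# accounts can link their telemetry to.
oam_monitoring = boto3.client("oam")  # monitoring-account credentials
sink = oam_monitoring.create_sink(Name="central-observability-sink")
sink_arn = sink["Arn"]

# Allow the source account (ID is illustrative) to link its metrics.
oam_monitoring.put_sink_policy(
    SinkIdentifier=sink_arn,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["111111111111"]},
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Resource": "*",
            "Condition": {"ForAllValues:StringEquals": {
                "oam:ResourceTypes": ["AWS::CloudWatch::Metric"],
            }},
        }],
    }),
)

# In each source account: link to the sink, sharing metrics only.
oam_source = boto3.client("oam")  # source-account credentials
oam_source.create_link(
    LabelTemplate="$AccountName",
    ResourceTypes=["AWS::CloudWatch::Metric"],
    SinkIdentifier=sink_arn,
)
```

With the link in place, metrics from the source account appear in the monitoring account’s CloudWatch console, labeled according to the LabelTemplate.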