    Journey to a Successful AWS Deployment – Part 4 Cloud Performance

    Insight recently partnered with Amazon Web Services (AWS) to help customers with their cloud journey.  Each customer’s requirements for the cloud can be different, from applications born in the cloud to a “lift and shift” migration of existing workloads to the cloud.  A consultative approach is taken to ensure the migration to the cloud is efficient and fit for purpose.

    What is common to each approach is how the solution is architected to deliver a successful deployment.  Insight approach each design following the AWS Well-Architected Framework, a published set of best practices for deploying workloads on AWS services.  This framework provides consistency across deployments.

    The AWS Well-Architected Framework consists of five pillars:

    • Operational Excellence
    • Security
    • Reliability
    • Performance Efficiency
    • Cost Optimisation

    The following post is part 4 of a blog series covering each of the five pillars and how Insight use these whilst architecting in the cloud.  AWS provide a whitepaper on this subject – AWS Well-Architected Framework.

    Performance Efficiency

    The fourth pillar in the AWS Well-Architected Framework is Performance Efficiency.  This pillar covers how to architect systems to meet requirements and to maintain that efficiency as demand changes.

    The design principles covered are as follows.

    • Democratise Advanced Technology: Adoption of new technology can be difficult for organisations.  The learning curve varies with the skill sets already in teams, and keeping up with new technologies and architectures can be difficult.  Instead, consume new technology as a service.  For instance, NoSQL databases, media transcoding and machine learning can all be consumed as a service, with little learning curve and without setting up and managing the underlying, supporting infrastructure.
    • Go Global In Minutes: Leverage the AWS global footprint to deploy in multiple regions around the world in minutes, providing a low-cost, low-latency environment.
    • Use Serverless Architectures: Serverless is a term coined in recent times for services that remove the operational burden of maintaining underlying servers, in terms of operating systems and applications.  Serverless architectures remove this burden and are designed to run a function; for instance, AWS Lambda is designed to run application code, and storage services can serve static websites.  There are fewer things to manage whilst leveraging managed services at cloud scale.
    • Experiment More Often: Leverage automation to deploy resources and quickly carry out comparative testing using different configuration types, such as different instance types or storage configurations (see the sketch after this list).
    • Mechanical Sympathy: Select the technology approach that best aligns with the desired outcome.  Consider data access patterns when selecting the storage or database design.
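
    As an illustration of the "Experiment More Often" principle, the sketch below uses the AWS SDK for Python (boto3) to launch the same machine image on two candidate instance types for a side-by-side load test.  The AMI ID, region, instance types and tag values are placeholder assumptions, not part of any specific deployment.

        # Hypothetical sketch: launch the same AMI on two instance types for a
        # comparative load test. AMI, region and instance types are placeholders.
        import boto3

        ec2 = boto3.client("ec2", region_name="eu-west-1")

        for instance_type in ["m5.large", "c5.large"]:
            response = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",   # placeholder AMI
                InstanceType=instance_type,
                MinCount=1,
                MaxCount=1,
                TagSpecifications=[{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "Purpose", "Value": f"perf-test-{instance_type}"}],
                }],
            )
            print(instance_type, response["Instances"][0]["InstanceId"])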

    Performance Efficiency is broken down into four focus areas of best practice:

    • Selection
    • Review
    • Monitoring
    • Trade-offs

    Selection

    Select the solution and technology that meets the requirements in an optimal manner.  AWS provide many building blocks, with many different technologies easy to consume as managed services; these can be combined to provide the most suitable solution.  Traditional, on-premises solutions usually need to be sized and designed up front, both in terms of infrastructure and in terms of technologies that are procured in cycles, often of 3-5 years.  In the cloud, new services can be consumed as they are released.

    Let’s consider the four main resource types consumed – compute, storage, database and network.

    Compute can be categorised into Instances, Containers and Functions.  Each has its place, but architecting the wrong compute solution can lead to lower performance efficiency.  Consider elasticity requirements and overall sustained capacity when selecting compute.

    Instances are generally the default option.  These are virtual machines (VMs) running on the AWS platform that can be deployed and adjusted, in terms of instance size, easily via the console or CLI.  Many different instance sizes are available at different costs and specifications; some instance types are CPU optimised, others are RAM or GPU optimised.  Some instance types support burstable performance and some support the very latest computing features.
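
    As a hedged example of how easily instance size can be adjusted, the boto3 sketch below stops an instance, changes its instance type and starts it again.  The instance ID and the target type are placeholder assumptions.

        # Sketch: resize an existing EC2 instance by changing its instance type.
        # The instance must be stopped before the type can be changed.
        import boto3

        ec2 = boto3.client("ec2")
        instance_id = "i-0123456789abcdef0"  # placeholder instance ID

        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            InstanceType={"Value": "m5.2xlarge"},  # new size, chosen for illustration
        )

        ec2.start_instances(InstanceIds=[instance_id])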

    Containers are a method of operating system virtualisation that enables an application and its dependencies to run in resource-isolated processes.  Amazon EC2 Container Service (Amazon ECS) is a managed service that allows the execution and management of containers on top of an EC2 cluster.  Auto-scaling policies can be configured with a desired and maximum number of containers to run, and AWS handle the placement and management of those containers across the cluster.
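
    The sketch below illustrates one way such auto-scaling could be configured, using boto3 and Application Auto Scaling so that the number of running tasks in an assumed ECS service tracks CPU utilisation.  The cluster and service names, capacities and target value are illustrative assumptions.

        # Sketch: Service Auto Scaling for an assumed ECS service, tracking CPU.
        import boto3

        autoscaling = boto3.client("application-autoscaling")
        resource_id = "service/my-cluster/my-web-service"  # placeholder names

        autoscaling.register_scalable_target(
            ServiceNamespace="ecs",
            ResourceId=resource_id,
            ScalableDimension="ecs:service:DesiredCount",
            MinCapacity=2,
            MaxCapacity=10,
        )

        autoscaling.put_scaling_policy(
            PolicyName="cpu-target-tracking",
            ServiceNamespace="ecs",
            ResourceId=resource_id,
            ScalableDimension="ecs:service:DesiredCount",
            PolicyType="TargetTrackingScaling",
            TargetTrackingScalingPolicyConfiguration={
                "TargetValue": 60.0,  # keep average CPU around 60%
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
                },
            },
        )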

    Functions are a further abstraction, allowing code to be executed, or a service to be provided, without running or managing instances.  AWS Lambda offers the ability to run code without provisioning or managing instances or containers.  Just upload the code and Lambda handles everything required to run it at the required scale.
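
    A minimal Lambda handler, assuming the Python runtime, might look like the sketch below; only this code is uploaded, and AWS provision and scale the environment that runs it.

        # Minimal Lambda handler (Python runtime assumed).
        import json

        def lambda_handler(event, context):
            # 'event' carries the invocation payload, e.g. an API Gateway request
            name = event.get("name", "world")
            return {
                "statusCode": 200,
                "body": json.dumps({"message": f"Hello, {name}"}),
            }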

    The optimal storage solution can vary greatly for many different reasons: the access method (block or object), the pattern of access (sequential or random), the required access frequency (online or archive) and the availability requirements.

    Amazon EBS provides persistent block storage for EC2 instances, the characteristics of which can be changed on an ongoing basis to meet requirements.  SSD volumes can be created with provisioned IOPS if required, and snapshots of volumes can be taken.  For object-based storage, Amazon S3 provides a cloud-scale, highly durable object store which can be architected with its own access policies and lifecycle policies for archiving.
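
    The boto3 sketch below illustrates both ideas: a provisioned-IOPS SSD volume with a snapshot, and an S3 lifecycle policy that archives objects after 90 days.  The volume size, IOPS, availability zone, bucket name and 90-day threshold are placeholder assumptions.

        # Sketch of the storage features described above; values are placeholders.
        import boto3

        ec2 = boto3.client("ec2")
        s3 = boto3.client("s3")

        # Provisioned-IOPS SSD volume, plus a point-in-time snapshot of it.
        volume = ec2.create_volume(
            AvailabilityZone="eu-west-1a",
            Size=100,            # GiB
            VolumeType="io1",
            Iops=3000,
        )
        ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
        ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="nightly backup")

        # S3 lifecycle policy that archives objects to Glacier after 90 days.
        s3.put_bucket_lifecycle_configuration(
            Bucket="example-archive-bucket",   # placeholder bucket
            LifecycleConfiguration={
                "Rules": [{
                    "ID": "archive-after-90-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }],
            },
        )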

    Database selection can again vary depending on workload type, and selecting the wrong approach can result in performance problems and prove costly.  Relational databases such as MySQL, Microsoft SQL Server and Oracle are the most common database type found in traditional applications.  These databases can be run on EC2 instances, using instance types designed for databases, or on Amazon Relational Database Service (Amazon RDS).  Amazon RDS provides a fully managed relational database service in which new or existing databases run in an automated environment.

    Non-relational (NoSQL) databases can also run on EC2 instances, for instance Cassandra on an optimised instance type, or Amazon DynamoDB can be leveraged as a fully managed NoSQL database that provides very low latency at any scale.
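
    As a small illustration, the sketch below creates an assumed DynamoDB table with on-demand capacity so that throughput scales with the workload.  The table and attribute names are placeholders.

        # Sketch: DynamoDB table with on-demand capacity; names are placeholders.
        import boto3

        dynamodb = boto3.client("dynamodb")

        dynamodb.create_table(
            TableName="user-sessions",
            AttributeDefinitions=[
                {"AttributeName": "session_id", "AttributeType": "S"},
            ],
            KeySchema=[
                {"AttributeName": "session_id", "KeyType": "HASH"},
            ],
            BillingMode="PAY_PER_REQUEST",  # scales with demand, no capacity planning
        )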

    Selecting the right database approach not only affects performance but can also have a big impact on cost; Insight can assist in evaluating the correct solution.

    Review

    AWS allow customers to adopt new technologies and approaches as they become available; the originally architected solution and product set is not stuck “as is” or within a procurement cycle.  As an ongoing process, new technology should be reviewed, and existing environments should be benchmarked and load tested to ensure the best performance.

    • Infrastructure As Code: Develop and test infrastructure side by side with production, automate environments for testing.
    • Deployment Pipeline: Consider Continuous Integration / Continuous Deployment (CI/CD) pipelines to deploy infrastructure, enabling a repeatable, consistent and low-cost approach.
    • Well-Defined Metrics: Monitor Key Performance Indicators (KPIs).
    • Automated Performance Test: Automate performance tests as part of the deployment process, automatically deploying a new environment for load testing and benchmarking (see the sketch after this list).
    • Load Generation: Consider software that can generate close-to-production load on the infrastructure.
    • Performance Visibility: Share the key metric information with the team for each build version.
    • Visualisation: Visualise the performance issues, hot spots and low utilisation occurring.
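
    As a hedged sketch of the “Automated Performance Test” practice above, the snippet below stands up a short-lived copy of an environment from an infrastructure-as-code template, ready for load testing, and tears it down afterwards.  The stack name, template URL and parameters are illustrative assumptions.

        # Sketch: deploy a temporary test environment per build version via
        # CloudFormation, run the load test, then remove it. Values are placeholders.
        import boto3

        cloudformation = boto3.client("cloudformation")
        stack_name = "perf-test-build-1234"   # one stack per build version

        cloudformation.create_stack(
            StackName=stack_name,
            TemplateURL="https://s3.amazonaws.com/example-bucket/environment.yaml",
            Parameters=[
                {"ParameterKey": "InstanceType", "ParameterValue": "m5.large"},
            ],
            Capabilities=["CAPABILITY_IAM"],
        )
        cloudformation.get_waiter("stack_create_complete").wait(StackName=stack_name)

        # ...run load-generation tooling here and record the KPIs, then clean up.
        cloudformation.delete_stack(StackName=stack_name)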

    Monitoring

    Monitoring has featured in other pillars of the Well-Architected Framework and is a key consideration for performance, so problems can be remediated before customers are affected.  Alarms can be configured to alert when certain thresholds have been breached and can trigger automation to scale an application when under heavy load.  Amazon CloudWatch provides the tools to configure alarms and performance thresholds that can scale an environment.

    Amazon CloudWatch can be used to gain system-wide visibility into resource utilisation, operational health and application performance.  CloudWatch can monitor AWS resources such as EC2 and Amazon RDS database instances, which include out-of-the-box metrics, and custom metrics can be configured to go beyond the provided ones.  CloudWatch Logs is also a very good tool for centrally collecting logs from services and applications, where alarms can also be configured.
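
    The boto3 sketch below shows both ideas: publishing a custom application metric and alarming when it breaches a threshold.  The namespace, metric name, threshold and SNS topic ARN are placeholder assumptions.

        # Sketch: publish a custom metric and alarm on it; values are placeholders.
        import boto3

        cloudwatch = boto3.client("cloudwatch")

        # Publish a custom metric, e.g. page render time measured by the application.
        cloudwatch.put_metric_data(
            Namespace="MyApplication",
            MetricData=[{
                "MetricName": "PageRenderTime",
                "Unit": "Milliseconds",
                "Value": 420.0,
            }],
        )

        # Alarm when the average render time stays above 500 ms over five minutes.
        cloudwatch.put_metric_alarm(
            AlarmName="slow-page-render",
            Namespace="MyApplication",
            MetricName="PageRenderTime",
            Statistic="Average",
            Period=300,
            EvaluationPeriods=1,
            Threshold=500.0,
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # placeholder topic
        )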

    Consider passive monitoring as well as active.  Active monitoring can be used to continually check performance and availability whereas passive monitoring can be used to collect things like performance metrics across all users.  Passive monitoring can be a great method to understand user performance and geographic performance variability.

    Trade-offs

    Trade-offs cover solutions that can be configured to improve performance in certain situations, such as adding a caching tier, and what impact those decisions can have on the overall solution. 

    Let’s start with caching.  AWS offer a number of caching solutions for different tiers of the application.  The general trade-off with caching is consistency: data held in a cache may not always be up to date.

    Amazon ElastiCache is a web service that makes it easy to consume robust in-memory caching engines in AWS, with Memcached and Redis supported.  At the database level, Amazon RDS provides Read Replicas to scale out read performance and offload reads from the main database.  Data can also be cached at a geographical level by using Amazon CloudFront, a CDN service.  Content such as dynamic and static content, or entire websites, can be cached and delivered through a global network of edge locations, serving users from the location closest to them.
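
    As a brief illustration of the Read Replica approach, the boto3 sketch below adds a replica to an assumed RDS instance to offload read traffic from the primary.  The instance identifiers and instance class are placeholders.

        # Sketch: create a Read Replica of an assumed RDS instance.
        import boto3

        rds = boto3.client("rds")

        rds.create_db_instance_read_replica(
            DBInstanceIdentifier="myapp-db-replica-1",       # placeholder replica name
            SourceDBInstanceIdentifier="myapp-db-primary",   # placeholder primary name
            DBInstanceClass="db.r5.large",   # can differ from the primary if desired
        )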

    Compression trades additional computing time for reduced space and network requirements.  Compression can be applied at the file-system level or to individual data files.  Amazon CloudFront can also apply compression at the edge, if the web client supports it.  For large data sets, Amazon Redshift can trade computing time for space efficiency through compression.

    Conclusion

    Performance Efficiency is the fourth pillar within the AWS Well-Architected Framework and, as with the rest of the framework, it represents an ongoing effort.  Planning the AWS deployment up front and following this methodology ensures best practice is followed from the start rather than bolted on as an afterthought.

    The remaining pillars in the framework will be covered in this blog series. 

    If you are interested in finding out more, please contact your Insight Account Manager or get in touch via our contact form here.


    Why not also read 'Journey to a Successful AWS Deployment – Part 5 Cloud Cost Optimisation'?