1.3 List the different cloud architecture design principles

The Well-Architected framework has been developed to help cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications. This framework provides a consistent approach for customers and partners to evaluate architectures, and provides guidance to help implement designs that will scale with your application needs over time.

The AWS Well-Architected Framework is based on five pillars: security, reliability, performance efficiency, cost optimization, and operational excellence.

Alongside the pillars, a set of core design principles guides AWS cloud architectures. These principles are described in greater detail below.

Core Principles

The sections below provide details on:
 * Scalability
 * Disposable Resources Instead of Fixed Servers
 * Automation
 * Loose Coupling
 * Services, Not Servers
 * Databases
 * Managing Increasing Volumes of Data
 * Removing Single Points of Failure
 * Optimize for Cost
 * Caching
 * Security

Scalability

Elasticity and scalability are two fundamental cloud architecture principles that guide AWS architecture design.

Elasticity is the ability to use resources in a dynamic and efficient way so the traditional anti-pattern of over-provisioning of infrastructure resources to cope with capacity requirements is avoided. Significantly, elasticity avoids the costs of these over-provisioned resources such as power, space, and maintenance. This is the AWS pay as you go/pay for what you use model.

Scalability is the ability to grow without changing the design. With AWS, scalability is typically achieved by scaling out. Infrastructure and application components are designed on the premise that they will fail, rather than just being designed for high availability. The technology components are commodities that can be discarded when they fail and multiplied when demand grows. A guiding principle is to have a consistent approach to architecture and growth.

There are two types of scaling:
 * Horizontal Scaling - an increase in the number of resources. Auto Scaling and bootstrapping are used for horizontal scaling: Auto Scaling allows you to automatically scale out to accommodate load, while bootstrapping allows you to automatically set up your servers after they boot (using components such as Amazon Machine Images (AMIs) and CloudFormation to automate the process).
 * Vertical Scaling - an increase in the capabilities of the resource (e.g. faster CPU, more memory, more storage).
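As a sketch of how a horizontal-scaling decision is typically made, the hypothetical policy below adds an instance when average CPU exceeds a high-water mark and removes one below a low-water mark. The function, thresholds, and bounds are illustrative, not an AWS API:

```python
def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Return the new instance count for a simple threshold-based policy."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # add a resource (scale out)
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # remove a resource (scale in)
    return current                         # within the band: no change

print(desired_capacity(4, 85.0))  # high load -> 5
print(desired_capacity(4, 10.0))  # low load  -> 3
```

Real Auto Scaling policies work on the same principle, but evaluate CloudWatch metrics over configured periods rather than a single reading.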

Disposable Resources Instead of Fixed Servers
 * Resources need to be treated as temporary, disposable resources rather than the fixed, permanent servers of traditional on-premises environments.
 * AWS focuses on the concept of immutable infrastructure - a server, once launched, is never updated throughout its lifetime; updates are instead rolled out by launching new servers with the latest configuration. This ensures resources are always in a consistent (and tested) state and makes rollbacks easier.
 * AWS provides multiple ways to instantiate compute resources in an automated and repeatable way:
 * Bootstrapping - scripts that configure and set up an instance on boot, e.g. user data scripts and cloud-init to install software or copy resources and code
 * Golden Images - a snapshot of a particular state of that resource. Allows faster start times and removes dependencies to configuration services or third-party repositories
 * Containers - AWS supports Docker containers through Elastic Beanstalk and ECS. Docker allows packaging a piece of software in a Docker image, a standardized unit for software development containing everything the software needs to run: code, runtime, system tools, system libraries, etc.
 * Infrastructure as Code - AWS assets are programmable. Techniques, practices, and tools from software development can be applied to make the whole infrastructure reusable, maintainable, extensible, and testable. AWS provides services such as CloudFormation and OpsWorks for codifying deployment
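To make the Infrastructure-as-Code idea concrete, the sketch below builds a minimal CloudFormation template as an ordinary Python data structure and serializes it to JSON. The AMI ID is a placeholder; in practice it would be a template parameter or a per-region mapping:

```python
import json

# A minimal CloudFormation template expressed as code. Because it is plain
# data, it can be version-controlled, reviewed, and tested like any software.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI ID
                "InstanceType": "t3.micro",
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Deploying such a document with CloudFormation makes the whole stack reusable and repeatable, which is the point of codifying deployment.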

Automation

Unlike traditional IT infrastructure, the cloud enables you to automate a number of events, improving both your system's stability and the efficiency of your organization. As an architect for the AWS Cloud, the following automation resources are a great advantage to work with:
 * AWS Elastic Beanstalk: This service is the fastest and simplest way to get an application up and running on AWS. You simply upload your application code, and the service automatically handles all the details, such as resource provisioning, load balancing, Auto Scaling, and monitoring.
 * Amazon EC2 Auto Recovery: You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers it if it becomes impaired. A word of caution: during instance recovery, the instance is migrated through an instance reboot, and any data that is in memory is lost.
 * Auto Scaling: With Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define.
 * Amazon CloudWatch Alarms: You can create a CloudWatch alarm that sends an Amazon Simple Notification Service (Amazon SNS) message when a particular metric goes beyond a specified threshold for a specified number of periods.
 * Amazon CloudWatch Events: The CloudWatch service delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules that you can set up in a couple of minutes, you can easily route each type of event to one or more targets: AWS Lambda functions, Amazon Kinesis streams, Amazon SNS topics, etc.
 * AWS OpsWorks Lifecycle events: AWS OpsWorks supports continuous configuration through lifecycle events that automatically update your instances’ configuration to adapt to environment changes. These events can be used to trigger Chef recipes on each instance to perform specific configuration tasks.
 * AWS Lambda Scheduled events: These events allow you to create a Lambda function and direct AWS Lambda to execute it on a regular schedule.
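The CloudWatch alarm behavior described above can be sketched as a small evaluation function: an alarm fires only when the metric breaches its threshold for a specified number of consecutive periods. The function and sample values are illustrative:

```python
def alarm_state(datapoints, threshold, periods):
    """True when the metric breaches the threshold for the last `periods`
    consecutive datapoints, mirroring a CloudWatch alarm's evaluation."""
    recent = datapoints[-periods:]
    return len(recent) == periods and all(v > threshold for v in recent)

# CPU readings per period; alarm requires 3 consecutive breaches above 70.
print(alarm_state([40, 75, 80, 90], threshold=70, periods=3))  # True
print(alarm_state([40, 75, 60, 90], threshold=70, periods=3))  # False
```

Requiring several consecutive periods, rather than a single datapoint, prevents transient spikes from triggering scaling actions or notifications.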

Loose Coupling

Loosely coupled architectures reduce interdependencies, so that a change or failure in one component does not cascade to other components:
 * Asynchronous Integration
 * does not involve direct point-to-point interaction, but usually goes through an intermediate durable storage layer, e.g. SQS or Kinesis.
 * decouples the components and introduces additional resiliency.
 * suitable for any interaction that doesn't need an immediate response, and where an acknowledgment that a request has been registered will suffice.
 * Service Discovery
 * allows new resources to be launched or terminated at any point in time, and discovered as well, e.g. using an ELB as a single point of contact that hides the underlying instance details, or Route 53 zones to abstract the load balancer's endpoint.
 * Well-Defined Interfaces
 * allows various components to interact with each other through specific, technology-agnostic interfaces, e.g. RESTful APIs with API Gateway.
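The asynchronous-integration pattern above can be sketched with an in-process queue standing in for a durable layer like SQS: the producer only needs the enqueue to succeed, and a slow consumer never blocks it. This is a local illustration of the decoupling, not an SQS client:

```python
import queue
import threading

tasks: "queue.Queue[str]" = queue.Queue()  # stands in for SQS
processed = []

def consumer():
    """Drain the queue independently of the producer."""
    while True:
        item = tasks.get()
        if item is None:              # sentinel: stop consuming
            break
        processed.append(item.upper())  # "process" the message
        tasks.task_done()

worker = threading.Thread(target=consumer)
worker.start()

for order in ["order-1", "order-2"]:
    tasks.put(order)                  # returns immediately: fire-and-forget
tasks.put(None)                       # signal shutdown
worker.join()

print(processed)                      # ['ORDER-1', 'ORDER-2']
```

With a durable queue between them, either side can fail and restart without losing requests, which is exactly the resiliency the bullet points describe.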

Services, Not Servers

A wide variety of underlying technology components are required to develop, manage, and operate applications. Your AWS cloud architecture should leverage a broad set of compute, storage, database, analytics, application, and deployment services. On AWS, there are two ways to do that. The first is through managed services that include databases, machine learning, analytics, queuing, search, email, notifications, and more. For example, with Amazon Simple Queue Service (Amazon SQS) you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use. On top of that, Amazon SQS is inherently scalable.

The second way is to reduce the operational complexity of running applications through serverless architectures. It is possible to build both event-driven and synchronous services for mobile, web, analytics, and Internet of Things (IoT) workloads without managing any server infrastructure.
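A serverless service boils down to a handler that is a pure function of its event, with no server state between invocations. The sketch below follows the Lambda handler convention but is a hypothetical example invoked locally:

```python
import json

def handler(event, context=None):
    """A hypothetical Lambda-style handler: it receives an event, returns a
    response, and keeps no state between invocations."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

resp = handler({"name": "cloud"})
print(resp["statusCode"], json.loads(resp["body"])["message"])  # 200 hello cloud
```

Because the function owns no infrastructure, scaling, patching, and availability are the platform's problem rather than yours.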

Databases

On AWS, managed database services help remove the constraints around licensing costs and support for diverse database engines that were a problem with traditional IT infrastructure. Keep in mind that providing access to the information stored in these databases is one of the main purposes of your cloud architecture.

There are three categories of databases to keep in mind while architecting. In the database world, horizontal scaling is often based on partitioning of the data (each node contains only part of the data), while in vertical scaling the data resides on a single node and scaling is done by adding CPU and RAM resources to that machine.
 * Relational databases – Data here is normalized into tables and also provided with a powerful query language (SQL), flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner. They can be scaled vertically and are highly available during failovers (designed for graceful failures). RDS allows vertical scalability by increasing resources and horizontal scalability using Read Replicas for increasing read capacity and sharding or data partitioning for improving write capacity. RDS also provides High Availability using Multi-AZ deployment, where data is synchronously replicated. Furthermore, the RDS service can be set up across a hybrid environment (i.e. distributed across a company's data center and an AWS VPC).
 * NoSQL databases (DynamoDB) – These databases trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally. NoSQL databases utilize a variety of data models, including graphs, key-value pairs, and JSON documents. NoSQL databases are widely recognized for ease of development, scalable performance, high availability, and resilience.
 * Data warehouse (Redshift) – A specialized type of relational database, optimized for analysis and reporting of large amounts of data. It can be used to combine transactional data from disparate sources making them available for analysis and decision-making. Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing (MPP), columnar data storage, and targeted data compression encoding schemes. Redshift's MPP architecture enables increasing performance by increasing the number of nodes in the data warehouse cluster.
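Sharding, mentioned above as a way to improve write capacity, can be sketched as a hash-based partition router: each primary key maps deterministically to one shard, so each node holds only part of the data. The key format and shard count here are illustrative:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a primary key to a shard deterministically via a hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Distribute 1000 hypothetical user IDs over 4 shards.
counts = [0, 0, 0, 0]
for i in range(1000):
    counts[shard_for(f"user-{i}", 4)] += 1

print(counts)  # a roughly even split across the 4 shards
```

One caveat worth knowing: naive modulo hashing reshuffles most keys when the shard count changes, which is why production systems often use consistent hashing or, as with DynamoDB, let the service manage partitioning.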

Remove single points of failure

A system is highly available when it can withstand the failure of individual or multiple components (e.g. hard disks, servers, network links). You should think about ways to automate recovery and reduce disruption at every layer of your AWS cloud architecture, using the following approaches:
 * Introduce redundancy to remove single points of failure, by having multiple resources for the same task. Redundancy can be implemented in either standby mode (functionality is recovered through failover while the resource remains unavailable) or active mode (requests are distributed to multiple redundant compute resources, and when one of them fails, the rest can simply absorb a larger share of the workload).
 * Detection and reaction to failure should both be automated as much as possible.
 * It is crucial to have durable data storage that protects both data availability and integrity. Redundant copies of data can be introduced through synchronous, asynchronous, or quorum-based replication.
 * Automated multi-data center resilience is achieved through Availability Zones across data centers, which reduce the impact of failures.
 * Fault isolation improvement can be made to traditional horizontal scaling by sharding (a method of grouping instances into groups called shards, instead of sending the traffic from all users to every node like in the traditional IT structure).
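The active-mode redundancy described above can be sketched as a pool that rotates requests across healthy instances; when one is marked failed, the rest simply absorb its share of the load. The class and instance names are illustrative:

```python
import itertools

class RedundantPool:
    """Active-mode redundancy: requests rotate across healthy instances,
    and a failed instance's share is absorbed by the rest."""

    def __init__(self, instances):
        self.healthy = list(instances)

    def mark_failed(self, instance):
        """Remove a failed instance from rotation."""
        self.healthy.remove(instance)
        if not self.healthy:
            raise RuntimeError("all instances failed")

    def route(self, n_requests):
        """Return the instance chosen for each of the next n requests."""
        cycle = itertools.cycle(self.healthy)
        return [next(cycle) for _ in range(n_requests)]

pool = RedundantPool(["i-a", "i-b", "i-c"])
pool.mark_failed("i-b")      # the remaining instances absorb the load
print(pool.route(4))         # ['i-a', 'i-c', 'i-a', 'i-c']
```

An ELB with health checks plays this role in practice: unhealthy targets are taken out of rotation automatically, removing the single point of failure.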

Optimize for Cost

At the end of the day, it often boils down to cost. Your AWS cloud architecture should be designed for cost optimization by keeping the following principles in mind:
 * Implement Auto Scaling so that you scale out when required and scale in when capacity is no longer needed; the scaling itself incurs no extra cost.
 * Reduce costs by selecting the right resource types, configurations, and storage solutions to suit your needs.
 * Take advantage of the variety of instance purchasing options (Reserved and Spot Instances) when buying EC2 capacity to reduce the cost of computing.
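As a back-of-the-envelope illustration of the purchasing-option trade-off, the sketch below compares hypothetical hourly rates; real EC2 pricing varies by region, instance type, and commitment term:

```python
# Hypothetical prices for illustration only.
on_demand_per_hour = 0.10
reserved_per_hour = 0.06   # effective hourly rate after an upfront commitment

def monthly_cost(hours_used: float, rate: float) -> float:
    """Cost of running one instance for the given hours at the given rate."""
    return round(hours_used * rate, 2)

always_on = 730  # hours in an average month
print(monthly_cost(always_on, on_demand_per_hour))  # 73.0
print(monthly_cost(always_on, reserved_per_hour))   # 43.8 -> Reserved wins 24/7
print(monthly_cost(80, on_demand_per_hour))         # 8.0  -> On-Demand wins for spiky use
```

The pattern generalizes: steady, predictable load favors Reserved capacity, interruptible batch work favors Spot, and short spiky workloads favor On-Demand.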

Caching

Caching improves application performance and increases the cost efficiency of an implementation. There are two broad types of caching:
 * Application Data Caching
 * provides services that help store and retrieve information from fast, managed, in-memory caches
 * ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud and supports two open-source in-memory caching engines: Memcached and Redis.
 * Edge Caching
 * allows content to be served by infrastructure that is closer to viewers, lowering latency and giving high, sustained data transfer rates needed to deliver large popular objects to end users at scale.
 * CloudFront is a Content Delivery Network (CDN) consisting of multiple edge locations that allows copies of static and dynamic content to be cached.
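The application-data-caching idea can be sketched as a tiny in-memory cache with per-entry expiry; services like ElastiCache provide the same pattern as a managed, distributed layer. The class, TTL, and keys are illustrative:

```python
import time

class TTLCache:
    """A minimal in-memory cache with per-entry time-to-live (TTL)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        """Store a value along with its expiry time."""
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        """Return the cached value, or `default` on a miss or expiry."""
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]   # lazily evict the stale entry
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))        # cache hit: {'name': 'Ada'}
time.sleep(0.06)
print(cache.get("user:42"))        # expired: None
```

A short TTL bounds how stale cached data can get, which is the usual trade-off between freshness and the database load you avoid.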
Security

 * AWS works on a shared security responsibility model
 * AWS is responsible for the security of the underlying cloud infrastructure
 * you are responsible for securing the workloads you deploy in AWS
 * AWS also provides ample security features
 * IAM to define a granular set of policies and assign them to users, groups, and AWS resources
 * IAM Roles to assign short term credentials to resources, which are automatically distributed and rotated
 * Amazon Cognito, for mobile applications, which allows client devices to get controlled access to AWS resources via temporary tokens.
 * VPC to isolate parts of infrastructure through the use of subnets, security groups, and routing controls
 * WAF to help protect web applications from SQL injection and other vulnerabilities in the application code
 * CloudWatch logs to collect logs centrally as the servers are temporary
 * CloudTrail for auditing AWS API calls; it delivers a log file to an S3 bucket. Logs can then be stored in an immutable manner and automatically processed to either notify you or even take action on your behalf, protecting your organization from non-compliance
 * AWS Config, Amazon Inspector, and AWS Trusted Advisor to continually monitor for compliance or vulnerabilities, giving a clear overview of which IT resources are in compliance and which are not
 * For more details, refer to the AWS Security Whitepaper
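To make the "granular set of policies" idea concrete, the sketch below expresses a hypothetical least-privilege IAM policy (read-only access to a single S3 bucket) as data; the bucket name is a placeholder, while the `Version`, `Statement`, and action names follow the standard IAM JSON policy grammar:

```python
import json

# A hypothetical least-privilege policy: read-only access to one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # the bucket itself
                "arn:aws:s3:::example-bucket/*",    # the objects within it
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching narrowly scoped documents like this to users, groups, or roles, rather than broad wildcard grants, is what granular IAM policy design means in practice.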