I’m studying for the AWS Solutions Architect – Associate exam and there are a LOT of services to get to grips with. Inspired by AWS in Plain English, I’ve created my own list to make sure I know my CloudFront from my CloudTrail and Athena from Aurora.
Amazon CloudWatch collects and tracks metrics for your AWS resources. If you are new to AWS and are using the free tier you may want to add a Billing Alarm to make sure you don’t run into any unexpected charges. It’s easy to forget something is running and get landed with a bill.
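As a sketch, a billing alarm like this can also be created programmatically with boto3 (Python). The alarm name, threshold, and SNS topic ARN below are hypothetical placeholders; billing metrics live in the us-east-1 region and require “Receive Billing Alerts” to be enabled on the account:

```python
# Hypothetical values: substitute your own threshold and SNS topic ARN.
ALARM_PARAMS = {
    "AlarmName": "billing-over-10-usd",
    "Namespace": "AWS/Billing",          # billing metrics are in us-east-1
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,                     # six hours, in seconds
    "EvaluationPeriods": 1,
    "Threshold": 10.0,                   # alert when charges pass $10
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}

def create_billing_alarm(params=ALARM_PARAMS):
    import boto3  # deferred import; the call itself needs AWS credentials
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(**params)
```

The parameters are split out into a plain dictionary so the sketch can be read (and tweaked) without an AWS account to hand.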
An S3 bucket is where objects are stored, similar to files and folders on your local machine. Each object consists of:
- Key (the name of the object)
- Value (the data itself, made up of bytes)
There are four storage tiers:
- S3 – Most expensive and reliable option
- S3-IA – For infrequently accessed files; cheaper to store, but each retrieval incurs a fee
- Reduced Redundancy Storage – Best for files that need to be retrieved often but you don’t care if you lose them
- Glacier – Extremely cheap long-term storage
Amazon RDS creates a storage volume snapshot of your entire instance. Creating this snapshot results in a brief I/O suspension that can last from a few seconds to a few minutes. Multi-AZ DB instances are not affected by this I/O suspension since the backup is taken on the standby.
When you create a DB snapshot, you need to identify which DB instance you are going to back up, and then give your DB snapshot a name so you can restore from it later. You can do this using the AWS Management Console, the AWS CLI, or the RDS API.
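With boto3 (Python), the same snapshot request is a single call. The instance and snapshot identifiers below are hypothetical examples:

```python
# Hypothetical identifiers: replace with your own instance and snapshot names.
SNAPSHOT_PARAMS = {
    "DBInstanceIdentifier": "my-db-instance",
    "DBSnapshotIdentifier": "my-db-snapshot-2024-01-01",
}

def create_snapshot(params=SNAPSHOT_PARAMS):
    import boto3  # deferred import; the call itself needs AWS credentials
    rds = boto3.client("rds")
    return rds.create_db_snapshot(**params)
```

You can later restore from the snapshot by passing the same snapshot identifier to a restore call.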
Amazon CloudFront is the AWS CDN. It caches content at locations close to the user so the next user can download a copy faster. CloudFront can distribute all website content, including dynamic, static, streaming, and interactive content, from either AWS services like S3 or your own non-AWS server.
Amazon Kinesis Data Firehose is exactly what it sounds like, a reliable way to stream data in near real-time. Data can be streamed to S3, Redshift (Amazon’s data warehousing solution), or Elasticsearch. Hearst Corporation used this service to build their data science capabilities and create near real-time data for decision makers.
AWS Identity and Access Management (IAM) allows you to securely control individual and group access to your resources. Users by default have no access until you assign them a role. Roles define a set of permissions for making AWS service requests and are most often used to assign Groups of Users permissions to perform tasks or access services.
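Permissions themselves are expressed as JSON policy documents. As a minimal sketch, here is a policy granting read-only access to a single hypothetical bucket (the bucket name is a placeholder):

```python
import json

# A minimal identity-based policy document: read-only access to one
# hypothetical bucket. "2012-10-17" is the current policy language version.
READ_ONLY_S3_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # the bucket itself
                "arn:aws:s3:::example-bucket/*",    # the objects in it
            ],
        }
    ],
}

print(json.dumps(READ_ONLY_S3_POLICY, indent=2))
```

A document like this can be attached to a user, group, or role; anything not explicitly allowed stays denied.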
Amazon Route 53 is Amazon’s Domain Name System (DNS) web service. It is designed to give developers a cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. AWS named the service Route 53 because all DNS requests are handled through port 53.
An Amazon Machine Image is a type of virtual appliance used to create a virtual machine within the Amazon Elastic Compute Cloud (“EC2”). It serves as the basic unit of deployment for services delivered using EC2.
Amazon EMR provides a scalable framework so you can run Spark and Hadoop processes over an S3 data lake. The Run Job on an EMR template launches an Amazon EMR cluster based on the parameters provided and starts running steps based on the specified schedule. Once the job completes, the EMR cluster is terminated.
The AWS KMS Service makes it easy to create and control encryption keys on AWS which can then be utilised to encrypt and decrypt data in a safe manner. The service leverages Hardware Security Modules (HSM) under the hood, which in turn guarantees the security and integrity of the generated keys.
To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. For example, you might choose to transition objects to the Standard_IA storage class 30 days after you created them, or archive objects to the Glacier storage class one year after creating them.
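The example above can be sketched as a boto3 lifecycle configuration. The rule ID is a made-up name; the transition structure follows the S3 API:

```python
# Sketch of the lifecycle described above: move objects to Standard-IA
# after 30 days and archive them to Glacier after one year.
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "archive-old-objects",   # hypothetical rule name
            "Filter": {"Prefix": ""},      # apply to every object
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }
    ],
}

def apply_lifecycle(bucket, config=LIFECYCLE_CONFIG):
    import boto3  # deferred import; the call itself needs AWS credentials
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=config
    )
```

Once the configuration is applied, S3 runs the transitions automatically; you never move the objects yourself.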
Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
Amazon SQS stores messages in a queue. SQS does not push messages to consumers; an external service (Lambda, EC2, etc.) must poll SQS and retrieve messages from the queue.
By using Amazon SNS and Amazon SQS together, messages can be delivered to applications that require immediate notification of an event, and also persisted in an Amazon SQS queue for other applications to process at a later time.
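To make the push-versus-poll distinction concrete, here is a toy, in-memory illustration of the fan-out pattern (not the real AWS APIs): one “topic” pushes each message to every subscribed “queue”, and each consumer polls its own queue whenever it is ready.

```python
from collections import deque

class Topic:
    """Toy stand-in for an SNS topic: push delivery to every subscriber."""

    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        # Push: delivery to every subscribed queue happens immediately.
        for queue in self.queues:
            queue.append(message)

orders = Topic()
email_queue, billing_queue = deque(), deque()
orders.subscribe(email_queue)
orders.subscribe(billing_queue)

orders.publish("order-1234")

# Each consumer polls its own queue at its own pace.
print(email_queue.popleft())    # order-1234
print(billing_queue.popleft())  # order-1234
```

In the real services, the publish call is a single SNS API request, and the two consumers could be entirely separate applications reading at different speeds.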
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.
You can use a NAT device to enable instances in a private subnet to connect to the internet (for example, for software updates) or other AWS services, but prevent the internet from initiating connections with the instances. A NAT device forwards traffic from the instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances.
On-Demand Instance
There are four ways to pay for Amazon EC2 instances:
- On-Demand – pay for compute capacity per hour or per second, depending on which instances you run.
- Reserved Instances – provide a capacity reservation at up to 75% off the On-Demand price, giving you confidence in your ability to launch instances when you need them.
- Spot Instances – request spare Amazon EC2 computing capacity for up to 90% off the On-Demand price.
- Dedicated Hosts – provide EC2 instance capacity on physical servers dedicated for your use.
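A back-of-the-envelope comparison makes those discounts concrete. The $0.10/hour On-Demand rate below is a hypothetical example, and the 75% and 90% figures are AWS’s best-case discounts, not guarantees:

```python
# Hypothetical On-Demand rate and a 730-hour month.
ON_DEMAND_HOURLY = 0.10
HOURS_PER_MONTH = 730

on_demand = ON_DEMAND_HOURLY * HOURS_PER_MONTH
reserved = on_demand * (1 - 0.75)   # up to 75% off On-Demand
spot = on_demand * (1 - 0.90)       # up to 90% off On-Demand

print(f"On-Demand: ${on_demand:.2f}/month")  # $73.00
print(f"Reserved:  ${reserved:.2f}/month")   # $18.25
print(f"Spot:      ${spot:.2f}/month")       # $7.30
```

The catch, of course, is that Reserved Instances lock you in for a term and Spot Instances can be reclaimed by AWS when capacity runs short.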
Amazon EBS provides persistent block storage volumes that can be attached to a single EC2 instance and used as a file system for databases, application hosting, and storage.
Amazon EFS is a managed network file system that can be shared across multiple Amazon EC2 instances and is scalable depending on workload.
Amazon RDS makes it easy to provision a managed database instance in the cloud. At the time of writing, the following database engines were available:
- Amazon Aurora for MySQL and PostgreSQL
- MySQL
- MariaDB
- PostgreSQL
- Oracle
- MS SQL Server
Read replication can be part of your disaster recovery plan. You can promote a read replica if the source database instance fails.
Auto Scaling launches and terminates Amazon EC2 instances automatically according to user-defined policies, schedules, and alarms. You can use Auto Scaling to maintain a fleet of Amazon EC2 instances that adjusts to the presented load. You can also use Auto Scaling to bring up multiple instances in a group at one time.
Metrics are the fundamental concept in CloudWatch. A metric represents a time-ordered set of data points that are published to CloudWatch. Think of a metric as a variable to monitor, and the data points represent the values of that variable over time.
Each data point has a time stamp and a unit of measure. When you request statistics, the returned data stream is identified by namespace, metric name, dimension, and the unit.
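You can also publish your own custom metrics. As a sketch, here is a single data point being published with boto3; the namespace, metric name, and dimension are hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical custom metric: one page view of /home, with a timestamp,
# a value, and a unit, exactly as described above.
METRIC_DATA = {
    "Namespace": "MyApp",
    "MetricData": [
        {
            "MetricName": "PageViews",
            "Dimensions": [{"Name": "Page", "Value": "/home"}],
            "Timestamp": datetime.now(timezone.utc),
            "Value": 1.0,
            "Unit": "Count",
        }
    ],
}

def publish_metric(data=METRIC_DATA):
    import boto3  # deferred import; the call itself needs AWS credentials
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(**data)
```

Once published, the data points appear under the custom namespace and can drive graphs and alarms like any built-in metric.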
Virtual Private Gateway
A VPC is a virtual data center: a logically isolated section of AWS that can span Availability Zones. VPCs are made up of Internet Gateways/Virtual Private Gateways, route tables, network access control lists, subnets, and security groups.
AWS WAF protects web applications from attacks and unwanted traffic, such as requests from specific user-agents, bad bots, or content scrapers, by filtering traffic based on rules that you create.
AWS WAF can be deployed on Amazon CloudFront, protecting your resources and content at the edge locations, and on the Application Load Balancer (ALB), protecting Internet-facing as well as internal load balancers.
You can use X.509 certificates from AWS Certificate Manager to identify users, computers, applications, services, servers, and other devices internally.
OK, I cheated here, but this is a really interesting post that puts it all together: AWS Explained by Operating a Brewery
One of the most important introductory concepts to understand is that AWS hosts its infrastructure in data centres called Availability Zones (AZs). There are multiple AZs in a Region which means that if there is a problem in one AZ another can pick up the slack. For some services, you can host your application in multiple Regions.
Photo by Janko Ferlic on Pexels
This post first appeared on dev.to