Piermick's Cloud



General:

Amazon History:
- 2003 - Chris Pinkham and Benjamin Black presented a paper on what Amazon's internal infrastructure should look like and suggested selling it as a service
- 2004 - SQS, the first AWS service, launched
- 2006 - Official AWS launch
- 2007 - 180K developers on the platform
- 2010 - Amazon.com moved to the AWS platform
- 2012 - First Re:Invent conference in Las Vegas
- 2013 - Certifications launched
- 2014 - AWS committed to achieve 100% renewable energy usage for its global footprint
- 2015 - AWS broke out its revenue: $6 billion per annum, growing close to 90% year over year

Global Infrastructure:

Regions vs. Availability Zones:
- A Region is a geographical area which consists of at least 2 Availability Zones (AZ's). An AZ is simply a data center.
- 16 Regions with 44 Availability Zones (AZ's); projected to spin up 4 additional regions, with 9 additional AZ's, over the next year
- Edge locations are CDN endpoints for CloudFront. Currently there are over 50 edge locations. Not the same as an AZ.
- Maximum response time for Business support is 1 hour
- Services such as CloudFormation, Elastic Beanstalk, Auto Scaling, and OpsWorks are free; however, resources generated by these services are not free

Tags:
A Tag is a label which can be assigned to Amazon Web Services (AWS) resources. Using tags, AWS resources can be categorized easily.
Benefits of using tags:
- Categorize resources
- Resources can be easily identified
- Help to manage the environment/resources easily
- Helpful in cost calculations for AWS resources in big organizations
Tags work in Key-Value pairs, for example "Key: Environment, Value: Test"
All resources have the default tag "Name"
Most common tags:
- Name - Default tag
- Environment/Stack - Production/Development/Test/SIT
- Application - Apache/Cognos/DB2/Oracle
- Role - Webserver1/Certificate Server
- Owner - Yogesh Kumar
- Cost center - Finance/HR/Billing/IT
Tags can sometimes be inherited (Auto Scaling, CloudFormation, and Elastic Beanstalk can create other resources)

Resource Groups:
- Resource Groups make it easy to group your resources using the tags that are assigned to them; you can group resources that share one or more tags
- By default, the AWS Management Console is organized by AWS service. With the Resource Groups tool, you can create a custom console that organizes and consolidates information based on your project and the resources that you use. If you manage resources in multiple regions, you can create a resource group to view resources from different regions on the same page.
- Resource groups contain info such as region, name, and health checks, plus resource-specific info such as public/private IPs for EC2 instances, port configurations for ELBs, and the database engine for RDS
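
As a concrete illustration of the Key:Value pairs above, here is a minimal sketch of tagging an instance with boto3 (the AWS SDK for Python); the instance ID and region are illustrative placeholders, not values from these notes:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Tag an instance with the common Key:Value pairs listed above.
    # "i-0123456789abcdef0" is a placeholder instance ID.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],
        Tags=[
            {"Key": "Name", "Value": "Webserver1"},   # the default tag
            {"Key": "Environment", "Value": "Test"},
            {"Key": "Owner", "Value": "Yogesh Kumar"},
            {"Key": "Cost center", "Value": "IT"},
        ],
    )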
IP Address Info:
Query instance meta-data:
- curl http://169.254.169.254/latest/user-data (displays bootstrapping commands)
- curl http://169.254.169.254/latest/meta-data/ (displays AMI, instance type, etc.)
AWS reserves the first four and the last IP address of each subnet:
- x.x.x.0 - Network Address
- x.x.x.1 - Gateway Address
- x.x.x.2 - DNS Address
- x.x.x.3 - Future Allocation Address
- x.x.x.255 - Broadcast Address

Consolidated Billing:
Linked accounts roll up to a single paying account:
- The paying account is independent and cannot access resources of the other accounts
- Linked accounts are independent from one another
- Currently there is a limit of 20 linked accounts for consolidated billing (soft limit)
- One bill per AWS account
- Easy to track charges and allocate costs between linked accounts
- Volume pricing discount: resources across all linked accounts are tallied, and billing is applied collectively to allow bigger discounts

Active Directory:
- Provides single sign-on to the AWS console, which authenticates directly off of your Active Directory infrastructure
- Uses Security Assertion Markup Language (SAML) authentication responses
- Behind the scenes, sign-in uses the AssumeRoleWithSAML API to request temporary security credentials and then constructs a sign-in URL for the AWS Management Console
- The browser then receives the sign-in URL and is redirected to the console
- You always authenticate against AD first, and are then granted security credentials that allow you to log into the AWS console

Implementation & Development:

How to Design Cloud Services & Best Practices:
- Design for failure, and create self-healing application environments
- Always design applications with instances in at least two availability zones
- Guarantee that you have "reserved" capacity in the event of an emergency by purchasing reserved instances in a designated recovery availability zone (AWS does not guarantee on-demand instance capacity)
- Rigorously test to find single points of failure and apply high availability
- Always enable RDS Multi-AZ and automated backups (InnoDB table support only for MySQL)
- Utilize Elastic IP addresses for failover to "stand-by" instances when auto scaling and load balancing are not available
- Use Route 53 to implement failover DNS techniques, including latency based routing and failover DNS routing
- Have a disaster recovery and backup strategy that utilizes multiple regions; maintain up-to-date AMI's (and copy AMI's from one region to another), and copy EBS snapshots to other regions (use CRON jobs that take snapshots of EBS)
- Automate everything in order to easily re-deploy resources in the event of a disaster
- Utilize bootstrapping to quickly bring up new instances with minimal configuration; this allows for 'generic' AMI's
- Decouple application components using services such as SQS (when available)
- "Throw away" old or broken instances
- Utilize CloudWatch to monitor infrastructure changes and health
- Utilize MultiPartUpload for S3 uploads (for objects over 100MB); a sketch follows below
- Cache static and dynamic content on Amazon CloudFront using EC2 or S3 origins
- Protect your data in transit by using HTTPS/SSL endpoints
- Protect data at rest using encrypted file systems or EBS/S3 encryption options
- Connect to instances inside of the VPC using a bastion host or VPN connection
- Use IAM roles on EC2 instances instead of using API keys; never store API keys on an AMI
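
The MultiPartUpload recommendation above can be sketched with boto3's transfer configuration; the bucket name and file paths are hypothetical:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Force multipart transfers for objects over 100MB, per the best practice above.
    config = TransferConfig(multipart_threshold=100 * 1024 * 1024)

    # "my-backup-bucket" and the local path are placeholders.
    s3.upload_file("backup.tar.gz", "my-backup-bucket",
                   "backups/backup.tar.gz", Config=config)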
Monitoring your AWS Environment:

Use CloudWatch for:
- Shutting down inactive instances
- Monitoring changes in your AWS environment with CloudTrail integration
- Monitoring instance resources and creating alarms based off of usage and availability:
  - EC2 instances have "basic" monitoring, which CloudWatch supports out of the box and which includes all metrics that can be monitored at the hypervisor level
  - Status checks can automate the recovery of failed status checks by stopping and starting the instance again
  - EC2 metrics that require custom scripts to work with CloudWatch (a sketch follows below):
    - Disk Usage / Available Disk Space
    - Swap Usage / Available Swap
    - Memory Usage / Available Memory

Use CloudTrail for:
- Security and compliance
- Monitoring all actions taken against the AWS account
- Monitoring (and being notified of) changes to IAM accounts (with CloudWatch/SNS integration)
- Viewing which API keys/users performed any given API action against an environment (i.e. viewing which user terminated a set of instances or an individual instance)
- Fulfilling auditing requirements inside of organizations

Use AWS Config for:
- Receiving detailed configuration information about an AWS environment
- Taking a point-in-time "snapshot" of all supported AWS resources to determine the state of your environment
- Viewing historical configurations within your environment by viewing the "snapshots"
- Receiving notifications whenever resources are created, modified, or deleted
- Viewing relationships between resources, i.e. which EC2 instance an EBS volume is attached to
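
A minimal sketch, assuming boto3, of pushing one of the custom EC2 metrics listed above (memory usage) into CloudWatch; the namespace, value, and instance ID are illustrative placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # A custom script (e.g. run from cron on the instance) would compute the
    # real value; 42.5 and the instance ID here are placeholders.
    cloudwatch.put_metric_data(
        Namespace="Custom/EC2",
        MetricData=[{
            "MetricName": "MemoryUsage",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 42.5,
            "Unit": "Percent",
        }],
    )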
Elasticity and Scalability:
- Proactive Cyclic Scaling: scaling that occurs at a fixed interval
- Proactive Event-based Scaling: scaling that occurs in anticipation of an event
- Auto Scaling based on demand: scaling that occurs based off of an increase in demand for the application
- Plan to scale out rather than up (horizontal scaling):
  - Add more EC2 instances to handle increases in capacity rather than increasing instance size
  - Be sure to design for the proper instance size to start
  - Use tools like Auto Scaling and ELB
- A scaled service should be fault tolerant and operationally efficient
- A scalable service should become more cost effective as it grows
- DynamoDB is a fully managed NoSQL service from AWS, with high availability and scaling already built in; all the developer has to do is specify the required throughput for the tables
- RDS requires scaling in a few different ways; RDS does not support a cluster of instances to load balance traffic across, so there are a few different methods to scale traffic with RDS:
  - Utilize read replicas to offload heavy read-only traffic
  - Increase the instance size to handle increases in load
  - Utilize ElastiCache clusters for caching database session information

Architecting for Security in AWS:

AWS Platform Compliance and Security Services:
The AWS cloud infrastructure has been architected to be flexible and secure, with world-class protection, by using its built-in security features:
- Secure access - use API endpoints, HTTPS, and SSL/TLS
- Built-in firewalls - Virtual Private Cloud (VPC)
- Unique users - AWS Identity and Access Management (IAM)
- Multi-factor authentication (MFA)
- Private subnets - AWS allows private subnets in your VPC
- Encrypted data storage - encrypt your data in EBS, S3, Glacier, Redshift, and SQL RDS
- Dedicated connection option - AWS Direct Connect
- Perfect Forward Secrecy - ELB and CloudFront offer SSL/TLS cipher suites for PFS
- Security logs - AWS CloudTrail
- Asset identification and configuration - AWS Config
- Centralized key management - centralized key management service (KMS)
- Isolated GovCloud - for US ITAR regulations, use AWS GovCloud
- CloudHSM - Hardware Security Module (HSM) hardware-based cryptographic storage
- Trusted Advisor - with premier support (identifies security holes)

Complex Access Control:
- Through IAM policies, AWS gives us the ability to create extremely complex and granular permission policies for our users (all the way down to the resource level)
- IAM policies with resource-level permissions:
  - EC2: permissions for actions such as reboot, start, stop, or terminate, scoped all the way down to the instance ID
  - EBS volumes: attach, delete, detach
  - EC2 actions other than those above are not governed by resource-level permissions at this time
  - This is not limited to EC2; it also includes services such as RDS, S3, etc.
- Additional security measures, such as MFA authentication, are also available when acting on certain resources; for example, you can require MFA before an API request to delete an object within an S3 bucket. (A minimal policy sketch follows below.)
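
As a sketch of the resource-level-permissions-plus-MFA idea above, this boto3 snippet creates a policy that allows stop/terminate on a single instance only when MFA is present; the account ID, instance ID, and policy name are placeholders:

    import json
    import boto3

    iam = boto3.client("iam")

    # Allow stopping/terminating one specific instance, and only with MFA present.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

    iam.create_policy(PolicyName="StopTerminateOneInstanceWithMFA",
                      PolicyDocument=json.dumps(policy))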
Disaster Recovery: (business disaster recovery key words; very important for the AWS CSA exam)
- Recovery Time Objective (RTO): the time it takes after a disruption to restore operations back to the regular service level, as defined by the company's operational level agreement (i.e. if the RTO is 4 hours, you have 4 hours to restore service back to an acceptable level)
- Recovery Point Objective (RPO): the acceptable amount of data loss measured in time (i.e. if the system goes down at 10PM and the RPO is 2 hours, you should recover all application data as it was before 8PM)
- Not only should you design for disaster recovery for your applications running on AWS, you can also use AWS as a disaster recovery solution for your on-premise applications or data. The AWS services used should be determined based off of the business's RTO and RPO operational agreement.
- Pilot Light: a minimal version of your production environment running on AWS. This allows for replication from on-premise servers to AWS; in the event of a disaster, the AWS environment spins up more capacity (elastically/automatically) and DNS is switched from on-premise to AWS. It is important to keep up-to-date AMIs and instance configurations if following the pilot light protocol.
- Warm Standby: has a larger footprint than a pilot light setup, and would most likely be running business-critical applications in "standby". This type of configuration could also be used as a test area for applications.
- Multi-Site Solution: essentially clones your "production" environment, which can be either in the cloud or on premise. Has an active-active configuration, meaning instance sizes and capacity are all running in full standby and can take over at the flip of a switch. Methods like this could also be used to "load balance" using latency based routing or Route 53 failover in the event of an issue.

Service examples:
- Elastic Load Balancer and Auto Scaling
- Amazon EC2 VM Import Connector
- AMI's with up-to-date configurations
- Replication from on-premise database servers to RDS
- Automating the increase of resources in the event of a disaster
- Use AWS Import/Export to copy large amounts of data to speed up replication times (also used for off-site archiving)
- Route 53 DNS failover / latency based routing solutions
- Storage Gateway (gateway-cached volumes / gateway-stored volumes)

Networking:

VPC (Virtual Private Cloud):
Lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking: IP ranges, creation of subnets, and configuration of route tables and network gateways.
- A virtual data center in the cloud
- Allowed up to 5 VPCs in each AWS region by default
- Multiple IGW's can be created, but only a single IGW can be attached to a VPC
- If you delete the default VPC, the only way to get it back is to submit a support ticket
- By default, when you create a VPC, a default main routing table automatically gets created as well
- Subnets are always mapped to a single AZ; subnets cannot span multiple AZ's
- /16 is the largest CIDR block available when provisioning an IP space for a VPC; the smallest is /28
- Amazon uses 5 of the available IP addresses in a newly created subnet:
  - x.x.x.0 - always the subnet network address; never usable
  - x.x.x.1 - reserved by AWS for the VPC router
  - x.x.x.2 - reserved by AWS for subnet DNS
  - x.x.x.3 - reserved by AWS for future use
  - x.x.x.255 - always the subnet broadcast address; never usable
- 169.254.169.253 - Amazon DNS
- By default, all traffic between subnets is allowed
- By default, not all subnets have access to the Internet; either an Internet gateway or a NAT gateway is required for private subnets
- You can only have 1 Internet gateway per VPC
(A provisioning sketch follows below.)
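
A minimal provisioning sketch of the VPC facts above (one /16 VPC, a subnet mapped to a single AZ, one attached IGW), assuming boto3; the region and CIDR choices are illustrative:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # /16 is the largest CIDR block a VPC can be provisioned with.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

    # One subnet per AZ; a subnet can never span AZ's.
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24",
                               AvailabilityZone="us-east-1a")["Subnet"]

    # Only a single IGW can be attached to a VPC at a time.
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                                VpcId=vpc["VpcId"])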
Security Groups:
Security groups are virtual stateful firewalls that control inbound and outbound traffic to AWS instances. Each instance must be attached to a security group. Key points to note:
- Up to 500 security groups per VPC
- 50 inbound and 50 outbound rules per security group
- Associate up to 5 security groups with each network interface
- By default, no inbound traffic is allowed until you add inbound rules
- By default, new groups have an outbound rule that allows all outbound traffic
- Supports only allow rules; you cannot have an explicit deny rule (a rule sketch follows after the NACL notes below)
- A security group can stretch across different AZ's
You can also create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC, and leverage the AWS cloud as an extension of your corporate data center.

Default VPC vs. Custom VPC:
- The default VPC is user friendly; you can deploy instances right away
- All subnets in the default VPC have an Internet gateway attached
- Each EC2 instance has both a public & private IP address
- If you delete the default VPC, you have to contact AWS to get it back

Network Address Translation (NAT) Instances:
- When creating a NAT instance, disable source/destination checks on the instance, or you could encounter issues
- NAT instances must be in a public subnet
- There must be a route out of the private subnet to the NAT instance in order for it to work
- The amount of traffic that a NAT instance supports depends on the size of the NAT instance; if you are experiencing any sort of bottleneck issues with a NAT instance, increase the instance size
- HA can be achieved by using Auto Scaling groups, or multiple subnets in different AZ's with a scripted fail-over procedure
- NAT instances are always behind a security group

Network Address Translation (NAT) Gateway:
- NAT gateways scale automatically up to 10Gbps
- There is no need to patch NAT gateways; the underlying AMI is handled by AWS
- NAT gateways are automatically assigned a public IP address
- When a new NAT gateway has been created, remember to update your route table
- No need to assign a security group; NAT gateways are not associated with security groups
- Preferred in the enterprise
- No need to disable source/destination checks

Network Access Control Lists (NACLs):
- A numbered list of rules that are evaluated in order, starting at the lowest numbered rule, to determine what traffic is allowed in or out of the subnets associated with the NACL
- The highest rule number is 32766; start with rules at 100 so you can insert rules earlier if needed
- The default NACL allows ALL traffic in and out by default
- Every subnet must be associated with a NACL; if you do not explicitly associate one, the subnet is associated with the default NACL
- NACL rules are stateless: inbound traffic that is allowed in does not automatically create an outbound allow
- You can only assign a single NACL to a single subnet
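
Tying the security group notes above to code: a boto3 sketch adding a single inbound allow rule (remember, only allow rules exist, and return traffic is handled statefully); the group ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow HTTPS in from anywhere; anything not allowed is implicitly denied.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # placeholder group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )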
VPC Peering:
- A connection between two VPCs that enables you to route traffic between them using private IP addresses via a direct network route
- Instances in either VPC can communicate with each other as if they are within the same network
- You can create VPC peering connections between your own VPCs, or with a VPC in another account, within a SINGLE REGION
- AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is not a gateway nor a VPN, and does not rely on separate hardware
- There is NO single point of failure for communication, nor any bandwidth bottleneck
- There is no transitive peering between VPC peers (you can't go through one VPC to get to another); hub-and-spoke configuration model (1 to 1)
- Be mindful of the IPs in each VPC; if multiple VPCs have the same IP blocks, they will not be able to communicate
- You can peer VPCs with other AWS accounts as well as with other VPCs in the same account
- VPC peering doesn't support edge-to-edge routing

VPC Flow Logs:
- Enable you to capture information about the IP traffic going to and from network interfaces in your VPC (a sketch follows below)
- VPC Flow Logs consist of network traffic for a specific 5-tuple. A 5-tuple is the set of five values that comprise a TCP/IP connection: (1) source IP address, (2) source port number, (3) destination IP address, (4) destination port number, and (5) protocol.
- Data is stored using Amazon CloudWatch Logs; you can view and retrieve the data in Amazon CloudWatch Logs
- Help with a number of tasks:
  - Troubleshooting why specific traffic is not reaching an instance
  - Diagnosing overly restrictive security group rules
  - As a security tool, monitoring the traffic that is reaching your instances
- Limitations of VPC Flow Logs - traffic NOT captured:
  - Traffic between an EC2 instance and an Amazon DNS server
  - Traffic generated by requests for instance metadata (requests to 169.254.169.254)
  - DHCP traffic

VPC Endpoint:
- A VPC endpoint enables you to create a private connection between your VPC and another AWS service without requiring access over the Internet, through a NAT device, a VPN connection, or AWS Direct Connect. Endpoints are virtual devices.
- AWS supports endpoints for S3 and DynamoDB; e.g. an instance in a private subnet can access an S3 bucket
- There is no additional charge for using endpoints

VPC default limits:
- VPCs per region: 5 (the limit for Internet gateways per region is directly correlated to this one; increasing this limit increases the Internet gateway limit by the same amount)
- Subnets per VPC: 200
- Internet gateways per region: 5 (directly correlated with the VPCs-per-region limit and cannot be increased individually; only one Internet gateway can be attached to a VPC at a time)
- Customer gateways per region: 50
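
A sketch of enabling the VPC Flow Logs described above, assuming boto3; the VPC ID, log group, and IAM role are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Capture ALL (accepted and rejected) traffic for a VPC into CloudWatch Logs.
    ec2.create_flow_logs(
        ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC ID
        ResourceType="VPC",
        TrafficType="ALL",
        LogGroupName="my-vpc-flow-logs",
        DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
    )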
Direct Connect:
AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations, using industry standard 802.1Q VLANs.
- Makes it easy to establish a dedicated network connection from your premises to AWS
- Establishes private connectivity between AWS and your data center, office, or colocation environment
- An AWS Direct Connect location provides access to the AWS region it is associated with; it does not provide access to other AWS regions
- Can reduce network costs, increase bandwidth throughput, and provide more consistent network connectivity than Internet-based connections
- Requires a dedicated line such as MPLS or another circuit run from a telco. From this line, you have a cross connect from your on-premises device direct to AWS data centers; it is a physical connection between your network and the Direct Connect authorized partner, which then handles the routes and connections to AWS networks.
- The connection is delivered over 1Gbps or 10Gbps single-mode fiber cross-network-connect (or cross-connect), one side of which connects to your router and the other to the AWS DX (Direct Connect) fabric
- On the AWS side:
  - Public Virtual Interface: allows you to use the DX connection to reach public AWS endpoints for services like DynamoDB and S3
  - Private Virtual Interface: allows you to interface with an AWS VPC
  - Cross-network-connects and public/private virtual interfaces are the kinds of DX connectors

Direct Connect default limits:
- Virtual interfaces per AWS Direct Connect connection: 50
- Active AWS Direct Connect connections per region per account: 50
- Routes per Border Gateway Protocol (BGP) session: 100 (this limit cannot be increased)

Route 53:
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.
- Amazon Route 53 stores information about your domain in a hosted zone. There are 2 types of hosted zones:
  - Private - used to provide configuration information on how to route traffic for a domain and its sub-domains within one or more VPCs
  - Public - used to provide configuration information on how to route traffic for a domain and its sub-domains on the Internet
- ELBs do not have a pre-defined IPv4 address; you resolve them using a DNS name
- The apex domain record MUST be an A record or an alias; the apex domain is the domain without any subdomain part (e.g. example.com rather than www.example.com)
- Aliases map AWS resources to zone records
- Alias records you are not charged for; CNAME records you are charged for. Always choose an alias record over a CNAME record, as alias records are free and can be mapped to a domain apex record, where CNAMEs cannot.
- A limit of 50 domain names can be managed in Route 53; this limit can be raised by support

Route 53 Routing Policies:
- Simple:
  - The default routing policy when you create a new record set
  - Most common when you have a single resource that performs a given function for your domain
  - Route 53 responds to DNS queries only with records in the record set; no intelligence is built into the response
- Weighted (a sketch follows below):
  - Lets you split traffic based on the different weights defined
  - One AZ can be set to 90% and another to 10%, for example
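
The weighted policy above (a 90/10 split) might look like this with boto3; the hosted zone ID, record name, and IPs are hypothetical:

    import boto3

    route53 = boto3.client("route53")

    def weighted_record(ip, identifier, weight):
        # Each weighted record set needs a unique SetIdentifier.
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }

    # 90/10 split across two record sets, as in the weighted example above.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABC",   # placeholder zone ID
        ChangeBatch={"Changes": [
            weighted_record("203.0.113.10", "primary", 90),
            weighted_record("203.0.113.20", "secondary", 10),
        ]},
    )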
- Latency:
  - Allows you to route traffic based on the lowest network latency for your end user (i.e. the region that will give them the fastest response time)
  - Create a latency resource record set in each region that hosts your website
  - When Route 53 receives a query for your site, it selects the latency resource record set for the region that gives the user the lowest latency
- Failover:
  - Used when you want to create an active/passive setup
  - Route 53 monitors the health of your primary site using a health check; the health check monitors the health of your endpoints
- Geolocation:
  - Lets you choose where your traffic is sent based on the geographic location of your users
  - Good if you want all queries from Europe to be routed to a fleet of EC2 instances in one of the EU regions; servers in these locations could have all prices and languages set to EU standards, for example

Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources, and can trigger an action. Each health check that you create can monitor one of the following:
- The health of a specified resource, such as a web server
- The status of an Amazon CloudWatch alarm
- The status of other health checks

Route 53 default limits:
- Hosted zones: 500
- Domains: 50

Compute:

EC2 (Elastic Compute Cloud):
Elastic Compute Cloud, the backbone of AWS, provides re-sizable compute capacity in the cloud. It reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
- Once an instance has been launched with instance store storage, you cannot attach additional instance store volumes after the instance is launched, only EBS volumes
- When using an instance store volume, you cannot stop the instance (the option is not available, as stopping moves the instance to another host and would cause complete data loss)
- When using ephemeral storage, an underlying host failure will result in data loss
- You can reboot both instance types (with ephemeral and EBS volumes) without losing data; but again, an ephemeral-volume-based instance can NOT be stopped
- By default, root volumes are deleted on termination; however, you can tell AWS to keep the root device volume of a new instance during launch
- You can poll an instance's meta-data with curl http://169.254.169.254/latest/meta-data/, and get an instance's IP address with curl http://169.254.169.254/latest/meta-data/public-ipv4; remember it's always meta-data, not user-data
- You cannot encrypt root volumes, but you can encrypt any additional volumes that are added and attached to an EC2 instance
- You can have up to 10 tags per EC2 instance
- AWS does not recommend ever putting RAID 5's on EBS
- When configuring a launch configuration for an Auto Scaling group, the Health Check Grace Period is the period of time to ignore health checks while instances or auto-scaled instances are added and booting
- Termination protection is turned off by default; you must turn it on

Roles:
- You can only assign an EC2 role to an instance at creation time; you cannot assign a role after the instance has been created and/or is running (a launch sketch follows this list)
- You can change the permissions on a role after creation, but you can NOT assign a new role to an existing instance; role permissions can be changed, but not swapped
- Roles are more secure than storing your access key and secret key on individual EC2 instances
- Roles are easier to manage: you can assign a role and change its permissions at any time, and the changes take effect immediately
- Roles are universal; you can use them in any region
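
Since these notes treat roles as assignable only at launch, here is a boto3 launch sketch attaching a role via its instance profile; the AMI ID and profile name are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Attach the role (via its instance profile) at launch time.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        IamInstanceProfile={"Name": "my-ec2-role-profile"},   # placeholder
    )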
Instance sizing:
- T2 - Lowest cost general purpose - web/small DBs
- M4 - General purpose - app servers
- M3 - General purpose - app servers
- C4 - Compute optimized - CPU-intensive apps/DBs
- C3 - Compute optimized - CPU-intensive apps/DBs
- R3 - Memory optimized - memory-intensive apps/DBs
- G2 - Graphics/general purpose - video encoding/machine learning/3D app streaming
- I2 - High speed storage - NoSQL DBs, data warehousing
- D2 - Dense storage - fileservers/data warehousing/Hadoop
Mnemonic:
- D - Density
- I - IOPS
- R - RAM
- T - cheap general purpose
- M - main general purpose
- C - Compute
- G - Graphics

Storage Types:

Instance Store (Ephemeral):
- Also referred to as ephemeral storage; not persistent
- Instances using instance store storage cannot be stopped; if they are, data loss results
- If there is an issue with the underlying host and your instance needs to be moved, or is lost, data is also lost
- Instance store volumes cannot be detached and reattached to other instances; they exist only for the life of that instance
- Best used for scratch storage - storage that can be lost at any time with no bad ramifications, such as a cache store

EBS (Elastic Block Storage):
Elastic Block Storage is persistent storage that can be procured and attached to EC2 instances. EBS can be thought of as an external hard drive.
- You can NOT mount 1 EBS volume to multiple EC2 instances; for that, you must use EFS
- The default action is for the root EBS volume to be deleted when the instance is terminated; however, for EBS volumes you can tell AWS to keep the root device volume
- EBS-backed instances can be stopped; you will NOT lose any data
- EBS volumes can be detached and reattached to other EC2 instances
The following EBS volume types can be provisioned and attached to an EC2 instance (a provisioning sketch follows below):
- SSD Provisioned IOPS (io1):
  - More than 10,000 IOPS
  - Faster than GP volumes; most useful for users needing high speed
  - Designed for I/O-intensive applications such as large relational or NoSQL DBs
- SSD General Purpose (GP2):
  - Up to 10,000 IOPS
  - Very useful for the majority of users, specifically for larger volume needs
  - 99.999% availability
  - Ratio of 3 IOPS per GB, up to 10K IOPS, with the ability to burst up to 3K IOPS for short periods for volumes under 1TB
- HDD Throughput Optimized (ST1):
  - Frequently accessed workloads
  - Used for data warehouses, transaction logging
  - Cannot be used for boot volumes
- HDD Cold (SC1):
  - Less frequently accessed workloads
  - Used for file servers
  - Cannot be used for boot volumes
- HDD Magnetic (Standard):
  - Lowest cost per GB
  - Ideal for workloads where data is accessed infrequently and apps where the lowest storage cost is important
  - Ideal for fileservers

Encryption:
- Root volumes cannot be encrypted by default; you need a 3rd party utility
- Other volumes added to an instance can be encrypted
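
A boto3 sketch of provisioning and attaching one of the EBS volume types above (gp2, whose 3 IOPS/GB ratio gives a 100GB volume roughly 300 baseline IOPS); the IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # gp2: 3 IOPS per GB baseline, so this 100GB volume gets ~300 IOPS.
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                               VolumeType="gp2", Encrypted=True)

    # The volume and the instance must be in the same AZ to attach.
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0123456789abcdef0",   # placeholder
                      Device="/dev/sdf")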
AMIs:
- An AMI is simply a snapshot of a root volume, stored in S3
- AMI's are regional: you can only launch an AMI in the region where it is stored. You can copy AMI's to other regions using the console, the CLI, or the Amazon EC2 API.
- Provides the information required to launch a VM in the cloud:
  - A template for the root volume for the instance (OS, apps, etc.)
  - Launch permissions that control which AWS accounts can use the AMI to launch instances
  - A block device mapping that specifies the volumes to attach to the instance when it's launched
- When you create an AMI, by default it is marked private; you have to manually change the permissions to make the image public or share it with individual accounts
- Hardware Virtual Machine (HVM) AMI's available
- Paravirtual (PV) AMI's available
- You can select an AMI based on: region, OS, architecture (32 vs. 64 bit), launch permissions, and storage for the root device (instance store vs. EBS)

Security Groups:
- Act like virtual firewalls for the associated EC2 instance
- If you edit a security group, the change takes effect immediately
- You cannot set any deny rules in security groups, only allow rules; there is an implicit "deny any any" at the end of the security group rules
- You don't need outbound rules for any inbound request; rules are stateful, meaning that any request allowed in is automatically allowed out
- You can have any number of EC2 instances associated with a security group
- Every instance must be associated with a security group; if one is not specified at launch time, the instance is associated with the default security group
- Default rules for the default security group: allow all inbound traffic from other instances associated with the default security group (the group specifies itself as a source security group in its inbound rules), and allow all outbound traffic from the instance

Snapshots:
- You can take a snapshot of a volume; the snapshot is stored on S3
- Snapshots are point-in-time copies of volumes
- The first snapshot is a full snapshot of the volume and can take a little time to create
- Snapshots are incremental: only the blocks that have changed since your last snapshot are moved to S3
- Snapshots of encrypted volumes are encrypted automatically, and volumes restored from encrypted snapshots are encrypted automatically
- You can share snapshots, but only if they are not encrypted; snapshots can be shared with other AWS accounts or made public in the marketplace, again as long as they are NOT encrypted
- If you are making a snapshot of a root volume, you should stop the instance before taking the snapshot (a sketch follows below)
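
A snapshot-and-copy sketch matching the notes above (a point-in-time snapshot, then a cross-region copy for DR), assuming boto3; the volume ID is a placeholder:

    import boto3

    ec2_east = boto3.client("ec2", region_name="us-east-1")
    ec2_west = boto3.client("ec2", region_name="us-west-2")

    # Point-in-time, incremental snapshot; stop the instance first if this
    # is a root volume.
    snap = ec2_east.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                    Description="pre-deploy backup")

    # Copy the snapshot to another region, as the DR notes recommend.
    ec2_west.copy_snapshot(SourceRegion="us-east-1",
                           SourceSnapshotId=snap["SnapshotId"])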
RAID Volumes:
- RAID is accomplished at the software level (in the OS running on the instance)
- Creating a RAID 0 array allows you to achieve a higher level of performance for a file system than you can provision on a single Amazon EBS volume
- A RAID 1 array offers a "mirror" of your data for extra redundancy
- If you take a snapshot, the snapshot excludes data held in cache by applications and the OS. This tends not to be an issue on a single volume; however, for multiple volumes in a RAID array, it can cause a problem due to the interdependencies of the array.
- Take an application-consistent snapshot: stop the application from writing to disk and flush all caches to disk
- Snapshot of a RAID array --> 3 methods:
  - Freeze the file system
  - Unmount the RAID array
  - Shut down the EC2 instance --> take the snapshot --> turn it back on

Enhanced Networking on Linux:
- Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types
- SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization compared to traditional virtualized network interfaces
- Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies
- There is no additional charge for using enhanced networking

Elastic vs. Public IP:
- An Elastic IP is essentially tied to your AWS account in that region; you can freely associate it with any AWS instance (a sketch follows below)
- The public IP you get when an instance is created (and you opt to give it a public IP) is ephemeral: if you stop that instance, you'll get another random public IP when you start it up
- An Elastic IP is "permanent" in the sense that you own it and you associate it with a specific AWS instance ID
- To ensure efficient use of Elastic IP addresses, AWS imposes a small hourly charge when an Elastic IP address is not associated with a running instance
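
A minimal Elastic IP sketch for the notes above, assuming boto3; the instance ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP for use in a VPC, then bind it to an instance.
    # Remember: an unassociated EIP incurs a small hourly charge.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId="i-0123456789abcdef0",   # placeholder
                          AllocationId=eip["AllocationId"])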
Auto Scaling:
- Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application
- You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size; likewise, you can specify the maximum number of instances, and Auto Scaling ensures that your group never goes above this size.
- If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances
- You can only specify On-Demand or Spot instances when configuring Auto Scaling; you cannot use Auto Scaling to launch Reserved Instances
- If you specify scaling policies, Auto Scaling can launch or terminate instances as demand on your application increases or decreases
- The Elastic Load Balancing (ELB) service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances; to use a load balancer with your Auto Scaling group, create the load balancer and then attach it to the group
- Scaling plans - Auto Scaling provides several ways for you to scale your Auto Scaling group:
  - Maintain current instance levels: configure your Auto Scaling group to maintain a minimum or specified number of running instances at all times
  - Manual scaling
  - Scheduled scaling (e.g. Proactive Cyclic Scaling), which lets you scale during desired time windows (daily, weekly, monthly, ...)
  - Dynamic scaling: scaling IN (terminating instances) and scaling OUT (launching instances)
(A group-creation sketch follows after the limits lists below.)

Placement Groups:
- A logical group of instances in a single AZ
- Using placement groups enables applications to participate in a low-latency, 10Gbps network
- Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both
- A placement group can't span multiple AZ's, so it is a SPoF; a placement group can't span peered VPCs
- The name you specify for a placement group must be unique within your AWS account
- Only certain types of instances can be launched in a placement group: compute optimized, GPU, memory optimized, and storage optimized
- AWS recommends that you use the same instance family and same instance size within the placement group
- You can't merge placement groups
- You can't move an existing instance into a placement group; you can create an AMI from your existing instance and then launch a new instance from the AMI into the placement group
- If you receive a capacity error when launching an instance in a placement group that already has running instances, stop and start all of the instances in the placement group and try the launch again

Pricing Models:
- On-Demand:
  - Pay a fixed rate by the hour with no commitment
  - For users that want the low cost and flexibility of EC2
  - For apps with short-term, spiky, or unpredictable workloads that cannot be interrupted
  - For apps being developed or tested on EC2 for the first time
- Reserved:
  - Provides a capacity reservation and offers a significant discount on the hourly charge for an instance (1-3 year terms)
  - For applications with steady-state or predictable usage
  - For apps that require reserved capacity
  - For users able to make upfront payments to reduce their total computing costs even further
- Spot:
  - Bid whatever price you want for instance capacity by the hour
  - When your bid price is greater than or equal to the spot price, your instance boots
  - When the spot price is greater than your bid price, your instance is terminated (with notice)
  - For applications with flexible start and end times
  - For apps that are only feasible at very low compute prices
  - For users with urgent computing needs for large amounts of additional capacity
  - If the spot instance is terminated by Amazon EC2, you will not be charged for a partial hour of usage; if you terminate the instance yourself, you WILL be charged for any partial hours of usage

EC2 default limits:
- Elastic IP addresses for EC2-Classic: 5
- Security groups for EC2-Classic per instance: 500
- Rules per security group for EC2-Classic: 100

EBS (Elastic Block Store) default limits:
- Number of EBS volumes: 5,000
- Number of EBS snapshots: 10,000
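
Referring back to the Auto Scaling notes above, a boto3 sketch of a launch configuration plus a group spanning two AZ's with min/max/desired sizes; the names and the AMI ID are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # The launch configuration the group will use.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc",
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t2.micro",
    )

    # Span two AZ's and keep between 2 and 10 instances, starting with 4.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc",
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=4,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
        HealthCheckGracePeriod=300,   # ignore health checks while booting
    )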
ELB (Elastic Load Balancer):
A load balancer accepts incoming traffic from clients and routes requests to its registered targets (such as EC2 instances) in one or more Availability Zones. The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets. When the load balancer detects an unhealthy target, it stops routing traffic to that target, and then resumes routing traffic to that target when it detects that the target is healthy again.
You configure your load balancer to accept incoming traffic by specifying one or more listeners. A listener is a process that checks for connection requests. It is configured with a protocol and port number for connections from clients to the load balancer, and a protocol and port number for connections from the load balancer to the targets.
Elastic Load Balancing offers two types of load balancers that both feature high availability, automatic scaling, and robust security: the Classic Load Balancer, which routes traffic based on either application or network level information, and the Application Load Balancer, which routes traffic based on advanced application level information that includes the content of the request.

Configuring ELB:
- Idle Connection Timeout: by default, Elastic Load Balancing sets the idle timeout to 60 seconds for both connections. Therefore, if the instance doesn't send some data at least every 60 seconds while the request is in flight, the load balancer can close the connection.
- Cross-Zone Load Balancing: enabling cross-zone load balancing distributes load across all back-end instances, even if they exist in different AZ's
- Connection Draining: enables the load balancer to complete in-flight requests made to instances that are de-registering or unhealthy
- Proxy Protocol: the Proxy Protocol header helps you identify the IP address of a client when you have a load balancer that uses TCP for back-end connections
- Sticky Sessions (Session Affinity): enables the load balancer to bind a user's session to a specific instance, ensuring that all requests from the user during the session are sent to the same instance

Health checks:
- When configuring ELB health checks, bear in mind that you may want to create a file like healthcheck.html, or point the ping path of the health check at the main index file of your application
- The health check interval is how often a health check occurs; the healthy/unhealthy thresholds are how many consecutive checks are needed before marking the origin healthy or unhealthy
- Example (a sketch follows below):
  - Health check interval: 10 seconds
  - Unhealthy threshold: 2
  - Healthy threshold: 3
  - If the health check fails twice in a row, the source is marked unhealthy: 2 checks @ 10 seconds per check, so after roughly 20 seconds the origin is marked unhealthy
  - Likewise, with a healthy threshold of 3, it takes 3 x the 10-second health check interval, i.e. 30 seconds of consecutive successful checks, before the origin is marked healthy
- ELBs are NEVER given public IP addresses, only a public DNS name
- ELBs can be In Service or Out of Service depending on health check results
- Charged by the hour and on a per-GB basis of usage
- Must be configured with at least one listener; a listener must be configured with a protocol and a port for the front end (client-to-ELB connection), as well as a protocol and a port for the back end (ELB-to-instances connection)
- Target groups are where we assign different sets of EC2 instances to receive traffic in an Application Load Balancer. Launch configurations and Auto Scaling groups can be used with either load balancer type, and CloudWatch events are not used in the Application Load Balancer configuration.
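
The health check example above (10-second interval, unhealthy after 2 failures, healthy after 3 successes) expressed against the Classic Load Balancer API with boto3; the load balancer name is a placeholder:

    import boto3

    elb = boto3.client("elb")   # Classic Load Balancer API

    elb.configure_health_check(
        LoadBalancerName="my-classic-elb",   # placeholder
        HealthCheck={
            "Target": "HTTP:80/healthcheck.html",
            "Interval": 10,           # seconds between checks
            "Timeout": 5,
            "UnhealthyThreshold": 2,  # ~20s of failures -> Out of Service
            "HealthyThreshold": 3,    # ~30s of successes -> In Service
        },
    )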
- ELBs support HTTP, HTTPS, TCP, and SSL (Secure TCP)
- Support the following X-Forwarded headers:
  - X-Forwarded-For
  - X-Forwarded-Proto
  - X-Forwarded-Port
- ELBs support all ports (1-65535)
- ELBs do not support multiple SSL certificates
- Classic ELBs support the following ports:
  - 25 (SMTP)
  - 80 (HTTP)
  - 443 (HTTPS)
  - 465 (SMTPS)
  - 587 (SMTPS)
  - 1024-65535
- HTTP error codes:
  - 200 - the request has succeeded
  - 3xx - redirection
  - 4xx - client error (e.g. 404 not found)
  - 5xx - server error

Application Load Balancer default limits:
- Load balancers per region: 20
- Target groups per region: 50
- Listeners per load balancer: 10
- Targets per load balancer: 1,000

Classic Load Balancer default limits:
- Load balancers per region: 20
- Listeners per load balancer: 100
- Subnets per Availability Zone per load balancer: 1
- Security groups per load balancer: 5

ECS (Elastic Container Service):
Amazon EC2 Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.
- Not currently covered as an exam topic

Elastic Beanstalk:
Amazon Elastic Beanstalk is an end-to-end solution to help developers deploy their applications in the cloud without worrying about the infrastructure that needs to be deployed to run those applications. Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
Elastic Beanstalk components:
- Application
- Application version
- Environment (note: environments can only run a single application version at any one point in time)
- Environment configuration
- Configuration template
Default limits:
- Applications: 1,000
- Application versions: 1,000
- Environments: 500
Lambda:
AWS Lambda is a service from Amazon that provides serverless computing: it runs your code in response to events and automatically manages the underlying compute resources for you. You can build complete applications by utilizing AWS services and running code in response to event triggers such as HTTP requests via Amazon API Gateway, modifications to objects in Amazon S3 buckets, and table updates in Amazon DynamoDB. Lambda runs your code on highly available AWS infrastructure that scales on demand and ensures the necessary capacity provisioning, security patch deployment, monitoring, and logging.
- With Lambda, you do not have to provision any infrastructure, manually or by running scripts. Amazon configures all the infrastructure components required to run your code, including compute, storage, and memory. The only requirement is that your code is written in Node.js, Java, C#, or Python.
- Serverless processing
- AWS Lambda can automatically run code in response to modifications to objects in S3 buckets, messages arriving in Amazon Kinesis streams, table updates in DynamoDB, API call logs created by CloudTrail, and custom events from mobile applications, web applications, or other web services
- Lambda performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, node and security patch deployment, and code monitoring and logging. All you need to do is supply the code.
- Supports Node.js, Python 2.x, Java, and C#
- When should you use Lambda over EC2? Generally, use Lambda when you want to run code in response to events, such as:
  - Changes to Amazon S3 buckets
  - Updates to an Amazon DynamoDB table
  - Custom events generated by your applications or devices
- 99.99% availability for both the service itself and the functions it operates
- First 1 million requests per month are free; $0.20 per 1 million requests thereafter
- Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 ms
- The price depends on the amount of memory you allocate to your function; you are charged $0.00001667 for every GB-second used
- The free tier gives you 1 million free requests per month and 400K GB-seconds of compute time per month; the memory size you choose for your functions determines how long they can run in the free tier
- The Lambda free tier does not automatically expire at the end of your 12-month AWS free tier term; it is available to both existing and new AWS customers indefinitely
- Functions can be run in response to HTTP requests using API Gateway, or to API calls made using AWS SDKs
Default limits:
- Concurrent requests safety throttle per account: 100
For additional information about Lambda limits, see "Limits in AWS Lambda".
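
A minimal sketch of a Python Lambda handler for one of the event sources above (an S3 object-created event); written to run on the Python 2.x runtime the notes mention, though it is equally valid Python 3:

    # Handler for an S3 event notification; the bucket/key paths below follow
    # the standard S3 event record structure.
    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print("New object: s3://{}/{}".format(bucket, key))
        return {"processed": len(event["Records"])}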
Storage:

S3 (Simple Storage Service):
Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly scalable cloud storage. Amazon S3 is easy-to-use object storage, with a simple web service interface to store and retrieve any amount of data from anywhere on the web (similar to Dropbox).
- Object-based storage for files only; you cannot install an OS or applications on it
- Data is spread across multiple devices and multiple facilities; S3 can lose 2 facilities and you still have access to your files
- Files can be between 0 bytes and 5TB, and there is no overall storage limit
- Files are stored flatly in buckets; folders don't really exist, but are part of the file name
- S3 bucket names have a universal namespace, meaning each bucket name must be globally unique
- S3 stores data in alphabetical (lexicographical) order
- S3 path-style URLs take the form https://s3-<region>.amazonaws.com/<bucketname>; virtual-hosted-style URLs take the form https://<bucketname>.s3.amazonaws.com
- Read-after-write consistency for PUTs of new objects (as soon as you write an object, it is immediately available); eventual consistency for overwrite PUTs and DELETEs (updating or deleting an object can take time to propagate)
- S3 is basically a key-value store and consists of the following:
  - Key - the name of the object
  - Value - the data, made up of bytes
  - Version ID (important for versioning)
  - Meta-data - data about what you are storing
  - ACLs - permissions for stored objects
- Amazon guarantees 99.99% availability for the S3 platform
- Amazon guarantees 99.999999999% durability for S3 information (11 x 9's)
- Tiered storage and life-cycle management are available
- Versioning is available but must be enabled; it is off by default
- Offers encryption, and allows you to secure the data using ACLs
- S3 charges for storage, requests, and data transfer
- Bucket names must be all lowercase; however, in US-Standard, the CLI tool will allow capital letters when creating a bucket
- The transfers tab shows uploads, downloads, permission changes, storage class changes, etc.
- When you upload a file to S3, by default it is set private
- You can transfer files up to 5GB using PUT requests
- You can set up access control for your buckets by using bucket policies or ACLs
- Change the storage class under the Properties tab when an object is selected
- S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket
- S3 events can target SNS, SQS, or Lambda functions; Lambda is location specific and is not available in South Korea
- All storage tiers have SSL support, millisecond first-byte latency, and support life-cycle management policies
- The option to require MFA to delete objects from an S3 bucket is available, but it is not mandatory

Storage Tiers/Classes:
- Standard S3:
  - Stored redundantly across multiple devices in multiple facilities
  - Designed to sustain the loss of 2 facilities concurrently
  - 11 9's durability, 99.99% availability
  - Minimum object size = 0 bytes; most often, this will be a "touched" file
- S3-IA (Infrequently Accessed):
  - For data that is accessed less frequently, but requires rapid access when needed
  - Offers the same latency and throughput performance as Standard S3
  - Lower fee than S3, but you are charged a retrieval fee
  - Also designed to sustain the loss of 2 facilities concurrently
  - 11 9's durability, 99.9% availability
  - Minimum object size = 128KB; smaller objects are charged as 128KB
- Reduced Redundancy Storage (RRS):
  - Use for data such as thumbnails or data that can be regenerated
  - Costs less than Standard S3
  - Designed to provide 99.99% durability and 99.99% availability of objects over a given year
  - Designed to sustain the loss of a single facility
- Glacier:
  - Very cheap; stores data for as little as $0.01 per gigabyte per month
  - Optimized for data that is infrequently accessed; used for archival only
  - Glacier offers three levels of data retrieval (pricing varies):
    - Expedited: 1-5 minutes
    - Standard: 3-5 hours
    - Bulk: 5-12 hours

Class comparison:
                Standard S3   S3-IA    RRS      Glacier
  Availability  99.99%        99.9%    99.99%   N/A
  Durability    11 9's        11 9's   99.99%   11 9's

(A storage-class upload sketch follows below.)
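
A boto3 sketch of placing an object directly into one of the tiers above (S3-IA) with AES-256 server-side encryption; the bucket and key are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Store an object directly in S3-IA with SSE-S3 (AES-256) encryption.
    s3.put_object(
        Bucket="my-unique-bucket-name",   # bucket names are globally unique
        Key="reports/summary.csv",
        Body=b"col1,col2\n1,2\n",
        StorageClass="STANDARD_IA",
        ServerSideEncryption="AES256",
    )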
Versioning and Cross-Region Replication (CRR):
Cross-Region Replication (CRR) is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS regions. We refer to these buckets as the source bucket and the destination bucket; they can be owned by different AWS accounts.
- Versioning must be enabled in order to take advantage of Cross-Region Replication; versioning resides under the Cross Region Replication tab
- Once versioning is turned on, it cannot be turned off, only suspended; if you truly wanted versioning off, you would have to create a new bucket and move your objects
- When versioning is enabled, you will see a slider tab at the top of the console that lets you hide/show all versions of files in the bucket; if a file is deleted, for example, you need to set this slider to "show" in order to see previous versions of the file
- With versioning enabled, if you delete a file, S3 creates a delete marker for that file, which tells the console not to display the file any longer. To restore a deleted file, you simply delete the delete marker, and the file is displayed again in the bucket. To move back to a previous version of a file (including a deleted file), simply delete the newest version of the file or the delete marker, and the previous version is displayed.
- Versioning does store multiple copies of the same file. Example: upload a 1MB file; your storage usage is 1MB. Now update the file with small tweaks, so that the content changes but the size remains the same, and upload it again. With the version tab on "hide", you will see only the single updated file, but if you select "show" on the slider, you will see that both the original 1MB file and the updated 1MB file exist, so your total S3 usage is now 2MB, not 1MB.
- Versioning does NOT currently support de-duplication or any similar technology
- For Cross-Region Replication, as long as versioning is enabled, clicking on the tab gives you the ability to suspend versioning and enable cross-region replication
- CRR has to be enabled on both the source and destination buckets in the selected regions; the destination bucket must be created and, again, be globally unique (it can be created right from the versioning tab, in the CRR configuration section)
- You have the ability to select a separate storage class for any CRR destination bucket
- CRR does NOT replicate existing objects, only future objects: only objects stored after turning the feature on are replicated. Any object that already exists at the time of turning CRR on will NOT be automatically replicated.
- Versioning integrates with life-cycle management and also supports MFA delete capability, which uses MFA to provide additional security against object deletion
(A versioning/replication sketch follows below.)

Cross-Origin Resource Sharing (CORS):
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
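
A boto3 sketch of the versioning-then-replication order described above; the bucket names and the IAM role are placeholders, and note that existing objects are not replicated:

    import boto3

    s3 = boto3.client("s3")

    # Versioning must be enabled on BOTH buckets before replication is set up.
    for bucket in ("my-source-bucket", "my-destination-bucket"):   # placeholders
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"})

    # Replicate all new objects from source to destination.
    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/crr-role",   # placeholder
            "Rules": [{
                "Prefix": "",        # empty prefix = whole bucket
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
            }],
        },
    )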
Life-cycle Management:
- When clicking on life-cycle and adding a rule, the rule can be applied to either the entire bucket or a single 'folder' in a bucket
- Rules can be set to move objects to separate storage tiers or to delete them altogether
- Can be applied to the current version and previous versions
- If multiple actions are selected, for example transition from STD to IA storage 30 days after upload and archive 60 days after upload, then once an object is uploaded, 30 days later the object is moved to IA storage, and 30 days after that the object is moved to Glacier
- Timing is calculated from the UPLOAD date, not the action date
- Transition from STD to IA storage requires a MINIMUM of 30 days; you cannot set any range of less than 30 days
- Archive to Glacier can be set at a minimum of 1 day if STD->IA is NOT set; if STD->IA IS set, then you have to wait a minimum of 60 days to archive the object, because the minimum for STD->IA is 30 days and the transition to Glacier then takes an additional 30 days
- When you enable versioning, there are 2 sections in the life-cycle management tab: one for the current version of an object, and another for previous versions
- The minimum file size for IA storage is 128K for an object
- Can set a policy to permanently delete an object after a given time frame; it can also be used to expire incomplete multipart uploads
- If versioning is enabled, an object must be set to expire before it can be permanently deleted
- Cannot move objects to Reduced Redundancy using life-cycle policies
(A life-cycle configuration sketch follows below.)

S3 Transfer Acceleration:
- Enables fast, easy, and secure transfers of files over long distances between your client and your S3 bucket
- Utilizes the CloudFront edge network to accelerate your uploads to S3
- Instead of uploading directly to your S3 bucket, you use a distinct URL to upload to an edge location, which then transfers the file to S3
- There is a test utility available that compares uploading directly to S3 vs. through Transfer Acceleration, showing the upload speed from different global locations
- Turning on and using Transfer Acceleration incurs an additional fee

2 types of encryption are available:
- In transit: uses SSL/TLS to encrypt the transfer of the object
- At rest (AES-256):
  - Server-side encryption with S3 managed keys (SSE-S3): Amazon S3 manages the data and master encryption keys; AES-256
  - Server-side encryption with AWS Key Management Service (SSE-KMS): AWS KMS is a managed service that makes it easy to create and control the encryption keys used to encrypt data
  - Server-side encryption with customer-provided keys (SSE-C): you manage the encryption keys and Amazon S3 manages the encryption
  - Client-side encryption: refers to encrypting data before sending it to Amazon S3

S3 Event Notifications:
S3 event notifications allow you to set up automated communication between S3 and other AWS services when a selected event occurs in an S3 bucket.
- Common event notification triggers include:
  - RRSObjectLost (used for automating the re-creation of lost RRS objects)
  - ObjectCreated (for all, or for specific APIs called)
- Event notifications can be sent to the following AWS services: SNS, Lambda, SQS queue

Pricing (what you are charged for when using S3):
- Storage used
- Number of requests
- Data transfer

S3 default limits:
- Buckets per account: 100
- Largest file size you can transfer with a PUT request: 5GB
- Minimum file size: 0 bytes
- Maximum file size: 5TB
For additional information about S3 limits, see "Limits in Amazon S3".
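
The 30-day STD->IA / 60-day Glacier example above expressed as a boto3 life-cycle configuration sketch; the bucket and prefix are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # STD -> IA after 30 days (the minimum), Glacier 30 days later,
    # permanent deletion after one year.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-unique-bucket-name",   # placeholder
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }]},
    )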
CloudFront:
Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets.
- Edge Location is the location where content will be cached, separate from an AWS Region/AZ
- Origin is the origin of all files; it can be S3, an EC2 instance, an ELB, or Route53
- A custom origin is an HTTP server (web server); the server can be an EC2 instance or an on-premise server
- Distribution is the name given to the CDN, which consists of a collection of edge locations
- Web Distributions are used for websites
- RTMP (Real-Time Messaging Protocol) is used for streaming media, typically around Adobe Flash files
- Edge locations can be R/W and will accept a PUT request on an edge location, which will then replicate the file back to the origin
- Objects are cached for the life of the TTL (24 hours by default)
- You can clear objects from edge locations, but you will be charged
- When enabling CloudFront from an S3 origin, you have the option to restrict bucket access; this will disable the direct link to the file in the S3 bucket, and ensure that the content is only served from CloudFront
- The path pattern uses regular expressions
- You can restrict access to your distributions using signed URLs
- You can assign Web Application Firewall rules to your distributions
- Distribution URLs are going to be non-pretty names such as random_characters.; you can create a CNAME that points to the CloudFront name to make the URL user friendly
- You can restrict content based on geographical locations in the behaviors tab
- You can create custom error pages via the error pages tab
- Purging content is handled in the Invalidations tab
Default Limits:
- Data transfer rate per distribution: 40 Gbps
- Requests per second per distribution: 100,000
- Web distributions per account: 200
- RTMP distributions per account: 100
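Purging (invalidating) cached objects, as described above, can also be done from code. A minimal sketch with a hypothetical distribution ID; note that invalidation requests beyond the free tier are charged.

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate two cached paths on a distribution (ID is hypothetical)
cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/img/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)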
EFS (Elastic File System):
File storage service for EC2 instances. It's easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily. With EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.
- Think NFS, only without a set storage limit
- Supports NFSv4, and you only pay for the storage you use
- Billing rate is 30 cents per GB
- Can scale to exabytes
- Can support thousands of concurrent NFS connections
- Data is stored across multiple AZs within a region
- Block based storage
- Can be shared with multiple instances
- Read after Write Consistency
- You must ensure that instances that will mount EFS are in the same security group as the EFS allocation. If they are not, you can modify the security groups and add them to the same security group that was used to launch the EFS storage
Default Limits:
- Total throughput per file system: 3 GB/s for all connected clients
For additional information about EFS Limits, see Limits in Amazon EFS
Snowball:
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud.
- Appliance allows local import using the AWS internal network
Default Limits:
- Snowball: 1 (if you need to increase this limit, contact AWS Support)
Storage Gateway:
The AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure.
- On-premise virtual appliance that can be downloaded and used to cache S3 locally at a customer's site
- Replicates data to and from the AWS platform
Gateway's file interface, or file gateway:
- Offers a seamless way to connect to the cloud in order to store files and backup images as durable objects on low-cost Amazon S3 and Amazon Glacier cloud storage
- File gateway offers an NFS mount for S3 with local caching
Gateway Cached Volumes:
- Create storage volumes and mount them as iSCSI devices on the on-prem servers
- The gateway will store the data written to this volume in S3 and will cache frequently accessed data on-prem in the storage device
- Entire dataset is stored on S3 and the most frequently accessed data is cached on-site
- These volumes minimize the need to scale your on-prem storage infrastructure while providing your applications with low-latency access to their frequently accessed data
- Can create storage volumes up to 32TB in size and mount them as iSCSI devices from your on-premises application servers
- Data written to these volumes is stored in S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware
Gateway Stored Volumes:
- Store all the data locally (on-prem) in storage volumes
- Gateway will periodically take snapshots of the data as incremental backups and store them on S3
- Store your primary data locally while asynchronously backing up that data to AWS
- Provide low-latency access to the entire dataset, while providing durable, off-site backups
- Can create storage volumes up to 1TB in size and mount them as iSCSI devices from your on-premises application servers
- Data written to your gateway stored volumes is stored on your on-prem storage hardware and asynchronously backed up to S3 in the form of EBS snapshots
Gateway Virtual Tape Library (VTL):
- Used for backup and uses popular backup applications like NetBackup, Backup Exec and Veeam
Pricing:
- You pay for what you use
- Has 4 pricing components: gateway usage (per gateway per month), snapshot storage usage (per GB per month), volume storage usage (per GB per month), and data transfer out (per GB per month)
AWS Import/Export:
- Import/Export Disk - import to EBS, S3, or Glacier, but only export to S3
- Pay for what you use
- Has 3 pricing components: a per device fee, a data load time charge per data-loading-hour, and possible return shipping charges for expedited shipping or shipping to destinations not local to the Import/Export region
Databases:
RDS (Relational Database Service):
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
Amazon RDS provides six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB.
- Traditional relational databases that include tables, rows, fields
- On-Line Transaction Processing (OLTP) type DB
- You can copy a snapshot to another region if you want to have your database available in another region
- You scale your DB by taking a snapshot and doing a restore to a larger sized tier
- RDS maximum size for a MS SQL Server DB with SQL Server Express Edition is 10GB per DB
Supported RDS Platforms:
- MS SQL Server
- Oracle
- MySQL Server
- PostgreSQL
- Aurora
- MariaDB
When a backup is restored, the restore will always be a new RDS instance, with a new DNS name.
Backup types:
Automated backups:
- Allow you to recover your database to any point in time within a retention period
- Retention periods can be between 1 and 35 days
- Default backup window is set to 30 minutes
- Takes a full daily snapshot and will also store transaction logs throughout the day
- When you do a recovery, AWS will choose the most recent daily backup and then apply the transaction logs
- Allows you to do a point-in-time recovery down to a second within the retention period
- Enabled by default, and you get free storage space equal to the size of your database
- Backup data is stored in S3
- Taken within a defined window; during the backup, storage I/O may be suspended and you may experience extended latency
- Multi-AZ RDS: I/O activity is no longer suspended on the primary during the backup window, since backups are taken from the standby
- To disable automated backups for a DB instance, set the backup retention parameter to 0, e.g. via the API: BackupRetentionPeriod=0
Manual backups (database snapshots):
- User initiated from the console
- Stored even after you delete the original RDS instance, unlike automated backups
- Note that when you restore a snapshot, you cannot restore to the existing DB instance; you will have to restore to a new instance
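A small sketch of the backup behaviour above: adjusting the retention period (0 would disable automated backups) and restoring to a point in time, which always produces a new instance with a new DNS name. Instance identifiers are hypothetical.

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Set automated backup retention to 7 days (0 disables automated backups)
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",       # hypothetical
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)

# Point-in-time restore: always creates a NEW instance (new DNS endpoint)
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-db",
    TargetDBInstanceIdentifier="prod-db-restored",
    RestoreTime=datetime(2017, 1, 1, 12, 0, tzinfo=timezone.utc),
)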
Security - You can utilize various security features of the Amazon RDS service as well as incorporate other Amazon security features to provide a complete end-to-end solution that meets all your core security requirements. Typically, you can:
- Run DB instances in a VPC, and specifically in a private subnet of a VPC
- Use IAM policies to assign permissions on who is allowed to manage the RDS resource
- Use Security Groups to control which IP addresses or EC2 instances can connect to the DB instance; when setting up an instance, you need to specify rules via associated security groups
- Use SSL with DB instances running MySQL, Aurora, MariaDB, PostgreSQL, Oracle and MS SQL for encryption during transfer of data
- Use RDS encryption to secure your RDS instances and snapshots at rest
- Use the security features of your DB engine to control who can log into the databases on a DB instance
- Once your RDS instance is encrypted, the data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas and snapshots
- Encryption at rest is available using the AWS Key Management Service
- Encrypting an existing database is not supported; you have to create a new DB instance with encryption enabled and migrate the data to it
Multi-AZ:
- Allows you to have an exact copy of your production database in another AZ; replication is synchronous
- AWS handles the replication for you, so when your prod database is written to, the write will automatically be synchronized to the stand-by DB
- In the event of DB maintenance, instance failure or AZ failure, RDS will automatically fail over to the standby so that database operations can resume quickly without admin intervention
- In a fail-over scenario, the same DNS name is used to connect to the secondary instance; there is no need to reconfigure your application
- Multi-AZ configurations are used for HA/DR only, not for improving performance; to scale for performance you need to set up read replicas
- Available for SQL Server, Oracle, MySQL, PostgreSQL, and Aurora
- In order for Multi-AZ to work, your primary database instance must be launched into a "subnet group"
- NOTE: An RDS instance must be launched into a subnet (inside a VPC), just like an EC2 instance, so the same security/connectivity rules and highly available/fault tolerant concepts apply
- Multi-AZ deployments are for disaster recovery purposes ONLY, not for enhancing database performance. Standby DB instances cannot be used to offload queries from the primary master DB. If you wish to use multiple DB instances, use read replicas or Amazon ElastiCache
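A minimal sketch of launching a Multi-AZ instance into a DB subnet group, per the notes above; names and sizes are hypothetical.

import boto3

rds = boto3.client("rds")

# Multi-AZ MySQL instance launched into a (pre-created) DB subnet group
rds.create_db_instance(
    DBInstanceIdentifier="prod-db",          # hypothetical
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # use a secrets store in real life
    AllocatedStorage=100,                    # GB
    MultiAZ=True,                            # synchronous standby in another AZ
    DBSubnetGroupName="prod-subnet-group",   # required when launching into a VPC
)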
Scalability:
- Vertical Scaling - Amazon offers standard options to scale up or down. This is known as vertical scaling and involves modifying the instance type from a smaller instance to a larger one as required
- Horizontal Scaling - Amazon also offers horizontal scaling options for some of its database engines. You can scale a database by adding read replicas; RDS MySQL, PostgreSQL, and MariaDB can have up to 5 read replicas, and Amazon Aurora can have up to 15 read replicas
Read Replicas:
- Use asynchronous replication, from the primary instance to other instances that can be read from
- You can have up to 5 read replicas of your main database
- Allow you to have a read-only copy of your prod database
- Used primarily for very read-heavy database workloads
- SQL Server and Oracle are not supported
- Read Replicas are used for scaling, not DR
- You must have automated backups turned on for Read Replicas to work
- You can have read replicas of read replicas (but this can incur latency, as it is daisy chained)
- Each read replica will have its own DNS endpoint
- You cannot have read replicas that have Multi-AZ; you can, however, create read replicas of Multi-AZ source databases
- Read Replicas can be promoted to be their own databases, but this breaks replication (a small sketch follows the Aurora limits at the end of this section)
- Read Replicas can be created in a second region for MySQL and MariaDB, but not for PostgreSQL
- Read Replicas can be bigger than the primary source DB from a resource perspective
Aurora:
- MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases
- Two types of instances:
  - Primary Instance - the main instance, supporting read and write workloads; any modification of data happens at the primary instance. Each cluster has one primary instance
  - Amazon Aurora Replica - a secondary instance that supports only read operations. Each DB cluster can have 15 Aurora Replicas; they enable the distribution of read workloads among the instances and increase performance. Replicas can be placed across Availability Zones to increase availability
- Aurora Endpoints - you connect to your DB cluster using any one of the following endpoints: Cluster Endpoint, Reader Endpoint, Instance Endpoint
- Provides up to 5 times better performance than MySQL at a price point 1/10th of a commercial database, while delivering similar performance and availability
- Starts with 10GB, scales in 10GB increments up to 64TB (storage auto scaling)
- Compute resources can scale up to 32 vCPUs and 244 GB of memory
- Maintains 2 copies of your data in each Availability Zone, with a minimum of 3 AZs: 6 copies of your data
- Designed to transparently handle the loss of up to 2 copies of data without affecting DB write availability, and up to 3 copies without affecting read availability
- Self-healing storage; data blocks and disks are continuously scanned for errors and repaired automatically
- 2 types of replicas available:
  - Aurora Replicas - separate Aurora DBs, up to 15 replicas
  - MySQL read replicas, up to 5
- If the primary database fails, fail-over will happen automatically to an Aurora Replica, but will NOT auto fail over to a MySQL read replica
- Only available in certain regions, not all
Default Limits:
- Clusters: 40
- Cluster parameter groups: 50
- DB instances: 40
- Event subscriptions: 20
- Manual snapshots: 50
- Manual cluster snapshots: 50
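A quick sketch of the read-replica life cycle described above: create a replica (the source must have automated backups enabled) and later promote it, which breaks replication. Identifiers are hypothetical.

import boto3

rds = boto3.client("rds")

# Create an asynchronous read replica; the replica gets its own DNS endpoint
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-replica-1",   # hypothetical
    SourceDBInstanceIdentifier="prod-db",
)

# Later: promote the replica into a standalone database (this breaks replication)
rds.promote_read_replica(DBInstanceIdentifier="prod-db-replica-1")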
DynamoDB (No-SQL):
Amazon DynamoDB is a fully managed NoSQL database solution that provides enterprise-grade performance and scalability. You can create database tables that store and retrieve any amount of data. DynamoDB data is stored on SSD storage and automatically replicated across multiple Availability Zones; specifically, DynamoDB will replicate across 3 separate datacenters. DynamoDB will also distribute traffic and data for a table over multiple partitions. You need to specify the read and write capacity, and DynamoDB will provide the necessary infrastructure to support the required throughput levels. You can further adjust the read and write capacity after tables have been created.
- Non-relational DB (No-SQL), comprised of collections (tables) of documents (rows), with each document consisting of key/value pairs (fields)
- Document oriented DB
- Offers push button scaling, meaning that you can scale your DB on the fly without any downtime (RDS is not so easy; you usually have to use a bigger instance size or add read replicas)
- Stored on SSD storage
- There are 2 table keys: the Partition (or Primary) key and the Sort key
- Automatic data replication over 3 datacenters in a single region
- Data backed up to S3
- Very easy to scale, provides "push button" scaling
- Eventually Consistent Reads (default): consistency across all copies of data is usually reached within 1 second, and repeating a read after a short time should return the updated data. Best read performance
- Strongly Consistent Reads: return a result that reflects all writes that received a successful response prior to the read
Structure:
- Tables
- Items (think rows in a traditional table)
- Attributes (think columns of data in a table)
Provisioned throughput capacity:
- Write throughput $0.0065 per hour for every 10 units; read throughput $0.0065 per hour for every 50 units
- First 25 GB of storage is free
- Storage costs 25 cents per additional GB per month
- Can be expensive for writes, but really cheap for reads
- The combined key/value size must not exceed 400KB for any given document
US East (N. Virginia) Region Default Limits:
- Maximum capacity units per table or global secondary index: 40,000 read capacity units and 40,000 write capacity units
- Maximum capacity units per account: 80,000 read capacity units and 80,000 write capacity units
All Other Regions Default Limits:
- Maximum capacity units per table or global secondary index: 10,000 read capacity units and 10,000 write capacity units
- Maximum capacity units per account: 20,000 read capacity units and 20,000 write capacity units
- Maximum number of tables: 256
For additional information about DynamoDB Limits, see Limits in Amazon DynamoDB
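A minimal sketch of the structure and read-consistency notes above: a table with a partition key plus sort key, provisioned throughput, and a strongly consistent read. Table and attribute names are hypothetical.

import boto3

dynamodb = boto3.client("dynamodb")

# Table with a partition key and a sort key, and provisioned read/write capacity
dynamodb.create_table(
    TableName="GameScores",  # hypothetical
    KeySchema=[
        {"AttributeName": "PlayerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "GameId", "KeyType": "RANGE"},    # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "PlayerId", "AttributeType": "S"},
        {"AttributeName": "GameId", "AttributeType": "S"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 10},
)

# Reads are eventually consistent by default; ask for strong consistency explicitly
item = dynamodb.get_item(
    TableName="GameScores",
    Key={"PlayerId": {"S": "alice"}, "GameId": {"S": "g-42"}},
    ConsistentRead=True,
)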
Elasticache:
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache deployments are deployed as clusters and nodes; Memcached clusters can comprise up to 20 nodes, whereas with Redis you have a single node per cluster, with clusters grouped into a Redis Replication Group.
- Can be used for DB caching in conjunction with services like RDS
- Web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud
- Improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases
- Improves application performance by storing critical pieces of data in memory for low-latency access
- Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations
Supports 2 open-source in-memory caching engines:
- Memcached:
  - Widely adopted memory object caching system; deployed as clusters and nodes
  - ElastiCache is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service
  - No Multi-AZ support
- Redis:
  - Popular open-source in-memory key-value store that supports data structures such as sorted sets and lists
  - ElastiCache supports Master/Slave replication and Multi-AZ, which can be used to achieve cross-AZ redundancy
  - Good choice if your DB is read-heavy and not prone to frequent changes
Default Limits (all regions):
- Nodes per region: 50 (the maximum number of nodes across all clusters in a region)
- Nodes per cluster (Memcached): 20
- Nodes per cluster (Redis): 1
- Clusters per replication group (Redis): 6 (one is the read/write primary; all others are read-only replicas)
- Parameter groups per region: 20
- Security groups per region: 50
- Subnet groups per region: 50
- Subnets per subnet group: 20
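A sketch of the DB-caching pattern described above (a read-through cache in front of a slower database), using the common redis-py client against a Redis endpoint; the endpoint, the query function and the TTL are hypothetical.

import json
import redis  # pip install redis

# ElastiCache Redis endpoint (hypothetical)
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_user(user_id, ttl_seconds=300):
    """Read-through cache: serve from Redis, fall back to the database on a miss."""
    key = "user:%s" % user_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                   # cache hit: no DB round trip
    row = query_database(user_id)                   # hypothetical slow RDS/SQL lookup
    cache.setex(key, ttl_seconds, json.dumps(row))  # cache the result for next time
    return row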
Redshift:
A fast and powerful, fully managed, petabyte-scale data warehouse service in the cloud. Customers can start small for just 25 cents per hour with no commitments or upfront costs and scale to a petabyte or more for $1,000 per TB per year - less than a tenth of most other data warehousing solutions.
- Used for data warehousing / business intelligence
- Uses a 1024KB/1MB block size for its columnar storage
- Works with tools like Cognos, Jaspersoft, SQL Server Reporting Services, Oracle Hyperion, SAP NetWeaver
- Used to pull in very large and complex data sets
- Used by management to do queries on data, such as current performance vs target
- 10 times faster than traditional RDS
- Massively Parallel Processing (MPP): automatically distributes data and query load across all nodes
- Currently only available in 1 AZ at a time; can restore snapshots to new AZs in the event of an outage
2 types of transactions:
- On-Line Transaction Processing (OLTP) - standard transaction-driven database insert/retrieval; pulls up a row of data, such as Name, Date etc.
- On-Line Analytics Processing (OLAP) - uses a different type of architecture, both at the DB and infrastructure layers; pulls in data from multiple queries, gathering large amounts of information depending on what type of report is required
Node configuration:
- Start with a Single Node (160GB)
- Multi-node configurations available:
  - Leader Node - manages client connections and receives queries
  - Compute Node - stores data and performs queries and computations; can have up to 128 compute nodes
Columnar data storage:
- Instead of storing data as a series of rows, Redshift organizes data by column
- Unlike row-based systems, which are ideal for transaction processing, column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets
- Only the columns involved in the queries are processed, and columnar data is stored sequentially on the storage media; column-based systems require far fewer I/Os, greatly improving query performance
Advanced compression:
- Columnar data stores can be compressed much more than row-based data stores, because similar data is stored sequentially on disk
- Redshift employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores
- Does not require indexes or materialized views, so it uses less space than traditional relational DB systems
- Automatically samples your data and selects the most appropriate compression scheme
Priced on 3 things:
- Total number of hours you run across your compute nodes for the billing period: you are billed for 1 unit per node per hour, so a 3-node cluster running an entire month would incur 2,160 instance hours. You will not be charged for leader node hours; only compute nodes incur charges
- Backups
- Data transfers (only within a VPC, not outside)
Security:
- Encrypted in transit using SSL
- Encrypted at rest using AES-256 encryption
- Takes care of key management by default; you can manage your own keys through a Hardware Security Module (HSM) or the AWS Key Management Service
Default Limits:
- Nodes per cluster: 101
- Nodes per account: 200
- Reserved Nodes: 200
- Snapshots: 20
DMS (Database Migration Service):
AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
- The service supports homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL
- Allows migration of your production DB platforms to AWS, or between engines, like MySQL -> PostgreSQL
- Once started, AWS manages all the complexities of the migration process, like data type transformation, compression, and parallel transfer for faster transfer, while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target
- The AWS Schema Conversion Tool automatically converts the source DB schema and a majority of the custom code, including views, stored procedures and functions, to a format compatible with the target DB
Analytics:
EMR (Elastic Map Reduce):
Amazon Elastic MapReduce is a managed Hadoop framework that enables you to process large amounts of data across EC2 instances and is highly scalable. You can process data for analytics purposes, machine learning and business intelligence workloads, and move data into and out of other AWS data stores and databases, such as Amazon S3 and DynamoDB.
- Not covered as an exam topic currently
Data Pipeline:
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premise data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon Elastic MapReduce (EMR).
- Not covered as an exam topic currently
Elasticsearch:
Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a powerful open source search and analytics engine that makes data easy to explore.
- Not covered as an exam topic currently
Kinesis:
Amazon Kinesis is a web service that enables you to ingest, process and analyse large streaming data. With this data you can identify trends and insights to meet your business objectives. Typical use cases include the ability to ingest and analyse data in real time, such as application logs, website interaction, IoT and geospatial data. For example, cab companies can analyse geospatial data in real time to ensure availability of drivers at various locations where expected demand is high.
- Kinesis = managed service platform for real-time streaming of big data
- Web apps, mobile devices, and wearables generate huge amounts of streaming data; use Kinesis to digest big data
Amazon Kinesis Capabilities:
- Amazon Kinesis Streams - enables you to collect and process large amounts of data in real time. Using Amazon Kinesis Streams applications (data consumers), you can process this data in real time as it moves through the stream. Typical data types include application logs, social media, market data feeds, and web clickstream data (a small producer sketch follows the Kinesis limits below)
- Amazon Kinesis Firehose - enables you to load massive amounts of real-time streaming data to destinations such as Amazon S3, Redshift or Elasticsearch. You do not need to write applications or manage resources as you would with Amazon Kinesis Streams in the case of managing shards (processing power). You configure your data producers to send data to Kinesis Firehose, and it automatically delivers the data to the destination that you specified; NO consumers and NO shards, it's all automatic
- Amazon Kinesis Analytics - can be used to process streaming data using SQL queries in real time. You can perform time-series analytics, feed real-time dashboards, and create real-time metrics. With your SQL queries, you can construct applications that transform and gain insights into your data
- If any questions reference streaming, think Kinesis
- Used to consume big data: stream large amounts of social media, news feeds, logs, etc. into the cloud
- Elastic MapReduce is for big data processing; business intelligence and reporting would be derived from Redshift
Default Limits:
- Delivery streams per region: 20
- Delivery stream capacity: 2,000 transactions/second, 5,000 records/second, 5 MB/second (the three capacity limits scale proportionally; for example, if you increase the throughput limit to 10 MB/second, the other limits increase to 4,000 transactions/sec and 10,000 records/sec)
- Shards per region: US EAST, US WEST, EU: 50; all other supported regions: 25
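A tiny producer sketch for Kinesis Streams as described above: each record carries a partition key that determines which shard it lands on. The stream name is hypothetical.

import json
import boto3

kinesis = boto3.client("kinesis")

# Put one clickstream event onto a stream; records with the same
# partition key are routed to the same shard (ordering within a shard)
kinesis.put_record(
    StreamName="clickstream",  # hypothetical
    Data=json.dumps({"user": "alice", "page": "/pricing"}),
    PartitionKey="alice",
)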
Machine Learning:
Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology. Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology.
- Not covered as an exam topic currently
Quick Sight:
Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy for all employees to build visualizations, perform ad-hoc analysis, and quickly get business insights from their data.
- Not covered as an exam topic currently
Security and Identity:
IAM (Identity and Access Management):
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users.
- Allows for centralized control and shared access to your AWS account and/or AWS services
- By default, when you create a user, they have NO permissions to do anything
- Root account has full admin access upon account creation
- Not region specific; can be shared between all regions
- Granular permission sets for AWS resources
- Includes Federation Integration, which taps into Active Directory, Facebook, LinkedIn, etc. for authentication
- Multi-factor authentication support
- Allows configuration of temporary access for users, devices and services
- Set up and manage password policy and password rotation policy for IAM users
- Integration with many different AWS services
- Supports PCI DSS compliance
Access can be applied to:
- Users - end users (people)
- Groups - a collection of users under one set of permissions
- Roles - assigned to AWS resources, specifying what the resource (such as EC2) is allowed to access on another resource (S3); roles must be used because policies cannot be directly attached to AWS services
- Policies - documents that define one or more permissions; policies can be applied to users, groups and roles
IAM provides pre-built policy templates to assign to users and groups. Examples include:
- Administrator access: full access to ALL AWS resources
- Power user access: admin access, except it does NOT allow user/group management
- Read only access: only view AWS resources (i.e. a user can only view what is in an S3 bucket)
- You can assign up to 10 policies to a single group
- Policy documents must have a version and a statement in the body; the statement must consist of an Effect (Allow, Deny), Actions (which actions to allow/deny, such as * for all actions), and Resources (the affected resources, such as * for all resources)
- All resources can share the same policy document
There are 3 different types of roles:
- Service roles
- Cross-account access roles: used when you have multiple AWS accounts and another AWS account must interact with the current AWS account
- Identity provider access roles: roles for Facebook or similar identity providers
- In order for a new IAM user to be able to log into the console, the user must have a password set
- By default, a new user's access is only accomplished through the use of the access key/secret access key
- If the user's password is a generated password, it will also only be shown at the time of creation
- A customizable console sign-in link can be configured on the main IAM page (aws.)
- Customizable console sign-in links must be globally unique; if a sign-in link name is already taken, you must choose an alternative
- The root account is the email address that you used to register your account; it is recommended that the root account is not used for login, and it should be secured with Multi-Factor Authentication (MFA)
- You can create Access Keys / Secret Access Keys to allow IAM users (or service accounts) to be used with the AWS CLI or API calls
- An Access Key ID is equivalent to a user name; a Secret Access Key is equivalent to a password
- Access Key / Secret Access Key credentials are automatically generated for new users
- When creating a user's credentials, you can only see/download the credentials at the time of creation, not after
- Access Keys can be retired, and new ones created, in the event that secret access keys are lost
- To create a user password: once the users have been created, choose the user you want to set the password for and, from the User Actions drop list, click Manage Password. Here you can opt to create a generated or custom password. If generated, there is an option to force the user to set a custom password on next login. Once a generated password has been issued, you can see the password, the same as with access keys: it is shown once only
- To attach a policy: click on Policies from the left side menu and choose the policies that you want to apply to your users. When you pick a policy that you want applied to a user, select the policy, and then from the top Policy Actions drop menu choose Attach and select the user that you want to assign the policy to
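A minimal sketch of the policy-document structure described above (a Version plus a Statement with Effect/Action/Resource), created and attached via boto3 rather than the console; the policy and group names are hypothetical.

import json
import boto3

iam = boto3.client("iam")

# Policy document: Version + Statement, each statement = Effect/Action/Resource
read_only_s3 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": "*",
    }],
}

policy = iam.create_policy(
    PolicyName="S3ReadOnlyExample",  # hypothetical
    PolicyDocument=json.dumps(read_only_s3),
)

# Attach the managed policy to a group (a group can hold up to 10 policies)
iam.attach_group_policy(
    GroupName="developers",  # hypothetical
    PolicyArn=policy["Policy"]["Arn"],
)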
Default Limits:
- Groups per account: 100
- Instance profiles: 100
- Roles: 250
- Server Certificates: 20
- Users: 5000
- Number of policies allowed to attach to a single group: 10
For additional information about IAM Limits, see Limits in IAM entities and objects
Security Token Service (STS):
AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
- STS allows you to create temporary security credentials that grant trusted users access to your AWS resources
- These temporary credentials are for short-term use, and can be active for a few minutes to several hours; once expired, they can no longer be used to access your AWS resources
- When requested through an STS API call, credentials are returned with three components: a Security Token, an Access Key ID, and a Secret Access Key
STS Benefits:
- No distributing or embedding long-term AWS security credentials in an application
- Grant access to AWS resources without having to create an IAM identity for them
- The basis for IAM roles and identity federation
- Since the credentials are temporary, you don't have to rotate or revoke them; you decide how long they are active for
When to use STS:
- Identity Federation:
  - Enterprise identity federation (authenticate through your company's network); STS supports the Security Assertion Markup Language (SAML), which allows for use of Microsoft Active Directory (or your own solution)
  - Web identity federation (3rd party identity providers, i.e. Facebook, Google, Amazon)
- Roles for cross-account access: used by organizations that have more than one AWS account
- Roles for Amazon EC2 (and other AWS services): grant access to applications running on an EC2 instance so they can access other AWS services without having to embed credentials
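A short cross-account sketch of the STS flow above: AssumeRole returns the three temporary components (access key ID, secret access key, session token), which are then used to build a client. The role ARN and account ID are hypothetical.

import boto3

sts = boto3.client("sts")

# Request temporary credentials for a role in another account (hypothetical ARN)
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ReadOnly",
    RoleSessionName="audit-session",
    DurationSeconds=3600,  # short-lived: expires after an hour
)

creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Use the temporary credentials instead of long-term keys
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])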
Directory Service:
AWS Directory Service makes it easy to set up and run Microsoft Active Directory (AD) in the AWS cloud, or connect your AWS resources with an existing on-premises Microsoft Active Directory.
- Not covered as an exam topic currently
WAF (Web Application Firewall):
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
- Allows customers to secure their cloud infrastructure
- Not covered as an exam topic currently
Default Limits:
- Web ACLs per account: 10
- Rules per account: 50
- Conditions per account: 50
For additional information about Web Application Firewall Service Limits, see Limits in Amazon WAF
Cloud HSM (Hardware Security Module):
Amazon Hardware Security Module (HSM) is a cloud service that uses dedicated HSM appliances within the AWS Cloud to help you design and deploy stringent data security solutions that meet regulatory and compliance requirements. The hardware appliance is designed to provide secure key storage and a cryptographic solution that utilizes tamper-proof hardware modules, and it enables you to use the key material without exposing it to anyone else. You are in complete control of access to the cryptographic keys that can then be used to encrypt and decrypt data. Amazon manages and maintains the hardware but does not have access to your keys.
Designed for VPC:
- You cannot configure CloudHSM without a VPC
- The server on which your application and the HSM client are running must have network connectivity to the HSM. Ideally, you can configure your application to be in the same VPC, or use VPC peering, a VPN connection, or Direct Connect
- You can connect CloudHSM instances in your VPC to your datacenter using the VPN capability built into VPC or with AWS Direct Connect
Best Practice Recommendations:
- Use two or more HSM appliances in different Availability Zones to provide High Availability; the failure of a single HSM appliance in a non-HA design can result in the permanent loss of keys and data
- A High Availability design ensures that CloudHSM appliances are grouped together to form one logical device, and service is maintained even if one or more HSMs are not available
- Do not initialize the HSM unless you have the keys backed up or the keys are not required, as initializing destroys the key material inside the HSM
- CloudHSM provides additional protection over KMS, and is best for cases where there are strict contractual or regulatory requirements
Default Limits:
- HSM appliances: 3
- High-availability partition groups: 20
- Clients: 800
KMS (Key Management Service):
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys.
- Not covered as an exam topic currently
Management Tools:
CloudFormation:
Amazon CloudFormation is essentially 'Infrastructure as Code', where you can design and deploy a collection of related AWS resources that can then be customized into a template for repeat deployment. You can then provision and/or update your resources in an orderly fashion with predictable outcomes. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.
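A minimal 'infrastructure as code' sketch in the spirit of the description above: a one-resource template deployed with boto3. The stack name and the single bucket resource are hypothetical.

import json
import boto3

# Smallest possible template: a single S3 bucket resource
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {"Type": "AWS::S3::Bucket"},
    },
}

cfn = boto3.client("cloudformation")

# Deploying the same template repeatedly gives predictable, repeatable stacks
cfn.create_stack(
    StackName="demo-stack",  # hypothetical
    TemplateBody=json.dumps(template),
)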
CloudWatch:
Amazon provides a monitoring service to oversee its core resources, such as EC2 instances, DynamoDB databases, Elastic Load Balancers and various other services. Amazon CloudWatch can be used to collect and track metrics, aggregate log files, set alarms, and automatically react to changes in your AWS resources. By using CloudWatch, you have system-wide visibility into resource utilization, application performance, and operational health.
- By default, all EC2 instances have basic monitoring, which is a 5-minute poll; standard monitoring is on by default (5-minute intervals)
- Detailed CloudWatch monitoring gives you more graphs at a 1-minute poll interval; detailed monitoring costs $3.50 per instance per month
- CPU/Disk/Network In/Status metrics are available; RAM is a host-level metric and is not available on a per-instance basis
- Events can trigger Lambda functions or SNS events based on criteria, which helps you respond to state changes within your AWS resources
- Logs help you to aggregate, monitor, and store log data; logs can go down to the application level, but this requires an agent to be installed
- Alarms can be set against any available metric, and will perform an alert/notification and an action when the alarm criteria are met (a small example follows the limits tables below)
- CloudWatch is used for performance monitoring, not auditing; that is what CloudTrail is for
- You can create dashboards with custom widgets to keep track of what is happening in your environment
- OS-level metrics require a third-party script (Perl, provided by AWS) to be installed: memory utilization, memory used, and memory available; disk swap utilization; disk space utilization, disk space used, and disk space available
CloudWatch Resource Default Limits (the max number of operation requests you can make per second without being throttled):
- DescribeAlarms: 3 transactions per second (TPS)
- GetMetricStatistics: 400 TPS
- ListMetrics: 25 TPS
- PutMetricAlarm: 3 TPS
- PutMetricData: 150 TPS
CloudWatch Events Default Limits:
- Rules: 50 per account
CloudWatch Logs Default Limits:
- CreateLogGroup: 500 log groups/account/region (if you exceed your log group limit, you get a ResourceLimitExceeded exception)
- DescribeLogStreams: 5 TPS/account/region (if you experience frequent throttling, you can request a limit increase)
- FilterLogEvents: 5 TPS/account/region (this limit can be changed only in special circumstances)
- GetLogEvents: 5 TPS/account/region (we recommend subscriptions if you are continuously processing new data; if you need historical data, we recommend exporting your data to Amazon S3. This limit can be changed only in special circumstances)
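A sketch of an alarm against a standard EC2 metric, per the notes above: 5-minute periods match basic monitoring, and the alarm action is a hypothetical SNS topic ARN.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the average CPU of one instance stays above 80% for two 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,                 # basic monitoring = 5-minute datapoints
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # hypothetical
)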
Cloud Trail:
AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you.
- Provides a way for customers to audit what people are doing on the platform in your account
- Not covered as an exam topic currently
OpsWorks:
AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef.
- Configuration management service which uses Chef in the background
- Consists of recipes to maintain a consistent state
- If you see the terms chef, recipes or cook books, think OpsWorks
Config:
AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.
- Provides customers with a configuration history, change notifications, and inventory
- Can perform tasks such as ensuring that all EBS volumes are encrypted, etc.
Service Catalog:
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS.
- Create and manage catalogs of services you are allowed to use on AWS
- Not covered as an exam topic currently
Trusted Advisor:
An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment; Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.
- Automated service that scans the customer environment, offers advice on how to save money and lock down resources, and reports security vulnerabilities
Application Services:
API Gateway:
Amazon API Gateway is a managed service that enables you to publish and manage APIs. You can then enable your applications to access data, business logic, and functionality from your back-end services, which can include applications running on EC2 instances or Lambda functions. The API Gateway itself accepts incoming API calls and forwards them to the backend services, and thus manages all tasks involved in processing thousands of concurrent API calls.
- API Gateway is a fully managed service that allows you to create and manage your own APIs for your application
- API Gateway acts as a "front door" for your application, allowing access to data/logic/functionality from your back-end services
API Gateway Main Features:
- Build RESTful APIs
- Deploy APIs to a "Stage" (different envs, i.e. dev, beta, production); each stage can have its own throttling, caching, metering, and logging
- Create a new API version by cloning an existing one; you can create and work on multiple versions of an API (API version control)
- Roll back to previous API deployments; a history of API deployments is kept
- Custom domain names can point to an API or stage
- Create and manage API keys for access, AND meter usage of the API keys through Amazon CloudWatch Logs
- Set throttling rules based on the number of requests per second (for each HTTP method); requests over the limit are throttled (HTTP 429 response)
- Security using Signature v4 to sign and authorize API calls; temporary credentials are generated through Amazon Cognito and the Security Token Service (STS)
Benefits of API Gateway:
- Ability to cache API responses
- DDoS protection via CloudFront
- SDK generation for iOS, Android, and JavaScript
- Supports Swagger (a very popular framework of API dev tools)
- Request/response data transformation (i.e. JSON in to XML out)
Default Limits:
- APIs per account: 60
- API keys per account: 500
- Client certificates per account: 60
- Throttle rate: 1K requests per second (rps) with a burst limit of 2K rps
- Usage plans per account: 300
- Custom authorizers per API: 10
- Resources per API: 300
- Stages per API: 10
AppStream:
Amazon AppStream enables you to stream your existing Windows applications from the cloud, reaching more users on more devices, without code modifications.
- AWS version of XenApp
- Stream Windows apps from the cloud
- Not covered as an exam topic currently
CloudSearch:
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application.
- Makes it simple to manage and scale search across your entire application
- Not covered as an exam topic currently
Elastic Transcoder:
Amazon Elastic Transcoder is a web service that enables you to transcode media files (audio or video) to a particular format, size and quality. This enables you to produce media files for playback on a wide range of devices, such as mobiles, tablets, desktop computers and televisions. The transcoding process involves accepting input files from an S3 bucket, transcoding them, and writing the resulting files to another S3 bucket.
Key steps in transcoding data include:
- Create a transcoding pipeline which specifies the input S3 bucket, the output S3 bucket and the storage class
- Specify an Identity and Access Management (IAM) role that is used by the service to access your files
- Create a transcoding job by specifying your input files, output file and the transcoding preset to use, or specify a custom preset if you have created one
Four core components:
- Jobs
- Pipelines
- Presets
- Notifications
Pay based on the minutes that you transcode and the resolution at which you transcode.
Default Limits:
- Pipelines per region: 4
- User-defined presets: 50
- Max no. of jobs processed simultaneously by each pipeline: 20 in US-EAST (VA), US-WEST (Oregon) and EU (Ireland); 12 in all other regions
For additional information about Elastic Transcoder Limits, see Limits in Amazon Elastic Transcoder
SES (Simple E-Mail Service):
Amazon Simple Email Service (Amazon SES) is a cost-effective email service built on the reliable and scalable infrastructure that Amazon developed to serve its own customer base. With Amazon SES, you can send and receive email with no required minimum commitments - you pay as you go, and you only pay for what you use.
- Not covered as an exam topic currently
Default Limits:
- Daily sending quota: 200 messages per 24-hour period
- Maximum send rate: 1 email per second
- Recipient address verification: all recipient addresses must be verified
SQS (Simple Queue Service): the most important service going into the exam
Amazon Simple Queue Service is a managed queuing service that makes it possible to decouple the components of a cloud application. The service is reliable and highly scalable, and you can easily move data between components performing tasks; messages won't be lost, and it isn't required for each component to be available at all times. SQS makes building distributed, decoupled applications easy, and it can be used in conjunction with EC2 and other AWS services. SQS stores copies of messages on multiple servers for redundancy and high availability. SQS ensures delivery of each message at least once, and you can have multiple readers and writers interacting with the same queue. SQS now offers two types of queues.
These are Standard queues and First-In-First-Out (FIFO) queues.
Standard Queues:
- High throughput, nearly unlimited transactions per second
- Decouple live user requests from backend processing work
- Allocate tasks to multiple worker nodes
- Batch messages for future processing
FIFO Queues:
- Limited throughput, 300 messages per second
- Ensure user-entered commands are executed in the right order
- Display the correct product price by sending price modifications in the right order - stock exchange
- Prevent a customer from placing an order before registering for an account
- Used to allow customers the ability to decouple infrastructure components
- Very first service AWS released, even older than EC2
SQS Workflow:
- Generally a "worker" instance will "poll" a queue to retrieve waiting messages for processing
- Auto Scaling can be applied based on queue size, so that if a component of your application has an increase in demand, the number of worker instances can increase
- Messages can contain up to 256KB of text in any format
- Acts as a buffer between the component producing and saving data, and the component receiving and processing the data
- Ensures delivery of each message at least once, and supports multiple readers and writers interacting with the same queue; SQS cannot guarantee that it will not create duplicates of a message
- A single queue can be used simultaneously by many distributed application components, with no need for those components to coordinate or communicate with each other
- Will always be available and deliver messages
- Does not guarantee FIFO delivery of messages (standard queues); messages can be delivered multiple times and in any order
- If sequential processing is a requirement, sequencing information can be placed in each message so that message order can be preserved
- SQS is pull-based: consumers always poll messages from the queue asynchronously
- Retention period of 14 days
- Default visibility timeout is 30 seconds; the maximum is 12 hours
- If you find that the visibility timeout period is insufficient to fully process and delete the message, it can be extended using the ChangeMessageVisibility action; when ChangeMessageVisibility is called to set an extended timeout period, SQS restarts the timeout period using the new value
- Engineered to provide delivery of all messages at least once
- Default short polling will return messages immediately if messages exist in the queue; with short polling, a subset of SQS servers is polled, so it can happen that a message is not found on a given request
- Long polling will check the queue for 1-20 seconds at a time and return a response only after that time period has expired; long polling returns immediately with all the messages if the queue is not empty, and waits only if and as long as there is nothing in the queue. Long polling reduces the number of requests, CPU cycles and empty responses, decreasing costs
- 256KB message sizes (originally 64KB)
- Billed in 64KB chunks: first million messages free, then $0.50 per additional million thereafter
- A single request can have from 1 to 10 messages, up to a max payload of 256KB
- Each 64KB chunk of payload is billed as 1 request; if you send a single API request with a 256KB payload, you will be billed for 4 requests (256/64 KB chunks)
- "Decouple" = SQS on the exam
- Auto-scaling supported
- Message prioritization is not supported
Process:
- Component 1 sends a message to the queue
- Component 2 retrieves the message from the queue, which starts the visibility timeout period; the visibility timer only starts when the message is picked up from the queue
- Component 2 processes the message and then deletes it from the queue during the visibility timeout period
- If the visibility timeout period expires, the message will stay in the queue and not be deleted; the process is only complete when the queue receives the command to delete the message from the queue
SWF vs SQS:
- SQS has a retention period of 14 days; SWF allows up to 1 year for workflow executions
- SWF presents a task-oriented API; SQS presents a message-oriented API
- SWF ensures a task is assigned only once and never duplicated; with SQS you need to handle the potential for duplicate messages
- SWF keeps track of all the tasks and events in an application; with SQS you need to implement application-level tracking, especially if you have multiple queues
SNS vs SQS:
- Both are messaging services in AWS
- SNS - push
- SQS - polls (pulls)
For additional information about SQS Limits, see Limits in Amazon SQS
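A worker-loop sketch tying together the polling, visibility-timeout and deletion behaviour above: long polling (up to 20 seconds), extending the visibility timeout mid-processing, and deleting the message to complete it. The queue URL and the process function are hypothetical.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/work-queue"  # hypothetical

while True:
    # Long poll: wait up to 20s instead of hammering the queue with empty receives
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,      # a single request can return 1-10 messages
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        # Need more time than the visibility timeout allows? Extend it.
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
            VisibilityTimeout=300,   # restart the clock at 5 minutes
        )
        process(msg["Body"])         # hypothetical work function
        # Deleting the message is what completes the processing cycle
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])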
SWF (Simple Workflow Service):
Amazon SWF is a web service that manages the coordination of workflows across distributed systems, which can involve any number of activities, to build scalable and resilient applications. It enables the coordination of activities and their execution across multiple computing devices and across different AWS regions. These workflows can be carried out sequentially or in parallel. Amazon SWF maintains individual execution states to enable the seamless flow of activities, and this can also involve human interaction as a component that performs set tasks to help complete the workflow process for a given application. A series of tasks represents a workflow, which can spread across different distributed systems and be coordinated in a manner that ensures the application functions as a whole. In addition, key points to note include:
- Activities within a workflow can run sequentially, in parallel, synchronously and asynchronously
- Activities can be carried out in different locations
- Build, run and scale background jobs or tasks that have sequential steps
- A way to process human-oriented tasks using a framework
- Workflow retention is always shown in seconds (3.1536E+07 seconds)
- Ensures a task is assigned only once and is never duplicated; with SQS, duplicate messages are allowed and must be handled
- Keeps track of all tasks and events in an application; SQS would need an implementation of a custom application-level tracking mechanism
- A collection of workflows is referred to as a domain; domains isolate a set of types, executions, and task lists from others within the same account
- You can register a domain by using the AWS console or the RegisterDomain action in the SWF API; domain parameters are specified in JSON format
SWF Actors:
- Workflow starters - an application that can initiate a workflow
- Deciders - control the flow or coordination of activity tasks, such as concurrency or scheduling, in a workflow execution; if something has finished (or failed) in a workflow, a decider decides what to do next
- Activity workers - programs that interact with SWF to get tasks, process received tasks, and return the results
Amazon SWF interacts with its actors as discussed above by assigning them tasks.
There are three types of tasks:
- Activity tasks
- Lambda tasks
- Decision tasks
SWF also:
- Brokers the interactions between workers and the decider; allows the decider to get consistent views into the progress of tasks and to initiate new tasks in an ongoing manner
- Stores tasks, assigns them to workers when they are ready, and monitors their progress
- Ensures that a task is assigned only once and is never duplicated
- Maintains the application state durably; workers and deciders don't have to keep track of the execution state and can run independently, with the ability to scale quickly
- Is designed to help users coordinate synchronous and asynchronous tasks
For additional information about SWF Limits, see Limits in Amazon SWF
Troubleshooting:
EC2 Troubleshooting:
- Connectivity issues to an EC2 instance: the correct ports on the security group may not be open
- Cannot attach an EBS volume to an EC2 instance: EBS volumes must live in the same Availability Zone as the EC2 instance they are to be attached to; you can create a snapshot from the volume and launch the volume in the correct Availability Zone
- Cannot launch additional instances: you have probably reached the EC2 limit and need to contact AWS to increase the limit
- Unable to download package updates: the EC2 instance may not have a public/Elastic IP address, and/or does not belong to a public subnet
- Applications seem to slow down on T2 micro instances: T2 micro instances utilize CPU credits (for "burstable" processing), so chances are your application is using too much processing power and needs a larger instance or a different instance type
- AMI unavailable in other regions: AMIs are only available in the regions in which they were created; an AMI can be copied to another region but will receive a new AMI ID
- "Capacity error" when attempting to launch an instance in a placement group: start and stop all the instances in the placement group (AWS tries to locate them as close as possible)
VPC Troubleshooting:
- New EC2 instances are not automatically being assigned a public IP address: modify the Auto-Assign Public IP setting on the subnet
- NAT Gateway is configured but instances inside a private subnet still cannot download packages: you need to add a 0.0.0.0/0 route to the NAT Gateway on the route table for the private subnets
- Traffic is not making it to the instances even though the security group rules are correct: check the Network Access Control Lists to ensure the proper ports from the proper sources are open (also check your IGW and route table settings)
- Error when attempting to attach multiple internet gateways to a VPC: only one internet gateway can be attached to a VPC at any given time
- Error when attempting to attach multiple Virtual Private Gateways to a VPC: only one Virtual Private Gateway can be attached to a VPC at any given time
- VPC security group (for EC2 instances) does not have enough rules for the required application: assign the EC2 instance to multiple security groups
- Cannot SSH/communicate with resources inside of a private subnet: either you have not set up a VPN, or you have not connected to an EC2 instance (bastion host) within the VPC to launch a connection from
Troubleshooting:
EC2 Troubleshooting:
- Connectivity issues to an EC2 instance: The correct ports on the security group may not be open.
- Cannot attach an EBS volume to an EC2 instance: EBS volumes must live in the same availability zone as the EC2 instance they are to be attached to. You can create a snapshot from the volume and launch a new volume in the correct availability zone.
- Cannot launch additional instances: You have probably reached the EC2 limit and need to contact AWS to increase the limit.
- Unable to download package updates: The EC2 instance may not have a public/Elastic IP address, and/or does not belong to a public subnet.
- Applications seem to slow down on T2 micro instances: T2 micro instances utilize CPU credits (for "burstable" processing), so chances are your application is using too much processing power and needs a larger instance or a different instance type.
- AMI unavailable in other regions: AMIs are only available in the regions in which they are created. An AMI can be copied to another region but will receive a new AMI ID.
- "Capacity error" when attempting to launch an instance in a placement group: Stop and start all the instances in the placement group (AWS tries to locate them as close as possible).
VPC Troubleshooting:
- New EC2 instances are not automatically being assigned a public IP address: Modify the Auto-Assign Public IP setting on the subnet.
- NAT Gateway is configured but instances inside a private subnet still cannot download packages: Add a 0.0.0.0/0 route to the NAT gateway on the route table for private subnets.
- Traffic is not making it to the instances even though security group rules are correct: Check the Network Access Control Lists to ensure the proper ports from the proper sources are open (also check your IGW and route table settings).
- Error when attempting to attach multiple internet gateways to a VPC: Only one internet gateway can be attached to a VPC at any given time.
- Error when attempting to attach multiple virtual private gateways to a VPC: Only one virtual private gateway can be attached to a VPC at any given time.
- VPC security group (for EC2 instances) does not have enough rules for the required application: Assign the EC2 instance to multiple security groups.
- Cannot SSH/communicate with resources inside of a private subnet: Either you have not set up a VPN, or you have not connected to an EC2 instance (bastion host) within the VPC to launch a connection from.
- Successful site-to-site VPN connection but unable to access extended resources: Add the on-premise routes to the Virtual Private Gateway route table.
- Failure to create a VPC peering connection between two VPCs in different regions: Peering connections can only be created between two VPCs in the same region.
ELB Troubleshooting:
- Load balancing is not occurring between instances in multiple availability zones: Make sure "Enable Cross-Zone Load Balancing" has been selected.
- Instances are healthy but are not registering as healthy with the ELB: Check the "health check" configuration to make sure you have selected the proper ping protocol, ping port, and ping path.
- The ELB is configured to listen on port 80, but traffic is not making it to the instances that belong to the ELB: You may have mistaken the "Listeners" for the security group. Listeners are not the same as security group rules; port 80 still needs to be open on the security group that the ELB is using.
- Access logs on web servers show the IP address of the ELB, not the source traffic: Enable Access Logs to Amazon S3 (found under "attributes").
- Unable to add instances from a specific subnet to the ELB: Most likely the subnet that the instance lives in has not been added to the ELB's configuration.
Auto Scaling Troubleshooting:
- An Auto Scaled instance continues to start and stop (or create/terminate) in short intervals: The scale-up and scale-down thresholds may be too close to each other. Either raise the scale-up threshold or lower the scale-down threshold.
- Auto Scaling does not occur even though scaling policies are configured correctly: The "max" number of instances set in the Auto Scaling group may have been reached.
Developer Tools:
CodeCommit:
AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories.
- AWS implementation of Git
- Not covered as an exam topic currently
CodeDeploy:
AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises.
- Automates code deployments; AWS CI/CD service
- Not covered as an exam topic currently
CodePipeline:
AWS CodePipeline is a continuous delivery service for fast and reliable application updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
- Build, test, and deploy code based on commits
- Not covered as an exam topic currently
Mobile Services:
Mobile Hub:
AWS Mobile Hub lets you easily add and configure features for your mobile apps, including user authentication, data storage, backend logic, push notifications, content delivery, and analytics. After you build your app, AWS Mobile Hub gives you easy access to testing on real devices, as well as analytics dashboards to track usage of your app, all from a single, integrated console.
- Build, run, and test usage of your mobile applications
- Not covered as an exam topic currently
Cognito:
Amazon Cognito lets you easily add user sign-up and sign-in to your mobile and web apps. With Amazon Cognito, you also have the option to authenticate users through social identity providers such as Facebook, Twitter, or Amazon, with SAML identity solutions, or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline.
You can then synchronize data across a user's devices so that their app experience remains consistent regardless of the device they use.
- Save mobile data like game states or preferences
- Not covered as an exam topic currently
Device Farm:
AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. View video, screenshots, logs, and performance data to pinpoint and fix issues before shipping your app.
- Enables customers to test their mobile applications against real smartphones in the cloud
- Not covered as an exam topic currently
Mobile Analytics:
With Amazon Mobile Analytics, you can measure app usage and app revenue. By tracking key trends such as new vs. returning users, app revenue, user retention, and custom in-app behavior events, you can make data-driven decisions to increase engagement and monetization for your app.
- Measure mobile application usage and revenue, track new/returning users, etc.
- Not covered as an exam topic currently
SNS (Simple Notification Service):
Amazon Simple Notification Service (SNS) is a web service that enables you to send messages to endpoints that subscribe to receive them. Publishers and subscribers, otherwise known as producers and consumers, are the primary terms used when describing how Amazon SNS functions. Publishers produce and send messages to a "topic" asynchronously, and subscribers subscribe to those topics in order to consume those messages.
Common Amazon SNS Scenarios:
- Monitoring applications and services
- Providing time-sensitive information
- Delivering messages at regular intervals and on the basis of certain triggers
- Event update notification services, for example, Auto Scaling notifications
'Fanout' scenario:
- End customer's email address for order placement notification
- Accounts department's accounting software to initiate the dispatch and invoicing process
- Data warehouse application for analytics
Application and system alerts:
- Push email and text messaging
- Mobile push notifications
A web service that allows customers to set up, operate, and send notifications from the cloud.
Can push to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.
Follows the publish-subscribe (pub-sub) messaging paradigm, with notifications delivered to clients using a push mechanism that eliminates the need to poll for updates.
Can deliver notifications by SMS, email, SQS queues, or any HTTP endpoint.
SNS notifications can be used to trigger Lambda functions: when a message is published to an SNS topic that has a Lambda function subscribed to it, the function is invoked with the payload of the published message. The Lambda function receives the message payload as an input parameter and can manipulate the information in the message, publish the message to other SNS topics, or send the message to other AWS services.
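A minimal sketch of a Lambda handler consuming an SNS-triggered invocation as described above. The Records/Sns structure is the standard SNS event format; the print call is a placeholder for real processing.

    import json

    def lambda_handler(event, context):
        # SNS invokes the function with the published message in the event payload
        for record in event['Records']:
            message = record['Sns']['Message']    # the message body published to the topic
            subject = record['Sns'].get('Subject')
            print(f'Received from SNS: subject={subject} message={message}')
            # ...manipulate the message or forward it to other AWS services here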
Allows you to group multiple recipients using topics.
Topics: How you label and group different endpoints that you send messages to; the group of subscriptions that you send a message to (topics = communication channels).
Topics are access points that allow recipients to dynamically subscribe for copies of the notification.
One topic can support deliveries to multiple endpoint types; for example, iOS, Android, and SMS recipients can be grouped together.
When a message is published, SNS delivers appropriately formatted copies of your message to each subscriber.
Email notifications will be JSON formatted, not XML.
Subscriptions have to be confirmed.
Subscriptions expire after 3 days if they are not confirmed.
TTL is the number of seconds since the message was published; if the message is not delivered within the TTL, the message will expire.
To prevent messages from being lost, all messages published to SNS are stored redundantly across multiple AZs.
Instantaneous, PUSH-based delivery (no polling) --> SQS requires polling.
Simple API and easy integration with applications.
Flexible message delivery over multiple transport protocols.
Inexpensive, pay-as-you-go model.
The web-based AWS Management Console offers the simplicity of a point-and-click interface.
Can be used in conjunction with SQS to fan a single message out to multiple SQS queues (see the sketch below).
Remember:
- SNS - PUSH
- SQS - PULL (poll)
Subscribers (endpoints that a message is sent to):
- HTTP
- HTTPS
- Email
- Email-JSON
- SQS (currently only standard queues, NOT FIFO)
- Application
- Lambda
Messages can be customized for each of the available protocols.
Publishers (entities that trigger the sending of a message):
- Human
- S3 event
- CloudWatch alarm
Default limits:
- Topics: 100,000
- Account spend threshold for SMS: 50 USD
- Delivery rate for promotional SMS messages: 20 messages per second
- Delivery rate for transactional SMS messages: 20 messages per second
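A minimal boto3 sketch of the topic/subscribe/publish flow above, fanning one message out to an email address and an SQS queue. The topic name, email address, and queue ARN are placeholders.

    import boto3

    sns = boto3.client('sns', region_name='us-east-1')

    # Create a topic (the communication channel / access point)
    topic_arn = sns.create_topic(Name='order-events')['TopicArn']

    # Subscribe two endpoint types to the same topic (fanout).
    # The email subscription must be confirmed within 3 days or it expires.
    sns.subscribe(TopicArn=topic_arn, Protocol='email',
                  Endpoint='ops@example.com')                            # placeholder address
    # Note: the queue also needs a policy that allows SNS to send messages to it.
    sns.subscribe(TopicArn=topic_arn, Protocol='sqs',
                  Endpoint='arn:aws:sqs:us-east-1:123456789012:orders')  # placeholder queue ARN

    # One publish; SNS pushes a formatted copy to every confirmed subscriber
    sns.publish(TopicArn=topic_arn, Subject='Order placed',
                Message='Order 42 was placed')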
Enterprise Applications:
Workspaces:
Amazon WorkSpaces is a fully managed, secure desktop computing service which runs on the AWS cloud. Amazon WorkSpaces allows you to easily provision cloud-based virtual desktops and provide your users access to the documents, applications, and resources they need from any supported device, including Windows and Mac computers, Chromebooks, iPads, Fire tablets, and Android tablets.
Amazon WorkSpaces is a managed, web-based Virtual Desktop Infrastructure (VDI) solution in the cloud. Rather than procure, design, and build your own virtual desktop infrastructure, you can start using Amazon WorkSpaces in a very short space of time to provision the necessary virtual desktops in the cloud with the required amount of compute, storage, and memory resources.
Access is granted using credentials set up by your AWS account administrator, or can be integrated with your existing Active Directory domain.
- A WorkSpace provides a bundle of compute resources, storage space, and software application access that allows a user to interact with it just as with a traditional desktop.
- Users can connect to a WorkSpace from any supported device (PC, Mac, Chromebook, iPad, Kindle Fire, or Android) using a free WorkSpaces client application.
- Can be integrated into Active Directory using federated services.
- Runs a Windows 7 experience, provided by Windows Server 2008 R2.
- Users can personalize their WorkSpace with their favorite settings for items such as wallpaper, icons, and shortcuts. This can be locked down by an administrator.
- By default you will be given local admin access so you can install your own applications.
- WorkSpaces are persistent.
- All data on the D:\ drive is backed up every 12 hours.
WorkDocs:
Amazon WorkDocs is a fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity.
- AWS version of Dropbox for the enterprise
- Not covered as an exam topic currently
WorkMail:
Amazon WorkMail is a secure, managed business email and calendar service with support for existing desktop and mobile email clients.
- AWS version of Exchange Server for email services
- Not covered as an exam topic currently
Internet of Things:
IoT (Internet of Things):
AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely.
- Not covered as an exam topic currently
Well-Architected Framework:
Consists of 5 pillars: Security, Reliability, Performance Efficiency, Cost Optimization, and Operational Excellence.
Security:
- Apply security at all layers
- Enable traceability
- Automate responses to security events
- Focus on securing your system
- Automate security best practices
- Encrypt your data both in transit and at rest using ELB, EBS, S3, and RDS
- Use IAM and MFA for privilege management
Security in the cloud has 4 areas:
Data Protection:
- Organize and classify your data into segments such as public, or available only to org/dept/user.
- Implement a least-privilege access system so people can only access what they need.
- Encrypt everything where possible, whether at rest or in transit.
- Customers maintain full control of their data.
- AWS makes it easy to manage keys using KMS or CloudHSM.
- Detailed logging is available that contains important content such as file access and changes.
- Storage systems are designed for exceptional resiliency: S3 is designed for 11 nines of durability. If you store 10,000 objects on S3, you can on average expect to incur a loss of a single object once every 10,000,000 years.
- Versioning can protect against accidental overwrites, deletes, and similar harm.
- AWS never initiates the movement of data between regions.
Content placed in a region will remain in that region unless manually moved.
Privilege Management:
- Ensures that only authorized and authenticated users are able to access your resources.
- Mechanisms in place such as ACLs, role-based access controls, and password management (such as password rotation policies).
Infrastructure Protection:
- How you protect your data center: RFID controls, security guards, lockable cabinets, CCTV.
- Amazon handles all of the physical protection; the customer is really responsible for VPC protection.
- Enforce network- and host-level boundary protection.
- Enforce the integrity of the OS: updates, patches, and anti-virus.
Detective Controls:
Detect or identify a security breach; tools available to help with this are:
- CloudTrail
- CloudWatch
- AWS Config
- S3
- Glacier
Reliability:
- The ability of a system to recover from service or infrastructure outages/disruptions.
- The ability to dynamically acquire computing resources to meet demand.
- Test recovery procedures.
- Automatically recover from failure.
- Scale horizontally to increase aggregate system availability.
- Stop guessing capacity.
Consists of 3 areas:
Foundations:
- Make sure you have the prerequisite foundations in place.
- Consider the size of communication links between HQ and data centers; mis-provisioning connections could result in 3-6 month upgrade time frames.
- AWS handles most of the foundations for you. The cloud is designed to be essentially limitless, meaning that AWS handles the networking and compute requirements itself. AWS sets service limits to prevent the accidental spin-up of too many resources.
Change Management:
- Be aware of how change affects a system so you can plan proactively around it.
- Monitoring allows you to detect any changes to your environment and react.
- Traditionally, change control is done manually and carefully coordinated with auditing.
- CloudWatch can be configured to monitor your environment, and services such as Auto Scaling can automate change in response to changes in your production environment.
Failure Management:
- Always architect your system with the assumption that failure will occur.
- Become aware of these failures, how they occurred, and how to respond to them, and then plan how to prevent them in the future.
Performance Efficiency:
- Focuses on how to use computing resources efficiently to meet requirements, and how to maintain that efficiency as demand changes and technology evolves.
- Democratize advanced technologies (consume as a service vs. set up and maintain).
- Go global in minutes.
- Use serverless architectures.
- Experiment more often.
Consists of 4 areas:
Compute:
- Choose the right kind of server. AWS servers are virtualized, and at the click of a button you can change server types.
- You can even switch to running with no servers at all, and use Lambda.
Storage:
- The optimal storage solution for your environment depends on access methods (block, file, or object), patterns of access, throughput, frequency of access, frequency of update, availability constraints, and durability constraints.
- S3 has 11 nines of durability and cross-region replication.
- EBS offers different media such as magnetic, SSD, or provisioned IOPS SSD, and you can easily switch between them.
Databases:
- The optimal database solution depends on a number of factors: do you need database consistency, high availability, NoSQL, DR, relational tables?
- Lots of options: RDS, DynamoDB, Redshift, etc.
Space-Time Trade-off:
- Using services such as RDS to add read replicas reduces the load on your database and creates multiple copies of the data to help lower latency.
- You can use Direct Connect to provide predictable latency between HQ and AWS.
- Use the global infrastructure to have copies of your environment in regions closest to where your customer base is located.
- Use caching services such as ElastiCache or CloudFront to reduce latency.
Cost Optimization:
- Reduce costs to a minimum and use those savings for other parts of your business.
- Allows you to pay the lowest price possible while still achieving your business objectives.
- Transparently attribute expenditure.
- Use managed services to reduce the cost of ownership.
- Trade capital expense for operating expense.
- Benefit from economies of scale (AWS buys servers by the thousands).
- Stop spending money on data center operations.
Design Principles:
- Stop guessing your capacity needs.
- Test systems at production scale.
- Lower the risk of architecture change.
- Automate to make architectural experimentation easier.
- Allow for evolutionary architectures.
Comprised of 4 different areas:
Matched supply and demand:
- Align supply with demand. Don't over- or under-provision; instead, expand as demand grows.
- Auto Scaling or Lambda execute or respond when a request comes in.
- Services such as CloudWatch can help you keep track of what your demand is.
Cost-effective resources:
- Use the correct instance type.
- A well-architected system will use the most cost-efficient resources to reach the end business goal.
Expenditure awareness:
- You no longer need to get quotes for physical servers, choose a supplier, or have resources delivered and installed; you can provision things within seconds.
- Being aware of what each team is spending, and where, is crucial to any well-architected system.
- Use cost allocation tags to track this, as well as billing alerts and consolidated billing.
Optimizing over time:
- A service that you chose yesterday may not be the best service to be using today. Constantly re-evaluate your existing architecture.
- Subscribe to the AWS blog.
- Use Trusted Advisor.
Operational Excellence:
The operational excellence of a product workload is generally gauged by its agility, reliability, and performance. The best way forward is to standardize and manage these workloads on a routine basis using automation. AWS, under its Operational Excellence pillar, suggests six principles which help drive operational excellence:
- Perform operations with code
- Align operations processes to business objectives
- Make regular, small, incremental changes
- Test for responses to unexpected events
- Learn from operational events and failures
- Keep operations procedures current
White Paper Review:
6 Advantages of Cloud:
- Trade capital expense for variable expense
- Benefit from massive economies of scale
- Stop guessing about capacity
- Increase speed and agility
- Stop spending money running and maintaining data centers
- Go global in minutes
14 regions, each with a different number of AZs.
Storage devices use DoD 5220.22-M or NIST 800-88 to destroy data when a device has reached the end of its useful life.
All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.
VPC provides a private subnet within the cloud and the ability to use an IPsec VPN to provide an encrypted tunnel between the VPC and your data center.
AWS production is segregated from the AWS corporate network by means of a complex set of network security/segregation devices.
AWS provides protection against DDoS, man-in-the-middle attacks, IP spoofing, port scanning, and packet sniffing by other tenants.
AWS has a host-based firewall infrastructure that will not permit an instance to send traffic with a source IP or MAC address other than its own, which prevents IP spoofing.
Unauthorized port scans by EC2 customers are a violation of the Acceptable Use Policy. You may request permission to conduct vulnerability scans as required to meet your specific compliance requirements. Any pre-approved vulnerability scans must be limited to your own instances and must not violate the Acceptable Use Policy; you MUST request a vulnerability scan in advance.
Passwords for root or IAM user accounts logging into the console should be protected by MFA.
Use access keys to access AWS APIs (using the AWS SDKs, CLI, or REST/Query APIs).
Use SSH key pairs to log in to EC2 instances, or CloudFront signed URLs.
Use X.509 certificates to tighten the security of your applications/CloudFront via HTTPS.
Trusted Advisor inspects your environment and makes recommendations when opportunities exist to save money, improve system performance, or close security gaps.
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor.
The AWS firewall resides within the hypervisor layer, between the physical network and the instance's virtual interface. ALL packets must pass through this layer. An instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are separate hosts.
Physical RAM is separated using similar mechanisms.
Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data is never unintentionally exposed to another.
Memory allocated to guests is scrubbed (set to 0) by the hypervisor when it is unallocated from a guest. Memory is not returned to the pool of free memory available for new allocations until the memory scrub process has completed.
Virtual instances are completely controlled by you, the customer. You have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to your instances or the guest OS.
EC2 provides a complete firewall solution. The inbound firewall is configured in a default deny-all mode, and EC2 customers must explicitly open the ports needed to allow inbound traffic.
Encryption of sensitive data is generally a good practice, and AWS provides the ability to encrypt EBS volumes and their snapshots with AES-256. The encryption occurs on the servers that host the EC2 instances and EBS storage. The EBS encryption feature is only available on EC2's more powerful instance types (e.g., M3, C3, R3, G2).
SSL termination on the ELB is supported and recommended.
X-Forwarded-For headers, when enabled, pass the real source IP from the ELB to the web servers.
You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby.
Once deployed, you can connect this equipment to AWS Direct Connect using a cross-connect.
Using 802.1q VLANs, dedicated connections can be partitioned into multiple virtual interfaces. This allows you to use the connection to access public resources, such as objects stored in S3 using public IP address space, and private resources, such as EC2 instances running within the VPC's private IP space, while maintaining network separation between the public and private environments.
AWS management re-evaluates the strategic business plan at least bi-annually.
AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities. These scans do NOT include customer instances.
External vulnerability threat assessments are performed regularly by independent security firms, and their findings are passed to management.
Data Center Security:
- State-of-the-art electronic surveillance and multi-factor access control
- Staffed 24x7 by security guards
- Access is authorized on a least-privilege basis
Compliance:
- SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70 Type II)
- SOC 2
- SOC 3
- FISMA, DIACAP, and FedRAMP
- PCI DSS Level 1
- ISO 27001
- ISO 9001
- HIPAA
- Cloud Security Alliance (CSA)
- Motion Picture Association of America (MPAA)
- ITAR
- FIPS 140-2
- DSS 1.0
Data Security:
Shared security model.
AWS:
- Responsible for securing the underlying infrastructure.
- Responsible for protecting the global infrastructure that runs all of the services offered on the AWS cloud.
- The infrastructure is comprised of the hardware, software, networking, and facilities that run AWS services.
- Responsible for the security configuration of its products that are considered managed services, such as DynamoDB, RDS, Redshift, Elastic MapReduce, Lambda, and WorkSpaces.
User:
- Responsible for anything put on the cloud: EC2, VPC, and S3 security configuration and management tasks.
- Account management (MFA, SSL/TLS, CloudTrail API/user activity logging).
QCM:
By default, EBS root volumes are set to 'Delete on Termination', meaning they persist only if you instruct them to when created.
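A minimal boto3 sketch of overriding the root volume's Delete on Termination flag at launch; the AMI ID is a placeholder, and the root device name varies by image.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Launch an instance whose root EBS volume persists after termination
    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',   # placeholder AMI ID
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[{
            'DeviceName': '/dev/xvda',     # root device name; depends on the AMI
            'Ebs': {'DeleteOnTermination': False},  # override the default of True
        }],
    )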
Route 53 supports zone apex records (naked domain names).
Common OLTP (OnLine Transaction Processing) systems are: MySQL, PostgreSQL, Amazon Aurora, and Oracle RDBMS; they handle all kinds of queries (read, insert, update, and delete).
A well-known OLAP (OnLine Analytical Processing) system is Amazon Redshift; mostly optimized for reading.
There is no charge for data replication from a primary to a secondary RDS instance; it's free.
Database snapshots: I/O operations are suspended for the duration of the snapshot; I/O operations to the DB are sent to a replica (if available) for the duration of the snapshot.
There are four levels of AWS premium support: Basic, Developer, Business, and Enterprise.
RDS: changes to the backup window take effect immediately.
RDS: Aurora stores 6 copies of your data by default.
EC2 and EMR allow root access (i.e., you can log in using SSH).
Glacier doesn't support AWS Import/Export.
You cannot select a specific AZ in which to place your DynamoDB table.
VPC subnets: a /20 CIDR block -> 4091 hosts, a /24 CIDR block -> 251 hosts, a /28 CIDR block -> 11 hosts (5 addresses are reserved in each subnet).
S3 objects stored using the Glacier option are only accessible through the S3 APIs or the S3 Management Console.
The "owner" refers to the identity and email address used to create the AWS account.
S3 buckets: only the owner of an S3 bucket can permanently delete a version.
A dedicated host (NOT an EC2 shared-tenancy instance) is required if you'd like to use your existing Windows Server licenses.
CloudWatch stores metrics for terminated EC2 instances or deleted Elastic Load Balancers for 2 weeks.
-------------------------------------------------
Stored Volumes enable you to store the bulk of your data locally on storage hardware and then create point-in-time backups which are asynchronously backed up to Amazon S3. Stored volumes provide you with an option to configure inexpensive off-site backups that you can recover locally or to an Amazon EC2 instance to enable DR features as part of your overall IT strategy. When you deploy a cached or stored volume gateway, you can create iSCSI storage volumes on your gateway.
For cross-region replication to work, you must enable versioning on both the source and target buckets and ensure that S3 has permission (via IAM) to replicate objects from the source bucket to the destination bucket on your behalf (see the sketch below).
ElastiCache for Redis can be configured in Multi-AZ mode to offer enhanced availability.
You have three SWF actors: Activity Workers, Workflow Starters, and Deciders.
The three options available to create IAM policies are: Copy an AWS Managed Policy, Policy Generator, and Create Your Own Policy.
You can use the reader endpoint to provide high availability for read-only queries from your DB cluster. You can place multiple Aurora Replicas in different Availability Zones and then connect to the reader endpoint for your read workload.
Federation with Active Directory uses the SAML 2.0 protocol, and AssumeRoleWithSAML returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response. This operation provides a mechanism for tying an enterprise directory service to role-based AWS access without user-specific credentials or configuration.
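A minimal boto3 sketch of the cross-region replication prerequisites above: versioning on both buckets, then a replication rule that assumes an existing IAM role. Bucket names and the role ARN are placeholders, and the rule uses the simple prefix-based schema.

    import boto3

    s3 = boto3.client('s3')

    # Versioning must be enabled on BOTH the source and destination buckets
    for bucket in ('my-source-bucket', 'my-dest-bucket'):          # placeholders
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={'Status': 'Enabled'},
        )

    # The role grants S3 permission to replicate objects on your behalf
    s3.put_bucket_replication(
        Bucket='my-source-bucket',
        ReplicationConfiguration={
            'Role': 'arn:aws:iam::123456789012:role/crr-role',     # placeholder role
            'Rules': [{
                'Prefix': '',                  # replicate the whole bucket
                'Status': 'Enabled',
                'Destination': {'Bucket': 'arn:aws:s3:::my-dest-bucket'},
            }],
        },
    )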
CloudFront is highly effective when delivering both static and dynamic content to users located across different locations. With the help of edge locations, content is made available with low latency and customers gain improved performance.
FIFO queues are limited to 300 transactions per second (TPS), compared to standard queues where you can get an almost unlimited number of transactions per second. FIFO queues are designed to be used when the order of messages being sent and received is critical and duplicates cannot be allowed.
The "fanout" scenario is when an Amazon SNS message is sent to a topic and then replicated and pushed to multiple Amazon SQS queues, HTTP endpoints, or email addresses.
The general identifiers of the standard SQS queue service are: Queue Name and URL, MessageID, and Receipt Handle.
Boot/root volumes of public AMIs are not encrypted, and there is no option to encrypt them at launch time. You need to create an AMI image of the EC2 instance, copy the AMI and enable encryption on the boot volume during the copy process, and then launch a new EC2 instance from the AMI.
You cannot use MFA-protected API access to control access for federated users, because the GetFederationToken API does not accept MFA parameters.
Fine-Grained Access Control (FGAC) gives a DynamoDB table owner a high degree of control over the data in the table. The owner can indicate who (which caller) can access which items or attributes of the table and perform what actions (read/write capability). FGAC is used with AWS Identity and Access Management (IAM), which manages the security credentials and the associated permissions.
Standard Amazon Snowball devices come in 80TB and 50TB formats.
-------------------------------------------------
A role is something that another entity can "assume," where the entity is another AWS resource like an EC2 instance. AWS resources cannot have permission policies directly applied to them, so to access another resource, they must "assume" a role and gain the permissions assigned to that role.
Managing user access rights is handled by IAM, NOT by VPCs.
Volumes created from an EBS snapshot must be initialized. Initialization occurs the first time a storage block on the volume is read, and performance can be impacted by up to 50%. You can avoid this impact in production environments by manually reading all the blocks.
Target groups are where we assign different sets of EC2 instances to receive traffic in an Application Load Balancer. Launch configurations and Auto Scaling groups can be used with either load balancing type, and CloudWatch Events are not used in the Application Load Balancer configuration.
An alias (a type "A" alias) allows you to point the domain to an AWS-specific endpoint, such as an ELB, a CloudFront distribution, or an S3 bucket (as opposed to just an IPv4 address); see the sketch below.
To use an S3 bucket for Route 53 DNS failover, the bucket name must match the domain name.
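A minimal boto3 sketch of creating a type "A" alias record pointing at an ELB, as described above. The hosted zone IDs and DNS names are placeholders; note the AliasTarget zone ID is the load balancer's own hosted zone, not your domain's.

    import boto3

    r53 = boto3.client('route53')

    r53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',          # placeholder: your domain's hosted zone
        ChangeBatch={'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'example.com.',    # zone apex records work with aliases
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': 'Z2EXAMPLE',   # placeholder: the ELB's hosted zone ID
                    'DNSName': 'my-elb-123456.us-east-1.elb.amazonaws.com.',
                    'EvaluateTargetHealth': False,
                },
            },
        }]},
    )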
An AWS VPN connection automatically has two parallel IPsec tunnels for redundancy.
Benefits of API Gateway include: the ability to cache API responses, DDoS protection via CloudFront, SDK generation for iOS, Android, and JavaScript, support for Swagger (a framework of API dev tools), and request/response data transformation.
SWF: A decision task is used to communicate (back to the decider) that a given task has been completed.
API Gateway is a fully managed service that allows you to create and manage your own APIs for your application. API Gateway acts as a "front door" for your application, allowing access to data/logic/functionality from your back-end services.
API Gateway: You can create lifecycle stages (dev, beta, production) to which to deploy APIs. Each stage can have its own throttling, caching, metering, and logging. You can also create a new API version by cloning an existing one. In addition, you can roll back to previous versions of an API.
System status checks report AWS hardware/software issues that we have no control over.
Troubleshooting connection issues to an EC2 instance: one of VPC Flow Logs' main benefits is troubleshooting why certain traffic is not reaching an EC2 instance.
Elastic Beanstalk is primarily used to deploy simple, single-tier applications, NOT complex, multi-tier applications.
Kinesis: consumers can include Redshift and S3, but also other services like DynamoDB or a real-time dashboard/Kinesis-enabled app.
EMR is a service that deploys EC2 instances based on the Hadoop framework; it also supports Apache Spark, HBase, Presto, and Flink.
ECS (to review):
- The ECS Agent is responsible for starting/stopping tasks. It also monitors tasks and resource utilization.
- The Task Definition is the blueprint for your application and defines items such as: 1) which ports should be open on the container instance, 2) which container image to use, 3) where to get the container image, and 4) what data volumes to use.
- Items such as: 1) which operating system the container should use and 2) which libraries should be included in the container are outlined in the Dockerfile.
- The container/Docker image, which is built from the Dockerfile, contains all the actual software, code, runtime, system tools, and libraries that will be used in the container.
- ECR is short for EC2 Container Registry. It is a repository service for storing container images.
-------------------------------------------------
Amazon S3 automatically scales and places files with similar namespaces together. By introducing a hash to create randomness in key names, you encourage S3 to spread out the data and therefore improve read performance and scalability.
Amazon SNS does not currently support forwarding messages to Amazon SQS FIFO queues. You can use SNS to forward messages to standard queues.
In the S3 Management Console, create a lifecycle rule and specify a desired expiration period against 'Clean up Incomplete Multipart Uploads' under the Action on Incomplete Multipart Uploads option (see the sketch below).
SWF: The decider is a program that controls the coordination of tasks, i.e., their ordering, concurrency, and scheduling, per the application logic.
Set up an IAM role to grant the required access; the steps are: create a new role in IAM, select AWS Service Role and choose Amazon EC2, attach the AmazonS3FullAccess policy, and create the role.
S3: for an object whose full name is newhires/experienceform.doc, the use of prefixes and delimiters helps manage objects, but the prefix becomes part of the actual key name.
To improve your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) on RDS: DB snapshots can be used to restore point-in-time backups of your database to a new instance. You can use Multi-AZ to enable quick failovers, and read replicas can ensure that users can continue to access (read) data, helping to spread load.
CloudFormation: Sometimes you might need to make changes to the running resources in a stack. You can generate a change set, which is a summary of your proposed changes. Change sets allow you to see how your changes might impact your running resources, especially critical resources, before implementing them.
Snowball: Using Amazon's shippable storage appliances, you can transfer massive amounts of data into the AWS Cloud, bypassing Internet bandwidth issues altogether.
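A minimal boto3 sketch of the lifecycle rule described above, aborting incomplete multipart uploads; the bucket name and the 7-day window are placeholders.

    import boto3

    s3 = boto3.client('s3')

    # Lifecycle rule: abort incomplete multipart uploads after 7 days,
    # the API equivalent of the console option described above.
    s3.put_bucket_lifecycle_configuration(
        Bucket='my-bucket',                       # placeholder bucket
        LifecycleConfiguration={'Rules': [{
            'ID': 'abort-stale-multipart-uploads',
            'Filter': {'Prefix': ''},             # apply to the whole bucket
            'Status': 'Enabled',
            'AbortIncompleteMultipartUpload': {'DaysAfterInitiation': 7},
        }]},
    )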
Snowball: The Trusted Platform Module (TPM) detects unauthorized modifications to the hardware, firmware, or software.
When you take a snapshot of an EBS volume that is in use, the snapshot will exclude data that is cached by the application and OS. This can cause a problem when taking snapshots of RAID arrays comprising two or more EBS volumes, where restoring from the snapshots can result in degraded integrity of the array. You need to ensure that when you take the snapshot of a RAID array, no data is being written to or read from those volumes. This means you need to create application-consistent snapshots of the RAID array by stopping applications from writing to the RAID array and flushing all cached data to the disks.
You can perform a custom action as Auto Scaling launches and terminates EC2 instances. This is enabled via a feature called lifecycle hooks, which lets you perform an action such as installing and configuring software on new launches, or downloading certain log files from an instance before termination. Adding lifecycle hooks to your Auto Scaling group gives you greater control over how instances launch and terminate; lifecycle hooks enable custom actions by pausing instances as Auto Scaling launches or terminates them. For example, while your newly launched instance is paused, you could install or configure software on it (see the sketch below).
When creating a custom VPC, the main route table is created, as well as a default security group and a default NACL. The NACL is set to allow all inbound and outbound traffic, but the security group only allows all outbound traffic. An IGW is NOT created by default.
-------------------------------------------------
S3 access logging: By default, logging is disabled. To enable logging, you add a logging configuration to the bucket whose access you wish to log, and grant the Amazon S3 Log Delivery group write permission on the bucket where you wish to store the logs.
Redshift: When you run a cluster with at least two compute nodes, data on each node is always mirrored on disks on another node, reducing the risk of data loss from disk failures.
AWS Trusted Advisor: The 5 categories are cost optimization, security, performance, fault tolerance, and service limits.
NAT Gateway is a managed service and requires very little configuration: you only need to specify the public subnet in which to place it and associate it with an EIP.
Time-based instances are run by AWS OpsWorks Stacks on a specified daily and weekly schedule. You can configure your stack to automatically adjust the number of instances to accommodate predictable usage patterns.
Storage Gateway: The File Gateway option uses the Network File System (NFS) protocol.
When creating a new volume from a snapshot you can change the volume type, so you can change from Magnetic to Provisioned IOPS.
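A minimal boto3 sketch of the lifecycle hook described above, pausing newly launched instances (in the Pending:Wait state) so custom setup can run before they enter service; the hook and group names are placeholders.

    import boto3

    autoscaling = boto3.client('autoscaling', region_name='us-east-1')

    autoscaling.put_lifecycle_hook(
        LifecycleHookName='install-software-on-launch',
        AutoScalingGroupName='my-asg',                       # placeholder ASG name
        LifecycleTransition='autoscaling:EC2_INSTANCE_LAUNCHING',
        HeartbeatTimeout=300,        # seconds to wait before applying the default result
        DefaultResult='CONTINUE',    # proceed if no completion signal arrives
    )

    # Once setup finishes, signal completion so the instance enters service:
    # autoscaling.complete_lifecycle_action(..., LifecycleActionResult='CONTINUE', ...)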
You need to assign the permission s3:PutAccelerateConfiguration to enable users to enable or disable S3 Transfer Acceleration on S3 buckets.
AWS CloudHSM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as compliant with the Payment Card Industry (PCI) Data Security Standard (DSS).
You can use Elastic Network Interfaces (ENIs) attached to your instances to create multi-homed instances that separate management traffic from other public traffic.
CloudWatch alarm states: an alarm can be in one of three states: OK, ALARM, or INSUFFICIENT_DATA.
ELB: Keep-alive allows your load balancer to reuse connections to your EC2 instances, which can reduce CPU utilization.
Redis enables you to create snapshots for point-in-time restores.
SNS: The token included in the confirmation message sent to endpoints on a subscription request is valid for 3 days.
-------------------------------------------------
To conduct a penetration test on your AWS estate, you need to get permission from Amazon first by raising a ticket.
How the Auto Scaling default termination policy works: the instance with the oldest launch configuration -> the instance closest to the next billing hour -> an instance at random.
Network access logging on a system: make use of OS-level logging tools such as iptables and log events to CloudWatch or S3.
Copying an AMI from region A to region B for DR: copy the AMI from region A, manually apply the launch permissions, user-defined tags, and Amazon S3 bucket permissions of the original AMI to the new AMI, and launch the instance.
Storage Gateway: Gateway-Cached volumes retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises.
Route 53 has a security feature that prevents internal DNS from being read by external sources. The workaround is to create an EC2-hosted DNS instance that does zone transfers from the internal DNS and allows itself to be queried by external servers.
-------------------------------------------------
By default, is data in S3 encrypted? NO, but it can be when the right APIs are called for SSE (server-side encryption).
After recent AWS updates, you can now migrate reserved instances from one AZ to another without having to sell and repurchase them.
An EC2 IAM role should be used to grant permissions when deploying EC2 instances, rather than storing IAM user credentials on the instances.
CloudHSM: the keys are lost if you did not keep a copy; restoring an HSM snapshot doesn't work.
If you own three Reserved Instances with the same instance type and Availability Zone, the billing system checks each hour to see how many total instances you have running that match those parameters. If it is three or fewer, you will be charged the Reserved Instance rate for each instance running that hour.
Amazon SQS: The visibility timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing a message; during this time, the consumer processes and deletes the message. If the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers and is received again.
If a message must be received only once, your consumer should delete it within the duration of the visibility timeout. Each queue starts with a default visibility timeout of 30 seconds; the maximum is 12 hours.
SQS: Increasing the visibility timeout for messages in an SQS queue does NOT help address concerns about the cost of the AWS resources the queue uses.
We need to use Auto Scaling to meet minimum requirements because otherwise, if an Availability Zone fails and takes an instance down with it, the only remaining instance would receive double the number of requests (its original amount, plus what was being distributed to the other instance). This not only creates a single point of failure, it can also overwhelm that single instance. Instead, with an Auto Scaling group, another instance gets automatically created to replace the unresponsive one and requests continue to be distributed between two separate instances.
By default, a request is denied, but this can be overridden by an allow. In contrast, if a policy explicitly denies a request, that deny can't be overridden.
-------------------------------------------------
DynamoDB: Provisioned throughput calculations, read capacity units, write capacity units, key types and usage, indexes, query vs. scan.
SQS: Limits, characteristics (queues do not guarantee order), message dimensions (a message can contain up to 256KB of text, billed in 64KB chunks), design (two queues if you need priority), visibility timeout, maximum long-polling timeout, message retention, costs.
-------------------------------------------------
CloudTrail:
- CloudTrail can be enabled globally (applying to all regions) or on a per-region basis.
- Logs can be delivered to a single Amazon S3 bucket for aggregation.
Amazon S3 is secure by default. Only the bucket and object owners originally have access to the Amazon S3 resources they create. Amazon S3 supports user authentication to control access to data. You can use access control mechanisms such as bucket policies and Access Control Lists (ACLs) to selectively grant permissions to users and groups of users. You can securely upload/download your data to Amazon S3 via SSL endpoints using the HTTPS protocol. If you need extra security you can use the Server-Side Encryption (SSE) option or the Server-Side Encryption with Customer-Provided Keys (SSE-C) option to encrypt data stored at rest (see the sketch after this section). Amazon S3 provides the encryption technology for both SSE and SSE-C. Alternatively, you can use your own encryption libraries to encrypt data before storing it in Amazon S3.
To speed access to relevant data, many developers pair Amazon S3 with a database, such as Amazon DynamoDB or Amazon RDS. Amazon S3 stores the actual information, and the database serves as the repository for the associated metadata (e.g., object name, size, keywords, and so on). Metadata in the database can easily be indexed and queried, making it very efficient to locate an object's reference via a database query. The result can then be used to pinpoint and retrieve the object itself from Amazon S3.
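A minimal boto3 sketch of the server-side encryption options mentioned above; the bucket, key, and body are placeholders.

    import boto3

    s3 = boto3.client('s3')

    # SSE: S3 manages the encryption (AES-256) of the object at rest
    s3.put_object(
        Bucket='my-bucket',                 # placeholder bucket
        Key='reports/q1.csv',               # placeholder key
        Body=b'col1,col2\n1,2\n',
        ServerSideEncryption='AES256',
    )

    # SSE-KMS variant: ServerSideEncryption='aws:kms' (optionally with SSEKMSKeyId).
    # SSE-C variant: pass SSECustomerAlgorithm='AES256' plus your own key material
    # via SSECustomerKey; S3 then encrypts with the customer-provided key.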
To create an "application-consistent" snapshot of your RAID array, stop applications from writing to the RAID array and flush all caches to disk. Then ensure that the associated EC2 instance is no longer writing to the RAID array by taking steps such as freezing the file system, unmounting the RAID array, or shutting down the associated EC2 instance. After completing the steps to halt all I/O, take a snapshot of each EBS volume.
The access logs for Elastic Load Balancing capture detailed information about all requests made to your load balancer and store them as log files in the Amazon S3 bucket that you specify. Each log contains details such as the time a request was received, the client's IP address, latencies, the request path, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot your back-end applications.
Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots, or for subsequent snapshots where many blocks have changed. The EBS volume can be used while the snapshot is in progress.
Amazon RDS supports SOAP only through HTTPS.
EBS = Amazon Elastic Block Store, NOT Elastic Block Storage.
The EC2 Compute Unit (ECU) provides a relative measure of the integer processing power of an Amazon EC2 instance.
HTTP Query-based requests are HTTP requests that use the HTTP verb GET or POST and a query parameter named Action.
RDS: You do NOT need to specify a port or protocol for a DB security group; that is only needed for VPC/instance security groups.
We recommend that you use db.t1.micro instances with Oracle only to test setup and connectivity; the system resources of a db.t1.micro instance do not meet the recommended configuration for Oracle.
Amazon S3 doesn't automatically give a user who creates a bucket or object permission to perform other actions on that bucket or object. Therefore, in your IAM policies, you must explicitly give users permission to use the Amazon S3 resources they create.
Currently you can request to increase the limits on users per AWS account, groups per AWS account, roles per AWS account, instance profiles per AWS account, and server certificates per AWS account.
AWS CloudFormation provides a template to map network resources for Amazon Web Services.
By default, elastic network interfaces (ENIs) that are automatically created and attached to instances using the console are set to be deleted when the instance terminates. However, network interfaces created using the command line interface aren't set to be deleted when the instance terminates.
DynamoDB is designed to scale without limits, but if you go beyond 10,000 write capacity units you have to contact AWS first.
When using IAM to control access to your RDS resources, key names are case-insensitive. For example, aws:CurrentTime is equivalent to AWS:currenttime.
Can I control if and when a MySQL-based RDS instance is upgraded to new supported versions? YES.