AWS
John Chang, Ecosystem Solutions Architect
September 2016
From 2010
AWS 10 Years Later:
1,000,000+ active customers per month
$11B+ run rate
58% YoY growth (Q2 2015 to Q2 2016)
The Technology Platform of Choice
Broad Global Geographic Footprint, Expanding In 2016
13 Regions
35 Availability Zones
Use the AWS Well-Architected Framework
Build on the five pillars of core cloud functionality
Impacts design, implementation, deployment, and operations
It is the path, not the destination
Where you are on the path determines the services you use
The Path To Well Architected, Cloud Native Applications
Services -> Compute Units
Virtual Machines
Containers
Functions
Compute Targets
Virtual machines: monoliths, multi-tier, SOA
Containers: microservices
Functions: serverless
Martin Fowler argues that continuous delivery and the cloud matter more than which target you pick.
Per VM, per hour
Multi-threaded, multi-task
AMI
Patching
The World Of Virtual Machines: Hours to Months
The initial output of your development is a configured AMI.
Runtime is typically hours to months or longer.
Updates happen by patching the software that runs inside the VM; major releases create new AMIs.
Often includes third-party software, from PHP to WordPress to SAP, so dependency management is a challenge.
Scaling up and down is controlled in software.
The unit of cost is per VM, per hour.
Works well with existing tools and techniques, both AWS and third party.
The World Of Virtual Machines
Virtual Machines
The Strong VM Ecosystem
Auto Scaling
VPC
RDS
MySQL-compatible relational database
Performance and availability of commercial databases
Simplicity and cost-effectiveness of open source databases
Delivered as a managed service
Amazon RDS For Aurora
Customer Success With Amazon RDS For Amazon Aurora
Expedia, one of the largest travel companies in the world, uses Amazon Aurora for a travel data application that inserts 300 million rows of data per day, peaking at 70,000 rows per second, with 17 ms average read response time and 30 ms average write response time.
Alfresco manages over seven billion documents, powering the daily tasks of more than 11 million users worldwide. They scaled to 1 billion documents with a throughput of 3 million per hour, 10 times faster than their MySQL environment, and sustained a load rate of 1,000 documents per second (with 10 nodes) even at 1 billion documents.
The United Nations operates multiple websites with global reach that require mission-critical reliability and consistent performance. Being able to scale the database up and down in response to global events is highly desired, and scaling as needed keeps costs low.
Thompson (DMS example): migrated from Oracle to Aurora using AWS DMS; the entire migration process was completed in less than 4 weeks.
The Strong VM Ecosystem
EBS
Auto Scaling
VPC
RDS
General Purpose SSD
Provisioned IOPS SSD
Throughput Optimized HDD
Cold HDD
Amazon Elastic Block Store (Amazon EBS)
Max Throughput/Volume
EBS General Purpose SSD: 160 MB/s
EBS Provisioned IOPS SSD: 320 MB/s
EBS Throughput Optimized HDD: 500 MB/s
EBS Cold HDD: 250 MB/s
Price
EBS General Purpose SSD: $0.10/GB-month
EBS Provisioned IOPS SSD: $0.065/provisioned IOPS-month plus $0.125/GB-month
EBS Throughput Optimized HDD: $0.045/GB-month
EBS Cold HDD: $0.025/GB-month
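As a quick sanity check, the prices listed above can be turned into a toy monthly-cost calculator. This is illustrative only; actual AWS pricing varies by region and changes over time, and the volume-type codes (gp2, io1, st1, sc1) are the usual API names assumed here.

```python
# Toy monthly-cost comparison for the EBS volume types listed on this slide.
# Prices are the per-GB-month (and per-provisioned-IOPS) figures shown above;
# check current AWS pricing before relying on them.

def ebs_monthly_cost(volume_type, size_gb, provisioned_iops=0):
    """Estimate the monthly cost of a single EBS volume."""
    per_gb = {
        "gp2": 0.10,   # General Purpose SSD
        "io1": 0.125,  # Provisioned IOPS SSD
        "st1": 0.045,  # Throughput Optimized HDD
        "sc1": 0.025,  # Cold HDD
    }
    cost = per_gb[volume_type] * size_gb
    if volume_type == "io1":
        cost += 0.065 * provisioned_iops  # $0.065 per provisioned IOPS-month
    return round(cost, 2)

print(ebs_monthly_cost("gp2", 500))         # 50.0
print(ebs_monthly_cost("io1", 500, 10000))  # 712.5
```

The point of the comparison: the HDD volume types trade throughput characteristics for a much lower per-GB price, while io1 adds a per-IOPS charge on top of storage.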
Two New EBS Updates
NEW: Cost of EBS snapshots decreased by 47%
Pricing is retroactive to August 1. Customers have told us that they want to create point-in-time versions of EBS volumes using EBS snapshots, but that storing large quantities of snapshots is intimidating. We are reducing the cost of EBS snapshots by 47%, giving customers who store large quantities significant savings and lowering the barrier to entry for customers storing smaller amounts of snapshots.
NEW: Increased performance with up to 66% more IOPS per GB
Customers have also asked for smaller, hotter EBS volumes that don't require overprovisioning storage to get the required IOPS. We are changing the performance ratio for Provisioned IOPS EBS volumes: customers can now get 66% more IOPS per GB (or the same number of IOPS with 40% less storage). This gives customers greater flexibility, allowing them to provision the right amount of storage and IOPS for their applications.
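The storage savings can be sketched numerically. Assumption: the 66% increase corresponds to raising the maximum Provisioned IOPS ratio from 30 to 50 IOPS per GB (50/30 is roughly a 66% gain, and 30/50 is 40% less storage for the same IOPS); verify the exact ratios against the EBS documentation.

```python
# Sketch of what the new Provisioned IOPS ratio means in practice.
# Assumed ratios: old maximum 30 IOPS per GB, new maximum 50 IOPS per GB.
import math

def min_storage_gb(target_iops, iops_per_gb):
    """Smallest volume (in GB) that can carry target_iops at the given ratio."""
    return math.ceil(target_iops / iops_per_gb)

target = 20000
old = min_storage_gb(target, 30)  # 667 GB under the old 30:1 ratio
new = min_storage_gb(target, 50)  # 400 GB under the new 50:1 ratio
print(old, new, round(1 - new / old, 2))  # 667 400 0.4
```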
What customer need does this address? Finer tuning and flexibility for data protection (EBS snapshots) and performance (lowering the GB:IOPS ratio). Example customer using it: Infor. Referenceable? Yes.
The Strong VM Ecosystem
ELB
EBS
Auto Scaling
VPC
RDS
Classic Load Balancing
[Diagram: a Classic ELB distributing traffic across three EC2 instance groups, each running a full copy of the app (#1, #2, #3) with its SIGN UP, LOGOUT, and LIST functions]
High performance load balancing for applications
Application Load Balancer
NEW
Application Load Balancer
ALB
[Diagram: an ALB routing the /signup, /logout, and /list paths to separate EC2 instance groups, each running a single app component (#1, #2, #3)]
High-Performance Load Balancing Of Applications
Content-based routing
HTTP/2
WebSocket
Detailed logging
This new load balancer, which also supports the WebSocket and HTTP/2 protocols, provides a load balancing option that operates at the application layer and can route content across multiple services or containers running on EC2.
Increased cluster efficiency: you can configure rules in a single load balancer and have it route requests to the correct service based on the content of the request. It also allows you to load balance across multiple ports on the same EC2 instance and schedule multiple tasks across services or containers so they can scale independently, which both simplifies management and increases cluster efficiency.
Improved metric and health check detail gives greater visibility into each individual service.
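Content-based routing can be pictured as a first-match rule table. A minimal sketch of the /signup, /logout, /list example (the service names are invented for illustration; a real ALB matches listener rules and forwards to target groups):

```python
# Toy illustration of ALB content-based (path) routing: a request path is
# matched against ordered prefix rules and sent to the matching target group.

def route(path, rules, default="default-targets"):
    """Return the target group of the first rule whose prefix matches."""
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return default

rules = [
    ("/signup", "signup-service"),
    ("/logout", "logout-service"),
    ("/list", "list-service"),
]

print(route("/signup/new", rules))  # signup-service
print(route("/health", rules))      # default-targets
```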
Monolithic Application -> Services -> Microservices
The Story Of Amazon.com
Monolith -> services -> microservices. One of our customers here in the UK using this modern architecture is Mondo, a brand-new UK-based bank, with their technology built from the ground up on the secure foundations that AWS provides, using microservices for speed and agility in the historically slow-moving world of retail banking services. To tell us more, please welcome the CEO of Mondo, Tom Blomfield.
Small functional building blocks as output of the development process
Minimizing dependencies and reducing the complexity of dependency management
The Move To Microservices Is Assisted By
The Rise of Containers
Minutes to Days
Versioning
Multi-threaded, single-task
Per VM, per hour
Container File
The World Of Containers
The output of your development process is a container file.
Runtime is typically minutes to hours to days.
Containers are immutable; updates happen by creating a new container version.
Application systems consist of multiple containers.
High Availability
InfrastructureManagement
Security
Task Scheduling
PipelineIntegration
ContainerManagement
ServiceDiscovery
ResourceAccess
The Challenges Of Container Based Operation
Scheduling One Resource Is Straightforward
[Diagram: one server with a guest OS running App1 and App2 with their bins/libs]
The Docker CLI is great if you want to run a container on your laptop, for example: docker run myimage.
[Diagram: three Availability Zones (AZ 1, AZ 2, AZ 3), each filled with a large grid of servers and guest OSes]
Scheduling A Cluster Is Hard
But it's challenging to scale to hundreds of containers. Suddenly you're managing a cluster, and cluster management is hard. You need a way to intelligently place your containers on the instances that have the resources, and that means you need to know the state of everything in your system. For example:
Which instances have available resources like memory and ports?
How do I know if a container dies?
How do I make my application highly available?
How do I hook into other resources like ELB?
Can I extend whatever system I use, e.g. a CD pipeline or third-party schedulers?
Do I need to operate another piece of software?
These are the questions and challenges from our customers that led us to build Amazon ECS.
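The placement question in the notes above can be sketched as a toy first-fit scheduler. This is a deliberately simplified stand-in: a real scheduler like ECS also tracks CPU, AZ spread, health, and failures, and must do so consistently across many concurrent actors.

```python
# Toy placement decision: find an instance with enough free memory and a
# free host port for the task, and reserve those resources.

def place(task, instances):
    """Return the id of the first instance that can fit the task, or None."""
    for inst in instances:
        if (inst["free_memory"] >= task["memory"]
                and task["port"] not in inst["used_ports"]):
            inst["free_memory"] -= task["memory"]   # reserve memory
            inst["used_ports"].add(task["port"])    # reserve the port
            return inst["id"]
    return None

instances = [
    {"id": "i-1", "free_memory": 256, "used_ports": {80}},
    {"id": "i-2", "free_memory": 1024, "used_ports": set()},
]
task = {"memory": 512, "port": 80}
print(place(task, instances))  # i-2 (i-1 lacks memory and port 80 is taken)
```

Even this toy shows why cluster state matters: placement mutates shared bookkeeping, so two schedulers working from stale views would double-book the same instance.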
ContainerManagement
No ClusterTo Manage
Batch and Long Running Task Scheduling
ContainerRegistry
Access to EBS, ELB,CloudWatch
IntegrationWith IAM
Multi-AZ Aware
Amazon EC2 Container Service (ECS)
The Best Way To Run Your Containers In Production
Maintains available resources
Tracks resource changes
Accepts resource requests
Guarantees accuracy and consistency
What Is A Container Manager?
So what is a container manager? A cluster is a pool of resources, such as CPU, memory, ports, and disk space: anything we have in limited quantity. A container manager keeps track of the resources available in the cluster, where they are, and who is using them.
Much of what a container manager does is simple accounting, but rapid changes across thousands of instances with multiple failure modes make the job trickier than just running something like a MySQL database.
Amazon ECS
[Diagram: traffic from the Internet passes through load balancers to EC2 instances, each running the ECS agent with tasks and containers; the agents talk to the Agent Communication Service, which sits alongside the ECS API, cluster management engine, and key/value store]
ECS provides a simple solution to container management:
We have a cluster management engine that coordinates the cluster of instances, which is just a pool of CPU, memory, storage, and networking resources.
The instances are just EC2 instances running our agent that have been checked into a cluster. You own them and can SSH into them if you want.
Clusters are dynamically scalable: you can have a 1-instance cluster, then a 100- or even 1,000-instance cluster, and you can segment clusters for particular purposes, e.g. dev/test.
The resource manager is responsible for tracking resources like memory, CPU, and storage and their availability at any given time in the cluster. We measure the set of available resources through the Amazon ECS agent, and mostly use Linux cgroup constructs exposed through Docker to enforce these resource allocations locally. Without containers and cgroups it would be very difficult to dynamically partition and allocate resources such as CPU, memory, and disk; we would be relying on the good intentions of tasks to use only the resources they requested.
On each instance, the ECS agent communicates with the engine, processes ECS commands, and turns them into Docker commands, instructing the EC2 instance to start and stop containers and monitoring used and available resources. It is all open source on GitHub and we develop in the open, so we would love to see you involved through pull requests.
To coordinate this cluster we need a single source of truth for all the instances in the cluster, the tasks running on the instances, the containers that make up each task, and the resources available.
This is known as cluster state. At the heart of ECS is a key/value store that holds all of this cluster state. To be robust and scalable, this key/value store needs to be distributed for durability and availability. But because it is distributed, keeping data consistent and handling concurrent changes becomes more difficult. For example, if two developers request all the remaining memory on a certain EC2 instance for their containers, only one container can actually receive those resources; the other has to be told its request could not be completed. Some form of concurrency control therefore has to be in place to make sure that multiple state changes don't conflict.
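The concurrency-control idea above can be sketched as a version check, the core of optimistic concurrency: an update only succeeds if the state has not changed since it was read. This is a toy in-memory store, not ECS's actual implementation.

```python
# Minimal optimistic-concurrency sketch: each record carries a version; a
# writer that read stale state fails and must retry, so two schedulers
# claiming the same memory cannot both win.

class StaleVersionError(Exception):
    pass

store = {"i-1": {"free_memory": 512, "version": 7}}

def claim_memory(instance_id, amount, expected_version):
    rec = store[instance_id]
    if rec["version"] != expected_version:  # someone else updated it first
        raise StaleVersionError
    if rec["free_memory"] < amount:
        return False
    rec["free_memory"] -= amount
    rec["version"] += 1                     # publish the new state
    return True

print(claim_memory("i-1", 512, expected_version=7))  # True
try:
    claim_memory("i-1", 512, expected_version=7)     # stale read: version is now 8
except StaleVersionError:
    print("conflict: retry with fresh state")
```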
ECS container management is decoupled from container scheduling
ECS Container Scheduling: batch jobs, long-running apps
ECS service scheduler: health management, scale-up and scale-down, AZ aware, grouped containers
ECS task scheduler: run tasks once, batch jobs, RunTask (random placement), StartTask (explicit placement)
Amazon ECS is a shared-state, optimistic-concurrency system that provides flexible scheduling capabilities for your tasks and containers. The ECS schedulers leverage cluster state information provided by the ECS API to make appropriate placement decisions. Amazon ECS provides the RunTask and StartTask actions for batch jobs or single-run tasks, and the service scheduler for long-running tasks and applications.
The service scheduler ensures that the specified number of tasks is constantly running, and reschedules tasks when a task fails (for example, if the underlying container instance fails for some reason). The service scheduler can optionally also register tasks with an Elastic Load Balancing load balancer. You can update services maintained by the service scheduler, such as deploying a new task definition or changing the desired number of running tasks. If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses two parameters, minimum healthy percent and maximum percent, to determine the deployment strategy: minimumHealthyPercent represents the minimum number of running tasks during a deployment, and maximumPercent represents an upper limit on the number of running tasks during a deployment, letting you define the deployment batch size.
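The two deployment parameters can be sketched numerically. Assumption: the minimum bound is rounded up and the maximum bound rounded down, which matches my reading of the ECS documentation; verify before relying on it.

```python
# Sketch of how minimumHealthyPercent and maximumPercent bound a deployment.
# For a desired count of N tasks, the scheduler must keep at least `floor`
# tasks running and may run at most `ceiling` tasks during the rollout.
import math

def deployment_bounds(desired, minimum_healthy_percent, maximum_percent):
    floor = math.ceil(desired * minimum_healthy_percent / 100)
    ceiling = math.floor(desired * maximum_percent / 100)
    return floor, ceiling

# 4 tasks, 50% minimum healthy, 200% maximum: the scheduler may stop down
# to 2 old tasks and start up to 8 total while rolling out the new version.
print(deployment_bounds(4, 50, 200))  # (2, 8)
```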
ECS Scheduling
Amazon ECS supports more than two schedulers and can have many working at the same time. This typically works best when placement logic is different between schedulers to reduce contention, but so long as schedulers retry failed attempts they should all be able to work together.
Deep Integration With Other AWS Services
Elastic Load Balancing
Amazon Elastic Block Store
Amazon Virtual Private Cloud
Amazon CloudWatch
AWS Identity and Access Management
AWS CloudTrail
ECS is deeply integrated with the rest of the AWS platform. AWS has industry-leading functionality for load balancing, auto scaling, networking, identity and access management, logging, and monitoring. With ECS, your containers natively inherit all these capabilities of the AWS platform. There's no additional code to write or software to install to leverage the existing AWS platform.
You can set up each cluster in its own Virtual Private Cloud and use security groups to control network access to your EC2 instances. You can store persistent information using EBS, and you can route traffic to containers using ELB. CloudTrail integration captures every API access for security analysis, resource change tracking, and compliance auditing.
[Diagram: Amazon ECS sends logs to Amazon CloudWatch Logs, which can store to Amazon S3, stream to Amazon Kinesis, process with AWS Lambda, and search with Amazon Elasticsearch Service]
CloudWatch Logging With awslogs
You can configure the containers in your tasks to send log information to CloudWatch Logs. This enables you to view different logs from your containers in one convenient location, and it prevents your container logs from taking up disk space on your container instances. You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream or AWS Lambda for custom processing, analysis, or loading to other systems.
[Diagram: an ECS cluster of two EC2 instances; Task A accesses DynamoDB while the Task B copies access S3 storage]
IAM Roles For Tasks
In Amazon ECS, you have always had the benefit of being able to use IAM roles for Amazon EC2 in order to simplify API requests from your containers. This also allows you to follow AWS best practices by not storing your AWS credentials in your code or configuration files, as well as providing benefits such as automatic key rotation.
Previously, you had to use IAM roles for Amazon EC2, meaning the IAM policies you assigned to the EC2 instances in your ECS cluster had to contain all the IAM policies for the tasks performed within the same cluster. This means that if you had one container that needed access to a specific S3 bucket and another container that needed access to a DynamoDB table, you had to assign both IAM permissions to the same EC2 instance.
With the introduction of the newly-launched IAM roles for ECS tasks, you can now secure your infrastructure further by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This way, you can have one task that uses a specific IAM role for access to S3 and one task that uses an IAM role to access a DynamoDB table.
IAM Roles For Tasks
{
  "family": "signup-app",
  "taskRoleArn": "arn:aws:iam::123456789012:role/DynamoDBRoleForTask",
  "volumes": [],
  "containerDefinitions": [{
    "environment": [ ... ],
    "name": "signup-web",
    "mountPoints": [],
    "image": "amazon/signup-web",
    "cpu": 25,
    "portMappings": [ ... ],
    "entryPoint": [ ... ],
    "memory": 100,
    "essential": true,
    "volumesFrom": []
  }]
}
You can specify an IAM role for each ECS task. The applications in the task's containers can then use the AWS SDK or CLI to make API requests to authorized AWS services. This allows the EC2 instance to have a minimal role, respecting the least-privilege access principle, and lets you manage the instance role and the task role separately. You also gain visibility into which task is using which role, tracked in CloudTrail logs.
Automatic Service Scaling
[Diagram: an Auto Scaling ECS service running Tasks A, B, and C across Availability Zones A and B behind Elastic Load Balancing; Amazon ECS publishes metrics to Amazon CloudWatch, and scale-in/scale-out policies add or remove ECS tasks]
Amazon EC2 Container Service (Amazon ECS) can now automatically scale container-based applications by dynamically growing and shrinking the number of tasks run by an Amazon ECS service. You can automatically scale an Amazon ECS service based on any Amazon CloudWatch metric. For example, you can use CloudWatch metrics published by Amazon ECS, such as each service's average CPU and memory usage. You can also use CloudWatch metrics published by other services, or custom metrics specific to your application. For example, a web service could increase the number of tasks based on Elastic Load Balancing metrics like SurgeQueueLength, while a batch job could increase the number of tasks based on Amazon SQS metrics like ApproximateNumberOfMessagesVisible.
Dynamic content routing and shared load balancers
Application Load Balancer And Amazon ECS
NEW
ECS Is The Best Way To Run Your Containers In Production
No server is easier to manage than no server
Serverless Computing
Code
Single-threaded, single-task
Versioning
Microseconds to Seconds
Per memory/second, per request; free tier
The World Of Lambda Functions
The unit of development is a single function.
Functions are immutable.
Runs for milliseconds to seconds.
No scalability or reliability management needed.
Cost closely aligned with usage.
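A single-function unit of development looks like this: a handler that receives an event and returns a value. This one is runnable locally; the event shape is invented for illustration, and in Lambda the function would be wired to one of the triggers listed below.

```python
# A minimal Lambda-style handler: the whole deliverable is one function.
import json

def handler(event, context=None):
    """Return an API-style response greeting the caller named in the event."""
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})}

print(handler({"name": "builder"}))
```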
The State Of Lambda
Languages: Node.js (JavaScript), Python, Java (Java 8 compatible)
Triggers: S3 bucket, API Gateway, DynamoDB table, AWS CloudTrail, scheduled events, Kinesis stream, custom, SNS notification
Customer Success With AWS Lambda
Mobile chat app; cloud telephony
Ad data analytics and routing; real-time video ad bidding
Data processing; threat intelligence and analytics
Mobile app analytics; news content processing
Web applications; game metrics analytics
Image content filtering; news content processing
Gene sequence search; web applications
ElastiCache
Amazon API Gateway
Create robust, scalable, and secure APIs in minutes
Signing & authorization
Versioning
SDK generation
Caching
Metering & throttling
Usage Plans
NEW: Amazon API Gateway
API-based businesses have to invest in monitoring and metering infrastructure to track usage of their services by third-party developers; this is often time-consuming and expensive, particularly for APIs running high-throughput workloads. With usage plans, developers can distribute API keys to third-party developers, restrict their usage to a defined quota and throughput, and meter their access to generate billing documents. Usage plans in Amazon API Gateway make it easy to configure any number of plans, set access control and rate-limit rules, and meter API usage. All metering data is accessible via the AWS console or through a set of reporting APIs. API Gateway customers only pay for the requests they receive, not for idle infrastructure.
Set rate limits; meter API usage; access control; per-API developer key
Usage Plans Enable Easy Monitoring And Metering
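The bookkeeping a usage plan performs can be sketched as a per-key daily quota plus a per-second throttle. A real usage plan is configured in API Gateway; this toy class and its limits are invented for illustration.

```python
# Toy per-key quota and rate-limit check of the sort a usage plan enforces.
import time

class UsagePlan:
    def __init__(self, quota_per_day, rate_per_second):
        self.quota_per_day = quota_per_day
        self.rate_per_second = rate_per_second
        self.daily_count = {}  # api_key -> requests counted today
        self.window = {}       # api_key -> (current second, count in it)

    def allow(self, api_key, now=None):
        now = time.time() if now is None else now
        second = int(now)
        if self.daily_count.get(api_key, 0) >= self.quota_per_day:
            return False  # daily quota exhausted
        sec, count = self.window.get(api_key, (second, 0))
        if sec == second and count >= self.rate_per_second:
            return False  # throttled within this second
        self.window[api_key] = (second, count + 1 if sec == second else 1)
        self.daily_count[api_key] = self.daily_count.get(api_key, 0) + 1
        return True

plan = UsagePlan(quota_per_day=1000, rate_per_second=2)
print([plan.allow("key-1", now=100.0) for _ in range(3)])  # [True, True, False]
```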
Amazon API Gateway
[Diagram: mobile apps, websites, and services call over the Internet into API Gateway (with its cache and CloudWatch metrics), which fronts Lambda functions, EC2 endpoints, or any other endpoint]
Amazon API Gateway's Role In Mobile
[Diagram: a mobile device authenticates via Amazon Cognito and calls API Gateway, which invokes Lambda functions backed by DynamoDB, RDS, and S3 for data & content, Amazon Mobile Analytics for analytics, and SNS for notifications in the mobile backend]
github.com/awslabs Executable Reference Architectures For Serverless Applications
Reference Architecture: Serverless Web Application
[Diagram: Route 53 points www.mydashboard.com at a static HTML/JS website on S3; the site calls API Gateway, which invokes Lambda Function 1; data lands in DynamoDB, whose streams trigger Lambda Function 2; a Twilio phone number or shortcode feeds in events]
Reference Architecture: Serverless File Processing
[Diagram: files arrive in S3; SNS fans the event notification out to Lambda Functions 1 through N, which write derivatives to DynamoDB and S3]
The Real-time File Processing reference architecture is a general-purpose, event-driven, parallel data processing architecture that uses AWS Lambda. This architecture is ideal for workloads that need more than one data derivative of an object. The architecture is described in this diagram and in the "Fanout S3 Event Notifications to Multiple Endpoints" post on the AWS Compute Blog. The sample application demonstrates a Markdown conversion application where Lambda converts Markdown files to HTML and plain text.
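The conversion step of that sample can be sketched locally. The real sample uses a full Markdown library; this stub handles only '#' headings and plain paragraphs, purely to illustrate the kind of per-file transform a processing Lambda performs.

```python
# Toy Markdown-to-HTML transform of the kind the file-processing Lambda runs.
def markdown_to_html(md):
    html_lines = []
    for line in md.splitlines():
        if line.startswith("# "):
            html_lines.append(f"<h1>{line[2:]}</h1>")  # heading
        elif line.strip():
            html_lines.append(f"<p>{line}</p>")        # paragraph
    return "\n".join(html_lines)

print(markdown_to_html("# Title\nSome body text."))
# <h1>Title</h1>
# <p>Some body text.</p>
```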
Reference Architecture: Serverless Stream Processing
[Diagram: events are ingested into Kinesis and processed by Lambda Functions 1 and 2, which store results in DynamoDB and S3, with CloudWatch providing monitoring & alarms]
You can use AWS Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, click stream analysis, data cleansing, metrics generation, log filtering, indexing, social media analysis, and IoT device data telemetry and metering. The architecture can be created with an AWS CloudFormation template.
Reference Architecture: Serverless Mobile Backend
[Diagram: mobile users authenticate with Amazon Cognito and call API Gateway; Lambda functions store media files in an S3 repository served through the CloudFront content delivery network and write notes to DynamoDB; the database stream triggers another Lambda function that indexes content in the CloudSearch search engine; SNS delivers mobile push notifications]
The Mobile Backend reference architecture demonstrates how to use AWS Lambda along with other services to build a serverless backend for a mobile application. The specific example application provided in this repository enables users to upload photos and notes using Amazon Simple Storage Service (Amazon S3) and Amazon API Gateway respectively. The notes are stored in Amazon DynamoDB, and are processed asynchronously using DynamoDB streams and a Lambda function to add them to an Amazon CloudSearch domain.
Reference Architecture: Serverless IoT Backend
[Diagram: connected devices stream event data into Kinesis and make synchronous calls through Lambda functions; events are stored in DynamoDB and S3, analyzed with Redshift and Elastic MapReduce on Spot Instances, and monitored with CloudWatch alarms]
The Internet of Things (IoT) Backend reference architecture demonstrates how to use AWS Lambda in conjunction with Amazon Kinesis, Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), and Amazon CloudWatch to build a serverless system for ingesting and processing sensor data. By leveraging these services, you can build cost-efficient applications that can meet the massive scale required for processing the data generated by huge deployments of connected devices.
Amazon Cognito Identity
Cognito User Pools
You can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully managed service that scales to support hundreds of millions of users.
Federated User Identities
Your users can sign-in through social identity providers such as Facebook, Twitter and SAML providers and you can control access to AWS resources from your app.
Guest
Your own auth
SAML
Email or Phone Number Verification
Forgot Password
User Sign-up and Sign-in
User Profile
SMS-based MFA
User Scenarios
Manage users in a User Pool
Select Email and Phone Verification
Customize with Lambda Triggers
Setup Password Policies
Create and Manage User Pools
Define attributes
Administrator Scenarios
Token-based Authentication
Secure Remote Password Protocol
SMS-based multi-factor authentication
Secure Foundation
Lambda Hook Example Scenarios
Pre user sign-up: custom validation to accept or deny the sign-up request
Custom message: advanced customization and localization of verification messages
Pre user sign-in: custom validation to accept or deny the sign-in request
Post user sign-in: event logging for custom analytics
Post user confirmation: custom welcome messages or event logging for custom analytics
Customization Using Lambda Hooks
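A pre-sign-up hook like the scenarios above might deny sign-ups outside an allowed email domain. The event/response shape below follows the Cognito trigger format as I understand it, and the domain is invented for illustration; verify the exact fields against the Cognito documentation.

```python
# Sketch of a Cognito pre-sign-up Lambda hook: custom validation that
# accepts or denies the sign-up request based on the email domain.

ALLOWED_DOMAIN = "example.com"  # assumption for illustration

def pre_sign_up(event, context=None):
    email = event["request"]["userAttributes"].get("email", "")
    if not email.endswith("@" + ALLOWED_DOMAIN):
        # Raising an error from the trigger denies the sign-up request.
        raise ValueError("sign-up denied: email domain not allowed")
    event["response"]["autoConfirmUser"] = False  # still require confirmation
    return event

event = {"request": {"userAttributes": {"email": "dev@example.com"}},
         "response": {}}
print(pre_sign_up(event)["response"])  # {'autoConfirmUser': False}
```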
Amazon Cognito - Your User Pools
Now available in 4 regions: IAD, PDX, NRT, and DUB
GA launched on 07/28
Custom authentication flow
Global user sign-out
Admin support for user sign-in
Configurable expiration for refresh tokens
API Gateway integration
Remember trusted devices
User search
Customizable email addresses
Attribute permissions and scopes
Worldwide SMS support
There Are No Cattle, There Is Only The Herd
Lambda, API Gateway, S3, CloudFront, DynamoDB, Kinesis, ElastiCache, CloudSearch, Elasticsearch, SQS, SES, SNS
So we've talked a bit about Kinesis and how it works with Lambda.
Can we make it easier to build real-time streaming data applications?
Processing Real-Time Data With Amazon Kinesis
Kinesis Analytics: run standard SQL queries over streaming data
Kinesis Firehose: easily load streaming data into AWS
Kinesis Streams: build custom applications to collect & analyze streaming data
Also announced: Kinesis Analytics, which allows developers to use familiar queries in standard SQL to run time-series analysis over real-time streaming data. The service provides over 100 pre-built queries, or you can write your own to run analyses such as running totals, weighted moving averages, event filtering, or threshold alerts on real-time data.
Amazon Kinesis Analytics: generally available today
NEW
Amazon Kinesis Analytics
Powerful real-time processing
Easy to use
Automatic elasticity
Use standard SQL
Easily Analyze Streaming Data With Standard SQL
Automatic schema generation
Rich SQL editor
Built-in templates
Out-of-box integration for ingestion and output
Real-time: Sub 1-second processing latency
Intelligently recognizes the data formats of input streams and automatically creates a schema, which you can refine using an interactive schema editor.
The SQL editor is fully functional, complete with automatic syntax checking, automatic indenting, and testing with live data.
Pre-built templates cover everything from a simple query to advanced anomaly detection.
Integrated with Kinesis Streams and Firehose for easy ingestion and output, so you can build complete solutions in minutes.
Truly real-time, end-to-end processing at sub-second speeds.
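What a simple Kinesis Analytics query does can be sketched in plain Python: a per-key running aggregate over a sliding time window. The SQL version would express this as a windowed GROUP BY over the input stream; the event data here is invented.

```python
# Sliding-window count per key over a timestamped event stream.
from collections import deque

def windowed_counts(events, window_seconds):
    """events: iterable of (timestamp, key); yields (timestamp, key, count-in-window)."""
    window = deque()
    counts = {}
    for ts, key in events:
        window.append((ts, key))
        counts[key] = counts.get(key, 0) + 1
        # Evict events that have fallen out of the window.
        while window and window[0][0] <= ts - window_seconds:
            _, old_key = window.popleft()
            counts[old_key] -= 1
        yield ts, key, counts[key]

stream = [(1, "a"), (2, "a"), (3, "b"), (12, "a")]
print(list(windowed_counts(stream, window_seconds=10)))
# [(1, 'a', 1), (2, 'a', 2), (3, 'b', 1), (12, 'a', 1)]
```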
Available Today
IPv6 Endpoints For Amazon S3
NEW
Amazon S3 now supports the IPv6 protocol, so applications can connect to Amazon S3 for object storage over IPv6. You can meet IPv6 compliance requirements, more easily integrate with existing IPv6-based on-premises applications, and remove the need for expensive networking equipment to handle the address translation between IPv4 and IPv6. You can also now utilize the existing source address filtering features in IAM policies and bucket policies with IPv6 addresses, expanding your options to secure applications interacting with Amazon S3.
Security Baseline
Glacier Vault Lock & SEC Rule 17a-4(f)
27018
AWS is responsible for the security OF the Cloud
Compute, Storage, Database, Networking
AWS Global Infrastructure
Availability Zones
Regions
Edge Locations
Build Everything On A Constantly Improving Security Baseline
Customers have their choice of security configurations IN the Cloud
Customer applications & content
CUSTOMERS
Security And Compliance Is A Shared Responsibility
Platform, Applications, Identity & Access Management
Operating System, Network & Firewall Configuration
Client-side Data Encryption
Server-side Data Encryption
Network Traffic Protection
AWS is responsible for the security OF the Cloud
Compute, Storage, Database, Networking
AWS Global Infrastructure
Availability Zones
Regions
Edge Locations
Broadest Services To Secure Applications
Networking: Virtual Private Cloud, Web Application Firewall
Identity: IAM, Active Directory integration, SAML federation
Identity Examples
NetworkingVirtual Private CloudWeb ApplicationFirewallEncryptionCloudHSMServer-sideEncryptionEncryption SDKIdentityIAMActive Directory IntegrationSAMLFederationKey ManagementService
Broadest Services To Secure Applications
Encryption Examples
Bring your own keys to AWS Key Management Service
AWS Key Management Service Import Key
NEW
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. Starting today, you can import key material from your key management infrastructure into a KMS Customer Master Key (CMK), and use it seamlessly in all KMS-integrated AWS services and your custom applications just as any other KMS CMK, giving you even greater control over the creation, lifecycle and durability of your keys.
Import your keys into AWS KMS
Greater control over the generation, lifecycle management, and durability of your keys
Meet compliance requirements to generate and store copies of keys outside of your cloud provider
AWS Key Management Service
- Maintain control of your encryption keys while still leveraging the automation, integration, and cost structure of KMS
- Greater control over the generation, lifecycle management, and durability of your keys
- Meet compliance requirements to generate and store copies of keys outside of your cloud provider
- During the import process, you can set an expiration period for the imported key material. At the end of that period, AWS KMS deletes the key material; if you need to keep using it, you can re-import it with a new expiration period. You can also delete imported key material directly, without any waiting period.
- Imported keys work seamlessly in all KMS-integrated AWS services
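The import flow starts from a CMK created with `Origin=EXTERNAL`, fetches a wrapping key and import token via `GetParametersForImport`, and then calls `ImportKeyMaterial`. A stdlib-only sketch of assembling the final call's parameters, with the key ID, token, and wrapped material as placeholders (the actual RSA-OAEP wrapping step is omitted):

```python
from datetime import datetime, timedelta

def import_request(key_id, import_token, wrapped_key, days_valid=None):
    """Assemble parameters for a KMS ImportKeyMaterial call.
    import_token and wrapped_key would come from a prior
    GetParametersForImport call on a CMK created with Origin=EXTERNAL."""
    params = {
        "KeyId": key_id,
        "ImportToken": import_token,
        "EncryptedKeyMaterial": wrapped_key,
        "ExpirationModel": "KEY_MATERIAL_DOES_NOT_EXPIRE",
    }
    if days_valid is not None:
        # Imported material can carry an expiry; KMS deletes it afterwards,
        # and the material must be re-imported to keep using the key.
        params["ExpirationModel"] = "KEY_MATERIAL_EXPIRES"
        params["ValidTo"] = datetime.utcnow() + timedelta(days=days_valid)
    return params

req = import_request("example-key-id", b"token", b"wrapped", days_valid=365)
```

In practice the returned dict would be passed straight to an SDK client's `import_key_material` call.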
Broadest Services To Secure Applications
Compliance: Config, CloudTrail, Service Catalog, Config Rules, Inspector
Networking: Virtual Private Cloud, Web Application Firewall
Encryption: CloudHSM, Server-side Encryption, Encryption SDK, Key Management Service
Identity: IAM, Active Directory Integration, SAML Federation
Compliance Examples
Security By Design
Systematic approach to ensure security
Formalizes AWS account design
Automates security controls
Streamlines auditing
Provides control insights throughout the IT management process
IAM
KMS
CloudHSM
CloudTrail
Config
We are doing the same with security in AWS. We're designing security and compliance not simply into OS and application controls, as was done in the last few decades; we're designing it into everything about the IT environment: the permissions, the logging, the use of approved machine images, the trust relationships, the changes made, enforcing encryption, and more. We're converting manual, administrative controls into technically enforced controls with the assurance that, if designed properly, the controls are operating 100% of the time. We call this Security by Design, or SbD. AWS is a modern platform that allows you to formalize the design of security controls in the platform itself. It simplifies system use for administrators and those running IT, and makes your AWS environment much simpler to audit. It creates an environment where there are no control findings at the audit (similar to having no quality findings at the end of a manufacturing process). It's a systematic approach to security assurance, and it gives you insight into how things are operating and how to respond to emerging threats.
Design the Data Transfer to the Cloud
Small & Frequent | Medium & Often | Huge & Less Frequent | Persistent Connectivity
Transfers Exist On A Spectrum:
Small Data, Often (Streaming)
Medium Data, Regularly
Large Data, Infrequently (Snowball)
Continual Connectivity (Direct Connect)
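The spectrum boils down to arithmetic: how long would the dataset take over the wire? A rough back-of-the-envelope sketch (the utilization figure is an illustrative assumption):

```python
def network_transfer_days(dataset_tb, link_mbps, utilization=0.8):
    """Rough days needed to push a dataset over a WAN link at the
    given sustained utilization. All figures are illustrative."""
    bits = dataset_tb * 1e12 * 8          # dataset size in bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400

# 100 TB over a 100 Mbps link at 80% utilization is on the order of
# months, which is why petabyte-scale moves tend toward Snowball.
days = network_transfer_days(100, 100)
```

When the answer comes back in weeks or months, shipping the data (Snowball) beats streaming it; when it is hours, the network wins.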
Snowball Petabyte Scale Data Transport Service
50 TB
E Ink Shipping Label
Ruggedized Case, 8.5G Impact
Rain & Dust Resistant
Tamper-Resistant Case & Electronics
All Data Encrypted End-to-End
Data is available within a week
ENCRYPTION KEYS MANAGED OFF DEVICE
Large Customer Dataset → Snowball (End-to-End Custody) → Customer Dataset Loaded
New 80T Snowball Device
Growing Snowball
So today we're growing the Snowball. We launched a new 80-terabyte Snowball device that's 60 percent larger, and it's available today.
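Sizing a bulk move against the new device is simple division. A sketch that assumes, simplistically, that the full labeled capacity (80 TB new, 50 TB original) is usable for data:

```python
import math

def snowballs_needed(dataset_tb, device_tb=80):
    """Devices required for a bulk move, assuming (simplistically)
    the full labeled capacity per device is usable for data."""
    return math.ceil(dataset_tb / device_tb)

# A 500 TB migration needs 7 of the new 80 TB devices,
# versus 10 of the original 50 TB devices.
new_count = snowballs_needed(500)
old_count = snowballs_needed(500, device_tb=50)
```

Real planning would also subtract filesystem and encryption overhead from the usable capacity.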
New 80T Snowball Device
Growing Snowball
All AWS Regions by End of 2016
Regions: FRA launched 7/26. We're committing to having Snowball available in all AWS regions by the end of 2016.
New 80T Snowball Device
API For 3rd Party (NEW)
Growing Snowball
All AWS Regions By End Of 2016
TODAY:
1. An API that allows 3rd parties to integrate with Snowball, simplifying management. Now customers can order Snowballs and manage import and export jobs from within their 3rd-party software. Integrations are pending from NetApp, CommVault, and Druva.
New 80T Snowball Device
API For 3rd Party
S3 API
NEWGrowing Snowball
NEW
All AWS RegionsBy End Of 2016
TODAY:
2. Support for the native S3 API. Now applications can write to the Snowball using the S3 API, instead of the Snowball client, for simpler integration with existing processes.
On-Premises Databases
AWS Database Migration Service
engine a →
Schema Conversion Tool (Schema & Data Transformation) →
Database Migration Service →
engine b
Convert Database Objects: Tables, Partitions, Sequences, Views, Stored Procedures, Triggers, Functions
Migrate Between Database Engines
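DMS tasks are scoped by a JSON table-mapping document of selection rules. A minimal sketch that includes every table in one schema (the schema name is a placeholder):

```python
import json

def include_schema(schema_name):
    """Minimal DMS table-mapping document: one selection rule that
    includes every table in the given schema."""
    return {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-" + schema_name,
            # '%' is the DMS wildcard for object names
            "object-locator": {"schema-name": schema_name,
                               "table-name": "%"},
            "rule-action": "include",
        }]
    }

mapping = include_schema("hr")
print(json.dumps(mapping, indent=2))
```

The serialized document would be supplied as the task's table mappings when creating a replication task.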
Automate the discovery of data center applications and their dependencies
AWS Application Discovery Service
Simplify Application Discovery With AWS ADS
- Automatically discovers app inventory
- Identifies app & infrastructure dependencies
- Measures performance baseline
- Data encrypted end-to-end
The service automatically discovers the inventory of applications running in a customer data center. It determines how applications depend on each other and on underlying infrastructure, and it measures the applications and processes running on hosts to determine a performance baseline. Application discovery data is secured through end-to-end encryption. An open data format and a public API give partners and customers the flexibility to integrate with ISV and SI partner application migration offerings.
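The dependency-mapping idea can be sketched as folding observed network connections into a host-to-host map; the host names, ports, and tuple shape here are illustrative assumptions, not the actual ADS data format:

```python
from collections import defaultdict

def dependency_map(connections):
    """Fold observed (source_host, dest_host, port) connections,
    like those a discovery agent collects, into a per-host
    dependency map."""
    deps = defaultdict(set)
    for src, dst, port in connections:
        deps[src].add((dst, port))
    return deps

observed = [("web-1", "app-1", 8080),
            ("app-1", "db-1", 3306),
            ("web-2", "app-1", 8080)]
deps = dependency_map(observed)
```

Such a map is what lets a migration plan move an app tier and its database together rather than breaking a dependency mid-migration.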
Get Help With Migration From ISV And SI Partners
SI Partners: Accenture, Infosys, Wipro, 2nd Watch, Cognizant, Datapipe, RAX, Slalom, CloudReach, LogicWorks, SixNines, ClearScale, Minjar
ISV Partners: Racemi, CloudEndure, RISC Networks
It's a journey
So, what do customers want from hybrid implementations? Hybrid means a lot of things to a lot of different people, so we focus less on market definitions and more on customer requirements. Let's dig into a use case to discuss what organizations want when they ask for Hybrid.
Thank you.
Working with AWS lets us put more focus on building an awesome product, and less on worrying about our infrastructure.