In this Quest, you will delve deeper into the uses and capabilities of Amazon Redshift. You will use a remote SQL client to create and configure tables, and gain practice loading large data sets into Redshift. You will explore the effects of schema variations and compression, visualize Redshift data, and connect Redshift with Amazon Machine Learning to build a predictive data model.
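Bulk loads from a SQL client typically use Redshift's COPY command to pull delimited files from Amazon S3. The sketch below builds such a statement; the table name, bucket path, and IAM role ARN are hypothetical placeholders, not values from the lab.

```python
# Sketch of the kind of COPY statement used to bulk-load a Redshift table
# from S3. All identifiers (table, bucket, IAM role ARN) are placeholders.

def build_copy_statement(table: str, s3_path: str, iam_role_arn: str,
                         delimiter: str = "|", gzip: bool = True) -> str:
    """Build a Redshift COPY statement for delimited files staged in S3."""
    options = [f"DELIMITER '{delimiter}'"]
    if gzip:
        options.append("GZIP")  # input files are gzip-compressed
    return (
        f"COPY {table}\n"
        f"FROM '{s3_path}'\n"
        f"IAM_ROLE '{iam_role_arn}'\n"
        + "\n".join(options) + ";"
    )

sql = build_copy_statement(
    "public.orders",                     # hypothetical target table
    "s3://example-bucket/load/orders/",  # hypothetical S3 prefix
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",  # hypothetical role
)
print(sql)
```

You would run the resulting statement from the remote SQL client; because COPY loads many files in parallel from the S3 prefix, it is far faster for large data sets than row-by-row INSERTs.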
In this lab, you will generate an Amazon Machine Learning model, test and shape the ML model, and then try real-time predictions. To successfully complete this lab, you should be familiar with the Amazon S3 service. You should understand the concepts of buckets and objects, and how to perform put and get operations on objects in an S3 bucket using the S3 console or the AWS CLI. You should have first completed the lab “Introduction to Amazon Simple Storage Service (S3)”. For the lab to function as written, please DO NOT change the auto-assigned region.
In this lab you will use the AWS Management Console to bundle custom Amazon Elastic Block Store (EBS)–backed Amazon Machine Images (AMIs). You will learn how to map additional Amazon EBS volumes and ephemeral (instance store) volumes in your AMI. Lastly, you will review security best practices for creating AMIs that are suitable for public sharing.
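The additional volumes are declared through a block-device-mapping document of the shape the EC2 API accepts (for example, via `aws ec2 register-image --block-device-mappings`). The sketch below assembles one; the device names, snapshot ID, and volume size are hypothetical placeholders.

```python
import json

# Hypothetical block-device mappings of the shape accepted by
# `aws ec2 register-image --block-device-mappings`. Device names,
# the snapshot ID, and the volume size are placeholders.
mappings = [
    {   # additional EBS volume restored from a snapshot
        "DeviceName": "/dev/sdf",
        "Ebs": {
            "SnapshotId": "snap-0123456789abcdef0",  # placeholder
            "VolumeSize": 100,                        # GiB
            "VolumeType": "gp2",
            "DeleteOnTermination": True,
        },
    },
    {   # first instance store (ephemeral) volume
        "DeviceName": "/dev/sdb",
        "VirtualName": "ephemeral0",
    },
]

print(json.dumps(mappings, indent=2))
```

Note the two forms: EBS volumes carry an `Ebs` sub-document, while ephemeral volumes are referenced by `VirtualName` only, since instance store capacity depends on the instance type at launch.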
In this lab you will enable client-side at-rest encryption, using an AWS KMS-managed key, for data stored in Amazon S3 with the EMR File System (EMRFS). Within Amazon EMR you will create a security configuration that encrypts objects written to S3 with client-side encryption using the AWS KMS-managed key you specify, and decrypts objects with the same key that was used to encrypt them. This will allow you to more easily leverage frameworks like Apache Spark, Apache Tez, and Apache Hadoop MapReduce on Amazon EMR to run big data analytics, stream processing, machine learning, and ETL workloads on confidential data.
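An EMR security configuration is a JSON document; for S3 client-side encryption with a KMS key, the at-rest section sets the encryption mode to `CSE-KMS` and names the key. The sketch below assembles such a document (the KMS key ARN and region are placeholders, and the exact field set here is a minimal illustration, not the lab's configuration):

```python
import json

# Sketch of the security-configuration document of the kind passed to
# `aws emr create-security-configuration` to enable S3 client-side
# encryption (CSE-KMS) through EMRFS. The KMS key ARN is a placeholder.
security_configuration = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": False,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                # CSE-KMS: objects are encrypted client-side before the
                # write to S3, using the specified KMS-managed key.
                "EncryptionMode": "CSE-KMS",
                "AwsKmsKey": "arn:aws:kms:us-west-2:123456789012:key/EXAMPLE",
            }
        },
    }
}

print(json.dumps(security_configuration, indent=2))
```

Because encryption and decryption happen in EMRFS on the cluster, frameworks such as Spark, Tez, and Hadoop MapReduce read and write the encrypted S3 data transparently, with no application-level changes.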