Month: July 2019

Introducing the “Preparing for the California Consumer Privacy Act” whitepaper

AWS has published a whitepaper, Preparing for the California Consumer Privacy Act, to provide guidance on designing and updating your cloud architecture to meet the requirements of the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020.

The whitepaper is intended for engineers and solution builders, but it also serves as a guide for qualified security assessors (QSAs) and internal security assessors (ISAs) so that you can better understand the range of AWS products and services that are available for you to use.

The CCPA was enacted into law on June 28, 2018, and grants California consumers certain privacy rights. It grants consumers the right to request that a business disclose the categories and specific pieces of personal information collected about the consumer, the categories of sources from which that information is collected, the “business purposes” for collecting or selling the information, and the categories of third parties with whom the information is shared. The whitepaper addresses the three main subsections of the CCPA: data collection, data retrieval and deletion, and data awareness.

To read the text of the CCPA, visit the California Legislative Information website.

If you have questions or want to learn more, contact your account executive or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Julia Soscia

Julia is a Solutions Architect at Amazon Web Services based out of New York City. Her main focus is to help customers create well-architected environments on the AWS cloud platform. She is an experienced data analyst with a focus on big data and analytics.

Anthony Pasquarielo

Anthony is a Solutions Architect at Amazon Web Services. He’s based in New York City. His main focus is providing customers technical guidance and consultation during their cloud journey. Anthony enjoys delighting customers by designing well-architected solutions that drive value and provide growth opportunity for their business.

Justin De Castri

Justin is a Manager of Solutions Architecture at Amazon Web Services based in New York City. His primary focus is helping customers build secure, scalable, and cost-optimized solutions that are aligned with their business objectives.

from AWS Security Blog

Optimizing Kubernetes Clusters on Spot Instances for Cost and Performance – AWS Online Tech Talks

Unlock deep cost savings by running Spot Instances as worker nodes in your Kubernetes clusters. Spot Instances are spare EC2 capacity available at up to 90% off the On-Demand price, and can be reclaimed by EC2 with a two-minute warning. In this tech talk, we cover the best practices for running production-grade Kubernetes clusters with Spot Instances as worker nodes: diversifying across EC2 instance types with Amazon EC2 Auto Scaling groups, handling Spot interruptions by draining worker nodes, and auto scaling your Kubernetes deployments.
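
To make the interruption-handling practice concrete, here is a minimal Node.js sketch (not from the talk): it polls the EC2 instance metadata service for a Spot interruption notice and drains the node when one appears. The node-name resolution via hostname and the kubectl flags are illustrative assumptions.

const http = require('http');
const { execSync } = require('child_process');

// EC2 publishes spot/instance-action in instance metadata roughly two minutes
// before reclaiming a Spot Instance; a 404 means no interruption is scheduled.
function checkForInterruption() {
  http.get('http://169.254.169.254/latest/meta-data/spot/instance-action', (res) => {
    res.resume(); // discard the body; only the status code matters here
    if (res.statusCode === 200) {
      // Cordon and drain this worker so Kubernetes reschedules its pods.
      // Assumes the hostname matches the Kubernetes node name.
      execSync('kubectl drain "$(hostname)" --ignore-daemonsets --delete-local-data --force');
    }
  }).on('error', () => { /* metadata service unreachable; retry on the next poll */ });
}

setInterval(checkForInterruption, 5000); // poll every 5 seconds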

Learning Objectives:
– Learn how to add Spot Instances as worker nodes in Kubernetes clusters
– Learn how to handle Spot interruptions to avoid performance and availability impact
– See how you can auto-scale Spot worker nodes

View on YouTube

Announcing the new Predictions category in Amplify Framework

The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, AWS announces a new category called “Predictions” in the Amplify Framework.

Using this category, you can easily add and configure AI/ML use cases for your web and mobile application with a few lines of code. You can accomplish these use cases with the Amplify CLI and either the Amplify JavaScript library (with the new Predictions category) or the generated iOS and Android SDKs for Amazon AI/ML services. You do not need any prior experience with machine learning or AI services to use this category.

Using the Amplify CLI, you can set up your backend by answering simple questions in the CLI flow. In addition, you can orchestrate advanced use cases such as on-demand indexing of images to auto-update a collection in Amazon Rekognition. The actual image bytes are not stored by Amazon Rekognition. For example, this enables you to securely upload new images using an Amplify storage object, which triggers an auto-update of the collection. You can then identify the new entities the next time you make inference calls using the Amplify library. You can also set up or import a SageMaker endpoint by using the “Infer” option in the CLI.

The Amplify JavaScript library with Predictions category includes support for the following use cases:

1. Translate text to a target language.
2. Generate speech from text.
3. Identify text in an image.
4. Identify entities in an image (for example, celebrity detection).
5. Label real-world entities within an image or document (for example, recognize a scene, objects, and activities in an image).
6. Interpret text to find insights and relationships.
7. Transcribe text from audio.
8. Index images with Amazon Rekognition.

The supported use cases leverage the following AI/ML services:

  • Amazon Rekognition
  • Amazon Translate
  • Amazon Polly
  • Amazon Transcribe
  • Amazon Comprehend
  • Amazon Textract

The iOS and Android SDKs now include support for the SageMaker runtime, which you can use to call inference on your custom models hosted on SageMaker. You can also extract text and data from scanned documents using the newly added support for Amazon Textract in the Android SDK. These services add to the list of existing AI services supported in the iOS and Android SDKs.

In this post, you build and host a React.js web application that takes English text as input and translates it to Spanish. In addition, you convert the translated text to speech in Spanish. This type of use case could be added to a travel application, for example, where you type text in English and play back the translated text in a language of your choice. To build this app, you use two capabilities from the Predictions category: text translation and generating speech from text.

Second, we go through the flow of indexing images to update a collection in Amazon Rekognition, from both the Amplify CLI and an application.

Building the React.js Application

Prerequisites:

Install Node.js and npm if they are not already installed on your machine.

Steps

To create a new React.js app

Create a new React.js application using the following command:

$ npx create-react-app my-app

To set up your backend

Install and configure the Amplify CLI using the following command:

$ npm install -g @aws-amplify/cli
$ amplify configure

To create a new Amplify project

Run the following command from the root folder of your React.js application:

$ amplify init

Choose the following default options as shown below:

? Enter a name for the project: my-app
? Enter a name for the environment: dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building: javascript
? What javascript framework are you using: react
? Source Directory Path:  src
? Distribution Directory Path: build
? Build Command:  npm run-script build
? Start Command: npm run-script start
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use: default

To add text translation

Add the new Predictions category to your Amplify project using the following command:

$ amplify add predictions

The command line interface asks you simple questions to add AI/ML use cases. There are four options: Identify, Convert, Interpret, and Infer.

  • Choose the “Convert” option.
  • When prompted, add authentication if you have not already done so.
  • Select the following options in the CLI:
? Please select from one of the below mentioned categories: Convert
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? No, I am done.
? What would you like to convert? Convert text into a different language
? Provide a friendly name for your resource: translateText6c4601e3
? What is the source language? English
? What is the target language? Spanish
? Who should have access? Auth and Guest users

To add text to speech

Run the following command to add text to speech capability to your project:

$ amplify add predictions
? Please select from one of the below mentioned categories: Convert
? What would you like to convert? Convert text to speech
? Provide a friendly name for your resource: speechGeneratorb05d231c
? What is the source language? Mexican Spanish
? Select a speaker Mia - Female
? Who should have access? Auth and Guest users

To integrate the predictions library in a React.js application

Now that you have set up the backend, integrate the Predictions library in your React.js application.

The application UI shows “Text Translation” and “Text to Speech” with a separate button for each functionality. The output of the text translation is the translated text in JSON format. The output of Text to Speech is an audio file that can be played from the application.

First, install the Amplify and Amplify React dependencies using the following command:

$ npm install aws-amplify aws-amplify-react

Next, open src/App.js and add the following code:

import React, { useState } from 'react';
import './App.css';
import Amplify from 'aws-amplify';
import Predictions, { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
 
import awsconfig from './aws-exports';
 
Amplify.addPluggable(new AmazonAIPredictionsProvider());
Amplify.configure(awsconfig);
 
 
function TextTranslation() {
  const [response, setResponse] = useState("Input text to translate")
  const [textToTranslate, setTextToTranslate] = useState("write to translate");

  function translate() {
    Predictions.convert({
      translateText: {
        source: {
          text: textToTranslate,
          language : "en" // defaults configured in aws-exports.js
        },
        targetLanguage: "es"
      }
    }).then(result => setResponse(JSON.stringify(result, null, 2)))
      .catch(err => setResponse(JSON.stringify(err, null, 2)))
  }

  function setText(event) {
    setTextToTranslate(event.target.value);
  }

  return (
    <div className="Text">
      <div>
        <h3>Text Translation</h3>
        <input value={textToTranslate} onChange={setText}></input>
        <button onClick={translate}>Translate</button>
        <p>{response}</p>
      </div>
    </div>
  );
}
 
function TextToSpeech() {
  const [response, setResponse] = useState("...")
  const [textToGenerateSpeech, setTextToGenerateSpeech] = useState("write to speech");
  const [audioStream, setAudioStream] = useState();
  function generateTextToSpeech() {
    setResponse('Generating audio...');
    Predictions.convert({
      textToSpeech: {
        source: {
          text: textToGenerateSpeech,
          language: "es-MX" // default configured in aws-exports.js 
        },
        voiceId: "Mia"
      }
    }).then(result => {
      
      setAudioStream(result.speech.url);
      setResponse(`Generation completed, press play`);
    })
      .catch(err => setResponse(JSON.stringify(err, null, 2)))
  }

  function setText(event) {
    setTextToGenerateSpeech(event.target.value);
  }

  function play() {
    var audio = new Audio();
    audio.src = audioStream;
    audio.play();
  }
  return (
    <div className="Text">
      <div>
        <h3>Text To Speech</h3>
        <input value={textToGenerateSpeech} onChange={setText}></input>
        <button onClick={generateTextToSpeech}>Text to Speech</button>
        <h3>{response}</h3>
        <button onClick={play}>play</button>
      </div>
    </div>
  );
}
 
function App() {
  return (
    <div className="App">
      <TextTranslation />
      <hr />
      <TextToSpeech />
      <hr />
    </div>
  );
}
 
export default App;

In the previous code, the source language for translation is set by default in aws-exports.js. Similarly, the default language for text-to-speech is set in aws-exports.js. You can override these values in your application code.
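
The reverse also holds: a call can rely entirely on those defaults. Here is a minimal sketch, assuming the backend configured earlier in this post, where no language fields are supplied at all:

// Minimal sketch: with no `language` or `targetLanguage` supplied, the library
// falls back to the source and target languages the CLI wrote to aws-exports.js.
Predictions.convert({
  translateText: {
    source: { text: "Where is the nearest train station?" }
  }
}).then(result => console.log(JSON.stringify(result, null, 2)))
  .catch(err => console.error(err));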

To add hosting for your application

You can enable static web hosting for your React application on Amazon S3 by running the following command from the root of your application folder:

$ amplify add hosting

To publish the application, run:

$ amplify publish

The application is now hosted on Amazon S3, and you can access it at a link that looks like http://my-appXXXXXXXXXXXX-hostingbucket-dev.s3-website-us-XXXXXX.amazonaws.com/

On-demand indexing of images

The “Identify entities” option in the Amplify CLI uses Amazon Rekognition and can detect entities such as celebrities by default. However, you can use Amplify to index new entities and auto-update the collection in Amazon Rekognition. This enables advanced use cases, such as uploading a new image and then having the entities in an input image recognized when they match an entry in the collection. Note that Amazon Rekognition does not store any image bytes.

Note that if you delete the image from S3, the entity is removed from the collection.

You can easily set up the indexing feature from the Amplify CLI using the following flow:

$ amplify add predictions
? Please select from one of the below mentioned categories Identify
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? What would you like to identify? Identify Entities
? Provide a friendly name for your resource identifyEntities5a41fcea
? Would you like use the default configuration? Advanced Configuration
? Would you like to enable celebrity detection? Yes
? Would you like to identify entities from a collection of images? Yes
? How many entities would you like to identify 50
? Would you like to allow users to add images to this collection? Yes
? Who should have access? Auth users
? The CLI would be provisioning an S3 bucket to store these images please provide bucket name: myappentitybucket

If you have already set up storage from the Amplify CLI by running `amplify add storage`, the bucket that was created is reused. To upload images for indexing from the CLI, you can run `amplify predictions upload`, and it prompts you for a folder location containing your images.

After you have set up the backend through the CLI, you can use an Amplify storage object to add images to the S3 bucket, which triggers auto-indexing of the images and updates the collection in Amazon Rekognition.

In your src/App.js, add the following function, which uploads the image test.jpg to Amazon S3:

function PredictionsUpload() {
  // Requires: import { Storage } from 'aws-amplify';
  function upload(event) {
    const { target: { files } } = event;
    const [file,] = files || [];
    Storage.put('test.jpg', file, {
      level: 'protected',
      customPrefix: {
        protected: 'protected/predictions/index-faces/',
      }
    });
  }

  return (
    <div className="Text">
      <div>
        <h3>Upload to predictions s3</h3>
        <input type="file" onChange={upload}></input>
      </div>
    </div>
  );
}

Next, call the Predictions.identify() function to identify entities in an input image using the following code. Note that you have to set “collection: true” in the call to identify.

function EntityIdentification() {
  const [response, setResponse] = useState("Click upload for test ")
  const [src, setSrc] = useState("");

  function identifyFromFile(event) {
    setResponse('searching...');
    
    const { target: { files } } = event;
    const [file,] = files || [];

    if (!file) {
      return;
    }
    Predictions.identify({
      entities: {
        source: {
          file,
        },
        collection: true,
        celebrityDetection: true
      }
    }).then(result => {
      console.log(result);
      const entities = result.entities;
      let imageId = ""
      entities.forEach(({ boundingBox, metadata: { name, externalImageId } }) => {
        const {
          width, // ratio of overall image width
          height, // ratio of overall image height
          left, // left coordinate as a ratio of overall image width
          top // top coordinate as a ratio of overall image height
        } = boundingBox;
        imageId = externalImageId;
        console.log({ name });
      })
      if (imageId) {
        Storage.get("", {
          customPrefix: {
            public: imageId
          },
          level: "public",
        }).then(setSrc); 
      }
      console.log({ entities });
      setResponse(imageId);
    })
      .catch(err => console.log(err))
  }

  return (
    <div className="Text">
      <div>
        <h3>Entity identification</h3>
        <input type="file" onChange={identifyFromFile}></input>
        <p>{response}</p>
        { src && <img src={src}></img>}
      </div>
    </div>
  );
}
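
To try both flows in the browser, you can swap the new components into the App component from the translation example earlier. This is just a sketch of the wiring; the layout is an example:

// Replace (or extend) the App component from the translation example with the
// upload and identification components defined above.
function App() {
  return (
    <div className="App">
      <PredictionsUpload />
      <hr />
      <EntityIdentification />
    </div>
  );
}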

To learn more about the predictions category, visit our documentation.

Feedback

We hope you like these new features! Let us know how we are doing, and submit any feedback in the Amplify Framework GitHub repository. You can read more about AWS Amplify on the AWS Amplify website.

from AWS Mobile Blog

AWS ParallelCluster with AWS Directory Services Authentication

AWS ParallelCluster simplifies the creation and deployment of HPC clusters. In this post, we combine ParallelCluster with AWS Directory Service to create a multi-user, POSIX-compliant system with centralized authentication and automated home directory creation.

To grant only the minimum permissions to the nodes in the cluster, no AD configuration parameters or permissions are stored directly on the cluster nodes. Instead, when the ParallelCluster nodes boot, they automatically trigger an AWS Lambda function, which in turn uses AWS Systems Manager Parameter Store and AWS KMS to securely join the node to the domain. Users log in to ParallelCluster nodes with their AD credentials.

VPC configuration for ParallelCluster

The VPC used for this configuration can be created using the “VPC Wizard” tool. You can also use an existing VPC that meets the AWS ParallelCluster network requirements.

In Select a VPC Configuration, choose VPC with Public and Private Subnets and then click Select.

Prior to starting the VPC Wizard, allocate an Elastic IP Address. This will be used to configure a NAT gateway for the private subnet. A NAT gateway is required to enable compute nodes in the AWS ParallelCluster private subnet to download the required packages and to access the AWS services public endpoints. See AWS ParallelCluster network requirements.

Please be sure to select two different Availability Zones for the public and private subnets. While this is not strictly required for ParallelCluster itself, we later reuse these subnets for Simple AD, which requires subnets in two distinct Availability Zones.

You can find more details about VPC creation and configuration options in VPC with Public and Private Subnets (NAT).

AWS Directory Services configuration

For simplicity in this example, we will configure Simple AD as the directory service, but this solution will work with any Active Directory system.

Simple AD configuration is performed from the AWS Directory Service console. The required configuration steps are described in Getting Started with Simple AD.

For this example, set the Simple AD configuration as follows:

Directory DNS name: test.domain
Directory NetBIOS name: TEST
Administrator password: <Your DOMAIN password>

In the networking section, select the VPC and the two subnets created in the previous steps.

Make note of the DNS addresses listed in the directory details, as these will be needed later (in this example, 10.0.0.92 and 10.0.1.215).

DHCP options set for AD

In order for nodes to join the AD domain, a DHCP options set must be configured for the VPC, consistent with the domain name and DNS servers of the Simple AD service configured previously.

From the AWS VPC dashboard, set the following:

Name: custom DHCP options set
Domain name: test.domain eu-west-1.compute.internal
Domain name servers: 10.0.0.92, 10.0.1.215

The “Domain name” field must contain the Simple AD domain and the AWS regional domain (where the cluster and SimpleAD are being configured), separated by a space.

You can now assign the new DHCP options set to the VPC.

How to manage users and groups in Simple Active Directory

See Manage Users and Groups in Simple AD. If you prefer to use a Linux OS for account management, see How to Manage Identities in Simple AD Directories for details.

Using AWS Key Management Service to secure AD Domain joining credentials

AWS Key Management Service is a secure and resilient service that uses FIPS 140-2 validated hardware security modules to protect your keys. This service will be used to generate a key and encrypt the domain joining password, as explained in the next section.

In the AWS Console, navigate to the AWS Key Management Service (KMS) and click on Create key.

In Display name for the key, write “SimpleADJoinPassword” and click Next, leaving the default settings for all other sections.

In Customer managed keys, take note of the created Key ID.

AWS Systems Manager Parameter Store

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. We use it to securely store the domain joining information: the domain name and the joining password.

From the AWS console, access AWS Systems Manager and select Parameter Store. You need to create two parameters: DomainName, which contains the name of the domain, and DomainPassword, which contains the domain administrator password.

To create the first parameter, click on Create parameter and add the following information in the Parameter details section:

Name: DomainName
Type: String
Value: test.domain

Click on Create parameter to create the parameter.

You can now create the DomainPassword parameter with the following details:

Name: DomainPassword
Type: SecureString
KMS KEY ID: alias/SimpleADJoinPassword
Value: <your_ad_password>

Click on Create parameter to create it.

AWS ParallelCluster configuration

AWS ParallelCluster is an open source cluster management tool to deploy and manage HPC clusters in the AWS cloud; to get started, see Installing AWS ParallelCluster.

After the AWS ParallelCluster command line has been configured, create the cluster template file provided below in ~/.parallelcluster/config. The master_subnet_id parameter contains the ID of the public subnet created earlier; compute_subnet_id contains the private one.

The ec2_iam_role is the role that will be used for all the instances of the cluster. The steps for creating this role will be explained in the next section.

[aws]
aws_region_name = eu-west-1

[cluster slurm]
scheduler = slurm
compute_instance_type = c5.large
initial_queue_size = 2
max_queue_size = 10
maintain_initial_size = false
base_os = alinux
key_name = AWS_Ireland
vpc_settings = public
ec2_iam_role = parallelcluster-custom-role
pre_install = s3://pcluster-scripts/pre_install.sh
post_install = s3://pcluster-scripts/post_install.sh

[vpc public]
master_subnet_id = subnet-01fc20e143543f8af
compute_subnet_id = subnet-0b1ae2790497d83ec
vpc_id = vpc-0cdee679c5a6163bd

[global]
update_check = true
sanity_check = true
cluster_template = slurm

[aliases]
ssh = ssh {CFN_USER}@{MASTER_IP} {ARGS}

The s3://pcluster-scripts bucket contains the pre- and post-installation scripts required to configure the master and compute nodes in the domain. Because bucket names must be globally unique, create your own S3 bucket and replace s3://pcluster-scripts with your chosen name.

The pre_install script installs the required packages and joins the node to the domain:

#!/bin/bash

# Install the required packages
yum -y install sssd realmd krb5-workstation samba-common-tools
instance_id=$(curl http://169.254.169.254/latest/meta-data/instance-id)
region=$(curl  -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
# Lambda function to join the linux system in the domain
aws --region ${region} lambda invoke --function-name join-domain-function /tmp/out --payload '{"instance": "'${instance_id}'"}' --log-type None
output=""
while [ -z "$output" ]
do
  sleep 5
  output=$(realm list)
done
# This line allows users to log in without the domain name suffix
sed -i 's/use_fully_qualified_names = True/use_fully_qualified_names = False/g' /etc/sssd/sssd.conf
# This line configures sssd to create home directories in the shared folder
mkdir /shared/home/
sed -i '/fallback_homedir/c\fallback_homedir = /home/%u' /etc/sssd/sssd.conf
sleep 1
service sssd restart
# This line is required for AWS ParallelCluster to correctly parse the custom domain name
sed -i "s/--fail \${local_hostname_url}/--fail \${local_hostname_url} | awk '{print \$1}'/g" /opt/parallelcluster/scripts/compute_ready

The post_install script configures the SSH service to accept password authentication:

#!/bin/bash

sed -i 's/PasswordAuthentication no//g' /etc/ssh/sshd_config
echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
sleep 1
service sshd restart

Copy the pre_install and post_install scripts into the S3 bucket created previously.

AD Domain join with AWS Lambda

AWS Lambda allows you to run code without provisioning or managing servers. Lambda is used in this solution to securely join the Linux node to the Simple AD domain.

You can create the function as described in Create a Lambda Function with the Console.

For Function name, enter join-domain-function.

For Runtime, select Python 2.7.

Choose “Create function” to create it.

The following code should be entered in the Function code section, which you can find by scrolling down the page. Replace <REGION> with the correct value.

import json
import boto3
import time

def lambda_handler(event, context):
    json_message = json.dumps(event)
    message = json.loads(json_message)
    instance_id = message['instance']
    ssm_client = boto3.client('ssm', region_name="<REGION>") # use region code in which you are working
    DomainName = ssm_client.get_parameter(Name='DomainName')
    DomainName_value = DomainName['Parameter']['Value']
    DomainPassword = ssm_client.get_parameter(Name='DomainPassword',WithDecryption=True)
    DomainPassword_value = DomainPassword['Parameter']['Value']
    response = ssm_client.send_command(
             InstanceIds=[
                "%s"%instance_id
                     ],
             DocumentName="AWS-RunShellScript",
             Parameters={
                'commands':[
                     'echo "%s" | realm join -U [email protected]%s %s --verbose;rm -rf /var/lib/amazon/ssm/i-*/document/orchestration/*'%(DomainPassword_value,DomainName_value,DomainName_value)                       ]
                  },
               )
    return {
        'statusCode': 200,
        'body': json.dumps('Command Executed!')
    }

In the Basic settings section, set the Timeout to 10 seconds.

Click on Save in the top right to save the function.

In the Execution role section, click on the highlighted section to edit the role.

In the newly opened tab, click on Attach policies and then Create policy.

The last action opens another new tab in your browser.

Click on Create policy and then JSON.

The following policy can be entered in the JSON editor. Replace <REGION>, <AWS ACCOUNT ID>, and <KEY ID> with the correct values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": [
                "arn:aws:ssm:<REGION>:<AWS ACCOUNT ID>:parameter/DomainName"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": [
                "arn:aws:ssm:<REGION>:<AWS ACCOUNT ID>:parameter/DomainPassword"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:<REGION>:<AWS ACCOUNT ID>:instance/*",
                "arn:aws:ssm:<REGION>::document/AWS-RunShellScript"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:<REGION>:<AWS ACCOUNT ID>:key/<KEY ID>"
            ]
        }
    ]
}

In the next section, enter “GetJoinCredentials” as the Name and click Create policy.

Close the current tab and move to the previous one to select the policy for the Lambda role.

Refresh the list, select the GetJoinCredentials policy, and click Attach policy.

IAM custom role for Lambda and SSM endpoints

To allow ParallelCluster nodes to call Lambda and SSM endpoints, you need to configure a custom IAM Role.

See AWS Identity and Access Management Roles in AWS ParallelCluster for details on the default AWS ParallelCluster policy.

From the AWS console:

  • access the AWS Identity and Access Management (IAM) service and click on Policies.
  • choose Create policy and, in the JSON section, paste the following policy. Be sure to modify <REGION> and <AWS ACCOUNT ID> to match the values for your account, and update the S3 bucket name from pcluster-scripts to the name you chose earlier.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": [
                "*"
            ],
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:AttachVolume",
                "ec2:DescribeInstanceAttribute",
                "ec2:DescribeInstanceStatus",
                "ec2:DescribeInstances",
                "ec2:DescribeRegions"
            ],
            "Sid": "EC2",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "*"
            ],
            "Action": [
                "dynamodb:ListTables"
            ],
            "Sid": "DynamoDBList",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "arn:aws:sqs:<REGION>:<AWS ACCOUNT ID>:parallelcluster-*"
            ],
            "Action": [
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:ChangeMessageVisibility",
                "sqs:DeleteMessage",
                "sqs:GetQueueUrl"
            ],
            "Sid": "SQSQueue",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "*"
            ],
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:DescribeTags",
                "autoScaling:UpdateAutoScalingGroup",
                "autoscaling:SetInstanceHealth"
            ],
            "Sid": "Autoscaling",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "arn:aws:dynamodb:<REGION>:<AWS ACCOUNT ID>:table/parallelcluster-*"
            ],
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "dynamodb:DescribeTable"
            ],
            "Sid": "DynamoDBTable",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "arn:aws:s3:::<REGION>-aws-parallelcluster/*"
            ],
            "Action": [
                "s3:GetObject"
            ],
            "Sid": "S3GetObj",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "arn:aws:cloudformation:<REGION>:<AWS ACCOUNT ID>:stack/parallelcluster-*"
            ],
            "Action": [
                "cloudformation:DescribeStacks"
            ],
            "Sid": "CloudFormationDescribe",
            "Effect": "Allow"
        },
        {
            "Resource": [
                "*"
            ],
            "Action": [
                "sqs:ListQueues"
            ],
            "Sid": "SQSList",
            "Effect": "Allow"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeAssociation",
                "ssm:GetDeployablePatchSnapshotForInstance",
                "ssm:GetDocument",
                "ssm:DescribeDocument",
                "ssm:GetManifest",
                "ssm:GetParameter",
                "ssm:GetParameters",
                "ssm:ListAssociations",
                "ssm:ListInstanceAssociations",
                "ssm:PutInventory",
                "ssm:PutComplianceItems",
                "ssm:PutConfigurePackageResult",
                "ssm:UpdateAssociationStatus",
                "ssm:UpdateInstanceAssociationStatus",
                "ssm:UpdateInstanceInformation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2messages:AcknowledgeMessage",
                "ec2messages:DeleteMessage",
                "ec2messages:FailMessage",
                "ec2messages:GetEndpoint",
                "ec2messages:GetMessages",
                "ec2messages:SendReply"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:<REGION>:<AWS ACCOUNT ID>:function:join-domain-function"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::pcluster-scripts/*"
            ]
        }
    ]
}

Click Review policy, and in the next section enter “parallelcluster-custom-policy” as the Name string. Click Create policy.

Now you can finally create the role. Choose Roles in the left menu and then Create role.

Select AWS service as the type of trusted entity, and EC2 as the service that will use this role.

Choose Next to proceed in the creation process.

In the policy selection, select the parallelcluster-custom-policy that was just created.

Click through the Next: Tags and then Next: Review pages.

In the Role name box, enter “parallelcluster-custom-role” and confirm with the Create role button.

Deploy ParallelCluster

The cluster can now be created using the following command line:

pcluster create -t slurm slurmcluster

-t slurm indicates which section of the cluster template to use. slurmcluster is the name of the cluster that will be created. For more details, see the AWS ParallelCluster Documentation. A detailed explanation of the pcluster command line parameters can be found in AWS ParallelCluster CLI Commands.

You can now connect to the Master node of the cluster with any Simple AD user and run the desired workload.

Teardown

When you have finished your computation, the cluster can be destroyed using the following command:

pcluster delete slurmcluster

The additional resources that were created can be deleted by following the instructions in the AWS documentation.

Conclusion

This blog post has shown you how to deploy and integrate Simple AD with AWS ParallelCluster, allowing cluster nodes to be securely and automatically joined to a domain to provide centralized user authentication. This solution encrypts and stores the domain joining credentials using AWS Systems Manager Parameter Store with AWS KMS, and uses AWS Lambda at node boot to join the AD Domain.

from AWS Open Source Blog

Best Practices for Android Authentication on AWS with AWS Amplify – AWS Online Tech Talks

– Understand how AWS helps with mobile authentication best practices
– Learn how to use AWS authentication services with Android
– Learn how to use AWS Amplify as an abstraction layer to make mobile authentication quick and easy

View on YouTube

AWS CodePipeline Adds Pipeline Status to Pipeline Listing

You can now view pipeline status from the pipeline listing in AWS CodePipeline. Previously, you had to look at the detail page of a pipeline to obtain its status. Now, you can see the status of the most recent execution for each pipeline directly on the pipeline listing. You are now able to monitor status across multiple pipelines in a unified interface.
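
The listing view is a console feature, but you can assemble the same overview programmatically. The following is a hedged sketch using the AWS SDK for JavaScript: it lists the pipelines, then fetches the status of each one's most recent execution. The region is an example value.

// List every pipeline, then report the status of its latest execution.
const AWS = require('aws-sdk');
const codepipeline = new AWS.CodePipeline({ region: 'us-east-1' }); // example region

async function listPipelineStatuses() {
  const { pipelines } = await codepipeline.listPipelines({}).promise();
  for (const { name } of pipelines) {
    const { pipelineExecutionSummaries } = await codepipeline
      .listPipelineExecutions({ pipelineName: name, maxResults: 1 })
      .promise();
    const latest = pipelineExecutionSummaries[0];
    console.log(`${name}: ${latest ? latest.status : 'no executions yet'}`);
  }
}

listPipelineStatuses().catch(console.error);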

from Recent Announcements https://aws.amazon.com/about-aws/whats-new/2019/07/aws-codepipeline-adds-pipeline-status-to-pipeline-listing/

Amplify Framework Update – Quickly Add Machine Learning Capabilities to Your Web and Mobile Apps

At AWS, we want to put machine learning in the hands of every developer. For example, we have pre-trained AI services for areas such as computer vision and language that you can use without any expertise in machine learning. Today we are taking another step in that direction with the addition of a new Predictions category to the Amplify Framework. In this way, you can add and configure AI/ML use cases for your web or mobile application with a few lines of code!

AWS Amplify consists of a development framework and developer services that make it super easy to build mobile and web applications on AWS. The open-source Amplify Framework provides an opinionated set of libraries, user interface (UI) components, and a command line interface (CLI) to build a cloud backend and integrate it with your web or mobile apps. Amplify leverages a core set of AWS services organized into categories, including storage, authentication & authorization, APIs (GraphQL and REST), analytics, push notifications, chat bots, and AR/VR.

Using the Amplify Framework CLI, you can interactively initialize your project with amplify init. Then, you can go through your storage (amplify add storage) and user authentication & authorization (amplify add auth) options.

Now, you can also use amplify add predictions to configure your app to:

  • Identify text, entities, and labels in images using Amazon Rekognition, or identify text in scanned documents to get the contents of fields in forms and information stored in tables using Amazon Textract.
  • Convert text into a different language using Amazon Translate, text to speech using Amazon Polly, and speech to text using Amazon Transcribe.
  • Interpret text to find the dominant language, the entities, the key phrases, the sentiment, or the syntax of unstructured text using Amazon Comprehend.

You can select to have each of the above actions available only to authenticated users of your app, or also for guest, unauthenticated users. Based on your inputs, Amplify configures the necessary permissions using AWS Identity and Access Management (IAM) roles and Amazon Cognito.
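
Speech to text follows the same convert pattern as the other actions. Here is a sketch rather than a full sample: audioBuffer stands for raw audio bytes you have already captured, and the result shape follows the Amplify documentation.

Predictions.convert({
  transcription: {
    source: {
      bytes: audioBuffer // raw audio bytes captured elsewhere (assumption)
    },
    language: "en-US"
  }
}).then(({ transcription: { fullText } }) => console.log(fullText))
  .catch(err => console.error(err));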

Let’s see how Predictions works for a web application. For example, to identify text in an image using Amazon Rekognition directly from the browser, you can use the following JavaScript syntax and pass a file object:

Predictions.identify({
  text: {
    source: file,
    format: "PLAIN" // "PLAIN" uses Amazon Rekognition
  }
}).then((result) => {...})

If the image is stored on Amazon S3, you can change the source to link to the S3 bucket selected when adding storage to this project. You can also change the format to analyze a scanned document using Amazon Textract. Here’s how to extract text from a form in a document stored on S3:

Predictions.identify({
  text: {
    source: { key: "my/image" },
    format: "FORM" // "FORM" or "TABLE" use Amazon Textract
  }
}).then((result) => {...})

Here’s an example of how to interpret text using all the pre-trained capabilities of Amazon Comprehend:

Predictions.interpret({
  text: {
    source: {
      text: "text to interpret",
    },
    type: "ALL"
  }
}).then((result) => {...})
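
The result groups everything under a textInterpretation object. As a sketch of consuming it (the field names below follow the Amplify documentation and may vary between library versions):

Predictions.interpret({
  text: {
    source: { text: "I love this product!" },
    type: "ALL"
  }
}).then(({ textInterpretation }) => {
  console.log(textInterpretation.language);   // dominant language, e.g. "en"
  console.log(textInterpretation.sentiment);  // e.g. { predominant: "POSITIVE", ... }
  console.log(textInterpretation.keyPhrases); // detected key phrases
}).catch(err => console.error(err));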

To convert text to speech using Amazon Polly, using the language and the voice selected when adding the prediction, and play it back in the browser, you can use the following code:

Predictions.convert({
  textToSpeech: {
    source: {
      text: "text to generate speech"
    }
  }
}).then(result => {
  var audio = new Audio();
  audio.src = result.speech.url;
  audio.play();
})

Available Now
You can start building your next web or mobile app using Amplify today by following the get-started tutorial here, and give us your feedback in the Amplify Framework GitHub repository.

There are lots of other options and features available in the Predictions category of the Amplify Framework. Please see this walkthrough on the AWS Mobile Blog for an in-depth example of building a machine-learning powered app.

It has never been easier to add machine learning capabilities to a web or mobile app. Please let me know what you’re going to build next!

Danilo

from AWS News Blog https://aws.amazon.com/blogs/aws/amplify-framework-update-quickly-add-machine-learning-capabilities-to-your-web-and-mobile-apps/