Amazon SageMaker Ground Truth: Using A Pre-Trained Model for Faster Data Labeling

With Amazon SageMaker Ground Truth, you can build highly accurate training datasets for machine learning quickly. SageMaker Ground Truth offers easy access to public and private human labelers and provides them with built-in workflows and interfaces for common labeling tasks. Additionally, SageMaker Ground Truth can lower your labeling costs by up to 70% using automatic labeling, which works by training Ground Truth from data labeled by humans so that the service learns to label data independently. This previous blog post explains how automated data labeling works and how to evaluate its results.

What you may not know is that SageMaker Ground Truth trains models for you over the course of a labeling job, and that these models are available for use after a labeling job concludes! In this blog post, we will explain how you can use a model trained from a previous labeling job to “jump start” a subsequent labeling job. This is an advanced feature, only available through the SageMaker Ground Truth API.

About this blog post
Time to read: 30 minutes
Time to complete: 8 hours
Cost to complete: Under $600
Learning level: Intermediate (200)
AWS services: Amazon SageMaker, Amazon SageMaker Ground Truth

This post builds on the prior post "Annotate data for less with Amazon SageMaker Ground Truth and automated data labeling" – you may find it useful to review it first.

As part of this blog, we will create three different labeling jobs, as described below.

  1. An initial labeling job with the “auto labeling” feature enabled. At the end of this labeling job, we will have a trained machine learning model capable of making high quality predictions on the sample dataset.
  2. A subsequent labeling job with a different set of images drawn from the same dataset as the first labeling job. In this labeling job, the machine learning model that was produced as an output of the first labeling job will be provided to accelerate the labeling process.
  3. A repetition of the second labeling job, but without the pre-trained machine learning model. This labeling job is intended to serve as a control to demonstrate the benefit of using the pre-trained model.

We will use an Amazon SageMaker Jupyter notebook that uses the API to produce bounding box labels for our dataset.

To access the demo notebook, start an Amazon SageMaker notebook instance using an ml.m4.xlarge instance type. You can follow this step-by-step tutorial to set up an instance. On Step 3, make sure to mark “Any S3 bucket” when you create the IAM role! Open the Jupyter notebook, choose the SageMaker Examples tab, and launch object_detection_pretrained_model.ipynb.

Prepare Datasets

Let’s prepare our dataset to be used in creating our labeling jobs. We will create two sets of 1250 images drawn from a collection of 2500 bird images from the Open Images dataset. We will use the first batch in our initial labeling job and the other batch for our two subsequent jobs, one with the pre-trained model and one without.

Next, run all the cells under ‘Prepare Dataset’ in the demo notebook. Running these cells will perform the following steps.

  1. Get the full collection of 2500 images from the dataset repository.
  2. Divide the dataset into two batches of 1250 images each (a quick sketch of this split follows the list).
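
For illustration, the split itself is as simple as slicing the manifest in half. Here is a minimal sketch, assuming the full dataset manifest has been downloaded locally as dataset.manifest (a placeholder name, not necessarily the notebook's):

import json

# Read the full manifest of 2,500 data objects (one JSON record per line),
# then split it into two batches of 1,250 images each.
with open('dataset.manifest') as f:
    records = [json.loads(line) for line in f if line.strip()]

batch_1, batch_2 = records[:1250], records[1250:2500]

for name, batch in [('batch1.manifest', batch_1), ('batch2.manifest', batch_2)]:
    with open(name, 'w') as out:
        for record in batch:
            out.write(json.dumps(record) + '\n')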

Create An Initial Labeling Job With Active Learning

Now let’s run our first job. Run all of the cells under the “Iteration #1: Create Initial Labeling Job” heading of the notebook. You need to modify some of the cells, so read the notebook instructions carefully. Running these sections will perform the following steps.

  1. Prepare the first set of 1250 images from the previous step for use in our first labeling job.
  2. Create labeling instructions for an object detection labeling job.
  3. Create an object detection labeling job request.
  4. Submit the labeling job request to SageMaker Ground Truth.
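
For reference, the core of such a labeling job request looks roughly like the sketch below. The bucket paths, role ARN, algorithm ARN, and HumanTaskConfig are placeholders here rather than the notebook's actual values; the key piece is the LabelingJobAlgorithmsConfig block, which is what enables automated data labeling.

import boto3

sagemaker_client = boto3.client('sagemaker')

# Placeholder values for illustration only; see the notebook for the real ones.
role_arn = 'arn:aws:iam::123456789012:role/GroundTruthExecutionRole'
algorithm_arn = ('arn:aws:sagemaker:<region>:<aws-managed-account>:'
                 'labeling-job-algorithm-specification/object-detection')
human_task_config = {}  # work team ARN, UI template, pricing, consolidation lambdas, etc.

sagemaker_client.create_labeling_job(
    LabelingJobName='birds-batch-1',
    LabelAttributeName='category',
    InputConfig={
        'DataSource': {'S3DataSource': {'ManifestS3Uri': 's3://my-bucket/batch1.manifest'}}
    },
    OutputConfig={'S3OutputPath': 's3://my-bucket/output/'},
    RoleArn=role_arn,
    LabelCategoryConfigS3Uri='s3://my-bucket/class_labels.json',
    # This block turns on automated data labeling (active learning).
    LabelingJobAlgorithmsConfig={
        'LabelingJobAlgorithmSpecificationArn': algorithm_arn
    },
    HumanTaskConfig=human_task_config,
)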

The job should take about four hours. When it’s done, run all of the cells in the “Analyze Initial Active Learning labeling job results” sections. These sections will produce a wealth of information that will help you understand the labeling job that you performed. In particular, we can see that the total cost was $217.18, of which 78% was attributable to the costs of manual labeling by the public work team. It’s worth pointing out that even at this stage there are modest cost savings due to our use of auto labeling – without it, the labeling cost would have been $235. In general, larger datasets (on the order of multiple thousands of objects) will be able to make greater use of auto labeling. In the rest of this blog, we will seek to improve the auto labeling performance even on this small 1,250-object dataset through the use of a pre-trained model.

In a previous blog post “Annotate data for less with Amazon SageMaker Ground Truth and automated data labeling” we described the batch-wise nature of a labeling job. In this blog post, we again refer to the batch-by-batch statistics of our labeling job. The plots below show that the model did not begin auto-labeling images until the 4th iteration. In the end, the ML model was able to annotate a little less than half of the entire dataset. We will look to increase the share of machine labeled data and consequently decrease the overall cost by using a pre-trained model in the next step.

Verify that the cell titled “Wait for Completion of Job” returns the job status “Completed” before proceeding to the next step.

Figure 1. Labeling costs and metrics for the initial labeling job.

Create A Second Labeling Job With A Pre-Trained Model

Now that the first labeling job is complete, we’ll prepare the second labeling job. We’ll reuse much of the original labeling job request, but we’ll need to specify the pre-trained machine learning model. We can query the original labeling job to get the Amazon Resource Name (ARN) of the final machine learning model trained during the first job.

pretrained_model_arn = sagemaker_client.describe_labeling_job(
    LabelingJobName=job_name)['LabelingJobOutput']['FinalActiveLearningModelArn']

We’ll use this for the InitialActiveLearningModelArn parameter in the labeling job request.
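
Concretely, this amounts to one extra key in the request dictionary. A minimal sketch, assuming labeling_job_request is the request built in the notebook and pretrained_model_arn is the ARN retrieved above:

# Point the new labeling job at the model trained during the first job.
labeling_job_request['LabelingJobAlgorithmsConfig']['InitialActiveLearningModelArn'] = pretrained_model_arn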

In the demo notebook, run all the cells under the “Iteration #2: Labeling Job with Pre-Trained Model” heading. Running these sections will perform the following steps.

  1. Create an object detection labeling job request in which the model trained in the previous labeling job is provided.
  2. Submit the labeling job request to Ground Truth.

The job should take about four hours. When it’s done, run all of the cells in the “Analyze Active Learning labeling job with pre-trained model results” sections. This will produce a wealth of information similar to what we saw after the previous labeling job. You should already see some key differences in the number of machine-labeled dataset objects! In particular, the machine learning model is able to start labeling data in the third iteration, and when it does, it annotates almost the entire remainder of the dataset! Note that the cost associated with manual labeling is much lower than before. Although the cost associated with auto labeling has increased, this increase is smaller in magnitude than the decrease in the human labeling cost. Consequently, the overall cost of this labeling job – $146.80 – is 33% lower than that of the first labeling job.

Verify that the cell titled “Wait for Completion of Job” returns the job status “Completed” before proceeding to the next step.

Figure 2. Labeling costs and metrics for the second labeling job with the use of a pre-trained model.

Repeat the Second Labeling Job Without the Pre-Trained Model

In the previous labeling job, we saw a substantial improvement in run time and the number of machine-labeled dataset objects relative to the first labeling job. However, one may naturally ask how much of the difference is due to the difference in the underlying data. Although both datasets have the same labels and are sampled from the same, larger dataset, a controlled study will provide a fairer assessment. To that end, we’ll now repeat the second labeling job with all the same settings, but remove the pre-trained model. In the demo notebook, run all the cells in the “Labeling Job without Pre-trained model” section. Running these sections will perform the following steps.

  1. Duplicate the labeling job request from the second labeling job with the removal of the pre-trained model.
  2. Submit the labeling job request to Ground Truth.

The job should take about four hours. When it’s done, run all of the cells under the “Iteration #3: Second Data Subset Without Pre-Trained Model” heading. Again, this will produce plots that look similar to those generated in the previous steps. However, these figures should look more similar to the results of the first labeling job than the second. Notice that the overall cost is $189.64, and that the job took five iterations to complete. This cost is 29% higher than when we used the pre-trained model to help label this data!

Figure 3. Labeling costs and metrics for the third labeling job, which uses the same dataset as the second labeling job without the benefit of the pre-trained model.

Compare Results

Now that we’ve run all three labeling jobs, we can compare the results more fully. First, consider the left-hand plot shown below. The total elapsed running time for the labeling job that uses the pre-trained model is less than half the time required for the jobs that don’t make use of the pre-trained model. We can also see in the right-hand plot below that this reduction in time goes hand-in-hand with a larger fraction of auto-labeled data. The labeling job that uses the pre-trained model is so much faster because the machine learning model does more of the work, which is much more efficient than manual labeling.

It should be noted that some amount of variability is expected in these results. Due to the small random effects introduced by the pool of workers available when these labeling jobs were performed, the small fluctuations that may be seen in training the machine learning model, and so on, a repeated trial of these three labeling jobs may result in slightly different numbers. However, the substantial gain in cost and time savings seen in experiment #2 is predominantly due to the use of the pre-trained model.

Figure 4. Comparison of labeling time and auto-labeling efficiency across the three labeling jobs.

Finally, the plot below shows that the reduction in labeling time and the increase in the fraction of data annotated by the machine learning model lead to a measurable reduction in the total labeling cost. In this example we see that when labeling the second dataset, using a pre-trained model leads to a 23% reduction in cost relative to the control scenario where the pre-trained model was not used – $146.80 vs $189.64.

Figure 5. Total labeling cost across the three labeling jobs.

Conclusion

Let’s review what we covered in this exercise.

  • We gathered a dataset consisting of 2500 images of birds from the Open Images dataset.
  • We split this dataset into two halves.
  • We created an object detection labeling job for the first subset of 1250 images and saw that approximately 48% of the dataset was machine-labeled.
  • We created a second labeling job for the second subset, and we specified the machine learning model that was trained during the first labeling job. This time we found that approximately 80% of the dataset was machine-labeled.
  • As a final benchmark, we re-ran the second labeling job without specifying the pre-trained model. Now we found that approximately 60% of the dataset was machine-labeled.
  • In the end, we saw a 50% reduction in the time required to acquire labels, and a 23% reduction in total labeling cost, when we used a pre-trained model. These results are highly context dependent, and will vary from application to application. However, the workflow illustrated in this example demonstrates the value of using a pre-trained model for successive labeling jobs.

If we were to acquire a new unlabeled dataset in this domain (e.g., object detection for birds), we could set up another labeling job and specify the model trained in our second labeling job. The use of pre-trained machine learning models thus allows you to run labeling jobs in succession, with each job improving on the predictive ability gained through the previous job. Remember that the pre-trained model capability requires you to use the “job chaining” feature (described in https://aws.amazon.com/blogs/aws/amazon-sagemaker-ground-truth-keeps-simplifying-labeling-workflows/) or to use the Amazon SageMaker Ground Truth API, as we demonstrated in the accompanying example notebook.


About the Authors

Prateek Jindal is a software development engineer for AWS AI. He is working on solving complex data labeling problems in the machine learning world and has a keen interest in building scalable distributed solutions for his customers. In his free time, he loves to cook, try out new restaurants, and hit the gym.

Jonathan Buck is a software engineer at Amazon. His focus is on building impactful software services and products to democratize machine learning.

from AWS Machine Learning Blog

Another triple for the DeepRacer League brings more world records and the first female winner!

The AWS DeepRacer League is the world’s first global autonomous racing league, open to anyone. Developers of all skill levels can compete in person at 22 AWS events globally, or online via the AWS DeepRacer console, for a chance to win an expenses-paid trip to re:Invent 2019, where they will race to win the Championship Cup 2019.

Last week, the AWS DeepRacer League visited three cities around the world – Washington, D.C., USA; Taipei, Taiwan; and Tokyo, Japan. Each race spanned multiple days, providing developers with numerous opportunities to record a winning lap time.

The first female winner and another world record

The Tokyo race was the biggest one yet. Over 20,000 AWS customers came to the AWS Summit at the Makuhari Messe, located just outside of the city, for three days of learning, hands-on labs, and networking. There were two DeepRacer tracks for developers to race on throughout the summit, virtual racing pods, and multiple workshops to learn how to build a DeepRacer model.

Virtual racing pods, for customers to build models and learn more about the AWS DeepRacer league.

Hundreds of developers tested out their models on the tracks, but none could take the top spot from our first female winner, [email protected], who took home the cup with a world-record winning time of 7.44 seconds – that means the DeepRacer car was travelling at the equivalent of roughly 100 mph if scaled up to a real-size car! Here is [email protected] celebrating on the podium with her teammates. Check out the lightning-fast winning lap!

She came to the AWS Summit as part of a team created at her company DNP (Dai Nippon Printing, a Japanese printing company operating in areas such as Information Communications, Lifestyles and Industrial Supplies, and Electronics). 28 of them placed on the leaderboard, with the top 3 all being from the team – 2 of them beating the previous world record (7.62 seconds) set just the week before at Amazon re:MARS.

To prepare for such a strong showing, DNP created DeepRacer study groups where employees share their knowledge and newly acquired machine learning skills. They see DeepRacer as a fun and engaging way to grow their engineers’ skills in AI.

“We currently have around 2000 IT personnel in the group and fewer than 200 employees experienced in working with AI. We want to double the number within 5 years.” – Mr. Shinichiro Fukuda, Deputy Director, C & I Center, DNP Information Innovation Division

Source: https://japan.zdnet.com/article/35137517/

Race Stats

The race in Japan was the most competitive yet. The top 33 competitors achieved lap times of under 10 seconds, the top 17 were under 9 seconds and the top 4 were under 8 seconds – breaking the world record twice! Check out the fast times and final results from the race on the Tokyo Leaderboard.

Washington DC

The AWS Public Sector Summit in Washington, DC on June 10 also had an exciting race, and there was a familiar face back on the tracks – our second-place winner at Amazon re:MARS, John Amos. John narrowly missed out on the win and the opportunity to compete at re:Invent in Las Vegas when Anthony Navarro beat his world-record time in the last few minutes of racing. In Washington, he took the lead early on, held his position to the end, and will now be winging his way to re:Invent with the other Summit and Virtual Circuit winners. He is really enjoying his AWS DeepRacer experience and has a new hobby to boot!

“I think everyone should have a hobby and this is a healthy one. There’s lots of stuff you can get addicted to, but with this you’re out there running models using technology. I’m used to playing video games online, but this helps it become real. What you do in the reward function impacts what’s happening with the car, so taking it from the simulator out onto the track is just exhilarating, and who doesn’t love a good challenge?”

Taipei

In Taipei, developers were also burning rubber and posting fast times on the leaderboard, and the winner took the top spot by a narrow margin (just 0.04 of a second) in an exciting last few minutes of the race! He was [email protected]_CGI, with a winning time of 8.734 seconds. Congratulations to all of the winners this week – it’s going to be an exciting final round at re:Invent 2019.

The AWS DeepRacer League Summit Circuit is in the homestretch

The AWS DeepRacer League Summit circuit only has five more races (Hong Kong, Cape Town, New York, Mexico City, and Toronto) before the finale in Las Vegas, and it is shaping up to be an exciting event. Join the league at one of the five remaining races on the summit circuit, or race online in the virtual circuit today for your chance to win your trip to compete!


About the Author

Alexandra Bush is a Senior Product Marketing Manager for AWS AI. She is passionate about how technology impacts the world around us and enjoys being able to help make it accessible to all. Out of the office she loves to run, travel and stay active in the outdoors with family and friends.

from AWS Machine Learning Blog

Train and deploy Keras models with TensorFlow and Apache MXNet on Amazon SageMaker

Keras is a popular and well-documented open source library for deep learning, while Amazon SageMaker provides you with easy tools to train and optimize machine learning models. Until now, you had to build a custom container to use both, but Keras is now part of the built-in environments for TensorFlow and Apache MXNet. Not only does this simplify the development process, it also allows you to use standard Amazon SageMaker features like script mode or automatic model tuning.

Keras’s excellent documentation, numerous examples, and active community make it a great choice for beginners and experienced practitioners alike. The library provides a high-level API that makes it easy to build all kinds of deep learning architectures, with the option to use different backends for training and prediction: TensorFlow, Apache MXNet, and Theano.

In this post, I show you how to train and deploy Keras 2.x models on Amazon SageMaker, using the built-in environments for TensorFlow and Apache MXNet. In the process, you also learn the following:

  • How to run the same Keras code on Amazon SageMaker that you run on your local machine, using script mode.
  • How to optimize hyperparameters with automatic model tuning.
  • How to deploy your models with Amazon Elastic Inference.

The Keras example

This example demonstrates training a simple convolutional neural network on the Fashion MNIST dataset. This dataset replaces the well-known MNIST dataset. It has the same number of classes (10), samples (60,000 for training, 10,000 for validation), and image properties (28×28 pixels, black and white). But it’s also much harder to learn, which makes for a more interesting challenge.

First, set up TensorFlow as your Keras backend (and switch to Apache MXNet later on). For more information, see the mnist_keras_tf_local.py script.

The process is straightforward:

  • Grab optional parameters from the command line, or use default values if they’re missing.
  • Download the dataset and save it to the /data directory.
  • Normalize the pixel values, and one hot encode labels.
  • Build the convolutional neural network.
  • Train the model.
  • Save the model to TensorFlow Serving format for deployment.

Positioning your image channels can be tricky. Black and white images have a single channel (black), while color images have three channels (red, green, and blue). The library expects data to have a well-defined shape when training a model, describing the batch size, the height and width of images, and the number of channels. TensorFlow specifically requires the input shape formatted as (batch size, width, height, channels), with channels last. Meanwhile, MXNet expects (batch size, channels, width, height), with channels first. To avoid training issues created by using the wrong shape, I add a few lines of code to identify the active setting and reshape the dataset to compensate.
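
Here is a minimal sketch of that check, assuming x_train holds the raw 28×28 grayscale images (the stand-in array below is just for illustration):

import numpy as np
from keras import backend as K  # multi-backend Keras, as used in the training scripts

x_train = np.zeros((60000, 28, 28), dtype='float32')  # stand-in for the Fashion MNIST images

# Reshape according to the channel ordering the active backend expects.
if K.image_data_format() == 'channels_last':
    x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)   # TensorFlow: channels last
else:
    x_train = x_train.reshape(x_train.shape[0], 1, 28, 28)   # Apache MXNet: channels first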

Now check that this code works by running it on a local machine, without using Amazon SageMaker.

$ python mnist_keras_tf_vanilla.py
Using TensorFlow backend.
channels_last
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
<output removed>
Validation loss    : 0.2472819224089384
Validation accuracy: 0.9126

Training and deploying the Keras model

You must make a few minimal changes, but script mode does most of the work for you. Before invoking your code inside the TensorFlow environment, Amazon SageMaker sets four environment variables:

  • SM_NUM_GPUS – The number of GPUs present on the instance.
  • SM_MODEL_DIR – The output location for the model.
  • SM_CHANNEL_TRAINING – The location of the training dataset.
  • SM_CHANNEL_VALIDATION – The location of the validation dataset.

You can use these values in your training code with just a simple modification:

import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--gpu-count', type=int, default=os.environ['SM_NUM_GPUS'])
parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
parser.add_argument('--training', type=str, default=os.environ['SM_CHANNEL_TRAINING'])
parser.add_argument('--validation', type=str, default=os.environ['SM_CHANNEL_VALIDATION'])

What about hyperparameters? No work needed there. Amazon SageMaker passes them as command line arguments to your code.

For more information, see the updated script, mnist_keras_tf.py.

Training on Amazon SageMaker

With your Keras script adapted for script mode, you can now train it on Amazon SageMaker. For more information, see the Fashion MNIST-SageMaker.ipynb notebook.

The process is straightforward:

  • Download the dataset.
  • Define the training and validation channels.
  • Configure the TensorFlow estimator, enabling script mode and passing some hyperparameters.
  • Train, deploy, and predict.
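
As a rough sketch, the estimator configuration might look like the following. The entry point matches the script discussed above, but the instance types, S3 channel paths, and framework version are placeholders rather than the notebook's exact values:

import sagemaker
from sagemaker.tensorflow import TensorFlow

role = sagemaker.get_execution_role()

# Placeholder S3 locations for the two input channels.
training_input_path = 's3://my-bucket/fashion-mnist/training'
validation_input_path = 's3://my-bucket/fashion-mnist/validation'

tf_estimator = TensorFlow(
    entry_point='mnist_keras_tf.py',   # the script adapted for script mode
    role=role,
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    framework_version='1.12',
    py_version='py3',
    script_mode=True,
    hyperparameters={'epochs': 20, 'batch-size': 256, 'learning-rate': 0.01})

tf_estimator.fit({'training': training_input_path, 'validation': validation_input_path})

# Deploy the trained model behind an endpoint and predict.
predictor = tf_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')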

In the training log, you can see how Amazon SageMaker sets the environment variables and how it invokes the script with the three hyperparameters defined in the estimator:

/usr/bin/python mnist_keras_tf.py --batch-size 256 --epochs 20 --learning-rate 0.01 --model_dir s3://sagemaker-eu-west-1-123456789012/sagemaker-tensorflow-scriptmode-2019-05-16-14-11-19-743/model

Because you saved your model in TensorFlow Serving format, Amazon SageMaker can deploy it just like any other TensorFlow model by calling the deploy() API on the estimator. Finally, you can grab some random images from the dataset and predict them with the model you just deployed.

Script mode makes it easy to train and deploy existing TensorFlow code on Amazon SageMaker. Just grab those environment variables, add command line arguments for your hyperparameters, save the model in the right place, and voilà!

Switching to the Apache MXNet backend

As mentioned earlier, Keras also supports MXNet as a backend. Many customers find that it trains faster than TensorFlow, so you may want to give it a shot.

Everything discussed above still applies (script mode, etc.). You only make two changes:

  • Use channels_first.
  • Save the model in MXNet format, creating an extra file (model-shapes.json) required to load the model for prediction.

For more information, see the mnist_keras_mxnet.py training code for MXNet.

You can find the Amazon SageMaker steps in the notebook. Apache MXNet uses virtually the same process I just reviewed, aside from using the MXNet estimator.

Automatic model tuning on Keras

Automatic model tuning is a technique that helps you find the optimal hyperparameters for your training job, that is, the hyperparameters that maximize validation accuracy.

You have access to this feature by default because you’re using the built-in estimators for TensorFlow and MXNet. For the sake of brevity, I only show you how to use it with Keras-TensorFlow, but the process is identical for Keras-MXNet.

First, define the hyperparameters you’d like to tune, and their ranges. How about all of them? Thanks to script mode, your parameters are passed as command line arguments, allowing you to tune anything.

from sagemaker.tuner import IntegerParameter, ContinuousParameter

hyperparameter_ranges = {
    'epochs':        IntegerParameter(20, 100),
    'learning-rate': ContinuousParameter(0.001, 0.1, scaling_type='Logarithmic'),
    'batch-size':    IntegerParameter(32, 1024),
    'dense-layer':   IntegerParameter(128, 1024),
    'dropout':       ContinuousParameter(0.2, 0.6)
}

When configuring automatic model tuning, define which metric to optimize on. Amazon SageMaker supports predefined metrics that it can read automatically from the training log for built-in algorithms (XGBoost, etc.) and frameworks (TensorFlow, MXNet, etc.). That’s not the case for Keras. Instead, you must tell Amazon SageMaker how to grab your metric from the log with a simple regular expression:

objective_metric_name = 'val_acc'
objective_type = 'Maximize'
metric_definitions = [{'Name': 'val_acc', 'Regex': 'val_acc: ([0-9\\.]+)'}]

Then, you define your tuning job, run it, and deploy the best model. No difference here.
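
Sketching that step with the SageMaker Python SDK, and reusing tf_estimator, the objective settings, and the channel paths from the earlier snippets (the job counts below are arbitrary choices, not values from the notebook):

from sagemaker.tuner import HyperparameterTuner

tuner = HyperparameterTuner(
    estimator=tf_estimator,
    objective_metric_name=objective_metric_name,
    objective_type=objective_type,
    hyperparameter_ranges=hyperparameter_ranges,
    metric_definitions=metric_definitions,
    max_jobs=20,           # total training jobs to run
    max_parallel_jobs=2)   # training jobs to run concurrently

tuner.fit({'training': training_input_path, 'validation': validation_input_path})

# Deploy the best model found by the tuning job.
predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')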

Advanced users may insist on using early stopping to avoid overfitting, and they would be right. You can implement this in Keras using a built-in callback (keras.callbacks.EarlyStopping). However, this also creates difficulty in automatic model tuning.

You need Amazon SageMaker to grab the metric for the best epoch, not the last epoch. To overcome this, define a custom callback to log the best validation accuracy. Modify the regular expression accordingly so that Amazon SageMaker can find it in the training log.
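
One hypothetical way to do this (not the notebook's exact callback) is to track the best value yourself, print it on every epoch, and point the tuner's regular expression at that line:

import keras

class BestValAccLogger(keras.callbacks.Callback):
    """Print the best validation accuracy seen so far, so the tuner regex can find it."""
    def __init__(self):
        super(BestValAccLogger, self).__init__()
        self.best_val_acc = 0.0

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.best_val_acc = max(self.best_val_acc, logs.get('val_acc', 0.0))
        print('best_val_acc: %.4f' % self.best_val_acc)

# The metric definition then becomes:
# metric_definitions = [{'Name': 'best_val_acc', 'Regex': 'best_val_acc: ([0-9\\.]+)'}]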

For more information, see the 02-fashion-mnist notebook.

Conclusion

I covered a lot of ground in this post. You now know how to:

  • Train and deploy Keras models on Amazon SageMaker, using both the TensorFlow and the Apache MXNet built-in environments.
  • Use script mode to use your existing Keras code with minimal changes.
  • Perform automatic model tuning on Keras metrics.

Thank you very much for reading. I hope this was useful. I always appreciate comments and feedback, either here or more directly on Twitter.


About the Author

Julien is the Artificial Intelligence & Machine Learning Evangelist for EMEA. He focuses on helping developers and enterprises bring their ideas to life. In his spare time, he reads the works of JRR Tolkien again and again.

from AWS Machine Learning Blog

Schedule an appointment in Office 365 using an Amazon Lex bot

You can use chatbots for automating tasks such as scheduling appointments to improve productivity in enterprise and small business environments. In this blog post, we show how you can build the backend integration for an appointment bot with the calendar software in Microsoft Office 365 Exchange Online. For scheduling appointments, the bot interacts with the end user to find convenient time slots and reserves a slot.

We use the scenario of a retail banking customer booking an appointment using a chatbot powered by Amazon Lex. The bank offers personal banking services and investment banking services and uses Office 365 Exchange Online for email and calendars.

Bank customers interact with the bot using a web browser. Behind the scenes, Amazon Lex uses an AWS Lambda function to connect with the banking agent’s Office 365 calendar. This function looks up the bank agent’s calendar and provides available times to Amazon Lex, so these can be displayed to the end user. After the booking is complete, an invitation is saved on both the agent’s Office 365 calendar and the bank customer’s calendar, as shown in the following graphic:

The following flowchart describes the scenario:

Architecture

To achieve this automation, we use an AWS Lambda function to call Office 365 APIs to fulfill the Amazon Lex intent. The Office 365 secrets are stored securely in AWS Secrets Manager. The bot is integrated with a web application that is hosted on Amazon S3. Amazon Cognito is used to authorize calls to Amazon Lex services from the web application.

To make it easy to build the solution, we have split it into three stages:

  • Stage 1: Create an Office 365 application. In this stage, you create an application in Office 365. The application is necessary to call the Microsoft Graph Calendar APIs for discovering and booking free calendar slots. You need to work with your Azure Active Directory (AAD) admin to complete this stage.
  • Stage 2: Create the Amazon Lex bot for booking appointments. In this stage, you create an Amazon Lex bot with necessary intents, utterances, and slots. You also create an AWS Lambda function that calls Office 365 APIs for fulfilling the intent.
  • Stage 3: Deploy the bot to a website. After completion of stage 1 and stage 2, you have a fully functional bot that discovers and books Office 365 calendar slots.

Let’s start building the solution.

Stage 1: Create an Office 365 application

Follow these steps to create the Office 365 application. If you don’t have an existing Office 365 account for testing, you can use the free trial of Office 365 Business Premium.

Notes:

  1. To complete this stage, you will need to work with your Azure Active Directory administrator.
  2. The Office 365 application can be created using the Microsoft Azure portal or the Application Registration portal. The following steps use the Application Registration portal for creating the Office 365 application.

Log in to https://apps.dev.microsoft.com/ with your Office 365 credentials and click Add an App.

  1. On the Create App Screen, enter the name and choose Create.
  2. On the Registration screen, copy the Application Id and choose Generate New Password in the Application Secrets section.
  3. In the New password generated pop-up window, save the newly generated password in a secure location. Note that this password will be displayed only once.
  4. Click Add Platform and select Web.
  5. In the Web section, enter the URL of the web app where the Amazon Lex chatbot will be hosted. For testing purposes, you can also use a URL on your computer, such as http://localhost/myapp/. Keep a note of this URL.
  6. In the Microsoft Graph Permissions section, choose Add in Application Permissions sub-section.
  7. In the Select Permission pop-up window, select Calendars.ReadWrite permission.
  8. Choose Save to create the application.
  9. Ask your Azure Active Directory (AAD) administrator to give you the tenant ID for your organization. The AAD tenant ID is available on the Azure portal.
  10. Ask your AAD administrator for the user IDs of the agents whose calendars you wish to book. This information is available on the Azure portal.
  11. Admin Consent: Your AAD administrator needs to provide consent for the application to access Office 365 APIs. This is done by constructing the following URL and granting access explicitly.
    URL: https://login.microsoftonline.com/{Tenant_Id}/adminconsent?client_id={Application_Id}&state=12345&redirect_uri={Redirect_URL}
    Substitute suitable values for the following parameters:
    • {Tenant_Id}: AAD tenant ID from step 9
    • {Application_Id}: Application ID from step 2
    • {Redirect_URL}: Redirect URL from step 5

     Your AAD administrator will be prompted for administrator credentials on clicking the URL. On successful authentication the administrator gives explicit access by clicking Accept.

    Notes:

    1. This step can be done only by the AAD administrator.
    2. The administrator might receive a page not found error after approving the application if the redirect URL specified in step 5 is http://localhost/myapp/. This is because the approval page redirects to the configured redirect URL. You can ignore this error and proceed.
  12. To proceed to the next step, a few important parameters need to be saved. Open a text pad and create the following key value pairs. These are the keys that you need to use.

    • Azure Active Directory Id: The AAD administrator has this information, as described in step 9.
    • Application Id: The ID of the Office 365 application that you created, as specified in step 2.
    • Redirect Uri: The redirect URI specified in step 5.
    • Application Password: The Office 365 application password stored in step 3.
    • Investment Agent UserId: The user ID of the investment agent from step 10.
    • Personal Agent UserId: The user ID of the personal banking agent from step 10.

Stage 2: Create the Amazon Lex bot for booking appointments

In this stage, you create the Amazon Lex bot and the AWS Lambda function and store the application passwords in AWS Secrets Manager. After completing this stage, you will have a fully functional bot that is ready for deployment. The code for the Lambda function is available here.

This stage is automated using AWS CloudFormation and accomplishes the following tasks:

  • Creates an Amazon Lex bot with required intents, utterances, and slots.
  • Stores Office 365 secrets in AWS Secrets Manager.
  • Deploys the AWS Lambda function.
  • Creates AWS Identity and Access Management (IAM) roles necessary for the AWS Lambda function.
  • Associates the Lambda function with the Amazon Lex bot.
  • Builds the Amazon Lex bot.

Choose the launch stack button to deploy the solution.

On the AWS CloudFormation console, use the data from Step 12 of Stage 1 as parameters to deploy the solution.

The key aspects of the solution are the Amazon Lex bot and the AWS Lambda function used for fulfilment. Let’s dive deep into these components.

Amazon Lex bot

The Amazon Lex bot consists of intents, utterances, and slots. The following image describes them.

AWS Lambda function

The AWS Lambda function gets inputs from Amazon Lex and calls Office 365 APIs to book appointments. The following are the key AWS Lambda functions and methods.

Function 1 – Get Office 365 bearer token

To call Office 365 APIs, you first need to get the bearer token from Microsoft. The method described in this section gets the bearer token by passing the Office 365 application secrets stored in AWS Secrets Manager.

var reqBody = "client_id=" + ClientId + "&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default&redirect_uri=" + RedirectUri + "&grant_type=client_credentials&client_secret=" + ClientSecret;
    var url = "https://login.microsoftonline.com/" + ADDirectoryId + "/oauth2/v2.0/token";

    Request.post({
        "headers": { "content-type": "application/x-www-form-urlencoded" },
        "url": url,
        "body": reqBody,
    }, (error, response, body) => {
        if (error) {
            return console.log(error);
        }
	 accessToken = JSON.parse(body).access_token;
        if (bookAppointment) {

            BookAppointment(accessToken , //other params);
        }
        else {
            GetDateValues(accessToken , //other params);
        }
    });

Function 2 – Book calendar slots

This function books a slot in the agent’s calendar. The Graph API called is users/{userId}/events. As noted earlier, the access token is necessary for all API calls and is passed as a header.

var postUrl = "https://graph.microsoft.com/v1.0/users/" + userId + "/events";
    var endTime = parseInt(time) + 1;

    var pBody = JSON.stringify({
        "subject": "Customer meeting",
        "start": { "dateTime": date + "T" + time + ":00", "timeZone": timeZone },
        "end": { "dateTime": date + "T" + endTime + ":00:00", "timeZone": timeZone }
    });

    Request.post({
        "headers": {
            "Content-type": "application/json",
            "Authorization": "Bearer " + accesstoken
        },
        "url": postUrl,
        "body": pBody
    }, (error, response, postResBody) => {
        if (error) {
            return console.log(error);
        }

        //Return successful message to customer and complete the intent..

You have completed Stage 2, and you have built the bot. It’s now time to test the bot and deploy it on a website. Use the following steps to test the bot in the Amazon Lex console.

Testing the bot

  1. In the Amazon Lex console, choose the MakeAppointment bot, choose Test bot, and then enter Book an appointment.
  2. Select Personal/ Investment and Choose a Day from the response cards.
  3. Specify a time from the list of slots available.
  4. Confirm the appointment.
  5. Go to the Outlook calendar of the investment/personal banking agent to verify that a slot has been booked on the calendar.

Congratulations! You have successfully deployed and tested a bot that is able to book appointments in Office 365.

Stage 3: Make the bot available on the web  

Now your bot is ready to be deployed. You can choose to deploy it on a mobile application or on messaging platforms like Facebook, Slack, and Twilio by using these instructions. You can also use this blog post, which shows how to integrate your Amazon Lex bot with a web application. It gives you an AWS CloudFormation template to deploy the web application.

Note: To deploy this in production, use Amazon Cognito user pools or use federation to add authentication and authorization to access the website.

Clean up

You can delete the entire CloudFormation stack. Open the AWS CloudFormation console, select the stack, and choose the Delete Stack option on the Actions menu. It will delete all the AWS Lambda functions and secrets stored in AWS Secrets Manager. To delete the bot, go to the Amazon Lex console, select the MakeAppointment bot, and then choose Delete on the Actions menu.

Conclusion

This blog post shows you how to build a bot that schedules appointments with Office 365 and how to deploy it to your website within minutes. This is one of the many ways bots can help you improve productivity and deliver a better customer experience.


About the Author

Rahul Kulkarni is a solutions architect at Amazon Web Services. He works with partners and customers to help them build on AWS.

from AWS Machine Learning Blog

Third time lucky for the winner of AWS DeepRacer League in Chicago and new world records at re:MARS

The AWS DeepRacer League is the world’s first global autonomous racing league, open to anyone. Developers of all skill levels can compete in person at 22 AWS events globally, or online via the AWS DeepRacer console, for a chance to win an expenses-paid trip to re:Invent 2019, where they will race to win the Championship Cup 2019.

AWS Summit Chicago – winners

On May 30th, the AWS DeepRacer league visited the AWS Summit in Chicago, which was the 11th live race of the 2019 season. The top three there were as enthusiastic as ever and eager to put their models to the test on the track.

The Chicago race was extremely close to seeing all of the top three participants break the 10-second barrier. Scott from A Cloud Guru topped the board with 9.35 seconds, closely followed by RoboCalvin at 10.23 seconds and szecsei with 10.79 seconds.

Before Chicago, the winner, Scott from A Cloud Guru, had competed in the very first race in Santa Clara and was knocked from the top spot in the last hour of racing! There he ended up 4th, with a time of 11.75 seconds. He tried again in Atlanta, but couldn’t do better than 8th, recording a time of 12.69 seconds. It was third time lucky for him in Chicago, where he was finally crowned champion and scored his winning ticket to the Championship Cup at re:Invent 2019!

Winners from Chicago: RoboCalvin (2nd – 10.2 seconds), Scott (winner – 9.35 seconds), Szecsei (3rd – 10.7 seconds).

On to Amazon re:MARS, for lightning fast times and multiple world records!

On June 4th, the AWS DeepRacer League moved to the next race in Las Vegas, Nevada, where the inaugural re:MARS conference took place. Re:MARS is a new global AI event focused on Machine Learning, Automation, Robotics, and Space.

Over 2.5 days, AI enthusiasts visited the DeepRacer track to compete for the top prize. It was a competitive race; the world record was broken twice (the previous record, 7.998 seconds, was set in Seoul in April). John (who eventually came second) was first to break it and was in the lead with a time of 7.84 seconds for most of the afternoon, before astronav (Anthony Navarro) knocked him off the top spot in the final few minutes of racing with a winning time of 7.62 seconds. Competition was strong, and developers returned to the tracks multiple times after iterating on their models. Although the times were competitive, they were all cheering for each other and even sharing strategies. It was the fastest race we have seen yet – the top 10 were all under 10 seconds!

The winners from re:MARS: John (2nd – 7.84 seconds), Anthony (1st – 7.62 seconds), Gustav (3rd – 8.23 seconds).

Developers of all skill levels can participate in the League

Participants in the league vary in their ability and experience in machine learning. Re:MARS, not surprisingly, brought some speedy times, but developers there were still able to learn something new and build on their existing skills. Similarly, our winner from Chicago had some background in the field, but our 3rd-place winner had absolutely none. The league is open to all and can help you reach your machine learning goals. The pre-trained models provided at the track make it possible for you to enter the league without building a model, or you can create your own from scratch in one of the workshops held at the event. And new this week is the racing tips page, providing developers with the most up-to-date tools to improve lap times, tips from AWS experts, and opportunities to connect with the DeepRacer community. Check it out today and start sharing your DeepRacer story!

Machine learning developers, with some or no experience before entering the league.

Another triple coming up!

The 2019 season is in the home stretch and during the week of June 10th, 3 more races are taking place. There will be a full round up on all the action next week, as we approach the last few chances on the summit circuit for developers to advance to the finals at re:Invent 2019. Start building today for your chance to win!

from AWS Machine Learning Blog

Creating a recommendation engine using Amazon Personalize

This is a guest blog post by Phil Basford, lead AWS solutions architect, Inawisdom.

At re:Invent 2018, AWS announced Amazon Personalize, which allows you to get your first recommendation engine running quickly, to deliver immediate value to your end user or business. As your understanding increases (or if you are already familiar with data science), you can take advantage of the deep capabilities of Amazon Personalize to improve your recommendations.

Working at Inawisdom, I’ve noticed increasing diversity in the application of machine learning (ML) and deep learning. It seems that nearly every day I work on a new exciting use case, which is great!

The most well-known and successful ML use cases have been retail websites, music streaming apps, and social media platforms. For years, they’ve been embedding ML technologies into the heart of their user experience. They commonly provide each user with an individual personalized recommendation, based on both historic data points and real-time activity (such as click data).

Inawisdom was lucky enough to be given early access to try out Amazon Personalize while it was in preview release. Instead of giving it to data scientists or data engineers, the company gave it to me, an AWS solutions architect. With no prior knowledge, I was able to get a recommendation from Amazon Personalize in just a few hours. This post describes how I did so.

Overview

The most daunting aspect of building a recommendation engine is knowing where to start. This is even more difficult when you have limited or little experience with ML. However, you may be lucky enough to know what you don’t know (and what you should figure out), such as:

  • What data to use.
  • How to structure it.
  • What framework/recipe is needed.
  • How to train it with data.
  • How to know if it’s accurate.
  • How to use it within a real-time application.

Basically, Amazon Personalize provides a structure and supports you as it guides you through these topics. Or, if you’re a data scientist, it can act as an accelerator for your own implementation.

Creating an Amazon Personalize recommendation solution

You can create your own custom Amazon Personalize recommendation solution in a few hours. Work through the process in the following diagram.

Creating dataset groups and datasets

When you open Amazon Personalize, the first step is to create a dataset group, which can be created from loading historic data or from data gathered from real-time events. In my evaluation of Amazon Personalize at Inawisdom, I used only historic data.

When using historic data, each dataset is imported from a .csv file located on Amazon S3, and each dataset group can contain three datasets:

  • Users
  • Items
  • Interactions

For the purpose of this quick example, I only prepared the Interactions data file, because it’s required and the most important.

The Interactions dataset contains a many-to-many relationship (in old relational database terms) that maps USER_ID to ITEM_ID. Interactions can be enriched with optional User and Item datasets that contain additional data linked by their IDs. For example, for a film-streaming website, it can be valuable to know the age classification of a film and the age of the viewer and understand which films they watch.

When you have all your data files ready on S3, import them into your dataset group as datasets. To do this, define a schema for the data in the Apache Avro format for each dataset, which allows Amazon Personalize to understand the format of your data. Here is an example of a schema for Interactions:

{
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "USER_ID",
            "type": "string"
        },
        {
            "name": "ITEM_ID",
            "type": "string"
        },
        {
            "name": "TIMESTAMP",
            "type": "long"
        }
    ],
    "version": "1.0"
}
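
If you prefer the API to the console, the same import steps can be sketched with boto3. The names, ARNs, and bucket below are placeholders, and the dataset group is assumed to exist already:

import boto3

personalize = boto3.client('personalize')

# Placeholders for illustration; interactions_schema.json holds the Avro schema shown above.
dataset_group_arn = 'arn:aws:personalize:us-east-1:123456789012:dataset-group/demo'
with open('interactions_schema.json') as f:
    interactions_schema = f.read()

schema_response = personalize.create_schema(
    name='demo-interactions-schema',
    schema=interactions_schema)

dataset_response = personalize.create_dataset(
    name='demo-interactions',
    schemaArn=schema_response['schemaArn'],
    datasetGroupArn=dataset_group_arn,
    datasetType='Interactions')

# Import the .csv file from S3 into the dataset.
personalize.create_dataset_import_job(
    jobName='demo-interactions-import',
    datasetArn=dataset_response['datasetArn'],
    dataSource={'dataLocation': 's3://my-bucket/interactions.csv'},
    roleArn='arn:aws:iam::123456789012:role/PersonalizeS3AccessRole')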

In evaluating Amazon Personalize, you may find that you spend more time at this stage than the other stages. This is important and reflects that the quality of your data is the biggest factor in producing a usable and accurate model. This is where Amazon Personalize has an immediate effect—it’s both helping you and accelerating your progress.

Don’t worry about the format of the data; just make sure the key fields are identified. Don’t get caught up in worrying about what model to use or the data it needs. Your focus is just on making your data accessible. If you’re just starting out in ML, you can get a basic dataset group working quickly with minimal data. If you’re a data scientist, you will probably come back to this stage again to improve and add more data points (data features).

Creating a solution

When you have your dataset group with data in it, the next step is to create a solution. A solution covers two areas—selecting the model (recipe) and then using your data to train it. You have recipes and a popularity baseline from which to choose. Some of the recipes on offer include the following:

  • Personalized reranking (search)
  • SIMS—related items
  • HRNN (Coldstart, Popularity-Baseline, and Metadata)—user personalization

If you’re not a data scientist, don’t worry. You can use AutoML, which runs your data against each of the available recipes. Amazon Personalize then judges the best recipe based on the accuracy results produced. This also covers changing some of the settings to get better results (hyperparameters). The following image shows a solution with the metric section at the bottom showing accuracy:

Amazon Personalize allows you to get something up and running quickly, even if you’re not a data scientist. This includes not just model selection and training, but restructuring the data into what each recipe requires and hiding the hassle of spinning up servers to run training jobs. If you are a data scientist, this is also good news, because you can take full control of the process.
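
For those working through the API rather than the console, creating an AutoML solution and training it can be sketched as follows (the dataset group ARN is a placeholder):

import boto3

personalize = boto3.client('personalize')

# performAutoML asks Amazon Personalize to evaluate the candidate recipes
# and pick the best one for this data.
solution = personalize.create_solution(
    name='demo-recommendation-solution',
    datasetGroupArn='arn:aws:personalize:us-east-1:123456789012:dataset-group/demo',
    performAutoML=True)

# Training happens when a solution version is created.
solution_version = personalize.create_solution_version(
    solutionArn=solution['solutionArn'])
print(solution_version['solutionVersionArn'])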

Creating a campaign

After you have a solution version (a confirmed recipe and trained artifacts), it’s time to put it into action. This isn’t easy, and there is a lot to consider in running ML at scale.

To get you started, Amazon Personalize allows you to deploy a campaign (an inference engine for your recipe and the trained artifacts) as a PaaS. The campaign returns a REST API that you can use to produce recommendations. Here is an example of calling your API from Python:

import boto3

personalize_runtime = boto3.client('personalize-runtime')

get_recommendations_response = personalize_runtime.get_recommendations(
    campaignArn = campaign_arn,
    userId = str(user_id),
    itemId = str(item_id)
)

item_list = get_recommendations_response['itemList']

The results:

Recommendations: [
  "Full Monty, The (1997)",
  "Chasing Amy (1997)",
  "Fifth Element, The (1997)",
  "Apt Pupil (1998)",
  "Grosse Pointe Blank (1997)",
  "My Best Friend's Wedding (1997)",
  "Leaving Las Vegas (1995)",
  "Contact (1997)",
  "Waiting for Guffman (1996)",
  "Donnie Brasco (1997)",
  "Fargo (1996)",
  "Liar (1997)",
  "Titanic (1997)",
  "English Patient, The (1996)",
  "Willy Wonka and the Chocolate Factory (1971)",
  "Chasing Amy (1997)",
  "Star Trek: First Contact (1996)",
  "Jerry Maguire (1996)",
  "Last Supper, The (1995)",
  "Hercules (1997)",
  "Kolya (1996)",
  "Toy Story (1995)",
  "Private Parts (1997)",
  "Citizen Ruth (1996)",
  "Boogie Nights (1997)"
]

Conclusion

Amazon Personalize is a great addition to the AWS set of machine learning services. Its two-track approach allows you to quickly and efficiently get your first recommendation engine running and deliver immediate value to your end user or business. Then you can harness the depth and raw power of Amazon Personalize, which will keep you coming back to improve your recommendations.

Amazon Personalize puts a recommendation engine in the hands of every company and is now available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), and EU (Ireland). Well done, AWS!

from AWS Machine Learning Blog

Build your own real-time voice translator application with AWS services

Just imagine—you say something in one language, and a tool immediately translates it to another language. Wouldn’t it be even cooler to build your own real-time voice translator application using AWS services? It would be similar to the Babel fish in The Hitchhiker’s Guide to the Galaxy:

“The Babel fish is small, yellow, leech-like—and probably the oddest thing in the universe… If you stick one in your ear, you can instantly understand anything said to you in any form of language.”

Douglas Adams, The Hitchhiker’s Guide to the Galaxy

In this post, I show how you can connect multiple services in AWS to build your own application that works a bit like the Babel fish.

About this blog post
Time to read: 15 minutes
Time to complete: 30 minutes
Cost to complete: Under $1
Learning level: Intermediate (200)
AWS services: Amazon Polly, Amazon Transcribe, Amazon Translate, AWS Lambda, Amazon CloudFront, Amazon S3

Overview

The heart of this application consists of an AWS Lambda function that connects the following three AI language services:

  • Amazon Transcribe — This fully managed and continuously trained automatic speech recognition (ASR) service takes in audio and automatically generates accurate transcripts. Amazon Transcribe supports real-time transcriptions, which help achieve near real-time conversion.
  • Amazon Translate — This neural machine-translation service delivers fast, high-quality, and affordable language translation.
  • Amazon Polly — This text-to-speech service uses advanced deep learning technologies to synthesize speech that sounds like a human voice.

A diagrammatic representation of how these three services relate is shown in the following illustration.

To make this process a bit easier, you can use an AWS CloudFormation template, which initiates the application. The following diagram shows all the components of this process, which I later describe in detail.

Here’s the flow of service interactions:

  1. Allow access to your site with Amazon CloudFront, which allows you to get an HTTPS link to your page and which is required by some browsers to record audio.
  2. Host your page on Amazon S3, which simplifies the whole solution. This is also the place to save the input audio file recorded in the browser.
  3. Gain secure access to S3 and Lambda from the browser with Amazon Cognito.
  4. Save the input audio file on S3 and invoke a Lambda function. In the input of the function, provide the name of the audio file (that you saved earlier in Amazon S3), and pass the source and target language parameters.
  5. Convert audio into text with Amazon Transcribe.
  6. Translate the transcribed text from one language to another with Amazon Translate.
  7. Convert the new translated text into speech with Amazon Polly.
  8. Save the output audio file back to S3 with the Lambda function, and then return the file name to your page (JavaScript invocation). You could return the audio file itself, but for simplicity, save it on S3 and just return its name.
  9. Automatically play the translated audio to the user.
  10. Accelerate the speed of delivering the file with CloudFront.
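
The Lambda function in the accompanying repo is written in JavaScript, but the chain of service calls in steps 5 through 8 can be sketched in Python with boto3 as follows. The bucket, file names, language codes, and voice are placeholders, and the sketch uses a batch transcription job rather than the real-time streaming API for brevity:

import time
import boto3

transcribe = boto3.client('transcribe')
translate = boto3.client('translate')
polly = boto3.client('polly')
s3 = boto3.client('s3')

def translate_audio(bucket, key, source_lang='en-US', target_lang='es'):
    # 1. Speech to text with Amazon Transcribe.
    job_name = 'voice-translation-' + str(int(time.time()))
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={'MediaFileUri': 's3://{}/{}'.format(bucket, key)},
        MediaFormat='wav',
        LanguageCode=source_lang)
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        if job['TranscriptionJob']['TranscriptionJobStatus'] in ('COMPLETED', 'FAILED'):
            break
        time.sleep(2)
    # The transcript itself lives at a URI inside the job result; fetching and
    # parsing that JSON is omitted here for brevity.
    text = '...transcript text...'

    # 2. Text to text with Amazon Translate.
    translated = translate.translate_text(
        Text=text,
        SourceLanguageCode=source_lang.split('-')[0],
        TargetLanguageCode=target_lang)['TranslatedText']

    # 3. Text to speech with Amazon Polly.
    speech = polly.synthesize_speech(
        Text=translated, OutputFormat='mp3', VoiceId='Lucia')

    # 4. Save the output audio back to S3 so the web page can play it.
    out_key = key.replace('.wav', '-translated.mp3')
    s3.put_object(Bucket=bucket, Key=out_key, Body=speech['AudioStream'].read())
    return out_key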

Getting started

As I mentioned earlier, I created an AWS CloudFormation template to create all the necessary resources.

  1. Sign into the console, and then choose Launch Stack, which launches a CloudFormation stack in your AWS account. The stack launches in the US-East-1 (N. Virginia) Region.
  2. Go through the wizard and create the stack by accepting the default values. On the last step of the wizard, acknowledge that CloudFormation creates IAM resources. After 10–15 minutes, the stack has been created.
  3. In the Outputs section of the stack shown in the following screenshot, you find the following four parameters:
    • VoiceTranslatorLink—The link to your webpage.
    • VoiceTranslatorLambda—The name of the Lambda function to be invoked from your web application.
    • VoiceTranslatorBucket—The S3 bucket where you host your application, and where audio files are stored.
    • IdentityPoolIdOutput—The identity pool ID, which allows you to securely connect to S3 and Lambda.
  4. Download the following zip file and then unzip it. There are three files inside.
  5. Open the downloaded file named voice-translator-config.js, and edit it based on the four output values in your stack (Step 3). It should then look similar to the following.
    var bucketName = 'voicetranslatorapp-voicetranslat……';
    var IdentityPoolId = 'us-east-1:535…….';
    var lambdaFunction = 'VoiceTranslatorApp-VoiceTranslatorLambda-….';

  6. In the S3 console, open the S3 bucket (created by the CloudFormation template). Upload all three files, including the modified version of voice-translator-config.js.

Testing

Open your application from the link provided in Step 3. In the Voice Translator App interface, perform the following steps to test the process:

  1. Choose a source language.
  2. Choose a target language.
  3. Think of something to say, choose START RECORDING, and start speaking.
  4. When you finish speaking, choose STOP RECORDING and wait a couple of seconds.

If everything worked fine, the application should automatically play the audio in the target language.

Conclusion

As you can see, it takes less than an hour to create your own voice translation application based on the existing, integrated AI language services in AWS. Plus, the entire solution is serverless, so there is no infrastructure to manage.

This application currently supports two input languages: US English and US Spanish. However, Amazon Transcribe recently started supporting real-time speech-to-text in British English, French, and Canadian French. Feel free to try to extend your application by using those languages.

The source code of the app (including the Lambda function, written in JavaScript) is available in the voice-translator-app GitHub repo. To record your voice in the browser, I used the recorder.js script by Matt Diamond.


About the Author

Tomasz Stachlewski is a Solutions Architect at AWS, where he helps companies of all sizes (from startups to enterprises) in their cloud journey. He is a big believer in innovative technology, such as serverless architecture, which allows companies to accelerate their digital transformation.

from AWS Machine Learning Blog

AWS DeepLens (2019 edition) zooms out to more countries around the world

AWS DeepLens (2019 edition) zooms out to more countries around the world

At re:Invent 2017, we launched the world’s first machine learning (ML)–enabled video camera, AWS DeepLens. This put ML in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand ML skills. With AWS DeepLens, it is possible to create useful ML projects without a PhD in computer science or math, and anyone with a decent development background can start using it.

Today, I’m pleased to announce that AWS DeepLens (2019 edition) is now available for pre-order for developers in Canada, Europe, and Japan on the following websites:

  • Amazon.ca
  • Amazon.de
  • Amazon.es
  • Amazon.fr
  • Amazon.it
  • Amazon.co.jp
  • Amazon.co.uk

We have made significant enhancements to the device to further improve your experience:

  • An optimized onboarding process that allows you to get started with ML quickly.
  • Support for the Intel RealSense depth sensor, which allows you to build advanced ML models with higher accuracy. You can use depth data in addition to 2-D image inputs.
  • Support for the Intel Movidius Neural Compute Stick for those who want to achieve additional AI performance using external Intel accelerators.

The 2019 edition comes integrated with SageMaker Neo, which lets customers train models once and run them with up to 2x better performance.

In addition to device improvements, we have also invested significantly in content development. We’ve included guided instructions for building ML applications such as worker safety monitoring, sentiment analysis, and detecting who drinks the most coffee. We’re making ML available to all who want to learn and develop their skills while building fun applications.

Over the last year, we have had many requests from customers in Canada, Europe, and Japan asking when we would launch AWS DeepLens in their Region. We are happy to deliver on those requests with today’s news.

“We welcome the general availability of AWS DeepLens in Japan market. It will excite our developer community and developers in Japan to accelerate the adoption of deep learning technologies” said Daisuke Nagao and Ryo Nakamaru, co-leads for Japan AWS User Group AI branch (JAWS-UG AI).

ML in the hands of everybody

Amazon and AWS have a long history with ML and DL tools around the world. In Europe, we opened an ML Development Center in Berlin back in 2013, where developers and engineers support our global ML and DL services such as Amazon SageMaker. This is in addition to the many customers, from startups to enterprises to the public sector, who are using our ML and DL tools in their Regions.

ML and DL have been a big part of our heritage over the last 20 years, and the work we do around the world is helping to democratize these technologies, making them accessible to everyone.

After we announced the general availability of AWS DeepLens in the US in June last year, thousands of devices shipped.  We have seen many interesting and inspirational applications. Two that we’re excited to highlight are the DeepLens Educating Entertainer, or “Dee” for short, and SafeHaven.

Dee—DeepLens Educating Entertainer

Created by Matthew Clark from Manchester, Dee is an example of how image recognition can be used to make a fun, interactive, and educational game for young or less able children.

The AWS DeepLens device asks children to answer questions by showing the device a picture of the answer. For example, when the device asks, “What has wheels?”, the child is expected to show it an appropriate picture, such as a bicycle or a bus. Correct answers are praised, and incorrect ones receive hints on how to get it right. Experiences like these help children learn through interaction and positive reinforcement.
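
To make that interaction concrete, here is a hypothetical sketch of the answer-checking step once the on-device model has returned its labels and confidences. The questions, accepted labels, and confidence threshold are made-up examples, not Matthew’s actual implementation.

# Hypothetical sketch of the game logic described above, not Matthew's actual code.
# 'detections' stands in for the (label, confidence) pairs produced by the object
# detection model running on the AWS DeepLens device.
QUESTIONS = {
    'What has wheels?': {'bicycle', 'bus', 'car', 'motorbike'},
    'What can fly?': {'bird', 'aeroplane'},
}

def check_answer(question, detections, confidence_threshold=0.5):
    """Praise a correct picture, or offer a hint when the answer is wrong."""
    accepted = QUESTIONS[question]
    seen = {label for label, confidence in detections if confidence >= confidence_threshold}
    if seen & accepted:
        return 'Well done, that is right!'
    return 'Not quite, have another look around and try again.'

# Example: the model saw a bicycle (80% confidence) and a person (60% confidence).
print(check_answer('What has wheels?', [('bicycle', 0.8), ('person', 0.6)]))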

Young children, and some older ones with special learning needs, can struggle to interact with electronic devices. They may not be able to read a tablet screen, use a computer keyboard, or speak clearly enough for voice recognition. With video recognition, this can change. Technology can now better understand the child’s world and observe when they do something, such as picking up an object or performing an action. This leads to many new ways of interaction.

AWS DeepLens is particularly appealing for children’s interactions because it can run its deep learning (DL) models offline. This means that the device can work anywhere, with no additional costs.

Before building Dee, Matthew had no experience working with ML technologies. However, after receiving an AWS DeepLens device at AWS re:Invent 2017, he soon got up to speed with DL concepts.  For more details, see Second Place Winner: Dee—DeepLens Educating Entertainer.

SafeHaven

SafeHaven is another AWS DeepLens application that came from developers getting an AWS DeepLens device at re:Invent 2017.

Built by Nathan Stone and Paul Miller from Ipswich, UK, SafeHaven is designed to protect vulnerable people by enabling them to identify “who is at the door?” using an Alexa Skill. AWS DeepLens acts as a sentry on the doorstep, storing the faces of every visitor. When a visitor is “recognized,” their name is stored in a DynamoDB table, ready to be retrieved by an Alexa Skill. Unknown visitors trigger SMS or email alerts to relatives or carers via an SNS subscription.
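
The store-and-alert pattern could look roughly like the following sketch, written against assumed resource names (a Visitors DynamoDB table and an SNS topic); it illustrates the flow rather than the authors’ implementation.

# Illustrative sketch of the SafeHaven pattern described above, not the authors' code.
# The table name and topic ARN are hypothetical placeholders.
import datetime

import boto3

visitors_table = boto3.resource('dynamodb').Table('Visitors')
sns = boto3.client('sns')
ALERT_TOPIC_ARN = 'arn:aws:sns:eu-west-1:123456789012:safehaven-alerts'

def record_visitor(recognized_name=None):
    timestamp = datetime.datetime.utcnow().isoformat()
    if recognized_name:
        # Known face: store it so the Alexa skill can answer "who is at the door?"
        visitors_table.put_item(Item={'VisitorName': recognized_name, 'SeenAt': timestamp})
    else:
        # Unknown face: alert relatives or carers by SMS/email through the SNS subscription.
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Message='Unrecognized visitor at the door at {}.'.format(timestamp),
        )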

This has huge potential as an application for private homes, hospitals, and care facilities, where the door should only be opened to recognized visitors. For more details, see Third Place Winner: SafeHaven: Real-Time Reassurance. Re:invented.

Other applications

In Canada, a large Canadian discount retailer used AWS DeepLens as part of a complex loss-prevention test pilot for its LATAM operations. A Calgary-based oil company tested augmenting the sign-in process at its warehouse facilities with facial recognition.

One of the world’s largest automotive manufacturers, headquartered in Canada, is building a use case at one of its plants to use AWS DeepLens for predictive maintenance as well as image classification. Additionally, an internal PoC for manufacturing has been built to show how AWS DeepLens could be used to track who takes and returns tools from a shop, and when.

The Northwestern University School of Professional Studies is developing a computer vision course for its data science graduate students, using AWS DeepLens devices provided by Amazon. Other universities have expressed interest in using AWS DeepLens in courses across their curricula, such as artificial intelligence, information systems, and health analytics.

Summary

These are just a few examples, and we expect to see many more when we start shipping devices around the world. If you have an AWS DeepLens project that you think is cool and you would like us to check out, submit it to the AWS DeepLens Project Outline.

We look forward to seeing even more creative applications come from the launch in Europe, so check the AWS DeepLens Community Projects page often.


About the Authors

Rick Mitchell is a Senior Product Marketing Manager with AWS AI. His goal is to help aspiring developers to get started with Artificial Intelligence. For fun outside of work, Rick likes to travel with his wife and two children, barbecue, and run outdoors.

from AWS Machine Learning Blog

Amazon SageMaker Neo Helps Detect Objects and Classify Images on Edge Devices

Amazon SageMaker Neo Helps Detect Objects and Classify Images on Edge Devices

Nomura Research Institute (NRI) is a leading global provider of system solutions and consulting services in Japan and an APN Premium Consulting Partner. NRI is increasingly getting requests to help customers optimize inventory and production plans, reduce costs, and create better customer experiences. To address these demands, NRI is turning to new sources of data, specifically videos and photos, to help customers better run their businesses.

For example, NRI is helping Japanese convenience stores use data from in-store cameras to monitor inventory. And, NRI is helping Japanese airports to optimize people flow based on traffic patterns observed inside the airport.

In these scenarios, NRI needed to create machine learning models that detect objects: for retailers, goods (drinks, snacks, paper products, and so on) and people leaving the store; for airports, commuters.

NRI turned to Acer and AWS to meet these goals. Acer aiSage is an edge computing device that uses computer vision and AI to provide real-time insights. It makes use of Amazon SageMaker Neo, a service that lets you train object detection and image classification models once and run them anywhere, and AWS IoT Greengrass, a service that brings local compute, messaging, data caching, sync, and machine learning inference capabilities to edge devices.

“One of our customers, Yamaha Motor Co., Ltd., is evaluating AI-based store analysis and smart store experience.” said Shigekazu Ohmoto, Senior Managing Director, NRI. “We knew that we had to build several computer vision models for such a solution. We built our models using MXNet GluonCV, compiled the models with Amazon SageMaker Neo, and then deployed the models on Acer’s aiSage through AWS IoT Greengrass.  Amazon SageMaker Neo reduced the footprint of the model by abstracting out the ML framework and optimized it to run faster on our edge devices. We leverage full AWS technology stacks including edge side for our AI solutions.”

Here is how object detection and image classification work at NRI.

Amazon SageMaker is used to train, build, and deploy the machine learning model. Amazon SageMaker Neo makes it possible to train machine learning models once and run them anywhere in the cloud and at the edge.

Amazon SageMaker Neo optimizes models to run up to twice as fast, with less than a tenth of the memory footprint, with no loss in accuracy. You start with a machine learning model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then, choose your target hardware platform. With a single click, Amazon SageMaker Neo compiles the trained model into an executable.

The compiler uses a neural network to discover and apply all of the specific performance optimizations to make your model run most efficiently on the target hardware platform. You can deploy the model to start making predictions in the cloud or at the edge.
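
As a rough illustration of that workflow, the following boto3 sketch starts a Neo compilation job for a trained MXNet model. The job name, S3 paths, IAM role, input shape, and target device are placeholders you would replace with your own.

# Rough boto3 sketch of starting a Neo compilation job for a trained MXNet model.
# The job name, S3 paths, IAM role, input shape, and target device are placeholders.
import boto3

sm = boto3.client('sagemaker')

sm.create_compilation_job(
    CompilationJobName='gluoncv-detector-neo',
    RoleArn='arn:aws:iam::123456789012:role/SageMakerExecutionRole',
    InputConfig={
        'S3Uri': 's3://my-bucket/models/detector/model.tar.gz',
        'DataInputConfig': '{"data": [1, 3, 512, 512]}',  # input tensor name and shape
        'Framework': 'MXNET',
    },
    OutputConfig={
        'S3OutputLocation': 's3://my-bucket/models/detector-compiled/',
        'TargetDevice': 'ml_c5',  # or an edge target supported by Neo
    },
    StoppingCondition={'MaxRuntimeInSeconds': 900},
)

# The compiled artifact lands in S3OutputLocation once the job status is COMPLETED.
status = sm.describe_compilation_job(CompilationJobName='gluoncv-detector-neo')
print(status['CompilationJobStatus'])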

At launch, Amazon SageMaker Neo was available in four AWS Regions: US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Seoul). As of May 2019, it is also available in Asia Pacific (Tokyo).

To learn more about Amazon SageMaker Neo, see the Amazon SageMaker Neo webpage.


About the Authors

Satadal Bhattacharjee is Principal Product Manager with AWS AI. He leads the Machine Learning Engine PM team working on projects such as SageMaker Neo, AWS Deep Learning AMIs, and AWS Elastic Inference. For fun outside work, Satadal loves to hike, coach robotics teams, and spend time with his family and friends.

Kimberly Madia is a Principal Product Marketing Manager with AWS Machine Learning. Her goal is to make it easy for customers to build, train, and deploy machine learning models using Amazon SageMaker. For fun outside work, Kimberly likes to cook, read, and run on the San Francisco Bay Trail.

from AWS Machine Learning Blog

Amazon SageMaker Neo Enables Pioneer’s Machine Learning in Cars

Amazon SageMaker Neo Enables Pioneer’s Machine Learning in Cars

Pioneer Corp is a Japanese multinational corporation specializing in digital entertainment products. Pioneer wanted to help its customers check road and traffic conditions through in-car navigation systems, so it developed a real-time image-sharing service to help drivers navigate. The solution analyzes photos, diverts traffic, and sends alerts based on the observed conditions. Because the pictures are of public roadways, Pioneer also had to ensure privacy by blurring out faces and license plate numbers.

Pioneer built their image-sharing service using Amazon SageMaker Neo. Amazon SageMaker is a fully managed service that enables developers to build, train, and deploy machine learning models with much less effort and at lower cost. Amazon SageMaker Neo is a service that allows developers to train machine learning models once and run them anywhere in the cloud and at the edge. Amazon SageMaker Neo optimizes models to run up to twice as fast, with less than a tenth of the memory footprint, with no loss in accuracy.

You start with an ML model built using MXNet, TensorFlow, PyTorch, or XGBoost and trained using Amazon SageMaker. Then, choose your target hardware platform such as M4/M5/C4/C5 instances or edge devices. With a single click, Amazon SageMaker Neo compiles the trained model into an executable.

The compiler uses a neural network to discover and apply all of the specific performance optimizations to make your model run most efficiently on the target hardware platform. You can deploy the model to start making predictions in the cloud or at the edge.

At launch, Amazon SageMaker Neo was available in four AWS Regions: US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Seoul). As of May 2019, it is also available in Asia Pacific (Tokyo).

Pioneer developed a machine learning model for real-time image detection and classification using data from cameras in cars. They detect many different kinds of images, such as license plates, people, street traffic, and road signs. The in-car cameras upload data to the cloud and run inference using Amazon SageMaker Neo. The results are sent back to the cars so drivers can be informed on the road.
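
As a sketch of that cloud-side inference step, the snippet below sends a single camera frame to a SageMaker-hosted endpoint and reads back the detections. The endpoint name, image path, and response format are assumptions for illustration, not Pioneer’s actual setup.

# Illustrative sketch of the cloud-side inference step: send one camera frame to a
# SageMaker endpoint and read back the detections. The endpoint name, image path,
# and response format are assumptions, not Pioneer's actual setup.
import json

import boto3

runtime = boto3.client('sagemaker-runtime')

with open('frame.jpg', 'rb') as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName='roadside-detector',       # placeholder endpoint name
    ContentType='application/x-image',
    Body=payload,
)
detections = json.loads(response['Body'].read())
print(detections)  # for example bounding boxes for license plates, people, or road signs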

Here’s how it works.

“We decided to use Amazon SageMaker, a fully managed service for machine learning,” said Ryunosuke Yamauchi, an AI Engineer at Pioneer. “We needed a fully managed service because we didn’t want to spend time managing GPU instances or integrating different applications. In addition, Amazon SageMaker offers hyperparameter optimization, which eliminates the need for time-consuming, manual hyperparameter tuning. Also, we choose Amazon SageMaker because it supports all leading frameworks such as MXNet GluonCV. That’s our preferred framework because it provides state-of-the-art pre-trained object detection models such as Yolo V3.”

To learn more about Amazon SageMaker Neo, see the Amazon SageMaker Neo webpage.


About the Authors

Satadal Bhattacharjee is Principal Product Manager with AWS AI. He leads the Machine Learning Engine PM team working on projects such as SageMaker Neo, AWS Deep Learning AMIs, and AWS Elastic Inference. For fun outside work, Satadal loves to hike, coach robotics teams, and spend time with his family and friends.

Kimberly Madia is a Principal Product Marketing Manager with AWS Machine Learning. Her goal is to make it easy for customers to build, train, and deploy machine learning models using Amazon SageMaker. For fun outside work, Kimberly likes to cook, read, and run on the San Francisco Bay Trail.

from AWS Machine Learning Blog