Tag: AWS Mobile Blog

Developing and testing GraphQL APIs, Storage and Functions with Amplify Framework Local Mocking features


This article was written by Ed Lima, Sr. Solutions Architect, AWS and Sean Grove, OneGraph

In fullstack application development, iteration is king. At AWS, we’re constantly identifying steps in the process of shipping a product that slow iteration or sap developer productivity and happiness, and working to shorten them. To that end, we’ve provided cloud APIs, serverless functions, databases, and storage capabilities so that the final steps of deploying, scaling, and monitoring applications are as instantaneous as possible.

Today, we’re taking another step further in shortening feedback cycles by addressing a critical stage in the application cycle: local development.

Working closely with developers, we’ve seen the process of delivering new product features to production:

  1. Prototyping changes locally
  2. Committing and pushing changes to cloud resources
  3. Mocking/testing/debugging the updates
  4. Returning to step 1 if there are any fixes to incorporate

In some cases, this can be an incredibly tight loop, executed dozens or hundreds of times by a developer, before new features are ready to ship. It can be a tedious process, and tedious processes make unhappy developers.

AWS AppSync gives developers easy and convenient access to exactly the right data they need at global scale via its flexible GraphQL APIs. These APIs, among other data sources, can be backed by Amazon DynamoDB for a scalable key-value and document database that delivers single-digit-millisecond performance at any scale. Applications can also use Amazon Simple Storage Service (S3) for an object storage service that offers industry-leading scalability, data availability, security, and performance. On top of that, developers can run their code without provisioning or managing servers with AWS Lambda. All of these services live in the cloud, which is great for production: highly available, fault tolerant, scaling to meet any demand, running in multiple Availability Zones in different AWS Regions around the planet.

In order to optimize and streamline the feedback loop between local and cloud resources earlier in the development process, we talked to many customers to understand their requirements for local development:

  • NoSQL data access via a robust GraphQL API
  • Serverless functions triggered for customized business logic from any GraphQL type or operation
  • Developer tooling, including a GraphiQL IDE fully pre-integrated with open-source plugins such as those from OneGraph, customized for your AppSync API
  • Simulated object storage
  • Instantaneous feedback on changes
  • Debugging GraphQL resolver mapping templates written in Velocity Template Language (VTL)
  • Ability to use custom directives and code generation with the GraphQL Transformer
  • Ability to mock JWT tokens from Amazon Cognito User Pools to test authorization rules locally
  • Support for web and mobile platforms (iOS and Android)
  • The ability to work offline

With the above customer requirements in mind, we’re happy to launch the new Local Mocking and Testing features in the Amplify Framework.

As a developer using Amplify, you’ll immediately see the changes you make locally to your application, speeding up your development process and removing interruptions to your workflow. No waiting for cloud services to be deployed – just develop, test, debug, model your queries, and generate code locally until you’re happy with your product, then deploy your changes to the scalable, highly available backend services in the cloud as you’ve always done.

Getting Started

To get started, install the latest version of the Amplify CLI by following these steps, then follow along with our example below. Use a boilerplate React app created with create-react-app and initialize an Amplify project in the app folder with the default options by executing the amplify init command. Note that the local mocking and testing features in the Amplify CLI also work with iOS and Android apps.

Next, we add a GraphQL API using the command amplify add api with API Key authorization and the sample schema for single object with fields (Todo):

When defining a GraphQL schema you can use directives from the GraphQL Transformer in local mocking as well as local code generation from the schema for GraphQL operations. The following directives are currently supported in the local mocking environment:

  • @model
  • @auth
  • @key
  • @connection
  • @versioned
  • @function

The sample GraphQL schema generated by the Amplify CLI has a single “Todo” type annotated with @model, which means the GraphQL Transformer will automatically create a GraphQL API with an extended schema containing queries, mutations, and subscriptions, along with built-in CRUDL logic backed by an automatically deployed DynamoDB table. It essentially creates a fully fledged API backend in seconds:

type Todo @model {
  id: ID!
  name: String!
  description: String
}

At this point, your API is ready for some local development! Fire up your local AppSync and DynamoDB resources by executing either amplify mock, to test all supported local resources, or amplify mock api, to test the GraphQL API specifically. Code for queries, mutations, and subscriptions will be automatically generated and validated, and a local AppSync mock endpoint will start up:
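Once the mock endpoint is running, you can exercise the generated API right away. For example, operations like the following (the createTodo and listTodos operations come from the schema the GraphQL Transformer generates for the Todo type) can be pasted into the local GraphiQL IDE:

```graphql
mutation CreateTodo {
  createTodo(input: { name: "Try local mocking", description: "No deploy needed" }) {
    id
    name
    description
  }
}

query ListTodos {
  listTodos {
    items {
      id
      name
      description
    }
  }
}
```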

Collaborating with the open source community is always special; it has allowed us to improve and better understand the use cases that customers want to tackle with local mocking and testing. In order to move fast and ensure that we were releasing a valuable feature, we worked for several months with a few community members. We want to give special thanks to Conduit Ventures for creating the AWS-Utils package, and for allowing us to fork it for this project and integrate it with Amplify’s new local mocking environment.

Prototyping API calls with an enhanced local GraphiQL IDE

The mock endpoint runs on localhost and simulates an AWS AppSync API connected to a DynamoDB table (defined at the GraphQL schema with the @model directive), all implemented locally on your developer machine.

We also ship tools to explore and interact with your GraphQL API locally. In particular, the terminal will print out a link to an instance of the GraphiQL IDE, where you can introspect the schema types, lookup documentation on any field or type, test API calls, and prototype your queries and mutations:

We’ve enhanced the stock GraphiQL experience with an open-source plugin that OneGraph created to make your developer experience even nicer. In the Amplify GraphiQL Explorer, you’ll notice a UI generated for your specific GraphQL API that lets you quickly and easily explore and build GraphQL queries, mutations, or even subscriptions simply by selecting checkboxes. You can create, delete, update, read, or list data from your local DynamoDB tables in seconds.

With this new tooling, you can go from exploring your new GraphQL APIs locally to a full running application in a few minutes. Amplify is leveraging the power of open source to integrate the new local mocking environment with tools such as AWS-Utils and the GraphiQL Explorer to streamline the development experience and tighten the iteration cycle even further. If you’re interested in learning more about how and why the explorer was built, check out OneGraph’s blog on how they on-board users who are new to GraphQL.

What if you need to test and prototype real-time subscriptions? They also work seamlessly in the local environment. While amplify mock api is running, open another terminal window and execute yarn add aws-amplify to install the client dependencies, then run yarn start. To test, paste the code below into the src/App.js file in the React project, replacing the existing boilerplate code generated by the create-react-app command:

import React, { useEffect, useReducer } from "react";
import Amplify from "@aws-amplify/core";
import { API, graphqlOperation } from "aws-amplify";
import { createTodo } from "./graphql/mutations";
import { listTodos } from "./graphql/queries";
import { onCreateTodo } from "./graphql/subscriptions";

import config from "./aws-exports";
Amplify.configure(config); // Configure Amplify

const initialState = { todos: [] };
const reducer = (state, action) => {
  switch (action.type) {
    case "QUERY":
      return { ...state, todos: action.todos };
    case "SUBSCRIPTION":
      return { ...state, todos: [...state.todos, action.todo] };
    default:
      return state;
  }
};

async function createNewTodo() {
  const todo = { name: "Use AppSync", description: "Realtime and Offline" };
  await API.graphql(graphqlOperation(createTodo, { input: todo }));
}
function App() {
  const [state, dispatch] = useReducer(reducer, initialState);

  useEffect(() => {
    getData();
    const subscription = API.graphql(graphqlOperation(onCreateTodo)).subscribe({
      next: eventData => {
        const todo = eventData.value.data.onCreateTodo;
        dispatch({ type: "SUBSCRIPTION", todo });
      }
    });
    return () => {
      subscription.unsubscribe();
    };
  }, []);

  async function getData() {
    const todoData = await API.graphql(graphqlOperation(listTodos));
    dispatch({ type: "QUERY", todos: todoData.data.listTodos.items });
  }

  return (
    <div>
      <div className="App">
        <button onClick={createNewTodo}>Add Todo</button>
      </div>
      <div>
        {state.todos.map(todo => (
          <p key={todo.id}>
            {todo.name} : {todo.description}
          </p>
        ))}
      </div>
    </div>
  );
}
export default App;

Open two browser windows, one with the local GraphiQL instance and another one with the React App. As you can see in the following animation, you’ll be able to create items, see the mutations automatically triggering subscriptions and displaying the changes in the web app with no need to reload the browser:


If you want to access your local NoSQL data directly, you can: because DynamoDB Local uses SQLite internally, you can browse the data in the tables using your IDE extension of choice:

Seamless transition between local and cloud environments

In the screenshot above you’ll notice the GraphQL API is in a “Create” state in the terminal section at the bottom, which means the backend resources are not deployed to the cloud yet. If we check the local “aws-exports.js” file generated by Amplify, which contains the identifiers of the resources created in the different categories, you’ll notice the API endpoint is accessed locally and a fake API key is used to authorize calls:

const awsmobile = {
    "aws_project_region": "us-east-1",
    "aws_appsync_graphqlEndpoint": "http://localhost:20002/graphql",
    "aws_appsync_region": "us-east-1",
    "aws_appsync_authenticationType": "API_KEY",
    "aws_appsync_apiKey": "da2-fakeApiId123456"
};

export default awsmobile;

What about testing more refined authentication requirements? You can still authenticate against a Cognito user pool. The local testing server honors the JWT tokens generated by Amazon Cognito and the rules defined by the @auth directive in your GraphQL schema. However, as Cognito is not running locally, you need to execute amplify push first to create the user pool; you can then easily test user access with, for instance, the Amplify withAuthenticator higher-order component in React. After that, you can move back to the local environment with amplify mock api and authenticate calls with the generated JWT tokens. If you want to test directly from GraphiQL after your API is configured to use Cognito, the Amplify GraphiQL Explorer provides a way to mock and change the username, groups, and email of a user and generate a local JWT token just by clicking the “Auth” button. The mocked values are used by the GraphQL Transformer @auth directive and any access rules:
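For illustration, a minimal owner-based rule on the sample type might look like the following (this is just one of the rule types the @auth directive supports; adjust it to your application’s requirements):

```graphql
type Todo @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  description: String
}
```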

After pushing and deploying the changes to the cloud with amplify push, the “aws-exports.js” file is updated accordingly to point to the appropriate resources:

const awsmobile = {
    "aws_project_region": "us-east-1",
    "aws_appsync_graphqlEndpoint": "https://eriicnzxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql",
    "aws_appsync_region": "us-east-1",
    "aws_appsync_authenticationType": "API_KEY",
    "aws_appsync_apiKey": "da2-gttjhle72nf3pbfzfil2jy54ne"
};

export default awsmobile;

You can easily move back and forth between local and cloud environments as the identifiers in the exports file are updated automatically.

Local Debugging and Customizing VTL Resolvers

The local mocking environment also allows you to easily customize and debug AppSync resolvers. You can edit VTL templates locally and check whether they contain errors, including the line numbers causing problems, before pushing to AppSync. To do so, with the local API running, navigate to the folder amplify/backend/api/<your API name>/resolvers. You will see a list of resolver templates that the GraphQL Transformer automatically generated. You can modify any of them and, after saving your changes, they are immediately loaded into the locally running API service with the message “Mapping template change detected. Reloading.” If you inject an error, for instance by adding an extra curly brace, you will see a meaningful description of the problem and the line where the error was detected, as shown below:

When you stop the mock endpoint, for instance to push your changes to the cloud, all of the templates in the amplify/backend/api/<your API name>/resolvers folder are removed except for any that you modified. When you subsequently push to the cloud, these local changes are automatically merged with your AppSync API.

As you are developing your app, you can always update the GraphQL schema located at amplify/backend/api/<your API name>/schema.graphql. You can add additional types and any of the supported GraphQL Transform directives then save your changes while the local server is still running. Any updates to the schema will be automatically detected and validated, then immediately hot reloaded into the local API. Whenever you’re happy with the backend, pushing and deploying the changes to the cloud is just one CLI command away.
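For example, adding a second model type to the schema while the mock server is running is enough to have new tables, resolvers, and CRUDL operations hot reloaded locally (the Note type below is purely illustrative):

```graphql
type Note @model {
  id: ID!
  title: String!
  body: String
}
```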

Integrating Lambda Functions

Today you can already create and invoke Lambda functions written in Node.js locally with the Amplify CLI. How can you go even further and integrate Lambda functions with GraphQL APIs in the new local mocking environment? It’s very easy to test customized business logic implemented with Lambda in your local API. Let’s start by adding a Lambda function to your Amplify project with the command amplify add function, creating a function called “factOfTheDay” as follows:

The function calls an external API to retrieve a fact related to the current date. Here’s the code:

const axios = require("axios");
const moment = require("moment");

exports.handler = function(event, _, callback) {
  let apiUrl = `http://numbersapi.com/`;
  let day = moment().format("D");
  let month = moment().format("M");
  let factOfTheDay = apiUrl + month + "/" + day;

  axios
    .get(factOfTheDay)
    .then(response => callback(null, response.data))
    .catch(err => callback(err));
};

Since the function above uses both the axios and moment libraries, we need to install them in the function folder amplify/backend/function/factOfTheDay/src by executing either npm install axios moment or yarn add axios moment. We can also test the function locally with the command amplify mock function factOfTheDay:
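If you want to sanity-check the URL the function builds without calling the external API, the date-to-path logic can be reproduced standalone. The following sketch (a hypothetical factUrl helper, not part of the generated function) mirrors it using the built-in Date API instead of moment:

```javascript
// Hypothetical helper mirroring the handler's URL construction:
// numbersapi.com expects a 1-based month followed by the day of the month.
function factUrl(date) {
  const month = date.getMonth() + 1; // Date months are 0-based
  const day = date.getDate();
  return `http://numbersapi.com/${month}/${day}`;
}

// For example, November 15 produces http://numbersapi.com/11/15
console.log(factUrl(new Date(2019, 10, 15)));
```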

In our API we’ll add a field to the “Todo” type so that every time we read or create records, the Lambda function is invoked to retrieve a fact related to the current day. To do that, we’ll take advantage of the GraphQL Transformer @function directive and point it to our Lambda function by editing the file amplify/backend/api/localdev/schema.graphql:

type Todo @model {
  id: ID!
  name: String!
  description: String
  factOfTheDay: String @function(name: "factOfTheDay-${env}")
}

To try it out, we execute amplify mock to mock all the supported categories locally (in this case, API and Function) and access the local instance of the GraphiQL IDE in the browser:

As you can see, the GraphQL query successfully invokes the local Lambda function as well as retrieving data from the local DynamoDB table with a single call. To commit the changes and create the Lambda function in the cloud, it’s just a matter of executing amplify push.
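For reference, a single query like the following resolves both the DynamoDB-backed fields and the Lambda-backed factOfTheDay field in one round trip (assuming the Todo type shown above):

```graphql
query TodosWithFacts {
  listTodos {
    items {
      name
      description
      factOfTheDay
    }
  }
}
```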

Integrating S3 storage

Most apps need access to some sort of content such as audio, video, images, or PDFs, and S3 is the best way to store these assets. How can we easily bring S3 to our local development environment?

First, let’s add storage to our amplify project with amplify add storage. If you have not previously added the “Auth” category in your project, the “Storage” category will also ask you to set this up and it is OK to do so. While this doesn’t impact local mocking as there are no authorization checks at this time for the Storage category, you must configure it first for cloud deployment to make sure the S3 bucket is secured according to your application requirements:

To start testing, execute amplify mock. Alternatively, you can run amplify mock storage to only mock the Storage category. If you have not pushed Auth resources to the cloud, you’ll need to do so by executing amplify auth push to create/update the Cognito resources as they’ll be needed to secure access to the actual S3 bucket.

You can use any of the storage operations provided by the Amplify library in your application code, such as put, get, remove, or list, as well as UI components to sign up/sign in users and interact with the local content. Files are saved to your local Amplify project folder under amplify/mock-data/S3. When ready, execute amplify push to create the S3 bucket in the cloud.

Conclusion

With the new local mocking environment, we want to deliver a great experience to developers using the Amplify Framework. Now you can quickly spin up local resources, test, prototype, debug and generate code with open source tools, work on the front-end and create your fullstack serverless application in no time. On top of that, after you’re done and happy with your local development results, you can commit the code to GitHub and link your repository to the AWS Amplify Console which will provide a built-in CI/CD workflow. The console detects changes to the repository and automatically triggers builds to create your Amplify project backend cloud resources in multiple environments as well as publish your front-end web application to a content delivery network. Fullstack local development, testing, debugging, CI/CD, code builds and web publishing made much easier and faster for developers.

It’s just Day 1 for local development, mocking, and testing on Amplify. What else would you like to see in our local mocking environment? If you have any ideas, feel free to create a feature request in our GitHub repository. Our team constantly monitors the repository, and we’re always listening to your requests. Go build (now locally on your laptop)!

from AWS Mobile Blog

Supporting backend and internal processes with AWS AppSync multiple authorization types


Imagine a scenario where you created a mobile or web application that uses a GraphQL API built on top of AWS AppSync and Amazon DynamoDB tables. Another backend or internal process such as an AWS Lambda function now needs to update data in the backend tables. A new feature in AWS AppSync lets you grant the Lambda function access to make secure GraphQL API calls through the unified AppSync API endpoint.

This post explores how to use the multiple authorization type feature to accomplish that goal.

Overview

In your application, you implemented the following:

  1. Users authenticate through Amazon Cognito user pools.
  2. Users query the AWS AppSync API to view your data in the app.
  3. The data is stored in DynamoDB tables.
  4. GraphQL subscriptions reflect changes to the data back to the user.

Your app is great. It works well. However, you may have another backend or internal process that wants to update the data in the DynamoDB tables behind the scenes, such as:

  • An external data-ingestion process to an Amazon S3 bucket
  • Real-time data gathered through Amazon Kinesis Data Streams
  • An Amazon SNS message responding to an outside event

For each of these scenarios, you want to use a Lambda function to go through a unified API endpoint to update data in the DynamoDB tables. AWS AppSync can serve as an appropriate middle layer to provide this functionality.

Walkthrough

An Amazon Cognito user pool authenticates and authorizes your API. Keep this in mind when considering the best way to grant the Lambda function access to make secure AWS AppSync API calls.

Choosing an authorization mode

AWS AppSync supports four different authorization types:

  • API_KEY: For using API keys
  • AMAZON_COGNITO_USER_POOLS: For using an Amazon Cognito user pool
  • AWS_IAM: For using IAM permissions
  • OPENID_CONNECT: For using your OpenID Connect provider

Before the launch of the multiple authorization type feature, you could only use one of these authorization types at a time. Now, you can mix and match them to provide better levels of access control.

To set additional authorization types, use the following schema directives:

  • @aws_api_key — A field uses API_KEY for authorization.
  • @aws_cognito_user_pools — A field uses AMAZON_COGNITO_USER_POOLS for authorization.
  • @aws_iam — A field uses AWS_IAM for authorization.
  • @aws_oidc — A field uses OPENID_CONNECT for authorization.

The AWS_IAM type is ideal for the Lambda function because the Lambda function is bound to an IAM execution role where you can specify the permissions this Lambda function can have. Do not use the API_KEY authorization mode; API keys are only recommended for development purposes or for use cases where it’s safe to expose a public API.

Understanding the architecture

Suppose that you have a log viewer web app that lets you view logging data:

  • It authenticates its users using an Amazon Cognito user pool and accesses an AWS AppSync API endpoint for data reads from a “Log” DynamoDB table.
  • Some backend processes publish log events and details to an SNS topic.
  • A Lambda function subscribes to the topic and invokes the AWS AppSync API to update the backend data store.

The following diagram shows the web app architecture.

The following code is your AWS AppSync GraphQL schema, with no authorization directives:

type Log {
  id: ID!
  event: String
  detail: String
}

input CreateLogInput {
  id: ID
  event: String
  detail: String
}

input UpdateLogInput {
  id: ID!
  event: String
  detail: String
}

input DeleteLogInput {
  id: ID!
}

type ModelLogConnection {
  items: [Log]
  nextToken: String
}

type Mutation {
  createLog(input: CreateLogInput!): Log
  updateLog(input: UpdateLogInput!): Log
  deleteLog(input: DeleteLogInput!): Log
}

type Query {
  getLog(id: ID!): Log
  listLogs: ModelLogConnection
}

type Subscription {
  onCreateLog: Log
    @aws_subscribe(mutations: ["createLog"])
  onUpdateLog: Log
    @aws_subscribe(mutations: ["updateLog"])
  onDeleteLog: Log
    @aws_subscribe(mutations: ["deleteLog"])
}

Configuring the AWS AppSync API

First, configure your AWS AppSync API to add the new authorization mode:

  • In the AWS AppSync console, select your API.
  • Under the name of your API, choose Settings.
  • For Default authorization mode, make sure it is set to Amazon Cognito user pool.
  • To the right of Additional authorization providers, choose New.
  • For Authorization mode, choose AWS Identity and Access Management (IAM), then choose Submit.
  • Choose Save.

Now that you’ve set up an additional authorization provider, modify your schema to allow AWS_IAM authorization by adding @aws_iam to the createLog mutation. The new schema looks like the following code:

input CreateLogInput {
  id: ID
  event: String
  detail: String
}

input UpdateLogInput {
  id: ID!
  event: String
  detail: String
}

input DeleteLogInput {
  id: ID!
}

type ModelLogConnection {
  items: [Log]
  nextToken: String
}

type Mutation {
  createLog(input: CreateLogInput!): Log
    @aws_iam
  updateLog(input: UpdateLogInput!): Log
  deleteLog(input: DeleteLogInput!): Log
}

type Query {
  getLog(id: ID!): Log
  listLogs: ModelLogConnection
}

type Subscription {
  onCreateLog: Log
    @aws_subscribe(mutations: ["createLog"])
  onUpdateLog: Log
    @aws_subscribe(mutations: ["updateLog"])
  onDeleteLog: Log
    @aws_subscribe(mutations: ["deleteLog"])
}

type Log @aws_iam {
  id: ID!
  event: String
  detail: String
}

The @aws_iam directive now authorizes the createLog mutation. Because directives work at the field level, you must also give AWS_IAM access to the Log type itself: either mark each field in the Log type with the directive, or mark the Log type as a whole with @aws_iam, as in the schema above.

You don’t have to explicitly specify the @aws_cognito_user_pools directive, because it is the default authorization type. Fields that are not marked by other directives are protected using the Amazon Cognito user pool.

Creating a Lambda function

Now that the AWS AppSync backend is set up, create a Lambda function. The function is triggered by an event published to an SNS topic, which contains logging event and detail information in the message body.

The following code example shows how the Lambda function is written in Node.js:

require('isomorphic-fetch');
const AWS = require('aws-sdk/global');
const AUTH_TYPE = require('aws-appsync/lib/link/auth-link').AUTH_TYPE;
const AWSAppSyncClient = require('aws-appsync').default;
const gql = require('graphql-tag');

const config = {
  url: process.env.APPSYNC_ENDPOINT,
  region: process.env.AWS_REGION,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: AWS.config.credentials,
  },
  disableOffline: true
};

const createLogMutation =
`mutation createLog($input: CreateLogInput!) {
  createLog(input: $input) {
    id
    event
    detail
  }
}`;

const client = new AWSAppSyncClient(config);

exports.handler = (event, context, callback) => {

  // An expected payload has the following format:
  // {
  //   "event": "sample event",
  //   "detail": "sample detail"
  // }

  // SNS delivers the message body as a JSON string, so parse it first
  const payload = JSON.parse(event['Records'][0]['Sns']['Message']);

  if (!payload['event']) {
    callback(Error("event must be provided in the message body"));
    return;
  }

  const logDetails = {
    event: payload['event'],
    detail: payload['detail']
  };

  (async () => {
    try {
      const result = await client.mutate({
        mutation: gql(createLogMutation),
        variables: {input: logDetails}
      });
      console.log(result.data);
      callback(null, result.data);
    } catch (e) {
      console.warn('Error sending mutation: ',  e);
      callback(Error(e));
    }
  })();
};

The Lambda function uses the AWS AppSync SDK to make a createLog mutation call, using the AWS_IAM authorization type.
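The message-handling portion of the handler can be exercised without deploying anything. The following sketch (a hypothetical parseSnsLog helper, not part of the function itself) mirrors how the handler extracts and validates the SNS message body, which arrives as a JSON string:

```javascript
// Hypothetical helper mirroring the handler's payload extraction.
// SNS delivers the published message as a JSON string inside the event.
function parseSnsLog(event) {
  const message = JSON.parse(event.Records[0].Sns.Message);
  if (!message.event) {
    throw new Error("event must be provided in the message body");
  }
  return { event: message.event, detail: message.detail };
}

// Example SNS event with the payload format the handler expects
const sampleEvent = {
  Records: [{ Sns: { Message: JSON.stringify({ event: "sample event", detail: "sample detail" }) } }]
};
console.log(parseSnsLog(sampleEvent));
```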

Defining the IAM role

Now, define the IAM role that this Lambda function can assume. Grant the Lambda function appsync:GraphQL permissions for your API, as well as Amazon CloudWatch Logs permissions. You also must allow the Lambda function to be triggered by an SNS topic.

You can view the full AWS CloudFormation template that deploys the Lambda function, its IAM permissions, and supporting resources:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  GraphQLApiEndpoint:
    Type: String
    Description: The https endpoint of an AppSync API
  GraphQLApiId:
    Type: String
    Description: The id of an AppSync API
  SnsTopicArn:
    Type: String
    Description: The ARN of the SNS topic that can trigger the Lambda function
Resources:
  AppSyncSNSLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: A Lambda function that invokes an AppSync API endpoint
      Handler: index.handler
      Runtime: nodejs8.10
      MemorySize: 256
      Timeout: 10
      CodeUri: ./
      Role: !GetAtt AppSyncLambdaRole.Arn
      Environment:
        Variables:
          APPSYNC_ENDPOINT: !Ref GraphQLApiEndpoint

  AppSyncLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
      Policies:
      - PolicyName: AppSyncLambdaPolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Resource: arn:aws:logs:*
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          - Effect: Allow
            Resource:
            - !Sub 'arn:aws:appsync:${AWS::Region}:${AWS::AccountId}:apis/${GraphQLApiId}*'
            Action:
            - appsync:GraphQL

  SnsSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Endpoint: !GetAtt AppSyncSNSLambda.Arn
      Protocol: Lambda
      TopicArn: !Ref SnsTopicArn

  LambdaInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref AppSyncSNSLambda
      Action: lambda:InvokeFunction
      Principal: sns.amazonaws.com
      SourceArn: !Ref SnsTopicArn

Deploying the AWS CloudFormation template

Use the following two commands to deploy the AWS CloudFormation template. Make sure to replace all the CAPS fields with values specific to your AWS account:

aws cloudformation package --template-file "cloudformation.yaml" \
  --s3-bucket "<YOUR S3 BUCKET>" \
  --output-template-file "out.yaml"

aws cloudformation deploy --template-file out.yaml \
    --stack-name appsync-lambda \
    --s3-bucket "<YOUR S3 BUCKET>" \
    --parameter-overrides GraphQLApiEndpoint="<YOUR GRAPHQL ENDPOINT>" \
      GraphQLApiId="<YOUR GRAPHQL API ID>" \
      SnsTopicArn="<YOUR SNS TOPIC ARN>" \
    --capabilities CAPABILITY_IAM

Testing the solution

After both commands succeed, and your AWS CloudFormation template deploys, do the following:

1. Open the console and navigate to the SNS topic that you specified earlier.
2. Choose Publish message.
3. For the raw message body, enter the following:

{
   "event": "sample event",
   "detail": "sample detail"
}

4. Choose Publish message.

Navigate to the Log DynamoDB table that is your AWS AppSync API’s data source. You should see a new “sample event” record created by the createLog mutation.

Conclusion

With this new feature, AWS AppSync now supports multiple authorization types. This ability demonstrates how an AWS AppSync API can serve as a powerful middle layer between multiple processes while remaining a secure API for end users.

As always, AWS welcomes feedback. Please submit comments or questions below.

Jane Shen is a cloud application architect in AWS Professional Services based in Toronto, Canada.


Announcing the new Predictions category in Amplify Framework


The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, AWS announces a new category called “Predictions” in the Amplify Framework.

Using this category, you can easily add and configure AI/ML use cases in your web and mobile applications with a few lines of code. You can accomplish these use cases with the Amplify CLI and either the Amplify JavaScript library (with the new Predictions category) or the generated iOS and Android SDKs for Amazon AI/ML services. You do not need any prior experience with machine learning or AI services to use this category.

Using the Amplify CLI, you can set up your backend by answering simple questions in the CLI flow. In addition, you can orchestrate advanced use cases such as on-demand indexing of images to auto-update a collection in Amazon Rekognition. The actual image bytes are not stored by Amazon Rekognition. For example, this enables you to securely upload new images using an Amplify storage object, which triggers an auto-update of the collection. You can then identify the new entities the next time you make inference calls using the Amplify library. You can also set up or import a SageMaker endpoint by using the “Infer” option in the CLI.

The Amplify JavaScript library with Predictions category includes support for the following use cases:

1. Translate text to a target language.
2. Generate speech from text.
3. Identify text from an image.
4. Identify entities from an image (for example, celebrity detection).
5. Label real-world entities within an image/document (for example, recognize a scene, objects, and activity in an image).
6. Interpret text to find insights and relationships in text.
7. Transcribe text from audio.
8. Indexing of images with Amazon Rekognition.

The supported use cases leverage the following AI/ML services:

  • Amazon Rekognition
  • Amazon Translate
  • Amazon Polly
  • Amazon Transcribe
  • Amazon Comprehend
  • Amazon Textract

The iOS and Android SDKs now include support for SageMaker runtime which you can use to call inference on your custom models hosted on SageMaker. You can also extract text and data from scanned documents using the newly added support for Amazon Textract in the Android SDK. These services add to the list of existing AI services supported in iOS and Android SDKs.

In this post, you build and host a React.js web application that takes English text as input and translates it to Spanish. In addition, you can convert the translated text to speech in Spanish. For example, this type of use case could be added to a travel application, where you type text in English and play back the translated text in a language of your choice. To build this app, you use two capabilities from the Predictions category: text translation and generating speech from text.

Then, we go through the flow of indexing images to update a collection, both from the Amplify CLI and from an application, when using Amazon Rekognition.

Building the React.js Application

Prerequisites:

Install Node.js and npm if they are not already installed on your machine.

Steps

To create a new React.js app

Create a new React.js application using the following command:

$ npx create-react-app my-app

To set up your backend

Install and configure the Amplify CLI using the following command:

$ npm install -g @aws-amplify/cli
$ amplify configure

To create a new Amplify project

Run the following command from the root folder of your React.js application:

$ amplify init

Choose the following default options as shown below:

? Enter a name for the project: my-app
? Enter a name for the environment: dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building: javascript
? What javascript framework are you using: react
? Source Directory Path:  src
? Distribution Directory Path: build
? Build Command:  npm run-script build
? Start Command: npm run-script start
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use: default

To add text translation

Add the new Predictions category to your Amplify project using the following command:

$ amplify add predictions

The command line interface asks you simple questions to add AI/ML use cases. There are four options: Identify, Convert, Interpret, and Infer.

  • Choose the “Convert” option.
  • When prompted, add authentication if you have not already done so.
  • Select the following options in CLI:
? Please select from of the below mentioned categories: Convert
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? No, I am done.
? What would you like to convert? Convert text into a different language
? Provide a friendly name for your resource: translateText6c4601e3
? What is the source language? English
? What is the target language? Spanish
? Who should have access? Auth and Guest users

To add text to speech

Run the following command to add text to speech capability to your project:

$ amplify add predictions
? Please select from of the below mentioned categories: Convert
? What would you like to convert? Convert text to speech
? Provide a friendly name for your resource: speechGeneratorb05d231c
? What is the source language? Mexican Spanish
? Select a speaker Mia - Female
? Who should have access? Auth and Guest users

To integrate the predictions library in a React.js application

Now that you have set up the backend, integrate the Predictions library into your React.js application.

The application UI shows “Text Translation” and “Text to Speech” with a separate button for each functionality. The output of the text translation is the translated text in JSON format. The output of Text to Speech is an audio file that can be played from the application.

First, install the Amplify and Amplify React dependencies using the following command:

$ npm install aws-amplify aws-amplify-react

Next, open src/App.js and add the following code:

import React, { useState } from 'react';
import './App.css';
import Amplify from 'aws-amplify';
import Predictions, { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
 
import awsconfig from './aws-exports';
 
Amplify.addPluggable(new AmazonAIPredictionsProvider());
Amplify.configure(awsconfig);
 
 
function TextTranslation() {
  const [response, setResponse] = useState("Input text to translate")
  const [textToTranslate, setTextToTranslate] = useState("write to translate");

  function translate() {
    Predictions.convert({
      translateText: {
        source: {
          text: textToTranslate,
          language : "en" // defaults configured in aws-exports.js
        },
        targetLanguage: "es"
      }
    }).then(result => setResponse(JSON.stringify(result, null, 2)))
      .catch(err => setResponse(JSON.stringify(err, null, 2)))
  }

  function setText(event) {
    setTextToTranslate(event.target.value);
  }

  return (
    <div className="Text">
      <div>
        <h3>Text Translation</h3>
        <input value={textToTranslate} onChange={setText}></input>
        <button onClick={translate}>Translate</button>
        <p>{response}</p>
      </div>
    </div>
  );
}
 
function TextToSpeech() {
  const [response, setResponse] = useState("...")
  const [textToGenerateSpeech, setTextToGenerateSpeech] = useState("write to speech");
  const [audioStream, setAudioStream] = useState();
  function generateTextToSpeech() {
    setResponse('Generating audio...');
    Predictions.convert({
      textToSpeech: {
        source: {
          text: textToGenerateSpeech,
          language: "es-MX" // default configured in aws-exports.js 
        },
        voiceId: "Mia"
      }
    }).then(result => {
      
      setAudioStream(result.speech.url);
      setResponse(`Generation completed, press play`);
    })
      .catch(err => setResponse(JSON.stringify(err, null, 2)))
  }

  function setText(event) {
    setTextToGenerateSpeech(event.target.value);
  }

  function play() {
    var audio = new Audio();
    audio.src = audioStream;
    audio.play();
  }
  return (
    <div className="Text">
      <div>
        <h3>Text To Speech</h3>
        <input value={textToGenerateSpeech} onChange={setText}></input>
        <button onClick={generateTextToSpeech}>Text to Speech</button>
        <h3>{response}</h3>
        <button onClick={play}>play</button>
      </div>
    </div>
  );
}
 
function App() {
  return (
    <div className="App">
      <TextTranslation />
      <hr />
      <TextToSpeech />
      <hr />
    </div>
  );
}
 
export default App;

In the previous code, the source language for translate is set by default in aws-exports.js. Similarly, the default language is set for text-to-speech in aws-exports.js. You can override these values in your application code.

To add hosting for your application

You can enable static web hosting for your React application on Amazon S3 by running the following command from the root of your application folder:

$ amplify add hosting

To publish the application, run:

$ amplify publish

The application is now hosted on the AWS Amplify Console and you can access it at a link that looks like http://my-appXXXXXXXXXXXX-hostingbucket-dev.s3-website-us-XXXXXX.amazonaws.com/

On-demand indexing of images

The “Identify entities” option in the Amplify CLI using Amazon Rekognition can detect entities like celebrities by default. However, you can use Amplify to index new entities to auto-update the collection in Amazon Rekognition. This enables advanced use cases, such as uploading a new image and then having the entities in an input image recognized when they match an entry in the collection. Note that Amazon Rekognition does not store any image bytes.

Here is how it works at a high level:

Note: if you delete the image from S3, the entity is removed from the collection.
You can easily set up the indexing feature from the Amplify CLI using the following flow:

$ amplify add predictions
? Please select from of the below mentioned categories Identify
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? What would you like to identify? Identify Entities
? Provide a friendly name for your resource identifyEntities5a41fcea
? Would you like use the default configuration? Advanced Configuration
? Would you like to enable celebrity detection? Yes
? Would you like to identify entities from a collection of images? Yes
? How many entities would you like to identify 50
? Would you like to allow users to add images to this collection? Yes
? Who should have access? Auth users
? The CLI would be provisioning an S3 bucket to store these images please provide bucket name: myappentitybucket

If you have already set up storage from the Amplify CLI by running `amplify add storage`, the bucket that was created is reused. To upload images for indexing from the CLI, run `amplify predictions upload`; it prompts you for a folder location containing your images.

After you have set up the backend through the CLI, you can use an Amplify storage object to add images to the S3 bucket, which triggers the auto-indexing of images and updates the collection in Amazon Rekognition.

In your src/App.js add the following function that uploads image test.jpg to Amazon S3:

function PredictionsUpload() {
  
 function upload(event) {
    const { target: { files } } = event;
    const [file,] = files || [];
    Storage.put('test.jpg', file, {
      level: 'protected',
      customPrefix: {
        protected: 'protected/predictions/index-faces/',
      }
    });
  }

  return (
    <div className="Text">
      <div>
        <h3>Upload to predictions s3</h3>
        <input type="file" onChange={upload}></input>
      </div>
    </div>
  );
}

Next, call the Predictions.identify() function to identify entities in an input image using the following code. Note that we have to set “collection: true” in the call to identify.

function EntityIdentification() {
  const [response, setResponse] = useState("Click upload for test ")
  const [src, setSrc] = useState("");

  function identifyFromFile(event) {
    setResponse('searching...');
    
    const { target: { files } } = event;
    const [file,] = files || [];

    if (!file) {
      return;
    }
    Predictions.identify({
      entities: {
        source: {
          file,
        },
        collection: true,
        celebrityDetection: true
      }
    }).then(result => {
      console.log(result);
      const entities = result.entities;
      let imageId = ""
      entities.forEach(({ boundingBox, metadata: { name, externalImageId } }) => {
        const {
          width, // ratio of overall image width
          height, // ratio of overall image height
          left, // left coordinate as a ratio of overall image width
          top // top coordinate as a ratio of overall image height
        } = boundingBox;
        imageId = externalImageId;
        console.log({ name });
      })
      if (imageId) {
        Storage.get("", {
          customPrefix: {
            public: imageId
          },
          level: "public",
        }).then(setSrc); 
      }
      console.log({ entities });
      setResponse(imageId);
    })
      .catch(err => console.log(err))
  }

  return (
    <div className="Text">
      <div>
        <h3>Entity identification</h3>
        <input type="file" onChange={identifyFromFile}></input>
        <p>{response}</p>
        { src && <img src={src}></img>}
      </div>
    </div>
  );
}
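As the comments in the code above note, the boundingBox values come back as ratios of the overall image size. A small sketch of converting them to pixel coordinates for rendering overlays; the helper name is illustrative, not part of the Amplify API:

```javascript
// Convert a ratio-based bounding box (as returned in result.entities) into
// pixel coordinates for a given rendered image size.
function toPixelBox(boundingBox, imageWidth, imageHeight) {
  return {
    left: Math.round(boundingBox.left * imageWidth),
    top: Math.round(boundingBox.top * imageHeight),
    width: Math.round(boundingBox.width * imageWidth),
    height: Math.round(boundingBox.height * imageHeight)
  };
}
```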

To learn more about the predictions category, visit our documentation.

Feedback

We hope you like these new features! Let us know how we are doing, and submit any feedback in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.


Amplify Framework adds support for AWS Lambda Triggers in Auth and Storage categories

The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, we’re happy to announce that you can set up AWS Lambda triggers directly from the Amplify CLI.

Using Lambda triggers, you can call event-based Lambda functions for authentication, database actions, and storage operations from other AWS services like Amazon Simple Storage Service (Amazon S3), Amazon Cognito, and Amazon DynamoDB. Now, the Amplify CLI allows you to enable and configure these triggers. The CLI further simplifies the process by providing you with trigger templates that you can customize to suit your use case.

The Lambda trigger capabilities for Auth category include:

  1. Add Google reCaptcha Challenge: This enables you to add Google’s Captcha implementation to your mobile or web app.
  2. Email verification link with redirect: This trigger enables you to define an email message that can be used for an account verification flow.
  3. Add user to an Amazon Cognito User Pools group: This enables you to add a user to an Amazon Cognito User Pools group upon account registration.
  4. Email domain filtering: This enables you to define email domains that you would like to allow or block during sign-up.
  5. Custom Auth Challenge Flow: This enables you to add a custom auth flow to your mobile and web application by providing a basic skeleton that you can edit to implement custom authentication in your application.

The Lambda trigger for the Storage category can be added when creating or updating the storage resource using the Amplify CLI.

Auth Triggers for Authentication with Amazon Cognito

The Lambda triggers for Auth enable you to build custom authentication flows in your mobile and web application. These triggers can be associated with Amazon Cognito User Pool operations such as sign-up, account confirmation, and sign-in. The Amplify CLI provides template triggers for the capabilities listed above, which can be customized to suit your use case.

A custom authentication flow using Amazon Cognito User Pools typically comprises three steps:

  1. Define Auth Challenge: Determines the next challenge in the custom auth flow.
  2. Create Auth Challenge: Creates a challenge in the custom auth flow.
  3. Verify Auth Challenge: Determines if a response is correct in a custom auth flow.

When you add auth to your Amplify project, the CLI asks you if you want to add capabilities for custom authentication. It generates the trigger templates for each step in your custom auth flow depending on the capability chosen. You can edit the generated templates to suit your requirements. Once complete, you push your project using the `amplify push` command. For more information on these capabilities, refer to our documentation.
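As a sketch of what step 1 might look like, here is a minimal “Define Auth Challenge” handler for a flow with a single custom challenge. This illustrates the Cognito event shape only; it is not the template the CLI generates:

```javascript
// Define Auth Challenge sketch: issue tokens after one correct
// CUSTOM_CHALLENGE answer, fail authentication after three attempts,
// otherwise present (another) custom challenge.
const defineAuthChallenge = (event, context, callback) => {
  const session = event.request.session || [];
  const last = session[session.length - 1];

  if (last && last.challengeName === 'CUSTOM_CHALLENGE' && last.challengeResult === true) {
    // Correct answer: stop challenging and issue tokens.
    event.response.issueTokens = true;
    event.response.failAuthentication = false;
  } else if (session.length >= 3) {
    // Too many failed attempts: fail authentication.
    event.response.issueTokens = false;
    event.response.failAuthentication = true;
  } else {
    // Present the custom challenge (again).
    event.response.issueTokens = false;
    event.response.failAuthentication = false;
    event.response.challengeName = 'CUSTOM_CHALLENGE';
  }
  callback(null, event);
};
```

In a deployed function this handler would be assigned to `exports.handler`; the Create and Verify steps follow the same event/response pattern.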

Here is an example of how you add one of these custom auth capabilities in your application.

Adding a new user to group in Amazon Cognito

Using Amazon Cognito User Pools, you can create and manage groups, add users to groups, and remove users from groups. With groups, you can create collections of users to manage their permissions or to represent different user types.

You can now use the Amplify CLI to add a Lambda trigger to add a user to a group after they have successfully signed up. Here’s how it works.

Creating the authentication service and configuring the Lambda Trigger

From the CLI, create a new Amplify project with the following command:

amplify init

Next, add authentication with the following command:

amplify add auth

The command line interface then walks you through the following steps for adding authentication:

? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? Yes, I want to make some additional changes.
? What attributes are required for signing up? Email
? Do you want to enable any of the following capabilities?
 ◯ Add Google reCaptcha Challenge
 ◯ Email Verification Link with Redirect
❯◉ Add User to Group
 ◯ Email Domain Filtering (blacklist)
 ◯ Email Domain Filtering (whitelist)
 ◯ Custom Auth Challenge Flow (basic scaffolding - not for production)
 ? Enter the name of the group to which users will be added. STUDENTS
 ? Do you want to edit the local PostConfirmation lambda function now? No
 ? Do you want to edit your add-to-group function now? Yes

The interface should then open the appropriate Lambda function template, which you can edit in your text editor. The code for the function will be located at amplify/backend/function/<functionname>/src/add-to-group.js.

The Lambda function that you write for this example adds new users to a group called STUDENTS when they have an .edu email address. This function triggers after the signup successfully completes.

Update the Lambda function add-to-group.js with the following code:

const aws = require('aws-sdk');

exports.handler = (event, context, callback) => {
  const cognitoidentityserviceprovider = new aws.CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });

  const email = event.request.userAttributes.email.split('.')
  const domain = email[email.length - 1]

  if (domain === 'edu') {
    const params = {
      GroupName: 'STUDENTS',
      UserPoolId: event.userPoolId,
      Username: event.userName,
    }
  
    cognitoidentityserviceprovider.adminAddUserToGroup(params, (err) => {
      if (err) { return callback(err) }
      callback(null, event);
    })
  } else {
    callback(null, event)
  }
}

To deploy the authentication service and the Lambda function, run the following command:

amplify push

Now, when a user signs up with an .edu email address, they are automatically placed in the STUDENTS group.
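The domain check used by the trigger above is easy to factor out and unit test. A sketch, where the helper name is illustrative:

```javascript
// Returns true when the email's final dot-separated segment is 'edu',
// mirroring the check in the PostConfirmation trigger above.
function isEduAddress(email) {
  const parts = email.split('.');
  return parts[parts.length - 1] === 'edu';
}
```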

Integrating with a client application

Now that you have the authentication service up and running, let’s integrate with a React application that signs the user in and recognizes that the user is part of the STUDENTS group.

First, install the Amplify and Amplify React dependencies:

npm install aws-amplify aws-amplify-react

Next, open src/index.js and add the following code to configure the app to recognize the Amplify project configuration:

import Amplify from 'aws-amplify'
import config from './aws-exports'
Amplify.configure(config)

Next, update src/App.js. The code recognizes the user groups of a user after they have signed in and displays a welcome message if the user is in the STUDENTS group.

// src/App.js
import React, { useEffect, useState } from 'react'
import logo from './logo.svg'
import './App.css'
import { withAuthenticator } from 'aws-amplify-react'
import { Auth } from 'aws-amplify'

function App() {
  const [isStudent, updateStudentInfo] = useState(false)
  useEffect(() => {
    /* Get the current user's session (which includes the Cognito ID token) */
    Auth.currentSession()
      .then(cognitoUser => {
        const { idToken: { payload }} = cognitoUser
        /* Loop through the groups that the user is a member of */
        /* Set isStudent to true if the user is part of the STUDENTS group */
        payload['cognito:groups'] && payload['cognito:groups'].forEach(group => {
          if (group === 'STUDENTS') updateStudentInfo(true)
        })
      })
      .catch(err => console.log(err));
  }, [])
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        { isStudent && <h1>Welcome, Student!</h1> }
      </header>
    </div>
  );
}

export default withAuthenticator(App, { includeGreetings: true })

Now, if the user is part of the STUDENTS group, they will get a specialized greeting.
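The group check in App.js above boils down to a small pure function. A sketch with an illustrative name:

```javascript
// Given a decoded Cognito ID token payload, report whether the user belongs
// to the named group. Mirrors the group check performed in App.js above.
function isInGroup(payload, groupName) {
  const groups = payload['cognito:groups'] || [];
  return groups.includes(groupName);
}
```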

Storage Triggers for Amazon S3 and Amazon DynamoDB

With this release, we’ve also enabled the ability to set up Lambda triggers for Amazon S3 and Amazon DynamoDB. This means you can execute a Lambda function on events such as create, update, read, and write. When adding or configuring storage from the Amplify CLI, you now have the option to add and configure a storage trigger.

Resizing an image with AWS Lambda and Amazon S3

Let’s take a look at how to use one of the new triggers to resize an image into a thumbnail after it has been uploaded to an S3 bucket.

From the CLI, create a new Amplify project with the following command:

amplify init

Next, add storage with the following command:

amplify add storage

The interface then walks you through the add storage setup.

? Please select from one of the below mentioned services: Content (Images, audio, video, etc.)
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? No, I am done.
? Please provide a friendly name for your resource that will be used to label this category in the project: MyS3Example
? Please provide bucket name: <YOUR_UNIQUE_BUCKET_NAME>
? Who should have access: Auth and guest users
? What kind of access do you want for Authenticated users? create/update, read, delete
? What kind of access do you want for Guest users? read
? Do you want to add a Lambda Trigger for your S3 Bucket? Y
? Select from the following options: Create a new function

The CLI then generates a code template for the new Lambda function, which you can modify as needed. It will be located at amplify/backend/function/<functionname>/src/index.js.

Replace the code in index.js with the following code:

const gm = require('gm').subClass({ imageMagick: true })
const aws = require('aws-sdk')
const s3 = new aws.S3()

const WIDTH = 100
const HEIGHT = 100

exports.handler = (event, context, callback) => {
  const BUCKET = event.Records[0].s3.bucket.name

  /* Get the image data we will use from the first record in the event object */
  const KEY = event.Records[0].s3.object.key
  const PARTS = KEY.split('/')

  /* Check to see if the base folder is already set to thumbnails, if it is we return so we do not have a recursive call. */
  const BASE_FOLDER = PARTS[0]
  if (BASE_FOLDER === 'thumbnails') return

  /* Stores the main file name in a variable */
  let FILE = PARTS[PARTS.length - 1]

  s3.getObject({ Bucket: BUCKET, Key: KEY }).promise()
    .then(image => {
      gm(image.Body)
        .resize(WIDTH, HEIGHT)
        .setFormat('jpeg')
        .toBuffer(function (err, buffer) {
          if (err) {
            console.log('error storing and resizing image: ', err)
            callback(err)
          }
          else {
            s3.putObject({ Bucket: BUCKET, Body: buffer, Key: `thumbnails/thumbnail-${FILE}` }).promise()
            .then(() => { callback(null) })
            .catch(err => { callback(err) })
          }
        })
    })
    .catch(err => {
      console.log('error resizing image: ', err)
      callback(err)
    })
}

You can trace the execution of the preceding code in Amazon CloudWatch Logs on an event such as an upload to the S3 bucket.
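The key handling in the handler above (skip anything already under thumbnails/, then derive the destination key) can be factored into a pure function for testing. A sketch with an illustrative name:

```javascript
// Given an uploaded object key, return the destination key for its thumbnail,
// or null when the object is already under thumbnails/ (avoiding the
// recursive trigger the handler above guards against).
function thumbnailKeyFor(key) {
  const parts = key.split('/');
  if (parts[0] === 'thumbnails') return null;
  const file = parts[parts.length - 1];
  return `thumbnails/thumbnail-${file}`;
}
```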

Next, install the gm (GraphicsMagick) package in the Lambda function directory. This ensures that the Lambda function has the dependencies it needs to execute.

cd amplify/backend/function/<functionname>/src

npm install gm

cd ../../../../../

To deploy the services, run the following command:

amplify push

Next, visit the S3 console, open your bucket and upload an image. Once the upload has completed, a folder named thumbnails will be created and the resized image will be stored there.

To learn more about creating storage triggers, check out the documentation.

Feedback

We hope you like these new features! As always, let us know how we’re doing, and submit any requests in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.


Deploy files stored on Amazon S3, Dropbox, or your Desktop to the AWS Amplify Console

This article was written by Nikhil Swaminathan, Sr. Product Manager, AWS.

AWS Amplify recently launched a manual deploy option, providing you with the ability to host a static web app without connecting to a Git repository. You can deploy files stored on your desktop, Amazon S3, or files stored with any cloud provider.

The Amplify Console offers fully managed hosting with features such as instant cache invalidation, atomic deploys, redirects, and custom domain management. You can now use Amplify hosting with your own CI workflows, or to quickly generate a shareable URL to share a prototype.

This post describes how to deploy files manually from several different locations.

Overview

There are three locations from which you can manually deploy files:

  1. Deploy a folder from your desktop.
  2. Deploy files from S3 – upload files to an S3 bucket to push updates to your site automatically.
  3. Any URL – upload files to your Dropbox account to host a site.

Solution

First, if you have an existing app, run the following commands to create an output directory (typically named dist or public):

  1. cd my-app OR create-react-app my-app
  2. npm install
  3. npm run build

Deploy a folder from your desktop

The easiest way to host your site is to drag a folder from your desktop:

  1. Log in to the Amplify Console
  2. Choose Deploy without a Git provider
  3. On the following screen, enter your app name and the name of your environment. Every Amplify app can have multiple environments. For example, you can host both a dev and prod version of your site.
  4. Drag and drop the output folder as shown below and choose Save and Deploy
  5. That’s it! Your site should be live at https://environmentname.appid.amplifyapp.com. Try making some code changes and upload a staging version of your site by choosing Add new environment.

Deploy files from Dropbox

  1. Log in to your Dropbox account and upload your build artifacts zip file to Dropbox.
  2. Create a shared link for the uploaded zip file. The link looks like https://www.dropbox.com/s/a1b2c3d4ef5gh6/example.docx?dl=0. Change the query param at the end of the URL to “dl=1” to force the browser to download the link.
  3. From the Amplify Console, choose Deploy without a Git provider and then choose Any URL. Provide the URL and choose Save and deploy. Your site is now live!
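The share-link rewrite described in step 2 can be sketched as a small helper (the name is illustrative):

```javascript
// Rewrite a Dropbox share link so the browser downloads the file directly
// (dl=0 -> dl=1), as described in step 2 above.
function toDirectDownload(shareUrl) {
  const url = new URL(shareUrl);
  url.searchParams.set('dl', '1');
  return url.toString();
}
```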

Deploy files from Amazon S3

Many developers use S3 for static hosting. You can continue to use S3 to sync your files while also leveraging the hosting features offered by the Amplify Console. For example, you can automatically trigger updates to your site using the Amplify Console, S3, and AWS Lambda.

Set up an S3 bucket

For this example, set up an S3 bucket to automatically trigger deployments to your site on any update:

1. In the S3 console, select an existing bucket or create a new one

2. Build your app locally and upload a zipped version of your build artifacts. For this example, use the AWS CLI to upload your file to S3 (you can also use the S3 console):

cd myawesomeapp
yarn run build
cd public #build directory
zip -r myapp.zip *
aws s3 cp myapp.zip s3://bucketname

3. In the Amplify Console, choose Deploy without a Git provider

4. For Method, choose Amazon S3, and for Bucket, choose the bucket you just created. The zip file that you uploaded should automatically appear in the Zip file list.

5. Choose Save and deploy. Your site should be live at https://environmentname.appid.amplifyapp.com.

Set up an S3 trigger

Now, set up an S3 trigger so that your site is updated automatically every time you push a new change. Use the same setup for a continuous delivery service such as AWS CodePipeline, or for GitLab or BitBucket pipelines.

1. In the Lambda console, create a new function with a new role by choosing Author from scratch

2. Copy the following code into the Lambda function editor:

const appId = '<YOUR APP ID>';
const branchName = '<YOUR BRANCH NAME>';

const aws = require("aws-sdk");
const amplify = new aws.Amplify();

exports.handler = async function(event) {
    const Bucket = event.Records[0].s3.bucket.name;
    const objectKey = event.Records[0].s3.object.key;
    await amplify.startDeployment({
        appId,
        branchName,
        sourceUrl: `s3://${Bucket}/${objectKey}`
    }).promise();
};

3. Give the Lambda function access to S3 and the Amplify Console.

  • Choose Amazon CloudWatch Logs and then choose Manage these permissions. The IAM Console opens up in a new tab.

  • In the IAM console, on the Permissions tab, choose Attach policies. For Policy name, choose AmazonS3FullAccess.

  • To give the function access to deploy to Amplify Console, choose Add inline policy. On the Create policy screen, under Visual editor, select Amplify and then choose Review policy.

  • Choose Actions, Manual actions, and select the All Amplify actions check box. Under Resources, choose All resources and then save the policy. 

4. In the Lambda console, you should see the designer updated with the correct permissions as follows.

5. Now, add an S3 trigger for the bucket so that any updates to the S3 bucket trigger the Lambda function. On the Add trigger screen, configure the trigger with the following values:

  • Bucket: Select the S3 bucket that you used earlier.
  • Event type: Choose All object create events

Test the setup

Test to make sure that your setup works:

  1. In the S3 console (or at the command line), upload a new zip artifact.
  2. Navigate to the Amplify Console. You should see a new deployment, as shown in the following screenshot.

Success! Use this setup to automatically trigger a deployment every time you push to the S3 bucket, either from your desktop or from the continuous delivery pipeline.

Conclusion

This post showed you how to use the new manual deploy option in the Amplify Console, which lets you host a static web app without connecting to a Git repository. You can now manually deploy files in three different ways: from your desktop, from any URL, or from S3. Visit the Amplify Console homepage to learn more.

from AWS Mobile Blog

Deploy a VueJS app with the Amplify Console using AWS CloudFormation

This article was written by Simon Thulbourn, Solutions Architect, AWS.

Developers and operations teams love automation. It gives them the power to introduce repeatability into their applications, and the provisioning of infrastructure components is no different. Being able to create and manage resources with AWS CloudFormation is a powerful way to run and rerun the same code to create resources in AWS across accounts.

Today, the Amplify Console launched support for AWS CloudFormation resources to give developers the ability to have reliable and repeatable access to the Amplify Console service. The Amplify Console offers three new resources:

  • AWS::Amplify::App
  • AWS::Amplify::Branch
  • AWS::Amplify::Domain

For newcomers, AWS Amplify Console provides a Git-based workflow that enables developers to build, deploy, and host web applications, whether they're built with Angular, ReactJS, VueJS, or something else. These web applications can then consume APIs based on GraphQL or serverless technologies, enabling fullstack serverless applications on AWS.

Working with CloudFormation

As an example, you’ll deploy the Todo example app from VueJS using Amplify Console and the new CloudFormation resources.

You’ll start by forking the Vue repository on GitHub to your account. You have to fork the repository because Amplify Console needs to add a webhook and clone the repository for future builds.

You’ll also create a new personal access token on GitHub, since you’ll need one to embed in the CloudFormation template. You can read more about creating personal access tokens on GitHub’s website. The token needs the “repo” OAuth scope.

Note: Personal access tokens should be treated as a secret.

You can deploy the Todo application using the CloudFormation template at the end of this blog post. This CloudFormation template will create an Amplify Console App, Branch & Domain with a TLS certificate and an IAM role. To deploy the CloudFormation template, you can either use the AWS Console or the AWS CLI. In this example, we’re using the AWS CLI:

aws cloudformation deploy \
  --template-file ./template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides \
      OAuthToken=<GITHUB PERSONAL ACCESS TOKEN> \
      Repository=https://github.com/sthulb/vue \
      Domain=example.com \
  --stack-name TodoApp

After deploying the CloudFormation template, you need to go into the Amplify Console and trigger a build. CloudFormation can provision the Amplify resources, but it cannot trigger actions such as starting a build.

Diving deeper into the template

The CloudFormation template needs to be updated with your forked GitHub URL, the OAuth token created above, and a custom domain you own. The AmplifyApp resource is your project definition: it is a collection of all the branches (the AmplifyBranch resources) in your repository. The BuildSpec describes the settings used to build and deploy the branches in your app. In this example, we are deploying an example Todo app, which consists of four files. The Todo app expects a vue.min.js file to be available at https://a1b2c3.amplifyapp.com/dist/vue.min.js; as part of the buildspec we made sure vue.min.js was in the deployment artifact, but not at that location. We used the CustomRules property to rewrite the URL, so that requests for https://a1b2c3.amplifyapp.com/dist/vue.min.js are served from https://a1b2c3.amplifyapp.com/vue.min.js.
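As a rough illustration of the rewrite semantics (not the CDN implementation), a rule with Status '200' serves the Target path when the Source path is requested, without changing the browser URL:

```javascript
// The CustomRules entry from the template, modeled as plain data
const rule = { Source: '/dist/vue.min.js', Target: '/vue.min.js', Status: '200' };

// Toy approximation of a Status-200 rewrite: a request for Source is served
// from Target; any other path passes through untouched.
function resolvePath(requestPath, rules) {
  const match = rules.find(r => r.Source === requestPath && r.Status === '200');
  return match ? match.Target : requestPath;
}

resolvePath('/dist/vue.min.js', [rule]); // '/vue.min.js'
resolvePath('/index.html', [rule]);      // '/index.html' (no rule matches)
```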

The AmplifyDomain resource allows you to connect your domain (https://yourdomain.com) or a subdomain (https://foo.yourdomain.com) so end users can start visiting your site.

Template

AWSTemplateFormatVersion: 2010-09-09

Parameters:
  Repository:
    Type: String
    Description: GitHub Repository URL

  OauthToken:
    Type: String
    Description: GitHub personal access token
    NoEcho: true

  Domain:
    Type: String
    Description: Domain name to host application

Resources:
  AmplifyRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - amplify.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: Amplify
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: "amplify:*"
                Resource: "*"

  AmplifyApp:
    Type: "AWS::Amplify::App"
    Properties:
      Name: TodoApp
      Repository: !Ref Repository
      Description: VueJS Todo example app
      OauthToken: !Ref OauthToken
      EnableBranchAutoBuild: true
      BuildSpec: |-
        version: 0.1
        frontend:
          phases:
            build:
              commands:
                - cp dist/vue.min.js examples/todomvc/
          artifacts:
            baseDirectory: examples/todomvc/
            files:
              - '*'
      CustomRules:
        - Source: /dist/vue.min.js
          Target: /vue.min.js
          Status: '200'
      Tags:
        - Key: Name
          Value: Todo
      IAMServiceRole: !GetAtt AmplifyRole.Arn

  AmplifyBranch:
    Type: AWS::Amplify::Branch
    Properties:
      BranchName: master
      AppId: !GetAtt AmplifyApp.AppId
      Description: Master Branch
      EnableAutoBuild: true
      Tags:
        - Key: Name
          Value: todo-master
        - Key: Branch
          Value: master

  AmplifyDomain:
    Type: AWS::Amplify::Domain
    Properties:
      DomainName: !Ref Domain
      AppId: !GetAtt AmplifyApp.AppId
      SubDomainSettings:
        - Prefix: master
          BranchName: !GetAtt AmplifyBranch.BranchName

Outputs:
  DefaultDomain:
    Value: !GetAtt AmplifyApp.DefaultDomain

  MasterBranchUrl:
    Value: !Join [ ".", [ !GetAtt AmplifyBranch.BranchName, !GetAtt AmplifyDomain.DomainName ]]

Conclusion

To start using Amplify Console’s CloudFormation resources, visit the CloudFormation documentation page.

Acknowledgements

All of the code in the VueJS repository is licensed under the MIT license and property of Evan You and contributors.

 


Amplify Framework Adds Support for AWS Lambda Functions and Amazon DynamoDB Custom Indexes in GraphQL Schemas

Written by Kurt Kemple, Sr. Developer Advocate at AWS, and Nikhil Dabhade, Sr. Product Manager at AWS

The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, we’re happy to announce new features for the Function and API categories in the Amplify CLI.

It’s now possible to add an AWS Lambda function as a data source for your AWS AppSync API using the GraphQL transformer that is included in the Amplify CLI. You can also grant permissions for interacting with AWS resources from the Lambda function. This updates the associated IAM execution role policies without needing you to perform manual IAM policy updates.

The GraphQL transformer also includes a new @key directive that simplifies the syntax for creating custom indexes and performing advanced query operations with Amazon DynamoDB. This streamlines the process of configuring complex key structures to fit various access patterns when using DynamoDB as a data source.

Adding a Lambda function as a data source for your AWS AppSync API

The new @function directive in the GraphQL transform library provides an easy mechanism to call a Lambda function from a field in your AppSync API. To connect a Lambda data source, add the @function directive to a field in your annotated GraphQL schema that’s managed by the Amplify CLI. You can also create and deploy the Lambda functions by using the Amplify CLI.

Let’s look at how you can use this feature.

What are we building?

In this blog post, we will create a React JavaScript application that uses a Lambda function as a data source for a GraphQL API. The Lambda function writes to storage, which in this case is Amazon DynamoDB. In addition, we will illustrate how you can easily grant create/read/update/delete permissions for interacting with AWS resources such as DynamoDB from a Lambda function.

Setting up the project

Pre-requisites

Download, install and configure the Amplify CLI.

$ npm install -g @aws-amplify/cli 
$ amplify configure

Next, create your project if you don’t already have one. We’re creating a React application here, but you can choose to create a project with any other Amplify-supported framework such as Angular, Vue or Ionic.

$ npx create-react-app my-project

Next, change into the project directory, initialize Amplify, and install the Amplify library:

$ cd my-project
$ amplify init
$ npm i aws-amplify

The ‘amplify init’ command initializes the project, sets up deployment resources in the cloud, and makes your project ready for Amplify.

Adding storage to your project

Next, we will setup the backend to add Storage using Amazon DynamoDB for your React JavaScript application.

$ amplify add storage
? Please select from one of the below mentioned services NoSQL Database
Welcome to the NoSQL DynamoDB database wizard
This wizard asks you a series of questions to help determine how to set up your NoSQL database table.

? Please provide a friendly name for your resource that will be used to label this category in the project: teststorage
? Please provide table name: teststorage
You can now add columns to the table.
? What would you like to name this column: id
? Please choose the data type: number
? Would you like to add another column? Yes
? What would you like to name this column: email
? Please choose the data type: string
? Would you like to add another column? Yes
? What would you like to name this column: createdAt
? Please choose the data type: string
? Would you like to add another column? No
Before you create the database, you must specify how items in your table are uniquely organized. You do this by specifying a primary key. The primary key uniquely identifies each item in the table so that no two items can have the same key.
This can be an individual column, or a combination that includes a primary key and a sort key.
To learn more about primary keys, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey
? Please choose partition key for the table: id
? Do you want to add a sort key to your table? No
You can optionally add global secondary indexes for this table. These are useful when you run queries defined in a different column than the primary key.
To learn more about indexes, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.SecondaryIndexes
? Do you want to add global secondary indexes to your table? No
Successfully added resource teststorage locally

Adding a function to your project

Next, we will add a Lambda function by using the Amplify CLI. We will also grant permissions for the Lambda function to interact with the DynamoDB table that we created in the previous step.

$ amplify add function
Using service: Lambda, provided by: awscloudformation
? Provide a friendly name for your resource to be used as a label for this category in the project: addEntry
? Provide the AWS Lambda function name: addEntry
? Choose the function template that you want to use: Hello world function
? Do you want to access other resources created in this project from your Lambda function? Yes
? Select the category storage
? Select the resources for storage category teststorage
? Select the operations you want to permit for teststorage create, read, update, delete

You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
var storageTeststorageArn = process.env.STORAGE_TESTSTORAGE_ARN

? Do you want to edit the local lambda function now? Yes 

This will open the Hello world function template file ‘index.js’ in the editor you selected during the ‘amplify init’ step.

Auto populating environment variables for your Lambda function

The Amplify CLI adds the environment variables representing the AWS resources that the Lambda function interacts with as comments at the top of your index.js file for ease of reference. In this case, the AWS resource is DynamoDB. We want the Lambda function to add an entry to the DynamoDB table with the parameters we pass to the GraphQL API. Add the following code to the Lambda function, which uses the environment variables representing the DynamoDB table and region:

/* Amplify Params - DO NOT EDIT
You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
var storageTeststorageArn = process.env.STORAGE_TESTSTORAGE_ARN

Amplify Params - DO NOT EDIT */

var AWS = require('aws-sdk');
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
AWS.config.update({region: region});
var ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
var ddb_table_name = storageTeststorageName
var ddb_primary_key = 'id';

function write(params, context){
    ddb.putItem(params, function(err, data) {
    if (err) {
      console.log("Error", err);
    } else {
      console.log("Success", data);
    }
  });
}
 

exports.handler = function (event, context) { //eslint-disable-line
  
  var params = {
    TableName: ddb_table_name,
    Item: AWS.DynamoDB.Converter.marshall(event.arguments)
  };
  
  console.log('len: ' + Object.keys(event).length)
  if (Object.keys(event).length > 0) {
    write(params, context);
  } 
}; 

After you replace the function, jump back to the command line and press Enter to continue.

Next, run the ‘amplify push’ command to deploy your changes to the AWS cloud.

$ amplify push

Adding and updating the Lambda execution IAM role for Amplify managed resources

When you run the ‘amplify push’ command, the IAM execution role policies associated with the permissions you granted earlier are updated automatically to allow the Lambda function to interact with DynamoDB.

Setting up the API

After completing the function setup, the next step is to add a GraphQL API to your project:

$ amplify add api
? Please select from one of the below mentioned services GraphQL
? Provide API name: myproject
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes

This will open the schema.graphql file in the editor you selected during the ‘amplify init’ step.

Replace the annotated schema template located in your <project-root>/amplify/backend/api/<api-name>/schema.graphql file with the following code:

type Customer @model {
  id: ID!
  name: String!
  createdAt: String
}

type Mutation {
  addEntry(id: Int, email: String, createdAt: String): String @function(name: "addEntry-${env}")
}

Check if the updates to your schema are compiled successfully by running the following command:

$ amplify api gql-compile

Now that your API is configured, run the amplify push command to deploy your changes to create the corresponding AWS backend resources.

When you’re prompted about code generation for your API, choose Yes. You can accept all default options. This generates queries, mutations, subscriptions, and boilerplate code for the Amplify libraries to consume. For more information, see Codegen in the Amplify CLI docs.
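For orientation, the generated mutations file typically contains a GraphQL document like the following. This is a sketch (shown as a plain const rather than an ES module export); the operation and variable names are assumed from the schema above, and the real generated output may differ slightly:

```javascript
// Sketch of what codegen emits in src/graphql/mutations.js for the addEntry field
const addEntry = /* GraphQL */ `
  mutation AddEntry($id: Int, $email: String, $createdAt: String) {
    addEntry(id: $id, email: $email, createdAt: $createdAt)
  }
`;
```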

Accessing the function from your project

Now that your function and API are configured, you can access them through the API class, which is part of the Amplify JavaScript Library.

Open App.js and add the following import and call to Amplify API as shown below:

import awsconfig from './aws-exports';
import { API, graphqlOperation } from "aws-amplify";
import { addEntry }  from './graphql/mutations';
API.configure(awsconfig);

const entry = { id: 1, email: "[email protected]", createdAt: "2019-5-29" }
const data = await API.graphql(graphqlOperation(addEntry, entry))
console.log(data)

Running the app

Now that you have your application code complete, run the application and verify that the API call outputs “Success”.

Setting Amazon DynamoDB custom indexes in your GraphQL schemas

When building an application on top of DynamoDB, it helps to first think about access patterns. The new @key directive, which is a part of the GraphQL transformer in the Amplify CLI, makes it simple to configure complex key structures in DynamoDB that can fit your access patterns.

Let’s say you are using DynamoDB as the backend for your GraphQL API. The initial GraphQL schema representing the @model types Customer and Item is shown below:

type Customer @model {
  email: String!
  username: String!
}

type Item @model {
    orderId: ID!
    status: Status!
    createdAt: AWSDateTime!
    name: String!
}

enum Status {
    DELIVERED
    IN_TRANSIT
    PENDING
    UNKNOWN
}

Access Patterns

For example, let’s say this application needs to facilitate the following access patterns:

  • Get customers by email – email is the primary key.
  • Get Items by status and by createdAt – orderId is the primary key.

Let’s walk through how you would accomplish these use cases and call the APIs for these queries in your React JavaScript application.

Assumption: You completed the pre-requisites and created your React JavaScript application as shown in section 1.

Create an API

First, we will create a GraphQL API using the ‘amplify add api’ command:

$ amplify add api
? Please select from one of the below mentioned services GraphQL
? Provide API name: myproject
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes
? Press enter to continue

This will open the schema.graphql file under <myproject>/amplify/backend/api/myproject/schema.graphql

Modifying the schema.graphql file

Let’s dive in to the details with respect to the new @key directive.

Query by primary key

Add the following Customer @model type to your schema.graphql

type Customer @model @key(fields: ["email"]) {
    email: String!
    username: String
}

For Customer @model type, a @key without a name specifies the key for the DynamoDB table’s primary index. Here the hash key for the table’s primary index is email. You can only provide one @key without a name per @model type.
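Assuming the default codegen naming (a sketch, not the exact generated output), the corresponding query in queries.js is keyed on email:

```javascript
// Sketch of the auto-generated "get customer by email" query (names assumed)
const getCustomer = /* GraphQL */ `
  query GetCustomer($email: String!) {
    getCustomer(email: $email) {
      email
      username
    }
  }
`;
```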

Query by composite keys (one or more fields are sort key)

type Item @model
    @key(fields: ["orderId", "status", "createdAt"])
    @key(name: "ByStatusAndCreatedAt", fields: ["status", "createdAt"], queryField: "itemsByStatusAndCreatedAt")
{
    orderId: ID!
    status: Status!
    createdAt: AWSDateTime!
    name: String!
}

enum Status {
    DELIVERED
    IN_TRANSIT
    PENDING
    UNKNOWN
}

Let’s break down the above Item @model type.

DynamoDB lets you query by at most two attributes in a single index (a hash key and a sort key). We added three fields to our first key directive, @key(fields: ["orderId", "status", "createdAt"]). The first field, orderId, will be the hash key as expected, but the sort key will be a new composite key named status#createdAt, made up of the status and createdAt fields. This enables us to run queries using more than two attributes at a time.
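
A minimal sketch of this composite-key behavior (illustrative only; the actual resolver mapping templates are generated by the Amplify CLI, and the item values here are invented):

```javascript
// A composite sort key joins the non-hash @key fields with '#'
function compositeSortKey(fields, item) {
  return fields.map(f => item[f]).join('#');
}

const item = { orderId: 'o-1', status: 'PENDING', createdAt: '2019-05-29T00:00:00Z' };
compositeSortKey(['status', 'createdAt'], item); // 'PENDING#2019-05-29T00:00:00Z'
```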

Run the ‘amplify push’ command to deploy your changes to the AWS cloud. Because of the @key directives, this creates the DynamoDB tables for Customer and Item with the primary indexes and sort keys, and generates resolvers that inject composite key values during queries and mutations.

$ amplify push
Current Environment: dev
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2

The file <myproject>/src/graphql/queries.js will contain the auto-generated queries for our intended access patterns “Get customers by email” and “Get Items by status and by createdAt”.

Accessing the API from your application

Now that your API is configured, you can access it through the API class, which is part of the Amplify JavaScript Library. We will call the query for “Get Items by status and by createdAt”.

Open App.js and add the following import and call to Amplify API as shown below:

import awsconfig from './aws-exports';
import { API, graphqlOperation } from "aws-amplify";
import { itemsByStatusAndCreatedAt }  from './graphql/queries';
API.configure(awsconfig);

const entry = {status:'PENDING', createdAt: {beginsWith:"2019"}};
const data = await API.graphql(graphqlOperation(itemsByStatusAndCreatedAt, entry))
console.log(data)

To learn more, refer to the documentation here.

Feedback

We hope you like these new features! As always, let us know how we’re doing, and submit any requests in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.

 


Using multiple authorization types with AWS AppSync GraphQL APIs

Written by Ionut Trestian, Min Bi, Vasuki Balasubramaniam, Karthikeyan, Manuel Iglesias, BG Yathi Raj, and Nader Dabit

Today, AWS announced that AWS AppSync now supports configuring more than one authorization type for GraphQL APIs. You can now configure a single GraphQL API to deliver private and public data. Private data requires authenticated access using authorization mechanisms such as IAM, Amazon Cognito User Pools, and OIDC. Public data does not require authenticated access and is delivered through authorization mechanisms such as API Keys.

You can also configure a single GraphQL API to deliver private data using more than one authorization type. For example, you can configure your GraphQL API to authorize some schema fields using OIDC, while other schema fields through Amazon Cognito User Pools or IAM.

AWS AppSync is a managed GraphQL service that simplifies application development. It allows you to create a flexible API to securely access, manipulate, and combine data from one or more data sources.

With today’s launch, you can configure additional authorization types while retaining the authorization settings of your existing GraphQL APIs. To ensure that there are no behavioral changes in your existing GraphQL APIs, your current authorization settings are set as the default. You can add additional authorization types using the AWS AppSync console, AWS CLI, or AWS CloudFormation templates.

To add more authorization types using the AWS AppSync console, launch the console, choose your GraphQL API, then choose Settings and scroll to the Authorization settings. The snapshot below shows a GraphQL API configured to use API Key as the default authorization type. It also has two Amazon Cognito user pools and AWS IAM as additional authorization types.

  • To add more authorization types using the AWS CLI, see the create-graphql-api section of the AWS CLI Command Reference.
  • To add more authorization types through AWS CloudFormation, see AWS::AppSync::GraphQLApi in the AWS CloudFormation User Guide.

After configuring the authorization types for your GraphQL API, you can use schema directives to set the authorization types for one or more fields in your GraphQL schema. AWS AppSync now supports the following schema directives for authorization:

  • @aws_api_key—A field uses API_KEY for authorization.
  • @aws_iam—A field uses AWS_IAM for authorization.
  • @aws_oidc—A field uses OPENID_CONNECT for authorization.
  • @aws_cognito_user_pools—A field uses AMAZON_COGNITO_USER_POOLS for authorization.

The following code example shows using schema directives for authorization:

schema {
    query: Query
    mutation: Mutation
}

type Query {
    getPost(id: ID): Post
    getAllPosts: [Post]
    @aws_api_key
}

type Mutation {
    addPost(
        id: ID!
        author: String!
        title: String!
        content: String!
        url: String!
    ): Post!
}

type Post @aws_api_key @aws_iam {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int!
    downs: Int!
    version: Int!
}

Assume that AWS_IAM is the default authorization type for this GraphQL schema. This means that fields without directives are protected using AWS_IAM. An example is the getPost() field in Query.

Next, look at the getAllPosts() field in Query. This field is protected using @aws_api_key, which means that you can access this field using API keys. Directives work at the field level. This means that you must give API_KEY access to the Post type as well. This can be done in two ways:

  • Mark each field in the Post type with a directive.
  • Mark the Post type itself with the @aws_api_key directive.

For this example, I chose the latter option.

Now, to restrict access to fields in the Post type, you can configure directives for individual fields, as shown below. You can add a field called restrictedContent to Post and restrict access to it by using the @aws_iam directive. With this setup, AWS_IAM authenticated requests can access restrictedContent, while requests authenticated with API keys do not have access.

type Post @aws_api_key @aws_iam {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int!
    downs: Int!
    version: Int!
    restrictedContent: String!
    @aws_iam
}
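
The directive rules described above can be modeled as a toy function (this is an illustration of the documented behavior, not AppSync internals): a field's allowed authorization modes are its own directives when present, otherwise it inherits the modes declared on its type.

```javascript
// Toy model of field-level auth resolution
function allowedModes(typeModes, fieldModes) {
  return fieldModes.length > 0 ? fieldModes : typeModes;
}

const postModes = ['API_KEY', 'AWS_IAM'];  // from: type Post @aws_api_key @aws_iam
allowedModes(postModes, []);               // title inherits ['API_KEY', 'AWS_IAM']
allowedModes(postModes, ['AWS_IAM']);      // restrictedContent @aws_iam → ['AWS_IAM']
```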

Amplify CLI

Amplify CLI version 1.6.8 supports adding AWS AppSync APIs configured with multiple authorization types. To add an API with mixed authorization mode, you can run the following command:

$ amplify add codegen --apiId <API_ID>

✔ Getting API details
Successfully added API to your Amplify project
? Enter the file name pattern of graphql queries, mutations and subscriptions graphql/**/*.graphql
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2
? Enter the file name for the generated code API.swift
? Do you want to generate code for your newly created GraphQL API Yes
✔ Downloaded the schema
✔ Generated GraphQL operations successfully and saved at graphql
✔ Code generated successfully and saved in file API.swift

Android & iOS client support

AWS also updated the Android and iOS clients to support multiple authorization types. You can enable multiple clients by setting the useClientDatabasePrefix flag to true. The awsconfiguration.json file is generated by the AWS AppSync console, and the Amplify CLI adds a ClientDatabasePrefix entry in the AppSync section. The prefix is used to separate the caches used for operations such as query, mutation, and subscription.

Important: For existing clients, the useClientDatabasePrefix flag defaults to false. When you use multiple clients, setting useClientDatabasePrefix to true changes the location of the caches used by the client, so you must also migrate any cached data that you want to keep.

The following code examples highlight the new values in the awsconfiguration.json and the client code configurations.

awsconfiguration.json

The friendly_name illustrated here is created from a prompt from the Amplify CLI. There are four clients in this configuration that connect to the same API, except that they use different AuthMode and ClientDatabasePrefix settings.

{
  "Version": "1.0",
  "AppSync": {
    "Default": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "API_KEY",
      "ApiKey": "da2-xyz",
      "ClientDatabasePrefix": "friendly_name_API_KEY"
    },
    "friendly_name_AWS_IAM": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "AWS_IAM",
      "ClientDatabasePrefix": "friendly_name_AWS_IAM"
    },
    "friendly_name_AMAZON_COGNITO_USER_POOLS": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "AMAZON_COGNITO_USER_POOLS",
      "ClientDatabasePrefix": "friendly_name_AMAZON_COGNITO_USER_POOLS"
    },
    "friendly_name_OPENID_CONNECT": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "OPENID_CONNECT",
      "ClientDatabasePrefix": "friendly_name_OPENID_CONNECT"
    }
  }
}

Android—Java

The useClientDatabasePrefix is added on the client builder, which signals to the builder that the ClientDatabasePrefix value should be used from the AWSConfiguration object (awsconfiguration.json).

AWSAppSyncClient client = AWSAppSyncClient.builder()
   .context(getApplicationContext())
   .awsConfiguration(new AWSConfiguration(getApplicationContext()))
   .useClientDatabasePrefix(true)
   .build();

iOS—Swift

The useClientDatabasePrefix is added to the AWSAppSyncCacheConfiguration, which reads the ClientDatabasePrefix value from the AWSAppSyncServiceConfig object (awsconfiguration.json).

let serviceConfig = try AWSAppSyncServiceConfig()
let cacheConfig = AWSAppSyncCacheConfiguration(useClientDatabasePrefix: true,
                                            appSyncServiceConfig: serviceConfig)
let clientConfig = AWSAppSyncClientConfiguration(appSyncServiceConfig: serviceConfig,
                                                   cacheConfiguration: cacheConfig)

let client = AWSAppSyncClient(appSyncConfig: clientConfig)

Public/private use case example

Here’s an example of how the newly introduced capabilities can be used in a client application.

Android—Java

The following code example creates a client factory to retrieve the client based on the need to operate in public (API_KEY) or private (AWS_IAM) authorization mode.

// AppSyncClientMode.java
public enum AppSyncClientMode {
    PUBLIC,
    PRIVATE
}

// ClientFactory.java
public class ClientFactory {

    private static final Map<AppSyncClientMode, AWSAppSyncClient> CLIENTS = new HashMap<>();

    public static AWSAppSyncClient getAppSyncClient(AppSyncClientMode choice) {
        return CLIENTS.get(choice);
    }

    public static void initClients(final Context context) {
        AWSConfiguration awsConfigPublic = new AWSConfiguration(context);
        CLIENTS.put(AppSyncClientMode.PUBLIC, AWSAppSyncClient.builder()
                .context(context)
                .awsConfiguration(awsConfigPublic)
                .useClientDatabasePrefix(true)
                .build());

        AWSConfiguration awsConfigPrivate = new AWSConfiguration(context);
        awsConfigPrivate.setConfiguration("friendly_name_AWS_IAM");
        CLIENTS.put(AppSyncClientMode.PRIVATE, AWSAppSyncClient.builder()
                .context(context)
                .awsConfiguration(awsConfigPrivate)
                .useClientDatabasePrefix(true)
                .credentialsProvider(AWSMobileClient.getInstance())
                .build());
    }
}

This is what the usage would look like.

ClientFactory.getAppSyncClient(AppSyncClientMode.PRIVATE).query(fooQuery).enqueue(...);

iOS—Swift

The following code example creates a client factory to retrieve the client based on the need to operate in public (API_KEY) or private (AWS_IAM) authorization mode.

public enum AppSyncClientMode {
    case `public`
    case `private`
}

public class ClientFactory {
    static var clients: [AppSyncClientMode:AWSAppSyncClient] = [:]

    class func getAppSyncClient(mode: AppSyncClientMode) -> AWSAppSyncClient? {
        return clients[mode]
    }

    class func initClients() throws {
        let serviceConfigAPIKey = try AWSAppSyncServiceConfig()
        let cacheConfigAPIKey = try AWSAppSyncCacheConfiguration(useClientDatabasePrefix: true, appSyncServiceConfig: serviceConfigAPIKey)
        let clientConfigAPIKey = try AWSAppSyncClientConfiguration(appSyncServiceConfig: serviceConfigAPIKey, cacheConfiguration: cacheConfigAPIKey)
        clients[AppSyncClientMode.public] = try AWSAppSyncClient(appSyncConfig: clientConfigAPIKey)

        let serviceConfigIAM = try AWSAppSyncServiceConfig(forKey: "friendly_name_AWS_IAM")
        let cacheConfigIAM = try AWSAppSyncCacheConfiguration(useClientDatabasePrefix: true, appSyncServiceConfig: serviceConfigIAM)
        let clientConfigIAM = try AWSAppSyncClientConfiguration(appSyncServiceConfig: serviceConfigIAM, cacheConfiguration: cacheConfigIAM)
        clients[AppSyncClientMode.private] = try AWSAppSyncClient(appSyncConfig: clientConfigIAM)
    }
}

Conclusion

In this post, we showed how you can use the new multiple authorization type setting in AWS AppSync to separate public and private data authorization in your GraphQL API. Existing GraphQL APIs keep their current authorization settings as the default, and you can add more authorization types using the AWS AppSync console, AWS CLI, or AWS CloudFormation templates.
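As a sketch of what that change can look like from code, the snippet below uses boto3's AppSync client to keep AWS_IAM as the default (private) mode and add API_KEY as an additional (public) mode. The API ID and name are placeholders, and the exact parameters your API needs may differ; treat this as a starting point, not a definitive call.

```python
def build_update_params(api_id, name):
    """Build UpdateGraphqlApi parameters that keep AWS_IAM as the default
    (private) mode and add API_KEY as an additional (public) mode."""
    return {
        "apiId": api_id,
        "name": name,
        "authenticationType": "AWS_IAM",
        "additionalAuthenticationProviders": [
            {"authenticationType": "API_KEY"},
        ],
    }

def update_api(params):
    # boto3 imported here so the module loads even without boto3 installed
    import boto3
    client = boto3.client("appsync")
    return client.update_graphql_api(**params)

if __name__ == "__main__":
    params = build_update_params("abc123", "MyApi")  # placeholder values
    # update_api(params)  # uncomment to apply against your own account
    print(params["additionalAuthenticationProviders"])
```

The call is left commented out because UpdateGraphqlApi replaces the API's authorization configuration; review the parameters against your existing settings before running it.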

from AWS Mobile Blog

Getting more visibility into GraphQL performance with AWS AppSync logs


Written by Shankar Raju, SDE at AWS & Nader Dabit, Sr. Developer Advocate at AWS.

Today, we are happy to announce that AWS AppSync now enables you to better understand the performance of your GraphQL requests and usage characteristics of your schema fields. You can easily identify resolvers with large latencies that may be the root cause of a performance issue. You can also identify the most and least frequently used fields in your schema and assess the impact of removing GraphQL fields. Support for these capabilities has been one of our customers' top feature requests.

AWS AppSync is a managed GraphQL service that simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources. AWS AppSync now emits log events in a fully structured JSON format. This enables seamless integration with log analytics services such as Amazon CloudWatch Logs Insights and Amazon Elasticsearch Service (Amazon ES), and other log analytics solutions.

We have also added new fields to log events to increase your visibility into the performance and health of your GraphQL operations:

  • To search and analyze logs across log types, GraphQL requests, and multiple GraphQL APIs, we added new log fields (logType, requestId, and graphQLAPIId) to every log event that AWS AppSync emits.
  • To quickly identify errors and performance bottlenecks, we added new log fields to the existing request-level logs. These log fields contain information about the HTTP response status code (HTTPStatusResponseCode) and latency of a GraphQL request (latency).
  • To uniquely identify and run queries against any field in your GraphQL schema, we added new log fields to the existing field-level logs. These log fields contain information about the parent (parentType) and name (fieldName) of a GraphQL field.
  • To gain visibility into the time taken to resolve a GraphQL field, we also included the resolver ARN (resolverARN) in the tracing information of GraphQL fields in the field-level logs.
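Because every event is now structured JSON, these fields are directly machine-readable. The sketch below parses an invented RequestSummary event (the field names follow the list above; the values are made up for illustration):

```python
import json

# An invented RequestSummary event; field names follow the list above,
# the values are made up for illustration.
raw = """{
  "logType": "RequestSummary",
  "requestId": "11111111-2222-3333-4444-555555555555",
  "graphQLAPIId": "abcdefghijklmnopqrstuvwxyz",
  "HTTPStatusResponseCode": 200,
  "latency": 45000000
}"""

event = json.loads(raw)

# logType, requestId, and graphQLAPIId appear on every event, so a single
# requestId can be correlated across all log types for one request.
print(event["logType"], event["requestId"], event["graphQLAPIId"])

# Request-level events additionally carry the HTTP status code and latency.
if event["logType"] == "RequestSummary":
    print("status:", event["HTTPStatusResponseCode"], "latency:", event["latency"])
```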

In this post, we show how you can get more visibility into the performance and health of your GraphQL operations using CloudWatch Logs Insights and Amazon ES. As a prerequisite, you must first enable field-level logging for your GraphQL API so that AWS AppSync can emit logs to CloudWatch Logs.

Analyzing your logs with CloudWatch Logs Insights

You can analyze your AWS AppSync logs with CloudWatch Logs Insights to identify performance bottlenecks and the root cause of operational issues. For example, you can find the resolvers with the maximum latency, the most (or least) frequently invoked resolvers, and the resolvers with the most errors.

There is no setup required to get started with CloudWatch Logs Insights. This is because AWS AppSync automatically emits logs into CloudWatch Logs when you enable field-level logging on your GraphQL API.

The following are examples of queries that you can run to get actionable insights into the performance and health of your GraphQL operations. For your convenience, we added these examples as sample queries in the CloudWatch Logs Insights console.

In the CloudWatch console, choose Logs, Insights, select the AWS AppSync log group for your GraphQL API, and then choose Sample queries, AWS AppSync queries.

Find top 10 GraphQL requests with maximum latency

fields requestId, latency
| filter logType = "RequestSummary"
| sort latency desc
| limit 10

Find top 10 resolvers with maximum latency

fields resolverArn, duration
| filter logType = "Tracing"
| sort duration desc
| limit 10

Find the most frequently invoked resolvers

fields ispresent(resolverArn) as isRes
| filter isRes and logType = "Tracing"
| stats count() as invocationCount by resolverArn
| sort invocationCount desc
| limit 10

Find resolvers with most errors in mapping templates

fields ispresent(resolverArn) as isRes
| filter isRes and (logType = "RequestMapping" or logType = "ResponseMapping") and fieldInError
| stats count() as errorCount by resolverArn, logType
| sort errorCount desc
| limit 10
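These queries can also be run outside the console with the CloudWatch Logs StartQuery and GetQueryResults APIs. A sketch with boto3 (the log group name is a placeholder; AppSync log groups are typically named /aws/appsync/apis/&lt;graphQLAPIId&gt;):

```python
import time

# Top 10 slowest GraphQL requests, as in the first sample query above.
QUERY = """fields requestId, latency
| filter logType = "RequestSummary"
| sort latency desc
| limit 10"""

def run_insights_query(log_group, query, start, end):
    """Start a CloudWatch Logs Insights query and poll until it finishes."""
    import boto3  # imported here so the module loads without boto3 installed
    logs = boto3.client("logs")
    query_id = logs.start_query(
        logGroupName=log_group, startTime=start, endTime=end, queryString=query
    )["queryId"]
    while True:
        resp = logs.get_query_results(queryId=query_id)
        if resp["status"] in ("Complete", "Failed", "Cancelled"):
            return resp["results"]
        time.sleep(1)

if __name__ == "__main__":
    now = int(time.time())
    # Placeholder log group; substitute your API's graphQLAPIId.
    results = run_insights_query("/aws/appsync/apis/abcdefgh", QUERY, now - 3600, now)
    for row in results:
        print(row)
```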

The results of CloudWatch Logs Insights queries can be exported to CloudWatch dashboards. We added a CloudWatch dashboard template for AWS AppSync logs in the AWS Samples GitHub repository. You can import this template into CloudWatch dashboards to have continuous visibility into your GraphQL operations.

Analyzing your logs with Amazon ES

You can search, analyze, and visualize your AWS AppSync logs with Amazon ES to identify performance bottlenecks and the root cause of operational issues. Not only can you identify resolvers with the maximum latency and errors, but you can also use Kibana to create dashboards with powerful visualizations. Kibana is an open-source data visualization and exploration tool available in Amazon ES.

To get started with Amazon ES:

  1. Create an Amazon ES cluster, if you don’t have one already.
  2. In the CloudWatch Logs console, select the log group for your GraphQL API.
  3. Choose Actions, Stream to Amazon Elasticsearch Service and select the Amazon ES cluster to which to stream your logs. You can also use a log filter pattern to stream a specific set of logs. The following example is the log filter pattern for streaming log events containing information about the request summary, tracing, and GraphQL execution summary for AWS AppSync logs.
{ ($.logType = "Tracing") || ($.logType = "RequestSummary") || ($.logType = "ExecutionSummary") }
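As a sanity check of what that filter pattern keeps, here is the same three-type selection sketched in Python over a few invented log lines:

```python
import json

# The three log types the subscription filter above streams to Amazon ES.
STREAMED_TYPES = {"Tracing", "RequestSummary", "ExecutionSummary"}

def should_stream(raw_event):
    """Mirror the CloudWatch Logs filter pattern: keep an event only if its
    logType is one of the three streamed types."""
    return json.loads(raw_event).get("logType") in STREAMED_TYPES

# Invented sample events for illustration.
lines = [
    '{"logType": "RequestSummary", "latency": 120}',
    '{"logType": "RequestMapping", "fieldName": "getPost"}',
    '{"logType": "Tracing", "duration": 80}',
]
kept = [line for line in lines if should_stream(line)]
print(len(kept))  # the RequestMapping event is filtered out
```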

You can create Kibana dashboards to help you identify performance bottlenecks and enable you to continuously monitor your GraphQL operations. For example, to debug a performance issue, start by visualizing the P90 latencies of your GraphQL requests and then drill into individual resolver latencies.

To build a Kibana dashboard containing these visualizations, use the following steps:

  1. Launch Kibana and choose Dashboard, Create new dashboard.
  2. Choose Add. For Visualization type, choose Line.
  3. For the filter pattern to search Elasticsearch indexes, use cwl*. Elasticsearch indexes logs streamed from CloudWatch Logs (including AWS AppSync logs) with a prefix of “cwl-”. To differentiate AWS AppSync logs from other CloudWatch logs sent to Amazon ES, we recommend adding an additional filter expression of graphQLAPIID.keyword=<AWS AppSync GraphQL API ID> to your search.
  4. To get GraphQL request data from AWS AppSync logs, choose Add Filter and use the filter expression logType.keyword=RequestSummary.
  5. Choose Metrics, Y-Axis. For Aggregation, choose Percentile; for Field, choose latency, and for Percents, enter a value of 90. This enables you to view GraphQL request latencies on the Y axis.
  6. Choose Buckets, X-Axis. For Aggregation, choose Date Histogram; for Field, choose @timestamp; and for Interval, choose Minute. This enables you to view GraphQL request latencies aggregated in 1-minute intervals. You can change the aggregation interval to view latencies aggregated at a coarser- or finer-grained time interval to match your data density.
  7. Save your widget and add it to the Kibana dashboard.

  1. To build a widget that visualizes the P90 latency of each resolver, repeat steps 1, 2, 3, and 4 earlier. For step 4, use a filter expression of logType.keyword=Tracing to get resolver latencies from AWS AppSync Logs.
  2. Repeat step 5 using duration as the Field value and then repeat step 6.
  3. Choose Add sub-buckets, Split Series. For Sub Aggregation, use Terms and for Field, choose resolverArn.keyword. This enables you to visualize the latencies of individual resolvers.
  4. Save your widget and add it to the Kibana dashboard.

The finished Kibana dashboard contains widgets for both the P90 request latencies and the individual resolver latencies.

Availability

The new logging capabilities are available in the following AWS Regions, and you can start analyzing your logs today:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • Europe (Ireland)
  • Europe (Frankfurt)
  • Europe (London)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Seoul)
  • Asia Pacific (Sydney)
  • Asia Pacific (Singapore)

Log events emitted on May 8, 2019, or later use the new logging format. To analyze GraphQL requests before May 8, 2019, you can migrate older logs to the new format using a script available in the GitHub sample.


Use the Amplify Console with incoming webhooks to trigger deployments


Written by Nikhil Swaminathan, Sr. Product Manager (Tech) at AWS.

The Amplify Console recently launched support for incoming webhooks. This feature enables you to use third-party applications such as Contentful and Zapier to trigger deployments in the Amplify Console without requiring a code commit.

You can use headless CMS tools such as Contentful with the Amplify Console incoming webhook feature to trigger a deployment every time content is updated—for example, when a blog author publishes a new post. Modern CMSs are headless in nature, which gives developers the freedom to develop with any technology because the content itself doesn’t have a presentation layer. Content creators get the added benefit of publishing a single instance of the content to both web and mobile devices.

In this blog post, we set up Amplify Console to deploy updates every time new content is published.

1. Create a Contentful account using the Contentful CLI, and follow the steps in the getting started guide. The CLI helps you create a Contentful account, a Contentful project (called a space) with a sample blog content model, and a starter repository that’s downloaded to your local machine.

2. After the CLI creates a Contentful space, log in to your Contentful space at the Contentful website and choose ‘Settings > API Keys’.

3. The API keys were generated when you ran the CLI. Copy the Space ID and the Content Delivery API access token. You’ll need these to trigger content deployments.

4. Push the code to a Git repo of your choice (Amplify Console supports GitHub, BitBucket, GitLab, and CodeCommit).

Log in to the Amplify Console, connect your repo, and pick a branch. On the Build Settings page, enter the CONTENTFUL_DELIVERY_TOKEN and the CONTENTFUL_SPACE_ID into the environment variables section. These tokens are used by your app during the build to authenticate with the Contentful service. Review the changes, and choose Save and deploy. Your app builds and deploys to an amplifyapp.com URL. It should look like this:

5. Create an incoming webhook to publish content updates. Choose App Settings > Build Settings, and then choose Create webhook. This webhook enables you to trigger a build in the Amplify Console on every POST to the HTTP endpoint. After you create the webhook, copy the URL (it looks like https://webhooks.amplify…).

6. Go back to the Contentful dashboard, and choose Settings > Webhooks. Then choose Add Webhook. Paste the webhook URL you copied from the Amplify Console into the URL section and update the Content Type to application/json. Choose Save.


7. We’re now ready to trigger a new build through a content update! Go to the Content tab on Contentful and add a new entry with the following fields—Name: Deploying to Amplify Console and Content Type: Blog Post. Enter the other required fields, and choose Publish.

8. The Amplify Console kicks off a new build that includes the newest post.

You can also use the incoming webhook feature to trigger builds from post-commit Git hooks or daily build schedulers. We hope you like this new feature – learn more about the Amplify Console at https://console.amplify.aws.
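As a sketch of the post-commit idea: a minimal script that POSTs an empty JSON body to the webhook, which you could call from a Git post-commit hook or a scheduler. The URL below is a placeholder for the one you copied in step 5, and the empty-body assumption matches the webhook's build-on-every-POST behavior.

```python
import urllib.request

def build_request(webhook_url):
    """Build the POST request for an Amplify incoming webhook: an empty
    JSON body with a JSON content type."""
    return urllib.request.Request(
        webhook_url,
        data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def trigger_build(webhook_url):
    """Fire the webhook; returns the HTTP status code."""
    with urllib.request.urlopen(build_request(webhook_url)) as resp:
        return resp.status

if __name__ == "__main__":
    # Placeholder URL; substitute the webhook URL from your app's settings.
    print(trigger_build("https://webhooks.amplify.example/placeholder"))
```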
