
Visualizing big data with AWS AppSync, Amazon Athena, and AWS Amplify

This article was written by Brice Pelle, Principal Technical Account Manager, AWS

 

Organizations use big data and analytics to extract actionable information from untapped datasets. It can be difficult for you to build an application with access to this trove of data. You want to build great applications quickly and need access to tools that allow you to interact with the data easily.

Presenting data is just as challenging. Tables of numbers and keywords can fail to convey the intended message and make it difficult to communicate insightful observations. Charts, graphs, and images tend to be better at conveying complex ideas and patterns.

This post demonstrates how to use Amazon Athena, AWS AppSync, and AWS Amplify to build an application that interacts with big data. The application is built using React, the AWS Amplify JavaScript library, and the D3.js JavaScript library to render custom visualizations.

The application code can be found in this GitHub repository. It uses Athena to query data hosted in a public Amazon S3 bucket by the Registry of Open Data on AWS. Specifically, it uses the High Resolution Population Density Maps + Demographic Estimates by CIESIN and Facebook.

This public dataset provides “population data for a selection of countries, allocated to 1 arcsecond blocks and provided in a combination of CSV and Cloud-optimized GeoTIFF files,” and is hosted in the S3 bucket s3://dataforgood-fb-data.

Architecture overview

The Amplify CLI sets up sign-in/sign-up with Amazon Cognito, stands up a GraphQL API for GraphQL operations, and provisions content storage on S3 (the result bucket).

The Amplify CLI Storage Trigger feature provisions an AWS Lambda function (the announcer function) to respond to events in the result bucket. With the CLI, the announcer Lambda function’s permissions are set to allow GraphQL operations on the GraphQL API.

The Amplify CLI supports defining custom resources associated with the GraphQL API using the CustomResources.json AWS CloudFormation template located in the folder amplify/backend/api/YOUR-API-NAME/stacks/ of an Amplify Project. You can use this capability to define via CloudFormation an HTTP data source and AppSync resolvers to interface with Athena, and a None data source and local resolvers to trigger subscriptions in response to mutations from the announcer Lambda function.

Setting up multi-auth on the GraphQL API

AWS AppSync supports multiple modes of authorization that can be used simultaneously to interact with the API. This application’s GraphQL API is configured with the Amazon Cognito User Pool as its default authorization mode.

Users must authenticate with the User Pool before sending GraphQL operations. Upon sign-in, the user receives a JSON Web Token (JWT) that is attached to requests in an authorization header when sending GraphQL operations.

IAM authorization is another available mode of authorization. The GraphQL API is configured with IAM as an additional authorization mode to recognize and authorize SigV4-signed requests from the announcer Lambda function. The configuration is done using a custom resource backed by a Lambda function. The custom resource is defined in the CloudFormation template with the AppSyncApiId as a property. When deployed, it uses the UpdateGraphqlApi action to add the additional authorization mode to the API:

"MultiAuthGraphQLAPI": {
  "Type": "Custom::MultiAuthGraphQLAPIResource",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": ["MultiAuthGraphQLAPILambda", "Arn"] },
    "AppSyncApiId": { "Ref": "AppSyncApiId" }
  },
  "DependsOn": "MultiAuthGraphQLAPILambda"
}
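The custom resource's backing Lambda function calls the AppSync UpdateGraphqlApi action. The following is a minimal sketch of such a handler using the AWS SDK for JavaScript; the actual function in the repository may differ, and the CloudFormation response signaling (for example, via the cfn-response module) is elided for brevity:

const AWS = require('aws-sdk')
const appsync = new AWS.AppSync()

exports.handler = async event => {
  // Nothing to change when the stack is deleted in this sketch
  if (event.RequestType === 'Delete') return

  const apiId = event.ResourceProperties.AppSyncApiId
  // UpdateGraphqlApi requires the API's current name and default auth type
  const { graphqlApi } = await appsync.getGraphqlApi({ apiId }).promise()

  await appsync
    .updateGraphqlApi({
      apiId,
      name: graphqlApi.name,
      authenticationType: graphqlApi.authenticationType,
      additionalAuthenticationProviders: [{ authenticationType: 'AWS_IAM' }]
    })
    .promise()
}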

The GraphQL schema must specify which types and fields are supported by the authorization modes (with Amazon Cognito User Pool being the default). The schema is configured with the needed authorization directives:

  • @aws_iam to specify that a field or type is IAM authorized.
  • @aws_cognito_user_pools to specify that a field or type is Amazon Cognito User Pool authorized.

The announcer Lambda function needs access to the announceQueryResult mutation and the types included in the response. The AthenaQueryResult type is returned by the startQuery query (called from the app), and by announceQueryResult. The type must support both authorization modes.

type AthenaQueryResult @aws_cognito_user_pools @aws_iam {
    QueryExecutionId: ID!
    file: S3Object
}
type S3Object @aws_iam {
    bucket: String!
    region: String!
    key: String!
}
type Query {
    startQuery(input: QueryInput): AthenaQueryResult
}
type Mutation {
    announceQueryResult(input: AnnounceInput!):
      AthenaQueryResult @aws_iam
}

Setting up a NONE data source (Local Resolver) to enable subscriptions

The announcer Lambda function is triggered in response to S3 events and sends a GraphQL mutation to the GraphQL API. The mutation in turn triggers a subscription and sends the mutation selection set to the subscribed app.

The mutation data does not need to be saved. AWS AppSync only needs to forward the results to the application via the triggered subscription. To enable this, a NONE data source is configured and associated with the local resolver for announceQueryResult. NONE data sources and local resolvers are useful for publishing real-time subscriptions without triggering a data source call to modify or update data.

"DataSourceNone": {
  "Type": "AWS::AppSync::DataSource",
  "Properties": {
    "ApiId": { "Ref": "AppSyncApiId" },
    "Name": "None",
    "Description": "None",
    "Type": "NONE"
  }
},
"AnnounceQueryResultResolver": {
  "Type": "AWS::AppSync::Resolver",
  "Properties": {
    "ApiId": { "Ref": "AppSyncApiId" },
    "DataSourceName": { "Fn::GetAtt": ["DataSourceNone", "Name"] },
    "TypeName": "Mutation",
    "FieldName": "announceQueryResult"
  }
}
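The resolver does not need to reach out to any backend; its mapping templates simply pass the mutation input through as the result. A typical request mapping template for a local resolver looks like the following (a sketch; the repository's exact templates may differ):

{
    "version": "2017-02-28",
    "payload": $util.toJson($context.arguments.input)
}

The response mapping template then returns the forwarded payload:

$util.toJson($context.result)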

In the schema, the onAnnouncement subscription is associated with the mutation.

type Mutation {
    announceQueryResult(input: AnnounceInput!):
      AthenaQueryResult @aws_iam
}
type Subscription {
    onAnnouncement(QueryExecutionId: ID!): 
      AthenaQueryResult
        @aws_subscribe(mutations: ["announceQueryResult"])
}

Setting up Athena as a data source

AWS AppSync supports HTTP data sources and can be configured to interact securely with AWS service endpoints.

To configure Athena as a data source, the CustomResources.json template defines the role that AWS AppSync assumes to interact with the API: AppSyncAthenaRole.

The role is assigned the managed policy AmazonAthenaFullAccess. The policy provides read and write permissions to S3 buckets with names starting with aws-athena-query-results-. The application uses this format to name the S3 bucket in which the Athena query results are stored. The role is also assigned the AmazonS3ReadOnlyAccess policy so that Athena can read from the source data bucket.

The resource DataSourceAthenaAPI defines the data source and specifies IAM as the authorization type along with the service role to be used.

"AppSyncAthenaRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "RoleName": {
      "Fn::Join": [
        "-",
        ["appSyncAthenaRole", { "Ref": "AppSyncApiId" }, { "Ref": "env" }]
      ]
    },
    "ManagedPolicyArns": [
      "arn:aws:iam::aws:policy/AmazonAthenaFullAccess",
      "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
    ],
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": ["appsync.amazonaws.com"] },
          "Action": ["sts:AssumeRole"]
        }
      ]
    }
  }
},
"DataSourceAthenaAPI": {
  "Type": "AWS::AppSync::DataSource",
  "Properties": {
    "ApiId": { "Ref": "AppSyncApiId" },
    "Name": "AthenaAPI",
    "Description": "Athena API",
    "Type": "HTTP",
    "ServiceRoleArn": { "Fn::GetAtt": ["AppSyncAthenaRole", "Arn"] },
    "HttpConfig": {
      "Endpoint": {
        "Fn::Join": [
          ".",
          ["https://athena", { "Ref": "AWS::Region" }, "amazonaws.com/"]
        ]
      },
      "AuthorizationConfig": {
        "AuthorizationType": "AWS_IAM",
        "AwsIamConfig": {
          "SigningRegion": { "Ref": "AWS::Region" },
          "SigningServiceName": "athena"
        }
      }
    }
  }
}
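With the data source in place, the resolver attached to the startQuery query can call Athena's StartQueryExecution API over HTTP. The following request mapping template is a sketch of that call: the headers follow the Athena JSON API convention, and the workgroup name matches the appsync workgroup created later in this post, but the repository's template may differ.

#set($body = {
  "QueryString": $ctx.args.input.QueryString,
  "WorkGroup": "appsync"
})
{
  "version": "2018-05-29",
  "method": "POST",
  "resourcePath": "/",
  "params": {
    "headers": {
      "Content-Type": "application/x-amz-json-1.1",
      "X-Amz-Target": "AmazonAthena.StartQueryExecution"
    },
    "body": $util.toJson($body)
  }
}

A matching response mapping template would parse the Athena response body and return the QueryExecutionId to the client.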

Application Overview

This section walks through how the application works, how to set it up, how the subscription is configured, and how the visualization is rendered.

Walk-through

Here is how the application works:

  1. Users sign in to the app using Amazon Cognito User Pools. The JWT access token returned at sign-in is sent in an authorization header to AWS AppSync with every GraphQL operation.
  2. A user selects a country from the drop-down list and chooses Query. This triggers a GraphQL query. When the app receives the QueryExecutionId in the response, it subscribes to mutations on that ID.
  3. AWS AppSync makes a SigV4-signed request to the Athena API with the specified query.
  4. Athena runs the query against the specified table. The query returns the sum of the population at recorded longitudes for the selected country along with a count of latitudes at each longitude.
    SELECT longitude, count(latitude) as count, sum(population) as tot_pop
      FROM "default"."hrsl"
      WHERE country='${countryCode.trim()}'
      group by longitude
      order by longitude
  5. The results of the query are stored in the result S3 bucket, under the /protected/athena/ prefix. Signed-in app users can access these results using their IAM credentials.
  6. Putting the query result file in the bucket generates an S3 event and triggers the announcer Lambda function.
  7. The announcer Lambda function sends an announceQueryResult mutation with the S3 bucket and object information. (A sketch of this function follows the list.)
  8. The mutation triggers a subscription with the mutation’s selection set.
  9. The client retrieves the result file from the S3 bucket and displays the custom visualization.
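For reference, here is a minimal sketch of what the announcer function from step 7 might look like. It signs the announceQueryResult mutation with the function's IAM credentials and posts it to the GraphQL endpoint. The aws4 package, the APPSYNC_ENDPOINT environment variable, and the way the QueryExecutionId is derived from the object key are assumptions; consult the repository for the actual implementation.

const https = require('https')
const aws4 = require('aws4') // signs requests with SigV4 using the Lambda's credentials

const mutation = /* GraphQL */ `
  mutation AnnounceQueryResult($input: AnnounceInput!) {
    announceQueryResult(input: $input) {
      QueryExecutionId
      file { bucket region key }
    }
  }
`

exports.handler = async event => {
  const record = event.Records[0]
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))

  const input = {
    // Athena names the result file after the query execution ID
    QueryExecutionId: key.split('/').pop().replace('.csv', ''),
    file: { bucket: record.s3.bucket.name, region: record.awsRegion, key }
  }

  const { hostname, pathname } = new URL(process.env.APPSYNC_ENDPOINT)
  const options = aws4.sign({
    host: hostname,
    path: pathname,
    method: 'POST',
    service: 'appsync',
    region: process.env.AWS_REGION,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: mutation, variables: { input } })
  })

  return new Promise((resolve, reject) => {
    const req = https.request(options, res => {
      let data = ''
      res.on('data', chunk => (data += chunk))
      res.on('end', () => resolve(JSON.parse(data)))
    })
    req.on('error', reject)
    req.end(options.body)
  })
}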

Setting up the application

The application is a React app that uses the Amplify Javascript library to interact with the Amplify-configured backend services. To get started, install the required libraries.

npm install aws-amplify aws-amplify-react

Then, in the main app file, import the necessary dependencies, including the ./aws-exports.js file containing the backend configuration information.

import React, { useEffect, useState, useCallback } from 'react'
import Amplify, { API, graphqlOperation, Storage } from 'aws-amplify'
import { withAuthenticator } from 'aws-amplify-react'
...
import awsconfig from './aws-exports'
...
Amplify.configure(awsconfig)

To get automatic sign-in, sign-up, and confirm functionality in the app, wrap the main component in the withAuthenticator higher-order component (HOC).

export default withAuthenticator(App, true)

Configuring the subscription

When a user chooses Query, the app calls the startQuery callback, which sends a GraphQL query. When the response returns a QueryExecutionId, the app updates the QueryExecutionId state variable.

const [isSending, setIsSending] = useState(false)
const [QueryExecutionId, setQueryExecutionId] = useState(null)

const startQuery = useCallback(async () => {
  if (isSending) return
  setIsSending(true)
  setFileKey(null)
  try {
    const result = await API.graphql(
      graphqlOperation(queries.startQuery, {
        input: { QueryString: sqlQuery(countryCode) }
      })
    )
    console.log(`Setting sub ID: ${result.data.startQuery.QueryExecutionId}`)
    setIsSending(false)
    setQueryExecutionId(result.data.startQuery.QueryExecutionId)
  } catch (error) {
    setIsSending(false)
    console.log('query failed ->', error)
  }
}, [countryCode, isSending])

Setting the state triggers the following useEffect hook, which creates the subscription. Any time QueryExecutionId changes (for example, when it is set back to null), the hook's cleanup function runs and unsubscribes the existing subscription.

const [countryCode, setCountryCode] = useState('')
const [fileKey, setFileKey] = useState(null)

useEffect(() => {
  if (!QueryExecutionId) return

  console.log(`Starting subscription with sub ID ${QueryExecutionId}`)
  const subscription = API.graphql(
    graphqlOperation(subscriptions.onAnnouncement, { QueryExecutionId })
  ).subscribe({
    next: result => {
      console.log('subscription:', result)
      const data = result.value.data.onAnnouncement
      console.log('subscription data:', data)
      setFileKey(data.file.key)
      setQueryExecutionId(null)
    }
  })

  return () => {
    console.log(`Unsubscribe with sub ID ${QueryExecutionId}`, subscription)
    subscription.unsubscribe()
  }
}, [QueryExecutionId])

Visualization

When triggered, the onAnnouncement subscription returns the following data specified in the mutation selection set. This tells the application where to fetch the result file. Signed-in users can read objects in the result bucket starting with the /protected/ prefix. Because Athena saves the results under the /protected/athena/ prefix, authenticated users can retrieve the result files.

QueryExecutionId
file {
    bucket
    region
    key
}

The key value is passed to the fileKey prop of a Visuals component. The application splits the key to extract the level (protected), the identity (athena), and the object key (*.csv). The Storage.get function generates a presigned URL with the current IAM credentials, which is used to retrieve the file with the d3.csv function.

The file is a CSV file with rows of longitude, count, and population. A callback maps the values to x and y (the graph coordinates), and a count property. The application uses the D3.js library along with the d3-hexbin plugin to create the visualization. The d3-hexbin plugin groups the data points in hexagonal-shaped bins based on a defined radius.

const [link, setLink] = useState(null)
useEffect(() => {
  const go = async () => {
    const [level, identityId, _key] = fileKey.split('/')
    const link = await Storage.get(_key, { level, identityId })
    setLink(link)

    const data = Object.assign(
      await d3.csv(link, ({ longitude, tot_pop, count }) => ({
        x: parseFloat(longitude),
        y: parseFloat(tot_pop),
        count: parseInt(count)
      })),
      { x: 'Longitude', y: 'Population', title: 'Pop bins by Longitude' }
    )
    drawChart(data)
  }
  go()
}, [fileKey])
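The drawChart function referenced above is where D3.js and d3-hexbin do their work. The following is a simplified sketch; the chart dimensions, color scale, and target element are assumptions, and the repository's implementation is more elaborate:

import * as d3 from 'd3'
import { hexbin as d3hexbin } from 'd3-hexbin'

const drawChart = data => {
  const width = 600
  const height = 400
  const margin = 40

  // Scales mapping longitude and population to pixel coordinates
  const x = d3.scaleLinear().domain(d3.extent(data, d => d.x)).nice()
    .range([margin, width - margin])
  const y = d3.scaleLinear().domain(d3.extent(data, d => d.y)).nice()
    .range([height - margin, margin])

  // Group the points into hexagonal bins with a fixed radius
  const hexbin = d3hexbin()
    .x(d => x(d.x))
    .y(d => y(d.y))
    .radius(10)
    .extent([[margin, margin], [width - margin, height - margin]])
  const bins = hexbin(data)

  // Shade each hexagon by the number of points it contains
  const color = d3.scaleSequential(d3.interpolateViridis)
    .domain([0, d3.max(bins, b => b.length)])

  const svg = d3.select('#chart').append('svg')
    .attr('viewBox', `0 0 ${width} ${height}`)

  svg.append('g')
    .selectAll('path')
    .data(bins)
    .join('path')
    .attr('d', hexbin.hexagon())
    .attr('transform', b => `translate(${b.x},${b.y})`)
    .attr('fill', b => color(b.length))
}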

Launching the application

Follow these steps to launch the application.

One-click launch


You can deploy the application directly to the Amplify Console from the public GitHub repository. Both the backend infrastructure and the frontend application are built and deployed. After the application is deployed, follow the remaining steps to configure your Athena database.

Clone and launch

Alternatively, you can clone the repository, deploy the backend with Amplify CLI, and build and serve the frontend locally.

First, install the Amplify CLI and step through the configuration.

$ npm install -g @aws-amplify/cli
$ amplify configure

Next, clone the repository and install the dependencies.

$ git clone https://github.com/aws-samples/aws-appsync-visualization-with-athena-app
$ cd aws-appsync-visualization-with-athena-app
$ yarn

Update the name of the storage bucket (bucketName) in the file ./amplify/backend/storage/sQueryResults/parameters.json, then initialize a new Amplify project and push the changes.

$ amplify init
$ amplify push

Finally, launch the application.

$ yarn start

Setting up Athena

The application uses data hosted in S3 by the Registry of Open Data on AWS. Specifically, you use the High Resolution Population Density Maps + Demographic Estimates by CIESIN and Facebook. You can find information on how to set up Athena to query this dataset in the Readme file.

Create a database named `default`.

create database IF NOT EXISTS default;

Create the table in the default database.

CREATE EXTERNAL TABLE IF NOT EXISTS default.hrsl (
  `latitude` double,
  `longitude` double,
  `population` double 
) PARTITIONED BY (
  month string,
  country string,
  type string 
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '\t',
  'field.delim' = '\t'
) LOCATION 's3://dataforgood-fb-data/csv/'
TBLPROPERTIES ('has_encrypted_data'='false', 'skip.header.line.count'='1');

Recover the partitions.

MSCK REPAIR TABLE hrsl;

When that completes, you should be able to preview the table and see the type of information shown in the following screenshot.

Next, create a new workgroup. First, look up the name of your S3 content storage bucket. If you deployed using the one-click launch, search for aws_user_files_s3_bucket in the backend build activity log. If you deployed using the clone-and-launch steps, find aws_user_files_s3_bucket in the aws-exports.js file in the src directory. From the Athena console, choose Workgroup in the upper bar, then choose Create workgroup. Provide the workgroup name appsync, set Query result location to s3://YOUR-BUCKET-NAME/protected/athena/ (substituting your content storage bucket name), and choose Create workgroup.

Conclusion

This post demonstrated how to use AWS AppSync to interact with the Amazon Athena API and securely render custom visualizations in your front-end application. By combining these services, you can easily create applications that interact directly with big data stored on S3, and render the data in different ways with graphs and charts.

Along with libraries from D3.js, you can develop new innovative ways to interact with data and display information to users. In addition, you can get started quickly, implement core functionality, and deploy instantly using the AWS Amplify Framework.


Integrating alternative data sources with AWS AppSync: Amazon Neptune and Amazon ElastiCache

This article was written by Josh Kahn, Senior Solutions Architect, AWS

 

Since its launch at AWS re:Invent 2017, AWS AppSync has grown the number and types of natively supported data sources. Today, AppSync supports NoSQL (Amazon DynamoDB), search (Amazon Elasticsearch Service), and relational (Amazon Aurora Serverless) data sources, among others. AppSync allows customers to quickly build rich, scalable, enterprise-ready backends with multiple security options that can perform complex operations across data sources in a unified GraphQL API.

“Seldom can one database fit the needs of multiple distinct use cases. The days of the one-size-fits-all monolithic database are behind us, and developers are now building highly distributed applications using a multitude of purpose-built databases,” said Werner Vogels, CTO and VP of Amazon.com. For more on this topic, see his post, A one size fits all database doesn’t fit anyone.

In this post, we explore how AWS AppSync can utilize AWS Lambda to integrate with alternative data sources—in other words, those not directly integrated out-of-the-box with AWS AppSync. While we look specifically at Amazon ElastiCache and Amazon Neptune here, you could support other data sources via a similar approach (including forthcoming services such as Amazon QLDB and Amazon Timestream).

To demonstrate the power of AppSync paired with ElastiCache and Neptune, we will build a restaurant finder API (specifically, a Hot Dog Restaurant API). We'll use ElastiCache for Redis to search for nearby restaurants and Neptune to power recommendations; AWS Step Functions will populate the data stores with some sample data.

The start of our AppSync GraphQL schema is as follows. We will update the schema as we add functionality powered by ElastiCache and Neptune later in the post. For more information about using AWS AppSync, see the AppSync Developer Guide.

type Restaurant {
  id: ID!
  name: String!
  description: String
  address: String!
  city: String!
  state: String!
  zip: Int!
  longitude: Float!
  latitude: Float!
}

type Like {
  user: String!
  restaurant: Restaurant
}

type SearchResult {
  restaurant: Restaurant!
  distance: String
  units: String
}

input GPSInput {
  latitude: Float!
  longitude: Float!
  radius: Float
}

type Query {
  listRestaurants: [Restaurant]
  getRestaurant(id: ID!): Restaurant
}

schema {
  query: Query
}

 

Searching for Nearby Restaurants

We will use Amazon ElastiCache for Redis to search for nearby restaurants. ElastiCache offers super-fast access to data from in-memory data stores. While ElastiCache is often thought of foremost for caching needs, it can be used for numerous other purposes. For example, ElastiCache for Redis can also be used to build leaderboards and session storage.

To perform a geospatial search in Redis, you first need the user's latitude and longitude. The user's location is available on most modern devices (with permission) using the JavaScript Geolocation API in the browser, CoreLocation on iOS, or LocationManager on Android. Building on our earlier GraphQL schema, we can add a new query to search for restaurants by location:

type Query {
  ...
  searchByLocation(location: GPSInput!): [SearchResult]
}
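On the client, the Geolocation API mentioned above can supply the coordinates for this query. A browser-side sketch (permission prompts and error handling elided):

// Ask the browser for the user's position, then build the GPSInput argument
navigator.geolocation.getCurrentPosition(position => {
  const location = {
    latitude: position.coords.latitude,
    longitude: position.coords.longitude,
    radius: 10 // search radius; optional per the GPSInput type
  }
  // pass `location` as the variable for the searchByLocation query
})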

Redis (version 3.2.0 and later) supports geolocation searches using the GEORADIUS command. We can construct a geolocation query in a new Lambda function, passing the user’s latitude and longitude in the payload from the AppSync resolver. After adding a new Lambda data source (we’ll call it ElastiCacheIntegration) to your API, you can configure the resolver for the searchByLocation query with the Invoke operation to call Lambda:

 

Resolver Request Mapping Template:

{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "action": "searchByLocation",
    "arguments":  $utils.toJson($ctx.arguments)
  }
}

Resolver Response Mapping Template:

#if($ctx.result && $ctx.result.error)
  $util.error($ctx.result.error)
#end
$util.toJson($ctx.result)

In this example, the Lambda function performs potentially multiple actions, all related to interacting with ElastiCache. Another valid approach is to create a single function per query.

The integration function using Node.js 10 is as follows:

const Redis = require("ioredis")

const GEO_KEY = process.env.ELASTICACHE_GEO_KEY

let redis = new Redis.Cluster([
  {
    host: process.env.ELASTICACHE_ENDPOINT,
    port: process.env.ELASTICACHE_PORT
  }
])

async function searchByGeo(lat, lon, radius=10, units="mi") {
  try {
    let result = await redis.georadius(
          GEO_KEY,   // key
          lon,       // longitude
          lat,       // latitude
          radius,    // search radius
          units,     // search radius units
          "WITHCOORD",
          "WITHDIST"
        )

    if (!result) { return [] }

    // map from Redis response
    return result.map( (r) => {
      return { id: r[0], dist: r[1], units: units }
    }).sort((a, b) => { return a.dist - b.dist })

  } catch (error) {
    console.error(JSON.stringify(error))
    return { error: error.message }
  }
}

exports.handler = async(event) => {
  switch(event.action) {
    case "searchByLocation":
      let location = event.arguments.location
      // radius is defined on the GPSInput type, so read it from the location argument
      let result = await searchByGeo(location.latitude, location.longitude, location.radius)
      return result
    default:
      throw("No such method")
  }
}

Before returning the query result to the client, make sure that the shape of the response matches that defined in the schema (for searchByLocation, an array of SearchResults). For this example, we can use an AWS AppSync pipeline resolver to do the following:

  1. Query ElastiCache for distances.
  2. Retrieve restaurant data from DynamoDB using BatchGetItem.
  3. Merge the results to match the SearchResult schema.

Pipeline resolvers make it easy to define a reusable set of functions that can query multiple data sources with a single API call. Performing both queries and manipulating data in a Lambda function is also a valid approach. However, for this example, we preferred the composability of the pipeline resolver, as you use the DynamoDB BatchGetItem function again later.
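For example, the DynamoDB function in the pipeline might use a BatchGetItem request mapping template along these lines. This is a sketch: the table name and the shape of the previous function's result are assumptions.

## Build DynamoDB keys from the ids returned by the ElastiCache function
#set($keys = [])
#foreach($item in $ctx.prev.result)
  #set($key = {})
  $util.qr($key.put("id", $util.dynamodb.toDynamoDB($item.id)))
  $util.qr($keys.add($key))
#end
{
  "version": "2018-05-29",
  "operation": "BatchGetItem",
  "tables": {
    "RestaurantTable": {
      "keys": $util.toJson($keys)
    }
  }
}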

Now, run a query against our GraphQL API to search for nearby restaurants:

query SearchByLocation {
  searchByLocation(location: {
    latitude: 41.8781,
    longitude: -87.6298
  }) {
    restaurant {
      name
    }
    distance
    units
  }
}

The result looks something like this:

{
  "data": {
    "searchByLocation": [
      {
        "restaurant": {
          "name": "Portillo’s"
        },
        "distance": "1.0694",
        "units": "mi"
      },
      {
        "restaurant": {
          "name": "Fatso’s Last Stand"
        },
        "distance": "3.0622",
        "units": "mi"
      },
      ...   
    ]
  }
}

In this project, we are augmenting our primary data store (DynamoDB) with a purpose-built database (ElastiCache) to take advantage of the latter’s capabilities (super-fast geolocation queries). Because we want data in ElastiCache to stay synchronized with the primary store, enable DynamoDB Streams to push applicable data to ElastiCache.

As shown in the following diagram, we can create a second Lambda function that is invoked by a DynamoDB stream when restaurant data changes. This function will modify our Redis sorted set by adding or updating restaurant data, specifically latitude and longitude.
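A minimal sketch of that stream-triggered function follows. It assumes the stream is configured with new images, that items carry id, latitude, and longitude attributes, and that the same environment variables as the integration function are available:

const Redis = require("ioredis")

const GEO_KEY = process.env.ELASTICACHE_GEO_KEY

let redis = new Redis.Cluster([
  {
    host: process.env.ELASTICACHE_ENDPOINT,
    port: process.env.ELASTICACHE_PORT
  }
])

exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === "REMOVE") {
      // Geo sets are sorted sets underneath, so ZREM drops the member
      await redis.zrem(GEO_KEY, record.dynamodb.Keys.id.S)
      continue
    }
    const image = record.dynamodb.NewImage
    // GEOADD inserts the member or updates its coordinates if it exists
    await redis.geoadd(
      GEO_KEY,
      parseFloat(image.longitude.N),
      parseFloat(image.latitude.N),
      image.id.S
    )
  }
}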

Generating Recommendations

For our restaurant finder, we also want to recommend restaurants based on the likes of others. Amazon Neptune is a fully managed graph database built to model highly connected datasets. Neptune is fast, reliable, and easy to work with. By traversing the rich collection of vertices and edges in a graph database, we can realize relationships that would be indicated only by foreign keys in a relational database. For example, imagine a graph database that models a network of friends. While each friend's name and birthdate are important, so too are the details of their relationship with others (e.g., the date we became friends, the type of relationship). An Amazon Neptune graph database allows us to model attributes of both the people ("vertices") and the relationships ("edges") between them.

In order to find restaurants, the graph has vertices for both restaurants and people. The graph also contains one type of edge, "likes", which models a user liking a particular restaurant. For the purpose of our example, we'll use a toy graph with fictitious data that looks like the following diagram:

We can now extend our GraphQL API to support the recommendation capability provided by Neptune. Here, we pass the name of the user for whom we want to generate recommendations. In a real-world scenario, however, you would likely use a unique identifier for the logged-in user instead.

type Query {
  ...
  getRecommendationsFor(user:String!): [Restaurant]
}

The resolver mapping templates for the Neptune integration look quite similar to the earlier templates. Again, you could elect to perform more data manipulation in the template than in your function.

 

Resolver Request Mapping Template:

{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "action": "getRecommendations",
    "arguments":  $utils.toJson($ctx.arguments)
  }
}

Resolver Response Mapping Template:

#if($ctx.result && $ctx.result.error)
  $util.error($ctx.result.error)
#end
$util.toJson($ctx.result)

Following the same approach for Neptune as with ElastiCache, we’ll build a single function that can make several types of queries against Neptune. Amazon Neptune currently supports two query languages to access data in the graph: Gremlin and SPARQL. Either language could be used, though we’ll use Gremlin. For more on Gremlin, check out the excellent Practical Gremlin.

const gremlin = require('gremlin')
const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection

const Graph = gremlin.structure.Graph
const P = gremlin.process.P
const Order = gremlin.process.order
const Scope = gremlin.process.scope
const Column = gremlin.process.column

const dc = new DriverRemoteConnection(
  `wss://${process.env.NEPTUNE_ENDPOINT}:${process.env.NEPTUNE_PORT}/gremlin`
)
const graph = new Graph()
const g = graph.traversal().withRemote(dc)

async function getRecommendationsFor(userName) {
  try {
    // based on Gremlin recommendation recipe:
    // http://tinkerpop.apache.org/docs/current/recipes/#recommendation
    let result = await g.V()
      .has('User', 'name', userName).as('user')
      .out('likes').aggregate('self')
      .in_('likes').where(P.neq('user'))
      .out('likes').where(P.without('self'))
      .values('id')
      .groupCount()
      .order(Scope.local)
        .by(Column.values, Order.decr)
      .select(Column.keys)
      .next()
    
    return result.value.map( (r) => {
      return { id: r }
    })
  } catch (error) {
    console.error(JSON.stringify(error))
    return { error: error.message }
  }
}

exports.handler = async(event) => {
  switch(event.action) {
    case "getRecommendations":
      return await getRecommendationsFor(event.arguments.user)
    default:
      return { error: "No such method" }
  }
}

With these changes, you can query the GraphQL API for a list of restaurant recommendations for a particular user:

query Recommendations {
  getRecommendationsFor(user: "Joe") {
    id
    name
  }
}

The result of this query looks something like this:

{
  "data": {
    "getRecommendationsFor": [
      {
        "id": "F24B37E4-C89B-48AA-871B-46E5DE47118F",
        "name": "Wolfy's"
      },
      {
        "id": "96B7CB80-6DC4-445F-8925-69316B222DCC",
        "name": "Hot 'G' Dog"
      }
    ]
  }
}

This recommendation algorithm uses an approach called collaborative filtering, which uses the opinions of others to inform a recommendation for a different person. For more information about this algorithm, see the Apache TinkerPop project.

As with ElastiCache, we primarily use Neptune to augment our application’s source of truth in this example — it performs a particular function. In the case of Neptune, we may want to store some data that is not in DynamoDB – for example, whether a user likes a particular restaurant.

In that case, we would perform mutations directly against Neptune. To further our example functionality, let's add a mutation that creates a new "like" in the graph database. Modeling a "like" in Neptune involves an edge, named "likes", and two vertices: a restaurant and a user.

Because AWS AppSync can support real-time subscriptions for any data source, we can also subscribe to new likes created by a user. In this case, the client application may listen for changes to new likes so that it can query for new recommendations based on new data.

First, the updates to our GraphQL schema:

type Mutation {
  addLike(user: String!, restaurantId: String!): Like
}

type Subscription {
  onLike(user: String!): Like
    @aws_subscribe(mutations: ["addLike"])
}

Next, we enhance the Neptune integration function to support adding a new like to the graph. We use Gremlin again to find the restaurant and user in the graph, and then add an edge named "likes" from the user to the restaurant.

...

async function addLike(user, restaurantId) {
  try {
    await g.V()
      .has("Restaurant", "id", restaurantId).as("restaurant")
      .V()
      .has("User", "name", user)
      .addE("likes")
      .to("restaurant")
      .next()
      
    return { user: user, restaurantId: restaurantId }
  } catch (error) {
    console.error(JSON.stringify(error))
    return { error: error.message }
  }
}

exports.handler = async(event) => {
  switch(event.action) {
    case "getRecommendations":
      return await getRecommendationsFor(event.arguments.user)
    case "addLike":
      return await addLike(event.arguments.user, event.arguments.restaurantId)
    default:
      return { error: "No such method" }
  }
}

Now, create a new like for Dorothy using the mutation described earlier:

mutation AddLike {
    addLike(
      user: "Dorothy",
      restaurantId: "EB8941AC-C3AD-4263-B97D-B7A29B36FB5F"
    ) {
      user
    }             
}

You can also subscribe to new likes made by Dorothy:

subscription AddLike {
  onLike(user:"Dorothy") {
    user
  }
}

This subscription would result in the following data being returned to the client. While this example is limited, in a full-scale application, adding a like could trigger the client to retrieve new recommendations or perhaps alert the restaurant:

{
  "data": {
    "onLike": {
      "user": "Dorothy",
      "__typename": "Like"
    }
  }
}

As noted earlier, AWS AppSync supports real-time data for all data sources, even an alternative data source such as Neptune that is integrated with Lambda. This functionality enables a wide range of use cases across almost any data source that you integrate.

You can find a complete, working example of the project described in this post on GitHub.

Conclusion

While AWS AppSync supports a wide range of data sources out-of-the-box, it can also be extended to support many other data sources using Lambda, including ElastiCache and Neptune. This approach allows you to pick the best database for the job while quickly building new capabilities in your applications.

The AWS AppSync team is eager to see how you take advantage of this capability. Please reach out on the AWS AppSync Forum or the AppSync Community GitHub repository with any feedback.


Building progressive web apps with the Amplify Framework and AWS AppSync

This article was written by Rob Costello, Solutions Architect, AWS

Many organizations regularly collect valuable data about employees’ or customers’ experiences or concerns using polls or surveys. For this scenario, a client application should perform the following tasks:

  • Present different types of questions
  • Collect responses from varying sets of users
  • Store data (and metadata) for analysis
  • Authenticate users
  • Secure data appropriately when transmitting to a backend service

Users may find such surveys intrusive and time-consuming unless the experience is simple and lets them complete the questions at their convenience.

This post describes how to build progressive web apps (PWAs) that use the React JavaScript library, GraphQL, AWS AppSync, the Amplify Framework, and other AWS services to form a complete solution.

I explore a pre-built PWA that provides this functionality:

  • Using the Amplify Framework with AWS AppSync to build a backend GraphQL service that uses Amazon Cognito for user authentication and Amazon DynamoDB for storage
  • Using the AppSync SDK to provide offline-first support to a React-based web application
  • Using GraphQL queries, mutations, and subscriptions to interact with AWS AppSync
  • Using Amazon Pinpoint to collect web analytics data
  • Deploying your application using the Amplify Console

What are PWAs (Progressive Web Apps)?

PWAs are web applications that use modern browser and operating system features to provide rich user experiences. PWAs have the following key attributes:

  • They are reliable. PWAs should function for end users regardless of their network connectivity state. They should work when online, offline, and everywhere in between.
  • They are fast. PWAs should load fast and respond quickly to user interactions.
  • They are engaging. PWAs can be installed on mobile and desktop devices (without an app store), and run full screen or “out of browser” on users’ devices, providing an experience similar to that of native applications.
  • They are secure. PWAs must be served over HTTPS to function correctly.

PWAs offer benefits not only to end users, but also to application developers, allowing multi-platform development, simple deployment, and low barriers to entry. Development teams with existing skills in modern web application development can transition to PWAs with minimal new learning required.

Prerequisites

This solution uses several AWS services, so you need an AWS Account with appropriate permissions to create the related resources.

On your development workstation, you need the following:

  • Node.js with npm
  • AWS CLI with output configured as JSON (pip install awscli --upgrade --user)
  • AWS Amplify CLI configured for a Region where AWS AppSync and all other services in use are available
    (npm install -g @aws-amplify/cli)

This article uses the AWS Cloud9 integrated development environment (IDE) for building and running the application. For more information, see Getting Started with AWS Cloud9.

After you have an AWS Cloud9 environment running, use the following snippet to install these prerequisites. In your AWS Cloud9 IDE environment, open the Terminal pane and execute the following command, also shown in the following screenshot:

curl -s -L https://git.io/fjD9R | bash

Downloading and running the sample application

To download and run the sample application, perform the following steps:

1. Clone the repository and navigate to the created folder.

git clone https://github.com/aws-samples/aws-appsync-survey-tool.git
cd aws-appsync-survey-tool

As we’ll use AWS CodeCommit later on, don’t forget to set up the credential helper for git when accessing CodeCommit repositories.

2. Install the required npm modules.

npm install

3. Initialize the directory as an Amplify JavaScript app using the React framework.

amplify init

4. Provision your cloud resources based on the local setup and configured features. When asked to generate code, answer NO (answering YES overwrites the current custom files in the src/graphql folder).

amplify push

5. Run the project in AWS Cloud9.

npm start

Accessing the survey tool PWA

Only authenticated users can access the Survey Tool, so go through the process on the sign-in page to register as a new user. Once you complete the registration, sign in. There should not be any surveys for you to fill in yet.

The Survey Tool uses the add-user-to-group Amazon Cognito Lambda trigger that is part of the Amplify CLI. It runs a Lambda function on registration and performs the following:

  • Checks to see if any groups exist in the Amazon Cognito User Pool
  • Creates the default SurveyAdmins and Users groups if no groups exist
  • Adds the first user to register to the SurveyAdmins group
  • Adds all subsequent user registrations to the default Users group

To see the Lambda trigger for this action, in your AWS Cloud9 environment, locate and open the /amplify/backend/function/surveypwa1a7615c6PostConfirmation/src/add-to-group.js file.

Now that your new user account has SurveyAdmins privileges, log in to the Survey Tool from a desktop browser.

Choose Profile (in the top right of the page), then choose Admin Portal. The Admin Portal allows you to create new surveys, questionnaires, and questions, as well as manage users and groups and their privileges.

On the Surveys page, choose Import Sample Survey to load a sample survey. Then choose Home to find the sample survey:

Exploring the data model

The underlying data model provides the foundation for how the application interface is laid out, and how data stores are structured. Exploring it helps demonstrate how the application functions.

In your AWS Cloud9 environment, locate and open the /amplify/backend/api/surveypwa/schema.graphql file.

GraphQL defines the query language used in the application and the type system for how data is accessed and stored in the backend, which in this case is AWS AppSync backed by DynamoDB.

The GraphQL schema for the Survey Tool application defines five core types:

  • Survey: Defines a root level object that contains questionnaires to which users respond.
  • Questionnaire: Defines a collection of questions presented to users.
  • Question: Defines an individual question.
  • Responses: Stores responses to individual questions.
  • SurveyEntries: Groups responses when a user completes a questionnaire.

Amplify extends the GraphQL Schema Definition Language (SDL) with the GraphQL Transformer, which allows you to use directives that tell Amplify to deploy and configure AWS services for your application.

Each of the types defined in schema.graphql contains the @model directive, which tells Amplify to generate a DynamoDB table for that type. It also tells AWS AppSync to generate a data source and corresponding CRUDL logic on resolvers to allow clients to interact with the DynamoDB table.

Additionally, the Survey, Responses, and SurveyEntries types carry an @auth directive, which tells AWS AppSync to enforce Amazon Cognito-based authorization on actions against those types. In the case of the Survey Tool, the Survey type allows the SurveyAdmins group in the Amazon Cognito User Pool to perform all operations on Survey objects. However, it only allows standard users to read a Survey object if one of their group memberships matches a group name defined in the Survey's attributes.

type Survey
  @model
  @auth (rules: [
    {allow: groups, groups: ["SurveyAdmins"]},
    {allow: groups, groupsField: "groups", operations: [read]}
    ])
{
  id: ID!
  name: String!
  description: String!
  image: AWSURL
  preQuestionnaire: Questionnaire @connection
  mainQuestionnaire: Questionnaire @connection
  postQuestionnaire: Questionnaire @connection
  archived: Boolean
  groups: [String]!
}

How client applications interact using GraphQL

The data model defined in the schema.graphql file allows Amplify to generate the AWS backend resources required for the application, but how does the client application know how to communicate with them?

Amplify comes to the rescue here. When you run the amplify push or amplify codegen commands, Amplify uses a GraphQL feature called introspection, which pulls details of the AWS AppSync API you created and generates a set of client-side code to integrate into the application.

Now, go to the src/graphql directory to find a set of JavaScript files containing the details of the GraphQL API calls used in the application. Because these are generated automatically, you can import the GraphQL definition objects from these files to help ensure consistency and speed up development.

GraphQL requests made to the backend AWS AppSync API are performed over HTTPS. In REST APIs, requests are structured around URI paths and sent using a variety of HTTP methods. In contrast, GraphQL communicates with a single endpoint using only the HTTP POST method.
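For example, every query or mutation travels as an HTTP POST whose JSON body has roughly the following shape; the endpoint host and token here are placeholders:

POST /graphql HTTP/1.1
Host: <your-appsync-endpoint>
Authorization: <Cognito JWT>
Content-Type: application/json

{
  "query": "query ListSurveys { listSurveys { items { id name } } }",
  "variables": {}
}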

To abstract the complexity of creating and sending requests, and then receiving and processing responses, use the AppSync SDK. The AppSync SDK uses the Apollo Client and provides a rich set of client-side features for interacting with AWS AppSync. The key features used in the Survey Tool are:

  • ApolloProvider: The React Higher Order Component (HOC) that integrates with a React application.
  • graphql(): The core function used to execute interactions with a GraphQL backend.
  • compose(): The function used to define multiple enhancers (graphql() operations) in a single component.

The Apollo provider takes an Apollo Client client object as its only property. A client defines how Apollo interacts with a GraphQL backend, along with details of how it manages client-side caching. In your AWS Cloud9 environment, locate and open the /src/index.js file and find the following code block:

const awsAppSyncLink = createAppSyncLink({
    url: awsexports.aws_appsync_graphqlEndpoint,
    region: awsexports.aws_appsync_region,
    auth: {
        type: AUTH_TYPE.AMAZON_COGNITO_USER_POOLS,
        jwtToken: async () => (await Auth.currentSession()).getIdToken().getJwtToken()
    },
    complexObjectsCredentials: () => Auth.currentCredentials()
});

This block defines how the Apollo Client is configured to connect to the GraphQL endpoint and authenticate with Amazon Cognito.

Now, in your AWS Cloud9 environment, locate and open the /src/components/home/index.js file and find the following code block:

const Home = compose(
    graphql(gql(listSurveys), {
        options: (props) => ({
            errorPolicy: 'all',
            fetchPolicy: 'cache-and-network',
        }),
        props: (props) => {
            return {
                listSurveys: props ? props : [],
            }
        }
    })
)(HomePart)

This demonstrates how the Survey Tool calls the listSurveys GraphQL query using the graphql() and compose() features of the Apollo Client. The fetchPolicy is set to cache-and-network, which tells the Apollo Client to return previously cached results if they are available for a fast response to the user. At the same time, it performs a parallel query to the backend to validate the cached dataset. The errorPolicy parameter is set to all, which ensures errors are returned to the client so they can be interpreted and acted on accordingly.

Results of the query, along with any errors and other useful metadata, are returned through the props object of the query.

The Survey Tool application also uses extended features of Apollo, such as apollo-link and apollo-link-state, to allow the application to manage both the remote and local React states. (Their use is outside the scope of this guide.)

Refining the Survey Tool

Follow these procedures to make sure that Survey Tool is reliable, fast, and engaging.

Making the Survey Tool PWA reliable

The first requirement stated in the definition of a PWA is that a PWA must be reliable: it should function for end users regardless of their network connectivity state. It should work when online, offline, and everywhere in between.

Wrapping the application in the ApolloProvider gives it the Apollo Client's caching features: a normalized cache of GraphQL query responses, plus the ability to execute mutations against the AWS AppSync API even when offline.

This offline capability is achieved by caching a mutation operation until network connectivity is restored or becomes more reliable, while returning an optimistic response in anticipation of a successful response from the AWS AppSync API. Users can interact with the PWA even if they are flying on a plane without internet access, driving through the Australian outback with intermittent and unreliable connectivity, or sitting at home with high-speed internet access.
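For instance, a mutation wired up through graphql() can supply an optimistic response so the UI updates immediately, even offline. This is a sketch; the createResponses mutation and field names are assumptions based on the schema types described earlier:

import gql from 'graphql-tag'
import { graphql } from 'react-apollo'
import { createResponses } from '../graphql/mutations'

const withCreateResponse = graphql(gql(createResponses), {
    props: props => ({
        createResponse: input =>
            props.mutate({
                variables: { input },
                // Rendered immediately; replaced by the server result once connectivity allows
                optimisticResponse: {
                    createResponses: { __typename: 'Responses', ...input }
                }
            })
    })
})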

Making the Survey Tool PWA fast

Assuming a reliable method for interacting with data sources, how do you ensure the application itself can quickly load and run while offline or in poor connectivity scenarios?

A service worker is a modern browser feature (a type of web worker) that allows web applications to offload tasks to a separate thread that persists while the browser is running (either actively or in the background). This enables tasks such as listening for asynchronous push notifications from backend infrastructure, responding to browser events, and pre-caching content without blocking the main UI thread.

The service worker included with PWAs created using the Create React App command caches all non-cross-origin content for your PWA. To validate this, use the developer tools of your browser of choice to check the cache storage.

The following screenshot shows an example of the content cached by Chrome for the Survey Tool app. The service worker has cached the HTML, CSS, and JS assets required for the PWA to function.

Making the Survey Tool PWA engaging

A key attribute of any PWA is that it can be installed. For a mobile device, this means allowing the user to add the app to their home screen. For a desktop device, it means being able to install the app for access from the macOS Launchpad or the Windows Start Menu.

On a mobile device, you can install the PWA by choosing the Share icon, then choosing Add to Home Screen.

From a desktop browser, locate the browser settings and choose Install Survey Tool.

The Survey Tool also uses a PWA feature, the manifest.json file in the root of the application, to tell a browser that it is installable. When this file is present and configured correctly, the browser fires an event that allows the application to present the user an option to install.
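A minimal manifest.json, along the lines of what Create React App generates, looks like the following; the names and icon entries here are illustrative:

{
  "short_name": "Survey Tool",
  "name": "Survey Tool",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64",
      "type": "image/x-icon"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}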

Integrating Amazon Pinpoint Web Analytics

To help understand the usage characteristics of the Survey Tool, use the Web Analytics feature of Amazon Pinpoint to collect data as users interact with the application.

This data is aggregated by Amazon Pinpoint and allows you to visualize usage patterns and stream the event data using Amazon Kinesis to other systems for more detailed analytics. The other systems could include Amazon S3 with Amazon Athena, or Amazon Elasticsearch Service.

Amplify simplifies the process for integrating Amazon Pinpoint Web Analytics into the application. In your AWS Cloud9 environment, locate and open the /src/index.js file.

Note the named import of the Analytics component from aws-amplify.

import { Auth, Analytics } from 'aws-amplify';

This allows you to enable and configure tracking modes at the root of the application so they are applied globally. Further down, note the Analytics.autoTrack calls, which configure the three types of tracking in your application: Session, Page View, and Event.

Session tracking allows Amazon Pinpoint to collect metrics on when a user has an active session with the application. In your application, all you have to do is enable this feature with the following code:

Analytics.autoTrack('session', {
    enable: true,
    provider: 'AWSPinpoint'
});

This allows you to visualize the total number of sessions in Amazon Pinpoint, sessions per user, and sessions per device.

Page view tracking allows Amazon Pinpoint to collect metrics of which pages in the application were accessed:

Analytics.autoTrack('pageView', {
    enable: true,
    eventName: 'pageView',
    type: 'SPA',
    provider: 'AWSPinpoint',
    getUrl: () => {
        return window.location.origin + window.location.pathname;
    }
});

Finally, event tracking lets you tag elements in your pages that fire events to record data to Amazon Pinpoint. In this example application, you track click events for DOM elements that have the specified selectorPrefix:

Analytics.autoTrack('event', {
    enable: true,
    events: ['click'],
    selectorPrefix: 'data-amplify-analytics-',
    provider: 'AWSPinpoint'
});

As an example, in your AWS Cloud9 environment, locate and open the /src/components/addentry/index.js file to find a button element tagged with the selectorPrefix. This provides the hook for the Amplify Analytics module to respond to click events and send a set of attributes with the event to Amazon Pinpoint.

<Button variant="contained" color="primary" className={classes.button} onClick={handleAdd.bind(this)} 
    data-amplify-analytics-on='click' 
    data-amplify-analytics-name='click' 
    data-amplify-analytics-attrs={`addEntry:click,questionnaire:${getQuestionnaire.id}`}>
    <AddIcon />
    Add Entry
</Button>

Deploying the solution using the Amplify Console

The Amplify Console simplifies the deployment of the application, allowing you to connect to a code repository of your choice, such as AWS CodeCommit, GitHub, GitLab, or Bitbucket. The frontend and backend of the application are deployed atomically in a single workflow.

To get started, you must push the application code to a repository of your own. If you choose to use CodeCommit, you can follow the CodeCommit guidance to create a new repository, and then connect your local repo to the new repository.

Now that the application is in CodeCommit, open the Amplify Console and choose Get Started under the Deploy section of the homepage.

Now, select AWS CodeCommit as your Git Provider and choose Continue.

On the Configure build settings screen, fill in the required details, as shown in the following screenshot. To grant the Amplify Console the ability to deploy and manage the Survey Tool’s backend components, select an existing Service Role value with the required IAM privileges. You could also select Create new role to have the Amplify Console step you through the role creation.

Review the Build Settings section and note that there is both a Frontend and Backend deployment section. Choose Next to continue to the Review screen, then choose Save and Deploy to complete the wizard.

After the Amplify Console has completed the Provision, Build, Deploy, Verify steps, you can access the PWA using the link provided.

Cleaning up

When you’ve finished exploring the Survey Tool and would like to remove all resources deployed into your AWS account, run the following command in the Terminal window of your AWS Cloud9 environment:

amplify delete

To remove the resources created by the Amplify Console, locate your app in the Amplify Console window, and choose Actions, Delete App.

Conclusion

This post introduced the Survey Tool sample app and explored some of the core technologies used to create a functional progressive web application, including:

  • Amplify Framework
  • AWS AppSync
  • Amazon Pinpoint
  • AWS Amplify Console

Using this example, you can see how to combine cloud-native services and modern development approaches to create engaging and useful experiences for your users.


Developing and testing GraphQL APIs, Storage and Functions with Amplify Framework Local Mocking features

This article was written by Ed Lima, Sr. Solutions Architect, AWS and Sean Grove, OneGraph

In fullstack application development, iteration is king. At AWS, we're constantly identifying steps in the process of shipping product that slow iteration or sap developer productivity and happiness, and working to shorten them. To that end, we've provided cloud APIs, serverless functions, databases, and storage capabilities so that the final steps of deploying, scaling, and monitoring applications are as instantaneous as possible.

Today, we’re taking another step further in shortening feedback cycles by addressing a critical stage in the application cycle: local development.

Working closely with developers, we’ve seen the process of delivering new product features to production:

  1. Prototyping changes locally
  2. Committing and pushing to the cloud resources
  3. Mocking/testing/debugging the updates
  4. Returning to step 1 if there are any fixes to incorporate

In some cases, this can be an incredibly tight loop, executed dozens or hundreds of times by a developer, before new features are ready to ship. It can be a tedious process, and tedious processes make unhappy developers.

AWS AppSync gives developers easy and convenient access to exactly the right data they need at a global scale via its flexible GraphQL APIs. These APIs, among other data sources, can be backed by Amazon DynamoDB for a scalable key-value and document database that delivers single-digit millisecond performance at any scale. Applications can also use Amazon Simple Storage Service (S3) for an object storage service that offers industry-leading scalability, data availability, security, and performance. On top of that, developers can run their code without provisioning or managing servers with AWS Lambda. All of these services live in the cloud, which is great for production: highly available, fault tolerant, scaling to meet any demand, and running in multiple Availability Zones in different AWS Regions around the planet.

In order to optimize and streamline the feedback loop between local and cloud resources earlier in the development process, we talked to many customers to understand their requirements for local development:

  • NoSQL data access via a robust GraphQL API
  • Serverless functions triggered for customized business logic from any GraphQL type or operation
  • Developer tooling, including a GraphiQL IDE fully pre-integrated with open-source plugins such as those from OneGraph, customized for your AppSync API
  • Simulated object storage
  • Instantaneous feedback on changes
  • Debugging GraphQL resolver mapping templates written in Velocity Template Language (VTL)
  • Ability to use custom directives and code generation with the GraphQL Transformer
  • Ability to mock JWT tokens from Amazon Cognito User Pools to test authorization rules locally
  • Work with web and mobile platforms (iOS and Android)
  • And, the ability to work offline

With the above customer requirements in mind, we're happy to launch the new local mocking and testing features in the Amplify Framework.

As a developer using Amplify, you’ll immediately see the changes you make locally to your application, speeding up your development process and removing interruptions to your workflow. No waiting for cloud services to be deployed – just develop, test, debug, model your queries, and generate code locally until you’re happy with your product, then deploy your changes to the scalable, highly available backend services in the cloud as you’ve always done.

Getting Started

To get started, install the latest version of the Amplify CLI by following these steps, and follow along with our example below. Use a boilerplate React app created with create-react-app and initialize an Amplify project in the app folder with the default options by executing the amplify init command. Note that the local mocking and testing features in the Amplify CLI also work with iOS and Android apps.

Next, we add a GraphQL API using the command amplify add api with API Key authorization and the sample schema for a single object with fields (Todo):

When defining a GraphQL schema, you can use directives from the GraphQL Transformer in local mocking, as well as local code generation from the schema for GraphQL operations. The following directives are currently supported in the local mocking environment:

  • @model
  • @auth
  • @key
  • @connection
  • @versioned
  • @function

The sample GraphQL schema generated by the Amplify CLI has a single “Todo” type defined with @model, which means the GraphQL Transformer automatically creates a GraphQL API with an extended schema containing queries, mutations, and subscriptions with built-in CRUDL logic to access a DynamoDB table, which is also deployed automatically. It essentially creates a fully fledged API backend in seconds:

type Todo @model {
  id: ID!
  name: String!
  description: String
}
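After code generation runs, the GraphQL operations for this schema land in your project under src/graphql. As an illustration only (the generated files are the source of truth and may select different fields), src/graphql/mutations.js contains operations along these lines:

export const createTodo = `mutation CreateTodo($input: CreateTodoInput!) {
  createTodo(input: $input) {
    id
    name
    description
  }
}
`;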

At this point, your API is ready for some local development! Fire up your local AppSync and DynamoDB resources by executing either amplify mock, to test all supported local resources, or amplify mock api, to test the GraphQL API specifically. Code is automatically generated and validated for queries, mutations, and subscriptions, and a local AppSync mock endpoint starts up:
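As a quick sanity check, you can also query the local endpoint directly without any tooling. The port and fake API key in this sketch come from the generated aws-exports.js shown later in this post; paste it into your browser’s console (or any JavaScript runtime with fetch available):

const query = `query ListTodos { listTodos { items { id name description } } }`;

fetch("http://localhost:20002/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": "da2-fakeApiId123456" // the fake key issued by the mock server
  },
  body: JSON.stringify({ query })
})
  .then(res => res.json())
  .then(data => console.log(JSON.stringify(data, null, 2)));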

Collaborating with the open source community is always special; it has allowed us to improve and better understand the use cases that customers want to tackle with local mocking and testing. In order to move fast and ensure that we were releasing a valuable feature, we worked for several months with a few community members. We want to give special thanks to Conduit Ventures for creating the AWS-Utils package, as well as for allowing us to fork it for this project and integrate it with the new Amplify local mocking environment.

Prototyping API calls with an enhanced local GraphiQL IDE

The mock endpoint runs on localhost and simulates an AWS AppSync API connected to a DynamoDB table (defined in the GraphQL schema with the @model directive), all implemented locally on your developer machine.

We also ship tools to explore and interact with your GraphQL API locally. In particular, the terminal prints out a link to an instance of the GraphiQL IDE, where you can introspect the schema types, look up documentation on any field or type, test API calls, and prototype your queries and mutations:

We’ve enhanced the stock GraphiQL experience with an open-source plugin that OneGraph created to make your developer experience even nicer. In the Amplify GraphiQL Explorer, you’ll notice a UI generated for your specific GraphQL API that allows you to quickly and easily explore and build GraphQL queries, mutations, or even subscriptions by simply navigating checkboxes. You can create, delete, update, read, or list data from your local DynamoDB tables in seconds.

With this new tooling, you can go from exploring your new GraphQL APIs locally to a fully running application in a few minutes. Amplify leverages the power of open source to integrate the new local mocking environment with tools such as AWS-Utils and the GraphiQL Explorer to streamline the development experience and tighten the iteration cycle even further. If you’re interested in learning more about how and why the explorer was built, check out OneGraph’s blog on how they onboard users who are new to GraphQL.

What if you need to test and prototype real-time subscriptions? They also work seamlessly in the local environment. While amplify mock api is running, open another terminal window and execute yarn add aws-amplify to install the client dependencies, then run yarn start. To test, paste the code below into the src/App.js file in the React project, replacing the existing boilerplate code generated by the create-react-app command:

import React, { useEffect, useReducer } from "react";
import Amplify from "@aws-amplify/core";
import { API, graphqlOperation } from "aws-amplify";
import { createTodo } from "./graphql/mutations";
import { listTodos } from "./graphql/queries";
import { onCreateTodo } from "./graphql/subscriptions";

import config from "./aws-exports";
Amplify.configure(config); // Configure Amplify

const initialState = { todos: [] };
const reducer = (state, action) => {
  switch (action.type) {
    case "QUERY":
      return { ...state, todos: action.todos };
    case "SUBSCRIPTION":
      return { ...state, todos: [...state.todos, action.todo] };
    default:
      return state;
  }
};

async function createNewTodo() {
  const todo = { name: "Use AppSync", description: "Realtime and Offline" };
  await API.graphql(graphqlOperation(createTodo, { input: todo }));
}
function App() {
  const [state, dispatch] = useReducer(reducer, initialState);

  useEffect(() => {
    getData();
    const subscription = API.graphql(graphqlOperation(onCreateTodo)).subscribe({
      next: eventData => {
        const todo = eventData.value.data.onCreateTodo;
        dispatch({ type: "SUBSCRIPTION", todo });
      }
    });
    return () => {
      subscription.unsubscribe();
    };
  }, []);

  async function getData() {
    const todoData = await API.graphql(graphqlOperation(listTodos));
    dispatch({ type: "QUERY", todos: todoData.data.listTodos.items });
  }

  return (
    <div>
      <div className="App">
        <button onClick={createNewTodo}>Add Todo</button>
      </div>
      <div>
        {state.todos.map(todo => (
          <p key={todo.id}>
            {todo.name} : {todo.description}
          </p>
        ))}
      </div>
    </div>
  );
}
export default App;

Open two browser windows, one with the local GraphiQL instance and another with the React app. As you can see in the following animation, you can create items and watch the mutations automatically trigger subscriptions, displaying the changes in the web app without reloading the browser:

You can also access your local NoSQL data directly: because DynamoDB Local uses SQLite internally, you can browse the data in the tables using your IDE extension of choice:
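You can inspect the tables programmatically as well. Here is a minimal sketch using the AWS SDK pointed at the local DynamoDB endpoint; the port and table name below are assumptions, so check the output of amplify mock for the actual values on your machine:

const AWS = require('aws-sdk');

const ddb = new AWS.DynamoDB.DocumentClient({
  endpoint: 'http://localhost:62224', // assumed; amplify mock prints the real port
  region: 'us-fake-1',                // DynamoDB Local does not validate the region
  accessKeyId: 'fake',                // local credentials are not checked
  secretAccessKey: 'fake'
});

ddb.scan({ TableName: 'TodoTable' })  // table name is an assumption
  .promise()
  .then(data => console.log(data.Items))
  .catch(err => console.error(err));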

Seamless transition between local and cloud environments

In the screenshot above, you’ll notice the GraphQL API is in a “Create” state in the terminal section at the bottom, which means the backend resources are not yet deployed to the cloud. If we check the local aws-exports.js file generated by Amplify, which contains the identifiers of the resources created in the different categories, you’ll notice the API endpoint is accessed locally and a fake API key is used to authorize calls:

const awsmobile = {
    "aws_project_region": "us-east-1",
    "aws_appsync_graphqlEndpoint": "http://localhost:20002/graphql",
    "aws_appsync_region": "us-east-1",
    "aws_appsync_authenticationType": "API_KEY",
    "aws_appsync_apiKey": "da2-fakeApiId123456"
};

export default awsmobile;

What about testing more refined authentication requirements? You can still authenticate against a Cognito User Pool. The local testing server honors the JWT tokens generated by Amazon Cognito and the rules defined by the @auth directive in your GraphQL schema. However, as Cognito is not running locally, you need to execute the command amplify push first to create the user pool; then you can easily test user access with, for instance, the Amplify withAuthenticator higher-order component in React. After that, you can move back to the local environment with the command amplify mock api and authenticate calls with the generated JWT tokens. If you want to test directly from GraphiQL after your API is configured to use Cognito, the Amplify GraphiQL Explorer provides a way to mock and change the username, groups, and email for a user and generate a local JWT token just by clicking the “Auth” button. The mocked values are used by the GraphQL Transformer @auth directive and any access rules:
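As a sketch of what that looks like from application code (assuming a test user already exists in the user pool), once the user signs in through Amplify Auth, the library attaches the Cognito JWT to subsequent GraphQL requests, and the local server evaluates it against your @auth rules:

import { Auth, API, graphqlOperation } from "aws-amplify";
import { listTodos } from "./graphql/queries";

async function listTodosAsUser(username, password) {
  await Auth.signIn(username, password); // the returned JWT is cached by Amplify
  // The mock server honors this token when checking @auth rules
  const result = await API.graphql(graphqlOperation(listTodos));
  console.log(result.data.listTodos.items);
}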

After pushing and deploying the changes to the cloud with amplify push, the aws-exports.js file is updated accordingly to point to the appropriate resources:

const awsmobile = {
    "aws_project_region": "us-east-1",
    "aws_appsync_graphqlEndpoint": "https://eriicnzxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql",
    "aws_appsync_region": "us-east-1",
    "aws_appsync_authenticationType": "API_KEY",
    "aws_appsync_apiKey": "da2-gttjhle72nf3pbfzfil2jy54ne"
};

export default awsmobile;

You can easily move back and forth between local and cloud environments as the identifiers in the exports file are updated automatically.

Local Debugging and Customizing VTL Resolvers

The local mocking environment also allows you to easily customize and debug AppSync resolvers. You can edit VTL templates locally and check whether they contain errors, including the line numbers causing problems, before pushing to AppSync. To do so, with the local API running, navigate to the folder amplify/backend/api/<your API name>/resolvers. You will see a list of resolver templates that the GraphQL Transformer automatically generated. You can modify any of them and, after you save your changes, they are immediately loaded into the locally running API service with the message Mapping template change detected. Reloading. If you inject an error, for instance by adding an extra curly brace, you see a meaningful description of the problem and the line where the error was detected, as shown below:

If you stop the mock endpoint, for instance to push your changes to the cloud, all of the templates in the amplify/backend/api/<your API name>/resolvers folder are removed except for any that you modified. When you subsequently push to the cloud, these local changes are automatically merged into your AppSync API.

As you are developing your app, you can always update the GraphQL schema located at amplify/backend/api/<your API name>/schema.graphql. You can add additional types and any of the supported GraphQL Transform directives then save your changes while the local server is still running. Any updates to the schema will be automatically detected and validated, then immediately hot reloaded into the local API. Whenever you’re happy with the backend, pushing and deploying the changes to the cloud is just one CLI command away.

Integrating Lambda Functions

Today you can already create and invoke Lambda functions written in Node.js locally with the Amplify CLI. Now, how can you go even further and integrate Lambda functions with GraphQL APIs in the new local mocking environment? It’s very easy to test customized business logic implemented with Lambda in your local API. Let’s start by executing amplify add function to create a function called “factOfTheDay”, as follows:

The function calls an external API to retrieve a fact related to the current date. Here’s the code:

const axios = require("axios");
const moment = require("moment");

exports.handler = function(event, _, callback) {
  // Build a numbersapi.com URL for today's month and day, e.g. http://numbersapi.com/6/21
  const apiUrl = `http://numbersapi.com/`;
  const day = moment().format("D");
  const month = moment().format("M");
  const factOfTheDay = apiUrl + month + "/" + day;

  axios
    .get(factOfTheDay)
    .then(response => callback(null, response.data))
    .catch(err => callback(err));
};

Since the function above uses both the axios and moment libraries, we need to install them in the function folder amplify/backend/function/factOfTheDay/src by executing either npm install axios moment or yarn add axios moment. We can also test the function locally with the command amplify mock function factOfTheDay:

In our API, we’ll add a field to the “Todo” type so that every time we read or create records, the Lambda function is invoked to retrieve the fact of the current day. To do that, we take advantage of the GraphQL Transformer @function directive and point it to our Lambda function by editing the file amplify/backend/api/localdev/schema.graphql:

type Todo @model {
  id: ID!
  name: String!
  description: String
  factOfTheDay: String @function(name: "factOfTheDay-${env}")
}

To test, we execute amplify mock to mock all the supported categories locally (in this case, API and Function) and access the local instance of the GraphiQL IDE in the browser:

As you can see, the GraphQL query successfully invokes the local Lambda function and retrieves data from the local DynamoDB table in a single call. To commit the changes and create the Lambda function in the cloud, it’s just a matter of executing amplify push.

Integrating S3 storage

Most apps need access to content such as audio, video, images, or PDFs, and S3 is the best way to store these assets. How can we easily bring S3 to our local development environment?

First, let’s add storage to our Amplify project with amplify add storage. If you have not previously added the “Auth” category to your project, the “Storage” category also asks you to set it up, and it is OK to do so. While this doesn’t impact local mocking, as there are no authorization checks for the Storage category at this time, you must configure it first for cloud deployment to make sure the S3 bucket is secured according to your application requirements:

To start testing, execute amplify mock. Alternatively, you can run amplify mock storage to mock only the Storage category. If you have not pushed Auth resources to the cloud, you need to do so by executing amplify auth push to create or update the Cognito resources, as they are needed to secure access to the actual S3 bucket.

You can use any of the storage operations provided by the Amplify library in your application code, such as put, get, remove, or list, as well as UI components to sign up or sign in users and interact with the local content. Files are saved to your local Amplify project folder under amplify/mock-data/S3. When ready, execute amplify push to create the S3 bucket in the cloud.
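For example, here is a minimal sketch exercising those operations (the key names are illustrative). While mocking, the objects end up under amplify/mock-data/S3, and the same code works unchanged against the real bucket after a push:

import { Storage } from "aws-amplify";

async function exerciseStorage() {
  await Storage.put("notes/hello.txt", "Hello from the local mock!"); // upload an object
  const url = await Storage.get("notes/hello.txt");                   // pre-signed (local) URL
  const entries = await Storage.list("notes/");                       // keys under the prefix
  await Storage.remove("notes/hello.txt");                            // delete the object
  console.log(url, entries);
}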

Conclusion

With the new local mocking environment, we want to deliver a great experience to developers using the Amplify Framework. Now you can quickly spin up local resources; test, prototype, debug, and generate code with open source tools; work on the front end; and create your fullstack serverless application in no time. On top of that, after you’re done and happy with your local development results, you can commit the code to GitHub and link your repository to the AWS Amplify Console, which provides a built-in CI/CD workflow. The console detects changes to the repository and automatically triggers builds to create your Amplify project’s backend cloud resources in multiple environments, as well as publish your front-end web application to a content delivery network. Fullstack local development, testing, debugging, CI/CD, code builds, and web publishing, made much easier and faster for developers.

It’s just Day 1 for local development, mocking, and testing on Amplify. What else would you like to see in our local mocking environment? If you have any ideas, feel free to create a feature request in our GitHub repository. Our team constantly monitors the repository, and we’re always listening to your requests. Go build (now locally, on your laptop)!

from AWS Mobile Blog

Supporting backend and internal processes with AWS AppSync multiple authorization types


Imagine a scenario where you created a mobile or web application that uses a GraphQL API built on top of AWS AppSync and Amazon DynamoDB tables. Another backend or internal process, such as an AWS Lambda function, now needs to update data in the backend tables. A new feature in AWS AppSync lets you grant the Lambda function access to make secure GraphQL API calls through the unified AppSync API endpoint.

This post explores how to use the multiple authorization type feature to accomplish that goal.

Overview

In your application, you implemented the following:

  1. Users authenticate through Amazon Cognito user pools.
  2. Users query the AWS AppSync API to view your data in the app.
  3. The data is stored in DynamoDB tables.
  4. GraphQL subscriptions reflect changes to the data back to the user.

Your app is great. It works well. However, you may have another backend or internal process that wants to update the data in the DynamoDB tables behind the scenes, such as:

  • An external data-ingestion process to an Amazon S3 bucket
  • Real-time data gathered through Amazon Kinesis Data Streams
  • An Amazon SNS message responding to an outside event

For each of these scenarios, you want to use a Lambda function to go through a unified API endpoint to update data in the DynamoDB tables. AWS AppSync can serve as an appropriate middle layer to provide this functionality.

Walkthrough

Your API authenticates and authorizes users with an Amazon Cognito user pool. Keep this in mind when considering the best way to grant the Lambda function access to make secure AWS AppSync API calls.

Choosing an authorization mode

AWS AppSync supports four different authorization types:

  • API_KEY: For using API keys
  • AMAZON_COGNITO_USER_POOLS: For using an Amazon Cognito user pool
  • AWS_IAM: For using IAM permissions
  • OPENID_CONNECT: For using your OpenID Connect provider

Before the launch of the multiple authorization type feature, you could only use one of these authorization types at a time. Now, you can mix and match them to provide better levels of access control.

To set additional authorization types, use the following schema directives:

  • @aws_api_key — A field uses API_KEY for authorization.
  • @aws_cognito_user_pools — A field uses AMAZON_COGNITO_USER_POOLS for authorization.
  • @aws_iam — A field uses AWS_IAM for authorization.
  • @aws_oidc — A field uses OPENID_CONNECT for authorization.

The AWS_IAM type is ideal for the Lambda function because the function is bound to an IAM execution role, where you specify exactly which permissions it has. Do not use the API_KEY authorization mode here; API keys are only recommended for development purposes or for use cases where it’s safe to expose a public API.

Understanding the architecture

Suppose that you have a log viewer web app that lets you view logging data:

  • It authenticates its users using an Amazon Cognito user pool and accesses an AWS AppSync API endpoint for data reads from a “Log” DynamoDB table.
  • Some backend processes publish log events and details to an SNS topic.
  • A Lambda function subscribes to the topic and invokes the AWS AppSync API to update the backend data store.

The following diagram shows the web app architecture.

The following code is your AWS AppSync GraphQL schema, with no authorization directives:

type Log {
  id: ID!
  event: String
  detail: String
}

input CreateLogInput {
  id: ID
  event: String
  detail: String
}

input UpdateLogInput {
  id: ID!
  event: String
  detail: String
}

input DeleteLogInput {
  id: ID!
}

type ModelLogConnection {
  items: [Log]
  nextToken: String
}

type Mutation {
  createLog(input: CreateLogInput!): Log
  updateLog(input: UpdateLogInput!): Log
  deleteLog(input: DeleteLogInput!): Log
}

type Query {
  getLog(id: ID!): Log
  listLogs: ModelLogConnection
}

type Subscription {
  onCreateLog: Log
    @aws_subscribe(mutations: ["createLog"])
  onUpdateLog: Log
    @aws_subscribe(mutations: ["updateLog"])
  onDeleteLog: Log
    @aws_subscribe(mutations: ["deleteLog"])
}

Configuring the AWS AppSync API

First, configure your AWS AppSync API to add the new authorization mode:

  • In the AWS AppSync console, select your API.
  • Under the name of your API, choose Settings.
  • For Default authorization mode, make sure it is set to Amazon Cognito user pool.
  • To the right of Additional authorization providers, choose New.
  • For Authorization mode, choose AWS Identity and Access Management (IAM), and then choose Submit.
  • Choose Save.

Now that you’ve set up an additional authorization provider, modify your schema to allow AWS_IAM authorization by adding @aws_iam to the createLog mutation. The new schema looks like the following code:

input CreateLogInput {
  id: ID
  event: String
  detail: String
}

input UpdateLogInput {
  id: ID!
  event: String
  detail: String
}

input DeleteLogInput {
  id: ID!
}

type ModelLogConnection {
  items: [Log]
  nextToken: String
}

type Mutation {
  createLog(input: CreateLogInput!): Log
    @aws_iam
  updateLog(input: UpdateLogInput!): Log
  deleteLog(input: DeleteLogInput!): Log
}

type Query {
  getLog(id: ID!): Log
  listLogs: ModelLogConnection
}

type Subscription {
  onCreateLog: Log
    @aws_subscribe(mutations: ["createLog"])
  onUpdateLog: Log
    @aws_subscribe(mutations: ["updateLog"])
  onDeleteLog: Log
    @aws_subscribe(mutations: ["deleteLog"])
}

type Log @aws_iam {
  id: ID!
  event: String
  detail: String
}

The @aws_iam directive now authorizes the createLog mutation. Because directives work at the field level, you also need to give AWS_IAM access to the Log type itself: either mark each field in the Log type with @aws_iam or, as in the schema above, mark the whole Log type with the directive.

You don’t have to explicitly specify the @aws_cognito_user_pools directive, because it is the default authorization type. Fields that are not marked by other directives are protected using the Amazon Cognito user pool.

Creating a Lambda function

Now that the AWS AppSync backend is set up, create a Lambda function. The function is triggered by an event published to an SNS topic, which contains logging event and detail information in the message body.

The following code example shows how the Lambda function is written in Node.js:

require('isomorphic-fetch');
const AWS = require('aws-sdk/global');
const AUTH_TYPE = require('aws-appsync/lib/link/auth-link').AUTH_TYPE;
const AWSAppSyncClient = require('aws-appsync').default;
const gql = require('graphql-tag');

const config = {
  url: process.env.APPSYNC_ENDPOINT,
  region: process.env.AWS_REGION,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: AWS.config.credentials,
  },
  disableOffline: true
};

const createLogMutation =
`mutation createLog($input: CreateLogInput!) {
  createLog(input: $input) {
    id
    event
    detail
  }
}`;

const client = new AWSAppSyncClient(config);

exports.handler = (event, context, callback) => {

  // An expected payload has the following format:
  // {
  //   "event": "sample event",
  //   "detail": "sample detail"
  // }

  // The SNS message body arrives as a JSON string, so parse it first
  const payload = JSON.parse(event['Records'][0]['Sns']['Message']);

  if (!payload['event']) {
    callback(Error("event must be provided in the message body"));
    return;
  }

  const logDetails = {
    event: payload['event'],
    detail: payload['detail']
  };

  (async () => {
    try {
      const result = await client.mutate({
        mutation: gql(createLogMutation),
        variables: {input: logDetails}
      });
      console.log(result.data);
      callback(null, result.data);
    } catch (e) {
      console.warn('Error sending mutation: ',  e);
      callback(Error(e));
    }
  })();
};

The Lambda function uses the AWS AppSync SDK to make a createLog mutation call, using the AWS_IAM authorization type.

Defining the IAM role

Now, define the IAM role that this Lambda function can assume. Grant the Lambda function appsync:GraphQL permissions for your API, as well as Amazon CloudWatch Logs permissions. You also must allow the Lambda function to be triggered by an SNS topic.

You can view the full AWS CloudFormation template that deploys the Lambda function, its IAM permissions, and supporting resources:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  GraphQLApiEndpoint:
    Type: String
    Description: The https endpoint of an AppSync API
  GraphQLApiId:
    Type: String
    Description: The id of an AppSync API
  SnsTopicArn:
    Type: String
    Description: The ARN of the SNS topic that can trigger the Lambda function
Resources:
  AppSyncSNSLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: A Lambda function that invokes an AppSync API endpoint
      Handler: index.handler
      Runtime: nodejs8.10
      MemorySize: 256
      Timeout: 10
      CodeUri: ./
      Role: !GetAtt AppSyncLambdaRole.Arn
      Environment:
        Variables:
          APPSYNC_ENDPOINT: !Ref GraphQLApiEndpoint

  AppSyncLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
      Policies:
      - PolicyName: AppSyncLambdaPolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Resource: arn:aws:logs:*
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          - Effect: Allow
            Resource:
            - !Sub 'arn:aws:appsync:${AWS::Region}:${AWS::AccountId}:apis/${GraphQLApiId}*'
            Action:
            - appsync:GraphQL

  SnsSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Endpoint: !GetAtt AppSyncSNSLambda.Arn
      Protocol: Lambda
      TopicArn: !Ref SnsTopicArn

  LambdaInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref AppSyncSNSLambda
      Action: lambda:InvokeFunction
      Principal: sns.amazonaws.com
      SourceArn: !Ref SnsTopicArn

Deploying the AWS CloudFormation template

Use the following two commands to deploy the AWS CloudFormation template. Make sure to replace all the CAPS fields with values specific to your AWS account:

aws cloudformation package --template-file "cloudformation.yaml" \
  --s3-bucket "<YOUR S3 BUCKET>" \
  --output-template-file "out.yaml"

aws cloudformation deploy --template-file out.yaml \
    --stack-name appsync-lambda \
    --s3-bucket "<YOUR S3 BUCKET>" \
    --parameter-overrides GraphQLApiEndpoint="<YOUR GRAPHQL ENDPOINT>" \
      GraphQLApiId="<YOUR GRAPHQL API ID>" \
      SnsTopicArn="<YOUR SNS TOPIC ARN>" \
    --capabilities CAPABILITY_IAM

Testing the solution

After both commands succeed, and your AWS CloudFormation template deploys, do the following:

1. Open the console and navigate to the SNS topic that you specified earlier.
2. Choose Publish message.
3. For the raw message body, enter the following:

{
   "event": "sample event",
   "detail": "sample detail"
}

4. Choose Publish message.
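Alternatively, you can publish the same test message from code with the AWS SDK for JavaScript. In this sketch, the topic ARN and region are placeholders for your own values:

const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' }); // use your topic's region

sns.publish({
  TopicArn: '<YOUR SNS TOPIC ARN>',
  Message: JSON.stringify({ event: 'sample event', detail: 'sample detail' })
}).promise()
  .then(() => console.log('Message published'))
  .catch(err => console.error(err));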

Navigate to the Log DynamoDB table that is your AWS AppSync API’s data source. You should see a new “sample event” record created by the createLog mutation.

Conclusion

With its new feature, AWS AppSync now supports multiple authorization types on a single API. This ability demonstrates how an AWS AppSync API can serve as a powerful middle layer between multiple processes while remaining a secure API for end users.

As always, AWS welcomes feedback. Please submit comments or questions below.

Jane Shen is a cloud application architect in AWS Professional Services based in Toronto, Canada.
from AWS Mobile Blog

Announcing the new Predictions category in Amplify Framework


The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, AWS announces a new category called “Predictions” in the Amplify Framework.

Using this category, you can easily add and configure AI/ML use cases for your web and mobile application with a few lines of code. You can accomplish these use cases with the Amplify CLI and either the Amplify JavaScript library (with the new Predictions category) or the generated iOS and Android SDKs for Amazon AI/ML services. You do not need any prior experience with machine learning or AI services to use this category.

Using the Amplify CLI, you can set up your backend by answering simple questions in the CLI flow. In addition, you can orchestrate advanced use cases such as on-demand indexing of images to auto-update a collection in Amazon Rekognition (the actual image bytes are not stored by Amazon Rekognition). For example, this enables you to securely upload new images using an Amplify storage object, which triggers an auto-update of the collection. You can then identify the new entities the next time you make inference calls using the Amplify library. You can also set up or import an Amazon SageMaker endpoint by using the “Infer” option in the CLI.

The Amplify JavaScript library with Predictions category includes support for the following use cases:

1. Translate text to a target language.
2. Generate speech from text.
3. Identify text from an image.
4. Identify entities from an image (for example, celebrity detection).
5. Label real-world entities within an image or document (for example, recognize a scene, objects, and activity in an image).
6. Interpret text to find insights and relationships in text.
7. Transcribe text from audio.
8. Indexing of images with Amazon Rekognition.

The supported use cases leverage the following AI/ML services:

  • Amazon Rekognition
  • Amazon Translate
  • Amazon Polly
  • Amazon Transcribe
  • Amazon Comprehend
  • Amazon Textract

The iOS and Android SDKs now include support for SageMaker runtime which you can use to call inference on your custom models hosted on SageMaker. You can also extract text and data from scanned documents using the newly added support for Amazon Textract in the Android SDK. These services add to the list of existing AI services supported in iOS and Android SDKs.

In this post, you build and host a React.js web application that takes text in English as input and translates it to Spanish. In addition, you can convert the translated text to speech in Spanish. For example, this type of use case can be added to a travel application, where you type text in English and play back the translated text in a language of your choice. To build this app, you use two capabilities from the Predictions category: text translation and generating speech from text.

After that, we go through the flow of indexing images to update a collection in Amazon Rekognition, both from the Amplify CLI and from an application.

Building the React.js Application

Prerequisites:

Install Node.js and npm if they are not already installed on your machine.

Steps

To create a new React.js app

Create a new React.js application using the following command:

$ npx create-react-app my-app

To set up your backend

Install and configure the Amplify CLI using the following command:

$ npm install -g @aws-amplify/cli
$ amplify configure

To create a new Amplify project

Run the following command from the root folder of your React.js application:

$ amplify init

Choose the following default options as shown below:

? Enter a name for the project: my-app
? Enter a name for the environment: dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building: javascript
? What javascript framework are you using: react
? Source Directory Path:  src
? Distribution Directory Path: build
? Build Command:  npm run-script build
? Start Command: npm run-script start
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use: default

To add text translation

Add the new Predictions category to your Amplify project using the following command:

$ amplify add predictions

The command line interface asks you simple questions to add AI/ML use cases. There are four options: Identify, Convert, Interpret, and Infer.

  • Choose the “Convert” option.
  • When prompted, add authentication if you have not already done so.
  • Select the following options in the CLI:
? Please select from of the below mentioned categories: Convert
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? No, I am done.
? What would you like to convert? Convert text into a different language
? Provide a friendly name for your resource: translateText6c4601e3
? What is the source language? English
? What is the target language? Spanish
? Who should have access? Auth and Guest users

To add text to speech

Run the following command to add text to speech capability to your project:

$ amplify add predictions
? Please select from of the below mentioned categories: Convert
? What would you like to convert? Convert text to speech
? Provide a friendly name for your resource: speechGeneratorb05d231c
? What is the source language? Mexican Spanish
? Select a speaker Mia - Female
? Who should have access? Auth and Guest users

To integrate the predictions library in a React.js application

Now that you’ve set up the backend, integrate the Predictions library in your React.js application.

The application UI shows “Text Translation” and “Text to Speech” with a separate button for each functionality. The output of the text translation is the translated text in JSON format. The output of Text to Speech is an audio file that can be played from the application.

First, install the Amplify and Amplify React dependencies using the following command:

$ npm install aws-amplify aws-amplify-react

Next, open src/App.js and add the following code:

import React, { useState } from 'react';
import './App.css';
import Amplify from 'aws-amplify';
import Predictions, { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
 
import awsconfig from './aws-exports';
 
Amplify.addPluggable(new AmazonAIPredictionsProvider());
Amplify.configure(awsconfig);
 
 
function TextTranslation() {
  const [response, setResponse] = useState("Input text to translate")
  const [textToTranslate, setTextToTranslate] = useState("write to translate");

  function translate() {
    Predictions.convert({
      translateText: {
        source: {
          text: textToTranslate,
          language : "en" // defaults configured in aws-exports.js
        },
        targetLanguage: "es"
      }
    }).then(result => setResponse(JSON.stringify(result, null, 2)))
      .catch(err => setResponse(JSON.stringify(err, null, 2)))
  }

  function setText(event) {
    setTextToTranslate(event.target.value);
  }

  return (
    <div className="Text">
      <div>
        <h3>Text Translation</h3>
        <input value={textToTranslate} onChange={setText}></input>
        <button onClick={translate}>Translate</button>
        <p>{response}</p>
      </div>
    </div>
  );
}
 
function TextToSpeech() {
  const [response, setResponse] = useState("...")
  const [textToGenerateSpeech, setTextToGenerateSpeech] = useState("write to speech");
  const [audioStream, setAudioStream] = useState();
  function generateTextToSpeech() {
    setResponse('Generating audio...');
    Predictions.convert({
      textToSpeech: {
        source: {
          text: textToGenerateSpeech,
          language: "es-MX" // default configured in aws-exports.js 
        },
        voiceId: "Mia"
      }
    }).then(result => {
      
      setAudioStream(result.speech.url);
      setResponse(`Generation completed, press play`);
    })
      .catch(err => setResponse(JSON.stringify(err, null, 2)))
  }

  function setText(event) {
    setTextToGenerateSpeech(event.target.value);
  }

  function play() {
    var audio = new Audio();
    audio.src = audioStream;
    audio.play();
  }
  return (
    <div className="Text">
      <div>
        <h3>Text To Speech</h3>
        <input value={textToGenerateSpeech} onChange={setText}></input>
        <button onClick={generateTextToSpeech}>Text to Speech</button>
        <h3>{response}</h3>
        <button onClick={play}>play</button>
      </div>
    </div>
  );
}
 
function App() {
  return (
    <div className="App">
      <TextTranslation />
      <hr />
      <TextToSpeech />
      <hr />
    </div>
  );
}
 
export default App;

In the previous code, the source language for translate is set by default in aws-exports.js. Similarly, the default language is set for text-to-speech in aws-exports.js. You can override these values in your application code.
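For instance, here is a sketch of such an override, passing a different source and target language at call time instead of relying on the aws-exports.js defaults (actual language support depends on Amazon Translate):

Predictions.convert({
  translateText: {
    source: {
      text: "Bonjour le monde",
      language: "fr" // overrides the default source language
    },
    targetLanguage: "de" // overrides the default target language
  }
}).then(result => console.log(result.text))
  .catch(err => console.error(err));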

To add hosting for your application

You can enable static web hosting for your React application on Amazon S3 by running the following command from the root of your application folder:

$ amplify add hosting

To publish the application, run:

$ amplify publish

The application is now hosted on the AWS Amplify Console and you can access it at a link that looks like http://my-appXXXXXXXXXXXX-hostingbucket-dev.s3-website-us-XXXXXX.amazonaws.com/

On-demand indexing of images

The “Identify entities” option in the Amplify CLI, which uses Amazon Rekognition, can detect entities such as celebrities by default. However, you can use Amplify to index new entities and auto-update the collection in Amazon Rekognition. This enables advanced use cases such as uploading a new image and thereafter having the entities in an input image recognized whenever they match an entry in the collection. Note that Amazon Rekognition does not store any image bytes.

Here is how it works at a high level, for reference:

Note that if you delete the image from S3, the entity is removed from the collection.
You can easily set up the indexing feature from the Amplify CLI using the following flow:

$ amplify add predictions
? Please select from of the below mentioned categories Identify
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? What would you like to identify? Identify Entities
? Provide a friendly name for your resource identifyEntities5a41fcea
? Would you like use the default configuration? Advanced Configuration
? Would you like to enable celebrity detection? Yes
? Would you like to identify entities from a collection of images? Yes
? How many entities would you like to identify 50
? Would you like to allow users to add images to this collection? Yes
? Who should have access? Auth users
? The CLI would be provisioning an S3 bucket to store these images please provide bucket name: myappentitybucket

If you have already set up storage from the Amplify CLI by running `amplify add storage`, the bucket that was created is reused. To upload images for indexing from the CLI, you can run `amplify predictions upload`, and it prompts you for a folder location containing your images.

After you have set up the backend through the CLI, you can use an Amplify storage object to add images to the S3 bucket, which triggers the auto-indexing of images and updates the collection in Amazon Rekognition.

In src/App.js, add the following function, which uploads the image test.jpg to Amazon S3:

// Note: this snippet also requires the Storage module: import { Storage } from 'aws-amplify';
function PredictionsUpload() {

  function upload(event) {
    const { target: { files } } = event;
    const [file,] = files || [];
    Storage.put('test.jpg', file, {
      level: 'protected',
      customPrefix: {
        protected: 'protected/predictions/index-faces/',
      }
    });
  }

  return (
    <div className="Text">
      <div>
        <h3>Upload to predictions s3</h3>
        <input type="file" onChange={upload}></input>
      </div>
    </div>
  );
}

Next, call the Predictions.identify() function to identify entities in an input image using the following code. Note that we have to set “collection: true” in the call to identify.

function EntityIdentification() {
  const [response, setResponse] = useState("Click upload for test ")
  const [src, setSrc] = useState("");

  function identifyFromFile(event) {
    setResponse('searching...');
    
    const { target: { files } } = event;
    const [file,] = files || [];

    if (!file) {
      return;
    }
    Predictions.identify({
      entities: {
        source: {
          file,
        },
        collection: true,
        celebrityDetection: true
      }
    }).then(result => {
      console.log(result);
      const entities = result.entities;
      let imageId = ""
      entities.forEach(({ boundingBox, metadata: { name, externalImageId } }) => {
        const {
          width, // ratio of overall image width
          height, // ratio of overall image height
          left, // left coordinate as a ratio of overall image width
          top // top coordinate as a ratio of overall image height
        } = boundingBox;
        imageId = externalImageId;
        console.log({ name });
      })
      if (imageId) {
        Storage.get("", {
          customPrefix: {
            public: imageId
          },
          level: "public",
        }).then(setSrc); 
      }
      console.log({ entities });
      setResponse(imageId);
    })
      .catch(err => console.log(err))
  }

  return (
    <div className="Text">
      <div>
        <h3>Entity identification</h3>
        <input type="file" onChange={identifyFromFile}></input>
        <p>{response}</p>
        { src && <img src={src}></img>}
      </div>
    </div>
  );
}

To learn more about the predictions category, visit our documentation.

Feedback

We hope you like these new features! Let us know how we are doing, and submit any feedback in the Amplify Framework Github Repository. You can read more about AWS Amplify on the AWS Amplify website.

from AWS Mobile Blog

Amplify Framework adds support for AWS Lambda Triggers in Auth and Storage categories


The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, we’re happy to announce that you can set up AWS Lambda triggers directly from the Amplify CLI.

Using Lambda triggers, you can call event-based Lambda functions for authentication, database actions, and storage operations from other AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon Cognito, and Amazon DynamoDB. Now, the Amplify CLI allows you to enable and configure these triggers. The CLI further simplifies the process by providing trigger templates that you can customize to suit your use case.

The Lambda trigger capabilities for Auth category include:

  1. Add Google reCaptcha Challenge: This enables you to add Google’s reCAPTCHA implementation to your mobile or web app.
  2. Email verification link with redirect: This trigger enables you to define an email message that can be used for an account verification flow.
  3. Add user to an Amazon Cognito User Pools group: This enables you to add a user to an Amazon Cognito User Pools group upon account registration.
  4. Email domain filtering: This enables you to define email domains that you would like to allow or block during sign-up.
  5. Custom Auth Challenge Flow: This enables you to add a custom auth flow to your mobile and web application by providing a basic skeleton that you can edit to achieve custom authentication in your application.

A Lambda trigger for the Storage category can be added when creating or updating the storage resource using the Amplify CLI.

Auth Triggers for Authentication with Amazon Cognito

The Lambda triggers for Auth enable you to build custom authentication flows in your mobile and web application. These triggers can be associated with Amazon Cognito User Pool operations such as sign-up, account confirmation, and sign-in. The Amplify CLI provides template triggers for the capabilities listed above, which can be customized to suit your use case.

A custom authentication flow using Amazon Cognito User Pools typically comprises three steps:

  1. Define Auth Challenge: Determines the next challenge in the custom auth flow.
  2. Create Auth Challenge: Creates a challenge in the custom auth flow.
  3. Verify Auth Challenge: Determines if a response is correct in the custom auth flow.

When you add auth to your Amplify project, the CLI asks whether you want to add capabilities for custom authentication. It generates the trigger templates for each step in your custom auth flow, depending on the capability chosen, and the generated templates can be edited to fit your requirements (a minimal sketch of one such handler is shown below). Once complete, push your project using the amplify push command. For more information on these capabilities, refer to our documentation.
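For illustration only, here is a minimal sketch of what a “Define Auth Challenge” handler can look like for a flow with a single custom challenge; the templates generated by the CLI are the ones to start from:

exports.handler = (event, context, callback) => {
  const session = event.request.session;
  if (session.length === 0) {
    // First attempt: present the custom challenge
    event.response.issueTokens = false;
    event.response.failAuthentication = false;
    event.response.challengeName = 'CUSTOM_CHALLENGE';
  } else if (session.length === 1 && session[0].challengeResult === true) {
    // The custom challenge was answered correctly: issue tokens
    event.response.issueTokens = true;
    event.response.failAuthentication = false;
  } else {
    // Wrong answer: fail authentication
    event.response.issueTokens = false;
    event.response.failAuthentication = true;
  }
  callback(null, event);
};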

Here is an example of how you add one of these custom auth capabilities in your application.

Adding a new user to group in Amazon Cognito

Using Amazon Cognito User Pools, you can create and manage groups, add users to groups, and remove users from groups. With groups, you can create collections of users to manage their permissions or to represent different user types.

You can now use the Amplify CLI to add a Lambda trigger to add a user to a group after they have successfully signed up. Here’s how it works.

Creating the authentication service and configuring the Lambda Trigger

From the CLI, create a new Amplify project with the following command:

amplify init

Next, add authentication with the following command:

amplify add auth

The command line interface then walks you through the following steps for adding authentication:

? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? Yes, I want to make some additional changes.
? What attributes are required for signing up? Email
? Do you want to enable any of the following capabilities?
 ◯ Add Google reCaptcha Challenge
 ◯ Email Verification Link with Redirect
❯◉ Add User to Group
 ◯ Email Domain Filtering (blacklist)
 ◯ Email Domain Filtering (whitelist)
 ◯ Custom Auth Challenge Flow (basic scaffolding - not for production)
 ? Enter the name of the group to which users will be added. STUDENTS
 ? Do you want to edit the local PostConfirmation lambda function now? No
 ? Do you want to edit your add-to-group function now? Yes

The interface then opens the appropriate Lambda function template, which you can edit in your text editor. The code for the function is located at amplify/backend/function/<functionname>/src/add-to-group.js.

The Lambda function that you write for this example adds new users to a group called STUDENTS when they have an .edu email address. This function triggers after the sign-up successfully completes.

Update the Lambda function add-to-group.js with the following code:

const aws = require('aws-sdk');

exports.handler = (event, context, callback) => {
  const cognitoidentityserviceprovider = new aws.CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });

  // Grab the top-level domain of the user's email address (for example, "edu")
  const email = event.request.userAttributes.email.split('.')
  const domain = email[email.length - 1]

  if (domain === 'edu') {
    const params = {
      GroupName: 'STUDENTS',
      UserPoolId: event.userPoolId,
      Username: event.userName,
    }
  
    cognitoidentityserviceprovider.adminAddUserToGroup(params, (err) => {
      if (err) { return callback(err) }
      callback(null, event);
    })
  } else {
    callback(null, event)
  }
}

To deploy the authentication service and the Lambda function, run the following command:

amplify push

Now, when a user signs up with an .edu email address, they are automatically placed in the STUDENTS group.
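You can exercise the trigger from application code as well. Here is a quick sketch using Amplify Auth (the user details are placeholders); signing up and then confirming an account with an .edu address lands the user in the STUDENTS group:

import { Auth } from 'aws-amplify';

async function signUpStudent() {
  await Auth.signUp({
    username: 'jane',
    password: 'MySecurePassword1!',
    attributes: { email: 'jane@example.edu' }
  });
  // After the account is confirmed, the PostConfirmation trigger runs
  // and the add-to-group function places the user in the STUDENTS group.
}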

Integrating with a client application

Now that you have the authentication service up and running, let’s integrate with a React application that signs the user in and recognizes that the user is part of the STUDENTS group.

First, install the Amplify and Amplify React dependencies:

npm install aws-amplify aws-amplify-react

Next, open src/index.js and add the following code to configure the app to recognize the Amplify project configuration:

import Amplify from 'aws-amplify'
import config from './aws-exports'
Amplify.configure(config)

Next, update src/App.js. The code recognizes the user groups of a user after they have signed in and displays a welcome message if the user is in the STUDENTS group.

// src/App.js
import React, { useEffect, useState } from 'react'
import logo from './logo.svg'
import './App.css'
import { withAuthenticator } from 'aws-amplify-react'
import { Auth } from 'aws-amplify'

function App() {
  const [isStudent, updateStudentInfo] = useState(false)
  useEffect(() => {
    /* Get the current user's session, including the Cognito User Pool JWT tokens */
    Auth.currentSession()
      .then(cognitoUser => {
        const { idToken: { payload }} = cognitoUser
        /* Loop through the groups that the user is a member of */
        /* Set isStudent to true if the user is part of the STUDENTS group */
        payload['cognito:groups'] && payload['cognito:groups'].forEach(group => {
          if (group === 'STUDENTS') updateStudentInfo(true)
        })
      })
      .catch(err => console.log(err));
  }, [])
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        { isStudent && <h1>Welcome, Student!</h1> }
      </header>
    </div>
  );
}

export default withAuthenticator(App, { includeGreetings: true })

Now, if the user is part of the STUDENTS group, they will get a specialized greeting.

Storage Triggers for Amazon S3 and Amazon DynamoDB

With this release, we’ve also enabled the ability to set up Lambda triggers for Amazon S3 and Amazon DynamoDB. This means you can execute a Lambda function on events such as create, update, read, and write. When adding or configuring storage from the Amplify CLI, you now have the option to add and configure a storage trigger.

Resizing an image with AWS Lambda and Amazon S3

Let’s take a look at how to use one of the new triggers to resize an image into a thumbnail after it has been uploaded to an S3 bucket.

From the CLI, create a new Amplify project with the following command:

amplify init

Next, add storage with the following command:

amplify add storage

The interface then walks you through the add storage setup.

? Please select from one of the below mentioned services: Content (Images, audio, video, etc.)
? You need to add auth (Amazon Cognito) to your project in order to add storage for user files. Do you want to add auth now? Yes
? Do you want to use the default authentication and security configuration? Default configuration
? How do you want users to be able to sign in? Username
? Do you want to configure advanced settings? No, I am done.
? Please provide a friendly name for your resource that will be used to label this category in the project: MyS3Example
? Please provide bucket name: <YOUR_UNIQUE_BUCKET_NAME>
? Who should have access: Auth and guest users
? What kind of access do you want for Authenticated users? create/update, read, delete
? What kind of access do you want for Guest users? read
? Do you want to add a Lambda Trigger for your S3 Bucket? Y
? Select from the following options: Create a new function

The CLI then generates a code template for the new Lambda function, which you can modify as needed. It will be located at amplify/backend/function/<functionname>/src/index.js.

Replace the code in index.js with the following code:

const gm = require('gm').subClass({ imageMagick: true })
const aws = require('aws-sdk')
const s3 = new aws.S3()

const WIDTH = 100
const HEIGHT = 100

exports.handler = (event, context, callback) => {
  const BUCKET = event.Records[0].s3.bucket.name

  /* Get the image data we will use from the first record in the event object */
  const KEY = event.Records[0].s3.object.key
  const PARTS = KEY.split('/')

  /* Check to see if the base folder is already set to thumbnails, if it is we return so we do not have a recursive call. */
  const BASE_FOLDER = PARTS[0]
  if (BASE_FOLDER === 'thumbnails') return callback(null)

  /* Stores the main file name in a variable */
  let FILE = PARTS[PARTS.length - 1]

  s3.getObject({ Bucket: BUCKET, Key: KEY }).promise()
    .then(image => {
      gm(image.Body)
        .resize(WIDTH, HEIGHT)
        .setFormat('jpeg')
        .toBuffer(function (err, buffer) {
          if (err) {
            console.log('error storing and resizing image: ', err)
            callback(err)
          }
          else {
            s3.putObject({ Bucket: BUCKET, Body: buffer, Key: `thumbnails/thumbnail-${FILE}` }).promise()
            .then(() => { callback(null) })
            .catch(err => { callback(err) })
          }
        })
    })
    .catch(err => {
      console.log('error resizing image: ', err)
      callback(err)
    })
}

You can trace the execution of the code above in Amazon CloudWatch Logs on an event such as upload to the S3 bucket.

Next, install the gm module, a Node.js wrapper for the GraphicsMagick and ImageMagick libraries, in the Lambda function directory. This ensures that you have the needed dependencies to execute the Lambda function.

cd amplify/backend/function/<functionname>/src

npm install gm

cd ../../../../../

To deploy the services, run the following command:

amplify push

Next, visit the S3 console, open your bucket and upload an image. Once the upload has completed, a folder named thumbnails will be created and the resized image will be stored there.

To learn more about creating storage triggers, check out the documentation.

Feedback

We hope you like these new features! As always, let us know how we’re doing, and submit any requests in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.

from AWS Mobile Blog

Deploy files stored on Amazon S3, Dropbox, or your Desktop to the AWS Amplify Console


This article was written by Nikhil Swaminathan, Sr. Product Manager, AWS.

AWS Amplify recently launched a manual deploy option, providing you with the ability to host a static web app without connecting to a Git repository. You can deploy files stored on your desktop, Amazon S3, or files stored with any cloud provider.

The Amplify Console offers fully managed hosting with features such as instant cache invalidation, atomic deploys, redirects, and custom domain management. You can now use Amplify hosting with your own CI workflows, or to quickly generate a shareable URL to share a prototype.

This post describes how to deploy files manually from several different locations.

Overview

There are three locations from where you can manually deploy files:

  1. Deploy a folder from your desktop.
  2. Deploy files from S3 – upload files to an S3 bucket to push updates to your site automatically.
  3. Any URL – for example, upload files to your Dropbox account to host a site.

Solution

First, if you have an existing app, run the following commands to create an output directory (typically named dist or public):

  1. cd my-app (or create a new app with create-react-app my-app)
  2. npm install
  3. npm run build

Deploy a folder from your desktop

The easiest way to host your site is to drag a folder from your desktop:

  1. Log in to the Amplify Console
  2. Choose Deploy without a Git provider
  3. On the following screen, enter your app name and the name of your environment. Every Amplify app can have multiple environments. For example, you can host both a dev and prod version of your site.
  4. Drag and drop the output folder as shown below and choose Save and Deploy
  5. That’s it! Your site should be live at https://environmentname.appid.amplifyapp.com. Try making some code changes and upload a staging version of your site by choosing Add new environment.

Deploy files from Dropbox

  1. Log in to your Dropbox account and upload your build artifacts zip file to Dropbox.
  2. Create a shared link for the uploaded zip file. The link looks like https://www.dropbox.com/s/a1b2c3d4ef5gh6/example.docx?dl=0. Change the query param at the end of the URL to “dl=1” to force the browser to download the link.
  3. From the Amplify Console, choose Deploy without a Git provider and then choose Any URL. Provide the URL and choose Save and deploy. Your site is now live!

Deploy files from Amazon S3

Many developers use S3 for static hosting. You can continue to use S3 to sync your files while also leveraging the hosting features offered by the Amplify Console. For example, you can automatically trigger updates to your site using the Amplify Console, S3, and AWS Lambda.

Set up an S3 bucket

For this example, set up an S3 bucket to automatically trigger deployments to your site on any update:

1. In the S3 console, select an existing bucket or create a new one

2. Build your app locally and upload a zipped version of your build artifacts. For this example, use the AWS CLI to upload your file to S3 (you can also use the S3 console):

cd myawesomeapp
yarn run build
cd public #build directory
zip -r myapp.zip *
aws s3 cp myapp.zip s3://bucketname

3. In the Amplify Console, choose Deploy without a Git provider

4. For Method, choose Amazon S3, and for Bucket, choose the bucket you just created. The zip file that you uploaded should automatically appear in the Zip file list.

5. Choose Save and deploy. Your site should be live at https://environmentname.appid.amplifyapp.com.

Set up an S3 trigger

Now, set up an S3 trigger so that your site is updated automatically every time you push a new change. Use the same setup for a continuous delivery service such as AWS CodePipeline, or for GitLab or BitBucket pipelines.

1. In the Lambda console, create a new function with a new role by choosing Author from scratch

2. Copy the following code into the Lambda function editor:

// Placeholders: set these to your Amplify app ID and branch name.
const appId = "YOUR_APP_ID";
const branchName = "YOUR_BRANCH_NAME";

const aws = require("aws-sdk");
const amplify = new aws.Amplify();

exports.handler = async function (event) {
    // Read the bucket and object key from the S3 notification event.
    const Bucket = event.Records[0].s3.bucket.name;
    const objectKey = event.Records[0].s3.object.key;
    // Start an Amplify Console deployment from the uploaded zip file.
    await amplify.startDeployment({
        appId,
        branchName,
        sourceUrl: `s3://${Bucket}/${objectKey}`
    }).promise();
}
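
For reference, the function reads only two fields from the S3 notification payload. A trimmed example of the event Lambda receives (an illustrative sketch; real events carry many more fields):

{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "bucketname" },
        "object": { "key": "myapp.zip" }
      }
    }
  ]
}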

3. Give the Lambda function access to S3 and the Amplify Console.

  • Choose Amazon CloudWatch Logs and then choose Manage these permissions. The IAM Console opens up in a new tab.

  • In the IAM console, on the Permissions tab, choose Attach policies. For Policy name, choose AmazonS3FullAccess.

  • To give the function access to deploy to the Amplify Console, choose Add inline policy. On the Create policy screen, under Visual editor, select Amplify as the service.

  • Choose Actions, expand Manual actions, and select the All Amplify actions check box. Under Resources, choose All resources, then choose Review policy and save the policy.

4. In the Lambda console, you should see the designer updated with the correct permissions.

5. Now, add an S3 trigger for the bucket so that any updates to the S3 bucket trigger the Lambda function. On the Add trigger screen, configure the trigger with the following values:

  • Bucket: Select the S3 bucket that you used earlier.
  • Event type: Choose All object create events

Test the setup

Test to make sure that your setup works:

  1. In the S3 console (or at the command line), upload a new zip artifact.
  2. Navigate to the Amplify Console. You should see a new deployment start.

Success! Use this setup to automatically trigger a deployment every time you push to the S3 bucket, either from your desktop or from the continuous delivery pipeline.

Conclusion

This post showed you how to use the new manual deploy option in the Amplify Console. This gives you the ability to use the Amplify Console to host a static web app without connecting to a Git repository. You can now manually deploy files in three different ways: from your desktop, any URL, or S3. Visit the Amplify Console homepage to learn more.


from AWS Mobile Blog

Deploy a VueJS app with the Amplify Console using AWS CloudFormation

Deploy a VueJS app with the Amplify Console using AWS CloudFormation

This article was written by Simon Thulbourn, Solutions Architect, AWS.

Developers and operations teams love automation: it introduces repeatability into their applications, and provisioning infrastructure components is no different. Creating and managing resources through AWS CloudFormation is a powerful way to run and rerun the same template to create resources in AWS across accounts.

Today, the Amplify Console launched support for AWS CloudFormation resources, giving developers reliable, repeatable access to the Amplify Console service. The Amplify Console offers three new resources:

  • AWS::Amplify::App
  • AWS::Amplify::Branch
  • AWS::Amplify::Domain

For newcomers, AWS Amplify Console provides a Git-based workflow that enables developers to build, deploy, and host web applications, whether they’re built with Angular, ReactJS, VueJS, or something else. These web applications can then consume APIs based on GraphQL or Serverless technologies, enabling fullstack serverless applications on AWS.

Working with CloudFormation

As an example, you’ll deploy the Todo example app from VueJS using Amplify Console and the new CloudFormation resources.

You’ll start by forking the Vue repository on GitHub to your account. You have to fork the repository since Amplify Console will want to add a webhook and clone the repository for future builds.

You’ll also create a new personal access token on GitHub since you’ll need one to embed in the CloudFormation. You can read more about creating personal access tokens on GitHub’s website. The token will need the “repo” OAuth scope.

Note: Personal access tokens should be treated as a secret.

You can deploy the Todo application using the CloudFormation template at the end of this blog post. This CloudFormation template will create an Amplify Console App, Branch & Domain with a TLS certificate and an IAM role. To deploy the CloudFormation template, you can either use the AWS Console or the AWS CLI. In this example, we’re using the AWS CLI:

aws cloudformation deploy \
  --template-file ./template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides \
      OAuthToken=<GITHUB PERSONAL ACCESS TOKEN> \
      Repository=https://github.com/sthulb/vue \
      Domain=example.com \
  --stack-name TodoApp

After deploying the CloudFormation template, you need to go into the Amplify Console and trigger a build: CloudFormation can provision the resources, but it cannot trigger actions such as builds.
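
If you'd rather not click through the console, the Amplify API also exposes a StartJob action you can call once the stack is deployed. A hedged Node.js sketch using the AWS SDK for JavaScript (v2); APP_ID is a placeholder for the AppId attribute of the AmplifyApp resource:

// Kick off the first build of the master branch (sketch; APP_ID is a placeholder).
const AWS = require('aws-sdk');
const amplify = new AWS.Amplify();

amplify.startJob({
  appId: 'APP_ID',      // e.g., the AppId output of the AmplifyApp resource
  branchName: 'master',
  jobType: 'RELEASE'    // build and deploy the head of the branch
}).promise()
  .then((res) => console.log('Started job', res.jobSummary.jobId))
  .catch(console.error);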

Diving deeper into the template

The CloudFormation template needs three inputs: your forked GitHub repository URL, the OAuth token created above, and a custom domain you own. The AmplifyApp resource is your project definition; it is a collection of all the branches (AmplifyBranch resources) in your repository. The BuildSpec describes the settings used to build and deploy the branches in your app. In this example, we are deploying an example Todo app, which consists of four files. The Todo app expects a vue.min.js file to be available at https://a1b2c3.amplifyapp.com/dist/vue.min.js. As part of the buildspec, we made sure vue.min.js was in the deployment artifact, but not in the right location, so we used the CustomRules property to rewrite requests for https://a1b2c3.amplifyapp.com/dist/vue.min.js to serve https://a1b2c3.amplifyapp.com/vue.min.js.

The AmplifyDomain resource allows you to connect your domain (https://yourdomain.com) or a subdomain (https://foo.yourdomain.com) so end users can start visiting your site.

Template

AWSTemplateFormatVersion: 2010-09-09

Parameters:
  Repository:
    Type: String
    Description: GitHub Repository URL

  OauthToken:
    Type: String
    Description: GitHub personal access token
    NoEcho: true

  Domain:
    Type: String
    Description: Domain name to host application

Resources:
  AmplifyRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - amplify.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: Amplify
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: "amplify:*"
                Resource: "*"

  AmplifyApp:
    Type: "AWS::Amplify::App"
    Properties:
      Name: TodoApp
      Repository: !Ref Repository
      Description: VueJS Todo example app
      OauthToken: !Ref OauthToken
      EnableBranchAutoBuild: true
      BuildSpec: |-
        version: 0.1
        frontend:
          phases:
            build:
              commands:
                - cp dist/vue.min.js examples/todomvc/
          artifacts:
            baseDirectory: examples/todomvc/
            files:
              - '*'
      CustomRules:
        - Source: /dist/vue.min.js
          Target: /vue.min.js
          Status: '200'
      Tags:
        - Key: Name
          Value: Todo
      IAMServiceRole: !GetAtt AmplifyRole.Arn

  AmplifyBranch:
    Type: AWS::Amplify::Branch
    Properties:
      BranchName: master
      AppId: !GetAtt AmplifyApp.AppId
      Description: Master Branch
      EnableAutoBuild: true
      Tags:
        - Key: Name
          Value: todo-master
        - Key: Branch
          Value: master

  AmplifyDomain:
    Type: AWS::Amplify::Domain
    Properties:
      DomainName: !Ref Domain
      AppId: !GetAtt AmplifyApp.AppId
      SubDomainSettings:
        - Prefix: master
          BranchName: !GetAtt AmplifyBranch.BranchName

Outputs:
  DefaultDomain:
    Value: !GetAtt AmplifyApp.DefaultDomain

  MasterBranchUrl:
    Value: !Join [ ".", [ !GetAtt AmplifyBranch.BranchName, !GetAtt AmplifyDomain.DomainName ]]

Conclusion

To start using Amplify Console’s CloudFormation resources, visit the CloudFormation documentation page.

Acknowledgements

All of the code in the VueJS repository is licensed under the MIT license and property of Evan You and contributors.


from AWS Mobile Blog

Amplify Framework Adds Support for AWS Lambda Functions and Amazon DynamoDB Custom Indexes in GraphQL Schemas

Amplify Framework Adds Support for AWS Lambda Functions and Amazon DynamoDB Custom Indexes in GraphQL Schemas

Written by Kurt Kemple, Sr. Developer Advocate at AWS, Nikhil Dabhade, Sr. Product Manager at AWS, & Me!

The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, we’re happy to announce new features for the Function and API categories in the Amplify CLI.

It’s now possible to add an AWS Lambda function as a data source for your AWS AppSync API using the GraphQL transformer that is included in the Amplify CLI. You can also grant the Lambda function permissions to interact with other AWS resources; the CLI updates the associated IAM execution role policies for you, with no manual IAM policy changes required.

The GraphQL transformer also includes a new @key directive that simplifies the syntax for creating custom indexes and performing advanced query operations with Amazon DynamoDB. This streamlines the process of configuring complex key structures to fit various access patterns when using DynamoDB as a data source.

Adding a Lambda function as a data source for your AWS AppSync API

The new @function directive in the GraphQL transform library provides an easy mechanism to call a Lambda function from a field in your AppSync API. To connect a Lambda data source, add the @function directive to a field in your annotated GraphQL schema that’s managed by the Amplify CLI. You can also create and deploy the Lambda functions by using the Amplify CLI.

Let’s look at how you can use this feature.

What are we building?

In this blog post, we will create a React JavaScript application that uses a Lambda function as a data source for its GraphQL API. The Lambda function writes to storage, which in this case is Amazon DynamoDB. In addition, we will illustrate how easily you can grant create/read/update/delete permissions for interacting with AWS resources such as DynamoDB from a Lambda function.

Setting up the project

Pre-requisites

Download, install and configure the Amplify CLI.

$ npm install -g @aws-amplify/cli 
$ amplify configure

Next, create your project if you don’t already have one. We’re creating a React application here, but you can choose to create a project with any other Amplify-supported framework such as Angular, Vue or Ionic.

$ npx create-react-app my-project

Next, change into the project directory, initialize Amplify, and install the Amplify library:

$ cd my-project
$ amplify init
$ npm i aws-amplify

The ‘amplify init’ command initializes the project, sets up deployment resources in the cloud, and makes your project ready for Amplify.

Adding storage to your project

Next, we will set up the backend to add storage using Amazon DynamoDB for your React JavaScript application.

$ amplify add storage
? Please select from one of the below mentioned services NoSQL Database
Welcome to the NoSQL DynamoDB database wizard
This wizard asks you a series of questions to help determine how to set up your NoSQL database table.

? Please provide a friendly name for your resource that will be used to label this category in the project: teststorage
? Please provide table name: teststorage
You can now add columns to the table.
? What would you like to name this column: id
? Please choose the data type: number
? Would you like to add another column? Yes
? What would you like to name this column: email
? Please choose the data type: string
? Would you like to add another column? Yes
? What would you like to name this column: createdAt
? Please choose the data type: string
? Would you like to add another column? No
Before you create the database, you must specify how items in your table are uniquely organized. You do this by specifying a primary key. The primary key uniquely identifies each item in the table so that no two items can have the same key.
This can be an individual column, or a combination that includes a primary key and a sort key.
To learn more about primary keys, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey
? Please choose partition key for the table: id
? Do you want to add a sort key to your table? No
You can optionally add global secondary indexes for this table. These are useful when you run queries defined in a different column than the primary key.
To learn more about indexes, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.SecondaryIndexes
? Do you want to add global secondary indexes to your table? No
Successfully added resource teststorage locally

Adding a function to your project

Next, we will add a Lambda function by using the Amplify CLI. We will also grant the Lambda function permissions to interact with the DynamoDB table that we created in the previous step.

$ amplify add function
Using service: Lambda, provided by: awscloudformation
? Provide a friendly name for your resource to be used as a label for this category in the project: addEntry
? Provide the AWS Lambda function name: addEntry
? Choose the function template that you want to use: Hello world function
? Do you want to access other resources created in this project from your Lambda function? Yes
? Select the category storage
? Select the resources for storage category teststorage
? Select the operations you want to permit for teststorage create, read, update, delete

You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
var storageTeststorageArn = process.env.STORAGE_TESTSTORAGE_ARN

? Do you want to edit the local lambda function now? Yes 

This will open the Hello world function template file ‘index.js’ in the editor you selected during the ‘amplify init’ step.

Auto populating environment variables for your Lambda function

The Amplify CLI adds the environment variables that represent the AWS resources the function interacts with, in this case DynamoDB, as comments at the top of your index.js file for easy reference. We want the Lambda function to add an entry to the DynamoDB table with the parameters we pass to the GraphQL API. Replace the function code with the following, which uses the environment variables for the DynamoDB table name and region:

/* Amplify Params - DO NOT EDIT
You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
var storageTeststorageArn = process.env.STORAGE_TESTSTORAGE_ARN

Amplify Params - DO NOT EDIT */

var AWS = require('aws-sdk');
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
AWS.config.update({region: region});
var ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
var ddb_table_name = storageTeststorageName
var ddb_primary_key = 'id';

// Write a single item to the DynamoDB table.
function write(params, context){
    ddb.putItem(params, function(err, data) {
    if (err) {
      console.log("Error", err);
    } else {
      console.log("Success", data);
    }
  });
}

exports.handler = function (event, context) { //eslint-disable-line

  // Convert the GraphQL arguments into a DynamoDB item.
  var params = {
    TableName: ddb_table_name,
    Item: AWS.DynamoDB.Converter.marshall(event.arguments)
  };

  console.log('len: ' + Object.keys(event).length)
  if (Object.keys(event).length > 0) {
    write(params, context);
  }
};

After you replace the function, jump back to the command line and press Enter to continue.

Next, run the ‘amplify push’ command to deploy your changes to the AWS cloud.

$ amplify push

Adding and updating the Lambda execution IAM role for Amplify managed resources

When you run the ‘amplify push’ command, the IAM execution role policies associated with the permissions you granted earlier are updated automatically to allow the Lambda function to interact with DynamoDB.
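
For reference, the statement added to the function’s execution role looks roughly like the following. This is an illustrative sketch; the exact action list, region, account ID, and table name depend on your project and environment:

{
  "Effect": "Allow",
  "Action": [
    "dynamodb:PutItem",
    "dynamodb:GetItem",
    "dynamodb:Query",
    "dynamodb:Scan",
    "dynamodb:UpdateItem",
    "dynamodb:DeleteItem"
  ],
  "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/teststorage"
}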

Setting up the API

After completing the function setup, the next step is to add a GraphQL API to your project:

$ amplify add api
? Please select from one of the below mentioned services GraphQL
? Provide API name: myproject
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes

This will open the schema.graphql file in the editor you selected during the ‘amplify init’ step.

Replace the annotated schema template located in your <project-root>/amplify/backend/api/<api-name>/schema.graphql file with the following code:

type Customer @model {
  id: ID!
  name: String!
  createdAt: String
}

type Mutation {
  addEntry(id: Int, email: String, createdAt: String): String @function(name: "addEntry-${env}")
}

Check if the updates to your schema are compiled successfully by running the following command:

$ amplify api gql-compile

Now that your API is configured, run the amplify push command to deploy your changes to create the corresponding AWS backend resources.

When you’re prompted about code generation for your API, choose Yes. You can accept all default options. This generates queries, mutations, subscriptions, and boilerplate code for the Amplify libraries to consume. For more information, see Codegen in the Amplify CLI docs.
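
For reference, the generated mutation document that the client code below imports looks roughly like this; the exact output comes from ‘amplify codegen’, so treat this as a sketch:

// src/graphql/mutations.js (sketch of the generated output)
export const addEntry = /* GraphQL */ `
  mutation AddEntry($id: Int, $email: String, $createdAt: String) {
    addEntry(id: $id, email: $email, createdAt: $createdAt)
  }
`;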

Accessing the function from your project

Now that your function and API are configured, you can access them through the API class, which is part of the Amplify JavaScript Library.

Open App.js and add the following import and call to Amplify API as shown below:

import awsconfig from './aws-exports';
import { API, graphqlOperation } from 'aws-amplify';
import { addEntry } from './graphql/mutations';

API.configure(awsconfig);

// GraphQL operations are asynchronous, so call them from an async function.
async function createEntry() {
  const entry = { id: 1, email: '[email protected]', createdAt: '2019-5-29' };
  const data = await API.graphql(graphqlOperation(addEntry, entry));
  console.log(data);
}

Running the app

Now that you have your application code complete, run the application and verify that the API call outputs “Success”.

Setting Amazon DynamoDB custom indexes in your GraphQL schemas

When building an application on top of DynamoDB, it helps to first think about access patterns. The new @key directive, which is a part of the GraphQL transformer in the Amplify CLI, makes it simple to configure complex key structures in DynamoDB that can fit your access patterns.

Let’s say we are using DynamoDB as the backend for your GraphQL API. The initial GraphQL schema we can use to represent the @model types Customer and Item is shown below:

type Customer @model {
  email: String!
  username: String!
}

type Item @model {
    orderId: ID!
    status: Status!
    createdAt: AWSDateTime!
    name: String!
}

enum Status {
    DELIVERED
    IN_TRANSIT
    PENDING
    UNKNOWN
}

Access Patterns

For example, let’s say this application needs to facilitate the following access patterns:

  • Get customers by email – email is the primary key.
  • Get items by status and by createdAt – orderId is the primary key.

Let’s walk through how you would accomplish these use cases and call the APIs for these queries in your React JavaScript application.

Assumption: You completed the pre-requisites and created your React JavaScript application as shown in section 1.

Create an API

First, we will create a GraphQL API using the ‘amplify add api’ command:

$ amplify add api
? Please select from one of the below mentioned services GraphQL
? Provide API name: myproject
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes
? Press enter to continue

This will open the schema.graphql file under <myproject>/amplify/backend/api/myproject/schema.graphql

Modifying the schema.graphql file

Let’s dive into the details of the new @key directive.

Query by primary key

Add the following Customer @model type to your schema.graphql file:

type Customer @model @key(fields: ["email"]) {
    email: String!
    username: String
}

For the Customer @model type, a @key without a name specifies the key for the DynamoDB table’s primary index. Here, the hash key for the table’s primary index is email. You can provide only one @key without a name per @model type.
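
With email as the primary key, the transformer generates a getCustomer query that takes email directly. A hedged usage sketch, assuming codegen has run and aws-exports is configured as in the earlier examples:

import { API, graphqlOperation } from 'aws-amplify';
import { getCustomer } from './graphql/queries'; // generated by codegen

// Look up a customer by the table's hash key (placeholder email).
async function lookupCustomer() {
  const data = await API.graphql(
    graphqlOperation(getCustomer, { email: '[email protected]' })
  );
  console.log(data);
}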

Query by composite keys (one or more fields form the sort key)

type Item @model
    @key(fields: ["orderId", "status", "createdAt"])
    @key(name: "ByStatusAndCreatedAt", fields: ["status", "createdAt"], queryField: "itemsByStatusAndCreatedAt")
{
    orderId: ID!
    status: Status!
    createdAt: AWSDateTime!
    name: String!
}

enum Status {
    DELIVERED
    IN_TRANSIT
    PENDING
    UNKNOWN
}

Let’s break down the Item @model type above.

DynamoDB lets you query by at most two attributes. We added three fields to our first key directive, @key(fields: ["orderId", "status", "createdAt"]). The first field, orderId, will be the hash key as expected, but the sort key will be a new composite key named status#createdAt, made up of the status and createdAt fields. This enables us to run queries using more than two attributes at a time.
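
In other words, the generated resolvers store the two values in a single synthesized sort-key attribute. An item stored this way looks roughly like the following sketch; the attribute naming follows the status#createdAt composite described above and the values are placeholders:

// Sketch of an Item record as stored in DynamoDB.
{
  "orderId": "123",
  "status#createdAt": "PENDING#2019-05-29T00:00:00Z",
  "status": "PENDING",
  "createdAt": "2019-05-29T00:00:00Z",
  "name": "widget"
}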

Run the ‘amplify push’ command to deploy your changes to the AWS cloud. Because of the @key directives, this creates the DynamoDB tables for Customer and Item with the primary indexes and sort keys, and generates resolvers that inject composite key values during queries and mutations.

$ amplify push
Current Environment: dev
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2

The file <myproject>/src/graphql/queries.js will contain the auto-generated queries for our intended access patterns: “Get customers by email” and “Get items by status and by createdAt”.
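
As a reference point, the generated query for the named index looks roughly like the following sketch; the argument and filter type names are assumptions based on the transformer’s usual conventions, so verify them against your generated file:

// src/graphql/queries.js (sketch of the generated secondary-index query)
export const itemsByStatusAndCreatedAt = /* GraphQL */ `
  query ItemsByStatusAndCreatedAt(
    $status: Status
    $createdAt: ModelStringKeyConditionInput
    $filter: ModelItemFilterInput
    $limit: Int
    $nextToken: String
  ) {
    itemsByStatusAndCreatedAt(
      status: $status
      createdAt: $createdAt
      filter: $filter
      limit: $limit
      nextToken: $nextToken
    ) {
      items {
        orderId
        status
        createdAt
        name
      }
      nextToken
    }
  }
`;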

Accessing the API from your application

Now that your API is configured, you can access it through the API class, which is part of the Amplify JavaScript Library. We will call the query for “Get items by status and by createdAt”.

Open App.js and add the following import and call to Amplify API as shown below:

import awsconfig from './aws-exports';
import { API, graphqlOperation } from 'aws-amplify';
import { itemsByStatusAndCreatedAt } from './graphql/queries';

API.configure(awsconfig);

// Query the secondary index: all PENDING items created in 2019.
async function listPendingItems() {
  const entry = { status: 'PENDING', createdAt: { beginsWith: '2019' } };
  const data = await API.graphql(graphqlOperation(itemsByStatusAndCreatedAt, entry));
  console.log(data);
}

To learn more, refer to the documentation here.

Feedback

We hope you like these new features! As always, let us know how we’re doing, and submit any requests in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.


from AWS Mobile Blog