Category: Media

New This Is My Architecture: Media Analysis Solution


Join us to discuss a turn-key solution created by AWS for Media Analysis. The Media Analysis Solution uses advanced ML services such as Amazon Transcribe, Amazon Comprehend, Amazon Rekognition, and others to understand and interpret what is happening in a video clip. The analysis then produces a set of time-coded metadata that can be generated automatically and used to build a comprehensive media library searchable by dialogue, celebrities, and more. The workflow is implemented with AWS Lambda and AWS Step Functions. This solution can be launched directly from the AWS Solutions page.

from AWS Media Blog

Insys Cloud Video Recorder: Launch a Cloud-based DVR Service on AWS


Authored by Piotr Czekała, Co-Founder and CTO of Insys Video Technologies. The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Governments, public agencies, educational institutions, and other non-media organizations have a need to stream live events and record video content for their end users to stream later. There is a popular misconception that streaming live events, and particularly recording live events for on-demand streaming, is too technically complex for non-media organizations, or that the upfront capital costs and time to build such a streaming solution are cost prohibitive. The reality is, launching a live streaming solution with the ability to create video files for later on demand viewing does not have to be complicated, time-consuming, or require a lot of upfront capital investment.

Insys Cloud Video Recorder from Insys Video Technologies simplifies the process for recording virtually unlimited live video streams, storing them in the cloud, and distributing them to nearly any video playback device. Insys Cloud Video Recorder enables customers to analyze and catalog the recordings and stream the content whenever and wherever viewers want. Other key features include advance scheduling of recordings, automated audio transcription, and automatic identification of recognizable people such as politicians within recorded content.

Customer Needs and Challenges

Insys Cloud Video Recorder is an ideal solution for companies or organizations that want to record any live stream, store it in the cloud, and share via the Internet. This solution supports three primary customer use cases:

Government: for government or public agencies that need to record, archive, and publish live streams from city council meetings, legislative sessions, public hearings, or judicial proceedings.

Education: for educational institutions, universities, student associations, or even study groups that want to archive and publish lectures or any type of educational events, including academic conferences, inauguration ceremonies, or matriculation ceremonies.

Live Events: for event organizers who want to stream any live event, such as a concert or a sporting event, conference, workshop, or other presentations in order to share it to viewers via the Internet.

With Insys Cloud Video Recorder, media and non-media customers can quickly launch a recording solution tailored to their needs. Insys Cloud Video Recorder eliminates the high upfront CAPEX cost of hardware for recording and archiving live video, while reducing the complexity of adding live video recording into existing workflows. Since Insys Cloud Video Recorder is a fully managed SaaS solution, customers can record a live stream and distribute it to users.

Insys Cloud Video Recorder delivers limitless scalability, from one to millions of users, recordings, or hours of stored content. Content may optionally be processed by machine learning tools such as Amazon Transcribe, Amazon Rekognition, and Amazon SageMaker.

Solution Architecture

For the video layer, the Insys Cloud Video Recorder solution uses the entire family of AWS Elemental Media Services. Customers use AWS Elemental MediaLive to ingest and encode live video streams and produce adaptive bit rate (ABR) outputs. AWS Elemental MediaPackage prepares HLS and MPEG-DASH outputs for streaming to multiscreen devices, and optionally encrypts content using Insys Multi DRM, while also creating recorded outputs from live streams. Recorded content is kept on Amazon S3 for highly scalable storage. Customers may also use AWS Elemental MediaConvert to take existing video assets and transcode them into the desired output formats for end users to download.

In addition to AWS Elemental Media Services, customers can leverage machine learning using Amazon Rekognition to create automated metadata of people, objects, scenes, or activities present in the recordings and Amazon Transcribe to create automated speech-to-text transcriptions that can be added to recordings.
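As a rough sketch of how such an integration might look, a finished recording in S3 could be submitted to Amazon Transcribe via boto3. The bucket, object key, and job name below are hypothetical placeholders, not part of the Insys product:

```python
# Sketch: submitting a stored recording to Amazon Transcribe via boto3.
# Bucket, key, and job name are hypothetical placeholders.

def build_transcription_request(bucket, key, job_name, language="en-US"):
    """Build the parameters for transcribe.start_transcription_job()."""
    return {
        "TranscriptionJobName": job_name,
        "LanguageCode": language,
        "Media": {"MediaFileUri": f"s3://{bucket}/{key}"},
        "OutputBucketName": bucket,  # write the transcript back to the same bucket
    }

params = build_transcription_request(
    "my-recordings", "city-council/2019-10-01.mp4", "council-2019-10-01")
# import boto3
# boto3.client("transcribe").start_transcription_job(**params)
print(params["Media"]["MediaFileUri"])
```

The resulting transcript JSON can then be attached to the recording's catalog entry as searchable metadata.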

On top of the video stack, Insys Cloud Video Recorder provides a flexible API and administration panel GUI that runs on scalable cloud components, such as Amazon EC2 Auto Scaling, Amazon Relational Database Service (RDS), Amazon DynamoDB, and serverless components using AWS Lambda.

Customer Benefits

Customers value the ability to quickly start recording and archiving live streams, with automated storage of recordings in Amazon S3. Customers also appreciate the flexibility to quickly and easily convert content to multiple formats and store it on the AWS cloud.

Key benefits:

  • Start recording immediately
  • Virtually “unlimited” scalability (from one to millions of users, recordings, or hours of stored content)
  • Monthly subscription – pay only for actual consumption
  • Reduce OPEX costs for support and maintenance
  • Avoid costly upfront CAPEX investment (encoders, NAS storage, servers etc.)
  • Re-use recorded content to create a new revenue stream
  • Increase user loyalty and customer base through an innovative live and live-to-VOD service


Whether you are a media company that understands video processing and streaming or a non-media entity that just needs a simple, flexible, and cost-effective solution for streaming live content and recording it for later online viewing, Insys Cloud Video Recorder gives you a quick, flexible, economical option to record video content in the cloud. Recordings are analyzed, cataloged, and archived at any scale. Your end users can stream live events or view recorded content whenever and wherever they want. Insys Cloud Video Recorder leverages AWS services to bring live and on-demand streaming video to media and non-media content owners alike in one easy-to-use, highly scalable solution.

To learn more about the technical features of Insys Cloud Video Recorder, watch this technical webinar.

Insys Video Technologies is an AWS Select Consulting Partner. To learn more about our OTT white label solutions and Insys Cloud Video Recorder, or to arrange a demo or schedule a proof of concept, contact [email protected].

About Insys Video Technologies

WE DO OTT! Our technology for recorded and live channel streaming and video on demand works with AWS Media Services and AWS Elemental appliances and software. Customers can use ready-to-launch, end-to-end white label OTT solutions in the cloud from Insys Video Technologies and go to market within a maximum of seven days. We can also help integrate AWS Media Services with online video platforms. Insys Video Technologies is based in Switzerland and Poland.

from AWS Media Blog

Record and Store Live Video Streams on AWS with Insys Cloud Video Recorder


Authored by Krzysztof Bartkowski, Co-Founder and CEO of Insys Video Technologies. The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Launching a cloud-based DVR service is seemingly complex and time-consuming, but it is possible to launch a complete, professional cloud-based solution quickly and cost-effectively. Cloud-based DVR solutions can be deployed with traditional on-premises cable or IPTV headends, or with video workloads running entirely on AWS.

Insys Cloud Video Recorder is a cloud-based solution that enables customers to record unlimited live streams and recordings; archive recordings with scalable storage; analyze and catalog the recordings; and play out the content whenever and wherever viewers want. Other key features include advance scheduling of recordings, automated audio transcription, and automatic identification of recognizable people within recorded content. In addition to being a feature-rich addition to a video operator’s service offering, Insys Cloud Video Recorder can also serve as a flexible solution for government or public sector entities that need to record, archive, and publish video content, or for educational institutions that want to archive and publish lectures or other learning content.

AWS Workflow Integration

Insys Cloud Video Recorder utilizes API integration with Amazon Rekognition, a deep learning powered video analysis service that detects activities; understands the movement of people in frame; and recognizes people, objects, celebrities, and inappropriate content from video stored in Amazon S3, an object storage service that offers industry-leading scalability, data availability, security, and performance. Insys Cloud Video Recorder uses Amazon Rekognition to automatically identify people such as celebrities and politicians within recorded content. The audio transcription feature in Insys Cloud Video Recorder is powered by API integration with Amazon Transcribe. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Insys Video Technologies also orchestrates the entire video workflow including Amazon Transcribe, Amazon Rekognition, Amazon S3, as well as Amazon SageMaker to quickly and easily build and train machine learning models; AWS Elemental MediaLive, a broadcast-grade live video processing service; and AWS Elemental MediaPackage to reliably prepare and protect video for delivery over the Internet.
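For illustration, the asynchronous celebrity-detection call against a recording stored in S3 might look like the following boto3 sketch. The bucket and object names are placeholders, and the optional SNS notification channel is shown only as an assumption about how job completion could be signaled:

```python
# Sketch: asynchronous celebrity detection on a stored recording with
# Amazon Rekognition Video. Bucket and object names are hypothetical.

def build_celebrity_job(bucket, key, sns_topic_arn=None, role_arn=None):
    """Parameters for rekognition.start_celebrity_recognition()."""
    params = {"Video": {"S3Object": {"Bucket": bucket, "Name": key}}}
    if sns_topic_arn and role_arn:
        # Optional: have Rekognition publish job completion to SNS
        params["NotificationChannel"] = {
            "SNSTopicArn": sns_topic_arn,
            "RoleArn": role_arn,
        }
    return params

job = build_celebrity_job("dvr-archive", "parliament/session-42.mp4")
# import boto3
# response = boto3.client("rekognition").start_celebrity_recognition(**job)
# job_id = response["JobId"]  # then poll get_celebrity_recognition(JobId=job_id)
```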

Insys Cloud Video Recorder is a fully managed multi-tenant SaaS solution, which means customers can launch a cloud DVR solution cost-effectively in a matter of days and without the complexity of managing on-premises hardware and CAPEX investment. Customers also have the flexibility to incorporate the Insys Cloud Video Recorder into on-premises, cloud-based, and hybrid workflows. In addition to providing automated recording and flexible playout options, Insys Cloud Video Recorder provides full DVR capabilities to subscribers, including catch-up TV, restart TV, series recording, access rights, and compliance. Additionally, operators have the option to set user storage quotas and recording retention rules, and have support for both single and shared copy. Operators may also apply multi-DRM and administer blackouts, correct or substitute existing recordings, automate recording cleanup, and integrate with the operator’s BSS/OSS systems.

Get Started Today

To learn more about Insys Cloud Video Recorder, watch this webinar to understand the breadth and depth of available features and capabilities.

Insys Video Technologies is an AWS Select Consulting Partner. To learn more about our OTT white label solutions and Insys Cloud Video Recorder, or to arrange a demo or schedule a proof of concept, contact [email protected].

About Insys Video Technologies

WE DO OTT! Our technology for recorded and live channel streaming and video on demand works with AWS Media Services and AWS Elemental appliances and software. Customers can use ready to launch in the cloud, end-to-end white label OTT solutions from Insys Video Technologies and go to market within a maximum of seven days. We can also help integrate AWS Media Services with online video platforms. Insys Video Technologies is based in Switzerland and Poland.

from AWS Media Blog

In the news: Graham Media Group signs with Arc Publishing platform built on AWS


Arc Publishing, the Washington Post-owned video content management system, has signed Graham Media Group. Arc Publishing enables broadcasters to distribute videos in real time across different digital platforms to reach the broadest possible audience. The Arc Publishing platform is built on AWS, including several AWS Elemental Media Services, to host, package, and deliver video from the AWS Cloud for broadcast and OTT. Graham Media Group comprises seven local media hubs, Graham Digital, and Social News Desk, delivering local news, programming, advertising solutions, and digital media tools for television, online, mobile, OTT, podcasts, and audio devices.

Read more about Graham Media’s use of Arc on TVTechnology

Learn more about Arc Publishing’s use of AWS services, including AWS Elemental Media Services.


from AWS Media Blog

AWS Thinkbox Releases Deadline 10.1


I’m happy to announce that AWS Thinkbox has rolled out Deadline 10.1, the latest version of our production-proven render management software offering new performance enhancements and improved scalability. Deadline 10.1 is a major step forward and reflects our ongoing investment in performance and scale. Among the things I am most excited about for this release of Deadline are the improvements in scaling up the number of workers running in parallel, providing you with the ability to go bigger and faster than ever to complete your projects.

We’ve also made it easier to use Deadline on AWS. Deadline has always been free to use when you pay for AWS resources (such as EC2 instances and EFS storage) for rendering, though it required a few extra steps to get credit. With Deadline 10.1, an improved billing process provides AWS Thinkbox Deadline customers with a more streamlined experience by removing the upfront payment requirement. Additionally, Deadline can now automatically detect if it is running on AWS and will no longer require any licensing setup of any kind.

I think users will be excited to learn about support we added for widely used content production tools. SideFX Houdini can now be used in AWS Portal in Deadline, which is a set of Deadline features that enables you to more easily extend your on-premises rendering to AWS, and usage-based licensing for SideFX Houdini Engine is now available on the AWS Thinkbox Marketplace (with Mantra support to follow in an upcoming release). Additionally, we have added incremental Deadline support for Autodesk Maya, Foundry’s Nuke and Modo, ftrack, Maxon Cinema 4D, OTOY Octane, and many others.

Another big part of this release includes massive changes under the hood, with the removal of Mono as a dependency in favor of the .NET Core framework across macOS, Windows, and Linux. This standardization enables more frequent Deadline updates while supporting a consistent, highly scalable cross-platform user experience.

We’ve launched Deadline 10.1 alongside our annual Deadline Day promo. Through 11:59pm PT on October 10, 2019, AWS Thinkbox is offering:

  • $10 discount on each annual subscription license of Deadline purchased ($38 USD per node)
  • 10% discount on all AWS Thinkbox Marketplace products
  • 10% discount on each annual Thinkbox 3D tool license

Interested in learning more about Deadline 10.1? For details, check out the release notes or visit the downloads page.

from AWS Media Blog

Entitlements in AWS Elemental MediaConnect Boost IP Video Transport Adoption in the Cloud


Where Does Cloud Video Transport Fit In Today’s Broadcast Workflows?

The use of private fiber networks in conjunction with satellite facilities is common in today’s broadcast workflows. Anyone who has lit up a dark fiber circuit or booked a slice on a satellite transponder knows the pain points of these processes, especially given their manual nature; however, the resulting redundancy to protect video flows has always made those efforts worthwhile.

As adoption of IP-based workflows increased, the case for video transport over IP was initially unclear. Security for video transport was confusing or nonexistent, and bandwidth considerations forced compression to be performed on-premises, which limited the amount of additional processing that could be done to feeds further down the chain. This scenario has changed dramatically in the past year with the advent of AWS Elemental MediaConnect.

AWS Elemental MediaConnect acts as a “cloud router” for mezzanine-quality live video. By allowing customers to contribute and distribute 80 Mbps transport streams with full end-to-end AES256 encryption into AWS using highly reliable protocols like Zixi, MediaConnect dramatically expands the ways in which IP transport can function in broadcast workflows. Configuring secure, high-bandwidth transport of a video signal from point A to point B can now be achieved in just a few minutes. The real value for many MediaConnect customers, though, lies with the entitlements feature.

What are entitlements? Acting as your main cloud hub, MediaConnect lets you share your live video with other AWS accounts who subscribe to your content via the entitlements mechanism. Instead of handing off an HLS or DASH stream transcoded before delivery with specified settings, MediaConnect allows your entitled partners to process the video as they see fit, either back in their data centers, inside their own Amazon VPCs, or using other AWS Elemental services including AWS Elemental MediaLive to create adaptive bitrate outputs that match existing OTT specs.

Overview of an entitled flow in MediaConnect shared from a content owner (originator) in one region to a subscriber in another region

Building Trust in IP Cloud Video Transport

Many workflows continue to rely on satellite and fiber for video transport. Remote locations often require a satellite truck, especially when there is no power or fiber connectivity available at the location. Some venues have fiber circuits installed that are directly connected to their final destinations or a Multi-Protocol Label Switching (MPLS) network. It may not make sense to use anything else in these use cases. MediaConnect will not replace all video transport with IP; rather, it is a future-facing tool that will work for 95% of use cases and offers increased flexibility and reliability. The future of transport is finding the right balance between available options to ensure that live video is distributed successfully.

Start with MediaConnect as a Backup

By building out a single flow in MediaConnect, you create a new option for delivering broadcast quality video anywhere in the world for a fraction of the cost of satellite or private fiber installation. MediaConnect is a perfect backup feed for existing single-pipeline workflows. Taking this a step further, you can create two isolated flows in MediaConnect and create the redundancy needed to handle broadcast-grade transmissions using a cloud-first technique. This approach is becoming more and more relevant.

Monitoring with MediaConnect

If you can’t monitor your video flow, then what’s the point? MediaConnect provides a series of tools that give you deep insight into your flow at all times, including source disconnects, output disconnects, ingress and egress bitrates, and TR 101 290 specification metrics. With these tools, you can build custom dashboards in Amazon CloudWatch to monitor your flows with minimal latency.
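As a sketch of what such a dashboard query could look like, the following builds a CloudWatch statistics request for a flow's source bitrate with boto3. The flow ARN is a placeholder, and the metric and dimension names ("SourceBitRate", "FlowARN") should be verified against the current MediaConnect CloudWatch metrics reference:

```python
# Sketch: querying a MediaConnect flow metric from CloudWatch with boto3.
# Flow ARN is a placeholder; metric/dimension names are assumptions to verify.
from datetime import datetime, timedelta

def bitrate_query(flow_arn, minutes=15):
    end = datetime.utcnow()
    return {
        "Namespace": "AWS/MediaConnect",
        "MetricName": "SourceBitRate",
        "Dimensions": [{"Name": "FlowARN", "Value": flow_arn}],
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
        "Period": 60,              # one datapoint per minute
        "Statistics": ["Average"],
    }

query = bitrate_query("arn:aws:mediaconnect:us-east-1:123456789012:flow:x:primary")
# import boto3
# datapoints = boto3.client("cloudwatch").get_metric_statistics(**query)["Datapoints"]
```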

The Benefits of Choosing Zixi Pull or RIST, and Using Entitlements

While MediaConnect allows you to create multiple outputs and push UDP directly to other MediaConnect accounts using a public IP address, there are benefits of using other methods as well. This includes using entitlements, and, newly supported in MediaConnect, Reliable Internet Stream Transport (RIST) and the Zixi pull protocol.

Zixi Push

When pushing a flow to a receiving IP address with Zixi, you’re able to configure the maximum latency allowed. For ultra-low latency workflows, this becomes a critical component for the handoff. For example, we have tested 100 ms latency between two AWS accounts in the same region without experiencing dropouts in the video. Keep in mind that since this is a UDP push, if the MediaConnect flow that you are pushing to is in a “Standby” state, the primary account still accrues charges, as the push is still technically occurring blindly because it is stateless.

Zixi Pull

This option is great for three reasons. First, when you aren’t pulling the feed, you aren’t being billed for the data since there isn’t any video flowing. Second, you can have your receiver behind a firewall without a public IP address, which is key for security reasons and because it limits the amount of IT resources needed to set up a new workflow. Lastly, Zixi protocol is supported widely among IRD (Integrated Receiver/Decoder) providers, which means many off-the-shelf devices can pull the feed down directly from a MediaConnect flow with no additional setup.


RIST

RIST was specifically designed to support content traversing an unmanaged network — like the public internet — by a group of broadcast experts with the intent of becoming an accepted RFC standard. This benefits hardware manufacturers and end-users as the format can be published and a unified version can be adopted without the fear of additional licensing requirements or patent concerns. RIST support was recently added to MediaConnect, and is perfect for sending streams back to devices on-premises that support it.

Entitlements in MediaConnect

Entitlements using MediaConnect are always handed off in the same region. This means you only pay for transfer at $0.01 per GB to deliver your content to your customers. If a subscriber to a flow needs to long-haul the feed across the world, their account is billed for that portion. This makes estimating costs much easier and more straightforward for both parties.

Content owners can also now specify the percentage of data transfer costs assigned to themselves and to subscribers. With this flexibility, owners can simplify billing by choosing to pay 100% of the data transfer themselves, splitting the cost with subscribers, or assigning subscribers to pay 100% of the cost. This means that sending a flow to a partner can now be done at no cost to the owner.
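A hedged boto3 sketch of granting such an entitlement follows. The flow ARN, entitlement name, and subscriber account ID are placeholders, and the 25% subscriber fee split is just an example value:

```python
# Sketch: granting an entitlement on a MediaConnect flow with boto3.
# All identifiers below are hypothetical; the fee split is an example.

def build_entitlement(flow_arn, name, subscriber_account, subscriber_fee_percent=25):
    """Parameters for mediaconnect.grant_flow_entitlements()."""
    return {
        "FlowArn": flow_arn,
        "Entitlements": [{
            "Name": name,
            "Subscribers": [subscriber_account],  # AWS account IDs
            # Portion of data transfer cost billed to the subscriber (0-100)
            "DataTransferSubscriberFeePercent": subscriber_fee_percent,
        }],
    }

grant = build_entitlement(
    "arn:aws:mediaconnect:us-west-2:111111111111:flow:x:live-feed",
    "partner-feed", "222222222222")
# import boto3
# boto3.client("mediaconnect").grant_flow_entitlements(**grant)
```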

Entitlements have a built-in two second latency in order to ensure resiliency. If this is acceptable for your use case, then you can take advantage of the benefit of the managed connection. Unlike Zixi push, which continues to send video regardless of the connection state, entitlements ensure that the connection is established before the transfer begins. This avoids the unwanted billing scenario that can occur if you aren’t in tight communication with the receiving end. Since each flow can have up to 50 entitlements granted, there is a greater opportunity to distribute your content widely using this method.


Regardless of how you transport your live video between two MediaConnect instances, you can rest assured that your content is traveling on the AWS backbone and never hits the public internet. Combine this with AES256 encryption and you are ready to use MediaConnect for your next professional-grade live broadcast, no matter which underlying protocol you use. Finally, you can increase your flexibility and grow your audience globally using entitlements to share your video with partners.

from AWS Media Blog

Detect Silent Audio Tracks in VOD Content with AWS Elemental MediaConvert



In file-based video processing workflows, content providers often standardize on the mezzanine file format used to share content with their customers and partners. When sharing video-on-demand (VOD) content, produced mezzanine files often have the same number of audio tracks regardless of how many the actual content uses. For example, MXF media assets may be produced with eight mono audio tracks for content that contains only 2.0 audio, so only two tracks are actually used while the remaining tracks are filled with silence.

To build an automated video processing workflow that makes intelligent decisions based on audio, we need to detect silent audio tracks and their position in the source mezzanine assets. From the example above, if six audio tracks are identified as silent, the asset would be processed as a stereo 2.0 asset. On the other hand, if six tracks have audio and two are silent, the asset should be sent to a 5.1 workflow for processing.

In this post, we will create a workflow using AWS Elemental MediaConvert to analyze audio tracks of media assets and measure their loudness. The workflow is automated by AWS Lambda functions and triggered by file uploads to Amazon S3.

Loudness measurement in MediaConvert

The Audio Normalization feature of MediaConvert makes it easy to correct and measure audio loudness levels, supporting the ITU-R BS.1770-1, -2, -3, and -4 standard algorithms. MediaConvert can be configured so the selected algorithm only measures loudness without correcting it. It can also log loudness levels and store these logs in S3.

We will be using MediaConvert to analyze loudness levels of the first 60s of media source assets. The workflow is flexible, so we can analyze more than 60s if required. The produced loudness logs provide the Input Max Loudness value for each audio track that we will compare to a threshold of -50 dB. If the Input Max Loudness value of an audio track is lower than -50 dB, that track is considered silent audio.
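The routing decision described above can be sketched in a few lines of Python. The workflow names ("stereo-2.0", "surround-5.1") and the manual-review fallback are assumptions for this sketch, not part of MediaConvert:

```python
# Sketch of the silent-track decision: a track is treated as silent when its
# Input Max Loudness falls below the -50 dB threshold. Workflow names are
# illustrative placeholders.
SILENCE_THRESHOLD_DB = -50.0

def classify_tracks(input_max_loudness):
    """Map per-track Input Max Loudness values to 'silent'/'active'."""
    return ["silent" if v < SILENCE_THRESHOLD_DB else "active"
            for v in input_max_loudness]

def pick_workflow(input_max_loudness):
    """Route the asset based on how many tracks carry audio."""
    active = classify_tracks(input_max_loudness).count("active")
    if active <= 2:
        return "stereo-2.0"
    if active == 6:
        return "surround-5.1"
    return "manual-review"  # unexpected layout; assumption for this sketch

# Eight mono tracks, only the first two carrying audio:
print(pick_workflow([-14.2, -15.0, -77.1, -80.3, -79.9, -81.0, -78.5, -80.0]))
# -> stereo-2.0
```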

Solution overview

The workflow diagram for this solution is shown below:


The workflow works as follows:

  1. A new source asset is uploaded to an Amazon S3 bucket.
  2. S3 triggers the first AWS Lambda function (LambdaMediaConvert).
  3. LambdaMediaConvert first runs MediaInfo on the source in order to determine the number of audio tracks, pushes a new job to MediaConvert, and stores job data in Amazon DynamoDB.
  4. MediaConvert measures the Loudness of each audio track and saves Loudness logs into an S3 output bucket. As soon as the job completes, MediaConvert sends a “MediaConvert Event” to Amazon CloudWatch.
  5. Using an Events rule, CloudWatch filters on “MediaConvert Event” and triggers the second Lambda function (LambdaLoudnessResults).
  6. LambdaLoudnessResults collects Loudness logs from S3 to determine silent audio tracks by comparing Loudness values to a specific threshold, then updates DynamoDB with the results.

All steps required to build this workflow on AWS are explained in the post.
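The CloudWatch Events rule in step 5 matches MediaConvert job state-change events whose status is COMPLETE. A minimal event pattern for that rule looks like this (a "queue" or "jobId" filter could be added under "detail" to narrow it further):

```python
# The event pattern for the CloudWatch Events rule in step 5, expressed as a
# Python dict and printed as the JSON you would paste into the rule editor.
import json

event_pattern = {
    "source": ["aws.mediaconvert"],
    "detail-type": ["MediaConvert Job State Change"],
    "detail": {"status": ["COMPLETE"]},
}
print(json.dumps(event_pattern, indent=2))
```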

Step 1: Create a MediaConvert job template

The job template will be used by the function we will create called “LambdaMediaConvert” to build loudness measurement job settings. To create the MediaConvert template, log into the AWS Management Console, go to the MediaConvert service, and select the region you would like to work in.

Note: All AWS services in this workflow must run in the same region.

  • In the MediaConvert console, choose Job templates from the menu on the left and click Create template.
  • Enter a template name. This name will be referenced in LambdaMediaConvert (Step 6).
  • In the Inputs section, click the Add button to add a new input:
    • Under Video selector -> Timecode source, select “start at 0”
    • In Audio selector 1 -> Selector type, select “Track” and enter “1”

    • Click on Add input clip and enter “00:01:00:00” for End timecode. This timecode corresponds to 1 minute, which means only the first 60 seconds of the source file will be processed by the template. You can adjust the End timecode value for your needs, or skip the Input clips configuration in order to analyze the loudness of the complete source file.

  • In Output groups, click on Add and select File group.
    • Click on Output 1 and set:
      • Name modifier: “_240p”
      • Extension: “mp4”
      • Please note that we have to output a video in the MediaConvert job, so we will configure low resolution and bitrate video settings to reduce processing time and cost.
      • Under Encoding settings -> Video, set Resolution (w x h) to 320×240 and Bitrate (bits/s) to “200k”.
      • Under Encoding settings, click on Audio 1 and expand Advanced.
      • Enable Audio normalization and set:
        • Algorithm: “ITU-R BS.1770-2: EBU R-128”
        • Algorithm control: “Measure only”
        • Loudness logging: “Log”
        • Peak calculation: “None”


  • Click Create at the bottom of the page to create the job template.

Note: The job template is created intentionally with one audio input/output only. In the LambdaMediaConvert code, we will duplicate the audio configuration to match the number of audio tracks available in the source media asset.
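The duplication mentioned in the note can be sketched as follows. The settings structure is heavily trimmed for readability; a real MediaConvert job settings document has many more fields, and the selector naming convention here is just an illustration:

```python
# Sketch: replicate the template's single audio selector/description pair once
# per audio track found by MediaInfo. Settings structure is simplified.
import copy

def expand_audio_tracks(settings, num_tracks):
    inputs = settings["Inputs"][0]
    base_selector = inputs["AudioSelectors"]["Audio Selector 1"]
    base_desc = settings["OutputGroups"][0]["Outputs"][0]["AudioDescriptions"][0]

    selectors, descriptions = {}, []
    for track in range(1, num_tracks + 1):
        name = f"Audio Selector {track}"
        selector = copy.deepcopy(base_selector)
        selector["Tracks"] = [track]        # one selector per source track
        selectors[name] = selector

        desc = copy.deepcopy(base_desc)
        desc["AudioSourceName"] = name      # tie the output to its selector
        descriptions.append(desc)

    inputs["AudioSelectors"] = selectors
    settings["OutputGroups"][0]["Outputs"][0]["AudioDescriptions"] = descriptions
    return settings

template = {
    "Inputs": [{"AudioSelectors": {"Audio Selector 1":
                                   {"SelectorType": "TRACK", "Tracks": [1]}}}],
    "OutputGroups": [{"Outputs": [{"AudioDescriptions":
                                   [{"AudioSourceName": "Audio Selector 1"}]}]}],
}
expanded = expand_audio_tracks(template, 8)
print(len(expanded["Inputs"][0]["AudioSelectors"]))  # -> 8
```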

Step 2: Create DynamoDB table

Open the DynamoDB service console, expand the left-side menu, select Tables, and click Create table:

  • Enter a Table name. This name will be referenced in LambdaMediaConvert.
  • For Primary key, enter “assetid”.
  • Click Create to create the table.

Step 3: Create S3 buckets

Open the Amazon S3 service console and create two buckets in the region you have chosen for the workflow. One bucket will be used for ingest and the second as the destination. Choose unique names for the ingest and destination buckets:

  • Click + Create bucket
  • Enter a bucket name for ingest bucket.
  • Select the “Region”.
  • Click the Create button.
  • Repeat for destination bucket.

Note: Keep these bucket names handy as we will need them when we configure the Lambda functions.

Step 4: Create Lambda layers

An AWS Lambda Layer is a ZIP archive that can contain libraries, a custom runtime, or other dependencies for Lambda functions. For this workflow, we will create one layer for Python 3.7 boto3 SDK and another for MediaInfo.

In order to package a layer, we need to make sure the dependency is placed in the right folder inside the ZIP file, as stated in Including Library Dependencies in a Layer. At run time, layers are extracted to the “/opt” directory in the function execution environment.

4.1 Create Python 3.7 boto3 SDK layer

It is highly recommended that you use the latest version of the SDK when building Lambda functions so you have access to the most recent features and API commands added to AWS services; the boto3 version included in the Lambda Python runtime usually lags slightly behind. For more details, check the AWS Lambda Runtimes page.

In order to create the boto3 layer ZIP file, the required steps are similar to the ones explained in AWS Lambda Deployment Package in Python. The only difference is that we are going to package dependencies (boto3) without the function code. In addition, boto3 will be placed under “/python” folder inside the ZIP file. Below are the “Bash” commands we will use to create the boto3 layer package.

Note: If you are a macOS user who installed Python 3.7 using Homebrew, there is a known issue that might prevent “pip” from installing packages. A potential workaround is available on Stack Overflow here. Be sure to delete the suggested “~/.pydistutils.cfg” file after boto3 is installed, as it might break regular pip operations.

# make sure to use the right version of pip
$ pip3 --version
pip 19.1 from /usr/local/lib/python3.7/site-packages/pip (python 3.7)

# create working folders
$ mkdir -p boto3/python
$ cd boto3/python

# for OSX and Homebrew users, create ~/.pydistutils.cfg before running following command and delete it afterwards
$ pip3 install boto3 --target ./

# verify the installed version of boto3
$ ls

# create the zip file in the parent folder (the zip file name is arbitrary)
$ cd ..
$ zip -r ../boto3layer.zip .

After the layer ZIP file is ready, we will create the Lambda layer. Open the Lambda service console in the working AWS region, navigate to the left-side menu, and click on “Layers”. Click on “Create layer”, then upload the layer ZIP file created earlier and enter additional details, including an optional link to the SDK license. Finally, click on the “Create” button.
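If you prefer to script this step, the Lambda API exposes it as `publish_layer_version`. A hedged sketch follows; the layer name and ZIP file name are placeholders, not values from this post:

```python
# Sketch: publish a layer ZIP via the Lambda API instead of the console.
# "boto3-layer" and "boto3layer.zip" are example names.

def layer_params(layer_name, zip_bytes):
    """Build publish_layer_version kwargs for a Python 3.7 layer."""
    return {
        "LayerName": layer_name,
        "Content": {"ZipFile": zip_bytes},
        "CompatibleRuntimes": ["python3.7"],
        "Description": "boto3 SDK packaged under /python",
    }

# With AWS credentials configured:
# import boto3
# with open("boto3layer.zip", "rb") as f:
#     boto3.client("lambda").publish_layer_version(
#         **layer_params("boto3-layer", f.read()))
```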

4.2 Create MediaInfo layer

MediaInfo is used to extract the number of audio tracks to be analyzed in the source file. We are using the code and procedure introduced in Extracting Video Metadata using Lambda and MediaInfo post. However, in this case, we will package MediaInfo as a dependency in a Lambda layer. Here is how to proceed:

Note: If you are using an older pre-compiled version of MediaInfo, make sure it supports the “JSON” output format. Otherwise, please consider compiling a new one.

# compiled mediainfo is placed in current working directory 

$ mkdir -p mediainfodir/bin
$ mv mediainfo mediainfodir/bin/
$ cd mediainfodir
$ zip -r ../mediainfolayer.zip .   # the zip file name is arbitrary

Next, create the layer in the Lambda console the same way as in 4.1, optionally providing a link to the MediaInfo license.

Step 5: Create IAM roles for MediaConvert and Lambda functions

When creating IAM roles, grant only the minimum permissions required for a service to perform its job.

5.1 Create IAM role for MediaConvert

All steps required to create an IAM role for MediaConvert are explained in the user guide here. After the role is created, note the role ARN, as it will be required below.
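For reference, what makes this role usable by MediaConvert is its trust policy, which lets the MediaConvert service assume the role. A sketch of role creation in code follows; the role name `MediaConvertRole` is an example, and permissions policies are attached separately:

```python
# Sketch: the trust relationship a MediaConvert role needs. The role name
# below is an example; attach S3/API permissions policies separately.

TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "mediaconvert.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# With AWS credentials configured:
# import json, boto3
# boto3.client("iam").create_role(
#     RoleName="MediaConvertRole",
#     AssumeRolePolicyDocument=json.dumps(TRUST_POLICY))
```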

5.2 Create IAM role for Lambda

For simplicity, we will create one IAM role for both Lambda functions. The permissions required include:

  • Read objects from the ingest S3 bucket.
  • Read and delete objects in the destination S3 bucket.
  • Store logs in CloudWatch Logs.
  • Create jobs and get job templates from MediaConvert.
  • Store metadata in the DynamoDB table.
  • Pass the MediaConvert role to MediaConvert (iam:PassRole).


  • Open the IAM service console, click on Roles from left-side menu, and choose Create role.
  • Choose AWS service role type, and then choose Lambda service that will use the role and click on Next: Permissions.
  • Under Attach permissions policies, choose Create policy and select JSON tab. Note that Create policy will open a new tab in the browser.
  • Copy the below policy JSON in the editor with following modifications:
    • Replace “INGEST_BUCKET” with S3 ingest bucket name you created in step 3.
    • Replace “MC_ROLE_ARN” with the ARN of MediaConvert role created in 5.1.
    • Replace “DYNAMODB_TABLE_ARN” with the ARN of DynamoDB table created in step 2.
    • Replace “DEST_BUCKET” with S3 destination bucket name you created in step 3.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "iam:PassRole",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem"
            ],
            "Resource": [
                "arn:aws:s3:::INGEST_BUCKET/*",
                "MC_ROLE_ARN",
                "DYNAMODB_TABLE_ARN"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "mediaconvert:DescribeEndpoints",
                "mediaconvert:CreateJob",
                "mediaconvert:GetJobTemplate"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::DEST_BUCKET/*"
        }
      ]
    }
  • Click Review policy.
  • Enter a name and description for the policy, then choose Create policy.
  • Switch back to the previous tab in the browser where Create role / Attach permissions policies is still open.
  • Choose Filter policies and select Customer managed from the drop-down box.

  • Click on Filter policies to hide the drop-down box and select the policy name that we created above. (You can also search for the policy name using the Search area.)
  • Select Next: Tags, where you can optionally add a tag for workflow name.
  • Select Next: Review and enter Role Name. Here, we can optionally update the description before choosing Create Role.
  • Note the role name you used, as it will be required later.

Step 6: Create the first Lambda function: LambdaMediaConvert

6.1 Create Lambda Function

  • Open Lambda service console and choose Create a function. Alternatively, you can expand the left-side menu, choose Functions and click the Create function button.
  • Choose Author from scratch, input Function name and select “Python 3.7” for Runtime.
  • For Permissions, expand Choose or create an execution role. Select Use an existing role under Execution role, then select the role we created in 5.2 under Existing role. Finally, choose Create function.

6.2 Add Layers

  • In the Designer area, choose Layers under the function name.

  • Under Designer, find the Layers section. Choose Add a layer.
  • Select the boto3 layer created in 4.1 under Compatible layers, select the Version, then choose Add.
  • Repeat the same steps to add the MediaInfo layer.

  • Click on the Save button in upper righthand corner to save changes to the function.

6.3 Add S3 trigger

  • In the Designer area and on the left side under Add triggers, look up S3 and choose it.
  • The Configure triggers section will show up under the Designer area. Select the ingest bucket name for Bucket.
  • For Event type, select All object create events.
  • You can optionally control which files trigger the workflow by specifying a Prefix (a subfolder path within the bucket) as well as a file extension (e.g., .mxf) under Suffix.
  • Make sure Enable trigger is checked and choose Add.
  • Save changes with the Save button.
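When an object lands in the ingest bucket, this trigger hands the function a standard S3 notification event. As a quick sketch of how the function code in the next step reads the bucket and key out of that event (the sample values below are illustrative):

```python
# Sketch: extracting the bucket name and object key from an S3 put event,
# the way the LambdaMediaConvert handler does. Sample values are made up.

def source_from_event(event):
    """Return (bucket, key) from the first record of an S3 event."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

sample_event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "my-ingest-bucket"},
            "object": {"key": "uploads/clip.mxf"},
        }
    }]
}

bucket, key = source_from_event(sample_event)
print("s3://" + bucket + "/" + key)  # s3://my-ingest-bucket/uploads/clip.mxf
```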

6.4 Add the function code

  • In the Designer area, choose the function name to show the Function code section and other configuration sections.

  • In the Function code editor, replace the default code with the following code:
import uuid
import os
import subprocess
import boto3
import logging
import json
import copy

# The number of seconds that the Signed URL is valid
SIGNED_URL_EXPIRATION = 300

logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)

def lambda_handler(event, context):

    assetID = str(uuid.uuid4())

    sourceS3Bucket = event['Records'][0]['s3']['bucket']['name']
    sourceS3Key = event['Records'][0]['s3']['object']['key']
    sourceS3 = 's3://'+ sourceS3Bucket + '/' + sourceS3Key
    logger.info("S3 object source URI: {}".format(sourceS3))

    # Mediainfo
    signedUrl = get_signed_url(SIGNED_URL_EXPIRATION, \
        sourceS3Bucket, sourceS3Key)
    logger.debug("S3 Signed URL: {}".format(signedUrl))

    miOut = subprocess.check_output(
        ["/opt/bin/mediainfo", "--full", "--output=JSON", signedUrl])
    mi = json.loads(miOut.decode('utf8'))
    logger.debug("Media Info Output: {}".format(json.dumps(mi)))

    # Audio silent detection using MediaConvert
    audioCount = int( mi['media']['track'][0]['AudioCount'] )
    logger.info("Number of Audio tracks: {}".format(audioCount))

    if audioCount == 0:
        logger.warning("The source file has no audio tracks. Exiting...")
        return 0

    dest = 's3://' + os.environ['DESTINATION_BUCKET'] + \
        '/audio_logging/' + assetID + '/audio'
    logger.info("Destination path: {}".format(dest))

    # DynamoDB table name
    tableName = os.environ['DYNAMODB_TABLE_NAME']

    try:
        # Get MediaConvert endpoint and push the job
        region = os.environ['AWS_DEFAULT_REGION']
        mc_client = boto3.client('mediaconvert', region_name=region)
        endpoints = mc_client.describe_endpoints()

        # Load MediaConvert client for the specific endpoint
        client = boto3.client('mediaconvert', region_name=region,
            endpoint_url=endpoints['Endpoints'][0]['Url'], verify=False)

        # Get Job Template settings
        jobTemplate = client.get_job_template(
            Name=os.environ['MC_LOUDNESS_TEMPLATE'])

        jobSettings = build_mediaconvert_job_settings(
            sourceS3, dest, audioCount, jobTemplate["JobTemplate"]["Settings"])
        logger.debug("job settings are: {}".format(json.dumps(jobSettings)))

        mediaConvertRole = os.environ['MC_ROLE_ARN']
        jobMetadata = {
            "AssetID": assetID,
            "Workflow": "SilentDetection",
            "AudioCount": str(audioCount),
            "Source": sourceS3,
            "DynamoTable": tableName
        }

        # Push the job to MediaConvert service
        job = client.create_job(Role=mediaConvertRole, \
            UserMetadata=jobMetadata, Settings=jobSettings)
        logger.debug("Mediaconvert create_job() response: {}".format( \
            json.dumps(job, default=str)))

    except Exception as e:
        logger.error("Exception: {}".format(e))
        return 0

    # Store Asset ID and Media Info in DynamoDB
    dynamo = boto3.resource("dynamodb")
    dynamoTable = dynamo.Table(tableName)
    dynamoTable.put_item(Item = {
        'assetid': assetID,
        'source': sourceS3,
        'mediainfo': json.dumps(mi)
    })
    return 1

def get_signed_url(expires_in, bucket, obj):
    """
    Generate a signed URL
    :param expires_in:  URL Expiration time in seconds
    :param bucket:      S3 Bucket
    :param obj:         S3 Key name
    :return:            Signed URL
    """
    s3_cli = boto3.client("s3")
    presigned_url = s3_cli.generate_presigned_url(
        'get_object', Params={'Bucket': bucket, 'Key': obj},
        ExpiresIn=expires_in)

    return presigned_url

def build_mediaconvert_job_settings(source, destination, audio_count, \
    template_settings):
    """
    Build MediaConvert job settings based on the provided template by
    replicating audio input selectors and output configuration audio_count times
    :param source:          S3 source
    :param destination:     S3 destination where loudness logs will be stored
    :param audio_count:     The number of audio tracks to analyze loudness for
    :param template_settings: The MediaConvert template used to analyze audio
    :return:                MediaConvert job settings
    """
    job_settings = template_settings
    job_settings["Inputs"][0]["FileInput"] = source
    job_settings["OutputGroups"][0]["OutputGroupSettings"] \
        ["FileGroupSettings"]["Destination"] = destination

    input_audio_selector = copy.deepcopy(
        job_settings["Inputs"][0]["AudioSelectors"]["Audio Selector 1"])

    output_audio_description = copy.deepcopy(
        job_settings["OutputGroups"][0]["Outputs"][0]["AudioDescriptions"][0])

    job_settings["Inputs"][0]["AudioSelectors"] = {}
    job_settings["OutputGroups"][0]["Outputs"][0]["AudioDescriptions"] = []

    #for each audio track, create the input selector and the output description
    for ii in range(1,audio_count+1):
        selector_name = "Audio Selector " + str(ii)

        ias = copy.deepcopy(input_audio_selector)
        ias["Tracks"][0] = ii
        job_settings["Inputs"][0]["AudioSelectors"][selector_name] = ias

        oad = copy.deepcopy(output_audio_description)
        oad["AudioSourceName"] = selector_name
        job_settings["OutputGroups"][0]["Outputs"][0] \
            ["AudioDescriptions"].append(oad)

    return job_settings
  • Save changes.

6.5 Other configurations

  • Scroll down to the Environment variables section and add the following variables with their corresponding values:
    • DESTINATION_BUCKET: name of the destination bucket (step 3)
    • DYNAMODB_TABLE_NAME: name of DynamoDB table (step 2)
    • MC_LOUDNESS_TEMPLATE: name of MediaConvert job template (step 1)
    • MC_ROLE_ARN: ARN of MediaConvert execution role (step 5.1)
  • Here we can add an optional workflow name tag in Tags section.
  • In Basic settings section, update the Timeout to 1 min 0 sec.
  • Save changes.
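Because the function fails at run time if any of these variables is missing, a quick sanity check can save a debugging round trip. A small sketch of that idea (the helper name and the check itself are additions, not part of the post's code):

```python
# Sketch: verify the workflow's required environment variables are present.
# In the Lambda function you would pass os.environ; a plain dict works for
# local testing.

REQUIRED_VARS = ["DESTINATION_BUCKET", "DYNAMODB_TABLE_NAME",
                 "MC_LOUDNESS_TEMPLATE", "MC_ROLE_ARN"]

def missing_vars(environ):
    """Return the required workflow variables absent from the environment."""
    return [v for v in REQUIRED_VARS if v not in environ]

# Example: only one variable set -> three are reported missing
print(missing_vars({"DESTINATION_BUCKET": "my-dest-bucket"}))
```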

Step 7: Create the second Lambda function: LambdaLoudnessResults

  • From the menu, choose Functions and click Create function button.
  • Choose Author from scratch.
  • Input Function name.
  • Select Python 3.7 for Runtime.
  • For Permissions, select the role created in 5.2, as we did for the first function.
  • Choose Create Function.
  • Repeat the steps from 6.2 to add boto3 layer only. The MediaInfo layer is not required for the second function.
  • Click on function name to show the Function code section and replace the code in the editor with the following code:
import os
import boto3
import logging
import json

SILENT_THRESHOLD = -50 # silent audio has max loudness lower than threshold

logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("CloudWatch event: {}".format(json.dumps(event)))

    #read job details from the event
    assetID = event["detail"]["userMetadata"]["AssetID"]
    sourceS3 = event["detail"]["userMetadata"]["Source"]
    audioCount = int(event["detail"]["userMetadata"]["AudioCount"])
    destinationVideo = event["detail"]["outputGroupDetails"][0] \
        ["outputDetails"][0]["outputFilePaths"][0]
    tableName = event["detail"]["userMetadata"]["DynamoTable"]

    #loudness logs will be stored in same folder as destinationVideo
    #loudness logs name pattern: <videoFileName>_loudness.<trackNumber>.csv
    videoBaseName = os.path.basename(destinationVideo) # ex: video_1.mp4
    ext = videoBaseName.split('.')[-1] # -> mp4
    videoFileName = videoBaseName[:-1-len(ext)] # -> video_1

    # destinationVideo ~= s3://bucket/path/to/file.ext
    tt = destinationVideo.split('/')
    bucket = tt[2]
    s3Path = os.path.dirname(destinationVideo)[destinationVideo.index(tt[3]):]

    # get loudness logs from S3
    s3 = boto3.client('s3')

    # clean up temporary video file from S3
    videoKey = s3Path + '/' + videoBaseName
    s3.delete_object(Bucket=bucket, Key=videoKey)

    maxLoudness = []
    loudnessMap = []

    for ii in range(audioCount):
        suffix = '.' + str(ii+1)
        if audioCount == 1:
            suffix = ''
        s3Key = s3Path + '/' + videoFileName + '_loudness' + suffix + '.csv'
        localFile = '/tmp/loudness' + suffix + '.csv'
        s3.download_file(bucket, s3Key, localFile)

        with open(localFile) as f:
            lines = f.readlines()
        inputMaxLoudness = float( lines[-1].split(',')[6] )
        logger.info( 'Input Max Loudness reading for track ' + str(ii+1) + \
            ' is: ' + str(inputMaxLoudness) )
        loud = 0
        if inputMaxLoudness > SILENT_THRESHOLD:
            loud = 1
        maxLoudness.append(inputMaxLoudness)
        loudnessMap.append(loud)

        #clean it up!
        s3.delete_object(Bucket=bucket, Key=s3Key)

    logger.info("Input Max Loudness: {}".format(maxLoudness))
    logger.info("Input Loudness Map: {}".format(loudnessMap))

    #Update DynamoDB table with loudness results
    dynamo = boto3.resource("dynamodb")
    dynamoTable = dynamo.Table( tableName )
    dynamoTable.update_item(
        Key = {'assetid': assetID},
        AttributeUpdates = {
            'maxloudness' : {
                'Value' : json.dumps(maxLoudness),
                'Action': 'PUT'
            },
            'loudnessmap' : {
                'Value' : json.dumps(loudnessMap),
                'Action': 'PUT'
            }
        }
    )

    return 1
  • Increase the Timeout to 1 min 0 sec.
  • Save changes.
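The heart of this function is simple: read the last line of each MediaConvert loudness CSV and compare one column against the threshold. Here is that decision logic isolated into a testable sketch; the CSV rows are made up, and the column layout (max loudness in the seventh field) is taken from the code above:

```python
# Sketch: the silent-track decision from LambdaLoudnessResults, isolated.
# The sample CSV rows below are illustrative, not real MediaConvert output.

SILENT_THRESHOLD = -50  # silent audio has max loudness lower than this

def is_loud(csv_last_line, threshold=SILENT_THRESHOLD):
    """Return 1 if the track's max loudness exceeds the threshold, else 0."""
    max_loudness = float(csv_last_line.split(',')[6])
    return 1 if max_loudness > threshold else 0

print(is_loud("0,0,0,0,0,0,-12.766523,0"))   # a track with audio -> 1
print(is_loud("0,0,0,0,0,0,-143.268118,0"))  # a silent track -> 0
```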

Step 8: Configure CloudWatch Events rule to trigger LambdaLoudnessResults

  • Open CloudWatch service console and choose Rules from the left-side menu (located under Events).
  • Choose Create rule.
  • In the Event Source section,
    • Choose Event Pattern.
    • Click on Build event pattern … and select Custom event pattern.
    • Copy the following JSON code into the editor:
{
  "source": [
    "aws.mediaconvert"
  ],
  "detail": {
    "status": [
      "COMPLETE"
    ],
    "userMetadata": {"Workflow": ["SilentDetection"]}
  }
}
  • In the Targets section,
    • Click on Add target.
    • Lambda function will be selected as target by default. For Function, select the name of second Lambda function from the drop-down list.

  • Choose Configure details.
  • Enter Rule name and click Create rule.
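To see why this rule fires only for our jobs, here is a sketch of the matching it performs: the event must come from MediaConvert, report a completed job, and carry our workflow's user metadata. The sample event is abbreviated and illustrative:

```python
# Sketch: approximate the CloudWatch Events pattern used above in plain
# Python. The sample event is trimmed to the fields the pattern inspects.

def matches_rule(event):
    """Return True for MediaConvert COMPLETE events tagged SilentDetection."""
    detail = event.get("detail", {})
    return (
        event.get("source") == "aws.mediaconvert"
        and detail.get("status") in ["COMPLETE"]
        and detail.get("userMetadata", {}).get("Workflow") == "SilentDetection"
    )

sample = {
    "source": "aws.mediaconvert",
    "detail": {"status": "COMPLETE",
               "userMetadata": {"Workflow": "SilentDetection"}},
}
print(matches_rule(sample))  # True
```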

Running the workflow and showing results

To trigger the workflow, I uploaded a video source file that includes 8 mono audio tracks to the S3 ingest bucket. The results are:

  • The duration of the first Lambda function call was 15,259 ms, as shown in the CloudWatch logs.
  • MediaConvert took 14s to complete the job (2s in queue and 12s in transcoding). Keep in mind that we processed only the first 60s of the source.
  • The duration of the second Lambda function call was 3,839 ms.
  • The results stored in the DynamoDB table are:
    • audiocount: 8 (tracks)
    • loudnessmap: [1, 1, 0, 0, 0, 0, 0, 0] (showing that first 2 tracks have audio and the remaining tracks are silent)
    • maxloudness: [-12.766523, -14.114044, -143.268118, -143.268118, -143.268118, -143.268118, -143.268118, -143.268118] (max loudness values)

The MediaInfo and file source URI were also stored in the table.
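Because the loudness arrays are stored as JSON strings, a downstream consumer decodes them when reading the item back. A sketch using the loudness map from this run (a real read would fetch the item with `Table.get_item`, which needs credentials, so the item is inlined here):

```python
import json

# Sketch: decoding stored loudness results. The item mirrors the values
# shown above; a real read would use
#   boto3.resource("dynamodb").Table(name).get_item(Key={"assetid": ...})

item = {
    "assetid": "example-asset-id",
    "loudnessmap": "[1, 1, 0, 0, 0, 0, 0, 0]",
}

loudness_map = json.loads(item["loudnessmap"])
silent_tracks = [i + 1 for i, loud in enumerate(loudness_map) if loud == 0]
print(silent_tracks)  # [3, 4, 5, 6, 7, 8]
```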


In this post, we learned how to create an automated workflow to analyze audio tracks of source media assets and detect silent tracks using MediaConvert and Lambda. The results are stored in DynamoDB, making it simple to integrate with other workflows such as the Video on Demand on AWS solution or the Media Analysis Solution.

If you have comments or suggestions, please submit them in the comments section below.


from AWS Media Blog

Media & Entertainment Guide to AWS re:Invent 2019

Media & Entertainment Guide to AWS re:Invent 2019

From December 2 to December 6, over 60,000 attendees will spread out across six venues on the Las Vegas Strip, which promises to make re:Invent 2019 the biggest re:Invent yet.

For M&E attendees looking to get the most out of their experience, you can follow the steps below:

  • Register for re:Invent
  • Learn more about event logistics
  • You can start planning your schedule now. Seat reservations open Oct 15
  • Looking for M&E-related sessions? With over 45 M&E-related sessions across breakouts, workshops, chalk talks, and builder sessions, we have something for media business leaders, technologists, and developers alike. To get the most up-to-date list, filter by Topic: Media Solutions, or Industry: Media & Entertainment. Some highlights below:

Breakout Sessions spotlight:

  • CMP203  Studio in the cloud: Content production on AWS
  • MDS201  Latest Media & Entertainment industry news from AWS
  • MDS202-R  Optimizing live video feeds to the cloud and the consumer
  • MDS301-R  NBC’s Hybrid AVID deployment and AWS Cloud-based Video Edit Stations
  • MDS311-R  Live broadcasting on AWS
  • MDS313-R  Hotstar – Live streaming at record scale

Workshops spotlight:

  • MDS402-R  Media analysis evolved
  • MDS403  Launch a live video channel in minutes
  • MDS404  Automate, accelerate, and appreciate your VOD workflows
  • MDS405-R  UnicornFlix: Building a video-on-demand app with AWS

Chalk Talks spotlight:

  • MDS305  Breaking news: Deploy global content distribution in minutes
  • MDS306-R  Building resilient live streaming video workflows
  • MDS307-R  Are you well-architected? Best practices for media workloads
  • MDS401-R  Building resilient live streaming video workflows
  • MDS408  Building and refining AI Models with Human-in-the-loop workflows

Builders Sessions spotlight:

  • MDS308-R  Create multi-language video with automated subtitling
  • MDS309-R  Build basic live video workflows
  • MDS310-R  Build basic video-on-demand workflows
  • MDS406-R  Extract value from content archives with Media Insights Engine
  • MDS407-R  Migrate media assets with Media2Cloud

from AWS Media Blog

AWS at “vETC | The Grand Convergence 2019”: Modern MAM & Supply Chain Optimization for IMF

AWS at “vETC | The Grand Convergence 2019”: Modern MAM & Supply Chain Optimization for IMF

The Entertainment Technology Center at USC hosted “vETC | The Grand Convergence 2019: Innovation and Integration“, their 5th annual virtual conference covering emerging technologies and their impact on the M&E industry. Jack Wenzinger, an AWS Partner Solutions Architect in the Media & Entertainment vertical, provided an overview of Media Asset Management (MAM) and key factors to consider in your MAM strategy: taking advantage of cloud architectures like the content lake, machine learning for improved search, and industry standards like the Interoperable Master Format (IMF). The webinar breaks down the foundations of MAM into six key categories and discusses solutions for improving the overall operation of each within AWS. Jack also goes into detail on how AWS is supporting media companies with a serverless ingest solution called Media2Cloud.

Click here to read the latest blog post on the Media2Cloud solution

Click here to get started on the Media2Cloud solution

Click here to see all available M&E solutions from AWS

from AWS Media Blog

New – This Is My Architecture: AWS VOD Solution

New – This Is My Architecture: AWS VOD Solution

Tom from our very own AWS Solutions Builder team walks us through an end-to-end solution that he built for video on demand (VOD) on AWS. Customers are already using this solution to run over 60,000 encoding jobs every month. You’ll learn how Tom used Step Functions for the orchestration layer, Lambda for Node.js microservices, AWS Elemental MediaConvert to generate videos in a variety of file formats, and many more services to complete the solution, including S3, CloudFront, CloudWatch, DynamoDB, and CloudFormation.

Click here to get started on the VOD solution

Click here to see all available M&E solutions from AWS

Click here to see all M&E-related This Is My Architecture Videos

from AWS Media Blog