Category: Media

Pac-12: Campus Cloud Touchdown

This article originally appeared in FEED Magazine, Issue 13.

Professional sports take the top spot in US television viewing, but university-level sports have an avid following that – given fierce university loyalties – can eclipse the pros.

The Pacific-12 Conference, or Pac-12, is the university sports organization covering the western US, a region that contains some of the country’s top college teams and its most spirited rivalries. It comprises 12 universities and covers 11 men’s and 13 women’s sports, including that venerable college institution, American football.

Pac-12 Networks is the conference’s media arm and is the first such company to be wholly owned by 12 universities. It offers the full range of Pac-12 sports, from football, basketball and baseball to swimming, gymnastics and rowing. The network offers streaming of all the conference sports and includes livestreaming channels from each university, an output that makes it one of the top live sports producers in the country.

Last year, Pac-12 Networks moved its entire video and media infrastructure to AWS. Determined to transform the sports fan experience – reaching more viewers in more locations, on more devices – Pac-12 Networks took a cloud-enabled approach. The upgrade involved a re-imagining of core master control production workflows, as well as solutions for content archiving, personalization and monetization.


The network hoped to demonstrate that higher-quality services would bring greater fan engagement and enhanced advertising revenue. By tying together a standardized video encoding source that feeds a comprehensive cloud production workflow and distribution network, Pac-12 Networks raised the bar for college sports coverage.

“The way viewers expect to consume games is changing,” says Mark Kramer, VP, Engineering and Technology, at Pac-12 Networks. “Without AWS, we couldn’t meet our fans’ needs. Now, we’re quickly setting up new workflows at scale, such as live-to-VOD and OTT monetization. We’re changing how schools produce collegiate sports and giving fans much better, personalized experiences. We’ve solved a huge, yet simple problem: how not to run out of storage. Translation: we’re making assets available to consumers and syndication partners the minute they’re recorded, so more people can see them in more ways.”

In August 2018, Pac-12 Networks aired the first collegiate football game of the season, Utah’s home opener against Weber State.

“Performance and video quality were awesome,” remembers Kramer. “Everything was gorgeous during the game for linear broadcast and TV everywhere audiences. This was a huge moment – it was a threshold we’d worked towards for years.”


With the new cloud-centric approach, Pac-12 Networks’ master control uses Amazon Simple Storage Service (S3) and Amazon Glacier as petabyte-sized primary archives for all its recorded content, and as the basis of an automated ingest workflow.

Being able to call on an unlimited amount of Amazon S3 storage, as required, liberates other aspects of the production workflow, enabling, among other things, the creation of a new live-to-VOD capture feature. Using the entirely cloud-based workflow, AWS Elemental MediaTailor now provides a simple option to perform server-side ad insertion (SSAI) for live and on-demand content, augmenting the means for content monetization.
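
Under the hood, a MediaTailor SSAI setup boils down to registering a playback configuration that points at a content origin and an ad decision server. The boto3-flavored sketch below is illustrative only: the configuration name, ad server URL and origin URL are placeholders, not Pac-12’s actual endpoints.

```python
# Sketch: assembling a MediaTailor playback configuration for server-side
# ad insertion (SSAI). All names and URLs are illustrative placeholders.

def build_ssai_config(name, ads_url, origin_url):
    """Parameters for MediaTailor's PutPlaybackConfiguration call.

    MediaTailor stitches ads from the ad decision server (ADS) into the
    origin stream server-side, so every device sees one continuous manifest.
    """
    return {
        "Name": name,
        # Ad decision server; [session.id] is expanded per viewer session.
        "AdDecisionServerUrl": ads_url + "?session=[session.id]",
        # Content origin, e.g. a MediaPackage endpoint, minus the manifest filename.
        "VideoContentSourceUrl": origin_url,
    }

config = build_ssai_config(
    "live-game-ssai",
    "https://ads.example.com/vast",
    "https://origin.example.com/out/v1/channel",
)

# With AWS credentials configured, registering it would look like:
# import boto3
# boto3.client("mediatailor").put_playback_configuration(**config)
```

Viewers then request streams through the MediaTailor playback endpoint rather than the origin directly, which is what makes per-session ad decisions possible.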

Other AWS services in the Pac-12 cloud infrastructure include AWS Identity and Access Management (IAM), Amazon CloudFront, Amazon EC2 Auto Scaling, Amazon Elastic Block Store (EBS) and Elastic Load Balancing (ELB).

Over the 2018-19 season, 850 Pac-12 games are being powered by AWS. Pac-12 Networks connects its 10Gbps multi-venue contribution network to AWS using a 1Gbps AWS Direct Connect link. AWS Elemental MediaLive and AWS Elemental MediaPackage prepare all live streams for delivery in Apple’s HLS format to iOS, Android, web, Chromecast and Apple TV devices.

AWS Machine Learning services add the potential for a whole new range of service enhancements in the future, including automated gameplay highlight clips and real-time closed captioning for broadcasters.

Another potential application involves AWS Lambda, a serverless compute service well suited to clip and highlight generation. In this workflow, the moment a game is over, Pac-12 Networks would have all assets, such as game highlights, directly accessible for streaming and syndication partners. As a result, a variety of highlight options could be quickly made available to fans, offering a much richer post-game experience.
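
As a rough illustration of that idea, the handler below shows the shape an S3-triggered Lambda function might take: when a recording object lands in a bucket, the function marks the asset as immediately available to partners. The bucket name, key layout and publication step are all hypothetical, not Pac-12’s production design.

```python
# Sketch of the serverless clip-publication idea: an AWS Lambda handler
# fired by an S3 "object created" event when a game recording lands.
import urllib.parse

def lambda_handler(event, context):
    """For each newly recorded asset, emit a record that downstream
    streaming and syndication partners could pick up immediately."""
    published = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded; decode before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # In a real workflow this would kick off highlight generation
        # (e.g. a clipping job) and notify partners; here we just record it.
        published.append({"bucket": bucket, "key": key, "status": "available"})
    return {"publishedAssets": published}

# Minimal fake S3 event for local experimentation.
fake_event = {"Records": [{"s3": {
    "bucket": {"name": "pac12-recordings"},
    "object": {"key": "2018/football/utah-weber-state.mp4"},
}}]}
result = lambda_handler(fake_event, None)
```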

“As we standardize AWS machine learning and media services, we’ll be able to usher in a new era of entertainment for collegiate sports enthusiasts,” reveals Kramer. “Our fans will benefit from highly reliable and personalized viewer experiences, even in times of rapid traffic spikes like conference championships or rivalry games. Also, our internal teams will be able to experiment with ease using AWS services to rapidly test new ideas.”

from AWS Media Blog

WIND Hellas: “The New Way of Watching TV”

Guest authored by: Patrick Vos, CEO, Zappware

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post. 

The story of WIND Hellas illustrates what it takes to be an innovator and growth player in the new TV landscape. Just two years ago, WIND was a well-established mobile network operator in Greece, but without any kind of video service offering. The market environment did not seem very encouraging for a new entrant, with many current players struggling against the headwinds of low-cost streaming and piracy. But the pioneers at WIND had a vision of what they needed to do – they took the best technology available, married it with a compelling content offer and have now become by far the fastest growing player.

It’s easy to see what has helped the new WIND service take off like a rocket – for the big screen there’s a compelling combination of svelte Wi-Fi-connected hardware and an engaging user experience, using the full power of Android TV and future-proofed for next-gen UHD/HDR content. A vibrant content offer covers the full spectrum of the viewer’s universe, from DTT to pay TV to the best of internet VOD, music and games, in a single, uniform navigational framework. And the service goes wherever you go on your choice of mobile device. Combine all this with buzz from an intensive promotional campaign and you get a service that has gone from nothing to a significant market share in just over a year.

So, what’s behind the scenes of this popular service? Central to the realization of the project was Belgium-based Zappware, an experienced developer of some of the world’s most acclaimed user experiences. Zappware was selected as the prime contractor on a rapid turn implementation project: 10 months from the drawing board to initial launch. Within this project, Zappware was able to fully realize the operator’s vision with its service delivery platform, user interface skills, and approach to interface optimization. Working in conjunction with VP Media Solutions, specialists in converged content architectures, the team at Zappware fully managed the end-to-end TV platform-as-a-service during the ramp to commercial success and continues to do so under a long-term managed service contract.

One key to this success story is the technical partnerships at the heart of the project and the way the technology has enabled a vibrant connection between viewers and the full range of their content. The list of specialist components includes many of today’s leading vendors. Starting at the source of the satellite and terrestrial signal acquisition, Zappware used a modular DVB receiver system to provide a flexible, compact and reliable source of master transport streams. AWS Elemental provides the vital redundant chain of subsequent video processing and compression, with a network of AWS Elemental Live encoders orchestrated by AWS Elemental Conductor. AWS also plays a critical part in cloud hosting for the management and control software that underpins the service delivery framework and on-demand service scaling, including extensive use of AWS diagnostics and management visualization tools. For delivery, WIND brings its own network, augmented with origin management servers and a custom private CDN. On the client side, the Zappware NEXX user interface is used in conjunction with the sophisticated streaming player SDKs and DRM services from Castlabs.

The selection of Android TV as the basis for the Technicolor-supplied set-top box (STB) environment brought significant benefits. Use of Android TV Operator Tier brings with it the entire application ecosystem for streaming apps such as YouTube and including, for this project, the critical Netflix service. The value of this level of integration is set to grow over time with options such as voice support and operational enhancements included in yearly Android updates. As part of the Zappware STB integration process, the environment also now deals exceptionally well with terrestrial DVB service reception, which acts as the heart and soul of the content selection when twinned with ever-popular live sports. And the whole hybrid content offer, including user preferences, is presented in a seamless way through a fully WIND branded interface and navigation solution based on Zappware NEXX technology.

A couple of key lessons were learned on this project that are worth highlighting:

  • What turns viewers into fans of this service is an interface approach that builds on a proven navigation framework, then uses comprehensive instrumentation, personalization, and operator refinement to improve the experience every time the viewer interacts with the system. The way that linear TV, Netflix and other content have been technically and commercially integrated on Android TV within this project is now a reference for other similar ecosystem initiatives.
  • The strategic use of an especially strong ecosystem of partners made for a smooth trajectory for initial system integration and launch. Specialist modular hardware components combined with state-of-the-art cloud services for the major software subsystems provided a flexible and highly resilient backbone for service delivery. The vendors in this ecosystem also stand behind the continuous upgrade process that’s necessary for the service to thrive and reinforce its highly competitive positioning.

“Using the AWS cloud as a foundation of the deployment was one of the best architectural decisions we could have made, and has given us the flexibility to deal in real time with changes in the content landscape, the rapid growth of the subscriber base, and the pace of service evolution demanded by today’s viewers,” said Hermann Riedl, Chief Strategy & Digital Transformation Officer at WIND Hellas. “With this sophisticated back-end approach, combined with use of Android TV on our STB and a best-in-class UI/UX, we have been able to offer a new and exciting service option to the Greek audience that has reignited their enthusiasm for TV.”

from AWS Media Blog

Updates for Media2Cloud: Increased control, SageMaker Ground Truth integration, and more partner support

Let me start by saying I hate the word “ingest” in the context of video workflow. The word is much better suited to the common definition of absorbing food and drink. Candidly, if the process is not well managed for video, the outcome can be very similar. Ingest is often confused with just moving a digital file from one location to another. The true meaning of the term in the industry context is to run a standardized process that performs the necessary registration steps to manage content in your digital platform. These steps include a technical inspection of the video file, registering a unique identifier, performing/validating a checksum, creating proxies and thumbnails, and associating descriptive metadata with the asset. These steps ensure the integrity of the video asset and the ability to uniquely find, manage, and utilize these video assets to power your business.


Media2Cloud is an AWS Solution designed to provide a structured process for getting video content under management within AWS. It’s a serverless ingest framework that incorporates the best attributes of ingest to ensure that new video assets are processed with consistent metadata and supporting proxies. The framework saves customers weeks of setup and configuration and provides a secure baseline that can be modified to match each customer’s ingest objectives.

It’s now been one year since the initial preview of the Media2Cloud solution at the 2018 International Broadcasting Convention (IBC). The following blog post is an update that outlines the issues with managing large-scale video content on-premises and how Media2Cloud can help customers and partners establish an elastic ingest model.

Media2Cloud covers the standard essentials for ingesting video content, such as assigning a UUID, running an MD5 checksum, extracting technical metadata, and creating proxies and thumbnails. In addition to this process, the framework includes a trigger to augment the baseline metadata of the video assets with AWS Machine Learning. Object and face recognition are performed with Amazon Rekognition, speech-to-text is created via Amazon Transcribe, and contextual metadata is created using Amazon Comprehend. The service is elastic, meaning the same workflow can support both day-to-day production and archive migration ingest as long as the requirements are the same. There’s no need to create separate workflows to accommodate capacity, a common issue with on-premises solutions.
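
The registration essentials described above can be illustrated with a few lines of plain Python. Media2Cloud performs these steps serverlessly (Step Functions driving Lambda); the stand-alone sketch below just shows what assigning a UUID and computing an MD5 checksum for an asset involves, with a throwaway local file standing in for a real mezzanine.

```python
# A minimal local sketch of the ingest essentials: assign a UUID, compute
# an MD5 checksum, and record basic technical metadata for an asset.
import hashlib
import os
import uuid

def register_asset(path, chunk_size=8 * 1024 * 1024):
    """Return an ingest record: unique ID, MD5 of the file, and size."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        # Stream in chunks so multi-gigabyte mezzanine files fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return {
        "uuid": str(uuid.uuid4()),
        "md5": md5.hexdigest(),
        "bytes": os.path.getsize(path),
        "source": path,
    }

# Quick demonstration on a throwaway file standing in for a video asset.
with open("demo.bin", "wb") as f:
    f.write(b"sample video bytes")
record = register_asset("demo.bin")
```

The resulting record is what downstream steps (proxy creation, metadata association) would key off.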


We continue to listen to our customers and are working to make it easier to modify the ingest framework to match customer needs. The new version now supports a configuration panel for deciding which machine learning services you wish to use and what language the source content is in. This gives the customer an easy way to tailor machine learning usage to only the services appropriate for the source video. For example, if you are ingesting establishing scenic shots, you can select Amazon Rekognition for objects, but there’s no need to run facial analysis or use Amazon Transcribe if there’s no audio. The Amazon Rekognition configuration has also added inappropriate-content detection as an option, which can support several video compliance use cases.

The Media2Cloud solution now includes Amazon SageMaker Ground Truth to provide a crowdsourcing tool supporting custom face curation training. The tool provides an easy-to-configure front end that enables a public or private workforce to quickly view, identify and tag faces found in the video by Amazon Rekognition. The training data is fed into a private face collection in the customer account and added to their facial recognition service. Take, for example, the rise in demand for content series acquisition. Often the actors are playing a character and may be under a lot of makeup, or it’s a reality show with a cast that is not in the celebrity database. SageMaker Ground Truth provides the ability to quickly review the faces from the first episode, tag them with descriptive metadata, and add them to your training model so that they are recognized in subsequent face recognition processing.
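
To sketch how curated labels might flow back into recognition: once a face has been identified by labelers, the cropped image can be indexed into a private Amazon Rekognition collection via the IndexFaces API. The collection name, bucket, key and label below are made up for illustration.

```python
# Sketch: indexing a curated face into a private Amazon Rekognition
# collection. Collection, bucket, key and label are placeholders.

def build_index_request(collection_id, bucket, key, person_id):
    """Parameters for Rekognition's IndexFaces call: the labeled face
    image in S3 plus the external identifier the labelers assigned."""
    return {
        "CollectionId": collection_id,
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        # ExternalImageId ties the indexed face vector back to the curated label.
        "ExternalImageId": person_id,
        "DetectionAttributes": ["DEFAULT"],
    }

req = build_index_request(
    "series-cast",
    "media2cloud-proxies",
    "faces/episode01/face-007.jpg",
    "cast_member_01",
)

# With credentials configured, the call would be:
# import boto3
# boto3.client("rekognition").index_faces(**req)
```

Subsequent search-faces calls against the same collection then return the curated identifier rather than an anonymous face ID.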


The Media2Cloud solution was launched with key partners supporting both ends of the framework, including Levels Beyond and Nomad-CMS.

To enable customers to tackle on-premises challenges, AWS technology partners bridge the gap, supporting customers that need to migrate large-scale, often proprietary, physical LTO archive solutions. These partners actively assist customers with complex legacy archive migrations, helping them establish a migration-to-cloud strategy that does not interfere with day-to-day production activity. One such partner offers a SaaS-based tool called Rapid Migrate, which handles the heavy lifting of moving legacy on-premises, proprietary tape archive systems (i.e. Oracle DIVA, SGL, Quantum and IBM) to the cloud. The system essentially provides a means to take advantage of unused resources without interfering with the complexities of existing production workflows. This service enables the mass migration of assets to AWS Snowball or file transfer to Amazon S3 cloud storage in a non-proprietary video format with a sidecar metadata file.

It’s key to note that customers will still want a MAM to import and manage this content and metadata. Levels Beyond is an AWS Partner that provides a MAM service platform called Reach Engine. Levels Beyond can manage Media2Cloud, or interface with its output to consume the JSON-formatted metadata, providing customers with a rich search, discovery and management service for the content archive. Reach Engine can provide a number of visualizations of your content inventory, including Timeline Views that support structured metadata analysis, captioning, and almost any timeline-based search metaphor. Levels Beyond can support customers further by configuring the services to add additional metadata faceting, as well as automating the processing of content for production, OTT, digital publishing and other content-related services.


Our latest partner, Nomad CMS, gives businesses the ability to bring an OTT metadata enrichment and discovery system to their existing S3 assets. Nomad augments S3 asset storage without requiring any changes to the existing asset structure or the files themselves – and automatically integrates with Media2Cloud and other AWS AI/ML services. Confidence scores, labels, transcriptions, and other AI enrichment are used to tag each asset with appropriate discovery information. Searching and publishing activities make the resulting metadata available to custom solutions or in support of other integration activities.


Content lakes help customers evolve from legacy environments where content is stored in multiple locations such as NAS, SAN, HSMs, LTO robots, desktops, and offline physical media. Amazon Simple Storage Service (S3), integrated with Glacier and Glacier Deep Archive, creates an environment where all content can be stored centrally and leveraged by multiple services, organizations and third-party service providers. Content access is controlled with multiple security mechanisms, including encryption at rest and in transit. Content owners don’t need to worry about the disappearance of critical creative content on an LTO tape or portable hard drive. The 11 nines of durability and high-performance capabilities of S3, combined with the economical tiering of Glacier and Glacier Deep Archive, provide a safe storage environment free of technology refresh issues. The introduction of S3 Intelligent-Tiering removes the need to guess at lifecycle policies; content owners can store their content in S3 and let it move automatically to the most economical storage tier based on usage. The centralized storage structure helps customers designate a logical layout that reduces complexity and redundant processes.
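
As a hedged sketch of what such tiering might look like in practice, the S3 lifecycle configuration below moves all objects into Intelligent-Tiering immediately and sinks a hypothetical `masters/` prefix to Glacier Deep Archive after a year. The bucket name and prefixes are invented, not part of the solution’s defaults.

```python
# Sketch of a content-lake lifecycle policy: new files shift to
# Intelligent-Tiering right away; finished masters go to Deep Archive
# after a year. Prefixes and bucket name are illustrative.

lifecycle = {
    "Rules": [
        {
            "ID": "all-content-intelligent-tiering",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            # Let S3 shift objects between access tiers based on usage.
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        },
        {
            "ID": "masters-deep-archive",
            "Filter": {"Prefix": "masters/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
        },
    ]
}

# With credentials configured, this would be applied with:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-content-lake", LifecycleConfiguration=lifecycle)
```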


Serverless, machine-learning-powered ingest provides an infrastructure capable of supporting both production and archive migrations within the same workflow; the infrastructure expands and contracts to match the processing needed to create standardized assets and descriptive metadata. Most content owners will tell you that, due to the varying state of on-premises metadata, they struggle to find what they are looking for in a timely manner. This is also true for content stored in LTO robots and MAM systems. As employees turn over, the understanding of where to find valuable assets in the archive is often lost. Establishing a metadata strategy that includes machine learning on each video asset will continue to evolve and raise the bar for search and discovery. Customers can finally tackle metadata atrophy in their processes, with a sustainable means to improve the management of content.

The most important point of the design: the customer truly has control of their content!

The point here is that this process can be managed within the customer’s AWS storage. The assets and metadata are published in a manner that removes proprietary formatting. The structure of the content lake creates an ideal environment for maintaining structured metadata, storage and access policies, and automated maintenance; all of these factors result in reduced complexity and manual intervention. Customers can focus on higher business challenges, such as greater content utility, by improving the user experience and the media tools that access the content. Customers can provide access to one or more MAM technology providers to empower different use cases. This enables customers to choose their MAM partners based on their ability to enable a content-driven strategy. For more information, visit the Media2Cloud solution page.


from AWS Media Blog

Graham Media Group Taps AWS Media Services to Build Powerful Media Solutions for Reporters and Elevate the Live Stream Experience

Reaching modern audiences with compelling, informative content can be challenging in the digital era, but Graham Media Group (GMG), a subsidiary of Graham Holdings Company, has managed to break through the noise by innovating the way its local stations deliver news coverage. As part of the company’s VIDEO 2020 total video transformation project, which aims to develop technology that helps reporters reach audiences faster with more meaningful content, GMG has built a series of tools that its seven stations are using to transform the digital viewing experience for audiences in Houston, Detroit, Roanoke, San Antonio, Orlando and Jacksonville. One such technology is Broadcast, a mobile app built on Washington Post’s Arc Publishing Platform (powered by AWS Media Services) that allows journalists to capture and stream high-quality live mobile phone video to websites and social platforms. The app, along with AWS Media Services, recently helped give WKMG viewers in Orlando, Florida dynamic coverage of the “Fireworks at the Fountain” festivities.

For the event, WKMG produced and streamed more than 100 hours of live event coverage to its website as well as its mobile and OTT apps, including footage shot by 12 roaming reporters and producers using the Broadcast app. Thirteen additional cameras, including a 360-degree camera, provided further perspectives. WKMG gave audiences a choice to watch a stream produced by the News 6 team or “direct” their own experience by switching between camera feeds throughout the stream. AWS Media Services powered video ingest and delivery for both streams, ensuring a reliable and high-quality viewing experience across user device types.

For the “Director’s Chair” stream, the mobile and traditional camera feeds were ingested and delivered to the website and OTT and mobile apps through AWS Elemental MediaLive. WKMG’s digitally produced stream ran the camera feeds through Grabyo’s video production, editing, and publishing service (also powered by AWS) for live switching, file playout, and graphics overlays. AWS Elemental MediaLive and AWS Elemental MediaPackage were then used to encode and package the produced stream as HLS for Amazon CloudFront delivery. A touch panel device and the AWS Elemental MediaLive API were employed to drive ad personalization using SCTE 35 ad markers, with AWS Elemental MediaTailor serving as the ad replacement service. Live closed captioning for the produced stream was supported by Amazon Transcribe.
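
The SCTE-35 workflow described here can be sketched as a MediaLive schedule action: a splice_insert injected into the live stream becomes the cue MediaTailor uses for ad replacement. The action name, event ID, duration and channel ID below are placeholders, not WKMG’s actual values.

```python
# Sketch: an immediate SCTE-35 splice_insert schedule action for AWS
# Elemental MediaLive's BatchUpdateSchedule API. All values are placeholders.

def build_ad_break_action(action_name, splice_event_id, duration_sec):
    """One immediate SCTE-35 splice_insert, signaling an ad break of
    duration_sec seconds into the live stream."""
    return {
        "ActionName": action_name,
        "ScheduleActionStartSettings": {
            # Empty settings object means "insert now".
            "ImmediateModeScheduleActionStartSettings": {}
        },
        "ScheduleActionSettings": {
            "Scte35SpliceInsertSettings": {
                "SpliceEventId": splice_event_id,
                # MediaLive expresses duration in 90 kHz clock ticks.
                "Duration": duration_sec * 90000,
            }
        },
    }

action = build_ad_break_action("ad-break-001", 1, 30)

# With credentials configured, this would be submitted with:
# import boto3
# boto3.client("medialive").batch_update_schedule(
#     ChannelId="1234567", Creates={"ScheduleActions": [action]})
```

Downstream, MediaTailor sees the marker in the manifest and swaps in personalized ads for the signaled duration.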

“AWS allows us to break the bounds of physical hardware and deploy layers of commercial- and consumer-level equipment nearly anywhere we want. We can select the best-in-class technologies for each layer of our video pipeline and scale to levels beyond what was possible only a few years ago,” said Michael Newman, Lead Developer for Graham Media Group and Technical Lead for VIDEO 2020. “We’re excited to facilitate reporters’ workflows with new tools and apps like Washington Post’s Arc Publishing Broadcast Application and Grabyo, which allowed us to produce the entire show in the cloud instead of using a truck full of equipment. Providing this technology to our local newsrooms will allow them to cover their local communities faster and with more relevance than ever.”

The event was funded in part by a grant from the Google News Initiative YouTube Innovation Fund.

For more information about the stream, visit:

from AWS Media Blog

AWS Media Services Learning Path

Guest Post by Susan Holmes, Technical Curriculum Developer – AWS Training & Certification

Want to learn how to create professional quality media experiences? Want to help your organization deliver live and on-demand video processing, storage and monetization workflows in the AWS Cloud using AWS Media Services?

AWS Training and Certification offers a Media Services Learning Path, designed for anyone who wants to learn how to create live streaming and video-on-demand media experiences.

Whether you are just starting out, building on existing IT skills, or sharpening your cloud knowledge, AWS Training and Certification can help you be more effective and do more in the cloud.

AWS Media Services training at a glance

The Learning Path provides a structured and comprehensive roadmap of how to grow your expertise with AWS Media Services. You can progress along a curated path, from foundational video concepts to technical deep dives.

Through the Learning Path you can access digital courses, videos, tutorials, and self-paced labs.

The training is organized into four paths, designed to provide multiple entry points depending on your skill level and area of need.

If you want hands-on practice using the services in the AWS Management Console in a supported environment, there are self-paced online labs. Each lab has its own AWS training account and walks you through procedures with detailed step-by-step instructions.

Navigating the page

From the Learning Path, you can also view information about each course within a workflow so that you can easily scan the courses available, read a description, determine how long it will take to complete, and then directly access the course content via a simple click of a link.

Getting started

There are two ways to access AWS Media Services training:

  1. You can access the Media Services Learning Path here. From the landing page you can explore each of the curated paths and launch digital courses. This will take you to the AWS Digital Training platform. When accessing the platform for the first time, you will need to create an account and sign in.
  2. You can also explore the broader AWS Learning Library to view and search the full catalog of available training across all AWS services. As with the previous method, you will need to create an account and sign in if you haven’t already done so.

To view all the available training related to AWS Media Services in the library, simply select the “Media Services” domain in the navigation pane.

AWS Spotlight Labs for Media

In addition to the self-paced digital assets available on the Media Services Learning Path, from time to time we take our hands-on training on the road to industry events. We offer free facilitator-led workshops with self-paced labs that attendees can sign up for.

Come visit us at our next event.

NAB 2019: Las Vegas
M&E Symposium 2019: Los Angeles

Looking ahead

The Media Services Learning Path will continue to grow and evolve as part of the AWS Training and Certification team’s ongoing commitment to developing content that educates our customers to more quickly and easily build media workflows.

Ready to get started? Create an account and begin learning today.

from AWS Media Blog

Your Call Football delivers real time, interactive sports experience with Mission and AWS

Your Call Football offers a wholly unique football experience where fans get to call plays in real time, then see them run on the field by real players.

Given the inherent nature of the application, Your Call Football needed an infrastructure that could handle bursts of traffic from 100,000 concurrent users during the three hours each week when games are played, while seeing relatively little concentrated traffic at other times. Additionally, everyone playing the game votes during the exact same 10-second window. Each burst of online activity would be followed by the live action on the field. Once the play is run, the entire cycle is repeated (up to 100 times per game). Any lag, delay in service, or timed-out request could result in a fan’s vote not getting counted – and potentially missing out on prize money.

Your Call Football chose Mission, an APN Advanced Consulting Partner, to implement this infrastructure. With Your Call Football needing to ensure a smooth, real-time, and instantaneous user experience during large traffic spikes, Mission paired AWS with Kubernetes.

You can read the full case study at the Mission site.

For an in-depth look at the solution deployed by Mission, check out this post from Kiril Dubrovsky, Senior Solutions Architect at Mission, on the APN blog.


from AWS Media Blog

NGA introduces Microburst cloud pilot during Enterprise Challenge 2019

Concluding our updates from the 2019 Enterprise Challenge series of demonstrations, sponsored by the Under Secretary of Defense for Intelligence, or USD(I) and led by the National Geospatial-Intelligence Agency (NGA), we can report that NGA’s Microburst cloud concept was successfully demonstrated to many government organizations. Supported by AWS Elemental and several Amazon Web Services technology partners, this proof of concept cloud pilot—fittingly named “Microburst” to call to mind a concentrated cloud-to-ground downpour—extended numerous AWS cloud services from in-region to the remotely connected tactical edge, as well as demonstrating the ability to run mission-critical cloud services at the edge autonomously during Disconnected, Intermittent, or Low-Bandwidth (DIL) conditions. This architecture delivered a transformative end user experience by taking advantage of the full power of the cloud in connected and disconnected modes of operations.

The Microburst cloud concept introduced a variety of key products and services to the community, incorporating AWS Elemental Live software, AWS Elemental Media Services, and AWS Snowball Edge running partner applications powered by Amazon Machine Images (AMIs). Together, these solutions served to improve and modernize the processing, exploitation, and dissemination of live Full Motion Video (FMV) from high-definition aerial video sensor systems to remotely situated tactical users and enterprise analysts anywhere in the world. Several testing objectives focused on a cohesive cloud strategy aimed at improving the operational efficiencies found in mission systems, such as:

  1. Deploying ubiquitous cloud capabilities from Enterprise to Edge
  2. Improving FMV quality, accessibility, and resiliency in the network
  3. Preservation and use of geospatial metadata in a web-centric architecture

The hybrid cloud concept leveraged HEVC/H.265 encoding from air to ground, improving overall video quality while reducing bandwidth over wireless mesh networks. The use of multi-profile adaptive bitrate encoding and packaging, along with web-enabled MPEG-DASH streaming, allowed users to consume content with ease from thin clients or browser-based experiences. Overall, the forward-leaning solution showcased an end-to-end, cloud-first strategy for FMV, unlocking greater potential for intelligence gathering and processing. For more information about this event and the other technical considerations discussed, refer to part one and part two of the EC19 blog series.
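As a rough illustration of the multi-profile adaptive bitrate approach described above, a DASH-style player picks the highest-quality rendition that fits within its currently measured throughput. The ladder, bitrates, and safety margin here are hypothetical examples, not the actual Microburst encoding profiles.

```python
# Illustrative multi-profile ABR selection. A real DASH player reads the
# available renditions from the MPD manifest; this hypothetical ladder
# stands in for that, sorted from highest to lowest bitrate.

LADDER = [
    ("1080p", 4500),  # (label, bitrate in kbps)
    ("720p", 2500),
    ("480p", 1200),
    ("360p", 600),
]

def select_rendition(measured_kbps: float, safety: float = 0.8) -> str:
    """Pick the best rendition whose bitrate fits within a safety margin
    of the measured throughput; fall back to the lowest rung otherwise."""
    budget = measured_kbps * safety
    for label, kbps in LADDER:
        if kbps <= budget:
            return label
    return LADDER[-1][0]
```

Over a degrading mesh link the measured throughput drops, and the player steps down the ladder rather than stalling, which is what keeps the stream watchable for tactical users.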

from AWS Media Blog

AWS Joins the Academy Software Foundation

Filmmaking is evolving at an incredible pace. The content created for features, episodics, and beyond is becoming increasingly complex. Sophisticated visual effects and animation tools have been used to realize the director’s vision at a level never seen before, from replacing entire greenscreen backlots with stunning vistas or CG cityscapes, to waging epic superhero showdowns featuring explosions, lifelike fluid simulations, and more. While substantial advancements have been made in the last ten years, the next phase of innovation is poised to reveal even more incredible filmmaking milestones, especially with the media and entertainment industry’s continued adoption of open source software.

In the spirit of open and collaborative technological advancement, I am proud to announce that AWS has joined the Academy Software Foundation (ASWF), an initiative founded by the Academy of Motion Picture Arts and Sciences and The Linux Foundation to provide a neutral forum for open source software developers in media and entertainment. The mission of ASWF is to increase the quality and quantity of contributions to the media and entertainment industry’s open source software base. By joining ASWF, AWS brings its open source expertise along with deep M&E industry expertise spanning render management, content creation tools, and cloud-based workflows to help establish industry standards that will reduce friction points and enable studios to fully leverage the flexibility and scalability of the cloud.

AWS Thinkbox is a long-time provider of production-proven technology used in content creation. As the developer of Deadline render management software, we look forward to continuing this legacy through collaboration on code and standards with the ASWF community, along with our customers and partners. Keep an eye out for more to come!

from AWS Media Blog

FANtastic. Engaging. AWSome.

AWS Media & Entertainment Symposium 2019, Kings Place, London, 27th June

At twice the scale of previous years, and with a superb new venue and hands-on AWS technology tracks, the 2019 edition of the AWS UK Media & Entertainment Symposium brought together a wide array of industry tech pioneers. Attendees and presenters explored the thinking, tactics, and strategies behind their respective cloud transformations and shared learnings, visions, and philosophies for the future. The result was rewarding and illuminating, revealing common threads shared across the industry.

Featuring a stellar line-up of industry leaders ready to share their cloud experiences and learnings, the event’s characteristic mood of disruption, empowerment, and enablement was evident from the very beginning.

Even as the 250 or so delegates arrived for early coffee, croissants, and conversation, the tone was positive, upbeat, open, and receptive. And, from the first of the day’s truly engaging keynotes, so it would remain.

The first presentation from Chelsea Football Club’s director of marketing, Gary Twelvetree, introduced a theme that would echo throughout: the need for awareness, clarity, and pragmatism.

Setting out Chelsea’s vision – quite simply to win both on and off the pitch – and speaking to the club’s success in engaging with its fan base across the world, Twelvetree posed an interesting question: What does engagement truly mean?

“There were some big audience engagement numbers being bandied about when I first arrived. Plenty of fan engagement with the Chelsea brand. But so what? I thought. What does that really mean? How was it all being translated into revenue dollars?”

The club wasn’t really treating its teams and products as consumer offerings, he explained, and that had to change. It needed to gain a better understanding of the connection between its consumers, their engagements, and the value being created between the two.

“We wanted to become a brand – not just in football, but ‘from’ and beyond football. A fully-fledged entertainment brand.” That meant, he said, putting data at the heart of it all, and this is what Chelsea has set about doing since.

Clear echoes of this philosophy were heard in the event’s sports track – the first held at an EMEA AWS M&E Symposium – from Formula 1’s Frank Arthofer and Liverpool FC’s Andy Fletcher, both of whom are leveraging the many capabilities of AWS to innovate the fan experience and create new value.

Matching Twelvetree’s enthusiasm, both spoke passionately of the importance of engaging their global audiences in innovative, measurable new ways; of “telling untold stories”, and of the importance of technology and data in underpinning it all to deliver the vital insights that make closer engagement possible.

The day’s pragmatism was further reflected, and added to, in other keynotes, particularly that of UKTV CTOO Sinead Greenaway, and in the fascinating panel discussions that followed.

While Greenaway was, she said, hugely excited about some of the emerging audience opportunities in tech, the cloud, and IP, it was vital not to get too caught up in the hype; to “keep our heads”.

Likening the ideal position to a Goldilocks scenario – not too hot, not too cold, but just right – it’s vital, noted Greenaway, not to deploy tech for the sake of deploying tech; not to go non-linear (via OTT technologies) simply for the sake of going non-linear.

Fundamentally, said Clive Santamaria, Chief Architect with ITV, it’s about giving audiences what they want. And for that to happen in perpetuity, he suggested, transformation cannot be a one-off process. It must instead be a continual one.

A related thread ran through the afternoon’s panel sessions.

The first session focused on supply chain transformation, where it was noted that while the entire way in which media and entertainment supply chains are viewed, perceived, and addressed is changing, it is not enough to simply shift the supply chain and its tools into the cloud. It has to be much more intuitive and measured than that.

The second session, which explored OTT and its monetization, saw the debate turn to how rights holders are beginning to rethink and repackage how their audiences consume their content. Here too, while it was acknowledged that rights owners want to get closer to their audiences and engage with them in a different way (and that all manner of exciting new ways to do so are now emerging), caution is still required.

“They [rights holders] have to be very careful about how they go about it. Better not to do it, than to do it badly”, as one commentator put it.

It was evident from the increased attendance, broader profile of companies, and common threads of interest that reaching, delighting, and understanding the end viewer is as important as it has ever been. But the capabilities to do that, to innovate, test, and iterate are here. It was an honor to have so many leading organizations share their experiences of business transformation built on AWS.

Suffice to say the future looks FANtastic, engaging, and AWSome. In every sense.

For details of the original event agenda, please see here. For more information on AWS cloud services, reference solutions, and AWS partners focused on Media & Entertainment, please see here. Additional blogs from the UK event will be published, diving deeper into the sessions.

from AWS Media Blog



At SIGGRAPH 2019, booth #1203, AWS Thinkbox will highlight a complete studio in the cloud workflow, including rendering, virtual workstation, and storage solutions that help creative studios iterate faster and take on more projects.

SIGGRAPH is the premier event for computer graphics artists and professionals, drawing more than 16,000 attendees from around the world. With a schedule of rotating destination cities, this year’s event takes place in Los Angeles, CA, July 28 – August 1.

Cloud rendering helps studios increase productivity by reducing the time visual effects artists spend waiting for their render jobs to finish, providing artists more time for content creation. Using Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, studios can achieve near-limitless rendering scale at up to 90 percent cost savings compared to On-Demand pricing, and pay only for resources used.
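A quick back-of-the-envelope sketch of those Spot economics (all prices hypothetical; actual EC2 rates vary by instance type, Region, and time): total render cost depends only on total compute hours, so adding instances shrinks wall-clock time without raising the bill, while the Spot discount cuts the bill itself.

```python
# Hypothetical render-farm cost arithmetic. Prices are made-up round
# numbers chosen only to illustrate the relationship, not real EC2 rates.

def render_cost(frames: int, mins_per_frame: float,
                hourly_rate: float, instances: int) -> float:
    """Total cost to render `frames` in parallel across `instances`.

    Cost equals total compute hours times the hourly rate; instance
    count changes wall-clock time, not cost.
    """
    total_hours = frames * mins_per_frame / 60.0
    wall_hours = total_hours / instances
    return wall_hours * instances * hourly_rate

on_demand = render_cost(10_000, 6.0, 1.00, 100)  # $1.00/hr, hypothetical
spot = render_cost(10_000, 6.0, 0.10, 100)       # 90% discount case
```

With these illustrative numbers, 10,000 frames at 6 minutes each is 1,000 compute hours either way; the Spot run simply costs a tenth as much, and doubling the instance count would halve the wall-clock time at the same total cost.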

Cloud-based virtual workstations running on EC2 G3 or G4 instances allow studios to scale their creative talent, with the ability to add artists to their pipeline from almost anywhere in the world. Artists work securely using a streaming application and the studio’s existing licensing for their preferred content creation tools, with content stored securely using Amazon Simple Storage Service (Amazon S3).

Partners are also showcased in the AWS booth at SIGGRAPH. Conductor Technologies, Weka IO, Qumulo, Teradici, Otoy, Shotgun, Ftrack, and Luxion will demonstrate solutions that help studios collaborate and scale their rendering, workstations, and storage workloads on AWS.

from AWS Media Blog