The COVID-19 pandemic has forced a new norm on industries around the globe, and video production has certainly not been immune. The pandemic’s impact on video production became apparent almost immediately, as entire broadcast studio campuses turned into ghost towns.

Even large broadcast studios, such as Riot’s LCS and Blizzard’s Overwatch League teams, were blindsided as their facilities shut down overnight in response to the outbreak spreading across the country. These teams had to rush to identify and build solutions that would provide a safe working environment for staff and all parties involved. The pandemic also forced previously live events to move online where possible or, worse, to be canceled outright.

Mad City has a team of video and infrastructure engineers who have come together to architect, build, and validate cloud-based video production workflows that enable distributed teams to operate and produce video content of all types. Our goal is to curate and provide the infrastructure and audio/video engineering know-how that bring video production within reach of every customer. To achieve this goal, we’ve fully committed ourselves to a customer-centric approach.

We have deep roots in the esports event organization and live production space. Established in 2015, we have produced successful events in the CS:GO, Rocket League, and Smash Ultimate communities. Beyond our own homegrown events and community building, we have provided white-label creative and live production services for clients including Warner Bros. / NetherRealm Studios (on the Injustice 2 Pro Series and Mortal Kombat 11 Pro Kompetition tours), GRIDLIFE, C3 Presents, and Skyzone. We’ve been on the ground with our offline infrastructure for years, and when the world moved to a remote-first reality, the team was able to organically transition its expertise to cloud-based video production solutions. Mad City built and customized the cloud infrastructure that powered the recent broadcast of the PAX Arena VALORANT Invitational. Part of the VALORANT Ignition Series, the event hosted 20 teams and drew the largest concurrent audience of any VALORANT tournament to date, with more than 75,000 concurrent viewers on the PAX Arena Twitch channel during the finals.

The production for the PAX Arena VALORANT Invitational required that the entire staff work from separate geographic locations while still enjoying a seamless workflow. The staff included multiple observers, a technical director, an audio engineer, a replay operator, a graphics operator, player cameras, and video-enabled hosts and casters, including international production staff based in Europe. We were engaged to provide a turnkey infrastructure solution that could be accessed and operated from anywhere in the world while still maintaining system familiarity.

With the help of Mad City’s infrastructure solutions, the event drew the largest viewership of any VALORANT tournament to date. We were responsible for the AWS-hosted cloud studio that supported the entire production staff involved in the show. In this post, we will walk you through the technology we used to power the show and describe how we built a robust architecture to support this top-tier production.

View of Technical Director, Plamen Marinov

Why is Robust Infrastructure Important?

It’s typically taken for granted that the place you live has running water, electricity, and a source of heat. Your sink turns on, your toilet flushes, your lights work, and your oven heats. Perhaps you have a dishwasher or laundry machines, and hot water is always readily available. We’re accustomed to calling water, electricity, and natural gas utilities, but these everyday necessities can also be thought of as infrastructure.

In·fra·struc·ture: the basic physical and organizational structures and facilities (e.g. buildings, roads, power supplies) needed for the operation of a society or enterprise.

In computing technology, infrastructure is often used to describe the data centers, servers, and networking required for a computer system.

On a basic level, the computer systems that power the virtual machines of cloud behemoths such as AWS, Microsoft Azure, and Google Cloud Platform have a lot in common with the computers many of you are using to read this post. Cloud machines run familiar operating systems, such as Windows, and contain common hardware components like CPUs, memory, storage, and network interfaces.

People in the production industry have used (and continue to use) their gaming-grade home computers to produce shows, a practice that was especially prevalent during the early days of the COVID-19 pandemic. While these computers may be suitable for small, low-risk productions that can tolerate faults, using the same systems for high-impact shows can easily lead to disastrous results.

The lack of reliable, redundant, and robust physical hardware, networking (local and Internet), storage, and power makes our home computers unsuitable for powering important remote productions. Home computers also offer no headroom when you suddenly need extra storage mid-production. Public cloud providers, such as AWS, go to great lengths to offer redundancy across the components of their platforms, and they make an extra terabyte (or 100 terabytes) of space readily available anytime you need it, making cloud computing an optimal foundation for a robust remote production environment.
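
To make that elasticity concrete, here is a minimal sketch using boto3 (the AWS SDK for Python) of provisioning a 1 TB volume on demand and attaching it to a running production machine. The region, availability zone, volume size, and instance ID are placeholders rather than values from our actual deployment:

```python
# Minimal sketch: provision extra storage on demand with boto3.
# All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 1 TB gp3 volume in the same availability zone as the instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=1000,  # GiB
    VolumeType="gp3",
)

# Wait for the volume to become available, then attach it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```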

Building, maintaining, and constantly improving the ever-reliable underlying infrastructure that powers the cloud virtual machines of the biggest providers is a gigantic effort, requiring incredible levels of human and monetary investment as well as organizational expertise. Yet that infrastructure is instantly available to anyone around the world, on demand, with no up-front costs.

People who run hardware at home will never be able to enjoy the instant scalability that cloud computing offers, and running hardware at home brings along a slew of other issues. Even if you ignore the up-front investment in the powerful home hardware required for video engineering, an Internet or power outage can delay a show or prevent it from happening altogether, and a hardware failure can cost the production team days of work and lost revenue. These risks create an unstable workflow that simply does not allow for the level of planning and scheduling required to pull off a large-scale event. Offloading as much of the production workflow as possible to reliable, highly available data centers, such as those AWS operates, dramatically reduces these not-so-uncommon risks.

Naturally, cloud machines must still be accessed, configured, and, in many cases, constantly monitored from remote computers. In this scenario, however, an outage at one or more production staff members’ remote locations can, with careful planning, be completely invisible to the show’s audience.

Essentially, a robust infrastructure provides multiple layers of fail-safes in the event that something goes wrong, allowing you to maintain your schedule and keep your viewers engaged. In most scenarios, your viewers will not even realize there was an issue, because any issue that does occur will be isolated to a single staff member’s location.

VALORANT Introduction

Use Case: PAX Arena Invitational

According to Riot Games, the VALORANT Ignition Series is a series of tournaments created in partnership with players, teams, content creators, and tournament organizers from all over the world focused on building the VALORANT competitive ecosystem. This is the first step Riot Games is taking after launch to facilitate organized competitive play at scale by unlocking best-in-class esports organizations to experiment with a diverse set of formats and lay the foundation for competitive VALORANT.

In order for the production to match the prestige of the VALORANT Ignition Series, the client set a few base requirements for their cloud studio. One of the main prerequisites was a streamlined and familiar operating experience for the Technical Director, A1, and Replay Operator. We also needed to give the operators the ability to connect familiar control surfaces, such as an Elgato Stream Deck and multiple MIDI/USB controllers, to the cloud studio machines. Other major requirements included high-bitrate streaming, sending remote game observer feeds to the cloud studio, and the power to record multiple high-quality feeds. For broadcast talent, we needed an easy-to-use (talent-side) solution for ingesting and returning talent video and audio feeds.

Other Cloud Studio “Solutions” and How They Compare

Established cloud studios exist that enable broadcasters to mix remote shows without the hardware power and bandwidth needed to execute a local show. Two examples of these cloud production tools are Grabyo and easylive.io. They provide broadcasters with the basic features expected from a video switcher, such as remote stream ingests, remote guest calls, graphic overlays, and RTMP encoding to send streams to any desired platform. While these platforms provide the tools required to stitch together a broadcast, they do not provide a straightforward or efficient experience for operators, especially when many operators need to collaborate. They also carry a learning curve, since they do not use familiar or intuitive interfaces, and they lack the granular control over decoding, encoding, and audio settings that many productions seek.

Finally, the inability to connect operator control surfaces such as Elgato’s Stream Deck and USB MIDI panels is a deal-breaker for many. Without these tools, the studio must be controlled by mouse clicks or static keyboard shortcuts, which makes the entire operator experience feel sluggish and makes it incredibly difficult for operators to keep up with the fast-paced action across the multiple video feeds they’re monitoring. Overall, these programs are great for small-scale productions and are valuable tools for many broadcasters under the current conditions of the production world, but with a few key components missing, they are not a viable option for more complex shows.

Every platform that took OBS or VMix out of the equation also removed flexibility and the commodity aspect we strive for, along with granular control of video routing and transcoding. We believe the future is heading toward these types of platforms, but today, the attempt to abstract away granular control just didn’t feel right. Finally, such platforms add unknown variables for operators, creating a steep learning curve.

View of Production Operations Director, Colin J Murphy

Architecture & Design

Our approach to architecture and design was fairly straightforward: work backwards from the requirements to ensure every need is met. Arguably, esports has some of the most complex production signal flows in live broadcasting. Once you add in factors like a distributed crew and talent, you now have a mission just shy of impossible. Our engineering acumen told us that if we could build a stable 1080p60 esports workflow in the cloud, we could take just about any signal flow and implement the system needed to execute. We also knew that we needed to provide granular and familiar controls for the production team to ensure the highest quality result for any potential application of our solution.

Building a reliable system that was within anyone’s reach, both in cost and availability, was one of the primary focal points of our architecture. This meant using commodity platforms, software, and services to remove potential barriers to entry while still providing a great, reliable user experience.

After extensive testing in AWS, Azure, and Google Cloud Platform, as well as feedback from studios already using similar solutions, we found that AWS g4dn instances were the sweet spot and put us on the road to perfection.
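
To give a concrete sense of what “instantly available” looks like in practice, below is a minimal boto3 sketch of launching a g4dn instance of the kind that can host a VMix machine. The AMI, key pair, and security group IDs are placeholders, not values from our deployment; a real build would start from a Windows AMI with GPU drivers and VMix preinstalled:

```python
# Minimal sketch: launch a GPU-backed g4dn instance with boto3.
# The g4dn family carries an NVIDIA T4 GPU, which can offload video
# encoding/decoding (NVENC/NVDEC) from the CPU.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Windows Server AMI
    InstanceType="g4dn.2xlarge",
    KeyName="cloud-studio",           # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```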

System Summary

System Diagram

Cloud Studio Signal Flow

If we had to pick one thing for you to take away from our experience, it’s the core pillar of our architecture: take CPU-intensive tasks away from software mixers like VMix or OBS. One way of achieving this is to use NDI wherever possible to move video between machines rather than making the mixer do that work. NDI uses a lightweight, visually lossless codec, so transmitting and receiving video requires far fewer resources than full encode and decode cycles. This is great for the cloud because you can spread that light encoding and decoding across the network and reserve one final place for transcoding your program feed.
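
As an illustration of that final transcode step, here is a minimal sketch that pulls the finished program feed over SRT and pushes a single H.264 stream to the streaming platform. It assumes an ffmpeg build with libsrt; the addresses, stream key, and bitrates are placeholders you would tune for your own show:

```python
# Minimal sketch: the one final transcode of the program feed.
# Everything upstream of this hop moves between machines as NDI/SRT;
# only this step pays the full H.264 encoding cost.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "srt://10.0.0.10:9000?mode=caller",  # program feed from the mixer
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-b:v", "8000k",   # 1080p60-class bitrate, adjust to taste
    "-g", "120",       # 2-second keyframe interval at 60 fps
    "-c:a", "aac",
    "-b:a", "160k",
    "-f", "flv",
    "rtmp://live.twitch.tv/app/STREAM_KEY",    # placeholder stream key
], check=True)
```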

Load Testing

AWS VMix Load Testing

Crew Experience

“Parsec was a breeze to work with”

Live production environments are constantly evolving throughout a show and require operators to make quick decisions, then immediately perform the operations needed to execute them. One of the best ways to make sure our operators can perform an accurate action at a moment’s notice is to let them use the tools they are comfortable and familiar with.

Our infrastructure is designed to allow clients and operators to choose the software that best fits their needs. In this case, the software of choice was VMix because of its widely familiar UI and its NDI/SRT distribution options. Thanks to the low-latency, high-quality remote connection that Parsec provides, interfacing with the software is often indistinguishable from using a local machine, and operators can configure connections, scenes, and settings with little to no loss in efficiency. While the setup and configuration of VMix shows are usually done with mouse and keyboard, most experienced operators prefer to avoid the mouse and keyboard when actually running the show on air.

The operators for this show requested a mix of Elgato Stream Decks and AKAI USB MIDI controllers to interact with their VMix instances. The setup for a Stream Deck is straightforward when paired with the open-source Bitfocus Companion software, which allows a local Stream Deck to send commands via HTTP requests to VMix (or any Bitfocus-supported application) running in the cloud.
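
For a sense of what those commands look like on the wire, here is a minimal sketch of the kind of HTTP calls Companion issues against VMix’s web API, which listens on port 8088 by default. The cloud studio hostname and input number are placeholders:

```python
# Minimal sketch: driving VMix through its HTTP web API, the same
# mechanism Bitfocus Companion uses behind a Stream Deck button.
import requests

VMIX_API = "http://cloud-studio.example.com:8088/api/"  # placeholder host

# Put input 2 into preview, then cut it to program.
requests.get(VMIX_API, params={"Function": "PreviewInput", "Input": "2"})
requests.get(VMIX_API, params={"Function": "Cut"})
```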

View of Replay Operator, Ben Gumer

In order for an operator to utilize an AKAI USB controller, we needed to mimic a physical USB connection to the virtual machine, a technique known as USB over IP. VirtualHere has several offerings that enable our operators to connect their local USB devices to their instance in the cloud, with the one minor downside of requiring some port forwarding on the operator’s end. Overall, operators who have tested this system have said that it is rarely distinguishable from what they would experience on their local machine.
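
As a rough sketch of how this can be scripted, the snippet below drives the VirtualHere client on the cloud instance through its command-line API. Treat it as an outline under stated assumptions: the binary name, the operator’s forwarded address, and the device address are all placeholders, and you would run LIST first to discover the controller’s real address:

```python
# Sketch: attaching an operator's USB controller to the cloud VM via
# the VirtualHere client. All addresses below are placeholders.
import subprocess

def vh(command: str) -> str:
    # The VirtualHere client accepts IPC commands through its -t flag.
    result = subprocess.run(
        ["vhui64.exe", "-t", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Point the client at the operator's port-forwarded VirtualHere server
# (7575 is VirtualHere's default port; the hostname is hypothetical).
vh("MANUAL HUB ADD,operator.example.com:7575")
print(vh("LIST"))          # enumerate shared USB devices
vh("USE,OPERATOR-PC.13")   # attach the AKAI controller (placeholder address)
```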

Moving Forward

The seamless collaboration between Mad City and Real Time Strategies (RTS) resulted in the best-produced and most successful VALORANT event to date. The broadcast was a success in the eyes of the hundreds of thousands of VALORANT fans who tuned in throughout the weekend of competition: the services our team provided to RTS, combined with their excellent execution, produced a seamless viewing experience for all to enjoy.

This event further demonstrated that complex production flows can run in a fully cloud-based workflow. Going forward, this solution can be scaled to match the needs of any client by adding or removing VMs and VMix instances. Features that could easily be integrated into this system include individual team/player cameras, a clipping solution for social media posts, or even separate output streams for broadcasts in different languages. For a smaller show where a client needs just a single instance of VMix while maintaining a high level of redundancy and reliability, the auxiliary machines can be cut so the show is mixed on a single machine. Hardware flexibility in the cloud is unmatched by on-site solutions, since you only pay for what you use, and computing power can be added or removed as necessary to fit the system at hand.

Mad City looks forward to continuing our research, testing, and optimization of cloud-based production tools and workflows, as we understand the importance of continuous improvement. We hope this content was thought-provoking for you and your team. If you would like to discuss it further with us or with other production industry professionals, we encourage you to join our Discord server today!

Mad City offers turnkey live broadcast production services and infrastructure solutions. If you’re interested in getting in touch, please don’t hesitate to reach out. We would be thrilled to provide our services to make your video project an undeniable success.