Q&A: Navigating cloud security challenges in the broadcast and media industry
Subscribe to NewscastStudio for the latest news, project case studies and product announcements in broadcast technology, creative design and engineering delivered to your inbox.
The cloud has become integral to the media industry’s workflow and operations. As more companies replace legacy systems with cloud systems, there’s a growing need for a deeper understanding of cloud security and its implications for the content creation process. In this Q&A with Simon Eldridge, chief product officer at SDVI, we delve into the pressing concerns and emerging trends surrounding cloud security for broadcasters and media companies.
Eldridge weighs in on the challenges of migrating legacy applications to the cloud and how cloud-native applications must prioritize security from the very start, along with the current state of cloud adoption and the reason behind AWS’ popularity with broadcasters.
The interview has been edited for length and clarity.
What is being overlooked as broadcasters embrace the cloud for every part of their workflow and operation?
I think the biggest issue is legacy vendors lifting and shifting their standard applications to the cloud. Many broadcast media applications were traditionally in closed, isolated networks, and security was somewhat of an afterthought because the thinking was, well, this is isolated.
I think when you build cloud-native applications, security is one of the things that you bake in right from the very start. So anybody who’s just taking their on-prem applications and moving them up isn’t getting any benefit of that kind of new model.
How is security different when comparing cloud with on-prem?
I think there’s an argument to be made that public cloud is more secure than private cloud or private data centers, assuming that you follow the best practices.
Whenever somebody is assigned a permission, they get the absolute minimum permissions needed to do that task. Those kinds of practices didn’t tend to happen on premise or in private data centers. It was pretty common for an application or a server to have a default user and a password. That doesn’t fly in the cloud.
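The least-privilege practice described here can be illustrated with a scoped policy document. Below is a minimal sketch in Python, assuming an AWS-style IAM policy; the bucket name, prefix, and the tiny `allows` evaluator are hypothetical illustrations, not any real policy engine:

```python
# Hypothetical bucket and prefix; a real policy would name your own resources.
BUCKET = "example-media-archive"
PREFIX = "ingest/team-a/"

# Least-privilege policy: read-only access to a single prefix,
# rather than blanket s3:* across the whole account.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/{PREFIX}*"],
        }
    ],
}

def allows(policy: dict, action: str, resource: str) -> bool:
    """Toy evaluator: does any Allow statement cover this action and resource?"""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if action in stmt["Action"] and any(
            resource.startswith(r.rstrip("*")) for r in stmt["Resource"]
        ):
            return True
    return False

print(allows(policy, "s3:GetObject", f"arn:aws:s3:::{BUCKET}/{PREFIX}clip.mxf"))     # True
print(allows(policy, "s3:DeleteObject", f"arn:aws:s3:::{BUCKET}/{PREFIX}clip.mxf"))  # False
```

The contrast with a default user and password is that every permission is explicit: anything not listed, such as deleting or writing objects, is denied by default.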
What additional levels of security do broadcasters need to keep in mind with the cloud?
Almost all big enterprise customers at this point insist on support for single sign-on platforms. If your application can’t tie into the customer’s enterprise-wide authentication system, then it’s just not going to work.
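Tying into an enterprise-wide authentication system usually means accepting tokens from the customer's identity provider instead of local passwords. The sketch below shows the standard claim checks an application performs on an OIDC ID token whose signature has already been verified; the issuer and audience values are hypothetical, and claim names follow the OpenID Connect spec:

```python
import time

EXPECTED_ISSUER = "https://sso.example-broadcaster.com"  # hypothetical identity provider
EXPECTED_AUDIENCE = "media-supply-chain-app"             # hypothetical client ID

def claims_valid(claims: dict, now=None) -> bool:
    """Check standard OIDC claims on a token whose signature was already verified."""
    now = now if now is not None else time.time()
    aud = claims.get("aud")
    audiences = [aud] if isinstance(aud, str) else (aud or [])
    return (
        claims.get("iss") == EXPECTED_ISSUER     # issued by the expected IdP
        and EXPECTED_AUDIENCE in audiences       # intended for this application
        and claims.get("exp", 0) > now           # not expired
    )

good = {"iss": EXPECTED_ISSUER, "aud": "media-supply-chain-app", "exp": time.time() + 3600}
stale = dict(good, exp=time.time() - 10)
print(claims_valid(good))   # True
print(claims_valid(stale))  # False
```

In practice a library handles signature verification as well; the point is that authentication decisions live with the customer's identity provider, not inside each application.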
What do you see as the largest security concerns today?
I think the biggest thing that our customers are concerned about is security of their content itself. And so most of them, if not all, want to own the storage platform in which their content is stored. They want to own how access is granted to those storage locations. They want to audit exactly who’s accessing it and why, and they want the ability to be able to revoke access at any point.
All of those things are pretty hard to do with a traditional on-premise system, whereas with a proper cloud security model, you can just revoke someone’s privileges and they’re out of your storage locations.
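The own-your-storage model described above reduces to a grant registry that the content owner controls: every access is checked against it and logged, and a grant can be deleted at any time. A toy sketch, with names that are purely illustrative and not any vendor's API:

```python
from datetime import datetime, timezone

grants = {("vendor-a", "s3://archive/series-1/")}  # who may read which location
audit_log = []                                     # who accessed what, when, and whether it was allowed

def access(principal: str, location: str) -> bool:
    """Check the grant registry and record the attempt either way."""
    allowed = (principal, location) in grants
    audit_log.append((datetime.now(timezone.utc).isoformat(), principal, location, allowed))
    return allowed

print(access("vendor-a", "s3://archive/series-1/"))     # True
grants.discard(("vendor-a", "s3://archive/series-1/"))  # revoke at any point
print(access("vendor-a", "s3://archive/series-1/"))     # False, and the denial is audited too
```

Because the registry and the log belong to the owner, revocation takes effect immediately and every attempt, granted or denied, is auditable.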
The other part is just general fears of hacking, ransomware attacks or phishing attacks. These systems tend to be isolated from public interfaces, so while there is a mechanism to get into your system, it's completely separate from your email services or your other file storage services.
Where are we on cloud adoption?
Historically, the trepidation was, “Can it do what I need, and is it secure?” I think we’re past that, and I think the vast majority of customers have seen what is possible and have seen the increased security that you get with cloud if you do it right.
I would say that the challenge now is skillsets: broadcasters have to migrate from broadcast operations and broadcast engineering people to DevOps and cloud engineers.
In your view, what is the most significant piece of the cloud that is currently missing?
That’s a good question. I can’t think of a use case now that I haven’t seen demonstrated on the cloud, whether it’s live production, media processing, or running playout channels, even down to complex movie production done in a completely distributed way.
So it’s hard to say that there’s a technology missing at this point.
Is there any concern with the fact that this relies on two or three public cloud vendors and that there’s not enough diversity in endpoints?
There’s definitely that concern in the market, and it’s quite easy to argue that there is one dominant public cloud vendor and then the other options.
I think there’s a desire for more choice there, just because nobody wants the whole world to end up running on a single platform. Having said that, AWS is typically the first name out of customers’ mouths when they talk about the cloud.
Is there a reason AWS is typically preferred by broadcasters?
I think it’s just the maturity of the platform, really. Part of it is first-mover advantage, but the benefit that comes with first-mover advantage is that they’re way ahead on the innovation curve in terms of capabilities.
That said, there are things like AI tools or content analysis tools. We have multiple customers who are moving content around in order to take advantage of those tools. So there’s definitely a move to multi-cloud, using the best tools wherever they are, but really those are the leading-edge customers rather than the majority. And as soon as you go multi-cloud, that introduces a whole other set of interesting security challenges.
What else should broadcasters be considering with cloud and security workflows right now?
If you think your broadcast network is completely isolated from your corporate network, it’s probably not. There’s probably a path somewhere.
So encryption is obviously a big one, and making sure that where you store the content has a high level of encryption, making sure that when the content’s being moved around in transit, it’s still encrypted. There are multiple customers who are doing things like adding DRM or watermarks to the content so that if it does get out, then they have a traceback mechanism.
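The traceback mechanism mentioned here can be approximated in miniature: tag each delivered copy with a recipient-specific mark so that a leaked copy identifies its recipient. The sketch below uses an HMAC as the mark; real forensic watermarks are embedded invisibly in the media essence rather than appended, and the key and recipient names are hypothetical:

```python
import hmac
import hashlib

SECRET = b"owner-only-signing-key"  # hypothetical; held only by the content owner

def deliver(content: bytes, recipient: str) -> bytes:
    """Append a recipient-specific mark derived from the owner's secret."""
    tag = hmac.new(SECRET, recipient.encode(), hashlib.sha256).hexdigest()[:16]
    return content + b"|mark:" + tag.encode()

def trace(leaked: bytes, recipients: list) -> str:
    """Re-derive each candidate's mark and see whose copy this was."""
    base = leaked.rsplit(b"|mark:", 1)[0]
    for r in recipients:
        if deliver(base, r) == leaked:
            return r
    return None

copy = deliver(b"episode-101", "affiliate-west")
print(trace(copy, ["affiliate-east", "affiliate-west"]))  # affiliate-west
```

Only the owner holds the secret, so only the owner can mint or verify marks, which is what makes the traceback trustworthy.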
There’s a default level of encryption that gets turned on when you create an S3 bucket, and it’s a perfectly good example of “it’s probably good enough.” Most customers like to go further and use their own keys, so even if that encrypted content were copied somewhere else, without the key you can’t actually play it anyway. To some degree, it depends on the content being dealt with and where it is in its lifecycle. If it’s 20-year-old TV seasons being published to a VOD platform, you are going to be less paranoid than if it’s pre-release movie content. So to some degree that’s nice, because you can apply the level of security appropriate to the value of the content you’re dealing with.
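The customer-managed-key point can be demonstrated with symmetric encryption: if the ciphertext is copied off but the key stays with the owner, the copy is unreadable. The sketch below builds a toy SHA-256 counter-mode keystream purely for illustration; a real deployment would use KMS-managed AES, not hand-rolled crypto:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode; illustration only, not production crypto."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

owner_key = secrets.token_bytes(32)  # stays in the owner's key store, never with the copy
plaintext = b"pre-release master, reel 1"
ciphertext = xor(plaintext, keystream(owner_key, len(plaintext)))

# The owner can decrypt; a copied ciphertext without the key yields only garbage.
assert xor(ciphertext, keystream(owner_key, len(ciphertext))) == plaintext
assert xor(ciphertext, keystream(secrets.token_bytes(32), len(ciphertext))) != plaintext
```

This is the practical difference between bucket-default encryption and bring-your-own-key: in the latter case, exfiltrating the stored bytes gains an attacker nothing.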
The other one is about threat detection and proactive monitoring. Not waiting to find out, hey, something just happened, but actually knowing either when it’s happening or when it looks like something is about to happen. That’s becoming much more prevalent.
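At its simplest, that kind of proactive monitoring is a rule over the access log: flag a principal whose access rate suddenly departs from its baseline, rather than discovering the exfiltration afterwards. A minimal sketch, where the baseline and threshold factor are hypothetical tuning values:

```python
from collections import Counter

def flag_anomalies(events, baseline_per_hour=20, factor=5):
    """events: (hour, principal) pairs from an access log. Flag any principal
    whose count in an hour exceeds `factor` times the assumed baseline."""
    counts = Counter(events)
    return sorted({p for (hour, p), n in counts.items() if n > factor * baseline_per_hour})

# operator-1 behaves normally; service-x suddenly pulls 500 objects in one hour.
log = [(9, "operator-1")] * 15 + [(9, "service-x")] * 500
print(flag_anomalies(log))  # ['service-x']
```

Real systems use richer signals (geography, time of day, object sensitivity), but the principle is the same: alert while the anomaly is happening, not after.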
Then the only other one I wanted to touch on is the process of evaluating vendors, of customers being able to know who’s safe. Gathering around a standard way of doing that would benefit everybody.
How do you see AI and machine learning continuing to be embraced by broadcasters?
I think the use cases where I’ve seen AI and ML being used in media today are things like: show me where the ad breaks are, show me where the clock is, show me where the scene changes, those kinds of basic functions.
Beyond that, certainly, we have some customers who are using it pretty heavily for content compliance. So you run it through AI: tell me where all the nudity, violence and bad language are, all the stuff you worry about for international distribution. And most customers are using that to inform operators rather than replace them.
I think as we go forward, that whole process can probably be automated as people gain confidence in the results of the AI analysis. We’ve seen people toying with automatically creating highlight reels from sporting events. So I think as we go forward, it won’t replace highly produced quality content, but there’s a subset of content that doesn’t need a person sitting there making it.
How do you look to incorporate new technologies like AI into your product stack?
We have integrations with all of the major AI tools, and really what that does for our customers is allow them to pick the one that’s most appropriate for the use case they’re trying to address.
We actually had a customer recently that did a really interesting thing. They found that when they used to make a particular show for television, you’d always get the catch-up, the “here’s what you saw before the break” and “here’s what you saw after the break,” that kind of repetition. And that was primarily to make a program fill the slot it was designed for without shooting more content.
Now as soon as people start bingeing that show, it becomes extremely repetitive because you’re just seeing the same thing repeatedly without ad breaks. So they actually ran their library through an AI tool to remove all that repetition.
The shows were much shorter, but the retention of viewers was much higher because they didn’t have to put up with that stuff. So there’s a good use case for, “Hey, we got more eyeballs because we processed the content in this way.”