Carter Yagemann

Assistant Professor

The Ohio State University

Systems security professor with interests in automated vulnerability discovery, root cause analysis, and exploit prevention.

The Problem with DRM

Sat 22 October 2016


The topic of digital rights management (DRM) systems is a controversial one among those affected by it. Some readers are going to jump to conclusions without properly reading what I want to write on the matter, and there's nothing I can do about that. To those with minds open enough to read this entire blog post honestly, I promise to present a perspective that, although not novel to everyone, isn't a rehash of the most common arguments made on the topic. What I will argue is a stance based on my technical understanding of computer systems as an information security researcher, which I believe is a perspective many aren't exposed to. If you happen to be such a researcher yourself, you probably won't find this post particularly interesting. For everyone else, I hope to present a formulation of the problem that is insightful while still being easy to understand.

Additionally, as is necessary when discussing controversial topics, I must state that the contents of this post are my personal opinions and mine alone.


DRM has become an active topic of debate in multiple communities due to recent changes in how technology allows us to access and experience digital content, such as movies, shows, games, and music. With the decline in users buying and storing their own digital content in favor of services (Netflix, Hulu, Steam, Spotify, etc.) that offer to stream it over the internet on-demand, DRM touches more lives now than ever before.

For users of these services, the benefits of not having to think about storage and having cheap and immediate access to the latest content are very appealing. Try to access content from even a year ago, however, and the trade-off becomes apparent. These services may be cheaper, but they also don't guarantee lifetime access as licenses change and budgets require money-saving cuts. As some users have been frustrated to realize, there's a difference between paying to access and paying to own. Adding to the frustration is the inability to access content on some services when an internet connection is slow or unavailable.

The idea of using computers to illegally share digital content is not new to digital content services, but these services do give the act new motivation. Where users might once have considered illegal sharing to avoid the cost of buying the digital content, now users seek to avoid subscription fees, ensure lifetime availability, and counteract limited internet connectivity. This motivates license holders to require services to implement DRM: systems designed to make it difficult for a user to permanently store and illegally share on-demand digital content.

However, I fear that many license holders demand these systems without actually realizing their limitations and unintended consequences. That is why in this blog post I would like to take the time to formulate the problem of using DRM to protect against illegal storing and sharing from the perspective of an information security researcher. Frankly, I think DRM is a losing battle and I want to present the reader with a robust formulation to justify why I see it that way.

Cat and Mouse

The first thing we have to understand is that DRM in practice cannot completely prevent illegal storage and sharing. Simply put, you can't show someone something without showing it to them, and once they've seen it, you can't prevent them from having some ability to reproduce it. Even if you had a magic wand that could somehow wipe their memory, who's going to want what you have to share if they won't remember it? DRM cannot be perfect.

However, this is not to claim that DRM cannot be effective. Specifically, we can think of using DRM as making a trade-off between multiple factors: namely, the cost of implementing the DRM and the inconvenience the DRM presents to the benign user versus the time and skill required for the adversarial user to bypass it. In other words, an effective DRM system is one that is cheap to implement, produces few enough side effects that the benign user is still willing to pay for the service, and requires the adversarial user to commit a lot of time and skill to bypass.
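To make the trade-off concrete, here is a toy scoring function. Everything in it is a made-up illustration for this post, not a real metric: the factor names, the single arbitrary 0-10 scale, and the example numbers are all my own assumptions, and real-world versions of these quantities are, as discussed below, hard to measure at all.

```python
def drm_net_value(bypass_effort, implementation_cost, user_friction):
    """Crude, hypothetical score for a DRM scheme: positive means the
    adversary's bypass effort outweighs what the scheme costs the
    service and its benign users under this made-up weighting."""
    return bypass_effort - (implementation_cost + user_friction)

# Invented scores on an arbitrary 0-10 scale for three enforcement
# layers: software-only, operating-system-level, and hardware DRM.
schemes = {
    "software": drm_net_value(bypass_effort=3, implementation_cost=1, user_friction=1),
    "os":       drm_net_value(bypass_effort=6, implementation_cost=4, user_friction=4),
    "hardware": drm_net_value(bypass_effort=9, implementation_cost=7, user_friction=6),
}
```

Note that in this particular (entirely invented) weighting, strengthening the DRM drags the cost and friction terms up alongside the bypass effort, so the net score actually falls; that coupling between the factors is exactly the point of the next paragraphs.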

So keep the good, throw out the bad, and we're done, right? Not so fast. We could do just that if the factors had no relationship to each other, if they were independent, but they aren't. Anything you do to make the adversarial user's task harder is going to increase the cost of implementation and inconvenience the benign user.

Don't believe me? Implementing DRM in software restricts the benign user to only systems that can run that software, while allowing the adversarial user to bypass the DRM using software of their own. Moving DRM into the operating system forces the adversarial user to implement their own operating system software, but also restricts which operating systems the benign user can use. Hardware DRM raises the bar further by requiring the adversarial user to devise a hardware-level bypass, but now the benign user can only use certain hardware.

Hopefully you can see how this is a game of cat and mouse. The harder you make it for an adversarial user to bypass the DRM, the more restrictive the benign user's experience becomes. Similarly, as you increase the skill the adversarial user needs to bypass the DRM, you also raise the skill the programmer implementing the DRM needs to design it, which raises the cost. In short, as you make the DRM better at thwarting the adversarial user, you also make the service more expensive and less appealing to the benign user.

Hopefully you now see why balancing the factors I've pointed out is not trivial. The next question is how hard it is to find this optimal balance. If it's easy, we can simply find it and we're done; we'd then know what degree of DRM to implement.

Sadly, I'm going to argue that it's not easy to find. In fact, finding it is difficult precisely because the balance is subjective and constantly changing! Notice that all the factors I defined are very soft. User experience is hard to measure. The user's tolerance for being inconvenienced is hard to measure. Even skill and cost are hard to measure in this context. Not only that, but these factors change over the course of public discussion. Opinions simply change. What all this means for us is that it's difficult to measure the factors we're interested in, it's difficult to determine when we've struck an optimal balance, and even if we strike a balance it might not stay balanced for very long. In other words, our best efforts will be no better than a random guess. Sure, we might get lucky, but why pay to play in the first place?

Two Extra Cents

In general, I find it interesting to argue that people are blinded by an unjustified pressure to achieve progress. That's not to claim that we never take steps in the right direction, but rather that when the path becomes too foggy we tend to start taking random steps and then spend a lot of effort convincing ourselves that the steps somehow weren't random. It's worth pondering whether a solution has fallen into this pattern, because when it does, the result is a lot of effort spent on something that doesn't actually solve the intended problem.