[Committee-d] DRM And Authentication Issue Draft Text

Seth David Schoen schoen at loyalty.org
Mon May 15 03:01:16 EDT 2006


Don Armstrong writes:

> I'd like to get this issue recommendation turned in before the
> deadline if at all possible.
> 
> Can any final changes to the text that need to be made be suggested
> before tomorrow? [The DLA comment bits will be removed as will the
> comments at the end of the document.]
> 
> PDF here: http://archimedes.ucr.edu/ss/drm_allowing_authentication.pdf
> Full source:
> http://svn.donarmstrong.com/don/trunk/projects/gplv3/issues/drm_allowing_authentication/

I still have a concern about this, but I have not proposed any new
text because I haven't been able to solve the problem to my own
satisfaction.

I refer everyone to this thread

http://lwn.net/Articles/178950/

as well as to my earlier message about a developer whose software
is used by another party in the course of a scheme that restricts
users of a device.

I'm not sure that these problems can be solved at all, although I'm
still thinking about whether I can propose a way to solve them.

There are basically two scenarios that I think are a challenge for
GPLv3's anti-DRM strategy.

One is the use of _hardware_ to restrict users, either by someone
who also publishes software or by a third party.  We can further
divide this into a couple of cases:

* The hardware might rely on its ability to verify the identity of
the software using a database that is never under the control of
the end user.  (For example, I have been pointing out that a TCG
TPM can be used by a remote party to determine whether a user has
modified software, even if the user has the "complete corresponding
source code" in the strictest possible sense.  The remote party can
then make decisions that affect the user's ability to interact or
to receive certain data contingent on whether the user has actually
modified the software.  The user might be punished fairly severely
for modifying the software; in practice, the modified software
might not work at all for the purpose the user wants it for.  The
party receiving the attestation could have any number of
relationships to the distributor of the software, or none at all.)
A sketch of this attestation check appears just after these two
cases.

* The hardware might verify a signature or other information created
by the distributor of the software.  For example, a distributor
could sign binaries and distribute them along with the source code
used to produce them (without the signing key).  The signature could
be taken to imply that the software has certain behavior or enforces
certain policies (which might be DRM, or might be usable by someone
else to enforce DRM, or whatever).  In some cases, the "intended"
functionality of some software that is not DRM at all (like a
kernel or microkernel) could be a crucial part of the functionality
of a DRM scheme built on top of it.  This is certainly true for
Microsoft's original Palladium design, but it could be true for
any number of other designs.  If a kernel ensures interprocess
isolation, for example, a DRM developer might want to rely on that
isolation to stop users from attaching a debugger.  (This raises a
nearly classic problem of "dual-use" technologies: some security
technologies can be used either in the user's interest or against
it.  The rule in question could be as simple as "the kernel can't
be modified after boot" or "a process can't read the address space
of another process".)
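
To make the first of these cases concrete, here is a minimal
sketch in Python of the attestation check described above.  All
names are hypothetical, and plain hashes stand in for real TPM PCR
values; the point is only that the allow-list lives with the remote
party, so having the complete corresponding source code does not
help the user get a rebuilt image accepted.

    import hashlib

    # Hypothetical allow-list held by the remote party, never by the
    # user.  In a real TCG TPM scheme these would be expected PCR
    # (platform configuration register) values.
    APPROVED_MEASUREMENTS = {
        hashlib.sha256(b"vendor kernel build 1.2.3").hexdigest(),
    }

    def measure(software_image: bytes) -> str:
        """Stand-in for the hash a TPM would report to a remote party."""
        return hashlib.sha256(software_image).hexdigest()

    def remote_party_decision(attested_measurement: str) -> bool:
        """The remote party's policy: serve only unmodified software."""
        return attested_measurement in APPROVED_MEASUREMENTS

    # The shipped image is accepted...
    print(remote_party_decision(measure(b"vendor kernel build 1.2.3")))
    # ...but a freely rebuilt, modified image is refused, and the user
    # has no way to change APPROVED_MEASUREMENTS.
    print(remote_party_decision(measure(b"user-modified kernel build")))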

Both of these scenarios can be used to create the supposedly
impossible "open source DRM": the user knows everything about how
the software works and has the right and ability to modify it.
But the hardware allows modifications to be detected and punished,
and prevents the user from altering the software undetectably
while it's running.

We can also add a third scenario:

* Non-free software written by a third party (not under the GPL)
can make use of signatures on GPL-covered binaries to use them
as part of DRM enforcement.  Basically the same considerations
as above apply.  To take a simple example, if we assume that a
GPLv3-covered driver could otherwise function as a Windows kernel
driver (I don't really want to have a debate about whether this
is inherently a GPL violation; I'm trying to construct an example
about another point), Microsoft can use the Windows driver-signing
mechanism to ensure that drivers loaded this way can't be used for
purposes that Microsoft considers inappropriate.

David Turner has given a defense of the "recommended or principal
context of use" and "normally implies that the user already has
it" approach.  I understand that these clauses are meant to discourage
distributors of GPLv3-covered software from having a close relationship
with someone doing something DRM-like with that software.  I have
several concerns about whether this will work.

First, I think it's clear that there are important situations in which
nobody who is distributing GPLv3-covered software will have a close
relationship with someone who is enforcing a DRM-ish policy -- yet the
policy will still get enforced and the entity enforcing it will still
get a benefit and be able to deter people from modifying the software.

Second, I'm not sure that copyright law is strong enough to make
the kinds of restrictions David described effective in general.
David Lang argues that the GPLv3 clauses can be circumvented; David
Turner suggests that doing this would be an infringement of copyright.
I worry that this represents an expansion in the scope of copyright,
and not one that we should support; I am also not at all sure that
it would work.

Third, I'm not sure that the notions of recommended context of use
or a normal implication that a user has some information are clear
enough concepts to provide useful guidance to software developers,
or to be enforced in court.  I worry that a court would be puzzled
about the definition and scope of these concepts.

Fourth, consider the general situation in which someone sets up a
computer system to restrict someone else: I'm not sure exactly
which kinds of power relationships GPLv3 does, or should, try to
keep GPL-covered software from being used to further.

It's definitely true that the intuitive concept of complete
corresponding source code as everything that was used to produce a
particular binary (in the preferred form for making modifications to
it) is simple, appealing, and difficult to attain in practice.
Signing keys are the most salient example of a problem here.  If
they have to be distributed, then people are very upset and digital
signatures on software distributions don't really mean anything
anymore.  (You can't tell whether the copy you got was from Linus,
in the presence of an attacker who wants to mislead you...)

If they _don't_ have to be distributed, then there is a property of
certain binaries that a user can't reproduce in the user's own
binaries -- the evidence of authenticity of the binaries.  If that
evidence is used by some system to restrict the user's freedom,
the user does not necessarily have a technological means to beat
the system.  If the system says "I only run Linus's kernel" (or
"I only run Mark Steffik's microkernel" or "I only run AOL Time
Warner's instance of Sun's DReaM client") then the user who makes
changes to the code can't get the system to run the resulting
thing, and can't, as Eben put it, get it to "play all the same
movies" or perform whatever other task is being restricted.  (On
a locked-down PDA, the task might not even be DRM-related at all.
It might be just "the ability to turn the machine on"!)
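
Here is a minimal sketch of that kind of signature gate, using the
third-party Python "cryptography" package; all of the names are
hypothetical.  The user can rebuild the kernel from complete
source, but without the distributor's private key cannot produce a
signature that the device will accept.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # The distributor's signing key: used to produce the shipped
    # binary, but (on the "keys need not be distributed" reading) not
    # part of the complete corresponding source code.
    distributor_key = Ed25519PrivateKey.generate()
    device_trusted_pubkey = distributor_key.public_key()  # in the device

    official_kernel = b"official kernel image"
    official_signature = distributor_key.sign(official_kernel)

    def device_will_boot(image: bytes, signature: bytes) -> bool:
        """The device's policy: run only distributor-signed images."""
        try:
            device_trusted_pubkey.verify(signature, image)
            return True
        except InvalidSignature:
            return False

    print(device_will_boot(official_kernel, official_signature))  # True

    # The user rebuilds a modified kernel; no signature the user can
    # produce will verify against the device's trusted key.
    print(device_will_boot(b"user-modified kernel image",
                           official_signature))                   # False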

The current draft's strategy seems to be to provide legal recourse
against the developer of the system.  But like David Lang, I'm
concerned that this recourse may not be practical.  The developer
of the system, as I've said, may not have distributed the program
that was covered by GPLv3.  The developer may not be in a relationship
of mutual control with anyone who did, or even in any kind of
contractual privity.  They might exchange money -- or not.  The
restrictions may end up being the result of a dual-use technology,
where portions of GPLv3-covered software were developed for a
purpose that enhances user freedom and are then applied to restrict
it.  Actually suing people to stop them from doing this
could be hampered by vagueness in the current text and also by the
claim that copyright law does not (and, in general, should not) give
a copyright holder the ability to stop anyone from doing this.

Someone (perhaps on LWN or perhaps here) also mentioned the idea
that the GPLv3 could still function as a deterrent, even if it's
possible to circumvent the spirit of the anti-DRM clause.  For
example, perhaps TiVo-like companies could not themselves "distribute"
the code if they were also directly responsible for using it to
restrict users.  Since they would want to be able to distribute it,
they would encounter a disincentive.  I think there is a deterrent
effect here (assuming that the anti-DRM clause is enforceable at
all), but I'm not sure that it's very large.  Increasingly, both
computers and consumer electronics devices download
their operating systems over the Internet.  The Ubuntu laptop that's
running my ssh client and the Debian server where I'm typing this
message have both probably gotten a plurality of their software
from public deb-package archives.  We've started to see embedded
systems downloading firmware upgrades over the Internet.  I think
this trend is expanding dramatically.  Internet distribution of
software may become routine.  If digital signatures are used in
certain ways, it could be entirely practical to allow embedded
systems, for example, to download updates from third parties.
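
As an illustration of why this is practical, here is a brief
hypothetical sketch, reusing the "cryptography" package as above:
because the device verifies the publisher's signature itself, it
can accept an update from any mirror at all, trusted or not.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    publisher_key = Ed25519PrivateKey.generate()
    device_pubkey = publisher_key.public_key()  # shipped in firmware

    def untrusted_mirror():
        """Stand-in for a download from an arbitrary third party."""
        update = b"firmware 2.0"
        return update, publisher_key.sign(update)

    def install_update(update: bytes, signature: bytes) -> str:
        """Install anything that carries the publisher's signature."""
        try:
            device_pubkey.verify(signature, update)
        except InvalidSignature:
            return "rejected: not signed by the publisher"
        return "installed"

    update, sig = untrusted_mirror()
    print(install_update(update, sig))             # installed
    print(install_update(b"tampered image", sig))  # rejected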

Finally, David Turner suggested that free software developers don't
have a reason to actively help restrict their users, and might
choose to do the opposite.  For example, if Alice writes a program
and a company allows any version signed by Alice to run on a particular
PDA, but forbids other third-party software from running, Alice might
choose to add a feature that gives the user an interactive shell or
allows the user to download other applications and run them in her
application's address space.

I'm not sure that these incentives are stable enough to count on.
One problem is the dual-use problem -- if people are writing
"security" software, they might cause the software to enforce some
policy without investigating the question of whether that enforcement
will ever harm user freedom.  (This is my experience in talking to
some people who are DRM skeptics but working on trusted computing.
They feel that their software, which can be used for DRM and non-DRM
applications, would be less good if it contained features that
always let users violate policy.  Why this is so is a complicated
argument, but they have some reasons that seem valid to them and do
not imply that they actively want their software to be used for
DRM.)  You can certainly imagine that people who are interested in
provably correct security and formal proofs of program behavior will
not want to rely on user decisions which might result in the program
carrying out arbitrary operations.  There are people who want to
write provably-correct microkernels and interpreters, and they feel
that there are no properties of their software that could be proven
if the user could issue arbitrary commands resulting in arbitrary
behavior.

Another problem is that some free software developers might be
sympathetic to the use of their code in DRM applications.  This
might seem bizarre to us, but there is nonetheless the example of
the DReaM project, as well as the long-term possibility that
free-software-based DRM will be (1) more appealing from a public
relations point of view (it benefits from the cachet and good
will of free software); (2) more procompetitive in some ways than
DRM based on proprietary software (because there is more room for
a wider variety of parties to make changes, enhancements,
suggestions, and new versions as long as they demonstrate that
they have not thereby allowed users to escape from DRM policy
enforcement); and (3) much more acceptable from a transparency
and privacy point of view, because it's easy to verify what the
DRM software actually does and to confirm that it doesn't contain
hidden spyware functionality, as a number of proprietary software
DRM systems have turned out to do.  (Users can also have more
assurance that the restrictions that will be enforced against them
are _only_ those restrictions that the publisher has revealed,
and that there is no hidden ability to make access expire or to
revoke it arbitrarily, or for the system to malfunction in a way
that nobody but the publisher could have anticipated.)

Another problem is that someone who would ordinarily want to help
users break a policy might get paid not to -- and then whoever was
the source of the funds would argue that this didn't violate
the "recommended or principal context of use" clause, or that that
clause couldn't be enforced.

-- 
Seth David Schoen <schoen at loyalty.org> | This is a new focus for the security
     http://www.loyalty.org/~schoen/   | community. The actual user of the PC
     http://vitanuova.loyalty.org/     | [...] is the enemy.
                                       |          -- David Aucsmith, IDF 1999

