---
title: Death Note: L, Anonymity & Eluding Entropy
description: Applied Computer Science: On Murder Considered as one of the Hard Sciences
created: 04 May 2011
tags: anime, criticism, computer science, cryptography
status: finished
belief: highly likely
...
> This essay assumes a familiarity with the early plot of _[Death Note](!Wikipedia)_ and [Light Yagami](!Wikipedia); if you are unfamiliar with it, see my [_Death Note_ Ending](Death Note Ending) essay or consult [Wikipedia](!Wikipedia "Death Note#Plot").
I have elsewhere called Light 'hubristic' and said he made mistakes. So I am obliged to explain what he did wrong and how he could do better.
While Light starts scheming and taking serious risks as early as the arrival of the FBI team in Japan, he has fundamentally already screwed up. L should never have gotten that close to Light. The Death Note kills flawlessly without forensic trace and over arbitrary distances; _Death Note_ is almost a thought-experiment - given the *perfect* murder weapon, how can you screw up *anyway*?
Some of the other Death Note users highlight the problem. The user in the [Yotsuba Group](!Wikipedia "List of Death Note characters#Yotsuba Group") carries out the normal executions, but *also* kills a number of prominent competitors. The killings directly point to the Yotsuba Group and eventually the user's death. The moral of the story is that indirect relationships can be fatal in narrowing down the possibilities from 'everyone' to 'these 8 men'.
# Detective stories as optimization problems
In Light's case, L starts with the world's entire population of 7 billion people and needs to narrow it down to 1 person. It's a search problem. It maps fairly directly onto basic [information theory](!Wikipedia), in fact. (See also [Copyright](), [Simulation inferences](), and [The 3 Grenades]().) To uniquely specify one item out of 7 billion, you need 33 bits of information because $\log_2(7000000000) \approx 32.7$; to use an analogy, your 32-bit computer can only address one unique location in memory out of *4* billion locations, and adding another bit doubles the capacity to >8 billion. Is 33 bits of information a lot?
Not really. L could get one bit just by looking at history or crime statistics, and noting that mass murderers are, to an astonishing degree, *male*^[In fact, every single person mentioned in my [Terrorism is not Effective](Terrorism is not Effective#competent-murders) essay is male, and this seems to be true of the full [Wikipedia list of mass murderers](!Wikipedia "List of rampage killers") as well.], thereby ruling out half the world population and actually starting L off with a requirement to obtain only 32 bits to break Light's anonymity.[^Misa] If Death Note users were sufficiently rational & knowledgeable, they could draw on concepts like [superrationality](!Wikipedia) to acausally cooperate^[Acausality is an odd sort of new concept in [decision theory](!Wikipedia), primarily discussed in [Douglas Hofstadter](!Wikipedia)'s ["superrationality" essays](/docs/1985-hofstadter "Metamagical Themas: Sanity and Survival"), [Gary Drescher](!Wikipedia)'s _[Good and Real](http://www.amazon.com/Good-Real-Demystifying-Paradoxes-Bradford/dp/0262042339/)_ chapters 5-7, and on [LessWrong.com](http://lesswrong.com/tag/acausal/).] to avoid this information leakage... by arranging to pass on Death Notes to females^[My first solution involved sex reassignment surgery, but that makes the situation worse, as transsexuals are so rare that an L intelligent enough to anticipate these ultra-rational Death Note users would instantly gain a huge clue: just check everyone on the surgery lists. Anyway, most Death Note users would probably prefer the passing-it-on solution.] to restore a 50:50 gender ratio - for example, if for every female who obtained a Death Note there were 3 males with Death Notes, then all users could roll a 1d3 die and keep it on a 1, passing it on to someone of the opposite gender on a 2 or 3.
[^Misa]: This reasoning would be wrong in the case of [Misa Amane](!Wikipedia), but Misa is an absurd character - a Gothic lolita pop star who falls in love with Light through an extraordinary coincidence and doesn't flinch at anything, even sacrificing 75% of her lifespan or her memories; hence it's not surprising to learn on Wikipedia from the author that the motivation for her character was to avoid a "boring" all-male cast and be "a cute female". (_Death Note_ is not immune to the [Rule of Cool](http://tvtropes.org/pmwiki/pmwiki.php/Main/RuleOfCool) or [Sexy](http://tvtropes.org/pmwiki/pmwiki.php/Main/RuleOfSexy).)
We should first point out that Light is always going to leak *some* bits. The only way he could remain perfectly hidden is to not use the Death Note at all. If you change the world in even the slightest way, then you have leaked information about yourself in principle. Everything is connected in some sense; you cannot magically wave away the existence of fire without creating a cascade of consequences that result in [every living thing dying](http://lesswrong.com/lw/hq/universal_fire/). For example, the fundamental point of Light executing criminals is to *shorten their lifespan* - there's no way to hide that. You can't both shorten their lives and *not* shorten their lives. He is going to reveal himself this way, at the very least to the actuaries and statisticians.
More historically, this has been a challenge for cryptographers, as in WWII: how did they exploit the Enigma & other communications without revealing they had done so? Their solution was misdirection: [constantly arranging for plausible alternatives](!Wikipedia "Ultra#Safeguarding of sources"), like search planes that "just happened" to find a German ship or submarine. (However, the famous story that Winston Churchill allowed the town of Coventry to be bombed rather than risk the secret of Ultra has [since been put into question](!Wikipedia "Coventry Blitz#Coventry and Ultra").) It's not clear to me what would be the best misdirection for Light to mask his normal killings - use the Death Note's control features to invent an anti-criminal terrorist organization?
So there is a real challenge here: one party is trying to infer as much as possible from observed effects, and the other is trying to minimize how much the former can observe while not stopping entirely. How well does Light balance the competing demands?
# Mistakes
## Mistake 1
However, he can try to reduce the leakage and make his [anonymity set](!Wikipedia "Degree of anonymity") as large as possible. For example, killing every criminal with a heart attack is a dead give-away. Criminals do not die of heart attacks that often. (The point is more dramatic if you replace 'heart attack' with 'lupus'; as we all know, in real life it's never lupus.) Heart attacks are a subset of all deaths, and by restricting himself, Light makes it easier to detect his activities. 1000 deaths of lupus are a blaring red alarm; 1000 deaths of heart attacks are an oddity; and 1000 deaths distributed over the statistically likely suspects of cancer and heart disease etc. are almost invisible (but still noticeable in principle).
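To put a rough number on how loudly each option 'blares', here is a toy calculation - the yearly baseline counts are illustrative ballpark figures, not official statistics - treating each cause's yearly death count as roughly Poisson-distributed: an excess of 1000 deaths piled onto one rare cause stands out by an absurd margin, piled onto heart attacks it is a mere oddity, and spread proportionally across common causes it barely registers.

```python
import math

# Rough yearly death counts by cause (illustrative ballpark figures only).
baseline = {"heart disease": 200_000, "cancer": 370_000, "stroke": 110_000, "lupus": 100}
extra = 1_000  # additional deaths inflicted by a Kira in a year
total = sum(baseline.values())

def sigmas(base: float, excess: float) -> float:
    """Standard deviations above baseline, treating the yearly count as Poisson (sd = sqrt(mean))."""
    return excess / math.sqrt(base)

print(f"all 1000 as lupus:         {sigmas(baseline['lupus'], extra):6.1f} sigma  (blaring red alarm)")
print(f"all 1000 as heart attacks: {sigmas(baseline['heart disease'], extra):6.1f} sigma  (an oddity)")
for cause, base in baseline.items():  # the same 1000 spread in proportion to each cause's baseline
    print(f"spread over {cause:>13}:  {sigmas(base, extra * base / total):6.2f} sigma  (almost invisible)")
```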
So, Light's fundamental mistake is to kill in ways unrelated to his goal. Killing through heart attacks does not just make him visible early on; the deaths also reveal that his assassination method is supernaturally precise. L has been tipped off that Kira exists. First mistake.
## Mistake 2
Worse, the deaths are non-random in other ways - they tend to occur at particular times! Graphed, daily patterns jump out.
L was able to narrow down the active times of the presumable student or worker to a particular range of longitude, say 125-150° out of 180°; and what country is most prominent in that range? Japan. So that cut down the 7 billion people to around 0.128 billion; 0.128 billion requires 27 bits ($\log_2(128000000) \approx 26.93$) so just the scheduling of deaths cost Light 6 bits of anonymity!
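The bookkeeping for this step is just two logarithms; a minimal sketch using the rounded population figures above:

```python
import math

world = 7_000_000_000
japan =   128_000_000

print(math.log2(world))          # ~32.7 bits to single out anyone on Earth
print(math.log2(japan))          # ~26.9 bits once the field is narrowed to Japan
print(math.log2(world / japan))  # ~5.8 bits: what the daily killing schedule gave away
```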
### De-anonymization
On a side-note, some might be skeptical that one can infer much of anything from the graph and that _Death Note_ was just glossing over this part. "How can anyone infer that it was someone living in Japan just from 2 clumpy lines at morning and evening in Japan?" But actually, such a graph is surprisingly precise. I learned this years before I watched _Death Note_, when I was heavily active on Wikipedia; often I would wonder if two editors were the same person or roughly where an editor lived. If their edits or user page did not reveal anything useful, I would go to "Kate's [edit counter](!Wikipedia "Wikipedia:WikiProject edit counters")" and examine the times of day all their hundreds or thousands of edits were made at. Typically, what one would see was ~4 hours where there were no edits whatsoever, then ~4 hours with moderate to high activity, a trough, then another gradual rise to 8 hours later and a further decline down to the first 4 hours of no activity. These periods quite clearly corresponded to sleep (pretty much everyone is asleep at 4 AM), morning, lunch & work hours, evening, and then night with people occasionally staying up late and editing^[This applies to many other activities like [Twitter posts](http://bits.blogs.nytimes.com/2012/06/07/good-night-moon-good-night-little-bird/ "Twitter Knows When You Sleep, and More") or Google searches; eg. blogger [muflax](http://webcitation.org/6EDvDSVzN "Google Web History (original http://blog.muflax.com/personal/google-web-history/)") observed the same clear circadian rhythms in his Google searches by hour.]. There was noise, of course, from people staying up especially late or getting in a bunch of editing during their workday or occasionally traveling, but the overall patterns were clear - never did I discover that someone was actually a nightwatchman and my guess was an entire hemisphere off. (Academic estimates based on user editing patterns correlate well with what is predicted on the basis of the geography of IP edits.^[See the 2011 paper, ["Circadian patterns of Wikipedia editorial activity: A demographic analysis"](http://arxiv.org/abs/1109.1746).])
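As a toy sketch of this sort of inference (the timestamps below are made up for illustration): bin activity by UTC hour, find the quietest stretch, and assume it corresponds to the hours when pretty much everyone is asleep.

```python
from collections import Counter

# Hypothetical activity timestamps (hour-of-day in UTC) for an unknown editor;
# in reality these would be scraped from edit histories, posts, etc.
utc_hours = [23, 0, 0, 1, 2, 3, 3, 5, 6, 9, 10, 10, 11, 12, 12, 13, 14, 14, 23, 1]

def quietest_window_start(hours, window=8):
    """Start of the least-active block of `window` consecutive UTC hours -
    presumably the subject's nightly sleep."""
    counts = Counter(h % 24 for h in hours)
    return min(range(24), key=lambda s: sum(counts[(s + i) % 24] for i in range(window)))

sleep_start_utc = quietest_window_start(utc_hours)
# If people are typically asleep from roughly midnight to 8 AM local time,
# a quiet block starting at `sleep_start_utc` UTC implies a UTC offset of:
utc_offset = (0 - sleep_start_utc) % 24
print(f"quietest hours begin at {sleep_start_utc}:00 UTC -> guessed timezone ≈ UTC+{utc_offset}")
# With this toy data the guess comes out as UTC+9, i.e. Japan.
```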
Computer security research offers more scary results. There are an amazing number of ways to break someone's privacy and de-anonymize them ([background](http://33bits.org/2013/04/16/privacy-technologies-an-annotated-syllabus/); there is also [financial incentive](http://www.dtc.umn.edu/~odlyzko/doc/privacy.economics.pdf "'Privacy, Economics, and Price Discrimination on the Internet', Odlyzko 2003") to do so in order to advertise & [price discriminate](!Wikipedia)):
1. small errors in their computer's [clock's time](http://www.caida.org/publications/papers/2005/fingerprinting/) (even [over Tor](https://www.cl.cam.ac.uk/~sjm217/papers/usenix08clockskew.pdf))
2. [Web browsing history](http://w2spconf.com/2010/papers/p26.pdf)^[You can steal information through [JS](http://jeremiahgrossman.blogspot.com/2006/08/i-know-where-youve-been.html) or [CSS](http://blog.mozilla.com/security/2010/03/31/plugging-the-css-history-leak/), and analyzing the history for [inferring demographics](http://www.mikeonads.com/2008/07/13/using-your-browser-url-history-estimate-gender/) is [already patented](http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220070073681%22.PGNR.&OS=DN/20070073681&RS=DN/20070073681).] or just the [version and plugins](https://panopticlick.eff.org/browser-uniqueness.pdf)^[You can try your own browser live at the [EFF](!Wikipedia "Electronic Frontier Foundation")'s [Panopticlick](https://panopticlick.eff.org/).]; and this is when random [Firefox](http://33bits.org/2010/06/01/yet-another-identity-stealing-bug-will-creeping-normalcy-be-the-result/) or [Google Docs](http://33bits.org/2010/02/22/google-docs-leaks-identity/) or [Facebook](http://33bits.org/2010/09/28/instant-personalization-privacy-flaws/) bugs don't leak your identity
3. [Timing attacks](!Wikipedia) based on how slow pages load^[Felten & Schneider 2000, ["Timing Attacks on Web Privacy"](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.32.6864&rep=rep1&type=pdf)] (how many [cache misses](!Wikipedia) there are; timing attacks can also be used to [learn website usernames or # of private photos](http://crypto.stanford.edu/~dabo/abstracts/webtiming.html))
4. Knowledge of what 'groups' a person was in could [uniquely identify 42%](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.155.820&rep=rep1&type=pdf)^[See also the researchers' [blog](https://web.archive.org/web/20130926165125/http://honeyblog.org/archives/51-A-Practical-Attack-to-De-Anonymize-Social-Network-Users.html).] of people on social networking site [XING](!Wikipedia), and possibly Facebook & 6 others
5. Similarly, [knowing just a few movies](http://33bits.org/about/netflix-paper-home-page/) someone has watched^[Coverage of this de-anonymization algorithm generally linked it to [IMDb](!Wikipedia) ratings, but the authors are clear - you could have those ratings from *any* source, there's nothing special about IMDb aside from it being public and online.], popular or obscure, through [Netflix](!Wikipedia) often grants access to the rest of their profile if it was included in the [Netflix Prize](!Wikipedia). (This was more dramatic than the [AOL search data scandal](!Wikipedia) because AOL searches had a great deal of personal information embedded in the search queries, but in contrast, the Netflix data seems impossibly impoverished - there's nothing *obviously* identifying about what anime one has watched unless one watches very obscure ones.)
6. The researchers [generalized their Netflix work](http://randomwalker.info/social-networks/) to find isomorphisms between arbitrary graphs^[This sounds like something that ought to be [NP-complete](!Wikipedia), and while the [graph isomorphism problem](!Wikipedia) is known to be in NP, it is almost unique in being like [integer factorization](!Wikipedia) - it may be very easy or very hard, there is no proof either way. In practice, large real-world graphs tend to be [very efficient to solve](http://33bits.org/2008/11/20/graph-isomorphism-deceptively-hard/).] (such as social networks stripped of *any and all* data *except* for the graph structure), [for example](http://33bits.org/2011/03/09/link-prediction-by-de-anonymization-how-we-won-the-kaggle-social-network-challenge/) [Flickr](!Wikipedia) and [Twitter](!Wikipedia), and give many examples of [public datasets](http://33bits.org/2008/11/12/57/) that could be de-anonymized[^abstract] - such as your [Amazon purchases](http://33bits.org/2011/05/24/you-might-also-like-privacy-risks-of-collaborative-filtering/) ([Calandrino et al 2011](http://www.cs.utexas.edu/~shmat/shmat_oak11ymal.pdf "'You Might Also Like': Privacy Risks of Collaborative Filtering"); [blog](http://freedom-to-tinker.com/blog/jcalandr/you-might-also-privacy-risks-collaborative-filtering)). These attacks are on just the data that is left after attempts to anonymize data; they don't exploit the observation that the choice of what data to remove is as interesting as what is left, what [Julian Sanchez](!Wikipedia) calls ["The Redactor's Dilemma"](http://www.juliansanchez.com/2009/12/08/the-redactors-dilemma/).
7. Usernames hardly [bear discussing](http://33bits.org/2011/02/16/usernames-linkability-uber-profiles/)
8. Your hospital records can be [de-anonymized](http://dataprivacylab.org/dataprivacy/projects/law/law1.html) just by looking at public voting rolls.^[eg. 97% of the Cambridge, Massachusetts voters could be identified with birth-date and zip code, and 29% by birth-date and just gender.] That researcher later went on [to run](http://latanyasweeney.org/work/identifiability.html) "experiments on the identifiability of de-identified survey data [[cite](http://latanyasweeney.org/cv.html#survey)], pharmacy data [[cite](http://dataprivacylab.org/projects/identifiability/pharma1.html)], clinical trial data [[cite](http://latanyasweeney.org/cv.html#clinicaltrial)], criminal data [State of Delaware v. Gannett Publishing], DNA [[cite](http://dataprivacylab.org/dataprivacy/projects/genetic/dna3.html), [cite](http://dataprivacylab.org/dataprivacy/projects/genetic/dna2.html), [cite](http://dataprivacylab.org/dataprivacy/projects/genetic/dna1.html)], tax data, public health registries [[cite](http://latanyasweeney.org/cv.html#iterativeprofiler) (sealed by court), etc.], web logs, and partial Social Security numbers [[cite](http://dataprivacylab.org/dataprivacy/projects/ssnwatch/index.html)]." (Whew.)
9. Your [typing](!Wikipedia "Keystroke dynamics#References") is surprisingly unique
10. Knowing your morning commute as loosely as to the individual blocks (or less granular) [uniquely identifies](http://33bits.org/2009/05/13/your-morning-commute-is-unique-on-the-anonymity-of-homework-location-pairs/) ([Golle & Partridge 2009](http://crypto.stanford.edu/~pgolle/papers/commute.pdf "On the Anonymity of Home/Work Location Pairs")) you; knowing your commute to the zip code/census tract uniquely identifies 5% of people
11. Your handwriting is fairly unique, sure - but so is how you fill in bubbles on tests[^bubbles]
12. Speaking of handwriting, your writing style can [be](http://www.nytimes.com/2011/07/24/opinion/sunday/24gray.html) [pretty unique](http://www.ncfta.ca/papers/emailforensics.pdf) [too](http://randomwalker.info/publications/author-identification-draft.pdf)
13. the unnoticeable background electrical hum may [uniquely date audio recordings](http://www.bbc.co.uk/news/science-environment-20629671 "The hum that helps to fight crime")
(The only surprising thing about [DNA-related privacy breaks](http://www.nytimes.com/2013/01/18/health/search-of-dna-sequences-reveals-full-identities.html) is how long they have taken to show up.)
To summarize: [differential privacy](!Wikipedia) is [almost](http://radar.oreilly.com/2011/05/anonymize-data-limits.html) impossible[^FAQ] and privacy is dead[^Brin]. (See also ["Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization"](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1450006 "Ohm 2009").)
[^abstract]: From the [paper's](http://www.cs.utexas.edu/~shmat/shmat_oak09.pdf) abstract:
> [we] develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.
[^FAQ]: Arvind Narayanan and Vitaly Shmatikov [brusquely summarize](http://www.cs.utexas.edu/~shmat/socialnetworks-faq.html) the implications of their de-anonymization:
> **So, what's the solution?**
>
> We do not believe that there exists a technical solution to the problem of anonymity in social networks. Specifically, we do not believe that any graph transformation can (a) satisfy a robust definition of privacy, (b) withstand de-anonymization attacks described in our paper, and (c) preserve the utility of the graph for common data-mining and advertising purposes. Therefore, we advocate non-technical solutions.
So, the de-anonymizing just happens [behind closed doors](http://33bits.org/2012/12/17/new-developments-in-deanonymization/):
> researchers don't have the incentive for deanonymization anymore. On the other hand, if malicious entities do it, naturally they won't talk about it in public, so there will be no PR fallout. Regulators have not been very aggressive in investigating anonymized data releases in the absence of a public outcry, so that may be a negligible risk. Some have questioned whether deanonymization in the wild is actually happening. I think it's a bit silly to assume that it isn't, given the economic incentives. Of course, I can't prove this and probably never can. No company doing it will publicly talk about it, and the privacy harms are so indirect that tying them to a specific data release is next to impossible. I can only offer anecdotes to explain my position: I have been approached multiple times by organizations who wanted me to deanonymize a database they'd acquired, and I've had friends in different industries mention casually that what they do on a daily basis to combine different databases together is essentially deanonymization.
In general, there's no clear distinction between 'useful' and 'useless' information from the perspective of identifying/breaking privacy/reversing anonymization ([emphasis added](http://33bits.org/2009/10/14/de-anonymization-is-not-x-the-need-for-re-identification-science/)):
> 'Quasi-identifier' is a notion that arises from attempting to see some attributes (such as ZIP code) but not others (such as tastes and behavior) as contributing to re-identifiability. However, the major lesson from the re-identification papers of the last few years has been that *any information at all about a person* can be potentially used to aid re-identification.
[^bubbles]: See ["Bubble Trouble: Off-Line De-Anonymization of Bubble Forms"](http://web.archive.org/web/20110703183356/www.cs.princeton.edu/~wclarkso/bubble-trouble.pdf), USENIX Security Symposium 2011; from ["New Research Result: Bubble Forms Not So Anonymous"](http://www.freedom-to-tinker.com/blog/wclarkso/new-research-result-bubble-forms-not-so-anonymous):
> If bubble marking patterns were completely random, a classifier could do no better than randomly guessing a test set's creator, with an expected accuracy of 1/92 ≈ 1%. Our classifier achieves over 51% accuracy. The classifier is rarely far off: the correct answer falls in the classifier's top three guesses 75% of the time (vs. 3% for random guessing) and its top ten guesses more than 92% of the time (vs. 11% for random guessing).
[^Brin]: But hey, at least the lack of privacy is two-way and the public can [keep an eye on](!Wikipedia "Transparency (social)") malefactors like the government, as [David Brin](!Wikipedia)'s _[The Transparent Society](!Wikipedia)_ argues is the best outcome.
But wait, Wikileaks has revealed the massive expansion of American government secrecy due to the War on Terror and even the [supposed friend](http://www.washingtonpost.com/wp-dyn/content/article/2008/12/10/AR2008121003364.html) of transparency, President Obama, [has](http://www.nytimes.com/2009/03/17/us/politics/17signing.html) [presided](http://www.nytimes.com/2010/06/12/us/politics/12leak.html) [over](http://www.salon.com/news/opinion/glenn_greenwald/2010/05/25/whistleblowers) [an](http://web.archive.org/web/20110624070259/http://harpers.org/archive/2010/08/hbc-90007562 "Obama's War on Whistleblowers") [expansion](http://www.dailykos.com/story/2011/03/01/951432/-War-on-Whistleblowers-Escalating) of President George W. Bush's secrecy programs and crackdowns on [whistle-blowers](!Wikipedia) of all stripes? Oh. Too bad about that, I guess.
## Mistake 3
Light's third mistake was reacting to the provocation of Lind L. Tailor. Running the broadcast in 1 region was a gamble on L's part; he had no real reason to think Light was in [Kanto](!Wikipedia "Kanto region") and should have arranged for it to be broadcast to exactly half of Japan's population. But it was one that paid off; he narrowed his target down to $\frac{1}{3}$ the original Japanese population, for a gain of ~1.6 bits. (You can see it was a gamble by considering if Light had been outside Kanto; since he would not see it live, he would not have reacted, and all L would learn is that his suspect was in that other $\frac{2}{3}$ of the population, for a gain of only ~0.6 bits.)
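A quick sketch of the arithmetic behind the gamble (only the one-third-of-Japan figure comes from the plot; the rest is just the definition of information):

```python
import math

kanto_share = 1/3   # fraction of Japan's population that saw the broadcast

bits_if_kira_reacts = -math.log2(kanto_share)      # ~1.58 bits
bits_if_kira_silent = -math.log2(1 - kanto_share)  # ~0.58 bits
expected_bits = (kanto_share * bits_if_kira_reacts
                 + (1 - kanto_share) * bits_if_kira_silent)  # ~0.92 bits

print(bits_if_kira_reacts, bits_if_kira_silent, expected_bits)
# A 50:50 split would have maximized the *expected* gain at exactly 1 bit,
# which is why broadcasting to exactly half of Japan is the better-designed experiment.
```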
But even this wasn't a *huge* mistake. He lost 6 bits to his schedule of killing, and lost another 1.6 bits to temperamentally killing Lind L. Tailor, but since the male population of Kanto is 21.5 million (43 million total), he still has ~24 bits of anonymity left ($\log_2(21500000) \approx 24.36$). That's not too terrible, and the loss is mitigated even further by other details of this mistake, as pointed out by [Zmflavius](http://alternatehistory.com/discussion/showpost.php?p=7751679&postcount=10 "How might the real world react to a real Kira (Death Note)"); specifically, that unlike "being male" or "being Japanese", the information about being in Kanto is subject to *decay*, since people move around all the time for all sorts of reasons:
> ...quite possibly Light's biggest mistake was inadvertently revealing his connection to the police hierarchy by hacking his dad's computer. Whereas even the Lind L. Taylor debacle only revealed his killing mechanics and narrowed him down to "someone in the Kanto region" (which is, while an impressive accomplishment based on the information he had, entirely meaningless for actually finding a suspect), there were perhaps a few hundred people who had access to the information Light's dad had. There's also the fact that L *knew* that Light was probably someone in their late teens, meaning that there was an extremely high chance that at the end of the school year, even that coup of his would expire, thanks to students heading off to university all over Japan (of course, Light went to Toudai, and a student of his caliber not attending such a university would be suspicious, but L had no way of knowing that then). I mean, perhaps L had hoped that Kira would reveal himself by suddenly moving away from the Kanto region, but come the next May, he would have no way of monitoring unusual movements among late teenagers, because a large percentage of them would be moving for legitimate reasons.
(One could still run the inference "backwards" on any particular person to verify they were in Kanto in the right time period, but as time passes, it becomes less possible to run the inference "forwards" and only examine people in Kanto.)
This mistake also shows us that the important thing that information theory buys us, really, is not the *bit* (we could be using $\log_{10}$ rather than $\log_2$, and compare ["dits"](!Wikipedia "Ban (information)") rather than "bits") so much as comparing events in the plot on a *logarithmic* scale. If we simply looked at the absolute number of people ruled out at each step, we'd conclude that the very first mistake by Light was a debacle without compare in human history, since it let L rule out >6 billion people, approximately 60x more people than all the other mistakes put together would let L rule out. Mistakes are relative to each other, not absolutes.
## Mistake 4
Light's fourth mistake was to use confidential police information stolen using his policeman father's credentials. This mistake was the largest in bits lost. But interestingly, many or even most _Death Note_ fans do not seem to regard this as his largest mistake, instead pointing to his killing Lind L. Tailor or perhaps relying too much on Mikami. The information theoretical perspective strongly disagrees, and lets us quantify how large this mistake was.
When he acts on the secret police information, he instantly cuts down his possible identity to one out of a few thousand people connected to the police. Let's be generous and say 10,000. It takes 14 bits to specify 1 person out of 10,000 ($\log_2(10000) \approx 13.29$) - as compared to the 24-25 bits to specify a Kanto dweller.
This mistake cost him 11 bits of anonymity; in other words, this mistake cost him *twice* what his scheduling cost him and almost *7* times the murder of Tailor!
## Mistake 5
In comparison, the fifth mistake, murdering Ray Penbar's fiancee and focusing L's suspicion on Penbar's assigned targets, was positively cheap. If we assume Penbar was tasked with 200 leads out of the 10,000, then murdering him and the fiancee dropped Light from 14 bits to 8 bits ($\log_2(200) \approx 7.64$) - a loss of just 6 bits, a little over half the fourth mistake and comparable to the original scheduling mistake.
## Endgame
At this point in the plot, L resorts to direct measures and enters Light's life directly, enrolling at the university. From this point on, Light is pretty much screwed. He frittered away >25 bits of anonymity and then L intuited the rest and suspected him all along. (We could justify L skipping over the remaining 8 bits by pointing out that L can analyze the deaths and infer psychological characteristics like arrogance, puzzle-solving, and great intelligence, which combined with heuristically searching the remaining candidates, could lead him to zero in on Light.)
From the theoretical point of view, the game was over at that point. The question became proving it to L's satisfaction.
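To keep the running totals in one place, here is the bookkeeping as a few lines of Python (a sketch using the rounded population figures above; the small discrepancies with the in-text bit counts come from rounding and from the 1-bit 'male' restriction being folded into the Kanto step):

```python
import math

def bits(n):
    """Bits of anonymity left when the suspect pool has n members."""
    return math.log2(n)

stages = [
    ("everyone alive",                          7_000_000_000),
    ("Japan (Mistake 2: killing schedule)",       128_000_000),
    ("males in Kanto (Mistake 3: Tailor)",         21_500_000),
    ("connected to the police (Mistake 4)",            10_000),
    ("on Penbar's list of leads (Mistake 5)",             200),
]

prev = None
for label, n in stages:
    note = f"  (-{bits(prev) - bits(n):.1f} bits)" if prev else ""
    print(f"{label:<40} {bits(n):5.1f} bits of anonymity left{note}")
    prev = n
```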
# Security is Hard (Let's Go Shopping)
What *should* Light have done? That's easy to answer, but tricky to implement.
One could try to manufacture *dis*information. [Terence Tao](!Wikipedia) rehearses many of the above points about information theory & anonymity, and [goes on to discuss](https://plus.google.com/114134834346472219368/posts/8vmpA9fgRMq) [faking information](!Wikipedia "Disinformation"):
> ...one additional way to gain more anonymity is through deliberate *disinformation*. For instance, suppose that one reveals 100 independent bits of information about oneself. Ordinarily, this would cost 100 bits of anonymity (assuming that each bit was _a priori_ equally likely to be true or false), by cutting the number of possibilities down by a factor of 2^100^; but if 5 of these 100 bits (chosen randomly and not revealed in advance) are deliberately falsified, then the number of possibilities increases again by a factor of (100 `choose` 5) ~ 2^26^, recovering about 26 bits of anonymity. In practice one gains even more anonymity than this, because to dispel the disinformation one needs to solve a [satisfiability](!Wikipedia) problem, which can be notoriously intractable computationally, although this additional protection may dissipate with time as algorithms improve (e.g. by incorporating ideas from [compressed sensing](!Wikipedia)).
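The combinatorics in Tao's example are easy to check directly; a minimal sketch:

```python
import math

revealed_bits = 100   # independent bits revealed about oneself
falsified = 5         # of which a random, unannounced 5 are deliberately false

anonymity_lost = revealed_bits                                     # 100 bits if all were true
anonymity_recovered = math.log2(math.comb(revealed_bits, falsified))

print(f"C(100,5) = {math.comb(revealed_bits, falsified):,}")       # 75,287,520
print(f"recovered ≈ {anonymity_recovered:.1f} bits")               # ≈ 26.2 bits
print(f"net cost  ≈ {anonymity_lost - anonymity_recovered:.1f} bits")
```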
## Randomizing
The difficulty with suggesting that Light should - or *could* - have used disinformation on the timing of deaths is that we are, in effect, engaging in a sort of [hindsight bias](!Wikipedia). How exactly is Light or anyone supposed to know that L could deduce his timezone from his killings? I mentioned an example of using Wikipedia edits to localize editors, but that technique was (as far as I know) unique to me and no doubt there are many other forms of information leakage I have never heard of; if I were Light, even if I remembered my Wikipedia technique, I might not bother evenly distributing my killing over the clock or adopting a pattern suggesting I was in Europe rather than Japan. If Light had known he was leaking timing information but didn't know that someone out there was clever enough to use it (a "known unknown"), then we might blame him; but how is Light supposed to know these "unknown unknowns"?
[Randomization](!Wikipedia) is the answer. Randomization and encryption scramble the correlations between input and output, and they would serve as well in _Death Note_ as they do in cryptography & statistics in the real world, at the cost of some efficiency. The point of randomization, both in cryptography and in statistical experiments, is to not just prevent the leaked information or [confounders](!Wikipedia) (respectively) you do know about but also the ones you do *not* yet know about.
To steal & paraphrase an example from [Jim Manzi](!Wikipedia "Jim Manzi (software entrepreneur)")'s [_Uncontrolled_](http://www.amazon.com/Uncontrolled-Surprising-Trial---Error-Business/dp/046502324X/): you're running a weight-loss experiment. You know that the effectiveness might vary with each subject's pre-existing weight, but you don't believe in randomization (you're a practical man! only prissy statisticians worry about randomization!); so you split the subjects by weight, and for convenience you allocate them by when they show up to your experiment - in the end, there are exactly 10 experimental subjects over 150 pounds and 10 controls over 150 pounds, and so on and so forth. Unfortunately, it turns out that unbeknownst to you, a genetic variant controls weight gain and a whole extended family showed up at your experiment early on and they all got allocated to 'experimental' and none of them to 'control' (since you didn't need to randomize, right? you were making sure the groups were matched on weight!). Your experiment is now bogus and misleading. Of course, you could run a second experiment where you make sure the experimental and control groups are matched on weight and also now matched on that genetic variant... but now there's the potential for some third confounder to hit you. If only you had used randomization! Then you would probably have put some of the variants into the other group as well and your results wouldn't've been bogus!
So to deal with Light's first mistake, simply scheduling every death on the hour will not work because the wake-sleep cycle is still present. If he set up a list and wrote down _n_ criminals for each hour to eliminate the peak-troughs rather than randomizing, could that still go wrong? Maybe: we don't know what information might be left in the data which an L or Turing could decipher. I can speculate about one possibility - the allocation of each kind of criminal to each hour. If one were to draw up lists and go in order (hey, one doesn't need randomization, right?), then the order might go 'criminals in the morning newspaper, criminals on TV, criminals whose details were not immediately given but were available online, criminals from years ago, historical criminals etc'; if the morning-newspaper-criminals start at say 6 AM Japan time... And allocating evenly might be hard, since there's naturally going to be shortfalls when there just aren't many criminals that day or the newspapers aren't publishing (holidays?) etc., so the shortfall periods will pinpoint what the Kira considers 'end of the day'.
A much safer procedure is thorough-going randomization applied to timing, subjects, and manner of death. Even if we assume that Light was bound and determined to reveal the existence of Kira and gain publicity and international notoriety (a major character flaw in its own right; accomplishing things, taking credit - choose one), he still did not have to reduce his anonymity much past 32 bits.
1. Each execution's time could be determined by a random dice roll (say, a 24-sided dice for hours and a 60-sided dice for minutes).
2. Selecting method of death could be done similarly based on easily researched demographic data, although perhaps irrelevant (serving mostly to conceal that a killing has taken place).
3. Selecting criminals could be based on internationally accessible periodicals that plausibly every human has access to, such as the _New York Times_, and deaths could be delayed by months or years to broaden the possibilities as to where the Kira learned of the victim (TV? books? the Internet?) and avoiding issues like killing a criminal only publicized on one obscure Japanese public television channel. And so on.
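As a purely hypothetical sketch of such a randomized procedure (the cause-of-death weights are invented placeholders for the 'easily researched demographic data' above):

```python
import random

# Illustrative shares of death by cause (invented numbers, not real vital statistics).
cause_weights = {"heart disease": 0.25, "cancer": 0.30, "stroke": 0.15,
                 "accident": 0.20, "other illness": 0.10}

def schedule_killing(victim: str, horizon_days: int = 365) -> dict:
    """Randomize the timing and apparent cause of a death so that neither
    leaks the user's schedule, location, or supernatural precision."""
    return {
        "victim": victim,
        "delay_days": random.randrange(horizon_days),  # uniform over the whole horizon
        "hour": random.randrange(24),                  # no circadian signature
        "minute": random.randrange(60),
        "cause": random.choices(list(cause_weights),
                                weights=cause_weights.values())[0],
    }

random.seed(0)  # seeded only to make the illustration reproducible
print(schedule_killing("criminal #1"))
```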
Let's remember that all this is predicated on anonymity, and on Light using low-tech strategies; as one person asked me, "why doesn't Light set up a cryptographic [assassination market](!Wikipedia) or just take over the world? He would win without all this cleverness." Well, then it would not be _Death Note_.
# See Also
- ["Who wrote the _Death Note_ script?"](Death Note script) (statistical analysis of authorship)
# External links
- Discussion:
- [LessWrong](http://lesswrong.com/lw/5ld/death_note_anonymity_and_information_theory/)
- [Hacker News](https://news.ycombinator.com/item?id=3634320)
- ["On Murder Considered as one of the Fine Arts"](!Wikipedia "On Murder Considered as one of the Fine Arts"), [Thomas De Quincey](!Wikipedia)
- ["Stakeout: how the FBI tracked and busted a Chicago Anon; Continuous surveillance, informants, trap-and-trace gear—the FBI spared no …"](http://arstechnica.com/tech-policy/2012/03/stakeout-how-the-fbi-tracked-and-busted-a-chicago-anon/) -(deanonymizing [Jeremy Hammond](!Wikipedia))
- ["When Anonymous Isn't Really Anonymous"](http://brooksreview.net/2014/01/i-see-you/)
# Appendices
## Communicating with a Death Note
One might wonder how much information one could send *intentionally* with a Death Note, as opposed to inadvertently leak bits about one's identity. As deaths are by and large publicly known information, we'll assume the sender and recipient have some sort of pre-arranged key or one-time pad (although one would wonder why they'd use such an immoral and clumsy system as opposed to steganography or messages online).
A death inflicted by a Death Note has 3 main distinguishing traits which one can control:
1. the person
The 'who?' is already calculated for us: if it takes 33 bits to specify a unique human, then a particular human can convey 33 bits. Concerns about learnability (how would you learn of an Amazon tribesman's death?) imply that it's really <33 bits.
If you try some scheme to encode more bits into the choice of assassination, you either wind up with 33 bits or you wind up unable to convey certain combinations of bits and effectively 33 bits anyway - your scheme will tell you that to convey your desperately important message _X_ of 50 bits telling all about L's true identity and how you discovered it, you need to kill an Olafur Jacobs of Tanzania who weighs more than 200 pounds and is from Taiwan, but alas! Jacobs doesn't exist for you to kill.
2. the time
The time is handled by similar reasoning. There is a certain granularity to Death Note kills: even if *it* is capable of timing deaths down to the nanosecond, one can't actually witness this or receive records of this. Doctors may note time of death down to the minute, but no finer (and how do you get such precise medical records anyway?). News reports may be even less accurate, noting merely that it happened in the morning or in the late evening. In rare cases like live broadcasts, one may be able to do a little better, but even they tend to be delayed by a few seconds or minutes to allow for buffering, technical glitches to be fixed, the stenographers to produce the closed captioning, or simply to guard against embarrassing events (like Janet Jackson's nipple-slip). So we'll not assume the timing can be more accurate than the minute. But which minutes does a Death Note user have to choose from? Inasmuch as the Death Note is apparently incapable of influencing the past or causing Pratchettian[^Mort] superluminal effects, the past is off-limits; but messages also have to be sent in time for whatever they are supposed to influence, so one cannot afford to have a window of a century. If the message needs to affect something within the day, then the user has a window of only $60 \times 24 = 1440$ minutes, which is $\log_2(1440) \approx 10.49$ bits; if the user has a window of a year, that's slightly better, as a death's timing down to the minute could embody as much as $\log_2(60 \times 24 \times 365) \approx 19$ bits. (Over a decade then is 22.3 bits, etc.) If we allow timing down to the second, then a year would be 24.9 bits. In any case, it's clear that we're not going to get more than 33 bits from the date. On the plus side, an 'IP over Death' protocol would be superior to [some other protocols](!Wikipedia "IP over Avian Carriers") - here, the worse your latency, the more bits you could extract from the packet's timestamp! _[Dinosaur Comics](!Wikipedia)_ on [compression schemes](http://www.qwantz.com/index.php?comic=354 "T-Rex As: 'The Computer Scientist'"):
!["Yeah, but there's more to being smart than knowing compression schemes!" "No there's not!" "Shoot - he knows the secret!!" --Ryan North](/images/2004-ryannorth-dinosaurcomics-391.png "http://www.qwantz.com/index.php?comic=354")
3. the circumstances (such as the place)
[^Mort]: [Terry Pratchett](!Wikipedia), _[Mort](!Wikipedia)_:
> The only things known to go faster than ordinary light is monarchy, according to the philosopher Ly Tin Weedle. He reasoned like this: you can't have more than one king, and tradition demands that there is no gap between kings, so when a king dies the succession must therefore pass to the heir *instantaneously*. Presumably, he said, there must be some elementary particles -- kingons, or possibly queons -- that do this job, but of course succession sometimes fails if, in mid-flight, they strike an anti-particle, or republicon. His ambitious plans to use his discovery to send messages, involving the careful torturing of a small king in order to modulate the signal, were never fully expanded because, at that point, the bar closed.
The circumstances are much more difficult to calculate. We can subdivide them in a lot of ways; here's one:
1. location (eg. latitude/longitude)
Earth has ~510,072,000,000 square meters of surface area; most of it is entirely useless from our perspective - if someone is in an airplane and dies, how on earth does one figure out the exact square meter he was above? Or on the oceans? Earth has ~148,940,000,000 square meters of *land*, which is more usable: the usual calculation gives us $\log_2(148940000000) \approx 37.12$ bits. (Surprised at how similar to the 'who?' bit calculation this is? But $37.12 - 33 = 4.12$ and $2^{4.12} = 17.4$. The old SF classic _[Stand on Zanzibar](!Wikipedia)_ drew its name from the observation that the 7 billion people alive in 2010 would fit in Zanzibar only if they stood shoulder to shoulder - spread them out, and multiply that area by ~18...) This raises an issue that affects all 3: how much can the Death Note control? Can it move victims to arbitrary points in, say, Siberia? Or is it limited to within driving distance? etc. Any of those issues could shrink the 37 bits by a great deal.
2. cause of death
The [International Classification of Diseases](http://www.who.int/classifications/icd/en/index.html) lists upwards of 20,000 diseases, and we can imagine thousands of possible accidental or deliberate deaths. But what matters is what gets communicated: if there are 500 distinct brain cancers but the death is only reported as 'brain cancer', the 500 count as 1 for our purposes. But we'll be generous and go with 20,000 for reported diseases plus accidents, which is $\log_2(20000) \approx 14.3$ bits.
3. action prior to death
Actions prior to death overlap with accidental causes; here the series doesn't help us. Light's early experiments culminating in the "L, do you know death gods love apples?" message seem to imply that actions are very limited in entropy as each word took a death (assuming the ordinary English vocabulary of 50,000 words, 16 bits), but other plot events imply that humans can undertake long complex plans on the orders of Death Notes (like Mikami bringing the fake Death Note to the final confrontation with Near). Actions before death could be reported in great detail, or they could be hidden under official secrecy like the aforementioned death-gods message (Light being uniquely privileged in learning it succeeded, as part of L testing him). I can't begin to guess how many distinct narratives would survive transmission or what limits the Note would set. We must leave this one undefined: it's almost surely more than 10 bits, but how many?
Summing, we get $\lt33 + \lt19 + 17 + \lt37 + 14 + {?} = 120{?}$ bits per death.
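The individual per-trait estimates can be reproduced in a few lines (a sketch using the same counts as above; the ill-defined 'actions' term is omitted):

```python
import math

# Upper bounds on the bits a single death could convey through each trait,
# using the counts estimated above.
traits = {
    "who (one of ~7 billion people)":             7_000_000_000,
    "when (to the minute, within one year)":      60 * 24 * 365,
    "where (the land-area figure used above)":    148_940_000_000,
    "apparent cause (reported disease/accident)": 20_000,
}

for trait, n in traits.items():
    print(f"{trait:<45} ≈ {math.log2(n):4.1f} bits")
```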
## "Bayesian Jurisprudence"
[E.T. Jaynes](!Wikipedia) in his posthumous [_Probability Theory: The Logic of Science_](http://omega.albany.edu:8008/JaynesBook.html) (on [Bayesian statistics](!Wikipedia)) includes a chapter 5 on ["Queer Uses For Probability Theory"](http://omega.albany.edu:8008/ETJ-PS/cc5d.ps), discussing such topics as ESP; miracles; heuristics & [biases](!Wikipedia "Cognitive bias"); how visual perception is theory-laden; philosophy of science with regard to Newtonian mechanics and the famed [discovery of Neptune](!Wikipedia); horse-racing & weather forecasting; and finally - section 5.8, "Bayesian jurisprudence". Jaynes's analysis is somewhat similar in spirit to my above analysis, although mine is not explicitly Bayesian except perhaps in the discussion of gender as eliminating one necessary bit.
The following is an excerpt; see also ["Bayesian Justice"](http://lesswrong.com/r/discussion/lw/6u0/bayesian_justice/).
> It is interesting to apply probability theory in various situations in which we can't always reduce it to numbers very well, but still it shows automatically what kind of information would be relevant to help us do plausible reasoning. Suppose someone in New York City has committed a murder, and you don't know at first who it is, but you know that there are 10 million people in New York City. On the basis of no knowledge but this, $e(\text{Guilty}|X ) = -70 db$ is the plausibility that any particular person is the guilty one.
>
> How much positive evidence for guilt is necessary before we decide that some man should be put away? Perhaps +40 _db_, although your reaction may be that this is not safe enough, and the number ought to be higher. If we raise this number we give increased protection to the innocent, but at the cost of making it more difficult to convict the guilty; and at some point the interests of society as a whole cannot be ignored.
>
> For example, if 1000 guilty men are set free, we know from only too much experience that 200 or 300 of them will proceed immediately to inflict still more crimes upon society, and their escaping justice will encourage 100 more to take up crime. So it is clear that the damage to society as a whole caused by allowing 1000 guilty men to go free, is far greater than that caused by falsely convicting one innocent man.
>
> If you have an emotional reaction against this statement, I ask you to think: if you were a judge, would you rather face one man whom you had convicted falsely; or 100 victims of crimes that you could have prevented? Setting the threshold at +40 _db_ will mean, crudely, that on the average not more than one conviction in 10,000 will be in error; a judge who required juries to follow this rule would probably not make one false conviction in a working lifetime on the bench.
>
> In any event, if we took +40 db starting out from -70 db, this means that in order to ensure a conviction you would have to produce about 110 db of evidence for the guilt of this particular person. Suppose now we learn that this person had a motive. What does that do to the plausibility for his guilt? Probability theory says
>
> $e(\text{Guilty}|\text{Motive}) = e(\text{Guilty}|X) + 10 \log_{10} \frac{P(\text{Motive}|\text{Guilty})}{P(\text{Motive}|\text{Not Guilty})}$ (5-38)
>
> $\simeq -70 - 10 \log_{10} P(\text{Motive}|\text{Not Guilty})$
>
> since $P(\text{Motive}|\text{Guilty}) \simeq 1$, i.e. we consider it quite unlikely that the crime had no motive at all. Thus, the [importance] of learning that the person had a motive depends almost entirely on the probability $P(\text{Motive}|\text{Not Guilty})$ that an innocent person would also have a motive.
>
> This evidently agrees with our common sense, if we ponder it for a moment. If the deceased were kind and loved by all, hardly anyone would have a motive to do him in. Learning that, nevertheless, our suspect *did* have a motive, would then be very [important] information. If the victim had been an unsavory character, who took great delight in all sorts of foul deeds, then a great many people would have a motive, and learning that our suspect was one of them is not so [important]. The point of this is that we don't know what to make of the information that our suspect had a motive, unless we also know something about the character of the deceased. But how many members of juries would realize that, unless it was pointed out to them?
>
> Suppose that a very enlightened judge, with powers not given to judges under present law, had perceived this fact and, when testimony about the motive was introduced, he directed his assistants to determine for the jury the *number* of people in New York City who had a motive. If this number is $N_m$ then
>
> $P(\text{Motive}|\text{Not Guilty}) = \frac{N_m - 1}{(\text{Number of people in New York}) - 1} \simeq 10^{-7} (N_m - 1)$
>
> and equation (5-38) reduces, for all practical purposes, to
>
> $e(\text{Guilty}|\text{Motive}) \simeq -10 \log(N_m - 1)$ (5-39)
>
> You see that the population of New York has canceled out of the equation; as soon as we know the number of people who had a motive, then it doesn't matter any more how large the city was. Note that (5-39) continues to say the right thing even when $N_m$ is only 1 or 2.
>
> You can go on this way for a long time, and we think you will find it both enlightening and entertaining to do so. For example, we now learn that the suspect was seen near the scene of the crime shortly before. From Bayes' theorem, the [importance] of this depends almost entirely on how many innocent persons were also in the vicinity. If you have ever been told not to trust Bayes' theorem, you should follow a few examples like this a good deal further, and see how infallibly it tells you what information would be relevant, what irrelevant, in plausible reasoning.^["Note that in these cases we are trying to decide, from scraps of incomplete information, on the truth of an Aristotelian proposition; whether the defendant did or did not commit some well-defined action. This is the situation, an issue of fact, for which probability theory as logic is designed. But there are other legal situations quite different; for example, in a medical malpractice suit it may be that all parties are agreed on the facts as to what the defendant actually did; the issue is whether he did or did not exercise reasonable judgment. Since there is no official, precise definition of 'reasonable judgment', the issue is not the truth of an Aristotelian proposition (however, if it were established that he willfully violated one of our Chapter 1 desiderata of rationality, we think that most juries would convict him). It has been claimed that probability theory is basically inapplicable to such situations, and we are concerned with the partial truth of a non-Aristotelian proposition. We suggest, however, that in such cases we are not concerned with an issue of truth at all; rather, what is wanted is a value judgment. We shall return to this topic later (Chapters 13, 18)."]
>
> In recent years there has grown up a considerable literature on Bayesian jurisprudence; for a review with many references, see Vignaux and Robertson (1996) [This is apparently [_Interpreting Evidence: Evaluating Forensic Science in the Courtroom_](http://www.amazon.com/Interpreting-Evidence-Evaluating-Forensic-Courtroom/dp/0471960268/) --Editor].
>
> Even in situations where we would be quite unable to say that numerical values should be used, Bayes' theorem still reproduces qualitatively just what your common sense (after perhaps some meditation) tells you. This is the fact that George Polya demonstrated in such exhaustive detail that the present writer was convinced that the connection must be more than qualitative.
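For readers who want to play with Jaynes's decibel bookkeeping themselves, here is a minimal sketch of equations (5-38) and (5-39), using the 10-million-person New York figure from the excerpt:

```python
import math

def evidence_db(p: float) -> float:
    """Jaynes's evidence measure: 10 * log10 of the odds for a proposition with probability p."""
    return 10 * math.log10(p / (1 - p))

population = 10_000_000
prior = evidence_db(1 / population)  # ≈ -70 db that any given New Yorker is the guilty one

def update_on_motive(prior_db: float, n_with_motive: int, population: int) -> float:
    """Equation (5-38): add 10*log10 of the likelihood ratio P(Motive|Guilty)/P(Motive|Not Guilty),
    taking P(Motive|Guilty) ≈ 1."""
    p_motive_given_innocent = (n_with_motive - 1) / (population - 1)
    return prior_db + 10 * math.log10(1 / p_motive_given_innocent)

print(round(prior, 1))                                    # -70.0
print(round(update_on_motive(prior, 11, population), 1))  # ≈ -10.0, matching (5-39): -10*log10(N_m - 1) for N_m = 11
```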