Frequently Asked Questions about the NUKEMAP

This document is a work in progress; please excuse the poor formatting for the moment, and the typos that have not yet been corrected!

NUKEMAP: General
Who created the NUKEMAP?
The original NUKEMAP was created in February 2012 by me, Alex Wellerstein, a historian of nuclear weapons who works at the American Institute of Physics in College Park, Maryland (just outside of Washington, DC). I have a B.A. in History from UC Berkeley, a Ph.D. in History of Science from Harvard University, and I am finishing a book on the history of nuclear secrecy in the United States from the Manhattan Project through the War on Terror. Note, again, that I am a historian, not a physicist — people seem to sometimes get confused on this because of where I work. You can read more about my research on my blog, Restricted Data: The Nuclear Secrecy Blog.

In July 2013, I unveiled NUKEMAP2 and NUKEMAP3D.
How was the NUKEMAP created?
The original NUKEMAP and NUKEMAP2 are both Google Maps "mashups." This means that they use publicly-available code to modify the way that Google Maps data is displayed (this is the "Google Maps API") along with a custom-built Javascript model to show various nuclear weapons effects. In simpler terms, this means that the NUKEMAP is code that can work with Google Maps technology to show you what happens when a bomb goes off. NUKEMAP2 is essentially the same thing as the original NUKEMAP except the nuclear effects information is based on much more sophisticated coding and models. Information about the models is below.

NUKEMAP3D uses the same models, but uses the Google Earth API to display these in a 3D environment. This allows the visualization of 3D mushroom clouds, for example, by importing cloud models and manipulating them within the browser environment.

All of the coding, design, and adaptation of the old Cold War models to modern Javascript was done by Alex Wellerstein (me). The population density dataset was graciously purchased for this use by the Center for History of Physics at the American Institute of Physics, and AIP in general needs to be credited for supporting the NUKEMAP activity.
Why was the NUKEMAP created?
We live in a world where nuclear weapons issues are on the front pages of our newspapers on a regular basis, yet most people still have a very poor sense of what an exploding nuclear weapon can actually do. Some people think nuclear weapons destroy everything in the world all at once; some people think they are not very different from conventional bombs. The reality is somewhere in between: nuclear weapons can cause immense destruction and huge losses of life, but the effects are still comprehensible on a human scale.

The NUKEMAP is aimed at helping people visualize nuclear weapons on terms they can make sense of — helping them to get a sense of the scale of the bombs. By allowing people to use arbitrarily picked geographical locations, I hope that people will come to understand what a nuclear weapon would do to places they are familiar with, and how the different sizes of nuclear weapons change the results.

There are many different political interpretations one can legitimately take away from such results. The NUKEMAP is not intended to carry a single, simple political "message."
Could a terrorist, rogue state, or other nuclear power use this for nefarious purposes?
Nuclear states have people whose job it is to do the kinds of calculations that the NUKEMAP does, but they probably use better models that are more specific to their particular targets and weaponry. The NUKEMAP would not tell such people anything they didn't already know.

As for terrorists: if we get to the point where a terrorist group is asking, "where should I set off the nuclear weapon that I have?" then we've already gone past the point of no return. There's no way to avert a catastrophe at that point. No terrorist is going to be surprised that nuclear weapons do a lot of damage. Similarly, it isn't exactly hard to figure out what the most attractive targets would be even without such a map (the most populous or politically important areas of a target country). So I don't really consider the NUKEMAP to be giving such people anything new. The reason terrorists don't currently have nuclear weapons (so far as we know) has nothing to do with them not being aware that nuclear weapons are impressive and devastating.

All of the effects models used by the NUKEMAP are unclassified. There is no secret information here. What the NUKEMAP does is make the models easier to visualize. I've had a hard time seeing any harm in that. A nuclear effects calculator is not a nuclear weapon. It seems like an obvious statement, but people seem prone to mixing these up.
I've found a bug or have a suggestion!
Send me an e-mail at wellerstein@gmail.com. I try to reply to all of them, eventually! If you are seeing buggy behavior, a detailed description of how you triggered it (and a screenshot if possible!) would be great. Also let me know what kind of computer (e.g. your operating system) and web browser you are using.
Known issues:
NUKEMAP: Technology
Technically, how do the NUKEMAP and NUKEMAP3D work?
NUKEMAP and NUKEMAP3D are "mash-ups." This means they take code written by others — in this case Google, who created the Google Maps API and the Google Earth API — and use it for somewhat different purposes than it was intended.

For NUKEMAP, after the user specifies the detonation information, it calls upon a nuclear effects library written in Javascript. This library outputs distances for various effects of the bomb. These distances are then translated into coordinates that the Google Maps API can understand (either circles of fixed radii or more complicated fallout polygons), and then displayed through the Google Maps interface.
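To give a concrete sense of that last step, here is a minimal sketch (illustrative only, not the actual NUKEMAP source; the variable names and radii are made up) of how effect distances can be drawn as circles with the Google Maps API:

```javascript
// Illustrative sketch only. Assumes the Google Maps JavaScript API is loaded
// and `map` is an existing map object. `effectRadii` stands in for the output
// of the effects library (distances in meters); the values are placeholders.
var effectRadii = [
  { label: "fireball",         meters: 200,  color: "#ff0000" },
  { label: "5 psi blast",      meters: 1800, color: "#ff8800" },
  { label: "3rd-degree burns", meters: 3200, color: "#ffcc00" }
];

function drawEffectRings(map, groundZero, radii) {
  return radii.map(function (effect) {
    // google.maps.Circle takes a center and a radius in meters, which is
    // why the effects library's distances map onto it directly.
    return new google.maps.Circle({
      map: map,
      center: groundZero,  // e.g. new google.maps.LatLng(40.71, -74.00)
      radius: effect.meters,
      strokeColor: effect.color,
      fillColor: effect.color,
      fillOpacity: 0.25
    });
  });
}
```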

For NUKEMAP3D, a similar thing occurs, except that instead of simply displaying the circles or shapes in the Google Earth interface, the effects information is used to manipulate fixed 3D models (the mushroom cloud is composed of four separate models, for example: the head, the stem, the base, and the shadow) so that they look like an appropriately sized mushroom cloud. The animated version is the same process, except the models are rescaled according to the change in the cloud over time.
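As a rough illustration of the animated rescaling (the Google Earth API has since been retired, so the helper functions below are hypothetical stand-ins, not the actual NUKEMAP3D code):

```javascript
// Hypothetical sketch of the animated-cloud idea: each frame, compute the
// cloud's current dimensions and rescale the fixed 3D models to match.
// cloudRadiusAt/cloudTopAt stand in for the curve-fit cloud-rise model, and
// setModelScale stands in for whatever call scales a loaded model in the viewer.
var BASE_RADIUS_M = 1000;  // radius the unscaled head model represents
var BASE_HEIGHT_M = 1000;  // height the unscaled stem model represents

function animateCloud(startTimeMs, stabilizeSeconds) {
  function frame() {
    var t = (Date.now() - startTimeMs) / 1000;         // seconds since detonation
    var headScale = cloudRadiusAt(t) / BASE_RADIUS_M;  // hypothetical fit
    var stemScale = cloudTopAt(t) / BASE_HEIGHT_M;     // hypothetical fit
    setModelScale("head", headScale, headScale, headScale);
    setModelScale("stem", stemScale / 4, stemScale / 4, stemScale);
    if (t < stabilizeSeconds) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```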

The casualty estimator uses an ambient population density database to query the number of people who are within various distances of ground zero, and applies a model of casualties to those raw numbers. See more information on this below.

The "humanitarian impact" model works by using the Google Places API to search out tagged places near the ground zero location. (This is the same algorithm Google Maps uses whenever you ask how many restaurants are near where you happen to be.) Its accuracy is 100% tied to how good Google's information is. Which is to say... it's not perfect.
What determines the default city?
The NUKEMAP attempts to guess your location from Google's estimate of where your IP address is (often near where you are, but rarely exact). It then picks the center of the largest city near that location. So ideally it will pick a fairly large city near where you live, or the large city in which you live. Sometimes it guesses wrong. And if it can't guess at all, because Google has no estimate for your IP address (which is often the case), it just chooses New York City, because New York is "traditionally" the city that always gets nuked first.
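In code, the fallback logic looks roughly like this (a minimal sketch with made-up function names and data, not the actual NUKEMAP source):

```javascript
// Sketch of the default-city logic described above; the city list and helper
// names are invented for illustration.
var FALLBACK = { name: "New York City", lat: 40.713, lng: -74.006 };

function haversineKm(a, b) {
  // Great-circle distance between two {lat, lng} points, in kilometers.
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(b.lat - a.lat), dLng = toRad(b.lng - a.lng);
  var h = Math.sin(dLat / 2) ** 2 +
          Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));  // Earth radius ~6371 km
}

function pickDefaultCity(ipEstimate, bigCities) {
  if (!ipEstimate) return FALLBACK;  // no geolocation guess: nuke New York
  return bigCities.reduce(function (best, city) {
    return haversineKm(ipEstimate, city) < haversineKm(ipEstimate, best) ? city : best;
  }, FALLBACK);
}
```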
NUKEMAP: Effects models
How does the prompt effects model work?
Most of the prompt effects equations come from E. Royce Fletcher, Ray W. Albright, Robert F.D. Perret, Mary E. Franklin, I. Gerald Bowen, and Clayton S. White, "NUCLEAR BOMB EFFECTS COMPUTER (Including Slide-rule Design and Curve Fits for Weapons Effects)," (CEX-62.2) U.S. Atomic Energy Commission Civil Effects Test Operations, February 1963. This report explains the curve-fitting equations used to develop the famous little "nuclear bomb effects computer" that came in the back of Samuel Glasstone's The Effects of Nuclear Weapons, 1964 edition. Most of these were simply imported into Javascript, though a lot of tweaking had to be done (and a few typos discovered).

A few of the equations are taken from my own curve fits of data in Samuel Glasstone and Philip J. Dolan, The Effects of Nuclear Weapons, 1977 edition. In particular, the original curves for the calories per square centimeter needed for various burns were significantly modified in the 1977 edition, so my equations reflect that.
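To give a sense of how such curve fits get used: blast-effect distances scale roughly with the cube root of yield. Here is a minimal sketch of that scaling (the 1 kt reference distance below is an approximate placeholder for illustration, not a value taken from the reports above):

```javascript
// Illustration of the cube-root scaling that underlies fits like these (not
// the actual NUKEMAP fit). Distances for a given blast overpressure scale
// roughly as yield^(1/3) from a 1 kt reference distance.
var REF_5PSI_KM_1KT = 0.7;  // ~5 psi radius for 1 kt (approximate placeholder)

function fivePsiRadiusKm(yieldKt) {
  return REF_5PSI_KM_1KT * Math.cbrt(yieldKt);
}

// Example: a 100 kt airburst puts 5 psi at roughly 0.7 * 4.64 ≈ 3.2 km.
console.log(fivePsiRadiusKm(100).toFixed(1) + " km");
```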
What is the difference between an airburst and a surface burst in this context?
A surface burst is defined as a nuclear explosion which is set off between 0 and 100 feet from ground level. It also indicates that the fireball has itself touched the ground, which drastically increases the residual radiation (fallout).

An airburst here is defined as a weapon detonated at the height above the ground that maximizes a given parameter. So in some sense the various psi ratings are a little misleading, since they are technically for bombs detonated at different heights. However, the differences are not very large in most cases, so for the moment I'm not worrying much about it. In the long run I'd like to let users set fixed detonation altitudes, but that isn't how the equations derived from the above sources were meant to be used, so it will require a bit of work to adapt them in this way.
How does the fallout model work? How should the contours be interpreted?
When a nuclear weapon explodes, it produces prompt (immediate) radiation, but it also produces radiation that is released over the shorter and longer term. The "short-term" radiation, defined here as the radioactive residues of the explosion that remain active for the following weeks or months (as opposed to years) and that "fall out" of the mushroom cloud, is known as "fallout."

It is very difficult to accurately model radioactive fallout. There are many relevant variables: the height of the detonation, the ratio of fission to fusion reactions in the bomb (most thermonuclear weapons derive at least 50% of their yield from fission reactions), the type of terrain the explosion is detonated on or over (e.g. desert, coral reefs, inhabited cities), and, importantly, the weather conditions, including the wind shear at the many different altitudes at which the cloud exists. (Mushroom clouds can rise anywhere from about 10,000 feet to over 100,000 feet, and are exposed to different wind conditions along that distance.)

There are two basic approaches to modeling fallout. One is to try to develop a model based on weather conditions. These are complex and computationally intensive, but yield results that match up very well with past testing results. The other is what is known as a "scaling model," which presents graphs that attempt to give a general idea of the approximate distances of various levels of radioactive exposure, but makes no attempt to realistically model specific wind conditions.

The NUKEMAP fallout model is a scaling model. This is not because scaling models are necessarily the best, but there are things in their favor. For one, they are computationally very easy: one does not need detailed meteorological data about the location of the detonation, so the model can be generalized to many times and places very easily. Their falseness is also quite apparent: people are not as likely to mistake their contours for entirely realistic estimates of what would actually happen, but will understand them to be rough indications. Lastly, there are good pre-existing scaling models available for use, whereas detailed weather models are generally harder to get ahold of, and the prospects of them running quickly even in modern web browsers are not as clear. (If someone has a more complicated model that they'd like to share with me, I'd love to hear from you.)

The scaling model used in the NUKEMAP is based on the work of Carl F. Miller, who published extensively on fallout in the 1960s based on information derived from American atmospheric nuclear testing (which ended in 1963 with the signing of the Limited Test Ban Treaty). His "Simplified Fallout Scaling System" (SFSS) was first outlined in the following report: Carl F. Miller, "Fallout and Radiological Countermeasures, Volume 1," Stanford Research Institute Project No. IM-4021 (January 1963). The copy available on the web has been scanned in from a hard-to-read microfilm copy. A clearer version of the relevant equations is available here, which I photocopied from an original copy of the report on file at the National Library of Medicine. As you can see, even the original is a bit hard to read. Another version was reprinted in Werner Grune, et al., "Evaluation of Fallout Contamination of Water Supplies," Final Technical Report, Contract No. OCD-PS-64-62, OCD Subtask 3131B (1 October 1963-15 May 1965), Office of Civil Defense, Department of Defense, Washington, D.C., Part IV, "Summary and Analysis of the Miller Fallout Model." This version is much easier to read, gives some corrections to Miller's original model, and explains it slightly differently, which was helpful as well. My implementation of the SFSS in Javascript and Google Maps was developed by tacking back and forth between these multiple sources.

The Miller fallout model works by assuming that the fallout plume is a result of both the cloud and the stem. Basically, the elongated shape is sort of like a mushroom cloud put on its side and smeared out. Here is Miller's drawing of the final shape of the plume for a 1 Mt surface burst with a wind speed of 15 mph and 100% fission yield:



At first glance it doesn't look much like the fallout contours one might be used to, like this one of the Castle BRAVO detonation from 1954:



But part of that is because the ones people are used to have often been "cleaned up" and modified a bit to look better. This is another version of the BRAVO contours, created by the RAND Corp. from the same data, and you can see the resemblance to the Miller model in terms of the separation of stem and cloud fallout, leading to a large downwind "hot spot":



Separately, it is worth noting that real fallout evolves over time. The scaling models are known as H+1 models, which is to say that the fallout has been "normalized" to what it would look like 1 hour after detonation, as if the plume had reached its final size within that hour. This is quite standard in the fallout literature, despite the fact that for large detonations, the fallout's arrival time downwind is much longer than 1 hour. Here, for example, is the evolution of the BRAVO fallout plume over 18 hours:



So of what value is the H+1 hour model? It lines up not too poorly with the final total-dose contours for the fallout plume, and as such could be taken as an idealized understanding of what your average rad dose per hour would be downwind of the blast. It is meant, by Miller and by me, to give an indication of the rough size of the contaminated area after a nuclear explosion, which has both pedagogical and planning value, even if it is a little confusing in terms of the movement of the actual cloud.
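One standard piece of the H+1 convention is worth illustrating: dose rates in the fallout literature are usually assumed to decay as t^-1.2 (the basis of the civil-defense "7-10 rule of thumb"). Here is a minimal sketch of converting an H+1 reference dose rate to later times (an approximation from the literature, not NUKEMAP's exact code):

```javascript
// The t^-1.2 decay approximation: dose rate falls off as t^-1.2, with t in
// hours after detonation. Behind the "7-10 rule": a 7-fold increase in time
// gives roughly a 10-fold decrease in dose rate.
function doseRateAt(hPlus1RadsPerHr, hoursAfterBurst) {
  return hPlus1RadsPerHr * Math.pow(hoursAfterBurst, -1.2);
}

// Example: 1000 rads/hr at H+1 decays to roughly 100 rads/hr by H+7.
console.log(doseRateAt(1000, 7).toFixed(0));  // ≈ 97
```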

Note that this model is exclusively for modeling a surface burst, not an airburst. Airbursts do contribute to long-term fallout (e.g. the overall radioactivity in the atmosphere, or the amount of cesium-137 that eventually makes it into human diets at very, very long distances), but, by and large, they contribute only negligibly to short-term local fallout. This appears to be the case even with very large yield airbursts that contain significant fission products. The line between an "airburst" and a "surface burst" in this context is whether the fireball touches the ground, as this pulls significant quantities of heavy particles (e.g. dirt, coral, buildings, people) into the rising fireball, and these heavy particles affect the "falling out" considerably. When the fireball does not touch the ground, it appears to rise high enough and fast enough that the bulk of its fission products do not fall out until much later, when they have lost much of their radioactivity. (Radioactive intensity is inversely related to longevity; this is the import of "half-life" measurements. The more intensely radioactive a given isotope is, the more quickly it decays away. This does not mean that all radioactive hazard dissipates quickly, but the nature of the hazard differs between short-lived, highly energetic isotopes and long-lived, moderately energetic ones. The former are an immediate, acute radiation hazard: they can give you radiation sickness and hurt you within hours or weeks. The latter are a long-term, chronic radiation hazard: they can give you cancer and hurt you over years or decades.)
How does the casualties model work?
The casualties model queries a very large and very fine-grained ambient population database known as the LandScan Global Population 2011. The database was developed by Oak Ridge National Laboratory and is licensed through a company called EastView. Special thanks to the Center for the History of Physics at the American Institute of Physics for purchasing this database for my use. "Ambient population" here means a 24-hour average of people in an area. In many respects this is better than census information, because that usually just measures where people live, as opposed to where they go when they are not at home.

In short, a spatial query is run on the database whenever casualties are requested. The database spits back information about how many people are within several radii of ground zero. This information is then used to generate estimates of fatalities and injuries, according to data contained in a 1973 report by the Defense Civil Preparedness Agency titled DCPA Attack Environment Manual, later reprinted in the 1979 Office of Technology Assessment report, The Effects of Nuclear War:



As you can see, it primarily relies on blast effects (pounds per square inch) as a proxy for calculating injuries and fatalities.
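In code, applying such a table amounts to a weighted sum over the population in each blast band. Here is a minimal sketch (the rates below are approximate figures of the sort in the OTA/DCPA table, not the exact values NUKEMAP uses):

```javascript
// Hedged sketch of applying blast-band casualty rates to ring populations.
// The rates are approximate placeholders (e.g. ~98% fatalities above 12 psi).
var BANDS = [
  { label: ">12 psi", dead: 0.98, injured: 0.02 },
  { label: "5-12 psi", dead: 0.50, injured: 0.40 },
  { label: "2-5 psi",  dead: 0.05, injured: 0.45 },
  { label: "1-2 psi",  dead: 0.00, injured: 0.25 }
];

// `populations` is the ambient population inside each band (innermost first),
// as returned by the spatial query against the LandScan data.
function estimateCasualties(populations) {
  var totals = { fatalities: 0, injuries: 0 };
  BANDS.forEach(function (band, i) {
    var pop = populations[i] || 0;
    totals.fatalities += pop * band.dead;
    totals.injuries   += pop * band.injured;
  });
  return totals;
}

console.log(estimateCasualties([50000, 200000, 400000, 600000]));
// => { fatalities: 169000, injuries: 411000 }
```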

There are limitations to this model. For some yields, especially very low or very high ones, blast effects are less important than thermal or radiation effects. The model also does not take into account the fact that highly dense urban areas have a "shielding" effect against blast: the buildings nearest ground zero bear most of the brunt of the blast. It's also not entirely clear what the OTA based these estimates on.

So the numbers might be too high. They also might be too low. Without taking into account many more variables than the model can deal with, like terrain type, building type, the expected reaction of the bombed populace, and radioactive fallout, it's hard to do anything more than gesture at the numbers of people who would be affected by a nuclear explosion. I'm not trying to say "it's too complicated, so any model is as good as any other." But in choosing a model I went with one that could be implemented relatively straightforwardly given the data I have available, and that was backed by at least one serious source. So I thoroughly encourage you to take these numbers with a grain of salt: they give some indication of how many people live in reasonably close proximity to the selected ground zero. I have seen other official estimates of fatalities and injuries that put the numbers (especially of the injured) much higher than the estimates given by the casualty model here, and I have seen other official estimates of blast effects that would put them lower, depending on the building types. It is not my intention to either overstate or understate the effects.
How does the 'humanitarian impact' model work?
The "humanitarian impact" model works by using the Google Places API to search out tagged places near the ground zero location. (This is the same algorithm Google Maps uses whenever you ask how many restaurants are near where you happen to be.) Its accuracy is 100% tied to how good Google's information is. Which is to say... it's not perfect.

The point of the "humanitarian impact" model is to emphasize some of the collateral impacts of a nuclear explosion, and to indicate the ways in which support services (e.g. hospitals and fire stations) would be themselves impacted by a nuclear attack.
How does the mushroom cloud model work?
The mushroom cloud model dynamics come primarily from Carl F. Miller, "Fallout and Radiological Countermeasures, Volume 1," Stanford Research Institute Project No. IM-4021, January 1963. Miller was, in his day, considered one of the premier experts on modeling mushroom cloud behavior. Some of the information also comes from curve fitting various figures (in particular the rate of cloud rise) in Samuel Glasstone and Philip J. Dolan, The Effects of Nuclear Weapons, 1977 edition.

For the animated cloud, given the limitations of Google Earth's API (you can move, rotate, and scale models but not otherwise manipulate them), I had to do a little bit of fudging to make things look right aesthetically. But many of the parameters, such as the rate of rise, the changing size of the cloud head, and the final size of the cloud, are taken directly from models derived from nuclear test data.
What kinds of statistics are kept about usage of the NUKEMAP?
Statistics are kept about every detonation, unless the "do not log anonymous statistics" checkbox is checked. All latitudes and longitudes are rounded to three decimal digits, because I don't really care what block you detonated it on, just the general area. Individual IP addresses are not logged.
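The coordinate coarsening is simple enough to show directly (a trivial sketch of the idea, not the actual logging code):

```javascript
// Round a coordinate to three decimal places (~100 m of precision), so only
// the general area of a detonation is retained in the logs.
function anonymizeCoord(value) {
  return Math.round(value * 1000) / 1000;
}

console.log(anonymizeCoord(40.712776), anonymizeCoord(-74.005974)); // 40.713 -74.006
```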

Why do I record this information? It's because I'm interested in broad usage patterns. You can see a write-up I did of past usage patterns here to get an idea of what I'm doing with them. I want to know whether people nuke themselves or other countries, and what types of bombs they use, and what kinds of scenarios they imagine. No government is going to knock on your door late at night because you used the NUKEMAP; they don't have access to the data, and even if they did, it wouldn't tell them much.

Separately, I use Google Analytics to keep track of web and browser statistics in general. This information is not correlated with the actual "detonations."

Have more questions? Send them to wellerstein@gmail.com and I'll try to answer them.