Arms Control and Nanotechnology
The present speaker is Gary Marchant, and his talk is about international treaties that might, hypothetically, regulate nanotechnology. It's terribly, terribly interesting, even though the abstract (see here) makes it sound pretty dry. He spoke a great deal about the failings of efforts to create international regimes to regulate nuclear proliferation (A. Q. Khan, anyone?), biological weapons, and chemical weapons. He talked about the difficulty of verifying treaty compliance. And he makes an interesting comparison to the failed efforts to create a global ban on reproductive cloning, a failure he chalks up to disagreements about how broad the ban should be (i.e., should it include therapeutic cloning, too?).
Maybe the most interesting point he makes is about the spreading adoption of the "precautionary principle," a term that means different things to different people. "The precautionary principle," he said, "has spread very quickly around the world -- it's spread much more quickly than nanotechnology has." Marchant mentions that it has been adopted in many European laws, that it is "being advanced by some scholars as a principle that is now customary in international law," and that there is a growing sense that it should automatically be applied to new fields -- like, say, nanotechnology. Marchant disapproves of the precautionary principle, and quotes from a 1999 letter to Nature (which you can read here) that argues that "the precautionary principle will leave us paralyzed."
One side note: Among the people who attended part of this conference was Ron Bailey, the Reason magazine writer. Bailey and I had a very nice talk yesterday, and he told me that he's done a lot of reading lately on the precautionary principle, and found articulations of it that go back to the early decades of the twentieth century. I'm sure Bailey will write something about this soon, maybe even for The New Atlantis.
UPDATE: Alright, I'm heading off to lunch, but I've got much, much more to post this afternoon...
How Might Nanotech Affect Privacy?
Brad Templeton (described below) just finished his talk about privacy in the age of nanotech. The official title of his talk was "Preserving Privacy as Nanosurveillance Arrives" (as you can see from his abstract, here), but on his PowerPoint slide, he called the talk "The Automation of Good and Evil." I'm not going to recount his whole argument here, except to say that most of it was an attempt to respond to the arguments put forward in David Brin's provocative book The Transparent Society, in which Brin argues, essentially, that the only way we can maintain our freedom is to willingly and thoroughly give up our privacy to one another.
Now, Templeton's 30-minute talk is being followed by a 45-minute debate on the subject, in which he is facing off against Robin Hanson, a professor at George Mason University (homepage here). The debate isn't great, frankly, but there have been a few interesting points made. Prof. Hanson, an economist, says that he is best known as "the guy behind terrorism futures" (indeed, he is: see the Wired article here), but that he's not worried about nanoterrorism, or even non-nano terrorism. (His exact words: "I'm not very worried about terrorism.") Prof. Hanson argues that privacy serves to protect lazy employees (who don't want their laziness known) and treacherous spouses (who don't want their secrets known).
UPDATE: The questions from the audience helped enliven the debate somewhat, with questions about nanobots in the brain, and about the reliability of the information obtained through surveillance.
What Does Nano Mean for Economics?
The next speaker is David Friedman, from the Santa Clara University School of Law, speaking on "What Would a Nanotech Economy Look Like" (personal homepage here, professional homepage here, abstract here).
Professor Friedman is trying to answer the basic questions that people have about nano-economics. Will we all be out of work if it's cheap and easy to manufacture every imaginable product? (No.) Aren't patents going to be a problem? (No, because patents aren't the kind of intellectual property we'll be dealing with. Most likely, when we get to the point of advanced nanotechnology, we'll be programming matter -- so we'll have to mostly be concerned with something like copyright, just like in programming software today.)
There are, Prof. Friedman suggests, two options for protecting the world from the dangers of nanotechnology, and we can choose only one of them. One option is to have government defend us (as in national defense today); the other is to rely on private decisions to protect us (as with private defenses against computer viruses today). Prof. Friedman -- as you might guess from his writings on libertarianism and anarcho-capitalism -- prefers the latter.
Prof. Friedman refers the audience to a book he's writing, which is on the Web in draft form: Future Imperfect.
UPDATE: At the end of the conference, I asked Prof. Friedman to explain on camera his argument that something like copyright, rather than patents, is the best model for the sort of intellectual property protections needed in the distant nanotech future. Click the picture below to see his response, in streaming Windows video: