Lampoon wrote a VERY good reply to Mojician's KK on Green Tech:
I hope it is not too late to throw another log on the embers of this discussion, but I ran across a paper that offers an interesting framework for evaluating a policy that may cause severe harm to the public domain. The paper is entitled “The Precautionary Principle” (or “PP” for short), and a PDF is available here: http://www.fooledbyrandomness.com/pp2.pdf
The PP states that if the risk of an action includes severe or unrecoverable global harm (or, as the authors put it, “ruin”), then in the absence of scientific near-certainty of the safety of the action, the burden of proof about absence of harm falls on those proposing the action. The authors compare nuclear energy risks with Genetically Modified Organism (GMO) risks as an illustration. They conclude that nuclear energy risks, if implemented in small quantities, can be localized (how small to be determined by direct analysis, so threats remain local) and subjected to traditional cost-benefit analysis for local decision-making. GMOs, on the other hand, should be subject to the PP because their risk is systemic both to the ecosystem – a GMO might spread uncontrollably and cannot be localized – and to human health – the modification of crops affects everyone. Therefore, the absence of evidence, one way or the other, that an action might cause ecocide shifts the burden onto those proposing the action to demonstrate its safety to a near-certainty.
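For the programmers in the dugout, the rule above can be caricatured in a few lines of Python. To be clear, every function name and threshold below is my own illustration, not anything from the paper – just a sketch of the asymmetry the PP describes between localized and systemic ("ruin") risks:

```python
# Toy sketch of the Precautionary Principle as a decision rule.
# All names and the 0.999 "near-certainty" bar are illustrative
# inventions, not taken from the paper itself.

def pp_verdict(risk_is_systemic: bool, safety_confidence: float,
               near_certainty: float = 0.999) -> str:
    """Return who carries the burden of proof for a proposed action.

    risk_is_systemic: True if failure could be global and unrecoverable
        ("ruin"), e.g. a GMO spreading uncontrollably; False if harm can
        be localized, e.g. a small-scale nuclear installation.
    safety_confidence: the probability of safety the proponents have
        actually demonstrated.
    near_certainty: the (illustrative) bar the PP sets for systemic risks.
    """
    if not risk_is_systemic:
        # Localized risk: ordinary local decision-making applies.
        return "cost-benefit analysis (local decision-making)"
    if safety_confidence >= near_certainty:
        return "proceed: safety shown to near-certainty"
    # Systemic risk without near-certain safety: burden stays on proponents.
    return "burden on proponents: do not proceed"

print(pp_verdict(False, 0.90))   # small-scale nuclear -> cost-benefit
print(pp_verdict(True, 0.95))    # GMO-style systemic risk -> PP blocks it
```

The point of the sketch is the asymmetry: for localized risks, 90% confidence gets you into an ordinary cost-benefit discussion, while for systemic risks even 95% confidence is nowhere near the bar.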
I bring up this paper because it helped me focus my thinking about some of the issues raised above.
That's a cool paper, Lampoon. My own noodlings, below, are a little bit TIC, but please don't take them as disrespect to your own thoughts. You know how Dr. D is ... anyway. The wiki page on 'Global Catastrophic Risk' informs us that we've got a good 1-in-5 chance of ending ourselves before the year 2100 A.D.:
I give Wiki credit for acknowledging that their page is philosophical, not scientific. As does the paper you cite, Lampoon, coming from the Paris School of Philosophy. Personally, Dr. D was a bit disappointed that Global Warming, Grey Goo, and Particle Accelerator Accidents didn't make the top eight. But at his age, he can afford to take these things, um, philosophically.
What is Grey Goo, you may ax? It's a colorful name for ---> self-replicating Von Neumann machines eating the Earth out of house and home. Eh? Why would this be called Grey Goo? Well, there's an easy comment for some intrepid SSI denizen. I'd like to know myself.
A good takeaway here is, "Well, if we want trillion$ to deal with the risk of Global Warming, what else should we have trillions for? And in what priority?" The Wiki page lists the top 17 threats to humanity's existence, for our convenience. They include the 'megatsunami' but omit their own proposal that the universe may be a simulation about to end within 84 years.
Scenes we'd like to see: Donald and Hillary debate the prioritization ... John Kerry and Ben Carson teaming for a film on 'The Inconvenient Truth' that the universe could be a simulation ... finally, a Wiki page getting specific as to what % of our GNP ought to be allocated to planetary life insurance. Maybe 30%? Maybe just double everyone's taxes? It sounds funny, but this is actually the live electrical wire behind the Wobal Glarming debate: how much power should we consolidate in Washington to protect ourselves, and how much dinero should we stash there?
My response is a little TIC, but I do appreciate the link, Lampoon. +1
Personally I haven't heard the Democratic or Republican candidates so much as asked how much of the GNP we should be allocating to self-SELF-defense. Or what research should be prohibited in the name of Fighting Skynet. Or this topic addressed in any way at all.
If there's a 19% chance that we'd be done by 2100, you'd think the exchange would extend beyond the 1-in-19 factor of Global Warming?
Keith was (is?) a Mayor. It would be interesting to hear him (and all you all) frame a platform statement on this issue.
... Dr. D's?
It seems VERY reasonable to divide the basic responses to this into (1) spiritual and (2) non-spiritual. If we're fairly confident that a Creator would intervene to prevent a super-intelligent AI from nuking us, we can use reasonable precaution and go ahead living our lives (including, ahem, researching AI). If we rule that out, we'd better get about the business of deciding how much of our GNP should be allocated to the planet's life insurance :- )
And Dr. D would not be above Tongue-In-Cheek ripostes during televised debates: wouldn't a very cheap form of Planetary Life Insurance be to criminalize all scientific research that could feasibly lead to the planet's demise? What the deuce are we doing researching nanotech and AI if each field has a good 5% chance of ending us within 84 years?!
Ahhhhhhh. :: shrug :: :: winning smile :: The Declaration of Independence tells us that we are endowed by Our Creator with an inalienable right to life. About 95% of you watching tonight have a pretty good idea that He isn't going to let a comet the size of Texas land on the Eiffel Tower next year. ... But for the 5% of my friends who sincerely believe that self-protection is totally up to Humankind, well, sure. I would favor a Congressional Action Committee to propose a budget and a set of laws addressing the issue. Let's do it. It should help us find middle ground on where to put the money.
Like Bill James said, there's still a 1-in-15 chance that the Mariners will go another 39 years without a World Series, or until the megatsunami, whichever comes first. Do you smoke?