Part of my reason for writing that we’re probably about as worried as we should be about climate change was to learn from my readers’ best objections. To be honest, I am somewhat disappointed by the effort so far. dreeves argued in favor of a carbon tax, but the median American and I basically agree with him already. No one besides shakeddown wrote anything that would challenge the “close to the median on the non-worry side” position, and I haven’t seen much evidence for his two assertions:
- That the economic cost of climate change action is small.
- That the chance of mass extinction due to climate change is nontrivial.
Left-leaning CNBC quoted the economic cost of fighting climate change in the US as 2-4% of GDP. That could well be worth every penny, but it’s not a small amount. It also suggests that we might start seeing diminishing returns on the next billion dollars of effort.
On the other hand, there’s a looming disaster with a decent chance of wiping out humanity that we’re currently dedicating 0.00001% of world GDP to addressing. That’s 3% of what Americans alone spend on Halloween costumes for their pets. If you want to direct your marginal dollars towards saving humankind from annihilation, rather than towards saving dogkind from seasonal unfashionableness, I would encourage you to join me in donating to artificial intelligence alignment research.
The last time we raised money for a cause I believed in, I wrote it up myself. But there are so many great resources explaining AI alignment that I don’t want to get in the way.
If you don’t have all day, you can read a blog post with stick figure drawings, or a blog post with childhood photos. If you have all day, you can read the book. If you have just 14 minutes and also want to see a person who seems physiologically incapable of laughter try to tell a couple of jokes, you can watch this TED talk. If you got here from SlateStarCodex and only want to read stuff by Scott, you can read any of the dozens of posts by Scott.
I do hope you take the time to follow the above links, so I’ll limit the discussion here to the only thing I know in life: opinion distribution curves.
An important characteristic of the climate change opinion distribution curve is that the smarter opinions are closer to the middle than to the extremes. It’s not that there aren’t stupid people holding opinions close to the median for stupid reasons; it’s that there isn’t a lot of intelligent discussion around the tails.
In contrast, the opinion distribution curve on AI looks like this:
As a society, and as a species, the sensible direction for our collective curve to move is toward being a hell of a lot more worried about AI.
Fortunately, this seems to be happening. There’s a solid trend of smart people like Elon Musk, Bill Gates, Stephen Hawking, and a long list of computer scientists from Alan Turing to Stuart Russell who have become a lot more worried about superintelligent AI immediately after giving the subject serious thought. It’s time to join them.
A smart thing to do if we’re worried about AI is to donate money to AI alignment research right now. We don’t know how far away superintelligent AI is, and we will probably keep thinking it’s decades away until it’s only a year or two out; by then it will be quite late. Also, scientific research is something that builds on itself over time: a problem that takes scientists 50 years to solve usually can’t be solved in 5 years with a tenfold budget.
There’s a good reason for Americans, in particular, to donate money before the end of 2017. The new tax plan raises the standard deduction to $12k/$24k and eliminates most itemized deductions. This means that going forward you won’t get to deduct the first several thousand dollars of charity donations, but you still can in 2017. Whatever amount you were going to donate in 2018, it makes sense to donate it now and claim the deduction while you can.
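To see why timing matters, here is a minimal sketch of the logic with hypothetical numbers (a single filer, an assumed 25% marginal rate, and $7,000 of other itemized deductions; your own figures will differ):

```python
def tax_savings(donation, other_itemized, standard_deduction, marginal_rate):
    """Tax saved by a donation, given that you take the larger of the
    standard deduction or your total itemized deductions."""
    with_donation = max(standard_deduction, other_itemized + donation)
    without_donation = max(standard_deduction, other_itemized)
    return (with_donation - without_donation) * marginal_rate

# 2017: old $6,350 standard deduction -- itemizing already beats it,
# so every donated dollar reduces taxable income.
print(tax_savings(5000, 7000, 6350, 0.25))   # 1250.0

# 2018: new $12,000 standard deduction -- the same $5,000 donation
# doesn't push itemized deductions past it, so it saves nothing.
print(tax_savings(5000, 7000, 12000, 0.25))  # 0.0
```

The point is not the exact dollar figures but the threshold effect: once the standard deduction exceeds your itemized total, marginal donations stop being deductible.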
A non-suicidal civilization should be dealing with existential risk collectively, for example with a government-funded “AI safety Manhattan project” of our best scientists. We’re not there yet, and in the meantime, private donations by smart people need to carry the load. To get there it’s also important to spread the word, which is why I’m blogging about matching funds publicly instead of giving anonymously.
I am planning to donate to the Machine Intelligence Research Institute fundraiser because I’m reasonably certain that MIRI is doing important work on AI alignment. However, I’m not as certain that they’re the absolute best in the field. I encourage my readers to look at other organizations like the Future of Life Institute, the Center for Human-Compatible AI, the Berkeley Existential Risk Initiative, the Future of Humanity Institute at Oxford, and the Effective Altruism Far Future Fund. In my donation to MIRI, I will match the amount donated by my readers to any legitimate organization that deals with existential risk, up to 0.3 Bitcoins (currently $5,150).
I will go ahead with my donation on December 24th, so in the next two weeks please consider giving some money to ensure that humanity survives beyond our generation. Email me your donation receipt when you do, and I will happily give you credit on this blog. I will include or exclude according to your wishes any of the following information: your name, the amount donated, the organization you chose, and the reason for your donation.
Good Solstice, Merry Christmas, Chag Hanukkah Sameach, Joyous Kwanzaa, and Happy Far Future to us all!
Donation update – my readers have donated over $35,000 and I matched over $5,000 of my own to MIRI!
Below are the names, amounts, and explanations from all the donors who wanted those details made public. Some of us also made MIRI’s list of top contributors.
James Landis – 1 BTC. “Now is a really good time to donate Bitcoin to charity – it harnesses the greed of speculation and counterbalances all the unethical uses of the currency so far (CryptoLocker being among the worst). It’s a great way to capture all of the capital gains without any of the tax penalty, too!”
Clark Gaebel – $12,000. “For a long time now (well before hearing about the AI-risk folk), I’ve considered unfriendly AI to be inevitable without intervention. As a fun exercise, spend 30 quiet seconds thinking of a way to design a super-intelligence which makes as many paperclips as possible, and isn’t dangerous. It’s hard!
Designing a general optimizer is hard. Once upon a time, I thought deep learning wasn’t good enough to get there. Complicated deep neural networks pattern-matched in my head to ‘overfitting’. But when AlphaZero pwned Stockfish, I updated. It didn’t just beat humans at chess. It beat the best humans playing with the assistance of a computer.”
Kevin Fischer – $5,000. “I’ve never donated to MIRI or any other non-profit in the AI alignment field before, but your article convinced me to do so this year.”
I have also received donation receipts from Triinu, Cliff, Matthijs, Keegan, and Eric. If any of the above want your details published, let me know.
If I forgot someone who donated, let me know ASAP. And if you were partly inspired by my blog to donate, you can email me your donation receipt and I’ll add you to the list as well.
Thank you all!