I wondered if I might be able to write a little computer programme to explore this, and next thing I knew I was mugging up on the Stefan-Boltzmann law of black body radiation, and wondering how to solve fourth-power equations. I spent most of yesterday piecing together a bit of code which looked at a single square metre of ground on the equator, heated up by sunlight during the day, conducting heat down into the ground, and re-radiating heat back into space. By evening I had figured out how to solve fourth-power equations using a brute force approximation. And I'd gone shopping on the web for the emissivity and albedo of the Earth, and the thermal conductivity and capacitance of clay. And it all worked fine. By evening I was producing graphs showing the variation of surface temperature with rotation rate. Somehow or other, I seemed to have written a bit of code with no bugs in it, for perhaps the first time in my life. I felt like, well, how can I put this? A Master of the Universe.
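For the curious, the heart of it was something like the sketch below: not the code itself, just the general shape of it. Find the temperature at which re-radiation balances the absorbed sunlight, by brute force rather than by taking a fourth root analytically. The constants are the sort of values I went shopping for on the web, not authoritative ones.

```python
# A minimal sketch of the energy balance: walk the temperature upward until
# the Stefan-Boltzmann re-radiation matches the absorbed sunlight.
# The albedo and emissivity figures are assumed, web-shopped values.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0          # solar constant at Earth's distance, W/m^2
ALBEDO = 0.3        # fraction of sunlight reflected (assumed)
EMISSIVITY = 0.95   # long-wave emissivity of the surface (assumed)

def radiated(T):
    """Power radiated to space by one square metre at temperature T (kelvin)."""
    return EMISSIVITY * SIGMA * T**4

def solve_temperature(absorbed, step=0.01):
    """Brute-force solution of emissivity * sigma * T^4 = absorbed:
    step T upward until the radiated power first exceeds the absorbed power."""
    T = 0.0
    while radiated(T) < absorbed:
        T += step
    return T

# Noon on the equator: the full solar constant, less the reflected fraction.
absorbed = S * (1.0 - ALBEDO)
print(round(solve_temperature(absorbed)))  # about 365 kelvin
```

Crude, but for a one-off calculation on a single square metre, the brute force march is plenty fast enough.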
By late evening, some faint doubts had begun to set in. Although the little simulation model looked like it was working fine, and was producing the expected sorts of results, something didn't feel quite right. There was something vaguely perplexing about the numbers I was looking at. I couldn't put my finger on what.
Today I returned to the code, and began to look closely at it. And after a while I realised that what I'd thought was a simple bit of code was written quite wrongly. It was the code that dealt with the conduction of heat into the ground. I thought I'd got it right, but looking again, I realised I'd got it all wrong. But I've fixed it now. The code has improved immeasurably.
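For what it's worth, the conduction part boils down to something like this sketch: a one-dimensional finite-difference diffusion step through layers of clay. Again, this is the shape of the thing rather than the code itself, and the material properties are web-shopped numbers, not gospel.

```python
# One-dimensional heat conduction into the ground, as an explicit
# finite-difference step. All the material constants are assumed values.

K = 1.0        # thermal conductivity of clay, W/m/K (web-shopped)
RHO = 1600.0   # density of clay, kg/m^3 (likewise)
C = 880.0      # specific heat capacity of clay, J/kg/K (likewise)
DZ = 0.05      # thickness of each soil layer, m
DT = 60.0      # time step, s

def conduct(temps):
    """One diffusion step over a list of layer temperatures (surface first,
    deepest last). The two end layers are held fixed in this sketch; in the
    model proper the top one would be driven by sunlight and radiation.
    Stable only while K*DT/(RHO*C*DZ*DZ) stays below 0.5."""
    alpha = K / (RHO * C)        # thermal diffusivity, m^2/s
    r = alpha * DT / (DZ * DZ)   # about 0.017 here, comfortably stable
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + r * (temps[i - 1] - 2.0 * temps[i] + temps[i + 1])
    return new

# A hot surface layer above cool ground: heat creeps downward step by step.
layers = [320.0] + [290.0] * 9
layers = conduct(layers)
```

The whole bug hid inside one update line like that, which is rather the point: it looked right, and the graphs looked right, and it was still wrong.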
At left are my latest results. They show that if the Earth rotated slowly, say once every 28 days as the Moon does, it would have a very cold night and a very hot day. And if it rotated once every hour, its surface temperature would hardly vary at all.
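The experiment behind those graphs can itself be sketched in a few lines: step a single slab of surface through a number of rotations, and measure how much its temperature swings over the final day. This is a toy version with an assumed effective heat capacity for the slab, not the model that produced the graphs, but it shows the same effect.

```python
# A toy day-night cycle: drive a surface slab with half-sine sunlight,
# let it radiate, and record the temperature swing over the last rotation.
# The slab heat capacity is an assumed, illustrative figure.

import math

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0          # solar constant, W/m^2
ALBEDO = 0.3        # assumed
EMISS = 0.95        # assumed
HEAT_CAP = 2.0e6    # effective heat capacity of the slab, J/m^2/K (assumed)

def temperature_swing(day_seconds, steps_per_day=2000, days=20):
    """Run the slab through `days` rotations of length `day_seconds` and
    return the day-night temperature range over the final rotation."""
    T = 280.0                        # starting guess, kelvin
    dt = day_seconds / steps_per_day
    lo, hi = float("inf"), float("-inf")
    for step in range(steps_per_day * days):
        phase = 2.0 * math.pi * step / steps_per_day
        sun = max(0.0, math.sin(phase)) * S * (1.0 - ALBEDO)  # zero at night
        T += dt * (sun - EMISS * SIGMA * T**4) / HEAT_CAP     # explicit Euler step
        if step >= steps_per_day * (days - 1):                # final rotation only
            lo, hi = min(lo, T), max(hi, T)
    return hi - lo

print(temperature_swing(28 * 86400.0))  # Moon-like day: a swing of well over 100 K
print(temperature_swing(3600.0))        # one-hour day: a fraction of a degree
```

A slow spin gives the surface long enough to bake and then to freeze; a fast spin averages the sunlight out before the temperature can follow it.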
Am I confident about my results? Would I like to publish them in a journal somewhere? Well, no. I'm full of all sorts of doubts and questions. I've screwed up one piece of the code already. How many other bits did I screw up too? And how many of my assumptions were justified? And when I went shopping on the internet, did I buy the right numbers for emissivities and albedos and conductivities?
Most of the code that I've been paid to write is quite easy to test. I once used to write the firmware that runs inside keyboards. (Yes, there's a little dedicated microprocessor in your keyboard which just sits there waiting for you to press a key. And that's all it does.) It was pretty easy to test whether my code was working, because if I pressed 'B', then I expected to find that a 'B' got sent down the wire to the computer at the other end. If it didn't, then something was badly wrong.
But when you're writing a bit of code that's looking at the surface temperature of the Earth if it spun on its axis once an hour, or once a month, you really have no idea what's supposed to come out the other end. You write the code because you want to find out. There's sometimes no way of knowing what's supposed to happen. And so you have no idea whether what you've written has something badly wrong with it.
What I wrote yesterday was, in some ways, a little climate simulation model. It was a junior version of the sorts of simulation models which are running on supercomputers in universities, chuntering out what the climate is going to be like in 50 years' time. The only difference is that my piece of code has maybe 10 or 15 questionable assumptions built into it, and 5 or 6 bits of doubtful data, and within 24 hours one colossal bug has been revealed in it. Those supercomputers are running bits of code which have tens of thousands of assumptions, and more or less as many bits of doubtful data, and hundreds of bugs which nobody's yet found. And yet their authors are confident enough to publish the results in reputable journals!
What actually happens is that the assumptions tend to get buried and forgotten, and so does the dubious data, and everything else. If it looks good, and produces nice graphs with curving lines, then, well, it must be right, mustn't it? It's very easy to get seduced into thinking that the code works fine, when it doesn't really work fine at all. It's easy to become over-optimistic about what you've written, if most of it is buried out of sight and out of mind.
I produced some nice sweeping curves yesterday. I think they're quite impressive. Well, they would be if I could remove the hesitant jitteriness of them, and lend them a confident certainty. They would then look like the stuff that gets published in upmarket journals. But while nice curves are ever so seductive, they don't really mean anything at all. After all, I could have just written a Nice Sweeping Curves generator - a bit of code which generates, well..., nice sweeping curves.
I've written too many computer programmes to believe anything just because it's the product of a computer programme. If anything, that seems to be a very good reason to thoroughly distrust and disbelieve it. Because computers are so seductive. They've got all the curves. When the climate scientists fall back on their computer models to provide evidence for their beliefs, I don't buy it. They're probably just fooling themselves. They've probably made a lot of dubious assumptions, used lots of uncertain data, and filled their programmes with inadvertent bugs. I know, because that's what I do myself, over and over again.