2018-12-18

I thought a bit about referencing in math (e.g., how to name results, whether to number equations/propositions, how to make it easy to look up notation), the difference between references and tutorials (the former should “give you everything” while the latter should make the reader do the work), and the relative merits of some math textbooks I like. I hope to write more about this in the future (it isn’t directly about math or AI safety, but I do think this sort of thing is under-discussed).

I thought about the s-m-n theorem, in particular the proof given in Epstein and Carnielli’s book. I think I got the general idea, but I had trouble understanding some of the details of the specific encoding they use (I probably need to read more of the surrounding context). I do find it pretty annoying that all these textbooks use different arbitrary encodings, so that to understand the same thing I need to keep multiple encodings in mind. (It doesn’t help that the notation can also vary a lot between books.) On the plus side, it seems like once one has both the universal function and the s-m-n theorem, the rest of computability theory can be done in an encoding-free way (sort of like how once one constructs the real numbers and proves the least upper bound property, one can do analysis in a construction-free way).
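For my own reference, here is the s-m-n theorem stated in one common notation (this is the generic statement, not Epstein and Carnielli’s particular encoding; I’m writing φ_e^(k) for the k-ary partial computable function with index e):

```latex
% s-m-n theorem (parametrization theorem), generic statement:
% for all m, n >= 1 there is a primitive recursive function s^m_n
% such that for all indices e and all x_1, ..., x_m, y_1, ..., y_n,
\varphi^{(m+n)}_{e}(x_1,\dots,x_m,y_1,\dots,y_n)
  \simeq \varphi^{(n)}_{s^m_n(e,\,x_1,\dots,x_m)}(y_1,\dots,y_n)
% where \simeq is Kleene equality (either both sides are undefined,
% or both are defined and equal).
```

If I remember correctly, the fact that the universal function and s-m-n are essentially all one needs from the indexing is sometimes packaged as the notion of an acceptable numbering, which is maybe the cleanest way to state the analogy with the least upper bound property.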

I began thinking about logic again. I started reading Peter Smith’s An Introduction to Gödel’s Theorems, and also began re-reading the logic parts of the Boolos/Burgess/Jeffrey book.