## 2019-01-29

I continued working through Goldrei’s logic book, mostly on results leading up to the completeness theorem for propositional logic. I also read through the proof of the completeness theorem (I believe it’s called a Henkin-style proof), but I don’t understand it yet. I think I need to go back and make more of the previous results “automatic” before I can fully understand the completeness proof.

## 2019-01-28

I continued with logic, working out of Goldrei’s book (soundness theorem and some peripheral results).

I also started going through the Henkin proof of completeness (for first-order logic) in Leary and Kristiansen’s book. There’s a lot of stuff going on in this proof, so I’m planning to go through it multiple times (Ankifying as I go).

## 2019-01-27

I worked through more of Goldrei’s logic book.

I spent some time adding to the Models symbol page.

I also spent a bunch of time trying to look up different definitions of semantic consequence (I think there are at least two different definitions floating around that disagree on formulas containing free variables, but I haven’t been able to find much).
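For concreteness, here are the two definitions I believe are floating around (my own reconstruction, so not attributed to any particular book). Say $\Gamma \models \varphi$ when either (1) every structure-plus-assignment satisfying all of $\Gamma$ also satisfies $\varphi$, or (2) every structure in which each member of $\Gamma$ is true under all assignments also makes $\varphi$ true under all assignments. These agree on sentences but can disagree on formulas with free variables, e.g.

```latex
P(x) \models \forall x\, P(x)
```

holds under definition (2) but fails under definition (1), since an assignment can satisfy $P(x)$ without every assignment doing so.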

## 2019-01-26

I didn’t do much math on this day. In the evening I tried again to read Robin Hanson’s pre-rationality paper, as well as some related posts by Wei Dai (1, 2) and Abram Demski. I failed yet again to understand it; I probably wasn’t trying hard enough. I also think understanding this would be easier after I’ve read more about Aumann agreement and related topics.


## 2019-01-25

I did some non-math things on this day, including some work for Vipul.

I found out that a book about emotions in mathematics was recently published. I was initially excited and started reading it, but became less excited once I saw the titles of the papers it collects.

## 2019-01-24

I rested on this day and didn’t do any math.

## 2019-01-23

I continued working through Leary and Kristiansen’s logic book, but I didn’t like the proof of the deduction theorem, so I decided to work through Goldrei’s book a bit to get myself used to working with formal systems; Goldrei’s book goes slower, spends time building up propositional logic rather than going straight to first-order logic, and also uses a more “mindless” kind of axiomatic system (Leary and Kristiansen’s book seems to define the deductive system by bringing in semantics, which is the thing I didn’t like/understand).

I thought about how to explain belief propagation and started writing something up, but then gave up.

## 2019-01-22

This was pretty similar to the previous day: I worked through more of Leary and Kristiansen’s logic book.

I read a bit of the Boolos/Burgess/Jeffrey book, namely 14.1 (sequent calculus) and 14.3 (other proof procedures and Hilbert’s thesis). I had previously read this when I was first learning logic out of this book (many months ago). This made way more sense now that I have had more exposure to logic from other sources.

I started on the page Comparison of intuitive notions of computability and proof.

I thought again about the two different ways of stating semantic completeness.
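For my own reference, the two formulations I keep comparing (standard ones, stated from memory):

```latex
\text{Weak completeness:}\quad \models \varphi \;\Longrightarrow\; \vdash \varphi
\qquad\qquad
\text{Strong completeness:}\quad \Gamma \models \varphi \;\Longrightarrow\; \Gamma \vdash \varphi
```

Weak completeness is the special case $\Gamma = \varnothing$ of strong completeness.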

I read more of Peter Smith’s Gödel book.

(The caffeine reset has been going well; I can mostly think normally now and I don’t need to nap in the afternoon.)

## 2019-01-21

I worked through more of Leary and Kristiansen’s logic book.

I also read a bit of Chiswell and Hodges’s logic book, to see what it’s like. (Peter Smith recommends both of these books for this level of first-order logic.) I think I have a preference for Leary and Kristiansen’s book, so I am planning to make that one my “main book” for now and to refer to Chiswell and Hodges only as a supplement.

I also read a bit more of Peter Smith’s Gödel book.

## 2019-01-20

I continued with the caffeine reset and relaxation.

I again did some work for Vipul Naik (Donations List Website).

I worked through some exercises in Leary and Kristiansen’s logic book.

## 2019-01-19

I continued with the caffeine reset and relaxation.

I also did some work for Vipul Naik (on the Donations List Website).

In the evening I did some math. I think I finally figured out how to phrase covariance in terms of a conditional expectation (though I’m still undecided on whether this makes things more intuitive).
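The identity I have in mind (guessing slightly at which phrasing is meant) is $\operatorname{Cov}(X,Y) = \operatorname{Cov}(X, \mathbb{E}[Y \mid X])$, which follows from the tower property, i.e. from $\mathbb{E}[XY] = \mathbb{E}[X\,\mathbb{E}[Y \mid X]]$ and $\mathbb{E}[Y] = \mathbb{E}[\mathbb{E}[Y \mid X]]$:

```latex
\operatorname{Cov}(X, Y)
  = \mathbb{E}[XY] - \mathbb{E}[X]\,\mathbb{E}[Y]
  = \mathbb{E}\bigl[X\,\mathbb{E}[Y \mid X]\bigr] - \mathbb{E}[X]\,\mathbb{E}\bigl[\mathbb{E}[Y \mid X]\bigr]
  = \operatorname{Cov}\bigl(X, \mathbb{E}[Y \mid X]\bigr).
```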

I also proved the equivalence of the definition of differentiation with Caratheodory’s definition (for a single variable), and proved the chain rule using Caratheodory’s definition.
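For the record, a sketch of the argument via Carathéodory’s definition (a standard proof, written here from memory): $f$ is differentiable at $a$ iff there is a function $\varphi$, continuous at $a$, with

```latex
f(x) - f(a) = \varphi(x)\,(x - a), \qquad f'(a) = \varphi(a).
```

If $g$ is differentiable at $a$ (witnessed by $\psi$) and $f$ is differentiable at $g(a)$ (witnessed by $\varphi$), then

```latex
f(g(x)) - f(g(a)) = \varphi(g(x))\,\bigl(g(x) - g(a)\bigr) = \varphi(g(x))\,\psi(x)\,(x - a),
```

and $x \mapsto \varphi(g(x))\,\psi(x)$ is continuous at $a$ with value $f'(g(a))\,g'(a)$, which is the chain rule. The usual difference-quotient argument has to special-case $g(x) = g(a)$; this version avoids that entirely.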

## 2019-01-18

I continued with the caffeine reset and relaxation.

## 2019-01-17

I decided to do a caffeine reset and to relax, so didn’t do any math on this day.

## 2019-01-16

I continued learning about things in statistics (variance, covariance, confidence vs credible intervals, significance tests). I’m still at the stage of exploring the ideas rather than using them to solve problems.

## 2019-01-15

I continued thinking about some things in probability and statistics (extending the sample space, what the type of a “hypothesis” is, the setup of classical statistical inference).

I started on pages for Expectation and Variance.

## 2019-01-14

I thought about some things in probability and statistics, e.g. notation for expected value and the expansion of the sample space.

## 2019-01-13

I finished reading “A Technical Explanation of Technical Explanation”.

## 2019-01-12

I read most of the way through “A Technical Explanation of Technical Explanation”.

## 2019-01-11

I continued learning about Lagrange multipliers. I want to say more about this later (probably on a Subwiki), but for now I will just note that Lagrange multipliers seem like an inherently visual topic, so I don’t like how many explanations are very verbal. Even when an explanation includes a visualization, it usually includes just one visualization; if I were to explain it, I would want to include multiple visualizations, including nearby incorrect visualizations.
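As a toy sanity check of the stationarity condition $\nabla f = \lambda \nabla g$ (a made-up example of my own, not taken from any of the explanations mentioned): maximize $f(x,y) = xy$ subject to $x + y = 1$.

```python
def f(x, y):
    return x * y  # objective to maximize

def g(x, y):
    return x + y - 1  # constraint: g(x, y) = 0

def lagrange_candidate():
    # Stationarity: grad f = lam * grad g, i.e. (y, x) = lam * (1, 1),
    # which forces x = y; combined with x + y = 1 this gives x = y = 1/2.
    return 0.5, 0.5

def best_on_constraint(n=10_000):
    # Brute-force check: sample points on the line x + y = 1 and keep
    # the one with the largest objective value.
    return max(((i / n, 1 - i / n) for i in range(-n, 2 * n + 1)),
               key=lambda p: f(*p))

x_star, y_star = lagrange_candidate()
x_num, y_num = best_on_constraint()
print(f(x_star, y_star))    # 0.25
print(abs(x_num - x_star))  # 0.0
```

The brute-force maximum over the constraint line lands exactly on the point the multiplier condition singles out, which is the kind of "nearby incorrect visualization" check I'd want an explanation to support.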

## 2019-01-10

I started a page called AI safety papers to track my progress on reading AI safety papers. How should I split my time between trying to understand papers and “catching up” on background mathematics? I am not really sure what to think, and so far I’ve just been going with what my curiosity/gut says. The two activities aren’t really separate, because the latter helps with the former, but it’s more of a psychological framing: should I think of myself as someone running up against a wall (the cutting-edge AI safety papers) and back-chaining toward background material when I get stuck? Or should I think of myself as building up some sort of knowledge base/personal encyclopedia, slowly expanding it to cover the cutting-edge material?

I wrote the page Distribution of X over Y. This mostly comes from my frustration with people using the word “distribution” in many sort-of-similar but formally-different ways. I wanted to see if there is some unified way of looking at this word.

I read a bit more of Myerson’s book. (My plan is to leave this alone for now while I process the definitions in Anki over the coming days.)

I looked at Tao’s proof of the least upper bound property for reals, especially the part where I got stuck. The proof is actually pretty interesting to me now that I have forgotten some of the material (when I originally went through this chapter, it felt like a mindless sequence of arbitrary-seeming steps), especially all the previous propositions used in the proof of this theorem. When I originally went through the book, the propositions came in kind of a meaningless order (just the order in which things can be proved, from a logical standpoint). But when I set a “target” of an interesting theorem to prove (here, the least upper bound property), I have to go backwards to hunt down all the prerequisites. If I just go through the prerequisites in order (as when I originally worked through the book), it can seem obvious what the prerequisites are (because things are still fresh in my mind), so I am not doing the work of hunting them down. If I forget just enough of the proof, I strike a good balance between (1) not being so lost that it’s impossible to prove it in a reasonable amount of time, and (2) not being primed so much that the results to use in the proof are obvious and I skip the work of thinking about the structure of the proof.

I started learning about Lagrange multipliers. Actually, I have seen this material several times, but I keep forgetting it, so this time I decided to go through it extra-carefully, Ankifying as I go, to make sure the understanding sticks. I am interested in this because it seems to come up everywhere.

I thought about the question of why a linear transformation (in $\mathbf R^2$, with rank 2) turns the unit circle into an ellipse. This actually led to the question of what an ellipse even is (I find the sum of distances definition unintuitive). What seems natural to me is to say something like an ellipse is anything that “looks like” $(a \cos t, b\sin t)$ if you use the right orthonormal basis. I think I will write more about this at some point (on a Subwiki). I am interested in this to understand how linear transformations act, and also because this geometric view seems important for understanding SVD.
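One way to make the ellipse precise (a standard SVD argument, sketched here from memory): write $A = U \Sigma V^{\mathsf T}$ with $U, V$ orthogonal and $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2)$, where $\sigma_1, \sigma_2 > 0$ because $A$ has rank 2. Since $V^{\mathsf T}$ maps the unit circle to itself, $V^{\mathsf T}(\cos t, \sin t)^{\mathsf T} = (\cos s, \sin s)^{\mathsf T}$ for some $s$, and

```latex
A \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}
  = U \Sigma \begin{pmatrix} \cos s \\ \sin s \end{pmatrix}
  = U \begin{pmatrix} \sigma_1 \cos s \\ \sigma_2 \sin s \end{pmatrix},
```

so in the orthonormal basis given by the columns of $U$, the image of the unit circle is exactly $(\sigma_1 \cos s, \sigma_2 \sin s)$: an ellipse in the “right orthonormal basis” sense above.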
