## 2018-12-30

I continued with linear algebra, reading part of week 10 of Tao’s notes.

I then started writing a multiple-choice quiz on computability theory. For now, you can see an interactive version of the quiz here. (That link may become obsolete when I publish the quiz, but the GitHub link should be permanent.)

## 2018-12-29

I continued with linear algebra using Tao’s notes, weeks 7, 8, and 9.

I thought about the following problem a bit, but didn’t get far: if we have an inner product space, we can project a vector onto a subspace to get a best approximation of the vector inside the subspace. But if we start out with a notion of best approximation, can we go from that to an inner product? For instance, the tangent line of a curve through a point is (in a specific sense) the best linear approximation of the curve near that point. Can we now define an inner product (over, say, the polynomials of degree at most $n$) and project an arbitrary polynomial onto the subspace of polynomials of degree at most $1$ so as to recover the tangent line?
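One concrete way to test this question numerically: take the inner product $\langle f, g\rangle = \int_{a-\varepsilon}^{a+\varepsilon} f g$ and let $\varepsilon \to 0$; the projection of a polynomial onto the degree-at-most-$1$ subspace should then approach the tangent line at $a$. This is my own sketch of that check (all names here are mine), done for $p(x) = x^2$ at $a = 1$, whose tangent line is $y = 2x - 1$:

```python
# Sketch: project p(x) = x^2 onto span{1, x} under the inner product
# <f, g> = integral of f*g over [a - eps, a + eps]. As eps -> 0, the
# projection should approach the tangent line at a. The function names
# (moment, project_x_squared) are my own, not from any source.

def moment(a, eps, k):
    """Integral of x^k over [a - eps, a + eps], computed exactly."""
    return ((a + eps) ** (k + 1) - (a - eps) ** (k + 1)) / (k + 1)

def project_x_squared(a, eps):
    """Orthogonal projection of x^2 onto span{1, x}: returns (intercept, slope)."""
    m0, m1, m2, m3 = (moment(a, eps, k) for k in range(4))
    # Normal equations: Gram matrix [[m0, m1], [m1, m2]],
    # right-hand side (<x^2, 1>, <x^2, x>) = (m2, m3). Solve by Cramer's rule.
    det = m0 * m2 - m1 * m1
    intercept = (m2 * m2 - m3 * m1) / det
    slope = (m0 * m3 - m1 * m2) / det
    return intercept, slope

# Tangent line of x^2 at a = 1 is y = 2x - 1; with small eps the
# projection is close to it (the intercept differs by about eps^2 / 3).
intercept, slope = project_x_squared(1.0, 1e-2)
```

With a fixed $\varepsilon$ the projection is not exactly the tangent line (the constant term is off by $\varepsilon^2/3$), which suggests the tangent line is recovered only in the limit, not by any single inner product of this form.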

## 2018-12-28

I continued with linear algebra. I went through some of Terence Tao’s linear algebra notes, especially weeks 3, 4, 5, and 6. I have gone through similar material in Axler’s book and other places, which is why I was able to go through it relatively quickly. I mostly read the notes, but I also worked out some of the proofs before reading them and did some of the exercises.

It’s interesting for me to see how different authors cover the same material (I also enjoyed doing this with real analysis). For instance, Tao gives up on doing determinants completely rigorously, saying we would need more advanced machinery to understand them properly. Tao also gives fun examples involving one-dimensional unit conversion (length, currency) and chemistry (converting molecules to atoms, etc.). I also hadn’t seen the shear operation explained in terms of a parallelogram’s area before (i.e. the shear operation changes neither the base nor the height of a parallelogram, so it does not change the determinant).
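The parallelogram picture has a one-line numerical counterpart: a shear matrix has determinant $1$, so multiplying by it leaves any determinant unchanged. A quick check (my own illustration, not taken from the notes):

```python
import numpy as np

# A shear along the x-axis: it slides the top edge of a parallelogram
# sideways, leaving base and height (hence area) unchanged.
k = 3.7
shear = np.array([[1.0, k],
                  [0.0, 1.0]])

A = np.array([[2.0, 1.0],
              [1.0, 4.0]])

# det(shear) == 1, so det(shear @ A) == det(A):
# shearing the columns of A does not change the (signed) area they span.
```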

I still feel like linear algebra is a jumble of facts. At the same time, I feel like there is a way to organize everything neatly (I think this table is a start), and that’s one of the things motivating me right now.

## 2018-12-27

I worked through this worksheet about a basis for kernel and image. I did some other linear algebra stuff as well. I read the “Coordinates” note from Vipul Naik’s linear algebra notes.

I returned to Kleene’s first and second recursion theorems. I was able to prove both (using the s-m-n theorem) without looking at notes, but I still feel like I don’t really understand these. I went to Sipser’s book to see how he did things there.

## 2018-12-26

I continued thinking about row equivalence.

I spent some time on finding bases for the “fundamental subspaces” (range, null space, etc.) and change of basis stuff.
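One standard way to get orthonormal bases for two of the fundamental subspaces (not necessarily the approach in the notes I was using) is via the singular value decomposition: the first $r$ columns of $U$ span the range, and the last $n - r$ rows of $V^T$ span the null space, where $r$ is the rank. A minimal sketch:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # rank 1: the second row is twice the first

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))  # count the nonzero singular values

col_basis = U[:, :rank]        # orthonormal basis for the column space (range)
null_basis = Vt[rank:].T       # orthonormal basis for the null space
```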

## 2018-12-25

I thought about a thing in linear algebra (Vipul’s trick for calculating a basis for the null space).

I read Pete L. Clark’s axiomatic approach to integration, in his honors calculus text.

I started thinking about row equivalence, in particular equivalent ways to express that two matrices are row equivalent.

## 2018-12-24

I continued working through Leary & Kristiansen’s book (syntax and semantics of first-order logic).

I continued reading Peter Smith’s book (Robinson arithmetic; $\Delta_0$, $\Sigma_1$, and $\Pi_1$ formulas).

I moved over the page Understanding mathematical definitions to the Learning Subwiki (previously it was in my userspace on the Machine Learning Subwiki) and worked on it more.

## 2018-12-23

I continued working through Leary & Kristiansen’s book.

I wrote the page Model as the representation and as that which is represented.

I wrote a bibliography page for this blog.

## 2018-12-22

I continued working through Leary & Kristiansen’s logic book.

I felt somewhat tired on this day so I mostly didn’t do math (aside from Anki reviews as usual).

## 2018-12-21

I did some exercises out of Leary & Kristiansen’s mathematical logic book.

I also continued reading Peter Smith’s book.

## 2018-12-20

I continued reading Peter Smith’s book.

I thought about the two different ways to state the completeness theorem.
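Presumably the two standard formulations are the ones meant here; for my own reference, this is a sketch (my reconstruction, not a quote from any book) of what they say and why they are equivalent:

```latex
% (i)  If $\Gamma \models \varphi$, then $\Gamma \vdash \varphi$.
% (ii) Every consistent set of sentences has a model.
%
% (ii) $\Rightarrow$ (i): if $\Gamma \not\vdash \varphi$, then
% $\Gamma \cup \{\neg\varphi\}$ is consistent, hence has a model,
% so $\Gamma \not\models \varphi$.
%
% (i) $\Rightarrow$ (ii): if $\Gamma$ has no model, then vacuously
% $\Gamma \models \bot$, so $\Gamma \vdash \bot$, i.e., $\Gamma$ is
% inconsistent.
```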

I made more stubs in my userspace on the Machine Learning Subwiki, e.g., Least search operator, Models symbol, Semantic completeness, and Gödel’s completeness theorem.

## 2018-12-19

I continued reading Peter Smith’s book. I thought about the difference between the completeness and incompleteness theorems in logic. I have a feeling that at least at this level, logic isn’t hard, but that it takes a lot of effort to become familiar with all the definitions (e.g., logical vs non-logical symbols, language, structure, interpretation, model, consequence, expressibility/arithmetical definability, capturability/definability, the two meanings of completeness, theory, effectively axiomatized theory) and notations (e.g., the two meanings of $\models$, always being clear about meta level vs object level).

I worked problems out of Intermediate Counting & Probability (chapter 2, sets and logic).

## 2018-12-18

I thought a bit about referencing in math (e.g., how to name results, whether to number equations/propositions, how to make it easy to look up notation), the difference between references and tutorials (the former should “give you everything” while the latter should make the reader do the work), and the relative merits of some math textbooks I like. I hope to write about this more in the future (it’s not directly about math or AI safety, but I do think this sort of thing is under-discussed).

I thought about the s-m-n theorem, in particular the proof of it in Epstein and Carnielli’s book. I think I got the general idea here but I had trouble understanding some of the details in the specific encoding they used (I probably have to read more of the surrounding context). I do find it pretty annoying that all these textbooks use different arbitrary encodings, so that to understand the same thing I need to keep multiple encodings in mind. (It doesn’t help that the notation can also vary a lot between books.) On the plus side, it seems like once one has both the universal function and the s-m-n theorem, the rest of computability theory can be done in an encoding-free way (sort of like how once one constructs the real numbers and proves the least upper bound property, one can do analysis in a construction-free way).
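The encoding-free core of the s-m-n theorem is essentially currying: from an index of a two-argument program and a fixed first argument, one can computably produce an index of a one-argument program. A toy illustration of that idea in Python (this is only an analogy, and all the names here are mine):

```python
# Toy illustration of the s-m-n idea: "programs" are entries in a
# numbered table, and s(e, a) computably produces the *index* of a new
# one-argument program that runs program e with its first argument
# fixed to a. In real computability theory the indices range over all
# programs under some fixed encoding; here they are just list positions.

programs = []  # programs[e] is the e-th "program" (a Python function)

def register(f):
    """Add a program to the table and return its index."""
    programs.append(f)
    return len(programs) - 1

def s(e, a):
    """s-1-1: given the index e of a 2-argument program and a value a,
    return the index of a 1-argument program computing y -> phi_e(a, y)."""
    return register(lambda y: programs[e](a, y))

add = register(lambda x, y: x + y)  # a 2-argument program with some index
add5 = s(add, 5)                    # index of the program y -> 5 + y
```

The point the analogy captures is that `s` never runs program `e`; it only manipulates its description, which is why the function $s$ in the theorem is total and primitive recursive.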

I began thinking again about logic. I began reading Peter Smith’s An Introduction to Gödel’s Theorems. I also began re-reading the logic parts of the Boolos/Burgess/Jeffrey book.

## 2018-12-17

I thought a bit about the linear dependence lemma in Axler’s book.

I thought about things in computability theory, in particular the s-m-n theorem, and the first and second Kleene recursion theorems.

I wrote the page Taking inf and sup separately.

I also started on some other pages: Characterization of recursively enumerable sets, S–m–n theorem, Diagonalization out of a class, and Index and program. Some of these pages are quite stubby, but I hope to work on them at some point.

I organized the pages in my userspace of the Machine Learning Subwiki a bit. In particular, I noticed that there were many pages about computability that I was having trouble finding, so I created a dedicated subdirectory for these. I might do something similar with the other subjects later.

## 2018-12-16

I thought about some stuff in linear algebra again, in particular the singular value decomposition, orthogonal projections, and the Gram–Schmidt process.
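For reference, this is a minimal sketch of the classical Gram–Schmidt variant (my own code, written from the standard description: subtract off the projections onto the earlier orthonormal vectors, then normalize):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of linearly
    independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for q in basis:
            w -= np.dot(q, v) * q   # remove the component of v along q
        basis.append(w / np.linalg.norm(w))  # normalize what remains
    return basis

q1, q2 = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
```

(In floating point the modified variant, which projects the running remainder `w` instead of the original `v`, is numerically better behaved; the classical version above matches how the process is usually first presented.)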

I also thought about the “taking inf and sup separately” trick (which I would write about the following day), and proved lemma 11.3.3 in Tao’s Analysis I (without looking at the proof in the book, of course).

## 2018-12-15

I took a break from math studying to do some other things. (I still did my Anki reviews.)

## 2018-12-14

I thought about two definitions of a limit point of a set: (1) $p$ is a limit point of $S$ iff $p$ is the limit of a sequence of distinct points of $S$; (2) $p$ is a limit point of $S$ iff it is an adherent point of $S\setminus \{p\}$. I tried to prove these were equivalent, and I think I succeeded. (The former definition is mentioned in Pugh’s analysis book; the latter is from Tao’s book.)
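A sketch of the equivalence (my reconstruction, not a quote from either book):

```latex
% (2) $\Rightarrow$ (1): assume every ball around $p$ meets
% $S \setminus \{p\}$. Build a sequence inductively, forcing strictly
% decreasing distances so that the points are automatically distinct:
\begin{align*}
  &\text{pick } x_1 \in S\setminus\{p\} \text{ with } |x_1 - p| < 1,\\
  &\text{pick } x_{n+1} \in S\setminus\{p\} \text{ with }
    |x_{n+1} - p| < \min\!\left(\tfrac{1}{n+1},\ |x_n - p|\right).
\end{align*}
% Then the $x_n$ are distinct points of $S$ converging to $p$.
%
% (1) $\Rightarrow$ (2): if distinct $x_n \to p$, then for any
% $\varepsilon > 0$ infinitely many $x_n$ lie within $\varepsilon$ of
% $p$; at most one of them equals $p$, so some $x_n$ lies in
% $S \setminus \{p\}$ within $\varepsilon$ of $p$, i.e., $p$ is
% adherent to $S \setminus \{p\}$.
```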

I did more problems out of Intermediate Counting & Probability. I did all of chapter 1 (review of basics) just to make sure I could do the problems.

## 2018-12-13

I went back to thinking about something in Tao’s Analysis I (proposition 6.4.12(c)). I also thought about another problem involving sequences and the limit superior/inferior.

I spent some time thinking about the “logistic success curve” and logistic regression.

## 2018-12-12

I thought about the proof of the first and second graph principles (terminology from the Boolos/Burgess/Jeffrey book, which isn’t standard as far as I know; the two principles state that a total or partial function is recursive if and only if its graph relation is semirecursive).
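The proof idea, as I understand it (my sketch, not BBJ’s exact argument):

```latex
% If $f$ is (partial) recursive, its graph is semirecursive:
%   $(x, y) \in G_f \iff \exists s\, [\text{the computation of } f(x)
%     \text{ halts within } s \text{ steps with output } y]$,
% and the bracketed condition is decidable, so the unbounded search
% over $s$ semi-decides membership in the graph.
%
% Conversely, if $G_f$ is semirecursive, dovetail: run the
% semi-decision procedure on $(x, 0), (x, 1), (x, 2), \ldots$ in
% parallel. Since $f$ is a function, at most one pair $(x, y)$ is ever
% accepted, and outputting that $y$ computes $f(x)$.
```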

I went back to Tao’s Analysis I to think about proposition 6.4.12 (a), (b), and (c).

(This was day 2 of a caffeine reset, which made thinking difficult.)

## 2018-12-11

I spent some time reading about the Kleene fixed point theorem and s-m-n theorem.

I tried proving the equivalent properties for recursively enumerable sets.
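For reference, the standard equivalent characterizations I had in mind (the usual list, stated from memory rather than from a particular book):

```latex
% For $S \subseteq \mathbb{N}$, the following are equivalent:
% (1) $S$ is the domain of some partial recursive function.
% (2) $S$ is empty or the range of some total recursive function.
% (3) $S$ is semidecidable: some algorithm halts on exactly the
%     members of $S$.
% (4) $S = \{x : \exists y\, R(x, y)\}$ for some decidable relation $R$.
```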

(This was day 1 of a caffeine reset, which made thinking difficult.)
