## 2018-12-26

I continued thinking about row equivalence.

I spent some time on finding bases for the “fundamental subspaces” (range, null space, etc.) and on change of basis.

I thought about how I would organize linear algebra, and wrote a page about this.


## 2018-12-25

I thought about a thing in linear algebra (Vipul’s trick for calculating a basis for the null space).

I read Pete L. Clark’s axiomatic approach to integration, in his honors calculus text.

I started thinking about row equivalence, in particular equivalent ways to express that two matrices are row equivalent.

## 2018-12-24

I continued working through Leary & Kristiansen’s book (syntax and semantics of first-order logic).

I continued reading Peter Smith’s book. (Robinson arithmetic, $\Delta_0, \Sigma_1, \Pi_1$ formulas)

I moved over the page Understanding mathematical definitions to the Learning Subwiki (previously it was in my userspace on the Machine Learning Subwiki) and worked on it more.

## 2018-12-23

I continued working through Leary & Kristiansen’s book.

I wrote the page Model as the representation and as that which is represented.

I wrote a bibliography page for this blog.

## 2018-12-22

I continued working through Leary & Kristiansen’s logic book.

I felt somewhat tired on this day so I mostly didn’t do math (aside from Anki reviews as usual).

## 2018-12-21

I did some exercises out of Leary & Kristiansen’s mathematical logic book.

I also continued reading Peter Smith’s book.

## 2018-12-20

I continued reading Peter Smith’s book.

I thought about the two different ways to state the completeness theorem.
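For the record, the two standard formulations (as given in most first-order logic texts, and my best guess at the pair I was comparing) are:

```latex
% Two standard formulations of completeness for a first-order proof system:
\begin{enumerate}
  \item If $\Gamma \models \varphi$ then $\Gamma \vdash \varphi$
        (every semantic consequence is provable).
  \item Every consistent set of sentences has a model.
\end{enumerate}
% Each implies the other: applying (2) to $\Gamma \cup \{\neg\varphi\}$ gives (1),
% and the contrapositive of (1) with $\varphi = \bot$ gives (2).
```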

I made more stubs in my userspace on the Machine Learning Subwiki, e.g., Least search operator, Models symbol, Semantic completeness, and Gödel’s completeness theorem.

## 2018-12-19

I continued reading Peter Smith’s book. I thought about the difference between the completeness and incompleteness theorems in logic. I have a feeling that at least at this level, logic isn’t hard, but that it takes a lot of effort to become familiar with all the definitions (e.g., logical vs non-logical symbols, language, structure, interpretation, model, consequence, expressibility/arithmetical definability, capturability/definability, the two meanings of completeness, theory, effectively axiomatized theory) and notations (e.g., the two meanings of $\models$, always being clear about meta level vs object level).

I worked problems out of Intermediate Counting & Probability (chapter 2, sets and logic).

## 2018-12-18

I thought a bit about referencing in math (e.g., how to name results, whether to number equations/propositions, how to make it easy to look up notation), the difference between references and tutorials (the former should “give you everything” while the latter should make the reader do the work), and the relative merits of some math textbooks I like. I hope to write about this thing more in the future (it’s not really directly about math or AI safety, but I do think this sort of thing is under-discussed).

I thought about the s-m-n theorem, in particular the proof of it in Epstein and Carnielli’s book. I think I got the general idea here but I had trouble understanding some of the details in the specific encoding they used (I probably have to read more of the surrounding context). I do find it pretty annoying that all these textbooks use different arbitrary encodings, so that to understand the same thing I need to keep multiple encodings in mind. (It doesn’t help that the notation can also vary a lot between books.) On the plus side, it seems like once one has both the universal function and the s-m-n theorem, the rest of computability theory can be done in an encoding-free way (sort of like how once one constructs the real numbers and proves the least upper bound property, one can do analysis in a construction-free way).
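For my own reference, the encoding-free statement of the theorem (in the standard notation, which differs from Epstein and Carnielli’s) is:

```latex
% s-m-n theorem, standard statement:
\textbf{Theorem.} For all $m, n \ge 1$ there is a primitive recursive function
$s^m_n$ such that for all indices $e$ and all $\vec{x} \in \mathbb{N}^m$,
$\vec{y} \in \mathbb{N}^n$:
\[
  \varphi^{(n)}_{s^m_n(e,\vec{x})}(\vec{y}) \simeq \varphi^{(m+n)}_e(\vec{x},\vec{y}).
\]
```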

I began thinking again about logic. I began reading Peter Smith’s An Introduction to Gödel’s Theorems. I also began re-reading the logic parts of the Boolos/Burgess/Jeffrey book.

## 2018-12-17

I thought a bit about the linear dependence lemma in Axler’s book.

I thought about things in computability theory, in particular the s-m-n theorem, and the first and second Kleene recursion theorems.

I wrote the page Taking inf and sup separately.

I also started on some other pages: Characterization of recursively enumerable sets, S–m–n theorem, Diagonalization out of a class, and Index and program. Some of these pages are quite stubby, but I hope to work on them at some point.

I organized the pages in my userspace of the Machine Learning Subwiki a bit. In particular, I noticed that there were many pages about computability that I was having trouble finding, so I created a dedicated subdirectory for these. I might do something similar with the other subjects later.

## 2018-12-16

I thought about some stuff in linear algebra again, in particular singular value decomposition, orthogonal projections, and the Gram–Schmidt process.

I also thought about the “taking inf and sup separately” trick (which I would write about the following day), and proved lemma 11.3.3 in Tao’s Analysis I (without looking at the proof in the book, of course).

## 2018-12-15

I took a break from math studying to do some other things. (I still did my Anki reviews.)

## 2018-12-14

I thought about two definitions of a limit point of a set: (1) $p$ is a limit point of $S$ iff $p$ is the limit of a sequence of distinct points of $S$; (2) $p$ is a limit point of $S$ iff it is an adherent point of $S\setminus \{p\}$. I tried to prove these were equivalent, and I think I succeeded. (The former definition is mentioned in Pugh’s analysis book; the latter is from Tao’s book.)
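Here is a sketch of the equivalence as I reconstructed it (so any slip here is mine, not Pugh’s or Tao’s), working in a metric space:

```latex
\begin{proof}[Sketch]
(1)$\Rightarrow$(2): Suppose $p = \lim_{n\to\infty} x_n$ with the $x_n \in S$
pairwise distinct. At most one $x_n$ can equal $p$, so every
$\varepsilon$-ball around $p$ contains some $x_n \in S \setminus \{p\}$;
hence $p$ is adherent to $S \setminus \{p\}$.

(2)$\Rightarrow$(1): For each $n$ choose $x_n \in S \setminus \{p\}$ with
$d(x_n, p) < 1/n$. Any single point $x \neq p$ can occur only finitely often
(since $d(x, p) > 0$ is fixed while $d(x_n, p) \to 0$), so infinitely many
distinct values appear; passing to a subsequence of pairwise distinct points
gives a sequence of distinct points of $S$ converging to $p$.
\end{proof}
```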

I did more problems out of Intermediate Counting & Probability. I did all of chapter 1 (review of basics) just to make sure I could do the problems.

## 2018-12-13

I went back to thinking about something in Tao’s Analysis I (proposition 6.4.12(c)). I also thought about another problem involving sequences and the limit superior/inferior.

I spent some time thinking about the “logistic success curve” and logistic regression.

## 2018-12-12

I thought about the proof of the first and second graph principles (terminology from the Boolos/Burgess/Jeffrey book that isn’t standard as far as I know; the two principles state that a total or partial function is recursive if and only if its graph relation is semirecursive).

I spent more time reading about the Kleene fixed point/recursion theorem.

I went back to Tao’s Analysis I to think about proposition 6.4.12 (a), (b), and (c).

(This was day 2 of a caffeine reset, which made thinking difficult.)

## 2018-12-11

I spent some time reading about the Kleene fixed point theorem and s-m-n theorem.

I tried proving the equivalent properties for recursively enumerable sets.
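For reference, the standard characterizations I was aiming at (these are textbook results, stated here from memory rather than from any one source): for $A \subseteq \mathbb{N}$, the following are equivalent:

```latex
\begin{itemize}
  \item $A$ is the domain of a partial recursive function;
  \item $A$ is semirecursive (some algorithm halts exactly on the members of $A$);
  \item $A$ is $\Sigma_1$-definable;
  \item $A$ is the range of a partial recursive function;
  \item $A = \emptyset$, or $A$ is the range of a \emph{total} recursive function.
\end{itemize}
```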

(This was day 1 of a caffeine reset, which made thinking difficult.)

## 2018-12-10

I thought about some of the equivalences in this linear algebra summary table.

I proved some of the propositions in Linear Algebra Done Right.

I asked a question on Math Stack Exchange about the use of “conversely” in some proofs in Linear Algebra Done Right.

I thought about how to show that “divide the data points by the standard deviation” and “scale the data points by a constant so that the new standard deviation is 1” are the same.
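A quick numerical check of this, using only the standard library (the data here is made up purely for illustration): since the standard deviation scales linearly, dividing by $\sigma$ is exactly the scaling by a constant $c = 1/\sigma$ that makes the new standard deviation equal to 1.

```python
import statistics

# Hypothetical sample data, chosen only for illustration.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

sigma = statistics.pstdev(data)  # population standard deviation

# View 1: divide each data point by the standard deviation.
divided = [x / sigma for x in data]

# View 2: scale by the constant c chosen so the new std is 1.
# Since pstdev(c * x) = c * pstdev(x) for c > 0, the required
# constant is c = 1/sigma -- the same operation as View 1.
c = 1 / sigma
scaled = [c * x for x in data]

assert divided == scaled
assert abs(statistics.pstdev(divided) - 1.0) < 1e-12
```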

I thought about some stuff related to composition of limits (in analysis), and did one exercise from Spivak’s Calculus.

I made some edits to the page Comparison of concepts in computability theory.

## 2018-12-09

I tried to reconstruct a proof (that I had read a day or two earlier) that there exists a pair of recursively enumerable but recursively inseparable sets (I think I succeeded).

I did problem 8 in chapter 8 of Spivak’s Calculus.

I did problem 1 in section 13 of Munkres’s Topology. I also started on problem 2 but after I got the hang of it I stopped.

I read some articles on the Tricki related to real analysis. I especially enjoyed this page.

I think I continued reading a bit of Rogers’s computability book.

## 2018-12-08

I didn’t do much math. I thought again about Stillwell’s proof that there is a computable infinite tree with no computable infinite path (which I still don’t understand). (I think this is called the Kleene tree?)

While reading about recursively inseparable sets, I realized that this pattern of words was similar to saying a series is “absolutely divergent”, so I made a page to track similar terms.

I looked a bit at Hartley Rogers’s text on recursive functions. I think this book goes into computability in more depth than the text by Boolos, Burgess, and Jeffrey, so I might want to look at this text more. (Fun fact: Rogers was Stillwell’s advisor.)

## 2018-12-07

I continued reading Stillwell’s Reverse Mathematics. Stillwell’s book goes over some results in computability, so I went back a bit to thinking about some computability stuff.

I wrote a page called Tiers of learning in mathematics.
