2019-01-06

I thought about some stuff in linear algebra (matrix decomposition, change of basis). I started on Matrix of a linear transformation, Riesz representation theorem, and Invertible equals expressible as change of coordinate matrix. I also added to the page List of matrix products.
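To record what the last of those page titles is getting at, here is a rough statement in my own notation (a sketch, not necessarily the wording I will end up using on the page): a square matrix is invertible if and only if it is the change of coordinate matrix between some pair of ordered bases.

    % Sketch of the statement. V is an n-dimensional vector space over F,
    % and [\mathrm{id}_V]_\gamma^\beta denotes the change of coordinate
    % matrix that converts gamma-coordinates into beta-coordinates.
    Q \in M_{n \times n}(F) \text{ is invertible}
    \iff
    Q = [\mathrm{id}_V]_\gamma^\beta \text{ for some ordered bases } \beta, \gamma \text{ of } V.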

I watched the 3Blue1Brown playlist on neural networks. (As of this writing, there are four videos in this playlist.) I was familiar with the material since I went through Michael Nielsen’s book on the subject about a year ago, so I watched this mostly for fun/review/to see if there was anything I didn’t already know. There were a couple of parts that I don’t think Nielsen’s book explicitly mentioned (e.g. how feeding random data as input to the network would cause the network to confidently predict a digit). I also liked that in the fourth video (“Backpropagation calculus”), the weights appear on the computational graph; this is a point that I’ve found confusing myself and have tried to emphasize in my own draft explanation. Another thing I liked is that one of the videos (I forget which one) explicitly said that in the cost function, the weights and biases are the inputs and the training set plays the role of the parameters.
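To spell out that last point for myself, here is a minimal Python sketch (my own toy single-neuron example with made-up data, not code from the videos or from Nielsen’s book): the cost is a function whose arguments are the weight and bias, while the training set is a fixed parameter baked into the function.

    # Toy illustration: the cost is a function of (w, b); the training set is fixed.
    import math

    # Made-up training set of (input, target) pairs; it never changes during training.
    training_set = [(0.0, 0.0), (0.5, 0.0), (1.0, 1.0), (1.5, 1.0)]

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def cost(w, b):
        """Quadratic cost of a single sigmoid neuron, viewed as a function of (w, b)."""
        total = 0.0
        for x, y in training_set:
            a = sigmoid(w * x + b)      # activation of the single neuron
            total += (a - y) ** 2
        return total / (2 * len(training_set))

    # Gradient descent varies the inputs (w, b) of the cost; the data stays put.
    w, b, lr, eps = 0.5, 0.0, 1.0, 1e-6
    for _ in range(1000):
        dw = (cost(w + eps, b) - cost(w - eps, b)) / (2 * eps)  # numerical partial in w
        db = (cost(w, b + eps) - cost(w, b - eps)) / (2 * eps)  # numerical partial in b
        w, b = w - lr * dw, b - lr * db

    print(cost(w, b))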
