I’ve been making an effort to find more time for my blog lately, but it’s difficult in the midst of the school year. So, to keep my posts relatively frequent, I’ll post some lighter articles in between the more detailed ones, more about ideas than the full intuition and mathematical development of an algorithm. I’ll still work toward the larger, more developed articles as well, but there is no way I can keep them coming out weekly (neural networks is next!).

In that spirit, I want to talk about eigenvalues and eigenvectors. They’re used a lot in machine learning, specifically in Principal Component Analysis, a data reduction method. I mentioned them briefly in a post I did a while back about linear algebra, but I left the math out. It’s time to bring it back up.

Note – If you feel you don’t have the basics to understand this article, read my intro to linear algebra article!

## Intuition

When a vector is multiplied by a matrix, it undergoes some geometric transformation. Maybe it shifts it, rotates it by 30 degrees, or changes its dimensions. We can write this mathematically as \(Ax = b\), where \(x\) is the vector we started with and \(b\) is the vector we end up with. (\(x\) is blue, \(b\) is red.)

There is a special case of this procedure: for certain vectors \(x\), \(Ax\) yields simply a stretched or shrunk version of \(x\). Geometrically, we can picture it like this. In this case, \(x\) is called an eigenvector.

## Math

How can we represent this mathematically? The vector \(b\) is simply \(x\), but scaled.

$$\begin{align} & Ax = b \\ & Ax = \lambda x,\ \text{where } \lambda \in \mathbb{R} \end{align}$$

Note that \(x\) must be nonzero. Here we call \(x\) an eigenvector of the matrix \(A\), and \(\lambda\) an eigenvalue of \(A\).

Now that we have a basic understanding, we have to answer a harder question: how do we find the eigenvalues and eigenvectors of \(A\)? Let’s start with the eigenvalues.
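The defining relation \(Ax = \lambda x\) is easy to check numerically. Here is a minimal NumPy sketch; the matrix and vector are my own illustrative choices, not from the post:

```python
import numpy as np

# An illustrative 2x2 matrix and one of its eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 1.0])   # an eigenvector of A

b = A @ x                  # b = [3., 3.], i.e. x stretched by 3
lam = b[0] / x[0]          # the stretch factor is the eigenvalue

print(lam)                          # 3.0
print(np.allclose(A @ x, lam * x))  # True: Ax = λx
```

Multiplying by \(A\) doesn’t rotate this particular \(x\) at all; it only stretches it by the factor \(\lambda = 3\), which is exactly what makes it an eigenvector.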
This gets a bit math-heavy.

## Finding the eigenvalues

$$\Large{\begin{align} &1.\ Ax = \lambda x \\ &2.\ Ax = \lambda I x \\ &3.\ Ax - \lambda I x = 0 \\ &4.\ (A - \lambda I) x = 0 \end{align}}$$

We can start with the above operations to make things clearer. Step 1 – We have…
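Step 4 is the key: since \(x\) is nonzero, \((A - \lambda I)x = 0\) can only hold if \(A - \lambda I\) is singular, i.e. \(\det(A - \lambda I) = 0\). Solving that equation gives the eigenvalues. A quick NumPy check of this fact (the example matrix is my own, not from the post):

```python
import numpy as np

# For [[2, 1], [1, 2]] the characteristic polynomial is
# det(A - λI) = (2 - λ)² - 1 = 0, so λ = 1 or λ = 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues = np.sort(np.linalg.eigvals(A))
print(eigenvalues)  # approximately [1. 3.]

# Each eigenvalue makes A - λI singular: its determinant is (numerically) 0.
for lam in eigenvalues:
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))  # True
```

Expanding that determinant by hand for an n x n matrix produces a degree-n polynomial in \(\lambda\), the characteristic polynomial, whose roots are the eigenvalues.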

## Linear Algebra in Julia

Most people (including myself) are drawn to Julia by its lofty goals. Speed of C, statistical packages of R, and ease of Python? It sounds too good to be true. However, I haven’t seen anyone who has looked into it say the developers behind the language aren’t on track to accomplish these goals. Having only been around since 2012, Julia’s greatest disadvantage is a lack of community support. If you have an obscure Julia question and you google it, you probably won’t find the answer, whereas with Python or R or Java you would. This also means less package support. The packages for linear algebra, plotting, and other staples are there, but if you want to do computer vision or NLP, you’d be among the few. However, it is definitely worth looking into. I’m not quite a pro, but I’m getting to the point where if I code something in Python, I can easily transfer it to Julia. Then sometimes I test the speed for fun. Julia always wins.

Recently, I wrote an article about linear algebra, with accompanying code in Python. Below is basically the same article, with the code in Julia. If you need help with the basic syntax, I also wrote a basic syntax guide, kind of a compressed version of the documentation. I included the concept descriptions, but they’re no different from my original article. The point of this is just to show how easy it is to do linear algebra in Julia. Here is the IPython notebook on my GitHub. (You can write Julia code in IPython… it’s awesome.)

## The Basics

- matrix – a rectangular array of values
- vector – a one-dimensional matrix
- identity matrix \(I\) – an n x n diagonal matrix with ones on the diagonal from the top left to the bottom right, i.e. [[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]

When a matrix \(A\) is multiplied by its inverse \(A^{-1}\), the result is the identity matrix \(I\). Only square matrices have inverses. Example below. Note – the inverse of a matrix is not the transpose.

Matrices are notated m x n, or rows x columns.
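The post’s own examples are in Julia; as a quick sketch of the same identity-and-inverse facts in NumPy (the matrix values here are my own illustrative picks):

```python
import numpy as np

# An invertible 2x2 matrix (determinant 4*6 - 7*2 = 10, nonzero).
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A times its inverse is I

# The inverse is NOT the transpose.
print(np.allclose(A.T, A_inv))            # False
```

The same check works in Julia with `inv(A)` and `A * inv(A)`; the point is identical in either language.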
A 2×3 matrix has 2 rows and 3 columns. Read this multiple times. You can only add matrices of the same dimensions. You can only multiply two matrices if the first is m x n, and the second is n x p. The n-dimension has to match. Now the…
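The shape rules above can also be sketched in NumPy (shapes are my own illustrative picks): an m x n matrix times an n x p matrix gives an m x p result, while adding mismatched shapes fails outright.

```python
import numpy as np

A = np.ones((2, 3))   # 2x3: 2 rows, 3 columns
B = np.ones((3, 4))   # 3x4

# Multiplication works because the inner dimensions match (3 and 3);
# the result takes the outer dimensions, 2x4.
print((A @ B).shape)  # (2, 4)

# Addition requires identical shapes; 2x3 + 3x4 raises an error.
try:
    A + B
except ValueError as err:
    print("shape mismatch:", err)
```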
