[NB: Scroll to the bottom if you don’t want to read my Maths rant and just want to see some cool graphs! Click to view full size.]

I’m not going to pretend I know everything about MATLAB. But the more I learn, the more I’m impressed with this immensely powerful software. While reading up on matrices I was at one point inspired to write a program to solve matrix equations, since it really seemed like a job for a computer, and also a good test of my 2-D array programming skills. It goes without saying, of course, that the program never materialised, owing to laziness and a heavy workload from school. However, as I was drawn deeper and deeper into the world of linear algebra I kept coming across this thing called MATLAB. The lecturer kept going on about how MATLAB avoids not only zero pivots but also small pivots for the sake of computational accuracy while doing elimination, how MATLAB only solves the RHS of an equation after getting the LHS into echelon form, and generally how great MATLAB is, and eventually I broke and decided I had to find out more about this program. I really didn’t regret it – it is sincerely awesome. So what’s so good about it?

### Matrices

The lecture series I was watching was on matrices so I decided to test MATLAB’s apparently awesome matrix capabilities first of all. I created A, a random 100×100 matrix using MATLAB’s random number function:

`>> A = rand(100,100);`

The semicolon prevents it from printing the entire matrix to the screen, which just wastes time. This took an apparently infinitesimal amount of time, so I decided to make things a bit more interesting by making it a 1000×1000 matrix instead. MATLAB barely blinked; OK, big deal, it can generate a million random numbers in very little time. Now for the proper tests. Elimination is, generally speaking, O(n^3) – in fact about (1/3)n^3, or precisely 333,833,500 multiplications and divisions for a 1000×1000 system. That’s not actually that many calculations for a computer to perform, but it was the natural first test. Elimination is basically the same process as the factorisation of A into two triangular matrices: PA = LU. So let’s get MATLAB to do just that, with a massive matrix:
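For what it’s worth, that 333,833,500 figure matches the textbook exact counts: (n^3 − n)/3 multiply–subtract operations for elimination on A, plus n(n + 1)/2 for back-substitution. You can check the arithmetic in MATLAB itself:

```matlab
n = 1000;
% (n^3 - n)/3 for forward elimination on A,
% n*(n+1)/2 for back-substitution on the RHS
ops = (n^3 - n)/3 + n*(n+1)/2   % = 333833500
```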

`>> [L,U,P] = lu(A);`

Quite impressive – it was fairly speedy: again, it did it in less than the blink of an eye. In fact I ran all sorts of tests with a single 1000×1000 matrix (generating a new one each time to get round falsely fast results from caching). Transposing was no sweat, and inverting ran for half a second. Curiously enough, rref (Reduced Row Echelon Form) took forever – it ran for over a minute before I stopped it – even though both factorisation and inversion were incredibly speedy. Surely getting from upper triangular to rref shouldn’t take that long? Especially since it’s obvious you’ll get a huge 1000×1000 identity matrix (MATLAB managed to invert it, so it has to be invertible). I also at first thought it odd that no matter how many times I regenerated A, its det always came out as Inf or -Inf. After thinking about it, though, the determinant of a 1000×1000 random matrix is likely to be astronomically large – larger in magnitude than a double-precision number can represent (about 1.8×10^308) – so it overflows to ±Inf.
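If you want to reproduce the timings, here’s a minimal sketch using tic/toc (the variable names are mine, and the exact times will of course depend on your machine):

```matlab
n = 1000;
A = rand(n);                            % fresh random matrix, to dodge caching

tic; [L, U, P] = lu(A);   tLU  = toc;   % PA = LU factorisation
tic; Ainv    = inv(A);    tInv = toc;   % explicit inverse
tic; At      = A.';       tTr  = toc;   % transpose

fprintf('lu: %.3fs  inv: %.3fs  transpose: %.3fs\n', tLU, tInv, tTr);
```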

### Equation Solver

I subsequently discovered MATLAB’s equation solver, which is _very_ awesome and includes complex solutions (e.g. x^5 + x = 0 has 5 solutions, 4 of which are complex – MATLAB got them all). Tying in with our FP1 syllabus, MATLAB confirmed that 1 has six sixth roots (four of them complex).

```
>> solve('x^6 = 1')

ans =

 -1
  1
 -1/2*(-2+2*i*3^(1/2))^(1/2)
  1/2*(-2+2*i*3^(1/2))^(1/2)
 -1/2*(-2-2*i*3^(1/2))^(1/2)
  1/2*(-2-2*i*3^(1/2))^(1/2)
```
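Incidentally, you can cross-check those symbolic roots numerically with roots, which takes a polynomial’s coefficient vector (for x^6 − 1 that’s [1 0 0 0 0 0 -1]):

```matlab
r = roots([1 0 0 0 0 0 -1]);   % coefficients of x^6 - 1, highest power first
disp(r)                        % six roots of unity, four of them complex
max(abs(r.^6 - 1))             % should be ~0, up to floating-point error
```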

It also managed to get a solution for cos(y) = sin(x), although it failed to give me the entire solution space – possibly something to do with its insisting by default on solving for x, which was slightly peeving:

```
>> solve('cos(y)=sin(x)')

ans =

1/2*pi-y
```

### Graph Plotting

And now, enough of me speaking gibberish that I don’t properly understand; onwards to the awesome(-ish) visual bit of MATLAB: the graph plotter. There are some annoying things about it: you can’t just type in some equation involving x and y, like a circle/ellipse equation (or indeed sin(x) = cos(y)), and ask it to plot it – you have to physically create a set of x values:

`>> x = -10:0.1:10; % set of numbers between -10 and 10, resolution of 0.1`

then create a second set of data, y, defined as a function of x. Only then do you plot y against x. I personally find this ‘datum by datum’ method of plotting fairly annoying, but still – graphers are graphers and thus by definition awesome.
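Putting the whole workflow together – plus a sketch of the parametric detour you need for implicit curves like circles (the particular functions plotted here are just my examples):

```matlab
x = -10:0.1:10;   % the set of x values, resolution 0.1
y = sin(x);       % y defined elementwise as a function of x
plot(x, y)        % only now does anything actually get drawn

% implicit curves like a circle need the parametric workaround:
t = 0:0.01:2*pi;
plot(cos(t), sin(t))   % unit circle, datum by datum
axis equal             % otherwise it renders squashed, like an ellipse
```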

```
>> [x,y] = meshgrid(-10:0.01:10, -10:0.01:10);
>> z = sin(x./y);
>> mesh(x,y,z);
>> z = sin(sqrt(x.^2 + y.^2));                                  % ripples from the origin
>> mesh(x,y,z);
>> z = 1 ./ nthroot(x.^2 + y.^2, 2) .* sin(sqrt(x.^2 + y.^2)); % sin(r)/r
>> mesh(x,y,z);
```

Awesome, huh?

‘z = y^2 – x^2’ is part of Virgin Media’s logo.

Is there a cylindrical polar co-ordinates (or any non-Cartesian) function? That would have made the last two fractionally easier to plot and interpret – and enabled easy creation of more pretty shapes!

Does it make any reference to problems at r=0 of sin(r)/r? Or does it know how to take limits?

There is polar coordinate functionality available, which I’m currently in the process of discovering, and which would indeed make certain pretty objects easier to plot. In fact I think I [Ctrl]+[C] [Ctrl]+[V]’ed some MATLAB code for a funky-looking, er, thing (my powers of description fail me) in polar coordinates, though I had no clue what the code was doing and wasn’t initiated enough to take it apart. Forgive me, I had 24 to watch :D
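As a sketch of that polar functionality – this rose curve is just my own example, not the mystery code I pasted – you can either plot directly in polar coordinates or convert back to Cartesian with pol2cart:

```matlab
theta = 0:0.01:2*pi;
rho = cos(4*theta);              % rose curve r = cos(4*theta), eight petals
polar(theta, rho)                % plot directly in polar coordinates

[x, y] = pol2cart(theta, rho);   % or convert to Cartesian first...
plot(x, y), axis equal           % ...and use the ordinary plotter
```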

sin(0)/0 of course came out as NaN, and a plot of sin(x)/x went AWOL at 0 – fair enough. But limits are featured:

```
>> syms z
>> limit(sin(z)/z, z, 0)

ans =

1
```
