As I said, it's for a hobby project of mine, so I don't need cross-platform compatibility. The reason I'm still using AGK is that I already know it and have it.
The matrices were just an example, but matrices of that size could be needed. I want to make a neural network (I'm assuming you know what those are), and I store the weights in matrices. If your input size is 5 and your first layer size is 3, you need a 5x3 matrix. In general, the weight matrix of a layer has dimensions (PreviousLayerSize x CurrentLayerSize), so the weights of layer 2 form a (FirstLayerSize x SecondLayerSize) matrix.
Quote: "//#define ARMA_USE_BLAS
//// Comment out the above line if you don't have BLAS or a high-speed replacement for BLAS,
//// such as OpenBLAS, GotoBLAS, Intel MKL, AMD ACML, or the Accelerate framework.
//// BLAS is used for matrix multiplication.
//// Without BLAS, matrix multiplication will still work, but might be slower."
This is what Armadillo's config file says. I commented out that line, so it uses Armadillo's built-in matrix multiplication, which isn't pre-built, meaning it compiles however I want. I think I could also download a 32-bit version of BLAS, but I don't need the speed since I'm using fairly small matrices. LAPACK is used for matrix decompositions, and since I'm not using those, I undefined ARMA_USE_LAPACK as well.
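For anyone wanting to do the same without editing the installed config file: if I recall correctly, Armadillo also honours `ARMA_DONT_USE_BLAS` / `ARMA_DONT_USE_LAPACK` macros defined before the include, which force the built-in routines the same way. A sketch (this is a config fragment, not a full program):

```cpp
// Define these BEFORE including armadillo to skip the external
// BLAS/LAPACK dependencies and fall back to built-in code.
#define ARMA_DONT_USE_BLAS    // built-in matrix multiplication (slower, no BLAS needed)
#define ARMA_DONT_USE_LAPACK  // decompositions unavailable, but I don't use them

#include <armadillo>

arma::mat multiply_weights(const arma::mat& input, const arma::mat& weights) {
    return input * weights;  // works fine without BLAS, just not as fast
}
```

That keeps the library config untouched, so upgrading Armadillo won't silently re-enable BLAS.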