3.30.2011

Optimization

I've been working on a Monte Carlo particle simulator for a class I'm taking, and I've been wondering about the most efficient way to write it. I know premature optimization is the root of all evil, but I can already see a few problems ahead.

Right now, I am using MATLAB. I've decided to go the object-oriented route, because each particle has a number of properties, such as energy and momentum, that I'd like to keep together for clarity's sake, so I defined a class to hold all of these variables. The program needs to track a fairly large number of particles (~10,000) for the Monte Carlo statistics to work out. My question concerns the efficiency of the numerical operations applied to all of those particles.
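For concreteness, here's roughly the shape of the thing, sketched in Python rather than MATLAB since it's shorter. The class and field names are just placeholders, not my actual code:

    import numpy as np

    class Particle:
        """One particle's state; the fields here are illustrative placeholders."""
        def __init__(self, energy, momentum):
            self.energy = energy        # scalar, e.g. in eV
            self.momentum = momentum    # 3-vector

    # A population of ~10,000 particles, stored as a plain array of objects.
    particles = [Particle(1.0, np.zeros(3)) for _ in range(10000)]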

As I understand it, most linear algebra libraries recognize when a set of independent operations is being applied across a large block of data, and execute those operations as efficiently as possible, using vectorized instructions and multiple cores as needed. However, there is a chance the library won't recognize my code as such a case. If I had a plain array of numbers and wanted to apply an operation to it, the library would know it could process the array in parallel. But if I have an array of objects, and want to operate on a field of each of those objects, the library may not be able to optimize that the same way.
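To make that contrast concrete, here is a small hypothetical sketch of the two layouts in NumPy. The "decay" update is made up; it just stands in for any independent per-particle operation:

    import numpy as np

    class Particle:
        def __init__(self, energy):
            self.energy = energy

    # Array of numbers: one vectorized call, executed in compiled code.
    energies = np.random.rand(10000)
    energies *= 0.95                  # decays all 10,000 energies at once

    # Array of objects: the same update becomes an interpreted loop that
    # the library never sees as a whole, so it cannot vectorize or
    # parallelize it.
    particles = [Particle(e) for e in energies]
    for p in particles:
        p.energy *= 0.95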

Basically, I wonder whether MATLAB will know to handle my array of particles efficiently. I'd like to rewrite this simulator in Python for code clarity anyway, so I also wonder how NumPy's array operations would handle it.
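If I do port it, my guess is that the idiomatic fix is to invert the layout: one object holding whole arrays, rather than an array of objects. A hypothetical sketch of what I mean:

    import numpy as np

    class ParticleEnsemble:
        """Struct-of-arrays layout: one object holds the whole population."""
        def __init__(self, n):
            self.energy = np.random.rand(n)      # shape (n,)
            self.momentum = np.zeros((n, 3))     # shape (n, 3)

        def decay(self, factor):
            # Whole-array update: NumPy sees contiguous data and runs the
            # operation in compiled, vectorized code.
            self.energy *= factor

    ensemble = ParticleEnsemble(10000)
    ensemble.decay(0.95)

This keeps the clarity benefit of grouping the properties together, while still handing the numerics library the big flat arrays it knows how to optimize.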
