Optimal computer for MLwiN

Posted: Wed Mar 11, 2015 6:25 pm
by leap
Dear MLwiN users,

I have been using MLwiN to run simultaneous multi-state multilevel discrete-time event history models (similar to, e.g., Steele et al. 2005; Steele et al. 2006). As some of you will know, such models are quite computationally intensive and usually slow to iterate.

As a research group, we are looking into investing in a supercomputer. We would like to know, in your experience, what combination of processors, disk type and RAM would best reduce MLwiN convergence time?

Thank you for your time.

Léa

Re: Optimal computer for MLwiN

Posted: Wed Mar 25, 2015 5:34 pm
by ChrisCharlton
I haven't fitted models of this type, so I can only describe some general characteristics of MLwiN.

Estimation in MLwiN is currently single-threaded, so having more than one processor core will only improve speed if you are running more than one instance of the software at once.
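
To illustrate the point, the way to use several cores is to run several independent models side by side, one per core. Here is a minimal Python sketch of that idea; the "run_one_model" command and macro file names are placeholders, not real MLwiN invocations, so substitute however you normally drive MLwiN in batch (e.g. via runmlwin or R2MLwiN):

```python
import subprocess

# Hypothetical batch commands, one per model; replace with whatever you
# actually use to run MLwiN non-interactively.
jobs = [
    ["run_one_model", "--script", "model1_macro.txt"],
    ["run_one_model", "--script", "model2_macro.txt"],
    ["run_one_model", "--script", "model3_macro.txt"],
]

# Launch all jobs at once; each MLwiN instance is single-threaded and
# occupies one core, so useful parallelism = number of simultaneous jobs.
procs = [subprocess.Popen(cmd) for cmd in jobs]
exit_codes = [p.wait() for p in procs]
print(exit_codes)
```

This only helps if the models are independent of one another; a single model still runs on one core.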

All data is held in memory, so once loaded the disk type won't make a difference, unless you are short of RAM and the computer has to swap to disk.

The GUI version of MLwiN is a 32-bit application, and so will only be able to use a maximum of 4 GB of RAM (assuming that you are running a 64-bit operating system). This is per instance, so if you are running multiple instances this limit will apply to each of them. The scripting versions that can be called from runmlwin (http://www.bris.ac.uk/cmm/software/runmlwin/) or R2MLwiN (http://www.bris.ac.uk/cmm/software/r2mlwin/) do have 64-bit versions and will therefore not be subject to this memory limitation.
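
As a rough way to judge whether a dataset will sit comfortably inside that per-instance limit, you can estimate the size of the raw data alone. This is only a back-of-envelope sketch: the 8-bytes-per-value figure assumes double-precision storage and it ignores MLwiN's own overhead and any extra columns created during estimation, so treat the result as a lower bound.

```python
# Rough lower bound on memory needed for the data alone,
# assuming 8 bytes (double precision) per stored value.
def data_gigabytes(rows, columns, bytes_per_value=8):
    return rows * columns * bytes_per_value / 1024**3

# Example: 2 million person-period records with 60 variables.
print(f"{data_gigabytes(2_000_000, 60):.2f} GB")  # ~0.89 GB
```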

You will therefore need to think about how many models you are likely to run simultaneously, and on how much data. For single models you should get better performance from processors with higher per-core clock speed, rather than more cores.