Step 4 of 2LevelImpute

Welcome to the forum for Stat-JR users. Feel free to post your questions about Stat-JR here. The Centre for Multilevel Modelling takes no responsibility for the accuracy of these posts, as we are unable to monitor them closely. Do go ahead and post your question, and thank you in advance if you find the time to post any answers!

We will add further support, such as FAQs and tutorials, to the Stat-JR website as soon as it is available; the Stat-JR website can be found here: http://www.bristol.ac.uk/cmm/software/statjr/
shakespeare
Posts: 70
Joined: Thu Feb 14, 2013 11:12 pm

Step 4 of 2LevelImpute

Post by shakespeare » Thu Jul 31, 2014 6:53 pm

I'm trying to translate what I did in Realcom to Stat-JR using the 2LevelImpute template. My model is a two-level model with all the variables at level one and location as the clustering variable. The outcome is binary, and there are about 10-15 predictors in the MOI (model of interest), depending on how I set things up. In the imputation model, I want to treat everything as a response, so for each variable I'll have something like y=B+u+e.
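In multilevel notation, that per-variable model (intercept plus a location-level random effect plus an individual-level residual) is the usual two-level variance-components model:

```latex
y_{ij} = \beta_0 + u_j + e_{ij}, \qquad
u_j \sim \mathrm{N}(0, \sigma^2_u), \qquad
e_{ij} \sim \mathrm{N}(0, \sigma^2_e)
```

where $i$ indexes individuals and $j$ indexes locations.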

In Realcom I used a burn-in of 2000 iterations and 1000 iterations between imputations. That burn-in was probably excessive, but I wasn't sure how much was enough, so I erred on the side of caution based on my reading of the literature. There does not seem to be a burn-in setting for the imputation model, but there are burn-in and iteration settings for the MOI. How is the burn-in for the imputation model determined? And what recommendations can be made about burn-in and iterations for the MOI in my case?

ChrisCharlton
Posts: 1111
Joined: Mon Oct 19, 2009 10:34 am

Re: Step 4 of 2LevelImpute

Post by ChrisCharlton » Fri Aug 01, 2014 12:52 pm

The burn-in for the imputation model is hardcoded to 1000 iterations (see the line

Code:

estinputs['burnin'] = '1000'

in 2LevelImpute.py), although this runs after an adaptation phase of 5000 iterations. To choose the best values for the burn-in and iterations for the MOI you will need to look at the model diagnostics after the model has run. As the model-running process may take a while, it is probably better to err on the side of caution, to reduce the likelihood that you will need to run it again.
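As an aside, one rough check on whether the burn-in and chain length are adequate is the effective sample size (ESS) of each parameter's chain: a heavily autocorrelated chain carries far less information than its raw length suggests. Stat-JR reports its own diagnostics; the sketch below is plain Python/NumPy with made-up chains, not Stat-JR code, and just illustrates the idea:

```python
import numpy as np

def effective_sample_size(chain):
    """Estimate the effective sample size of an MCMC chain by summing
    the autocorrelations up to the first non-positive lag."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    x = chain - chain.mean()
    var = x.var()
    if var == 0:
        return float(n)
    acf_sum = 0.0
    for lag in range(1, n):
        rho = np.dot(x[:-lag], x[lag:]) / ((n - lag) * var)
        if rho <= 0:          # stop once autocorrelation dies out
            break
        acf_sum += rho
    return n / (1.0 + 2.0 * acf_sum)

rng = np.random.default_rng(0)
white = rng.normal(size=5000)          # independent draws
ar = np.empty(5000)                    # AR(1) chain with strong persistence
ar[0] = 0.0
for t in range(1, 5000):
    ar[t] = 0.95 * ar[t - 1] + rng.normal()

print(effective_sample_size(white))    # roughly the chain length
print(effective_sample_size(ar))       # far smaller than 5000
```

If the ESS for a parameter is much smaller than the number of stored iterations, that is a sign you should lengthen the run (or the gap between stored draws).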

shakespeare
Posts: 70
Joined: Thu Feb 14, 2013 11:12 pm

Re: Step 4 of 2LevelImpute

Post by shakespeare » Fri Aug 01, 2014 1:28 pm

OK, that makes sense. A couple of other questions. Since I'm primarily a SAS and Stata programmer, I'm used to generating command files that I can save and run again at a later date. Is it possible to do the same in Stat-JR by saving the input string?

I see in the 2LevelImpute example that there is a procedure to recover the imputed data files. It's not clear whether the procedure copies the files or moves them (I haven't tried it yet). If the original files are left behind, it would be nice to know where they are physically saved, since after a few models have been run I might want to clean up my disk and delete unwanted files.

ChrisCharlton
Posts: 1111
Joined: Mon Oct 19, 2009 10:34 am

Re: Step 4 of 2LevelImpute

Post by ChrisCharlton » Fri Aug 01, 2014 1:54 pm

Yes. If you copy the input string from the bottom of the page then, as long as you have the same template and data loaded, you can paste it into the input box at a later date to automatically fill in the answers to the questions.
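For example, you could keep the copied string in a plain text file alongside your data, much like a SAS or Stata command file. A minimal sketch (the string and filename below are made-up placeholders, not a real Stat-JR input string):

```python
from pathlib import Path

# Placeholder only: the real contents are whatever Stat-JR displays at
# the bottom of the page; the string is treated here as opaque text.
input_string = "{'burnin': '500', 'iterations': '2000'}"

# Save the input string for a later session
Path("moi_inputs.txt").write_text(input_string)

# Later: reload it and paste it back into the Stat-JR input box
restored = Path("moi_inputs.txt").read_text()
print(restored == input_string)  # True
```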

The imputed data files are currently only stored in memory and will disappear when you close the application. You can download them either by clicking the download button after a model run, or by selecting them with Choose in the Dataset menu and then using Download, also in the Dataset menu. If you want to clear them from memory, the Dataset menu has a Drop option where you can choose which data to discard.

shakespeare
Posts: 70
Joined: Thu Feb 14, 2013 11:12 pm

Re: Step 4 of 2LevelImpute

Post by shakespeare » Fri Aug 01, 2014 2:34 pm

Understood. I appreciate your help.
