
Here Relief is an ordered response with a range of [1:5], and District stores the group identifiers at level 2.

When I try to run this code, a window named "Error detected by MLN" appears, saying:
"error while obeying batch file C:/Users/Diva~1/AppData/Local/Temp/RtmpgnO85h/macrofile_411c5ed77ee3.txt at line number 79:
NeXT

design vector at level 2 is the wrong length."

And when I click OK in that window, R shows
"Error in read.dta(chainfile) :
unable to open file: 'No such file or directory"

I just want to know what happened during this procedure. I have checked that there are no missing values in my dataset.

Do you get the same error if you try different nonlinear options?

MLwiN versions prior to 3.01 would replace very small residual values with the missing code, which could sometimes cause similar problems. I would suggest updating to version 3.01 or later, if you haven't already, to see whether the problem persists.

Another thing to try would be to change the base category to see whether this makes estimation any more successful.

Thanks for your answer. The problem is solved; it may have been caused by some duplicated level-2 items. I am just wondering whether there is a function I could use in R to view the data hierarchy, like the hierarchy viewer in MLwiN. Thanks.

If you just want the number of units at each level, along with the minimum, maximum and mean number of records within each unit then this is already reported in the R2MLwiN output, as well as being available in the @Hierarchy slot of the model object.

If you would like more detailed information then you could adapt the code that calculates the above information. This can be found at the following location in the runMLwiN.R file: https://github.com/rforge/r2mlwin/blob/ ... iN.R#L2673.
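For a quick check, the per-level summary that R2MLwiN reports can also be reproduced directly in base R. Here is a minimal sketch; the data frame `df`/`toy` and the `DISTRICT` column name are illustrative, not taken from the R2MLwiN source:

```r
# Base-R sketch of the hierarchy summary R2MLwiN reports: the number of
# level-2 units and the min/mean/max number of level-1 records per unit.
# `df` and `groupvar` are illustrative names, not part of the package API.
hierarchy_summary <- function(df, groupvar) {
  counts <- table(df[[groupvar]])   # level-1 records per level-2 unit
  c(N    = length(counts),          # number of level-2 units
    min  = min(counts),
    mean = mean(counts),
    max  = max(counts))
}

# Toy example: 3 districts with 2, 3 and 5 records respectively
toy <- data.frame(DISTRICT = rep(c("A", "B", "C"), times = c(2, 3, 5)))
hierarchy_summary(toy, "DISTRICT")
```

Running `table()` on the level-2 identifier is also a quick way to spot duplicated or unexpected unit codes of the kind that caused the earlier error.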

Thank you for your comments. I have tried to run the model using MCMC, but the ESS values are all very small relative to the number of iterations (5000), even though I used the orthogonal parameterisation. The results are shown below. Is there some way to improve the mixing?

MLwiN (version: 2.36) multilevel model (Multinomial)
N min mean max N_complete min_complete mean_complete max_complete
DISTRICT 115 2 8.808696 25 115 2 8.808696 25
Estimation algorithm: MCMC Elapsed time : 21.07s
Number of obs: 1013 (from total 1013) Number of iter.: 5000 Chains: 1 Burn-in: 500
Bayesian Deviance Information Criterion (DIC)
Dbar D(thetabar) pD DIC
2589.358 2487.432 101.925 2691.283
---------------------------------------------------------------------------------------------------
The model formula:
logit(Y, cons, 5) ~ 1 + (1[1:4] | DISTRICT)
Level 2: DISTRICT Level 1: l1id
---------------------------------------------------------------------------------------------------
The fixed part estimates:
Coef. Std. Err. z Pr(>|z|) [95% Cred. Interval] ESS
Intercept_1 -1.55175 0.18080 -8.58 9.263e-18 *** -1.92694 -1.22738 50
Intercept_2 0.12467 0.17880 0.70 0.4857 -0.22464 0.46970 39
Intercept_3 1.04400 0.18617 5.61 2.048e-08 *** 0.68504 1.41288 45
Intercept_4 2.60164 0.20323 12.80 1.609e-37 *** 2.21325 3.01465 52
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
---------------------------------------------------------------------------------------------------
The random part estimates at the DISTRICT level:
Coef. Std. Err. [95% Cred. Interval] ESS
var_Intercept_1234 2.78119 0.50209 1.92571 3.88142 594
---------------------------------------------------------------------------------------------------
The random part estimates at the l1id level:
Coef. Std. Err. [95% Cred. Interval] ESS
bcons_1 1.00000 1e-05 1.00000 1.00000 5000

As your example model only contains an intercept term, the orthogonal parameterisation option will make no difference in this case.

Have you looked at the traces of your parameter chains to see whether they look like they have converged to a distribution? If so, it may just be that you need to increase the number of iterations to reach the ESS values that you want.

Thanks for your answer. I will try to increase the number of iterations or run more chains. Another question: in the new version of R2MLwiN, is there a function I could use to test the proportional odds assumption? And is it possible to compare two models estimated by IGLS (PQL2)?

To compare the two discrete models fit using IGLS you could perform a Wald test using the linearHypothesis function from the car package. For an example of this see chapter 9 of the MLwiN manual examples: http://www.bristol.ac.uk/cmm/media/r2ml ... rGuide09.R.
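For reference, the Wald statistic behind such a test is W = (Rb)' (R V R')⁻¹ (Rb) with df equal to the number of restrictions, where b are the estimated coefficients and V their covariance matrix. A minimal base-R sketch of that computation, demonstrated on an ordinary lm fit so it runs without MLwiN installed (car's linearHypothesis does the equivalent for a fitted model object):

```r
# Minimal Wald test: W = (R b)' (R V R')^-1 (R b), df = nrow(R).
# Shown on an lm fit purely for illustration; car::linearHypothesis
# performs the equivalent computation on fitted model objects.
wald_test <- function(fit, R, r = rep(0, nrow(R))) {
  b    <- coef(fit)
  V    <- vcov(fit)
  diff <- R %*% b - r
  W    <- drop(t(diff) %*% solve(R %*% V %*% t(R)) %*% diff)
  c(chisq = W, df = nrow(R), p = pchisq(W, df = nrow(R), lower.tail = FALSE))
}

# Toy example: test whether the two slope coefficients are equal
set.seed(3)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + 0.5 * d$x1 + 0.5 * d$x2 + rnorm(100)
fit <- lm(y ~ x1 + x2, data = d)
R <- matrix(c(0, 1, -1), nrow = 1)   # restriction: beta_x1 - beta_x2 = 0
wald_test(fit, R)
```

Testing the proportional odds assumption amounts to choosing R so that it contrasts the category-specific coefficients of the same covariate, which is what the chapter 9 example does with linearHypothesis.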

I have used the linearHypothesis to check the proportional odds (PO) assumption in my 2-level model.

Result of Wald test using linearHypothesis:
                    chisq   Pr(>Chisq)
FP_Age2_1234  6.311407258  0.097404346
FP_Age3_1234 11.965737830  0.007501455
FP_Age4_1234  9.438055025  0.023999565

The test results show that, for an ordinal variable (Age: levels 1<2<3<4), FP_Age3_1234 and FP_Age4_1234 differ significantly between the proportional-odds cumulative logit model and the separate cumulative logit model. However, when I check their effects on the response variable, there are no significant effects. In this situation, I am wondering whether I should model them with a common coefficient, with separate coefficients, or simply drop this insignificant variable (my focus is on the effects of group-level variables). And if I do need to add them to the model with separate coefficients, how can I give only FP_Age3_1234 and FP_Age4_1234 separate coefficients, rather than FP_Age_1234 (the whole Age variable), in R2MLwiN?

The model results of the proportional-odds cumulative logit model (Table1) and of the separate cumulative logit model (Table2) are shown in the attachment.