[Rsiena-help] Error in x$FRAN(zsmall, xsmall) : Unlikely to terminate this epoch: more than 1000000 steps
Philip Leifeld
philip.leifeld at uni-konstanz.de
Tue Apr 8 15:56:06 CEST 2014
Hi,
I am relatively new to RSiena, so hello to everybody on this list. I am
currently replicating a two-mode RSiena analysis that was published a
while ago by somebody else. I have the data and the code, and I can
replicate two of the three reported models reasonably well, but I keep
getting an error message during the estimation of the third model.
The probability that this error message shows up seems to vary across
RSiena versions. I was hoping you could share your thoughts on what may
be causing the problem or how to avoid it. Here is what I get
after about five minutes:
Phase 2 Subphase 1 Iteration 3 Progress: 14%
Error in x$FRAN(zsmall, xsmall) :
Unlikely to terminate this epoch: more than 1000000 steps
Calls: siena07 ... proc2subphase -> doIterations -> <Anonymous> -> .Call
I found an old thread in the archive of this mailing list where somebody
had the same problem. In that case, the advice was to try unconditional
estimation. Here is the message:
http://lists.r-forge.r-project.org/pipermail/rsiena-help/2012-March/000237.html
In my case, there is also only one dependent network, so I tried
sienaModelCreate(cond = FALSE), but it did not change anything. The
other potential reason reported in the thread was that the composition
change was possibly too substantial. However, in the data I am dealing
with, the vast majority of nodes of both types persists between
consecutive time steps, and the estimation apparently worked in the
original analysis; otherwise the authors could not have reported their
results.
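For completeness, the relevant part of my script looks roughly like this
(the data and effects objects are abbreviated here; "twomode.data" and
"twomode.eff" stand for the objects created earlier in the script from
the replication files):

  library(RSiena)

  # unconditional estimation, as suggested in the old thread
  model.uncond <- sienaModelCreate(cond = FALSE, useStdInits = FALSE,
                                   projname = "twomode_replication")

  # this is the call that fails in Phase 2 with the
  # "more than 1000000 steps" error
  ans <- siena07(model.uncond, data = twomode.data,
                 effects = twomode.eff, batch = TRUE)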
Interestingly, the problem varies across RSiena versions. First, I was
using r267 (the most recent build on R-Forge) and r232 (the current
stable release on CRAN) on my desktop computer, and the problem showed
up every single time and usually around the tenth iteration. Then I
started to use r169, which is the version the original authors were
using in their original analysis (as reported in the paper), and the
problem largely disappeared, showing up only in about one out of every
ten estimation runs (still during the tenth iteration). I thought, OK,
this is not ideal, but I can live with the problem as long as it occurs
only in every tenth run. So I also installed r169 on the HPC cluster I
have access to, and there it seems to produce the error message every
single time, during the third iteration (as in the example message
printed above).
I really have no clue (a) what is causing the problem and (b) why its
probability of occurrence seems to vary across versions and computers.
In one of the earlier e-mails cited above, Tom Snijders stated that the
problem is most likely due to "fitting a model which is too complicated
for your data" and that one should go one step back and build a simpler
model, and then add other model terms step by step. While I agree that
this is a good strategy for model-building, this is not really
satisfactory for a replication because the very goal is to test whether
the same model with the same data leads approximately to the same
coefficients and standard errors. I was also wondering whether this
error message is essentially a sign of model degeneracy, similar to
estimating a degenerate ERGM in statnet, except that here it surfaces
as an explicit error rather than as hard-to-detect convergence problems.
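If rebuilding the model step by step turns out to be the only way
forward, I would probably try something along these lines (only a
sketch; the effect names below are generic examples and not the actual
specification from the paper):

  # start from the default effects for the two-mode network
  eff.simple <- getEffects(twomode.data)

  # estimate the simple model first
  ans1 <- siena07(model.uncond, data = twomode.data,
                  effects = eff.simple, batch = TRUE)

  # add further terms one at a time, reusing the previous result
  # as starting values via prevAns
  eff.step2 <- includeEffects(eff.simple, inPop)
  ans2 <- siena07(model.uncond, data = twomode.data,
                  effects = eff.step2, prevAns = ans1, batch = TRUE)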
I would be happy to receive feedback about what exactly is going on or
how to avoid the problem. Thanks very much in advance!
Philip
--
Postdoctoral Fellow
University of Konstanz, Germany, and
Eawag, ETH domain, Switzerland