[R list package] Computational Problems Using 'List' Package

David Szakonyi ds2875 at columbia.edu
Fri Mar 7 00:10:02 CET 2014


Thanks. I tried running ictregBayes as well, but I've run into problems
with the multiple sensitive item design.

Here's the code I've been trying to run. Note that this is another multiple
sensitive item list experiment on which the ML model runs fine, and it passes
the design test as well.
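
For reference, here's roughly how the design test was run on these data (a
sketch, not the verbatim call: I'm assuming ict.test() can be fed the outcome
and treatment columns directly, with names matching the ictreg() call below).

# Design test sketch -- approximation only; column names mirror the ictreg()
# call below.
library(list)
ict.test(true_subset$full, true_subset$treatstat, J = 4)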


library(MASS)  # for mvrnorm()

# ML fit for the two-item design; used to generate starting values below.
ml.results <- ictreg(full ~ male + lage + ordeduc + logincome + townsize +
                     firmsize, data = true_subset, treat = "treatstat",
                     J = 4, method = "ml", constrained = TRUE)

# Three draws from a dispersed normal approximation around the ML estimates.
draws <- mvrnorm(n = 3, mu = coef(ml.results), Sigma = vcov(ml.results) * 9)

# Bayesian fit, with starting values taken from the draws above.
b1 <- ictregBayes(full ~ male + lage + ordeduc + logincome + townsize +
                  firmsize, data = true_subset, treat = "treatstat", J = 4,
                  n.draws = 100000, burnin = 10000, thin = 5,
                  delta.start = list(draws[1, 1:7], draws[3, 8:14]),
                  psi.start = draws[1, 15:21],
                  delta.tune = diag(0.00075, 7), psi.tune = diag(0.00007, 7),
                  verbose = TRUE, constrained.multi = TRUE)


I keep getting the error: "Error in as.integer(ceiling) : 'ceiling' is
missing"


I also ran the sample code you provided in the help file using the 'race'
data, and the same error appears whenever I run the multiple sensitive item
design. I've also tried setting 'delta.tune' as a list, and turning the
constrained option on and off, with no luck either way (a rough sketch of
those attempts is below). We also looked into the package code and found the
function ictregBayesMulti.fit, which requires a 'ceiling' argument, but we
couldn't figure out how to pass one to it.
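
For completeness, here is roughly what those attempts looked like. This is a
sketch, not the exact code: the idea that delta.tune takes one tuning matrix
per sensitive item is my guess rather than documented behavior, and the last
two calls simply pull up the internal fitting function.

# Sketch only: assuming delta.tune expects one tuning matrix per sensitive
# item in the two-item design (a guess, not documented behavior).
b2 <- ictregBayes(full ~ male + lage + ordeduc + logincome + townsize +
                  firmsize, data = true_subset, treat = "treatstat", J = 4,
                  n.draws = 100000, burnin = 10000, thin = 5,
                  delta.tune = list(diag(0.00075, 7), diag(0.00075, 7)),
                  psi.tune = diag(0.00007, 7),
                  verbose = TRUE, constrained.multi = TRUE)

# Inspecting the internal fitting function that appears to want a 'ceiling'
# argument.
getAnywhere("ictregBayesMulti.fit")
args(list:::ictregBayesMulti.fit)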


Thanks for your help!


David


On Mon, Mar 3, 2014 at 8:24 PM, Kosuke Imai <kimai at princeton.edu> wrote:

> Good.  Our standard error calculation relies upon the numerical evaluation
> of the Hessian matrix, which in some cases is not very stable.  We probably
> need to improve this in the next version, but for now I think you can use
> the Bayesian model, which should be more stable.
>
> Kosuke Imai
> Department of Politics
> Princeton University
> http://imai.princeton.edu
>
>
> On Mon, Mar 3, 2014 at 2:20 PM, David Szakonyi <ds2875 at columbia.edu>wrote:
>
>> Yes, it does pass the design test (p-value of 0.9384607). I'm going to
>> try the Bayes model. Any other thoughts about why this would be happening?
>>
>> Thanks,
>>
>> David
>>
>>
>> On Wed, Feb 26, 2014 at 9:41 AM, Kosuke Imai <kimai at princeton.edu> wrote:
>>
>>> Does your data pass the design test we've developed?  Sometimes, skewed
>>> data leads to this type of problem.  Another possibility is to use a
>>> Bayesian model, which is available through ictregBayes().
>>>
>>> Kosuke Imai
>>> Department of Politics
>>> Princeton University
>>> http://imai.princeton.edu
>>>
>>>
>>> On Tue, Feb 25, 2014 at 6:44 PM, David Szakonyi <ds2875 at columbia.edu>wrote:
>>>
>>>> Hello!
>>>>
>>>> I've been analyzing a two-treatment list experiment that was properly
>>>> randomly assigned.
>>>>
>>>> I just ran the following regression with a single binary explanatory
>>>> variable (gender), but received the following error:
>>>>
>>>> > ml.resultsnp <- ictreg(outcome ~ male, data = true_subsetnp, treat =
>>>> "treatstatnp", J=4, method = "ml")
>>>>
>>>> Error in solve.default(-MLEfit$hessian) :
>>>>   system is computationally singular: reciprocal condition number =
>>>> 7.02376e-18
>>>>
>>>> I then added an additional covariate (logged age), and the model
>>>> converges, but with exploding standard errors:
>>>>
>>>> Item Count Technique Regression
>>>>
>>>> Call: ictreg(formula = outcome ~ male + lage, data = true_subsetnp,
>>>>     treat = "treatstatnp", J = 4, method = "ml")
>>>>
>>>> Sensitive item (1)
>>>>                 Est.    S.E.
>>>> (Intercept)  3.94811 4.15077
>>>> male        -1.43966 1.21671
>>>> lage        -1.78966 1.14337
>>>>
>>>> Sensitive item (2)
>>>>                  Est.       S.E.
>>>> (Intercept)  -7.93707   10.31522
>>>> male        -16.06552 2965.82521
>>>> lage          0.99201    2.63957
>>>>
>>>> Control items
>>>>                 Est.    S.E.
>>>> (Intercept)  0.21267 0.25312
>>>> male         0.00017 0.05277
>>>> lage        -0.04225 0.06603
>>>>
>>>> Log-likelihood: -2039.366
>>>>
>>>> Number of control items J set to 4. Treatment groups were indicated by
>>>> '1' and '2' and the control group by '0'.
>>>>
>>>> A variety of other model specifications return very similar results:
>>>> they either fail to converge or they return point estimates with very
>>>> large standard errors (sometimes nearly identical values for variables
>>>> that are not at all correlated).
>>>>
>>>> Does anyone have any suggestions about what might be going wrong?
>>>>
>>>> Thanks,
>>>>
>>>> David
>>>>
>>>>
>>>> --
>>>> David Szakonyi
>>>> Ph.D Candidate - Comparative Politics
>>>> Columbia University
>>>> ds2875 at columbia.edu
>>>>
>>>>
>>>
>>>
>>
>>
>> --
>> David Szakonyi
>> Ph.D Candidate - Comparative Politics
>> Columbia University
>> ds2875 at columbia.edu
>>
>
>


-- 
David Szakonyi
Ph.D Candidate - Comparative Politics
Columbia University
ds2875 at columbia.edu

