
Error attempting Logistic Regression #5

Closed
dhimmel opened this issue May 13, 2014 · 6 comments

Comments


dhimmel commented May 13, 2014

I am encountering an error when trying to run a binomial logistic ridge regression. The error does not occur when using the default least squares regression.

import GLMNet
cv_gauss = GLMNet.glmnetcv(X, y, GLMNet.Normal(), alpha=0.0)
cv_binom = GLMNet.glmnetcv(X, y, GLMNet.Binomial(), alpha=0.0)

The final line of code produces the following error:

no method glmnet!(Array{Any,1},Array{Float64,2},Array{Float64,1},Binomial)
at In[100]:3
 in glmnet at /home/dhimmels/.julia/v0.2/GLMNet/src/GLMNet.jl:346
 in glmnetcv at /home/dhimmels/.julia/v0.2/GLMNet/src/GLMNet.jl:382

I am new to Julia, so perhaps I'm missing something. Thanks.

simonster (Member) commented

From the docs:

For logistic models, y is a m x 2 matrix, where the first column is the count of negative responses for each row in X and the second column is the count of positive responses.

but ideally we'd support y given as a vector (see #3)
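
As a quick illustration of that format (a sketch, assuming y is a plain 0/1 response vector, and using current broadcasting syntax): the first column holds the negative counts, 1 .- y, and the second the positive counts, y.

# Build the documented two-column count matrix from a 0/1 response vector.
y_mat = hcat(1 .- y, y)   # column 1: count of negatives, column 2: count of positives
cv_binom = GLMNet.glmnetcv(X, y_mat, GLMNet.Binomial(), alpha=0.0)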


dhimmel commented May 13, 2014

Converting my y vector to the matrix did the trick:

y_mat = hcat(abs(y - 1), y)
cv_binom = GLMNet.glmnetcv(X, y_mat, GLMNet.Binomial(), alpha=0.0)

In the past, when using cv.glmnet in R, I've used lambda.1se (the largest lambda at which the MSE is within one standard error of the minimal MSE). Is there any way to easily retrieve this value, see the coefficients for this lambda, and make predictions using that model?

Thanks,
Daniel


IainNZ commented Feb 12, 2015

@simonster I'd be interested in the lambda.1se thing too, if you know.

simonster (Member) commented

I would guess lambda.1se is something like:

minloss, index = findmin(cv.meanloss)
maximum(cv.lambda[cv.meanloss .<= minloss + cv.stdloss[index]])

but I'll double-check with the R interface tomorrow.

simonster (Member) commented

This is the R code that computes lambda.1se. What I wrote above seems right. I'll include this when I get around to making a general cross validation framework for my various Lasso packages.
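
To finish answering the earlier question (coefficients and predictions at lambda.1se), here is a sketch in current Julia syntax. It assumes the glmnetcv result cv exposes the fitted path with intercepts a0 and per-lambda coefficients betas whose columns line up with cv.lambda (as in the GLMNet.jl README), and X_new is a hypothetical matrix of new observations.

# One-standard-error rule: largest lambda whose mean CV loss is within one
# standard error of the minimum.
minloss, minindex = findmin(cv.meanloss)
lambda_1se = maximum(cv.lambda[cv.meanloss .<= minloss + cv.stdloss[minindex]])

# Locate that lambda in the path fitted during cross-validation
# (assumes cv.lambda matches the path's lambda sequence).
index_1se = findfirst(isequal(lambda_1se), cv.lambda)

# Coefficients and intercept at lambda.1se.
beta_1se = cv.path.betas[:, index_1se]
a0_1se = cv.path.a0[index_1se]

# Predictions for new data: linear predictor, then the logistic link for
# probabilities (binomial model).
eta = X_new * beta_1se .+ a0_1se
probs = 1 ./ (1 .+ exp.(-eta))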


dhimmel commented Feb 16, 2015

Great, thanks for the interim solution to calculate lambda using the 'one-standard-error' rule.
