# Multinomial probit regression with mixed type explanatory variables

By : user2172869
Date : October 22 2020, 08:10 PM
I have a data frame called aggregates, composed of numerical columns, each with a significant number of zero values. I want to fit a probit model for each column by regressing it on another data frame called exp_vars, which is composed of factors, ordered factors, integers, and numerics. I tried this, but it failed.
The problem is indeed with the formula. The following works:
code :
```
lapply(aggregates, function(y) glm(y ~ ., family = binomial(link = "probit"),
                                   data = cbind(y = y, exp_vars)))
```
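A self-contained sketch of this pattern, with made-up aggregates and exp_vars (all names and values here are illustrative, not from the question):

```
set.seed(1)
# Two binary response columns with many zeros, as in the question
aggregates <- data.frame(a = rbinom(50, 1, 0.3), b = rbinom(50, 1, 0.5))
# Mixed-type predictors: a factor, a numeric, and an integer count
exp_vars <- data.frame(f = factor(sample(letters[1:3], 50, replace = TRUE)),
                       x = rnorm(50),
                       n = sample(1:10, 50, replace = TRUE))

# One probit fit per response column; binding y into the data means
# "y ~ ." expands to exactly the columns of exp_vars
fits <- lapply(aggregates, function(y)
  glm(y ~ ., family = binomial(link = "probit"), data = cbind(y = y, exp_vars)))
lapply(fits, coef)
```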
The key part is binding the response into the data passed to `glm()`:
```
data = cbind(y = y, exp_vars)
```
```
cbind(y = 1:5, exp_vars)
#   y exp1 exp2 exp3
# 1 1   21   11    1
# 2 2   22   12    0
# 3 3   23   21    0
# 4 4   24   22    1
# 5 5   25   23    1
```

## Linear Regression in R with variable number of explanatory variables

By : user3776390
Date : March 29 2020, 07:55 AM
Three ways, in increasing order of flexibility.
Method 1
code :
```
fit <- lm(Y ~ ., data = dat)
```
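A runnable sketch of this approach, with made-up X and Y (the data-generating step is illustrative only):

```
set.seed(1)
# Illustrative predictor matrix with named columns and a response
X <- matrix(rnorm(40), ncol = 4,
            dimnames = list(NULL, paste0("x", 1:4)))
Y <- X %*% c(1, -2, 0.5, 0) + rnorm(10)

# Bind response and predictors into one data frame, then let "." expand
dat <- cbind(data.frame(Y = Y), as.data.frame(X))
fit <- lm(Y ~ ., data = dat)
coef(fit)
```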
Method 2
```
dat <- cbind(data.frame(Y = Y), as.data.frame(X))
fit <- lm(Y ~ ., data = dat)
```
Method 3
```
model1.form.text <- paste("Y ~", paste(xvars, collapse = " + "))
model1.form <- as.formula(model1.form.text)
model1 <- lm(model1.form, data = dat)
```

## Poisson regression with both response and explanatory variables as counting

By : elviiis04
Date : March 29 2020, 07:55 AM
The problem seems well suited to Poisson regression. The residual variance should NOT be homogeneous: the Poisson model assumes the variance is proportional to the mean. You have options if that assumption is violated; the quasi-Poisson and negative-binomial models relax the assumption on the dispersion parameter.
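A sketch of the dispersion point with simulated data (all names are illustrative; the data are deliberately overdispersed):

```
set.seed(1)
x <- rnorm(100)
# Negative-binomial draws: variance exceeds the mean, violating Poisson
y <- rnbinom(100, size = 2, mu = exp(1 + 0.5 * x))

fit_pois  <- glm(y ~ x, family = poisson)
fit_quasi <- glm(y ~ x, family = quasipoisson)

# quasi-Poisson keeps the same coefficients but inflates standard errors
# by the estimated dispersion; a value well above 1 flags overdispersion
summary(fit_quasi)$dispersion
# MASS::glm.nb(y ~ x) would fit the negative-binomial alternative
```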
If the number of quota units owned by fishers sets an upper bound on the number used, then I would not use it as an explanatory variable; it might be better entered as offset = log(quota_units). That changes the interpretation of the estimates, making them estimates of the log usage rate.

## Linear regression with compositional explanatory variables

By : Carlos Eduardo Ferre
Date : March 29 2020, 07:55 AM
You can remove any one of the variables and perform standard linear regression. The reason is that, given n-1 of the compositional variables, the nth is uniquely determined, so it carries no additional information.

## Is there a way of identifying the values of explanatory variables in a logistic regression at the end of a function?

By : user2735301
Date : March 29 2020, 07:55 AM
I have a matrix called all.confusion.tables, which contains tables of predicted versus actual values for my explanatory variables. I then apply a misclassification-rate function to it and want to identify which explanatory variable each result corresponds to. Simple stuff:
code :
```
x <- rnorm(8)  # some dummy data
setNames(x, c("age", "lwt", "race", "smoke", "ptl", "ht", "ui", "ftv"))
```

## Naming explanatory variables in regression output

By : La Vara
Date : March 29 2020, 07:55 AM
Searching through the source, it appears the summary() method does support using your own names for the explanatory variables, via the xname argument. So:
code :
```
results = sm.OLS(y, X).fit()
print(results.summary(xname=['Fred', 'Mary', 'Ethel', 'Bob']))
```
```
                                OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.535
Method:                 Least Squares   F-statistic:                     7.281
Date:                Mon, 11 Apr 2016   Prob (F-statistic):            0.00191
Time:                        22:22:47   Log-Likelihood:                -26.025
No. Observations:                  23   AIC:                             60.05
Df Residuals:                      19   BIC:                             64.59
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
Fred           0.2424      0.139      1.739      0.098        -0.049     0.534
Mary           0.2360      0.149      1.587      0.129        -0.075     0.547
Ethel         -0.0618      0.145     -0.427      0.674        -0.365     0.241
Bob            1.5704      0.633      2.481      0.023         0.245     2.895
==============================================================================
Omnibus:                        6.904   Durbin-Watson:                   1.905
Prob(Omnibus):                  0.032   Jarque-Bera (JB):                4.708
Skew:                          -0.849   Prob(JB):                       0.0950
Kurtosis:                       4.426   Cond. No.                         38.6
==============================================================================

Warnings:
 Standard Errors assume that the covariance matrix of the errors is correctly specified.
```