Statistics Homework Help|Generalized Linear Model Exam Help|ML estimation


In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable through a link function and by allowing the variance of each measurement to be a function of its predicted value.

The generalized linear model was formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression, and Poisson regression. They proposed an iteratively reweighted least squares (IRLS) method for maximum likelihood estimation of the model parameters. Maximum likelihood estimation remains popular and is the default method in many statistical computing packages. Other approaches, including Bayesian methods and least-squares fits to variance-stabilized responses, have also been developed.


Statistics Homework Help|Generalized Linear Model Exam Help|ML estimation

For ML estimation of a model, the log-likelihood function (rather than the deviance) plays a paramount role in estimation. It is the log-likelihood function that is maximized, hence the term “maximum likelihood”.

We can illustrate the log-likelihood function for a particular link function (parameterization) by substituting the inverse link function $g^{-1}(\eta)$ for $\mu$. For example, the gamma canonical link (reciprocal function) is
$$
\mu=\frac{1}{x \boldsymbol{\beta}}
$$
For each instance of a $\mu$ in the gamma $\mathcal{L}$ function, we replace $\mu$ with the inverse link of the linear predictor $1 / x \boldsymbol{\beta}$.
$$
\begin{aligned}
\mathcal{L} & =\sum_{i=1}^n\left\{\frac{y_i / \mu_i-\left(-\ln \mu_i\right)}{-\phi}+\frac{1-\phi}{\phi} \ln y_i-\frac{\ln \phi}{\phi}-\ln \Gamma\left(\frac{1}{\phi}\right)\right\} \\
& =\sum_{i=1}^n\left\{\frac{y_i x_i \boldsymbol{\beta}-\ln \left(x_i \boldsymbol{\beta}\right)}{-\phi}+\frac{1-\phi}{\phi} \ln y_i-\frac{\ln \phi}{\phi}-\ln \Gamma\left(\frac{1}{\phi}\right)\right\}
\end{aligned}
$$
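To make the substitution concrete, here is a minimal numerical sketch of ML estimation for the canonical-link (reciprocal) gamma model: it maximizes the log-likelihood above with a general-purpose optimizer. The simulated data and the `neg_loglik` helper are illustrative assumptions, not part of the original development.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Simulated data for illustration: X has a constant column, y > 0.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(1.0, 2.0, size=n)])
true_beta = np.array([0.5, 0.3])
mu_true = 1.0 / (X @ true_beta)                # canonical (reciprocal) link
phi_true = 0.5
y = rng.gamma(shape=1.0 / phi_true, scale=mu_true * phi_true)

def neg_loglik(params):
    """Negative gamma log-likelihood with mu = 1/(x_i beta) substituted in,
    parameterized as (beta, ln phi) so that phi stays positive."""
    beta, phi = params[:-1], np.exp(params[-1])
    eta = X @ beta
    if np.any(eta <= 0):                       # mu = 1/eta must be positive
        return np.inf
    ll = ((y * eta - np.log(eta)) / (-phi)
          + (1.0 - phi) / phi * np.log(y)
          - np.log(phi) / phi
          - gammaln(1.0 / phi))
    return -ll.sum()

start = np.r_[np.full(X.shape[1], 0.4), 0.0]   # crude but feasible start
fit = minimize(neg_loglik, start, method="Nelder-Mead")
print("beta_hat:", fit.x[:-1], "phi_hat:", np.exp(fit.x[-1]))
```

Because the reciprocal link does not force $x_i \boldsymbol{\beta}$ to be positive, the sketch simply returns an infinite objective when the linear predictor is nonpositive; production fitters handle this more gracefully, for example with step halving.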

Statistics Homework Help|Generalized Linear Model Exam Help|Log-gamma models

We mentioned before that the reciprocal link estimates the rate per unit of the model response, given a specific set of explanatory variables or predictors. The log-linked gamma represents the log-rate of the response. This model specification is identical to exponential regression. Such a specification, of course, estimates data with a negative exponential decline. However, unlike the exponential models found in survival analysis, we cannot use the log-gamma model with censored data. We see, though, that uncensored exponential models can be fit with GLM specifications. We leave that to the end of this chapter.
The log-gamma model, like its reciprocal counterpart, is used with data in which the response is greater than 0. Examples can be found in nearly every discipline. For instance, in health analysis, length of stay (LOS) can generally be estimated using log-gamma regression because stays are always constrained to be positive. LOS data are generally estimated using Poisson or negative binomial regression because the elements of LOS are discrete. However, when there are many LOS elements, that is, many different LOS values, many researchers find the gamma or inverse Gaussian models to be acceptable and even preferable.

Before GLM, data that are now estimated using log-gamma techniques were generally estimated using Gaussian regression with a log-transformed response. Although the results are usually similar between the two methods, the log-gamma technique, which requires no external transformation, is easier to interpret and comes with a set of residuals with which to evaluate the worth of the model. Hence, the log-gamma technique is finding increased use among researchers who once used Gaussian techniques.
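As a rough illustration of that comparison, the sketch below fits the same simulated positive response two ways: a log-linked gamma GLM on the untransformed response, and a Gaussian (OLS) regression on $\ln(y)$. It assumes statsmodels is available; the data and variable names are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

# Simulated positive response (think length of stay) with one predictor.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(1.0 + 0.4 * x)                     # log link: E(y) = exp(x'beta)
y = rng.gamma(shape=4.0, scale=mu / 4.0)       # gamma response with mean mu

# Log-gamma GLM: response left untransformed, residuals come with the fit.
glm_fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Older practice: Gaussian (OLS) regression on the log-transformed response.
ols_fit = sm.OLS(np.log(y), X).fit()

print("log-gamma GLM coefficients:", glm_fit.params)
print("OLS on ln(y) coefficients: ", ols_fit.params)
```

The slope estimates typically agree closely, while the intercepts differ, because the GLM models $\ln E(y \mid x)$ whereas the transformed-response regression models $E(\ln y \mid x)$.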

The IRLS algorithm for the log-gamma model is the same as that for the canonical-link model except that the link and inverse link become $\ln (\mu)$ and $\exp (\eta)$, respectively, and the derivative of $g$ is now $1 / \mu$. The ease with which we can change between models is one of the marked beauties of GLM. However, because the log link is not canonical, the IRLS and modified ML algorithms will give different standard errors. But, except in extreme cases, differences in standard errors are usually minimal. Except perhaps when working with small datasets, the method of estimation used generally makes little inferential difference.
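A bare-bones IRLS loop for the log-gamma model makes those substitutions explicit; only the link, its inverse, and its derivative differ from the canonical-link algorithm. This is a schematic sketch on simulated data, not the exact updating scheme of any particular package.

```python
import numpy as np

def irls_log_gamma(X, y, tol=1e-8, max_iter=50):
    """IRLS for a gamma GLM with log link: g(mu) = ln(mu),
    g^{-1}(eta) = exp(eta), g'(mu) = 1/mu, variance V(mu) = mu**2.
    Assumes the first column of X is a constant."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())                 # intercept-only starting value
    for _ in range(max_iter):
        eta = X @ beta
        mu = np.exp(eta)                       # inverse link
        gprime = 1.0 / mu                      # derivative of the log link
        z = eta + (y - mu) * gprime            # working response
        w = 1.0 / (mu**2 * gprime**2)          # reduces to 1 for log-gamma
        XtW = (X * w[:, None]).T
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Toy check against the data-generating coefficients (1.0, 0.4):
rng = np.random.default_rng(2)
x = rng.normal(size=300)
X = np.column_stack([np.ones(300), x])
y = rng.gamma(shape=4.0, scale=np.exp(1.0 + 0.4 * x) / 4.0)
print(irls_log_gamma(X, y))
```

With the log link and gamma variance $V(\mu)=\mu^2$, the weights $1/\{V(\mu)[g'(\mu)]^2\}$ are constant, so each iteration amounts to an ordinary least-squares regression of the working response $z$ on $X$.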

