Application of the Theory: The Graduate Student GPA Data Analysis, Revisited

Doug I. Jones


Regression Analysis: the probabilistic viewpoint in regression is embodied in a model for the variability of the Y data at particular fixed values of the X data. This variability is modeled with conditional distributions; hence the subtitle, "A Conditional Distribution Approach." The entire subject of regression is expressed in terms of conditional distributions; this viewpoint unifies diverse methods such as classical regression, analysis of variance, Poisson regression, logistic regression, heteroscedastic regression, quantile regression, models for nominal Y data, causal models, neural network regression, and tree regression. All of these can conveniently be viewed as models for the conditional distribution of Y given specific values of X.

Conditional distributions are the correct model for regression data. They tell you, for a given value of the predictor X, the distribution of the potentially observable response Y. If you happen to know this distribution, then you know everything you could possibly know about the response variable Y as it relates to that given value of X. Unlike typical regression approaches based on the R^2 statistic, which explain only a fraction of the Y data and are incorrect when their assumptions are violated (as they nearly always are), this model accounts for 100% of the potentially observable Y data.


Here is how the concepts presented in this chapter apply to this concrete situation.

All estimates and standard errors are matrix functions of the observed data set, as described and calculated above.
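
For reference, these are the standard OLS matrix formulas (generic results, not notation specific to this book):
$$
\hat{\beta}=\left(\mathbf{X}^{\mathrm{T}} \mathbf{X}\right)^{-1} \mathbf{X}^{\mathrm{T}} \mathbf{Y}, \qquad \widehat{\operatorname{Var}}(\hat{\beta})=\hat{\sigma}^2\left(\mathbf{X}^{\mathrm{T}} \mathbf{X}\right)^{-1}, \qquad \hat{\sigma}^2=\frac{1}{n-3} \sum_{i=1}^n \hat{\varepsilon}_i^2,
$$
where $\mathbf{X}$ is the $n \times 3$ design matrix with columns $(1, \mathrm{GMAT}, \mathrm{PhD})$, $n = 494$, and the standard errors are the square roots of the diagonal elements of $\widehat{\operatorname{Var}}(\hat{\beta})$.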

The fitted function is (Predicted GPA) $= 2.7506999 + 0.0013572 \times \mathrm{GMAT} + 0.1793805 \times \mathrm{PhD}$. This function defines the plane that minimizes the sum of squared vertical deviations from the individual GPA values to the plane. Figure 7.4 is obtained using the same code that produced Figure 7.1.
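
A minimal least-squares sketch of this fit (not the book's Figure 7.1 code; the arrays below are simulated stand-ins for the actual 494 observations, which are not reproduced here):

```python
import numpy as np

# Fit the plane GPA-hat = b0 + b1*GMAT + b2*PhD by least squares.
rng = np.random.default_rng(0)
n = 494
gmat = rng.normal(600, 80, n)                 # hypothetical GMAT scores
phd = rng.integers(0, 2, n).astype(float)     # 1 = PhD student, 0 = Masters
gpa = 2.75 + 0.00136 * gmat + 0.179 * phd + rng.normal(0, 0.4, n)

X = np.column_stack([np.ones(n), gmat, phd])  # design matrix (1, GMAT, PhD)
beta_hat, *_ = np.linalg.lstsq(X, gpa, rcond=None)
print(beta_hat)                               # (b0, b1, b2)
```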

The processes at work that gave these data on $n=494$ students' GPAs could have given rise to a completely different set of $n=494$ GPAs, even with exactly the same $\mathrm{PhD}$ and GMAT data values as in the current data set. These other data are only potentially observable; they do not refer to specific, existing other students. They simply reflect other possibilities that might have occurred at that particular time and place. Unbiasedness of the parameter estimates means that, while the estimates will differ from one such data set to the next, on average they will be neither systematically above nor systematically below the targets $\beta_0, \beta_1$, and $\beta_2$ that govern the production of the GPA data. In other words, unbiasedness implies that your estimates, $2.7506999$, $0.0013572$, and $0.1793805$, are randomly sampled values from distributions whose means are precisely $\beta_0, \beta_1$, and $\beta_2$, respectively.
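
A quick simulation sketch of this fixed-$X$ thought experiment, under an assumed true model (the coefficients and error standard deviation are illustrative assumptions, not estimates from the data):

```python
import numpy as np

# Hold the design matrix fixed, regenerate the GPA column many times from
# an assumed true model, and check that the estimates average out to the
# true betas (unbiasedness).
rng = np.random.default_rng(1)
n, reps = 494, 10_000
beta_true = np.array([2.75, 0.00136, 0.179])  # assumed, for illustration
gmat = rng.normal(600, 80, n)
phd = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), gmat, phd])  # held fixed across replications
solve_mat = np.linalg.solve(X.T @ X, X.T)     # (X'X)^{-1} X'
estimates = np.empty((reps, 3))
for r in range(reps):
    # another potentially observable data set, same PhD and GMAT values
    gpa = X @ beta_true + rng.normal(0, 0.4, n)
    estimates[r] = solve_mat @ gpa
print(estimates.mean(axis=0))                 # close to beta_true in every coordinate
```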

The same conclusion regarding unbiasedness holds when you imagine the other data sets all having different $\mathrm{PhD}$ and GMAT data (the random-$X$ viewpoint). While this way of looking at the other data sets makes it easier to view them as simply belonging to a different set of $n=494$ students, it is still best not to think of them that way, because there never existed another set of 494 students produced by the same processes that produced these students. Rather, again, you should view these other possible data sets as potentially observable, just as in the fixed-$X$ viewpoint.

Again assuming the data-generating processes just described are well modeled by the classical model, the standard deviations of the parameter estimates you would get from all these other data sets, assuming the same $\mathrm{PhD}$ and GMAT data for all data sets (the conditional-$x$ framework), would be approximately $0.1191639363$, $0.0002155794$, and $0.0503072073$. Thus, since data values from a distribution typically lie within $\pm$ two standard deviations of the mean, and because the means of the distributions of the estimated $\beta$'s are in fact the true $\beta$'s (by unbiasedness), you can expect, for example, that the true $\beta_2$ (measuring the true mean difference between the GPAs of PhD and Masters students who share a common GMAT score) will be within the range $0.1793805 \pm 2(0.0503072073)$. In other words, you can claim confidently that $0.07876609<\beta_2<0.2799949$ (grade points). Under the assumptions of the classical model, an exact $95\%$ confidence version of this interval uses the $T$ distribution to get the multiplier rather than 2.0; the more precise interval is $0.0805365726<\beta_2<0.27822450$.
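
Both intervals can be reproduced from the reported estimate and standard error alone; a short sketch using scipy's $T$ quantile for the exact multiplier:

```python
from scipy import stats

# Reproduce the two intervals for beta_2 from the reported estimate and SE.
b2, se2 = 0.1793805, 0.0503072073
t_mult = stats.t.ppf(0.975, df=494 - 3)       # exact 95% multiplier, approx. 1.965
print(b2 - 2 * se2, b2 + 2 * se2)             # rough interval: (0.0787661, 0.2799949)
print(b2 - t_mult * se2, b2 + t_mult * se2)   # exact interval: approx. (0.080537, 0.278225)
```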

In $95\%$ of the conditional-$x$ samples, the true $\beta_2$ will fall inside similarly constructed intervals; the same conclusion holds in the unconditional case because of the Law of Total Expectation: over all possible random-$X$ samples, the average coverage level is the average of the conditional coverage levels $95\%, 95\%, \ldots$, etc. Because the average of a constant is just that constant, the interpretation of "$95\%$" holds in both the fixed-$X$ and random-$X$ frameworks.
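
A simulation sketch of this coverage claim under the random-$X$ framework, again using assumed model settings for illustration:

```python
import numpy as np
from scipy import stats

# Redraw both X and Y each replication, form the 95% T interval for
# beta_2, and count how often it covers the true value.
rng = np.random.default_rng(2)
n, reps, sigma = 494, 2_000, 0.4
beta_true = np.array([2.75, 0.00136, 0.179])
t_mult = stats.t.ppf(0.975, df=n - 3)
hits = 0
for _ in range(reps):
    gmat = rng.normal(600, 80, n)             # X is redrawn: random-X framework
    phd = rng.integers(0, 2, n).astype(float)
    X = np.column_stack([np.ones(n), gmat, phd])
    y = X @ beta_true + rng.normal(0, sigma, n)
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    s2 = np.sum((y - X @ b) ** 2) / (n - 3)   # residual variance estimate
    se2 = np.sqrt(s2 * XtX_inv[2, 2])         # SE of the beta_2 estimate
    hits += abs(b[2] - beta_true[2]) <= t_mult * se2
print(hits / reps)                            # approx. 0.95
```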

The $R$-Squared Statistic

Recall that the true $R^2$ statistic was introduced in Chapter 6 as
$$
\Omega^2 = 1 - \mathrm{E}\{v(X)\} / \operatorname{Var}(Y),
$$
where $v(x)$ is the conditional variance of $Y$ given $X=x$, written as $v(x)=\operatorname{Var}(Y \mid X=x)$.
The number $\Omega^2$ is a measure of how well the "$X$" variable(s) predict your "$Y$" variable. You can understand this concept in terms of the separation of the distributions $p(y \mid X=x)$ for two cases: (i) $X =$ a "low" value, and (ii) $X =$ a "high" value. When these distributions are well separated, $X$ is a good predictor of $Y$.
For example, suppose the true model is
$$
Y=6+0.2 X+\varepsilon,
$$
where $X \sim \mathrm{N}\left(20,5^2\right)$ and $\operatorname{Var}(\varepsilon)=\sigma^2$. Then $\operatorname{Var}(Y)=0.2^2 \times 5^2+\sigma^2=1+\sigma^2$ and $v(x)=\operatorname{Var}(Y \mid X=x)=\sigma^2$, implying that $\Omega^2=1-\sigma^2 /\left(1+\sigma^2\right)=1 /\left(1+\sigma^2\right)$ is the true $R^2$.
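
A quick Monte Carlo check of this calculation (illustrative only):

```python
import numpy as np

# Verify Omega^2 = 1/(1 + sigma^2) for Y = 6 + 0.2*X + eps, X ~ N(20, 5^2):
# here E{v(X)} = sigma^2 and Var(Y) = 1 + sigma^2.
rng = np.random.default_rng(3)
x = rng.normal(20, 5, 1_000_000)
for sigma2 in (9.0, 1.0, 1 / 9):
    y = 6 + 0.2 * x + rng.normal(0, np.sqrt(sigma2), x.size)
    print(sigma2, 1 / (1 + sigma2), 1 - sigma2 / y.var())  # formula vs. simulation
```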

Three cases to consider are (i) $\sigma^2=9.0$, implying a low $\Omega^2=0.1$, (ii) $\sigma^2=1.0$, implying a medium value $\Omega^2=0.5$, and (iii) $\sigma^2=1 / 9$, implying a high $\Omega^2=0.9$. In all cases, let’s say a “low” value of $X$ is 15.0, one standard deviation below the mean, and a “high” value of $X$ is 25.0, one standard deviation above the mean.

Now, when $X=15$, the distribution $p(y \mid X=15)$ is the $\mathrm{N}\left(9.0, \sigma^2\right)$ distribution; and when $X=25$, the distribution $p(y \mid X=25)$ is the $\mathrm{N}\left(11.0, \sigma^2\right)$ distribution. Figure 8.1 displays these distributions for the three cases above, where the true $R^2$ is $0.1$, $0.5$, or $0.9$ (corresponding to $\sigma^2 = 9.0$, $1.0$, or $1/9$). Notice that there is greater separation of the distributions $p(y \mid x)$ when the true $R^2$ is higher.
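
A sketch along the lines of Figure 8.1 (not the book's code) can be drawn as follows:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Overlay p(y | X = 15) = N(9, sigma^2) and p(y | X = 25) = N(11, sigma^2)
# for the three error variances; separation grows as the true R^2 grows.
y = np.linspace(2, 18, 400)
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, sigma2, r2 in zip(axes, (9.0, 1.0, 1 / 9), (0.1, 0.5, 0.9)):
    sd = np.sqrt(sigma2)
    ax.plot(y, stats.norm.pdf(y, 9.0, sd), label="p(y | X = 15)")
    ax.plot(y, stats.norm.pdf(y, 11.0, sd), label="p(y | X = 25)")
    ax.set_title(f"true R^2 = {r2}")
    ax.set_xlabel("y")
axes[0].legend()
plt.tight_layout()
plt.show()
```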


