# The Regression Model in Matrix Form


## The Regression Model in Matrix Form

The model representation $Y=\mathbf{X} \beta+\varepsilon$ is not complete because it states nothing about the assumptions. The following expression is a complete representation of the classical model; notice how simple the model looks when expressed in matrix form.
The classical model in matrix form
$$\boldsymbol{Y} \mid \mathbf{X}=\mathbf{x} \sim \mathrm{N}_n\left(\mathbf{x} \boldsymbol{\beta}, \sigma^2 \mathbf{I}\right)$$
Here, the $\mathbf{X}=\mathbf{x}$ condition refers to a specific realized matrix $\mathbf{x}$ of the random matrix $\mathbf{X}$, and is a simple generalization to matrix form of the $X=x$ condition we have used repeatedly. The matrix $\mathbf{X}$ contains potentially observable (random) $X$ values, as well as fixed values for any non-random $X$ data. The first column of $\mathbf{X}$ is ordinarily the column of 1s needed to capture the intercept term $\beta_0$, and this column is not random.
In Appendix A of Chapter 1, we introduced the bivariate normal distribution, which is a distribution of two variables. Here, the symbol "$\mathrm{N}_n\left(\mathbf{x} \boldsymbol{\beta}, \sigma^2 \mathbf{I}\right)$" refers to a multivariate normal distribution. The "$n$" subscript identifies that it is a distribution of the $n$ variables $Y_1, Y_2, \ldots, Y_n$. The $\mathbf{x} \boldsymbol{\beta}$ term refers to the mean vector of the distribution, and the term $\sigma^2 \mathbf{I}$ refers to its covariance matrix (explained in detail below).
All assumptions in the classical regression model are embodied in the concise matrix form of the model: The correct functional specification assumption is embodied in the mean vector $(\mathbf{x} \boldsymbol{\beta})$ specification, the constant variance and independence assumptions are implied by specification of $\sigma^2 \mathbf{I}$ as covariance matrix, as will be described below, and the normality assumption is embodied in the multivariate normal specification.
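To make the matrix notation concrete, here is a minimal NumPy sketch that draws a response vector from the distribution $\mathrm{N}_n\left(\mathbf{x} \boldsymbol{\beta}, \sigma^2 \mathbf{I}\right)$; the values of $n$, $\boldsymbol{\beta}$, and $\sigma$ are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

n, sigma = 100, 2.0
beta = np.array([1.0, 0.5])  # (beta_0, beta_1), chosen purely for illustration

# Design matrix x: a non-random column of 1s for the intercept beta_0,
# plus one regressor column (here a realized draw, then held fixed).
x = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])

# Y | X = x ~ N_n(x beta, sigma^2 I): because the covariance matrix is
# sigma^2 I, the n components are independent with common variance sigma^2,
# so we can draw them as independent normals with mean vector x @ beta.
Y = x @ beta + rng.normal(0.0, sigma, n)
print(Y.shape)
```

The fact that one line (`x @ beta + noise`) suffices is exactly the point of the matrix form: the mean vector and covariance structure encode the whole model.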
A covariance matrix is a matrix that contains all the variances and covariances among a set of random variables. For example, if $\left(W_1, W_2, W_3\right)$ are jointly distributed random variables, then the covariance matrix of $\boldsymbol{W}=\left(W_1, W_2, W_3\right)$ is given by
$$\operatorname{Cov}(\boldsymbol{W})=\left[\begin{array}{ccc} \operatorname{Var}\left(W_1\right) & \operatorname{Cov}\left(W_1, W_2\right) & \operatorname{Cov}\left(W_1, W_3\right) \\ \operatorname{Cov}\left(W_2, W_1\right) & \operatorname{Var}\left(W_2\right) & \operatorname{Cov}\left(W_2, W_3\right) \\ \operatorname{Cov}\left(W_3, W_1\right) & \operatorname{Cov}\left(W_3, W_2\right) & \operatorname{Var}\left(W_3\right) \end{array}\right]$$
Notice that the row/column combination tells you which pair of variables are involved, or which variable is involved in the case of the diagonal elements. Note also that the covariance of a variable with itself is just the variance of that variable, which explains why the variances are on the diagonal of the covariance matrix.
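A small simulation (with illustrative coefficients) makes the layout of the covariance matrix visible: the estimated variances sit on the diagonal, and the matrix is symmetric because $\operatorname{Cov}(W_i, W_j) = \operatorname{Cov}(W_j, W_i)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three jointly distributed variables; W2 and W3 are built from W1 so that
# the off-diagonal covariances are nonzero. Coefficients are illustrative.
n = 100_000
W1 = rng.normal(0, 1, n)
W2 = 0.5 * W1 + rng.normal(0, 1, n)
W3 = -1.0 * W1 + rng.normal(0, 1, n)

# np.cov returns the 3x3 sample covariance matrix: Var(W_i) in position
# (i, i), and Cov(W_i, W_j) in row i, column j.
C = np.cov(np.vstack([W1, W2, W3]))
print(np.round(C, 2))
```

Under this construction the theoretical matrix has diagonal $(1,\ 1.25,\ 2)$ and off-diagonal entries $\operatorname{Cov}(W_1, W_2)=0.5$, $\operatorname{Cov}(W_1, W_3)=-1$, $\operatorname{Cov}(W_2, W_3)=-0.5$, which the sample estimates approximate.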

## Unbiasedness of the OLS Estimator $\hat{\beta}$ Under the Gauss-Markov Model

Strangely, it is easier to prove the unbiasedness of the estimators $\hat{\beta}$ using their matrix form. We will assume the Gauss-Markov model, thus the mathematical theorem is stated as follows: If the data are produced by the Gauss-Markov model, then the OLS $\hat{\boldsymbol{\beta}}$ is an unbiased estimator of $\boldsymbol{\beta}$. Since the Gauss-Markov model includes the (normality-assuming) classical model as a special case, the proof of unbiasedness of the estimators $\hat{\beta}$ in the Gauss-Markov model implies, a fortiori, unbiasedness of the estimators $\hat{\beta}$ in the classical model as well.
There are two parts to the unbiasedness argument. The first is that the estimates are unbiased, conditional on the observed values of the random $X$ variables. The second is to note that, by the law of total expectation, the estimates are also unbiased when considered over all possible samples of random $X$ data.
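The first (conditional) part of the argument can be checked by simulation: hold the realized design matrix $\mathbf{x}$ fixed, redraw $\boldsymbol{Y}$ from the Gauss-Markov model many times, and average the resulting OLS estimates $\hat{\boldsymbol{\beta}} = (\mathbf{x}'\mathbf{x})^{-1}\mathbf{x}'\boldsymbol{Y}$. The sketch below uses illustrative values of $n$, $\boldsymbol{\beta}$, and $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(2)

n, sigma = 50, 1.0
beta = np.array([2.0, -0.7])  # true coefficients (illustrative)

# One realized design matrix x; we condition on it throughout.
x = np.column_stack([np.ones(n), rng.uniform(0, 5, n)])

# (x'x)^{-1} x', computed once since x is held fixed across replications.
xtx_inv_xt = np.linalg.solve(x.T @ x, x.T)

# Repeatedly draw Y | X = x and compute the OLS estimate each time.
reps = 20_000
estimates = np.empty((reps, 2))
for r in range(reps):
    Y = x @ beta + rng.normal(0.0, sigma, n)
    estimates[r] = xtx_inv_xt @ Y

# Conditional unbiasedness: the average estimate is close to the true beta.
print(estimates.mean(axis=0))
```

The second part of the argument then follows without further simulation: since the conditional mean of $\hat{\boldsymbol{\beta}}$ equals $\boldsymbol{\beta}$ for *every* realized $\mathbf{x}$, the law of total expectation gives the same mean unconditionally.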

The $\mathbf{X}$ matrix has random and fixed elements; the first column of the ones, for example, contains fixed (non-random) elements. Also, if there are $X$ variables that are fixed in advance of observing the $Y$ data, as occurs for example in designed experiments and in some kinds of stratified sampling, then these $X$ variables are also fixed, not random.

