Machine Learning (COMP5318)

Doug I. Jones


Limiting Spectrum of the Null Model

As previously mentioned, it is convenient to start by investigating the null model inner-product kernel matrix $\mathbf{K}=\mathbf{K}_N$ with
$$[\mathbf{K}]_{ij}= \begin{cases}f\left(\mathbf{z}_i^{\top} \mathbf{z}_j / \sqrt{p}\right) / \sqrt{p} & \text { for } i \neq j \\ 0 & \text { for } i=j\end{cases}$$
for i.i.d. $\mathbf{z}_i \sim \mathcal{N}\left(\mathbf{0}, \mathbf{I}_p\right)$.${}^{10}$ We are, as usual, interested in the associated resolvent
$$\mathbf{Q}(z) \equiv\left(\mathbf{K}-z \mathbf{I}_n\right)^{-1} \in \mathbb{R}^{n \times n} .$$
Following the Marčenko–Pastur and Bai–Silverstein approaches (in Theorems 2.4 and 2.6, respectively), we first remove the $i$th row and the $i$th column of the symmetric matrix $\mathbf{K}$ to decompose it, up to permutation, as
\begin{aligned} \mathbf{K} & =\left[\begin{array}{cc} \mathbf{K}_{-i} & f\left(\mathbf{Z}_{-i}^{\top} \mathbf{z}_i / \sqrt{p}\right) / \sqrt{p} \\ f\left(\mathbf{z}_i^{\top} \mathbf{Z}_{-i} / \sqrt{p}\right) / \sqrt{p} & 0 \end{array}\right] \\ \text { with } \mathbf{K}_{-i} & \equiv f\left(\mathbf{Z}_{-i}^{\top} \mathbf{Z}_{-i} / \sqrt{p}\right) / \sqrt{p}-\operatorname{diag}(\cdot) \in \mathbb{R}^{(n-1) \times(n-1)}, \end{aligned}
(i.e., with zeros on the diagonal of $\mathbf{K}_{-i}$), where $\mathbf{Z}_{-i} \in \mathbb{R}^{p \times(n-1)}$ is the Gaussian matrix $\mathbf{Z}$ without the $i$th column $\mathbf{z}_i$. As such, $\mathbf{K}_{-i}$ is (i) independent of $\mathbf{z}_i$ and (ii) asymptotically close to $\mathbf{K}$ for $n$ large. We similarly define the resolvent of $\mathbf{K}_{-i}$ as
$$\mathbf{Q}_{-i} \equiv\left(\mathbf{K}_{-i}-z \mathbf{I}_{n-1}\right)^{-1} \in \mathbb{R}^{(n-1) \times(n-1)} .$$
With Lemma 2.6, the $(i, i)$th diagonal entry of $\mathbf{Q}$ is given by
$$[\mathbf{Q}]_{ii}=\frac{1}{-z-\frac{1}{p} f\left(\mathbf{z}_i^{\top} \mathbf{Z}_{-i} / \sqrt{p}\right) \mathbf{Q}_{-i} f\left(\mathbf{Z}_{-i}^{\top} \mathbf{z}_i / \sqrt{p}\right)} \equiv \frac{1}{-z-\delta_i},$$
where we recall that the diagonals of both $\mathbf{K}$ and $\mathbf{K}_{-i}$ are zero. To evaluate the Stieltjes transform $m_n(z)=\frac{1}{n} \operatorname{tr} \mathbf{Q}(z)=\frac{1}{n} \sum_{i=1}^n [\mathbf{Q}]_{ii}(z)$ of the spectral measure of $\mathbf{K}$, the key object is thus the (nonlinear) quadratic form
$$\delta_i \equiv \frac{1}{p} f\left(\mathbf{z}_i^{\top} \mathbf{Z}_{-i} / \sqrt{p}\right) \mathbf{Q}_{-i} f\left(\mathbf{Z}_{-i}^{\top} \mathbf{z}_i / \sqrt{p}\right) .$$
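The leave-one-out identity $[\mathbf{Q}]_{ii} = 1/(-z-\delta_i)$ is an exact Schur-complement relation in finite dimension, so it can be checked numerically. The following sketch (the choices $f = \tanh$, $z = -5$ and the dimensions are illustrative assumptions, not from the text) builds the null-model kernel matrix and verifies the identity for $i = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 400, 200
f = np.tanh  # an illustrative choice of nonlinearity

# Null-model kernel: K_ij = f(z_i^T z_j / sqrt(p)) / sqrt(p), zero diagonal
Z = rng.standard_normal((p, n))
K = f(Z.T @ Z / np.sqrt(p)) / np.sqrt(p)
np.fill_diagonal(K, 0.0)

z = -5.0  # a point away from the spectrum of K
Q = np.linalg.inv(K - z * np.eye(n))

# Leave-one-out quantities for i = 0
i = 0
idx = np.arange(1, n)
K_mi = K[np.ix_(idx, idx)]                       # K with row/column i removed
Q_mi = np.linalg.inv(K_mi - z * np.eye(n - 1))   # resolvent of K_{-i}
k_i = K[idx, i]                                  # = f(Z_{-i}^T z_i / sqrt(p)) / sqrt(p)
delta_i = k_i @ Q_mi @ k_i                       # the 1/p factor is absorbed by the 1/sqrt(p)'s

print(Q[i, i], 1.0 / (-z - delta_i))  # the two values coincide
```

Since the identity is algebraic (not asymptotic), the two printed values agree up to floating-point error for any admissible $z$.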

Properly Scaling Random Kernel Matrices

Having covered the analysis of the (pure-noise or null model) kernel matrix $\mathbf{K}_N$, we present in this section the "information-plus-noise" random (asymptotic) equivalent for the kernel matrix $\mathbf{K}$, again under the nontrivial classification assumptions on the $k$-class mixture model defined in (4.20) (as for the $\alpha$-$\beta$ kernel studied in Section 4.2.4). The main idea for this "information-plus-noise" decomposition comes in two steps: (i) first, by an expansion of $\mathbf{x}_i^{\top} \mathbf{x}_j$ as a function of $\mathbf{z}_i, \mathbf{z}_j$ and the statistical mixture model parameters $\left\{\boldsymbol{\mu}_a, \mathbf{E}_a\right\}_{a=1}^k$, the inner products $\mathbf{x}_i^{\top} \mathbf{x}_j$ are developed into successive orders of magnitude with respect to $p$; this further allows for a Taylor expansion of $f\left(\mathbf{x}_i^{\top} \mathbf{x}_j / \sqrt{p}\right)$, for at least twice differentiable functions $f$, around its dominant term $f\left(\mathbf{z}_i^{\top} \mathbf{z}_j / \sqrt{p}\right)$. Then, (ii) relying on the orthogonal polynomial approach of the previous section, one may "linearize" the resulting matrix terms $\left\{f\left(\mathbf{x}_i^{\top} \mathbf{x}_j / \sqrt{p}\right)\right\}$, $\left\{f^{\prime}\left(\mathbf{x}_i^{\top} \mathbf{x}_j / \sqrt{p}\right)\right\}$ and $\left\{f^{\prime \prime}\left(\mathbf{x}_i^{\top} \mathbf{x}_j / \sqrt{p}\right)\right\}$ (all terms corresponding to higher-order derivatives asymptotically vanish) and use Assumption 2 to extend the result to all square-summable $f$. The precise derivations may be found in Liao and Couillet [2019a].
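The first step of this decomposition is easy to visualize numerically: the informative part of $\mathbf{x}_i^{\top}\mathbf{x}_j/\sqrt{p}$ is an order-$1/\sqrt{p}$ perturbation of the dominant noise term $\mathbf{z}_i^{\top}\mathbf{z}_j/\sqrt{p}$. In the sketch below, the two unit-norm means `mu_a`, `mu_b` and the identity-covariance setting ($\mathbf{E}_a = \mathbf{0}$) are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 2000

# Hypothetical two-class means with O_{||.||}(1) norms (illustrative assumption)
mu_a = np.zeros(p); mu_a[0] = 1.0
mu_b = np.zeros(p); mu_b[1] = 1.0

z_i, z_j = rng.standard_normal(p), rng.standard_normal(p)
x_i, x_j = mu_a + z_i, mu_b + z_j  # identity covariance: x = mu + z

main = z_i @ z_j / np.sqrt(p)                # dominant noise term, O(1) fluctuations
rest = (x_i @ x_j - z_i @ z_j) / np.sqrt(p)  # informative terms, O(1/sqrt(p))
print(main, rest)
```

Here `rest` collects exactly $(\boldsymbol{\mu}_a^{\top}\boldsymbol{\mu}_b + \boldsymbol{\mu}_a^{\top}\mathbf{z}_j + \boldsymbol{\mu}_b^{\top}\mathbf{z}_i)/\sqrt{p}$, each summand being an order of magnitude smaller than the noise term, which is what justifies the Taylor expansion of $f$ around $\mathbf{z}_i^{\top}\mathbf{z}_j/\sqrt{p}$.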

The main conclusion is that the kernel matrix $\mathbf{K}$ asymptotically behaves like the sum $\tilde{\mathbf{K}}=\mathbf{K}_N+\tilde{\mathbf{K}}_I$ of the full-rank "noise" matrix $\mathbf{K}_N$ (characterized in Theorems 4.4 and 4.5) and a low-rank "information" matrix $\tilde{\mathbf{K}}_I$. This is stated in the following theorem.

Theorem 4.6 (Random equivalent for properly scaling kernel, Liao and Couillet [2019a]). Let Assumption 2 hold and $\mathbf{K} \in \mathbb{R}^{n \times n}$ be the properly scaling kernel defined in (4.19) with $\mathbf{x}_i=\boldsymbol{\mu}_a+\left(\mathbf{I}_p+\mathbf{E}_a\right)^{\frac{1}{2}} \mathbf{z}_i$ for $\mathbf{x}_i \in \mathcal{C}_a$, for $\mathbf{z}_i$ having i.i.d. zero-mean, unit-variance and subexponential entries, and satisfying the following growth rate conditions
$$\mathbf{M}=O_{\|\cdot\|}(1), \quad \mathbf{t}=\frac{1}{\sqrt{p}}\left\{\operatorname{tr} \mathbf{E}_a\right\}_{a=1}^k=O_{\|\cdot\|}(1), \quad \mathbf{S}^{\circ}=\frac{1}{\sqrt{p}}\left\{\operatorname{tr} \mathbf{E}_a \mathbf{E}_b\right\}_{a, b=1}^k=O_{\|\cdot\|}(1) .$$
Then, as $n, p \rightarrow \infty$ with $p / n \rightarrow c \in(0, \infty)$,
$$|\mathbf{K}-\tilde{\mathbf{K}}| \stackrel{\text { a.s. }}{\longrightarrow} 0, \quad \tilde{\mathbf{K}}=\mathbf{K}_N+\mathbf{V A V}^{\top}$$
with $\mathbf{K}_N$ defined in (4.21) and
$$\mathbf{A}=\left[\begin{array}{cc} a_1 \cdot \mathbf{M}^{\top} \mathbf{M}+\frac{a_2}{\sqrt{2}} \cdot\left(\mathbf{t} \mathbf{1}_k^{\top}+\mathbf{1}_k \mathbf{t}^{\top}+\mathbf{S}^{\circ}\right) & a_1 \mathbf{I}_k \\ a_1 \mathbf{I}_k & \mathbf{0} \end{array}\right]$$
where we recall that $a_1$ and $a_2$ are the first two Hermite coefficients $a_1=\mathbb{E}[\xi f(\xi)]$ and $a_2=\mathbb{E}\left[\left(\xi^2-1\right) f(\xi)\right] / \sqrt{2}$ for $\xi \sim \mathcal{N}(0,1)$, as defined in (4.23).
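The Hermite coefficients $a_1$ and $a_2$ are plain Gaussian moments, so they are straightforward to estimate by Monte Carlo. The sketch below (the sanity-check functions $f(x)=x$ and $f(x)=x^2$ are illustrative choices, not from the text) recovers the expected values $a_1 = 1, a_2 = 0$ for the linear kernel and $a_1 = 0, a_2 = \sqrt{2}$ for the purely quadratic one:

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.standard_normal(10**6)  # samples of xi ~ N(0, 1)

def hermite_a1_a2(f):
    """Monte Carlo estimates of a1 = E[xi f(xi)] and a2 = E[(xi^2 - 1) f(xi)] / sqrt(2)."""
    return (xi * f(xi)).mean(), ((xi**2 - 1) * f(xi)).mean() / np.sqrt(2)

print(hermite_a1_a2(lambda x: x))     # ≈ (1.0, 0.0): a purely "linear" kernel
print(hermite_a1_a2(lambda x: x**2))  # ≈ (0.0, sqrt(2)): no linear component
```

In particular, a kernel function with $a_1 = 0$ carries no first-order (mean-direction) information in the matrix $\mathbf{A}$ above, since $a_1$ multiplies both $\mathbf{M}^{\top}\mathbf{M}$ and the off-diagonal blocks.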

