## Convex Optimization ELEN90026

March 31, 2023


## MAP with perfect linear measurements

Suppose $x \in \mathbf{R}^n$ is a vector of parameters to be estimated, with prior density $p_x$. We have $m$ perfect (noise free, deterministic) linear measurements, given by $y=A x$. In other words, the conditional distribution of $y$, given $x$, is a point mass with value one at the point $A x$. The MAP estimate can be found by solving the problem
$$\begin{array}{ll} \text{maximize} & \log p_x(x) \\ \text{subject to} & A x=y \end{array}$$
If $p_x$ is log-concave, this is a convex problem.
If under the prior distribution, the parameters $x_i$ are IID with density $p$ on $\mathbf{R}$, then the MAP estimation problem has the form
$$\begin{array}{ll} \text{maximize} & \sum_{i=1}^n \log p\left(x_i\right) \\ \text{subject to} & A x=y \end{array}$$
which is a least-penalty problem ((6.6), page 304), with penalty function $\phi(u)=-\log p(u)$.
Conversely, we can interpret any least-penalty problem,
$$\begin{array}{ll} \text{minimize} & \phi\left(x_1\right)+\cdots+\phi\left(x_n\right) \\ \text{subject to} & A x=b \end{array}$$
as a MAP estimation problem, with $m$ perfect linear measurements (i.e., $A x=b)$ and $x_i$ IID with density
$$p(z)=\frac{e^{-\phi(z)}}{\int e^{-\phi(u)} d u}$$
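As a concrete special case (an illustration added here, not part of the original text): for a standard Gaussian prior $p(z) \propto e^{-z^2/2}$, the penalty is $\phi(u) = u^2/2$, so the MAP/least-penalty problem becomes the minimum Euclidean norm solution of $Ax=y$, which has a closed form when $A$ has full row rank. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# m perfect linear measurements of an n-vector (m < n): y = A x_true.
m, n = 3, 6
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true

# Gaussian prior p(u) ∝ exp(-u^2/2) gives penalty phi(u) = u^2/2, so the
# MAP problem is: minimize ||x||_2^2 subject to A x = y.  With A full row
# rank, the solution is the minimum-norm solution x = A^T (A A^T)^{-1} y.
x_map = A.T @ np.linalg.solve(A @ A.T, y)

print(np.allclose(A @ x_map, y))  # the perfect measurements are matched
print(np.linalg.norm(x_map) <= np.linalg.norm(x_true) + 1e-9)  # smallest norm
```

Any feasible point (including `x_true`) has norm at least that of `x_map`, which is why the second check always passes.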

## Prior information

Here $X$ is a discrete random variable taking values in $\{\alpha_1, \ldots, \alpha_n\} \subseteq \mathbf{R}$, with distribution $p \in \mathbf{R}^n$ given by $p_i = \operatorname{prob}(X=\alpha_i)$. Many types of prior information about $p$ can be expressed in terms of linear equality constraints or inequalities. If $f: \mathbf{R} \rightarrow \mathbf{R}$ is any function, then
$$\mathbf{E} f(X)=\sum_{i=1}^n p_i f\left(\alpha_i\right)$$
is a linear function of $p$. As a special case, if $C \subseteq \mathbf{R}$, then $\operatorname{prob}(X \in C)$ is a linear function of $p$ :
$$\operatorname{prob}(X \in C)=c^T p, \quad c_i= \begin{cases}1 & \alpha_i \in C \\ 0 & \alpha_i \notin C\end{cases}$$
It follows that known expected values of certain functions (e.g., moments) or known probabilities of certain sets can be incorporated as linear equality constraints on $p \in \mathbf{R}^n$. Inequalities on expected values or probabilities can be expressed as linear inequalities on $p \in \mathbf{R}^n$.

For example, suppose we know that $X$ has mean $\mathbf{E} X=\alpha$, second moment $\mathbf{E} X^2=\beta$, and $\operatorname{prob}(X \geq 0) \leq 0.3$. This prior information can be expressed as
$$\mathbf{E} X=\sum_{i=1}^n \alpha_i p_i=\alpha, \quad \mathbf{E} X^2=\sum_{i=1}^n \alpha_i^2 p_i=\beta, \quad \sum_{\alpha_i \geq 0} p_i \leq 0.3,$$
which are two linear equalities and one linear inequality in $p$.
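These constraints are easy to assemble numerically. The sketch below uses a hypothetical grid of outcomes $\alpha_i$ and a candidate distribution $p$ chosen by hand (the names `F`, `g`, `c` are introduced here for illustration); the equalities are stacked as $Fp=g$ and the probability bound as $c^Tp \le 0.3$:

```python
import numpy as np

# Hypothetical grid of outcomes alpha_i and a candidate distribution p.
alpha = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
p = np.array([0.25, 0.25, 0.20, 0.10, 0.10, 0.05, 0.05])

# Equality constraints E X = alpha_bar and E X^2 = beta, stacked as F p = g.
F = np.vstack([alpha, alpha**2])
g = F @ p                      # here (alpha_bar, beta) = (-1.1, 4.2)

# Inequality constraint prob(X >= 0) <= 0.3, written as c^T p <= 0.3.
c = (alpha >= 0).astype(float)

# Check that p satisfies all the prior information and is a distribution.
print(np.allclose(F @ p, g))             # both moment equalities hold
print(bool(c @ p <= 0.3 + 1e-12))        # prob(X >= 0) = 0.30
print(bool(np.isclose(p.sum(), 1.0) and (p >= 0).all()))
```

In an estimation problem these rows would simply be appended to the equality and inequality constraints on $p$, together with $p \succeq 0$ and $\mathbf{1}^T p = 1$.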
We can also include some prior constraints that involve nonlinear functions of $p$. As an example, the variance of $X$ is given by
$$\operatorname{var}(X)=\mathbf{E} X^2-(\mathbf{E} X)^2=\sum_{i=1}^n \alpha_i^2 p_i-\left(\sum_{i=1}^n \alpha_i p_i\right)^2 .$$
The first term is a linear function of $p$ and the second term is concave quadratic in $p$, so the variance of $X$ is a concave function of $p$. It follows that a lower bound on the variance of $X$ can be expressed as a convex quadratic inequality on $p$.
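The concavity claim can be spot-checked numerically (a sketch with a hypothetical $\alpha$ grid): for any two distributions $p_1, p_2$, concavity implies $\operatorname{var}$ evaluated at the midpoint is at least the average of the endpoint variances.

```python
import numpy as np

# var(X) = sum_i a_i^2 p_i - (sum_i a_i p_i)^2, as a function of p (alpha fixed).
alpha = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

def var(p):
    return (alpha**2) @ p - (alpha @ p) ** 2

# Midpoint-concavity check on random points of the probability simplex:
# var((p1 + p2)/2) >= (var(p1) + var(p2))/2 for all p1, p2.
rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    p1 = rng.dirichlet(np.ones(alpha.size))
    p2 = rng.dirichlet(np.ones(alpha.size))
    ok &= bool(var(0.5 * (p1 + p2)) >= 0.5 * (var(p1) + var(p2)) - 1e-12)
print(ok)
```

The inequality holds exactly because the Hessian of $p \mapsto -(\alpha^T p)^2$ is $-2\alpha\alpha^T \preceq 0$ and the first term is linear.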
As another example, suppose $A$ and $B$ are subsets of $\mathbf{R}$, and consider the conditional probability of $A$ given $B$ :
$$\operatorname{prob}(X \in A \mid X \in B)=\frac{\operatorname{prob}(X \in A \cap B)}{\operatorname{prob}(X \in B)},$$
which is a linear-fractional function of $p$. A bound on this conditional probability can therefore be expressed as a linear inequality on $p$, by multiplying through by the (nonnegative) denominator.

