SUBGRADIENTS OF CONVEX REAL-VALUED FUNCTIONS

Doug I. Jones




Given a proper convex function $f: \Re^n \mapsto(-\infty, \infty)$, we say that a vector $g \in \Re^n$ is a subgradient of $f$ at a point $x \in \operatorname{dom}(f)$ if
$$
f(z) \geq f(x)+g^{\prime}(z-x), \quad \forall z \in \Re^n; \tag{3.1}
$$
see Fig. 3.1.1. The set of all subgradients of $f$ at $x \in \Re^n$ is called the subdifferential of $f$ at $x$, and is denoted by $\partial f(x)$. For $x \notin \operatorname{dom}(f)$ we use the convention $\partial f(x)=\varnothing$. Figure 3.1.2 provides some examples of subdifferentials. Note that $\partial f(x)$ is a closed convex set, since based on Eq. $(3.1)$, it is the intersection of a collection of closed halfspaces (one for each $\left.z \in \Re^n\right)$.
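The defining inequality can be spot-checked numerically. The sketch below (not from the text; the function and sample points are illustrative) tests the subgradient inequality for $f(x)=|x|$ at $x=0$, where the subdifferential is the interval $[-1,1]$:

```python
import numpy as np

def is_subgradient(f, g, x, zs):
    """Spot-check the subgradient inequality f(z) >= f(x) + g'(z - x)
    at the sample points zs (a necessary-condition check, not a proof)."""
    fx = f(x)
    return all(f(z) >= fx + np.dot(g, z - x) - 1e-12 for z in zs)

# f(x) = |x| in one dimension: every g in [-1, 1] is a subgradient at x = 0.
f = lambda x: abs(float(x))
zs = np.linspace(-5.0, 5.0, 101)

print(is_subgradient(f, 0.5, 0.0, zs))   # g = 0.5 lies in [-1, 1]: True
print(is_subgradient(f, 1.5, 0.0, zs))   # g = 1.5 does not: False
```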

It is generally true that $\partial f(x)$ is nonempty for all $x \in \operatorname{ri}(\operatorname{dom}(f))$, the relative interior of the domain of $f$, but it is possible that $\partial f(x)=\varnothing$ at some points in the relative boundary of $\operatorname{dom}(f)$. The properties of subgradients of extended real-valued functions are summarized in Section 5.4 of Appendix B. When $f$ is real-valued, however, stronger results can be shown: $\partial f(x)$ is not only closed and convex, but also nonempty and compact for all $x \in \Re^n$. Moreover the proofs of this and other related results are generally simpler than for the extended real-valued case. For this reason, we will provide an independent development of the results that we need for the case where $f$ is real-valued (which is the primary case of interest in algorithms).
To this end, we recall the definition of the directional derivative of $f$ at a point $x$ in a direction $d$ :
$$
f^{\prime}(x ; d)=\lim _{\alpha \downarrow 0} \frac{f(x+\alpha d)-f(x)}{\alpha}
$$
(cf. Section 5.4.4 of Appendix B). The difference quotient on the right-hand side is monotonically nonincreasing as $\alpha \downarrow 0$, and converges to $f^{\prime}(x ; d)$, as shown in Section 5.4.4 of Appendix B; also see Fig. 3.1.3.
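The monotone behavior of the difference quotient is easy to observe numerically. A minimal sketch (the function $f(x)=\max(x, x^2)$ and the step sizes are illustrative, not from the text):

```python
# Difference quotient (f(x + a*d) - f(x)) / a for the convex function
# f(x) = max(x, x**2) at x = 1 in direction d = 1. For a > 0 the quotient
# equals 2 + a, so it decreases monotonically to f'(1; 1) = 2 as a -> 0.
f = lambda x: max(x, x * x)
x, d = 1.0, 1.0
alphas = [2.0 ** (-k) for k in range(10)]
quotients = [(f(x + a * d) - f(x)) / a for a in alphas]

# Each quotient is >= the next one (nonincreasing as alpha decreases).
assert all(q1 >= q2 - 1e-12 for q1, q2 in zip(quotients, quotients[1:]))
print(quotients[-1])  # close to the directional derivative f'(1; 1) = 2
```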
Our first result shows some basic properties, and provides the connection between $\partial f(x)$ and $f^{\prime}(x ; d)$ for real-valued $f$. A related and more refined result is given in Prop. 5.4.8 in Appendix B for extended real-valued $f$. Its proof, however, is more intricate and includes some conditions that are unnecessary for the case where $f$ is real-valued.

Characterization of the Subdifferential

The characterization and computation of $\partial f(x)$ may not be convenient in general. It is, however, possible in some special cases. Principal among these is when
$$
f(x)=\sup_{z \in Z} \phi(x, z), \tag{3.9}
$$
where $x \in \Re^n$, $z \in \Re^m$, $\phi: \Re^n \times \Re^m \mapsto \Re$ is a function, $Z$ is a compact subset of $\Re^m$, $\phi(\cdot, z)$ is convex and differentiable for each $z \in Z$, and $\nabla_x \phi(x, \cdot)$ is continuous on $Z$ for each $x$. Then the form of $\partial f(x)$ is given by Danskin's Theorem [Dan67], which states that
$$
\partial f(x)=\operatorname{conv}\left\{\nabla_x \phi(x, z) \mid z \in Z(x)\right\}, \quad x \in \Re^n, \tag{3.10}
$$
where $Z(x)$ is the set of maximizing points in Eq. (3.9),
$$
Z(x)=\left\{\bar{z} \;\Big|\; \phi(x, \bar{z})=\max_{z \in Z} \phi(x, z)\right\}.
$$
The proof is somewhat long, so it is relegated to the exercises.
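Danskin's formula can be illustrated with a small numerical sketch (not part of the text; the choice $\phi(x, z)=z x$ with $Z=[-1,1]$, so that $f(x)=|x|$ and $\nabla_x \phi(x, z)=z$, is for illustration only):

```python
import numpy as np

# Danskin's theorem sketch for phi(x, z) = z * x with Z = [-1, 1]:
# f(x) = sup_{z in Z} z * x = |x|, and grad_x phi(x, z) = z, so
# the subdifferential of f at x is conv{ z : z in Z(x) }.
def Z_of_x(x, Z, tol=1e-9):
    vals = Z * x                        # phi(x, z) for each sampled z
    return Z[vals >= vals.max() - tol]  # the maximizing points Z(x)

Z = np.linspace(-1.0, 1.0, 201)         # dense sample of the compact set Z

# x > 0: unique maximizer z = 1, so f is differentiable with gradient 1.
print(Z_of_x(2.0, Z))                   # -> [1.]
# x = 0: every z in Z maximizes, so the subdifferential is conv(Z) = [-1, 1].
zx = Z_of_x(0.0, Z)
print(zx.min(), zx.max())               # -> -1.0 1.0
```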
An important special case of Eq. (3.10) is when $Z$ is a finite set, so $f$ is the maximum of $m$ differentiable convex functions $\phi_1, \ldots, \phi_m$ :
$$
f(x)=\max \left\{\phi_1(x), \ldots, \phi_m(x)\right\}, \quad x \in \Re^n .
$$
Then we have
$$
\partial f(x)=\operatorname{conv}\left\{\nabla \phi_i(x) \mid i \in I(x)\right\},
$$
where $I(x)$ is the set of indexes $i$ for which the maximum is attained, i.e., $\phi_i(x)=f(x)$. Another important special case is when $\phi(\cdot, z)$ is differentiable for all $z \in Z$, and the supremum in Eq. (3.9) is attained at a unique point, so $Z(x)$ consists of a single point $z(x)$. Then $f$ is differentiable at $x$ and
$$
\nabla f(x)=\nabla \phi(x, z(x))
$$
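Both special cases can be checked on a small example. The sketch below (the functions $\phi_1(x)=x^2$ and $\phi_2(x)=-x$ are illustrative choices, not from the text) computes the interval $\operatorname{conv}\{\nabla \phi_i(x) \mid i \in I(x)\}$ for $f(x)=\max\{x^2, -x\}$; at the kink $x=-1$ both functions are active, while at $x=1$ the maximum is attained uniquely and $f$ is differentiable:

```python
# Finite-max case: f(x) = max(phi_1(x), phi_2(x)) with phi_1(x) = x^2
# and phi_2(x) = -x, both convex and differentiable. The graphs cross
# at x = -1, where I(-1) = {1, 2} and the subdifferential is the
# interval conv{phi_1'(-1), phi_2'(-1)} = [-2, -1].
phis  = [lambda x: x * x, lambda x: -x]
grads = [lambda x: 2.0 * x, lambda x: -1.0]

def subdiff(x, tol=1e-9):
    vals = [phi(x) for phi in phis]
    fmax = max(vals)
    active = [g(x) for g, v in zip(grads, vals) if v >= fmax - tol]
    # conv hull of the active gradients: an interval [min, max] in 1-D
    return min(active), max(active)

print(subdiff(1.0))    # unique maximizer phi_1: f is differentiable, f'(1) = 2
print(subdiff(-1.0))   # both active: the interval [-2, -1]
```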

