Convex Optimization: Optimality Condition with Directional Derivatives



Unconstrained convex optimization problems can be solved readily with gradient descent (a special case of steepest descent) or Newton's method, combined with a line search for a suitable step size; these can be shown mathematically to converge quickly, especially the latter method. [22] Convex problems with linear equality constraints can also be solved using KKT matrix techniques when the objective function is quadratic (this generalizes to a variant of Newton's method that works even when the initial point does not satisfy the constraints), but they can usually also be handled by eliminating the equality constraints via linear algebra or by solving the dual problem.
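To illustrate the unconstrained case, here is a minimal sketch of gradient descent with a backtracking (Armijo) line search on a convex quadratic; the function names, the instance ($A$, $b$), and the parameter defaults are illustrative assumptions, not taken from the text.

```python
import numpy as np

def minimize_gd(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4, tol=1e-8, max_iter=500):
    """Gradient descent with backtracking (Armijo) line search (illustrative sketch)."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stationarity = optimality for convex f
            break
        # Shrink the step until the sufficient-decrease condition
        # f(x - t g) <= f(x) - c t ||g||^2 holds.
        t = alpha0
        while f(x - t * g) > f(x) - c * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite => strictly convex
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_star = minimize_gd(f, grad, np.zeros(2))
# For this quadratic the unique minimizer solves A x = b.
```

For this instance the result can be checked against the linear-system solution `np.linalg.solve(A, b)`.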


Optimality Condition with Directional Derivatives

The purpose of this exercise is to express the necessary and sufficient condition for optimality of Prop. 3.1.4 in terms of the directional derivative of the cost function. Consider the minimization of a convex function $f: \Re^n \mapsto \Re$ over a convex set $X \subset \Re^n$. For any $x \in X$, the set of feasible directions of $X$ at $x$ is defined to be the convex cone
$$
D(x)=\{\alpha(\bar{x}-x) \mid \bar{x} \in X, \alpha>0\} .
$$
Show that a vector $x$ minimizes $f$ over $X$ if and only if $x \in X$ and
$$
f^{\prime}(x ; d) \geq 0, \quad \forall d \in D(x) .
$$
Note: In words, this condition says that $x$ is optimal if and only if there is no feasible descent direction of $f$ at $x$.

Solution: Let $\overline{D(x)}$ denote the closure of $D(x)$. By Prop. 3.1.4, $x$ minimizes $f$ over $X$ if and only if there exists $g \in \partial f(x)$ such that
$$
g^{\prime} d \geq 0, \quad \forall d \in D(x)
$$
which is equivalent to
$$
g^{\prime} d \geq 0, \quad \forall d \in \overline{D(x)}
$$
Thus, $x$ minimizes $f$ over $X$ if and only if
$$
\max_{g \in \partial f(x)} \min_{\|d\| \leq 1, \, d \in \overline{D(x)}} g^{\prime} d \geq 0 .
$$
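The condition can be spot-checked numerically on a smooth convex instance, where $f^{\prime}(x ; d)=\nabla f(x)^{\prime} d$. The instance below ($f(x)=\|x-(2,0)\|^2$ over $X=[0,1]^2$, with minimizer $(1,0)$) is an illustrative assumption, not from the text.

```python
import numpy as np

# f(x) = ||x - (2,0)||^2 is smooth and convex, so f'(x; d) = grad f(x) . d.
grad_f = lambda x: 2.0 * (x - np.array([2.0, 0.0]))

x_opt = np.array([1.0, 0.0])            # minimizer of f over X = [0,1]^2
rng = np.random.default_rng(0)

# At the minimizer: every sampled feasible direction d = x_bar - x_opt
# (x_bar drawn from X) satisfies grad f(x_opt) . d >= 0.
no_descent_at_opt = all(
    grad_f(x_opt) @ (rng.uniform(0.0, 1.0, size=2) - x_opt) >= -1e-12
    for _ in range(1000)
)

# At a non-optimal point of X there is a feasible descent direction.
x_bad = np.array([0.5, 0.5])
d_bad = np.array([1.0, 0.0]) - x_bad     # direction toward (1,0), feasible
descent_at_bad = grad_f(x_bad) @ d_bad < 0
```

Sampling directions of the form $\bar{x} - x$ with $\bar{x} \in X$ suffices here, since every $d \in D(x)$ is a positive multiple of such a vector and the sign of $\nabla f(x)^{\prime} d$ is scale-invariant.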

Subdifferential of an Extended Real-Valued Function

Extended real-valued convex functions arising in algorithmic practice are often of the form
$$
f(x)= \begin{cases}h(x) & \text { if } x \in X, \\ \infty & \text { if } x \notin X,\end{cases}
$$
where $h: \Re^n \mapsto \Re$ is a real-valued convex function and $X$ is a nonempty convex set. The purpose of this exercise is to show that the subdifferential of such functions admits a more favorable characterization compared to the case where $h$ is extended real-valued.
(a) Use Props. 3.1.3 and 3.1.4 to show that the subdifferential of such a function is nonempty for all $x \in X$, and has the form
$$
\partial f(x)=\partial h(x)+N_X(x), \quad \forall x \in X,
$$
where $N_X(x)$ is the normal cone of $X$ at $x \in X$. Note: If $h$ is convex but extended real-valued, this formula requires the assumption $\operatorname{ri}(\operatorname{dom}(h)) \cap \operatorname{ri}(X) \neq \varnothing$ or some polyhedral conditions on $h$ and $X$; see Prop. 5.4.6 of Appendix B.

Proof: By the subgradient inequality (3.1), we have $g \in \partial f(x)$ if and only if $x$ minimizes $p(z)=h(z)-g^{\prime} z$ over $z \in X$, or equivalently, some subgradient of $p$ at $x$ [i.e., a vector in $\partial h(x)-\{g\}$, by Prop. 3.1.3] belongs to $-N_X(x)$ (cf. Prop. 3.1.4).
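The formula can be checked on a simple one-dimensional instance (an illustrative assumption, not from the text): take $h(x)=x^2$ and $X=[1, \infty)$. At $x=1$ we have $\partial h(1)=\{2\}$ and $N_X(1)=(-\infty, 0]$, so the formula predicts $\partial f(1)=(-\infty, 2]$.

```python
import numpy as np

def is_subgradient(g, x=1.0):
    # g is a subgradient of f at x iff h(z) >= h(x) + g (z - x) for all z in X,
    # with h(z) = z^2 and X = [1, inf); we check on a grid of z values.
    zs = np.linspace(1.0, 10.0, 2001)
    return bool(np.all(zs**2 >= x**2 + g * (zs - x) - 1e-12))

# Slopes g <= 2 satisfy the subgradient inequality; slopes g > 2 fail,
# matching the predicted subdifferential (-inf, 2] = {2} + (-inf, 0].
inside = all(is_subgradient(g) for g in [-5.0, 0.0, 1.0, 2.0])
outside = not is_subgradient(2.5)
```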
(b) Let $f(x)=-\sqrt{x}$ if $x \geq 0$ and $f(x)=\infty$ if $x<0$. Verify that $f$ is a closed convex function that cannot be written in the form (3.35) and does not have a subgradient at $x=0$.
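Part (b) can also be checked numerically: any candidate $g \in \partial f(0)$ would have to satisfy $-\sqrt{z} \geq g z$ for all $z>0$, i.e. $g \leq -1/\sqrt{z}$, which fails near $z=0$ no matter how negative $g$ is. A sketch (the grid and candidate slopes are illustrative):

```python
import numpy as np

def violates(g):
    # g in the subdifferential at 0 would require -sqrt(z) >= g*z for all z > 0;
    # we look for a violation on a grid of z values approaching 0 from the right.
    zs = np.logspace(-15, 0, 100)
    return bool(np.any(-np.sqrt(zs) < g * zs))

# Every candidate slope, however negative, is violated near z = 0,
# so f(x) = -sqrt(x) has no subgradient at x = 0.
no_subgradient_at_zero = all(violates(g) for g in [-1.0, -100.0, -1e6])
```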
(c) Show the following formula for the subdifferential of the sum of functions $f_i$ that have the form (3.35) for some $h_i$ and $X_i$ :
$$
\partial\left(f_1+\cdots+f_m\right)(x)=\partial h_1(x)+\cdots+\partial h_m(x)+N_{X_1 \cap \cdots \cap X_m}(x),
$$
for all $x \in X_1 \cap \cdots \cap X_m$. Demonstrate by example that in this formula we cannot replace $N_{X_1 \cap \cdots \cap X_m}(x)$ by $N_{X_1}(x)+\cdots+N_{X_m}(x)$. Proof: Write $f_1+\cdots+f_m=h+\delta_X$, where $h=h_1+\cdots+h_m$ and $X=X_1 \cap \cdots \cap X_m$. For a counterexample, let $m=2$, and $X_1$ and $X_2$ be unit spheres in the plane with centers at $(-1,0)$ and $(1,0)$, respectively.
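To make the counterexample concrete (reading "unit spheres" as closed unit balls), note that the two balls touch at the single point $(0,0)$, so $X_1 \cap X_2=\{(0,0)\}$ and
$$
N_{X_1 \cap X_2}((0,0))=\Re^2,
$$
while the individual normal cones at $(0,0)$ are the outward normal rays
$$
N_{X_1}((0,0))=\{t(1,0) \mid t \geq 0\}, \quad N_{X_2}((0,0))=\{t(-1,0) \mid t \geq 0\},
$$
whose sum is only the horizontal axis $\{(s, 0) \mid s \in \Re\}$. Thus, for example, $(0,1)$ belongs to $N_{X_1 \cap X_2}((0,0))$ but not to $N_{X_1}((0,0))+N_{X_2}((0,0))$.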

