## Economics | Game Theory | ECOS3012

March 24, 2023


## Resilience and Interpretability

Multiple methods have been proposed in the literature to generate, as well as defend against, adversarial examples. Adversarial example generation methods include both white-box and black-box attacks on neural networks (Szegedy et al. 2013; Goodfellow et al. 2014; Papernot et al. 2015, 2017), targeting feedforward classification networks (Carlini and Wagner 2016), generative networks (Kos et al. 2017), and recurrent neural networks (Papernot et al. 2016). These methods use gradient-based optimization, starting from normal examples, to discover perturbations that lead to misprediction; the techniques differ in how they define the neighborhood in which the perturbation is permitted and in the loss function used to guide the search. For example, one of the earliest attacks (Goodfellow et al. 2014) used the fast gradient sign method (FGSM), which looks for a similar image $x^{\prime}$ in a "small" $L^{\infty}$ neighborhood of $x$. Given a loss function $\operatorname{Loss}(x, l)$ specifying the cost of classifying the point $x$ as label $l$, the adversarial example $x^{\prime}$ is calculated as
$$x^{\prime}=x+\epsilon \cdot \operatorname{sign}\left(\nabla_x \operatorname{Loss}\left(x, l_x\right)\right)$$
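To make the update concrete, here is a minimal NumPy sketch of FGSM against a toy logistic-regression "model". The weight vector `w`, the `loss_grad` helper, and all sizes are hypothetical stand-ins for a real network and its loss gradient, not anything from the chapter:

```python
import numpy as np

def loss_grad(x, label, w):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a toy logistic model p = sigmoid(w . x). FGSM only needs this
    input gradient; the weights w are assumed known to the attacker."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # predicted P(label = 1)
    return (p - label) * w                   # d Loss / d x

def fgsm(x, label, w, eps):
    """One-step FGSM: move every coordinate by eps in the sign of the
    loss gradient, i.e. x' = x + eps * sign(grad Loss(x, l_x))."""
    return x + eps * np.sign(loss_grad(x, label, w))

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # toy model parameters
x = rng.normal(size=8)          # toy "image" as a flat vector
x_adv = fgsm(x, label=1, w=w, eps=0.1)
```

By construction, the result stays inside the $L^{\infty}$ ball of radius $\epsilon$ around $x$, since each coordinate moves by exactly $\pm\epsilon$ (or $0$ where the gradient vanishes).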
FGSM was improved to the iterative gradient sign method (IGSM) in Kurakin et al. (2016) by using a finer iterative optimization strategy: the attack performs FGSM with a smaller step width $\alpha$ and clips the updated result so that the image stays within the $\epsilon$-neighborhood of $x$. In this approach, the $i$-th iteration computes the following:
$$x_{i+1}^{\prime}=\operatorname{clip}_{\epsilon, x}\left(x_i^{\prime}+\alpha \cdot \operatorname{sign}\left(\nabla_x \operatorname{Loss}\left(x, l_x\right)\right)\right)$$

In contrast to FGSM and IGSM, DeepFool (Moosavi-Dezfooli et al. 2016) attempts to find a perturbed image $x^{\prime}$ from a normal image $x$ by finding the closest decision boundary and crossing it. In practice, DeepFool relies on a local linearized approximation of the decision boundary. Another attack that has received a lot of attention is the Carlini attack, which seeks a perturbation that minimizes both the size of the change and a hinge loss on the logits (the pre-softmax classification result vector). The attack is generated by solving the following optimization problem:

$$\min_{\delta}\left[\|\delta\|_2+c \cdot \max \left(Z\left(x^{\prime}\right)_{l_x}-\max \left\{Z\left(x^{\prime}\right)_i: i \neq l_x\right\},-\kappa\right)\right]$$

where $Z$ denotes the logits, $l_x$ is the ground-truth label, $\kappa$ is the confidence (raising it forces the search towards larger perturbations), and $c$ is a hyperparameter that balances the perturbation size and the hinge loss. Another attack method is the projected gradient descent (PGD) method proposed in Madry et al. (2017). PGD attempts to solve the constrained optimization problem:

$$\max_{\left\|x^{\mathrm{adv}}-x\right\|_{\infty} \leq \epsilon} \operatorname{Loss}\left(x^{\mathrm{adv}}, l_x\right)$$
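The iterative variants differ from one-step FGSM mainly in the repeated small steps and the projection (clip) back into the $\epsilon$-ball after each step. A hedged sketch of PGD, reusing the same toy logistic loss as above; the model, parameter values, and function names are illustrative assumptions, not the chapter's implementation:

```python
import numpy as np

def pgd_attack(x, label, w, eps, alpha, steps):
    """PGD in the L-infinity ball of radius eps around x: each iteration
    takes a small FGSM step of size alpha, then projects (clips) the
    result back into [x - eps, x + eps] coordinate-wise."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.dot(w, x_adv)))  # sigmoid(w . x_adv)
        grad = (p - label) * w                       # loss gradient w.r.t. input
        x_adv = x_adv + alpha * np.sign(grad)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project into the eps ball
    return x_adv

rng = np.random.default_rng(1)
w = rng.normal(size=8)
x = rng.normal(size=8)
x_adv = pgd_attack(x, label=1, w=w, eps=0.1, alpha=0.02, steps=10)
```

For this toy model with `label=1`, each step lowers $w \cdot x^{\mathrm{adv}}$ and thus raises the loss, while the clip keeps the iterate feasible, which is exactly the constrained maximization PGD targets.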

## Manifold-based Defense

Our approach (Jha et al. 2018) relies only on identifying the manifold of typical data, which need not even be labeled; hence, the method is more practical in contexts where labeled training data is very difficult to obtain. Manifold-based explanations for adversarial attacks have also motivated defense mechanisms that try to exploit the manifold property of training data. Bhagoji et al. (2018) propose a defense that transforms the data to eliminate perturbations in non-principal components. Meng and Chen (2017) consider using autoencoders to project an input image to a low-dimensional encoding and then decode it back to remove perturbations. Xu et al. (2017) propose feature squeezing for images in the input space, which effectively forces the data to lie in a low-dimensional manifold. Song et al. (2017) also note that the distribution of log-likelihoods shows a considerable difference between perturbed adversarial images and the training data set, which can be used to detect adversarial attacks.

Our approach relies on computing the distance of a new sample point from the manifold of training data. Kernel density estimation can be used to measure the distance $d(x)$ of $x$ from the data manifold of the training set. Specifically, $d(x)=\frac{1}{|X|} \sum_{x_i \in X} k\left(x_i, x\right)$, where $X$ is the full data set and $k(\cdot, \cdot)$ is a kernel function. When using a Gaussian kernel, the bandwidth $\sigma$ needs to be carefully selected to avoid a spiky density estimate or an overly smooth one. A typically good choice of bandwidth is the value that maximizes the log-likelihood of the training data (Jones et al. 1996). Further, we can restrict the set of training points considered from the full set $X$ to the immediate neighbors of $x$. The neighborhood can be defined by a maximum distance or by a bound on the number of neighbors. In our experiments, we used the $L_{\infty}$ norm with a bound on the number of neighbors, which yielded good results.
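As a rough illustration of this detection idea, the following sketch computes the Gaussian-kernel density score against a training set and picks the bandwidth by leave-one-out log-likelihood. The function names and toy data are assumptions rather than the authors' implementation, and for simplicity it scores against the full set $X$ instead of a restricted neighbor subset:

```python
import numpy as np

def kernel_density(x, X, sigma):
    """Mean Gaussian-kernel score of point x against training set X
    (one point per row). A low score means x sits far from the region
    occupied by the training data, i.e. far from its manifold."""
    sq_dists = np.sum((X - x) ** 2, axis=1)
    return float(np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2))))

def select_bandwidth(X, candidates):
    """Pick the bandwidth maximizing the leave-one-out log-likelihood
    of the training data, following the criterion cited in the text."""
    best_sigma, best_ll = None, -np.inf
    for sigma in candidates:
        ll = 0.0
        for i in range(len(X)):
            others = np.delete(X, i, axis=0)      # leave point i out
            ll += np.log(kernel_density(X[i], others, sigma) + 1e-300)
        if ll > best_ll:
            best_sigma, best_ll = sigma, ll
    return best_sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                      # toy "training manifold"
sigma = select_bandwidth(X, [0.1, 0.3, 1.0, 3.0])
d_near = kernel_density(np.zeros(2), X, sigma)    # point on the manifold
d_far = kernel_density(10.0 * np.ones(2), X, sigma)  # point far off it
```

A detector would threshold this score: `d_near` comes out much larger than `d_far`, so points far from the manifold (such as heavily perturbed adversarial examples) can be flagged.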

It has been hypothesized in the literature (Bengio et al. 2013; Gardner et al. 2015) that the deeper layers of a deep neural network provide more linear and unwrapped manifolds than the input space. Thus, the task of identifying the manifold becomes easier as we progress from the input space through the more abstract feature spaces all the way to the logit space. But adversarial perturbations are harder to detect at higher levels and might get hidden by the lower layers of the neural network. In our experiments, we learned manifolds in both the input space and the logit space. We evaluated our approach on the MNIST dataset (LeCun 1998) and the CIFAR10 dataset (Krizhevsky et al. 2014).

As the norm bound in the PGD method for generating adversarial examples is increased, the distance of the adversarial examples from the manifold increases. While the attack's success rate against the neural network grows with a higher norm bound, it also becomes easier to detect the resulting adversarial examples. We observed this behavior to be common across the MNIST and CIFAR10 data sets, as illustrated in Figures 16.6 and 16.7. The distance from the manifold monotonically increases in the input space, but in the logit space, raising the norm bound beyond a threshold allows the attack to find examples that decrease the distance from the logit manifold even though they are farther from the input manifold.

