
NIPS 2018 Adversarial Vision Challenge Results Announced: CMU Eric Xing's Team Wins Two Championships



From Medium

Author: Wieland Brendel

Compiled by Heart of the Machine

Contributors: Zhang Chang, Wang Shuting

Today, the results of the NIPS 2018 Adversarial Vision Challenge were announced. The competition was divided into three tracks: defense, untargeted attack, and targeted attack. The Petuum-CMU team led by Eric Xing won two championships, the remaining championship was taken by the LIVIA team from Canada, and Tsinghua's TSAIL team placed second in the untargeted attack track. This article outlines the methods used by these teams; further details will be revealed at the NIPS Competition Workshop on December 7, 9:15-10:30.

NIPS 2018 Adversarial Vision Challenge address: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-robust-model-track

Today, the NIPS Adversarial Vision Challenge 2018 released its results, with participating teams submitting more than 3,000 models and attack methods. This year's competition focused on real-world scenarios in which the attacker has only limited access to the model (up to 1,000 queries per sample), and the model returns only its final decision rather than gradients or confidence scores. This setup simulates the typical threat scenario faced by deployed machine learning systems, and is likely to drive the development of effective decision-based attack methods and of more robust models.
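To make this threat model concrete, the sketch below shows a minimal label-only oracle of the kind the challenge exposes to attackers: at most 1,000 queries per sample, and only the predicted class is returned, never gradients or scores. The wrapper and its names (`DecisionOnlyOracle`, `predict_label`) are illustrative assumptions, not part of the challenge code.

```python
import torch


class DecisionOnlyOracle:
    """Label-only access to a deployed classifier, with a per-sample query budget."""

    def __init__(self, model, max_queries=1000):
        self.model = model.eval()
        self.max_queries = max_queries
        self.queries = 0

    @torch.no_grad()
    def predict_label(self, x):
        # Only the final decision is returned -- no gradients, no confidence scores.
        if self.queries >= self.max_queries:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        logits = self.model(x.unsqueeze(0))
        return int(logits.argmax(dim=1).item())
```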

The challenge was hosted on the CrowdAI platform.

All winning entries performed at least an order of magnitude better than the standard baselines (such as the standard baseline model or the boundary attack), in terms of median L2 distance. We asked the top three teams in each track (defense, untargeted attack, targeted attack) to outline their approaches. The winners will present their methods at the NIPS Competition Workshop on December 7, 9:15-10:30.

A common theme among the winners of the attack tracks was a low-frequency version of the boundary attack, combined with ensembles of different defenses used as surrogate models. In the model (defense) track, the winners relied on new robust-model approaches (details may not be known until the workshop) and on adversarial training with a new gradient-based iterative L2 attack. In the coming weeks we will publish a follow-up post with more details on the results, including visualizations of adversarial examples generated against the defense models, as well as more on the winning teams.

Defense

First place: Team Petuum-CMU (codename "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia

To learn deep networks that are robust to adversarial examples, the authors analyzed the generalization performance of robust models on adversarial examples. Based on this analysis, they propose a new formulation for training robust models that comes with generalization and robustness guarantees.
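The article does not give the authors' objective, so the following is only a generic sketch of a robustness-regularized training loss of the kind described: clean cross-entropy plus a penalty on how much predictions change under a small adversarial perturbation. The one-step inner attack and the weight `beta` are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F


def robust_loss(model, x, y, eps=8 / 255, beta=1.0):
    # Standard classification loss on clean inputs.
    clean_logits = model(x)
    natural = F.cross_entropy(clean_logits, y)

    # Cheap one-step perturbation (FGSM-style), standing in for a stronger inner attack.
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # Robustness term: penalize prediction changes under the perturbation.
    robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(clean_logits, dim=1),
                      reduction="batchmean")
    return natural + beta * robust
```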

Second place: Wilson team (no description received from the team yet)

Third place: LIVIA team (codename "RUM R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, from École de technologie supérieure (ÉTS Montreal), Quebec, Canada

The authors trained a robust model using a new iterative attack, Decoupling Direction and Norm (DDN), which is fast enough to be used during training. At each training step, the authors find an adversarial example close to the decision boundary (using DDN) and minimize the cross-entropy loss on that example. The model architecture is unchanged, and inference time is unaffected.
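A minimal sketch of this training scheme follows, with a generic iterative L2 attack standing in for DDN (whose exact update rule is not described here); `model`, `loader`, and `opt` are assumed to exist.

```python
import torch
import torch.nn.functional as F


def l2_attack(model, x, y, steps=10, step_size=0.5):
    """Iterative L2 attack used here only as a stand-in for DDN."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient to unit L2 norm per example, then take one step.
        g = grad.flatten(1)
        g = g / (g.norm(dim=1, keepdim=True) + 1e-12)
        x_adv = (x_adv + step_size * g.view_as(x_adv)).clamp(0, 1).detach()
    return x_adv


def train_epoch(model, loader, opt):
    model.train()
    for x, y in loader:
        x_adv = l2_attack(model, x, y)               # adversarial example near the boundary
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()  # train on the adversarial example
        opt.step()
```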

Untargeted attack

First place: LIVIA team (code name "RUM R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, from École de technologie supérieure (ÉTS Montreal), Quebec, Canada

The attack is based on an ensemble of surrogate models (including robust models trained with DDN, the new attack proposed by the authors). For each surrogate, the authors take two attack directions: the gradient of the cross-entropy loss with respect to the original class, and the direction given by a DDN attack. For each direction, they run a binary search on the perturbation norm to find the decision boundary. They take the best resulting adversarial example and refine it with the boundary attack ("Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models").
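The binary search on the perturbation norm can be sketched as follows, assuming a hypothetical label-only query function `predict_label` and a precomputed attack direction; the search range and step count are illustrative.

```python
import torch


def boundary_binary_search(predict_label, x, y_true, direction, hi=10.0, steps=20):
    d = direction / direction.norm()                # unit-norm attack direction
    lo = 0.0
    if predict_label((x + hi * d).clamp(0, 1)) == y_true:
        return None                                 # even the largest step does not flip the label
    for _ in range(steps):
        mid = (lo + hi) / 2
        if predict_label((x + mid * d).clamp(0, 1)) == y_true:
            lo = mid                                # still correctly classified: move outward
        else:
            hi = mid                                # already adversarial: tighten the bound
    return (x + hi * d).clamp(0, 1)                 # smallest label-flipping perturbation found
```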

Second place: Team TSAIL (codenamed "csy530216" on the leaderboard)

Authors: Shuyu Cheng & Yinpeng Dong

The authors use a heuristic search algorithm, similar in spirit to the boundary attack, to improve an adversarial example. The starting point is found by transferring a BIM attack from a model trained with Adversarial Logit Pairing. In each iteration, a random perturbation is sampled from a Gaussian distribution with a diagonal covariance matrix, which is updated from past successful trials to model the most promising search directions. The authors restrict the perturbation to the central 40×40×3 region of the 64×64×3 image: they first generate 10×10×3 noise and then upsample it to 40×40×3 using bilinear interpolation. Restricting the search space makes the algorithm more efficient.
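A rough sketch of this restricted, low-dimensional sampling is shown below, assuming PyTorch; the variance-update rule is a simple illustration rather than the authors' exact scheme.

```python
import torch
import torch.nn.functional as F

sigma = torch.ones(3, 10, 10)                       # per-dimension std (diagonal covariance)


def propose(sigma):
    low = torch.randn(1, 3, 10, 10) * sigma         # sample noise in the 10x10x3 space
    up = F.interpolate(low, size=(40, 40), mode="bilinear", align_corners=False)
    full = torch.zeros(1, 3, 64, 64)
    full[:, :, 12:52, 12:52] = up                   # paste into the central 40x40x3 region
    return low, full


def update_sigma(sigma, low, success, lr=0.1):
    # Illustrative update: grow variance along directions of recent successes,
    # shrink it slightly otherwise (not the authors' exact rule).
    target = low.abs().squeeze(0) if success else sigma * 0.9
    return (1 - lr) * sigma + lr * target
```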

Third place: Petuum-CMU team (code name "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia

The authors ensemble various robust models and various attack methods from Foolbox under different distance metrics to generate adversarial perturbations. They then select the best attack: the one that minimizes the maximum perturbation distance needed to fool the robust models under the different distance metrics.
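The selection step can be sketched as follows, where `attacks` is a list of attack callables (for example, wrappers around Foolbox attacks) and `models` is the ensemble of robust surrogates; both names are illustrative assumptions.

```python
def select_attack(attacks, models, x, y):
    """Pick the attack with the smallest worst-case perturbation across the ensemble."""
    best_attack, best_worst = None, float("inf")
    for attack in attacks:
        worst = 0.0
        for model in models:
            x_adv = attack(model, x, y)                     # candidate adversarial example
            if x_adv is None:                               # attack failed on this surrogate
                worst = float("inf")
                break
            worst = max(worst, (x_adv - x).norm().item())   # largest perturbation needed so far
        if worst < best_worst:                              # keep the attack with the best worst case
            best_attack, best_worst = attack, worst
    return best_attack
```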

Targeted attack

First place: Team Petuum-CMU (codename "91YXLT" on the leaderboard)

Authors: Yaodong Yu*, Hongyang Zhang*, Susu Xu, Hongbao Zhang, Pengtao Xie, and Eric P. Xing (* equal contribution), from Petuum Inc., Carnegie Mellon University, and the University of Virginia

The authors used Foolbox to combine various robust models with various adversarial-example methods to generate adversarial perturbations. They found that this ensemble approach makes the targeted attacks more effective across different robust models.

Second place: Team Fortiss (codenamed "ttbrunner" on the leaderboard)

Authors: Thomas Brunner & Frederik Diehl, from fortiss GmbH, Germany

This attack is similar to the boundary attack, but its proposals are not sampled from a normal distribution. Instead, the authors use low-frequency patterns, which transfer well and are not easily filtered out by the defense. They also use the projected gradient of a surrogate model as a prior for sampling. In this way they combine the advantages of both PGD and the boundary attack into a flexible and sample-efficient attack.
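A sketch of such a biased proposal follows, mixing a low-frequency random pattern (upsampled low-resolution noise) with the surrogate's gradient direction; the mixing weight and the low-frequency construction are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def low_frequency_noise(shape=(1, 3, 64, 64), low=8):
    # Draw noise at low resolution and upsample, so only low spatial frequencies remain.
    z = torch.randn(shape[0], shape[1], low, low)
    return F.interpolate(z, size=shape[2:], mode="bilinear", align_corners=False)


def biased_proposal(x_adv, surrogate_grad, alpha=0.5):
    noise = low_frequency_noise(x_adv.shape)
    g = surrogate_grad / (surrogate_grad.norm() + 1e-12)     # surrogate gradient as a prior
    n = noise / (noise.norm() + 1e-12)
    step = alpha * g + (1 - alpha) * n                       # mix prior and low-frequency noise
    return step / (step.norm() + 1e-12)
```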

Third place: LIVIA team (codename "RUM R" on the leaderboard)

Authors: Jérôme Rony & Luiz Gustavo Hafemann, from École de technologie supérieure (ÉTS Montreal), Quebec, Canada


