RoCo-NAS: Robust and Compact Neural Architecture Search
University of Tehran, Iran.
RISE Research Institutes of Sweden, Digital Systems, Industrial Systems; Mälardalen University, Sweden. ORCID iD: 0000-0001-5951-9374
University of Tehran, Iran; School of Computer Science, Iran.
University of Tehran, Iran; Tallinn University of Technology, Estonia.
2021 (English). In: Proceedings of the International Joint Conference on Neural Networks, Vol. July. Article in journal (Refereed). Published.
Abstract [en]

Deep model compression has been studied widely, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss. Recent advances in adversarial attacks reveal the inherent vulnerability of deep neural networks to slightly perturbed images called adversarial examples. Since then, extensive efforts have been made to enhance the robustness of deep networks via specialized loss functions and learning algorithms. Previous work suggests that network size and robustness against adversarial examples are usually in conflict. In this paper, we investigate how to optimize the compactness and adversarial robustness of neural network architectures while maintaining accuracy, using multi-objective neural architecture search. We propose using previously generated adversarial examples as an objective to evaluate the robustness of our models, in addition to the number of floating-point operations to assess model complexity, i.e., compactness. Experiments on some recent neural architecture search algorithms show that, due to their limited search spaces, they fail to find robust and compact architectures. With a novel neural architecture search (RoCo-NAS), we were able to evolve an architecture that is up to 7% more accurate on adversarial samples than its more complex counterpart. The results thus point to architectures that are inherently robust regardless of their size. This opens up a new range of possibilities for the exploration and design of deep neural networks using automatic architecture search.
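The abstract describes a two-objective search: each candidate architecture is scored on robustness (accuracy against pre-generated adversarial examples) and compactness (floating-point operation count), and non-dominated candidates survive. A minimal sketch of that Pareto-dominance evaluation follows; all names and numbers are illustrative placeholders, not values from the paper.

```python
# Hypothetical sketch of two-objective candidate selection: maximize
# accuracy on pre-generated adversarial examples, minimize FLOPs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    adv_accuracy: float  # robustness objective: higher is better
    flops: float         # compactness objective: lower is better

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse on both objectives and
    strictly better on at least one."""
    no_worse = a.adv_accuracy >= b.adv_accuracy and a.flops <= b.flops
    strictly_better = a.adv_accuracy > b.adv_accuracy or a.flops < b.flops
    return no_worse and strictly_better

def pareto_front(population):
    """Keep every candidate that no other candidate dominates."""
    return [c for c in population
            if not any(dominates(o, c) for o in population if o is not c)]

# Illustrative population: a compact robust model, a large robust model,
# and a large fragile model (dominated by the large robust one).
population = [
    Candidate("small-robust", adv_accuracy=0.62, flops=1.2e8),
    Candidate("large-robust", adv_accuracy=0.65, flops=9.0e8),
    Candidate("large-fragile", adv_accuracy=0.40, flops=9.0e8),
]
front = pareto_front(population)
```

In this sketch the compact model stays on the front despite its lower adversarial accuracy, because nothing beats it on both objectives at once, which mirrors the paper's claim that robustness need not come from size.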

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2021. Vol. July
Keywords [en]
Complex networks; Digital arithmetic; Learning algorithms; Network architecture; Accuracy loss; High compression ratio; Loss functions; Model compression; Multi objective; Network robustness; Network size; Neural architectures; Neural network architecture; State-of-the-art methods; Deep neural networks
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-67471
DOI: 10.1109/IJCNN52387.2021.9534460
Scopus ID: 2-s2.0-85116466137
OAI: oai:DiVA.org:ri-67471
DiVA, id: diva2:1802995
Available from: 2023-10-06. Created: 2023-10-06. Last updated: 2023-10-06. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Sinaei, Sima
