Deep model compression has been studied widely, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss. Recent advances in adversarial attacks reveal the inherent vulnerability of deep neural networks to slightly perturbed images called adversarial examples. Since then, extensive efforts have been made to enhance the robustness of deep networks via specialized loss functions and learning algorithms. Previous works suggest that network size and robustness against adversarial examples are often in conflict. In this paper, we investigate how to jointly optimize the compactness and adversarial robustness of neural network architectures, while maintaining accuracy, using multi-objective neural architecture search. We propose using previously generated adversarial examples as an objective to evaluate the robustness of our models, in addition to the number of floating-point operations to assess model complexity, i.e., compactness. Experiments with some recent neural architecture search algorithms show that, owing to their limited search spaces, they fail to find architectures that are both robust and compact. With our novel neural architecture search method (RoCo-NAS), we were able to evolve an architecture that is up to 7% more accurate on adversarial examples than its more complex architecture counterpart. Thus, the results suggest that some architectures are inherently robust regardless of their size. This opens up a new range of possibilities for the exploration and design of deep neural networks using automatic architecture search.
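The bi-objective selection described above can be sketched as a Pareto-dominance filter over candidate architectures. The sketch below is illustrative only, not the paper's implementation; the candidate tuples and their (robust accuracy, FLOPs) values are hypothetical:

```python
# Illustrative sketch of multi-objective selection: each candidate
# architecture is scored by robust accuracy on pre-generated adversarial
# examples (to maximize) and FLOPs (to minimize). All values are hypothetical.

def dominates(a, b):
    """True if candidate a is no worse than b on both objectives
    and strictly better on at least one. Candidates are (robust_acc, flops)."""
    acc_a, fl_a = a
    acc_b, fl_b = b
    return (acc_a >= acc_b and fl_a <= fl_b) and (acc_a > acc_b or fl_a < fl_b)

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical (robust_accuracy, FLOPs) pairs for four candidate architectures:
archs = [(0.62, 4.0e8), (0.55, 1.5e8), (0.60, 9.0e8), (0.50, 2.0e8)]
front = pareto_front(archs)  # the non-dominated trade-off set
```

A full search would wrap such a filter in an evolutionary loop (e.g., NSGA-II-style selection), mutating architectures and re-evaluating both objectives each generation.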