Fuzz testing (“fuzzing”) is a widely used and effective dynamic technique for discovering crashes and security vulnerabilities in software, supported by numerous tools that keep improving in detection capability and speed of execution. In this paper, we report our findings from using state-of-the-art mutation-based and hybrid fuzzers (AFL, Angora, Honggfuzz, Intriguer, MOpt-AFL, QSym, and SymCC) on a non-trivial code base, that of Contiki-NG, to expose and fix serious vulnerabilities in various layers of its network stack over a period of more than three years. As a by-product, we provide a Git-based platform that allowed us to create and apply a new, quite challenging, open-source bug suite for evaluating fuzzers on real-world software vulnerabilities. Using this bug suite, we present an impartial and extensive evaluation of these fuzzers' effectiveness, and measure the impact that sanitizers have on it. Finally, we share our experiences and offer our opinions on how fuzzing tools should be used and evaluated in the future.