In recent years, the number of indoor localization solutions has grown rapidly, and a wide variety of technologies and approaches are being explored. Unfortunately, there is currently no established standardized evaluation method for comparing their performance. As a result, each solution is evaluated in a different environment using proprietary evaluation metrics. Consequently, it is currently extremely hard to objectively compare the performance of multiple localization solutions with each other. To address this problem, we present the EVARILOS Benchmarking Platform, which enables automated evaluation and comparison of multiple solutions in different environments using multiple evaluation metrics. We propose a testbed-independent benchmarking platform, combined with multiple testbed-dependent plug-ins for executing experiments and storing performance results. The platform implements the standardized evaluation method described in the EVARILOS Benchmarking Handbook, which is aligned with the upcoming ISO/IEC 18305 standard “Test and Evaluation of Localization and Tracking Systems”. The platform and the plug-ins can be used in real time on existing wireless testbed facilities, and they also support a remote offline evaluation method using pre-collected data traces. Using these facilities, and by analyzing and comparing the performance of three different localization solutions, we demonstrate the need for objective evaluation methods that consider multiple evaluation criteria in different environments.