Platform for Benchmarking of RF-based Indoor Localization Solutions
Ghent University, Belgium.
Technical University of Berlin, Germany.
2015 (English). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 53, no 9, p. 126-133. Article in journal (Refereed). Published.
Abstract [en]

Over recent years, the number of indoor localization solutions has grown rapidly, and a wide variety of technologies and approaches is being explored. Unfortunately, there is currently no established, standardized evaluation method for comparing their performance: each solution is evaluated in a different environment using proprietary evaluation metrics, which makes it extremely hard to compare multiple localization solutions objectively. To address this problem, we present the EVARILOS Benchmarking Platform, which enables automated evaluation and comparison of multiple solutions across different environments and multiple evaluation metrics. We propose a testbed-independent benchmarking platform combined with multiple testbed-dependent plug-ins for executing experiments and storing performance results. The platform implements the standardized evaluation method described in the EVARILOS Benchmarking Handbook, which is aligned with the upcoming ISO/IEC 18305 standard "Test and Evaluation of Localization and Tracking Systems". The platform and plug-ins can be used in real time on existing wireless testbed facilities, and they also support remote offline evaluation using precollected data traces. Using these facilities, and by analyzing and comparing the performance of three different localization solutions, we demonstrate the need for objective evaluation methods that consider multiple evaluation criteria in different environments.
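
To make the idea of "multiple evaluation metrics" concrete, the sketch below shows the kind of point-accuracy statistics such a benchmark would report from ground-truth versus estimated positions. This is a minimal illustration only, not the EVARILOS platform API or the ISO/IEC 18305 procedure; all function names and the sample coordinates are assumptions made for the example.

```python
# Hypothetical sketch (not the EVARILOS API): point-accuracy metrics of the
# kind a localization benchmark reports, given (ground_truth, estimate) pairs
# of 2-D positions in metres collected at evaluation points.

import math
from statistics import mean, median


def point_errors(pairs):
    """Euclidean distance between each ground-truth and estimated position."""
    return [math.dist(truth, est) for truth, est in pairs]


def summarize(errors):
    """Aggregate per-point errors into common accuracy statistics."""
    ordered = sorted(errors)
    p95_index = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return {
        "mean_error_m": mean(ordered),
        "median_error_m": median(ordered),
        "p95_error_m": ordered[p95_index],
    }


if __name__ == "__main__":
    # Illustrative data only: (x, y) coordinates in metres.
    samples = [((0.0, 0.0), (0.4, 0.3)),
               ((5.0, 2.0), (5.8, 1.5)),
               ((9.0, 7.0), (8.1, 7.9))]
    print(summarize(point_errors(samples)))
```

A full benchmark, as the abstract notes, would combine such accuracy figures with other criteria (e.g. response time and robustness across environments) rather than rely on a single number.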

Place, publisher, year, edition, pages
2015, 6. Vol. 53, no 9, p. 126-133
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:ri:diva-24505
DOI: 10.1109/MCOM.2015.7263356
Scopus ID: 2-s2.0-84957052677
OAI: oai:DiVA.org:ri-24505
DiVA id: diva2:1043589
Projects
Evarilos
Available from: 2016-10-31. Created: 2016-10-31. Last updated: 2023-06-08. Bibliographically approved.

Open Access in DiVA

fulltext (1045 kB), 256 downloads
File information
File name: FULLTEXT01.pdf
File size: 1045 kB
Checksum SHA-512: 69a7cbb73921ace13b6fae2417d0678404f62d40778cf61d21322388581740fd4dafaf1cc10e02e981c74001c4570b01f7a419dd9137cbb8bd25822d5be9d812
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Wirström, Niclas; Voigt, Thiemo

