Publications (10 of 31)
Brunnstrom, K., Dima, E., Andersson, M., Sjöström, M., Qureshi, T. & Johanson, M. (2019). Quality of Experience of hand controller latency in a Virtual Reality simulator. In: Damon Chandler, Mark McCourt and Jeffrey Mulligan (Eds.), Human Vision and Electronic Imaging 2019. Paper presented at Human Vision and Electronic Imaging 2019. Article ID 3068450.
2019 (English). In: Human Vision and Electronic Imaging 2019 / [ed] Damon Chandler, Mark McCourt and Jeffrey Mulligan, 2019, article id 3068450. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, looking mainly at Quality of Experience (QoE) aspects that may be relevant for task completion, but also at whether any discomfort-related symptoms are experienced during task execution. A QoE test was designed to capture both the general subjective experience of using the simulator and task performance. A specific focus was to study the effects of latency on the subjective experience, with regard to delays in the crane control interface. A formal subjective study was performed in which we added controlled delays, ranging from 0 ms to 800 ms, to the hand controller (joystick) signals. We found no significant effects on task performance or on any of the rating scales for added delays up to 200 ms; a significant negative effect was found for 800 ms of added delay. The symptoms reported in the Simulator Sickness Questionnaire (SSQ) were significantly higher for all symptom groups, but a majority of the participants reported only slight symptoms. Two of the thirty participants stopped the test before finishing because of their symptoms.

Series
Electronic Imaging, ISSN 2470-1173
Keywords
Quality of Experience, Virtual Reality, Simulator, QoE, Delay
National Category
Communication Systems; Telecommunications; Media Engineering
Identifiers
urn:nbn:se:ri:diva-37738 (URN)
Conference
Human Vision and Electronic Imaging 2019
Funder
Knowledge Foundation, 20160194
Available from: 2019-02-07. Created: 2019-02-07. Last updated: 2019-02-08. Bibliographically approved.
Allison, R., Brunnstrom, K., Chandler, D., Colett, H., Corriveau, P., Daly, S., . . . Zhang, Y. (2018). Perspectives on the definition of visually lossless quality for mobile and large format displays. Journal of Electronic Imaging, 27(5), Article ID 053035.
2018 (English). In: Journal of Electronic Imaging, ISSN 1017-9909, Vol. 27, no 5, article id 053035. Article in journal (Refereed), Published
Abstract [en]

Advances in imaging and display engineering have given rise to new and improved image and video applications that aim to maximize visual quality under given resource constraints (e.g., power, bandwidth). Because the human visual system is an imperfect sensor, the images/videos can be represented in a mathematically lossy fashion but with enough fidelity that the losses are visually imperceptible, commonly termed "visually lossless." Although a great deal of research has focused on gaining a better understanding of the limits of human vision when viewing natural images/video, a universally or even largely accepted definition of visually lossless remains elusive. Differences in testing methodologies, research objectives, and target applications have led to multiple ad hoc definitions that are often difficult to compare to or otherwise employ in other settings. We present a compendium of technical experiments relating to both vision science and visual quality testing that together explore the research and business perspectives of visually lossless image quality, as well as review recent scientific advances. Together, the studies presented in this paper suggest that a single definition of visually lossless quality might not be appropriate; rather, a better goal would be to establish varying levels of visually lossless quality that can be quantified in terms of the testing paradigm.

Keywords
visually lossless; visually lossy; image quality; industrial perspective; mobile screen; large format displays
National Category
Communication Systems; Telecommunications; Media Engineering
Identifiers
urn:nbn:se:ri:diva-35314 (URN)
10.1117/1.JEI.27.5.053035 (DOI)
2-s2.0-85054964928 (Scopus ID)
Note

Copyright (2018) Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

Available from: 2018-10-15. Created: 2018-10-15. Last updated: 2018-12-21. Bibliographically approved.
Brunnström, K., Sjöström, M., Muhammad, I., Magnus, P. & Johanson, M. (2018). Quality of Experience for a Virtual Reality simulator. In: Rogowitz, B., Pappas, T. & De Ridder, H. (Eds.), Human Vision and Electronic Imaging 2018. Paper presented at IS&T Human Vision and Electronic Imaging 2018, Burlingame, California, USA, 28 Jan. - 2 Feb. 2018. The Society for Imaging Science and Technology.
2018 (English). In: Human Vision and Electronic Imaging 2018 / [ed] Rogowitz, B., Pappas, T. & De Ridder, H., The Society for Imaging Science and Technology, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, looking mainly at Quality of Experience (QoE) aspects that may be relevant for task completion, but also at whether any discomfort-related symptoms are experienced during task execution. The QoE test was designed to capture both the general subjective experience of using the simulator and the task completion rate. A specific focus was to study the effects of latency on the subjective experience, with regard both to delays in the crane control interface and to lag in the visual scene rendering in the head-mounted display (HMD). Two larger formal subjective studies were performed: one with the VR system as it is, and one in which we added controlled delay to the display update and to the joystick signals. The baseline study shows that most people are more or less happy with the VR system and that it does not have strong effects on any of the symptoms listed in the Simulator Sickness Questionnaire (SSQ). In the delay study, we found significant effects on Comfort Quality and Immersion Quality for the higher display delay (30 ms), but very small impact of joystick delay. Furthermore, the display delay had a strong influence on the symptoms in the SSQ, causing some test subjects to decide not to continue with the complete experiment. We found that this was especially connected to the longer added display delays (≥ 20 ms).

Place, publisher, year, edition, pages
The Society for Imaging Science and Technology, 2018
Keywords
Quality of Experience, Virtual Reality, Simulator, QoE, Delay
National Category
Engineering and Technology; Communication Systems; Computer Systems
Identifiers
urn:nbn:se:ri:diva-35187 (URN)
Conference
IS&T Human Vision and Electronic Imaging 2018, Burlingame, California USA, 28 Jan. - 2 Feb, 2018
Funder
Knowledge Foundation, 20160194
Available from: 2018-09-18. Created: 2018-09-18. Last updated: 2018-12-21. Bibliographically approved.
Brunnstrom, K. & Barkowsky, M. (2018). Statistical quality of experience analysis: on planning the sample size and statistical significance testing. Journal of Electronic Imaging (JEI), 27(5), Article ID 053013.
2018 (English). In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 27, no 5, article id 053013. Article in journal (Refereed), Published
Abstract [en]

This paper analyzes how an experimenter can balance errors in subjective video quality tests between the statistical power of finding an effect if it is there and not claiming that an effect is there if it is not, i.e., balancing Type I and Type II errors. The risk of committing Type I errors increases with the number of comparisons performed in statistical tests. We show that when controlling for this while keeping the power of the experiment at a reasonably high level, the number of test subjects normally used and recommended by the International Telecommunication Union (ITU), i.e., 15, is unlikely to be sufficient, whereas the number used by the Video Quality Experts Group (VQEG), i.e., 24, is more likely to be sufficient. Examples are also given for the influence of Type I errors on the statistical significance of comparing objective metrics by correlation. We also present a comparison between parametric and nonparametric statistics, targeting the question of whether we would reach different conclusions on the statistical difference between the video quality ratings of different video clips in a subjective test, based on a comparison between the Student's t-test and the Mann-Whitney U-test. We found that there was hardly a difference when only a few comparisons are compensated for, i.e., almost the same conclusions are reached. As the number of comparisons increases, larger and larger differences between the two methods are revealed. In these cases, the parametric t-test yields clearly more significant cases than the nonparametric test, which makes it more important to investigate whether the assumptions for performing a certain test are met.
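As a hedged illustration of the comparison the abstract describes (not the authors' code), the sketch below runs both a Student's t-test and a Mann-Whitney U-test on synthetic rating data and shows how a Bonferroni correction shrinks the per-comparison significance level as the number of comparisons grows. The panel size, comparison count, and rating distributions are all assumptions made for the example.

```python
# Illustrative only: synthetic 5-point-scale ratings, assumed panel size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 24        # VQEG-style panel size discussed in the abstract
n_comparisons = 10     # assumed number of pairwise comparisons in the test

alpha = 0.05
alpha_bonferroni = alpha / n_comparisons   # per-comparison level: 0.005

# Ratings for two video clips whose underlying quality differs slightly
clip_a = rng.normal(3.5, 1.0, n_subjects)
clip_b = rng.normal(4.0, 1.0, n_subjects)

p_t = stats.ttest_ind(clip_a, clip_b).pvalue
p_u = stats.mannwhitneyu(clip_a, clip_b, alternative="two-sided").pvalue

print(f"t-test p = {p_t:.4f}, Mann-Whitney U p = {p_u:.4f}")
print(f"uncorrected alpha = {alpha}, Bonferroni alpha = {alpha_bonferroni}")
```

A p-value that clears the uncorrected 0.05 level can fail the Bonferroni-corrected 0.005 level, which is the mechanism behind the abstract's point that the two tests diverge as more comparisons are compensated for.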

Place, publisher, year, edition, pages
SPIE/IS&T, 2018
Keywords
Type I error; video quality; statistical significance; quality of experience; Student's t-test; Bonferroni; Mann-Whitney U-test; parametric versus nonparametric test
National Category
Telecommunications; Communication Systems; Media Engineering
Identifiers
urn:nbn:se:ri:diva-35233 (URN)
10.1117/1.JEI.27.5.053013 (DOI)
2-s2.0-85054069504 (Scopus ID)
Funder
Knowledge Foundation, 20160194
Note

Copyright (2018) Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

Available from: 2018-10-03. Created: 2018-10-03. Last updated: 2018-12-21. Bibliographically approved.
Sedano, I., Prieto, G., Brunnstrom, K., Kihl, M. & Montalban, J. (2017). Application of full-reference video quality metrics in IPTV. In: IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB. Paper presented at 12th IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB 2017, 7-9 June 2017.
2017 (English). In: IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

Executing an accurate full-reference metric such as VQM can take minutes on an average computer for just one user. It can therefore be infeasible to analyze all the videos received by users in an IPTV network, for example one consisting of 10,000 users, with a single computer running the VQM metric. One solution is to use lightweight no-reference metrics in addition to the full-reference metric: the no-reference metrics are accurate enough to screen out situations that do not need evaluation, and the full-reference metric VQM can then be applied when more accuracy is needed. The work in this paper focuses on determining the maximum number of situations/users that can be analyzed simultaneously using the VQM metric on a computer with good performance. The full-reference metric is applied on the transmitter side using a method specified in Recommendation ITU-R BT.1789. The best performance achieved was 112.8 seconds per process.
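The reported 112.8 seconds per process can be turned into a back-of-envelope capacity estimate. The process count and monitoring window below are assumptions for illustration, not figures from the paper:

```python
# Capacity estimate: how many VQM evaluations one machine can run per hour.
# seconds_per_evaluation comes from the abstract; the other two parameters
# are assumed values for this sketch.
seconds_per_evaluation = 112.8
parallel_processes = 8
window_seconds = 3600

evaluations_per_hour = parallel_processes * window_seconds / seconds_per_evaluation
print(f"{evaluations_per_hour:.0f} VQM evaluations per hour")
```

With these assumed numbers a single machine covers a few hundred evaluations per hour, which makes concrete why the paper pairs VQM with cheap no-reference screening for a 10,000-user network.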

Keywords
IPTV & Internet TV, Objective evaluation techniques, Performance evaluation, QoE, Broadband networks, Multimedia systems, Television broadcasting, Full references, Internet tv, IPTV networks, No-reference metrics, Single computer, Video quality, IPTV
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-30833 (URN)
10.1109/BMSB.2017.7986191 (DOI)
2-s2.0-85027248799 (Scopus ID)
9781509049370 (ISBN)
Conference
12th IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB 2017, 7 June 2017 through 9 June 2017
Available from: 2017-09-07. Created: 2017-09-07. Last updated: 2019-02-04. Bibliographically approved.
Brunnström, K. & Barkowsky, M. (2017). Balancing type I errors and statistical power in video quality assessment. In: IS&T International Symposium on Electronic Imaging Science and Technology. Paper presented at Human Vision and Electronic Imaging 2017, HVEI 2017, 29 January - 2 February 2017 (pp. 91-96).
2017 (English). In: IS&T International Symposium on Electronic Imaging Science and Technology, 2017, p. 91-96. Conference paper, Published paper (Refereed)
Abstract [en]

This paper analyzes how an experimenter can balance errors in subjective video quality tests between the statistical power of finding an effect if it is there and not claiming that an effect is there if it is not, i.e., balancing Type I and Type II errors. The risk of committing Type I errors increases with the number of comparisons performed in statistical tests. We show that controlling for this while keeping the power of the experiment at a reasonably high level requires more test subjects than are normally used and recommended by international standardization bodies such as the ITU. Examples are also given for the influence of Type I errors on the statistical significance of comparing objective metrics by correlation.
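The sample-size reasoning summarized above can be sketched with the standard normal-approximation formula for a two-sided two-sample comparison, n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2, where d is Cohen's effect size. The alpha, power, and effect-size values below are illustrative assumptions, not the paper's actual computation:

```python
# Normal-approximation sample size per group for a two-sided two-sample test.
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate number of subjects needed per group."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = norm.ppf(power)            # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.8))   # large effect: about 25 subjects per group
print(n_per_group(0.5))   # medium effect: noticeably more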

Keywords
International standardization, Objective metrics, Statistical power, Statistical significance, Subjective video quality, Time keeping, Type I and type II errors, Video quality assessment, Errors
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-35363 (URN)
10.2352/ISSN.2470-1173.2017.14.HVEI-122 (DOI)
2-s2.0-85041524573 (Scopus ID)
Conference
Human Vision and Electronic Imaging 2017, HVEI 2017, 29 January 2017 through 2 February 2017
Available from: 2018-10-23. Created: 2018-10-23. Last updated: 2018-10-23. Bibliographically approved.
Hermann, D., Djupsjöbacka, A., Andrén, B., Brunnström, K. & Rydell, N. (2017). Display panel certification system for the vehicle industry. In: Digest of Technical Papers - SID International Symposium. Paper presented at SID Symposium, Seminar, and Exhibition 2017, Display Week 2017, 21-26 May 2017 (pp. 471-474), 48(1).
2017 (English). In: Digest of Technical Papers - SID International Symposium, 2017, Vol. 48, no 1, p. 471-474. Conference paper, Published paper (Refereed)
Abstract [en]

The ever-increasing need for displaying in-vehicle visual information in a non-distracting way requires a high visual performance of automotive displays. For their procurement, deep technical and supply-chain knowledge is required. Therefore, based on our comparisons of in-vehicle and laboratory visual-performance measurements, we propose a certification system for automotive display panels.

Keywords
Car displays, Certification, Display, Measurements, Panels, Requirements, Safety, Visual performance, Accident prevention, Display devices, Measurement, Supply chains, Automotive displays, Certification systems, Supply chain knowledge, Visual information, Vehicle performance
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-35368 (URN)
10.1002/sdtp.11653 (DOI)
2-s2.0-85044459309 (Scopus ID)
Conference
SID Symposium, Seminar, and Exhibition 2017, Display Week 2017, 21 May 2017 through 26 May 2017
Available from: 2018-10-23. Created: 2018-10-23. Last updated: 2018-10-23. Bibliographically approved.
Søgaard, J., Shahid, M., Pokhrel, J. & Brunnström, K. (2017). On subjective quality assessment of adaptive video streaming via crowdsourcing and laboratory based experiments. Multimedia tools and applications, 76(15), 16727-16748
2017 (English). In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 76, no 15, p. 16727-16748. Article in journal (Refereed), Published
Abstract [en]

Video streaming services are offered over the Internet, and since service providers do not have full control over the network conditions all the way to the end user, streaming technologies have been developed to maintain the quality of service under varying network conditions, i.e., so-called adaptive video streaming. To cater for users' Quality of Experience (QoE) requirements, HTTP-based adaptive streaming solutions for video services have become popular. However, the keys to ensuring users a good QoE with this technology are still not completely understood. User QoE feedback is therefore instrumental in improving this understanding. Controlled laboratory-based perceptual quality experiments that involve a panel of human viewers are considered the most valid method of QoE assessment. Besides laboratory-based subjective experiments, crowdsourcing-based subjective assessment of video quality is gaining popularity as an alternative method. This article presents insights into a study that investigates perceptual preferences for various adaptive video streaming scenarios through crowdsourcing-based and laboratory-based subjective assessment. The major novel contribution of this study is the application of paired-comparison-based subjective assessment in a crowdsourcing environment. The obtained results provide some novel indications of perceptual preferences for adaptive video streaming scenarios, besides confirming earlier published trends. Our study suggests that in a network environment with fluctuations in bandwidth, a medium or low video bitrate that can be kept constant is the best approach. Moreover, if there are only a few drops in bandwidth, one can choose a medium or high bitrate with a single or a few buffering events.
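Paired-comparison data of the kind this study collects is commonly scaled with a Bradley-Terry model. The sketch below (illustrative win counts, not the study's data or code) fits one with the classic fixed-point iteration:

```python
# Minimal Bradley-Terry fit for a pairwise win-count matrix.
# wins[i][j] = how often condition i was preferred over condition j.
import numpy as np

def bradley_terry(wins, iters=200):
    """Estimate preference scores from a pairwise win-count matrix."""
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    games = wins + wins.T                    # comparisons per pair
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()              # total wins of condition i
            den = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = num / den
        p /= p.sum()                         # fix the arbitrary scale
    return p

# Made-up counts for three streaming strategies,
# e.g. constant-low, constant-medium, adaptive
wins = [[0, 6, 4],
        [14, 0, 9],
        [16, 11, 0]]
scores = bradley_terry(wins)
print(scores)   # higher score = more often preferred
```

The fitted scores put the conditions on a single preference scale, which is how raw paired-comparison votes from a crowd can be summarized and compared against laboratory ratings.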

Keywords
Adaptive video streaming, Crowdsourcing, Quality of experience, Subjective quality assessment, Adaptive control systems, Bandwidth, HTTP, Human computer interaction, Laboratories, Multimedia systems, Quality control, Quality of service, Controlled laboratories, Quality of experience (QoE), Streaming technology, Subjective assessments, Subjective experiments, Subjective quality assessments, Video streaming services, Video streaming
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-30826 (URN)10.1007/s11042-016-3948-3 (DOI)2-s2.0-84988585391 (Scopus ID)
Available from: 2017-09-06 Created: 2017-09-06 Last updated: 2019-02-05Bibliographically approved
Engelke, U., Darcy, D. P., Mulliken, G. H., Bosse, S., Martini, M. G., Arndt, S., . . . Brunnstrom, K. (2017). Psychophysiology-Based QoE Assessment: A Survey. IEEE Journal on Selected Topics in Signal Processing, 11(1), 6-21, Article ID 7569001.
2017 (English). In: IEEE Journal on Selected Topics in Signal Processing, ISSN 1932-4553, E-ISSN 1941-0484, Vol. 11, no 1, p. 6-21, article id 7569001. Article in journal (Refereed), Published
Abstract [en]

We present a survey of psychophysiology-based assessment for quality of experience (QoE) in advanced multimedia technologies. We provide a classification of methods relevant to QoE and describe related psychological processes, experimental design considerations, and signal analysis techniques. We summarize multimodal techniques and discuss several important aspects of psychophysiology-based QoE assessment, including the synergies with psychophysical assessment and the need for standardized experimental design. This survey is not intended to be exhaustive but serves as a guideline for those interested in further exploring this emerging field of research.

Keywords
Electrocardiography, electrodermal activity, electroencephalography, eye tracking, near-infrared spectroscopy, psychophysiology, pupillometry, quality of experience, Electrodes, Electrophysiology, Infrared devices, Multimedia systems, Near infrared spectroscopy, Physiology, Statistics, Surveys, Design considerations, Eye-tracking, Multi-modal techniques, Multimedia technologies, Psychological process, Quality of experience (QoE), Quality of service
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-29358 (URN)
10.1109/JSTSP.2016.2609843 (DOI)
2-s2.0-85015199281 (Scopus ID)
Available from: 2017-05-08. Created: 2017-05-08. Last updated: 2019-01-03. Bibliographically approved.
Brunnstrom, K., Wang, K., Tavakoli, S. & Andren, B. (2017). Symptoms analysis of 3D TV viewing based on Simulator Sickness Questionnaires. Quality and User Experience, 2.
2017 (English). In: Quality and User Experience, ISSN 2366-0139, E-ISSN 2366-0147, Vol. 2. Article in journal (Refereed), Published
Abstract [en]

Stereoscopic 3D TV viewing puts different visual demands on the viewer compared to 2D TV viewing. Previous research has reported on viewers' fatigue, discomfort, and other negative effects. This study investigates further how severe the symptoms are and which symptoms may arise from relatively long 3D TV viewing. The MPEG 3DV project is working on the next-generation video encoding standard, and in this process MPEG issued a call for proposals of encoding algorithms. To evaluate these algorithms, a large-scale subjective test was performed involving laboratories all over the world (MPEG 2011; Baroncini 2012). For the participating labs, it was optional to administer a slightly modified Simulator Sickness Questionnaire (SSQ) before and after the test. One of the SSQ data sets described in this article comes from this study. The SSQ data from the MPEG test is the largest data set in this study and also contains the longest viewing times. Along with the SSQ data from the MPEG test, we have also collected questionnaire data in three other 3D TV studies: two on the same 3D TV (passive film-pattern retarder) as in the MPEG test, and one using a projector system. For comparison, SSQ data from a 2D video quality experiment is also presented. This investigation shows a statistically significant increase in symptoms after viewing 3D TV, primarily related to the visual or oculomotor system. Surprisingly, 3D video viewing using projectors did not show this effect.
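SSQ results like those analyzed above are conventionally computed with the subscale weights from Kennedy et al. (1993). The sketch below shows that scoring step; the example raw sums are made up, and the mapping of the 16 individual symptom items onto subscales is omitted:

```python
# Standard SSQ weighting (Kennedy et al., 1993). Inputs are raw sums of the
# 0-3 symptom ratings assigned to each subscale; in the full questionnaire,
# some symptoms count toward more than one subscale.

def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Return weighted Nausea, Oculomotor, Disorientation and Total scores."""
    nausea = nausea_raw * 9.54
    oculomotor = oculomotor_raw * 7.58
    disorientation = disorientation_raw * 13.92
    total = (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74
    return nausea, oculomotor, disorientation, total

# Hypothetical participant with mild, mostly oculomotor symptoms
print(ssq_scores(1, 3, 0))
```

Comparing the weighted subscale scores before and after viewing is what lets a study attribute the increase primarily to the oculomotor group, as reported here.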

Keywords
Quality of experience, QoE, Visual discomfort, Visual fatigue, 3D TV, MPEG 3DV, Simulator Sickness Questionnaires
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-33098 (URN)
10.1007/s41233-016-0003-0 (DOI)
Available from: 2018-01-16. Created: 2018-01-16. Last updated: 2018-08-20. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-5060-9402
