MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3.[1] The MUSHRA methodology is recommended for assessing "intermediate audio quality". For very small audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead.
The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results. This is because all codecs are presented at the same time, on the same samples, so that a paired t-test or a repeated measures analysis of variance can be used for statistical analysis. Also, the 0–100 scale used by MUSHRA makes it possible to rate very small differences.
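Because every listener rates every codec on the same items, per-listener score differences can be analyzed directly. A minimal sketch of such a paired comparison using SciPy; the scores here are invented for illustration:

```python
from scipy import stats

# Hypothetical MUSHRA scores for two codecs on the same item,
# one value per listener (same listeners for both codecs).
codec_a = [78, 82, 75, 80, 85, 79, 81, 77]
codec_b = [70, 74, 69, 72, 78, 71, 73, 70]

# A paired t-test exploits the repeated-measures design:
# between-listener variance cancels in the per-listener differences.
t_stat, p_value = stats.ttest_rel(codec_a, codec_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With more than two codecs, a repeated measures ANOVA generalizes the same idea across all conditions at once.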
In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The recommendation specifies that a low-range and a mid-range anchor should be included in the test signals. These are typically a 7 kHz and a 3.5 kHz low-pass version of the reference. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized. This is particularly important when comparing or pooling results from different labs.
Listener behavior
Both MUSHRA and ITU BS.1116 tests[2] call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also internalize the rating scale better, which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer participants are needed to achieve statistically significant results.
It is assumed that preferences are similar for expert and naive listeners, so results from expert listeners are also predictive for consumers. In agreement with this assumption, Schinkel-Bielefeld et al.[3] found no differences in rank order between expert and untrained listeners for test signals containing only timbre artifacts and no spatial artifacts. However, Rumsey et al.[4] showed that for signals containing spatial artifacts, expert listeners weight spatial artifacts slightly more heavily than untrained listeners, who focus primarily on timbre artifacts.
In addition, expert listeners have been shown to make more use of the option to repeatedly listen to smaller sections of the signals under test, and to perform more comparisons between the signals under test and the reference.[3] Whereas naive listeners tend to produce a preference rating, expert listeners therefore produce an audio quality rating, judging the difference between the signal under test and the uncompressed original, which is the actual goal of a MUSHRA test.
Pre- or post-screening
The MUSHRA recommendation describes several ways to assess the reliability of a listener.
The simplest and most common is to disqualify listeners who rate the hidden reference below 90 MUSHRA points for more than 15 percent of the test items. Since the hidden reference is identical to the open reference, it should receive 100 MUSHRA points, so such ratings indicate an error. While the hidden reference can occasionally be confused with a high-quality test signal, a rating below 90 should only be given when the listener is certain that the rated signal differs from the original reference.
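This screening rule is straightforward to apply to collected data. A minimal sketch, with the 90-point threshold and 15 percent limit taken from the rule above:

```python
def passes_post_screening(hidden_ref_scores, threshold=90, max_fraction=0.15):
    """Return True if the listener is kept, False if disqualified.

    hidden_ref_scores: the listener's ratings of the hidden reference,
    one score per test item on the 0-100 MUSHRA scale.  A listener is
    disqualified when more than 15 % of these ratings fall below 90.
    """
    low = sum(1 for s in hidden_ref_scores if s < threshold)
    return low <= max_fraction * len(hidden_ref_scores)

# One slip in ten items (10 %) is tolerated; two (20 %) are not.
print(passes_post_screening([100, 100, 95, 88, 100, 100, 100, 92, 100, 100]))
print(passes_post_screening([100, 85, 95, 88, 100, 100, 100, 92, 100, 100]))
```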
The other way to assess a listener's performance is eGauge,[5] a framework based on the analysis of variance. It computes agreement, repeatability, and discriminability, though only the latter two are recommended for pre- or post-screening. Agreement measures how well a listener's ratings agree with those of the other listeners. Repeatability compares the variance when rating the same test signal a second time with the variance across the other test signals, and discriminability measures whether a listener can distinguish between test signals of different conditions. Because eGauge requires listening to every test signal twice, it takes more effort to apply than post-screening based on ratings of the hidden reference. However, a listener who has proven reliable using eGauge can also be considered reliable for future listening tests, provided the character of the test does not change; a reliable listener for stereo listening tests is not necessarily equally good at perceiving artifacts in 5.1 or 22.2 format test items.
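The exact eGauge computations are defined in the cited paper; as a loose illustration of the repeatability idea only, assuming each listener rated every item twice, one can compare the spread between a listener's two ratings of the same signal with the spread of that listener's ratings across different signals:

```python
import statistics

def repeatability_ratio(first_pass, second_pass):
    """Illustrative sketch (not the exact eGauge computation).

    first_pass and second_pass hold one listener's two ratings of the
    same signals, in the same order.  A small ratio means the listener
    rates repeated presentations consistently relative to how much the
    signals differ from each other.
    """
    # Mean squared difference between the two ratings of each signal.
    within = statistics.mean((a - b) ** 2 for a, b in zip(first_pass, second_pass))
    # Variance of the per-signal mean ratings across different signals.
    means = [(a + b) / 2 for a, b in zip(first_pass, second_pass)]
    between = statistics.variance(means)
    return within / between

# Repeated ratings that nearly coincide give a small ratio:
print(repeatability_ratio([80, 40, 20, 95], [82, 38, 22, 93]))
# Inconsistent repeated ratings give a large one:
print(repeatability_ratio([80, 40, 20, 95], [30, 90, 60, 10]))
```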
Test items
It is important to choose critical test items, i.e. items that are difficult to encode and likely to produce artifacts. At the same time, the test items should be ecologically valid: representative of broadcast material rather than synthetic signals designed specifically to be difficult to encode. A method for choosing critical material is presented by Ekeroot et al., who propose a ranking-by-elimination procedure.[6] While this is a good way to choose the most critical test items, it does not ensure the inclusion of a variety of test items prone to different artifacts.
Ideally, a MUSHRA test item should have similar characteristics throughout its duration. Otherwise, it can be difficult for the listener to decide on a rating when some parts of the item display different or stronger artifacts than others.[7] Shorter items often lead to less variability than longer ones, as they are more stationary.[8] However, even when trying to choose stationary items, ecologically valid stimuli will very often have sections that are slightly more critical than the rest of the signal. Thus, listeners who focus on different sections of the signal may evaluate it differently. In this case, more critical listeners seem to be better than less critical listeners at identifying the most critical regions of a stimulus.[9]
Language of test items
While in ITU-T P.800 tests,[10] which are commonly used to evaluate telephone-quality codecs, the tested speech items should always be in the native language of the listeners, this is not necessary in MUSHRA tests. A study with Mandarin Chinese and German listeners found no significant difference between ratings of foreign-language and native-language test items. However, listeners needed more time and more comparison opportunities when evaluating the foreign-language items.[11] Such compensation is not possible in ITU-T P.800 ACR tests, where items are heard only once and no comparison to the reference is possible. There, foreign-language items are rated as being of lower quality when the listeners' language proficiency is low.[12]
References
1. ITU-R Recommendation BS.1534.
2. ITU-R BS.1116 (February 2015). "Methods for the subjective assessment of small impairments in audio systems".
3. Schinkel-Bielefeld, N.; Lotze, N.; Nagel, F. (May 2013). "Audio quality evaluation by experienced and inexperienced listeners". The Journal of the Acoustical Society of America. 133 (5): 3246. doi:10.1121/1.4805210.
4. Rumsey, Francis; Zielinski, Slawomir; Kassier, Rafael; Bech, Søren (2005). "Relationships between experienced listener ratings of multichannel audio quality and naïve listener preferences". The Journal of the Acoustical Society of America. 117 (6): 3832–3840. doi:10.1121/1.1904305. PMID 16018485.
5. Lorho, Gaëtan; Le Ray, Guillaume; Zacharov, Nick (2010). "eGauge—A Measure of Assessor Expertise in Audio Quality Evaluations". Proceedings of the AES 38th International Conference on Sound Quality Evaluation.
6. Ekeroot, Jonas; Berg, Jan; Nykänen, Arne (2014). "Criticality of Audio Stimuli for Listening Tests – Listening Durations During a Ranking Task". 136th Convention of the Audio Engineering Society.
7. Neuendorf, Max; Nagel, Frederik (2011). "Exploratory Studies on Perceptual Stationarity in Listening Test – Part I: Real World Signals from Custom Listening Tests".
8. Nagel, Frederik; Neuendorf, Max (2011). "Exploratory Studies on Perceptual Stationarity in Listening Test – Part II: Synthetic Signals with Time Varying Artifacts".
9. Schinkel-Bielefeld, Nadja (2017). "Audio Quality Evaluation in MUSHRA Tests – Influences between Loop Setting and a Listener's Ratings". 142nd Convention of the Audio Engineering Society.
10. ITU-T P.800 (August 1996). "Methods for subjective determination of transmission quality".
11. Schinkel-Bielefeld, Nadja; Zhang, Jiandong; Qin, Yili; Leschanowsky, Anna Katharina; Fu, Shanshan (2017). "Is it Harder to Perceive Coding Artifact in Foreign Language Items? – A Study with Mandarin Chinese and German Speaking Listeners".
12. Blašková, Lubica; Holub, Jan (2008). "How do Non-native Listeners Perceive Quality of Transmitted Voice?" (PDF). Communications. 10 (4): 11–15. doi:10.26552/com.C.2008.4.11-14.
External links
- webMUSHRA: MUSHRA-compliant experiment software based on the Web Audio API, configurable using YAML
- RateIt: A GUI for performing MUSHRA experiments
- MUSHRAM: a MATLAB interface for MUSHRA listening tests, at the Wayback Machine (archived 2008-10-19)
- A Max/MSP interface for MUSHRA listening tests
- A browser-based audio evaluation tool for running many different tests, including MUSHRA, with no coding needed
- BeaqleJS: HTML5 and JavaScript based framework for listening tests
- mushraJS+Server: based on mushraJS, with a mochiweb server (an Erlang web server)