Data and supporting figures for the JAES paper "Intelligent rendering of object-based audio: Elicitation of Expert Knowledge to Inform Object-Based Audio Rendering to Different Systems"
Dataset posted on 23.01.2018 by James Stephen Woodcock, William J. Davies, Trevor John Cox, and Frank Melchior
This repository contains the stimuli and data underlying the JAES publication "Elicitation of Expert Knowledge to Inform Object-Based Audio Rendering to Different Systems".
The zip archive "stimuli.wav" contains the four clips used in the experiment. The channel order is as follows:
Ch 1 - 24: 9+10+3 VBAP render
Ch 27 - 35: 4+5+0 VBAP render
Ch 37 - 41: 0+5+0 VBAP render
Ch 43 - 44: 0+2+0 VBAP render
Ch 46 - 51: 0+5+0 Matrix downmix
Ch 52 - 53: 0+2+0 Matrix downmix
Note: All other channels are LFE channels which were not used in the experiment.
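For working with the stimuli programmatically, the channel ranges above can be turned into array slices. The following is a minimal Python sketch; the dictionary keys and the helper name are ours, the ranges are copied verbatim from the list above, and loading the audio itself (e.g. with a library such as soundfile) is left to the reader:

```python
import numpy as np

# 1-based inclusive channel ranges, copied from the layout listed above.
RENDERS = {
    "9+10+3 VBAP":   (1, 24),
    "4+5+0 VBAP":    (27, 35),
    "0+5+0 VBAP":    (37, 41),
    "0+2+0 VBAP":    (43, 44),
    "0+5+0 matrix":  (46, 51),
    "0+2+0 matrix":  (52, 53),
}

def extract_render(audio, name):
    """Return the channels for one render from a (samples, channels) array."""
    start, stop = RENDERS[name]
    # Convert the 1-based inclusive range to a 0-based Python slice.
    return audio[:, start - 1:stop]

# Example with dummy data shaped like one 53-channel clip.
clip = np.zeros((48000, 53))
print(extract_render(clip, "0+5+0 VBAP").shape)  # (48000, 5)
```

Remember that any channels outside these ranges are LFE channels and were not used in the experiment.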
The R script PUBLIC_text_mining_results.R replicates the results of the text mining described in the paper. It requires the data contained in the zip archive "text_all.zip".
The R script PUBLIC_followup_interview_results.R replicates the results of the regression modelling described in the paper. It requires the data contained in "interview_data.csv" and "object_coding.csv".