S2S Raw Data.zip (260.04 kB)

Speech-To-Screen listening experiment - raw data
Zip file containing 100 x .mat files (Matlab)
See also separate ReadMe text file for further details

20 participants, 5 files each:
* 1 x practice session
* 4 x actual sessions: SMN_0; SMN_1; SSN_0; SSN_1

16 x combinations of conditions tested:
* 2 x masking noise:
  - SMN - speech-modulated noise
  - SSN - speech-shaped noise
* 2 x video conditions:
  - 0 / off - audio-only
  - 1 / on - video + audio
* 4 x auralizations:
  - INT - 'internalized' - speech + noise in stereo (control)
  - NS - noise in stereo; speech auralized at screen
  - SN - speech in stereo; noise auralized at screen
  - EXT - 'externalized' - speech + noise auralized at screen

Example: 's1-SMN_0' = subject 1, speech-modulated noise, audio-only; the four auralizations were randomized within the session; 80 sentences per session.
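The condition labels combine the three factors mechanically, so they can be split back apart with a small helper. This is an illustrative sketch only; the function name and returned keys are not part of the dataset:

```python
def parse_condition(label):
    """Split a playback-condition label such as 'SMN_0_EXT' into its
    three experimental factors: masker, video, auralization."""
    masker, video, auralization = label.split("_")
    return {
        "masker": masker,              # 'SMN' or 'SSN'
        "video": video == "1",         # '0' = audio-only, '1' = video + audio
        "auralization": auralization,  # 'INT', 'NS', 'SN' or 'EXT'
    }
```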

Speech-In-Noise listening experiment.
GRID corpus audio-visual stimuli = 6-word sentences
Example: 'lbig4n' = 'lay blue in G4 now'
For each sentence played, participants had to identify and enter the 4th and 5th words, i.e. the letter-number combination.
Playback and data entry via a Matlab GUI.
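The 6-character stimulus codes can be expanded back into full sentences. The sketch below assumes the standard GRID corpus word sets (bin/lay/place/set, blue/green/red/white, at/by/in/with, again/now/please/soon); the function name is illustrative:

```python
# Standard GRID corpus word sets, keyed by the character used in the code.
COMMANDS     = {"b": "bin", "l": "lay", "p": "place", "s": "set"}
COLOURS      = {"b": "blue", "g": "green", "r": "red", "w": "white"}
PREPOSITIONS = {"a": "at", "b": "by", "i": "in", "w": "with"}
ADVERBS      = {"a": "again", "n": "now", "p": "please", "s": "soon"}

def decode_grid(code):
    """Expand a 6-character GRID code, e.g. 'lbig4n', into the sentence."""
    cmd, col, prep, letter, digit, adv = code
    return " ".join([COMMANDS[cmd], COLOURS[col], PREPOSITIONS[prep],
                     letter.upper() + digit, ADVERBS[adv]])
```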

Each .mat file contains a 1 x 1 struct with the following fields:
- 'duration' = time taken (seconds) for section
- 'subj' = participant ID (1-20)
- 'sent' = 80 x 2 cell
- column 1: audio file played, e.g. 'lbig4n.wav'
- column 2: playback condition, e.g. SMN_0_EXT
- 'heard' = response entered by the participant, e.g. 'g4'
- 'rts' = reaction time (seconds)
- 'speaker' = GRID speaker ID
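Because the target letter-number pair is simply the 4th and 5th characters of the stimulus code, a trial can be scored directly from the 'sent' and 'heard' fields. A minimal sketch, in which the function name and the case-insensitive comparison are my assumptions, not part of the dataset:

```python
def score_trial(wav_name, heard):
    """Return True if the typed response matches the target letter + digit
    embedded in the stimulus filename."""
    code = wav_name.rsplit(".", 1)[0]   # 'lbig4n.wav' -> 'lbig4n'
    target = code[3:5]                  # 4th + 5th words: letter, digit
    return heard.strip().lower() == target.lower()
```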

Funding

EP/L000539/1
