Test of new concepts for the subtitle and sign language services

In May and June we tested new approaches to subtitle and sign language representation in 360° videos at CCMA and RBB. The feedback we received from deaf and hard of hearing people in pilot phase 1 (autumn 2018) led to new implementations. The aim of this test was to understand the target group's preferences and gather further ideas to improve both services.

Subtitles

We tested a new presentation mode for the subtitle service in which the subtitles are fixed to the speaker.

 
Figure 1: Arrow leads the user to the speaker (ARTE: I, Philip)

Figure 2: Subtitle displayed below the speaker (ARTE: I, Philip)
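To make the speaker-fixed mode concrete, here is a minimal TypeScript sketch of the rendering decision: place the subtitle below the speaker when they are inside the current field of view, otherwise show an arrow that guides the user towards them (cf. Figures 1 and 2). All names (`SubtitleCue`, `renderSubtitle`) and the field-of-view value are assumptions for illustration, not part of the player used in the tests.

```typescript
// Hypothetical types and values; not taken from the actual test player.
interface SubtitleCue {
  text: string;
  speakerYaw: number; // horizontal position of the speaker in the scene, in degrees
}

type SubtitleRendering =
  | { mode: "at-speaker"; yaw: number; text: string }
  | { mode: "arrow"; direction: "left" | "right"; text: string };

const FIELD_OF_VIEW = 90; // assumed horizontal field of view of the viewport, in degrees

/** Normalises an angle difference into the range (-180, 180]. */
function angleDiff(a: number, b: number): number {
  let d = (a - b) % 360;
  if (d > 180) d -= 360;
  if (d <= -180) d += 360;
  return d;
}

/**
 * Decides how to render a speaker-fixed subtitle: below the speaker when
 * they are inside the field of view, otherwise with an arrow that points
 * along the shorter rotation towards them. Positive yaw is assumed to be
 * to the viewer's right.
 */
function renderSubtitle(cue: SubtitleCue, viewerYaw: number): SubtitleRendering {
  const d = angleDiff(cue.speakerYaw, viewerYaw);
  if (Math.abs(d) <= FIELD_OF_VIEW / 2) {
    return { mode: "at-speaker", yaw: cue.speakerYaw, text: cue.text };
  }
  return { mode: "arrow", direction: d > 0 ? "right" : "left", text: cue.text };
}
```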

Another focus was on the representation of sound events with emojis. We wanted to find out whether users prefer the traditional text-based representation or emojis.
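As a minimal sketch of how a player could switch between the two representations at render time, assuming a per-user preference flag; the event labels, emoji mapping, and the function name `renderSoundEvent` are purely illustrative:

```typescript
// Hypothetical mapping of sound-event labels to emoji; the labels and
// symbols actually used in the tests may differ.
const SOUND_EVENT_EMOJI: Record<string, string> = {
  "music": "🎵",
  "phone ringing": "📞",
  "knocking": "🚪",
  "applause": "👏",
};

/** Renders a sound event either as an emoji or as a traditional text label. */
function renderSoundEvent(label: string, preferEmoji: boolean): string {
  if (preferEmoji) {
    const emoji = SOUND_EVENT_EMOJI[label];
    if (emoji) return emoji;
  }
  // Traditional representation: the event label in brackets, e.g. "[music]".
  return `[${label}]`;
}

// Usage:
console.log(renderSoundEvent("music", true));  // "🎵"
console.log(renderSoundEvent("music", false)); // "[music]"
```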

Sign language service

In previous tests we learned that the traditional way of identifying different speakers in sign language interpreting, shifting the body to the left or the right, does not work for 360° videos. Since the user is free to look around in the scene, the interpreter's movement may not point to the actual position of the speaker.

We tested two different approaches to identifying the current speaker: displaying either 1) the speaker's name or 2) an emoji representing the speaker below the sign language video.

Figure 3: Sign language interpreter plus speaker name and arrow (ARTE: I, Philip)
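A small sketch of the identification logic, assuming the player keeps a registry of speakers with a name and an emoji per speaker; all identifiers here are hypothetical:

```typescript
// Hypothetical speaker registry; names and emoji are illustrative only.
interface Speaker {
  id: string;
  name: string;
  emoji: string; // symbol chosen to stand for this speaker
}

type IdentificationMode = "name" | "emoji";

/**
 * Builds the identification line shown below the sign language video for
 * the current speaker, following the two approaches described above.
 */
function speakerIdentification(speaker: Speaker, mode: IdentificationMode): string {
  return mode === "name" ? speaker.name : speaker.emoji;
}

// Usage: the overlay updates whenever the current speaker changes.
const philip: Speaker = { id: "s1", name: "Philip", emoji: "🤖" };
console.log(speakerIdentification(philip, "name"));  // "Philip"
console.log(speakerIdentification(philip, "emoji")); // "🤖"
```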

Combination of sign language and subtitles service

In pilot phase 1, some users were interested in a combination of both services to gain a better understanding of the content. The example shown was an excerpt from a recording of the opera Romeo and Juliet at the Gran Teatre del Liceu (Barcelona).

Figure 4: Simultaneous display of subtitles and sign language
