Cognitive processing of the extra visual layer of live captioning in simultaneous interpreting. Triangulation of eye-tracked process and performance data
DOI: 10.1016/j.amper.2023.100131
Publication Date: 2023-06-21
AUTHORS (2)
ABSTRACT
While real-time automatic captioning has become available on various online meeting platforms, it poses additional cognitive challenges for interpreters because it adds an extra layer of information processing to interpreting. Against this background, this empirical study investigates the cognitive processing of live captioning in simultaneous interpreting on Zoom Meetings. 13 trainees from a postgraduate professional training programme were recruited for an eye-tracking experiment on simultaneous interpreting under two conditions: with live captioning on and off. Their eye movement data and interpreting performance were collected during the experiment. Three research questions were explored: 1) How do interpreters process the visual input from live captioning? 2) Which types of segments tax more cognitive resources? 3) Is there a significant difference in interpreting accuracy between the conditions with and without live captioning? The results showed the following findings: although participants were observed to constantly shift their attention between the live transcript area and the non-live area, they tended to consciously keep their attention on the transcript when numbers and proper names appeared. With live captioning on, segments containing a higher density of numbers required more cognitive effort than those containing proper names. There was an improvement in the accuracy of rendering numbers and proper names with live captioning on.