Mapping visual symbols onto spoken language along the ventral visual stream


Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. Following 2 weeks of training, participants read the trained words while neural activity was measured with functional MRI. Representational similarity analysis on item pairs from the same orthography revealed that right vOT and posterior regions of left vOT were sensitive to basic visual similarity. Left vOT encoded letter identity, and representations became more invariant to letter position along a posterior-to-anterior hierarchy. Item pairs that shared sounds or meanings, but were written in different orthographies with no letters in common, evoked similar neural patterns in anterior left vOT. These results reveal a hierarchical, posterior-to-anterior gradient in vOT, in which representations of letters become increasingly invariant to position and are transformed to convey spoken language information.
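The abstract's core method, representational similarity analysis (RSA), compares correlations between multivoxel activity patterns evoked by pairs of items. A minimal sketch in Python using NumPy, with entirely hypothetical data and pair labels (the study's actual preprocessing and statistics are not described here):

```python
import numpy as np

def rsa_matrix(patterns):
    """Pairwise Pearson correlation between item activity patterns.

    patterns: (n_items, n_voxels) array of voxel responses per item.
    Returns an (n_items, n_items) representational similarity matrix.
    """
    return np.corrcoef(patterns)

# Toy example: 4 items x 10 voxels of simulated (not real) data.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 10))
rsm = rsa_matrix(patterns)

# Hypothetical contrast: pairs sharing a property (e.g., the same
# spoken form in different orthographies) versus pairs that do not.
shared_pairs = [(0, 1)]
unshared_pairs = [(0, 2), (1, 3)]
shared_mean = np.mean([rsm[i, j] for i, j in shared_pairs])
unshared_mean = np.mean([rsm[i, j] for i, j in unshared_pairs])
```

In the study's logic, reliably higher pattern similarity for shared-sound or shared-meaning pairs than for unrelated pairs in a region would indicate that the region encodes spoken-language information rather than visual form.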

Divisions: Life & Health Sciences > Psychology
Additional Information: Copyright © 2019 the Author(s). Published by PNAS. This open access article is distributed under Creative Commons Attribution License 4.0 (CC BY).
Uncontrolled Keywords: Learning, Orthography, Reading, Representation, fMRI, General
Related URLs: https://www.pna ... 8/13/1818575116 (Publisher URL)
http://www.scop ... tnerID=8YFLogxK (Scopus URL)
PURE Output Type: Article
Published Date: 2019-09-03
Published Online Date: 2019-08-19
Accepted Date: 2019-07-03
Authors: Taylor, J S H (ORCID: 0000-0002-1109-8539)
Davis, Matthew H.
Rastle, Kathleen



Version: Published Version

License: Creative Commons Attribution



