Orthographic uncertainty: An entropy-based measure of word form typicality.

Chris F. Westbury (University of Alberta) and Michelle Yang (McGill University)

Wed Jun 11, 15:00-16:15

Abstract: Measures of orthographic typicality have long been studied as predictors of lexical access. The best-known orthographic typicality measure is orthographic neighbourhood size (Coltheart's N, or ON), the number of words that differ from the target word by a single letter substitution. A more recent related measure is orthographic Levenshtein distance 20 (OLD20), the average Levenshtein edit distance from a target word to its 20 closest neighbours (Yarkoni, Balota, and Yap, 2008). Both measures have been implicated in lexical access. We will discuss a family of word form similarity measures we call orthographic uncertainty. These measures are based on Shannon entropy (Shannon, 1948), which has a long history of being considered psychologically relevant. Orthographic uncertainty measures are superior to ON and OLD20 at predicting word/nonword decision times and word reading times and accuracies. They are also superior to the older measures insofar as they are naturally tied to the widely accepted Shannon-entropy quantification of the psychological functions of familiarity, uncertainty, learnability, and representational and computational efficiency.
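To make the two baseline measures concrete, the sketch below computes ON (single-substitution neighbours) and OLD20 (mean Levenshtein distance to the k nearest neighbours) as they are defined in the abstract, plus a generic Shannon entropy function. The abstract does not specify how the authors' orthographic uncertainty measures are constructed, so the positional-letter entropy shown at the end is only one illustrative possibility, not their method; the toy lexicon, the small k, and all function names are assumptions for demonstration.

```python
from collections import Counter
from math import log2

def coltheart_n(word, lexicon):
    """ON: number of lexicon words differing from `word` by exactly
    one letter substitution (same length, one mismatched position)."""
    return sum(
        1 for w in lexicon
        if len(w) == len(word) and w != word
        and sum(a != b for a, b in zip(w, word)) == 1
    )

def levenshtein(a, b):
    """Standard dynamic-programming edit distance
    (insertion, deletion, substitution, each cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def old20(word, lexicon, k=20):
    """OLD20: mean Levenshtein distance from `word` to its k closest
    neighbours (Yarkoni, Balota, and Yap, 2008)."""
    dists = sorted(levenshtein(word, w) for w in lexicon if w != word)
    return sum(dists[:k]) / min(k, len(dists))

def shannon_entropy(counts):
    """Shannon entropy in bits of a frequency distribution (Shannon, 1948)."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

# Toy lexicon for illustration only; a real analysis would use a full word list.
lexicon = ["cat", "cot", "cut", "bat", "rat", "car", "care", "core", "bore"]
print(coltheart_n("cat", lexicon))  # single-substitution neighbours of "cat"
print(old20("cat", lexicon, k=5))   # mean distance to 5 nearest (toy k)
# One possible entropy-based ingredient: the letter distribution in position 0.
print(shannon_entropy(Counter(w[0] for w in lexicon)))
```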

Computer science, Mathematics

Audience: researchers in the discipline


Seminar on Algorithmic Aspects of Information Theory

Series comments: This online seminar is a follow-up to Dagstuhl Seminar 22301, www.dagstuhl.de/en/program/calendar/semhp/?semnr=22301.

Organizer: Andrei Romashchenko*
*contact for this listing
