A Computational Framework for Understanding Eye–Hand Coordination
Abstract
Although many studies have documented the robustness of
eye–hand coordination, the computational mechanisms underlying such
coordinated movements remain elusive. Here, we review the literature,
highlighting the differences among these mostly phenomenological studies,
and emphasizing the need for a computational architecture
that can explain eye–hand coordination across different tasks. We
outline a recent computational approach that uses the accumulator-model
framework to elucidate the mechanisms involved in coordinating
the two effectors. We suggest that, depending on the behavioral context,
one of two independent mechanisms is flexibly engaged to
generate eye and hand movements. When the context requires
tight coupling between the effectors, a common command is instantiated
to drive both effectors (common mode). Conversely, when the
behavioral context demands flexibility, separate commands are sent
to the eye and hand effectors so that each can be initiated flexibly (separate mode). We
hypothesize that a higher-order executive controller assesses the behavioral
context and enables switching between the two modes. Such a computational
architecture provides a conceptual framework that can explain
the observed heterogeneity in eye–hand coordination.
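To make the distinction between the two modes concrete, the sketch below simulates eye and hand reaction times with a simple LATER-style rise-to-threshold accumulator. It is a minimal illustration under assumed parameters, not the model from the reviewed work: the rate distribution, efferent delays, and noise terms are arbitrary choices made only to show that a shared accumulator (common mode) predicts tightly correlated eye and hand reaction times, whereas independent accumulators (separate mode) predict weak trial-by-trial correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def later_rts(n, mean_rate=5.0, sd_rate=1.0, threshold=1.0):
    """LATER-style accumulator: the rate of rise varies across trials and the
    movement is triggered when the signal reaches threshold, so RT = threshold / rate."""
    rates = np.clip(rng.normal(mean_rate, sd_rate, n), 0.5, None)  # keep rates positive
    return threshold / rates  # decision time in seconds

def simulate(mode, n_trials=10_000, eye_delay=0.05, hand_delay=0.10, motor_noise=0.01):
    """Simulate eye and hand reaction times under the two hypothesized modes."""
    if mode == "common":
        # Common mode: a single shared command (one accumulator) triggers both
        # effectors, which then differ only in downstream delays and motor noise.
        go = later_rts(n_trials)
        eye_go, hand_go = go, go
    elif mode == "separate":
        # Separate mode: independent accumulators initiate each effector on its own.
        eye_go, hand_go = later_rts(n_trials), later_rts(n_trials)
    else:
        raise ValueError(f"unknown mode: {mode}")
    eye_rt = eye_go + eye_delay + rng.normal(0.0, motor_noise, n_trials)
    hand_rt = hand_go + hand_delay + rng.normal(0.0, motor_noise, n_trials)
    return eye_rt, hand_rt

for mode in ("common", "separate"):
    eye_rt, hand_rt = simulate(mode)
    r = np.corrcoef(eye_rt, hand_rt)[0, 1]
    print(f"{mode:8s} mode: mean eye RT {eye_rt.mean():.3f} s, "
          f"mean hand RT {hand_rt.mean():.3f} s, eye–hand RT correlation r = {r:.2f}")
```

In such a sketch, the trial-by-trial correlation between eye and hand reaction times is the behavioral signature that distinguishes the two modes, and comparing that correlation across task contexts is one way the proposed mode switching could be assessed empirically.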