Motor coordination

Motor coordination is shown in this animated sequence by Eadweard Muybridge of himself throwing a disk

Motor coordination is the combination of body movements created with the kinematic (such as spatial direction) and kinetic (force) parameters that result in intended actions. Motor coordination is achieved when subsequent parts of the same movement, or the movements of several limbs or body parts, are combined in a manner that is well timed, smooth, and efficient with respect to the intended goal. This involves the integration of proprioceptive information detailing the position and movement of the musculoskeletal system with the neural processes in the brain and spinal cord which control, plan, and relay motor commands. The cerebellum plays a critical role in this neural control of movement, and damage to this part of the brain or its connecting structures and pathways results in impairment of coordination, known as ataxia.

Properties

Nonexact reproduction

Examples of motor coordination are the ease with which people can stand up, pour water into a glass, walk, and reach for a pen. These movements are produced reliably, proficiently, and repeatedly, yet they are rarely reproduced exactly in their motor details, such as the joint angles used when pointing[1] or standing up from sitting.[2]

Combination

The complexity of motor coordination can be seen in the task of picking up a bottle of water and pouring it into a glass. This apparently simple task is actually a combination of complex subtasks that are processed at different levels. The levels of processing include: (1) for the prehension movement to the bottle, the reach and the hand configuration have to be coordinated; (2) when lifting the bottle, the load and the grip force applied by the fingers need to be coordinated to account for the weight, fragility, and slipperiness of the bottle; and (3) when pouring the water from the bottle into the glass, the actions of both arms, one holding the glass and the other pouring the water, need to be coordinated with each other. This coordination also involves all of the eye–hand coordination processes. The brain interprets actions as spatial-temporal patterns, and when each hand performs a different action simultaneously, bimanual coordination is involved.[3] Additional levels of organization are required depending on whether the person will drink from the glass, give it to someone else, or simply put it on a table.[4]

Degree of freedom problem

The problem of understanding motor coordination arises from the biomechanical redundancy caused by the large number of musculoskeletal elements involved. These elements create many degrees of freedom by which any action can be performed, because of the range of ways of arranging, rotating, extending, and combining the various muscles, joints, and limbs in a motor task. Several hypotheses have been developed to explain how the nervous system selects a particular solution from the large set of possible solutions that can accomplish the task or motor goal equally well.[5]
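
This redundancy can be illustrated with a simple kinematic sketch. In the following Python example the link lengths and the target point are illustrative assumptions, not values from the literature; it shows that a planar arm with three joints has infinitely many joint configurations that place the fingertip on the same target, since sweeping the free hand orientation yields a one-parameter family of equally valid solutions.

```python
import numpy as np

# Hypothetical link lengths (metres) for a planar three-joint arm.
L1, L2, L3 = 0.30, 0.25, 0.15
TARGET = np.array([0.45, 0.20])   # illustrative fingertip target

def forward_kinematics(q):
    """Fingertip position for relative joint angles q = (q1, q2, q3)."""
    q1, q2, q3 = q
    x = L1*np.cos(q1) + L2*np.cos(q1 + q2) + L3*np.cos(q1 + q2 + q3)
    y = L1*np.sin(q1) + L2*np.sin(q1 + q2) + L3*np.sin(q1 + q2 + q3)
    return np.array([x, y])

def ik_for_hand_orientation(phi):
    """One inverse-kinematics solution with the last link held at absolute angle phi.
    Returns None if the target cannot be reached with that hand orientation."""
    wrist = TARGET - L3*np.array([np.cos(phi), np.sin(phi)])
    c2 = (wrist @ wrist - L1**2 - L2**2) / (2*L1*L2)
    if abs(c2) > 1.0:                       # wrist point outside the two-link workspace
        return None
    q2 = np.arccos(c2)                      # "elbow-down" branch
    q1 = np.arctan2(wrist[1], wrist[0]) - np.arctan2(L2*np.sin(q2), L1 + L2*np.cos(q2))
    return np.array([q1, q2, phi - q1 - q2])

# Sweeping the free hand orientation gives a one-parameter family of distinct
# joint configurations, all of which put the fingertip on the same target.
for phi in np.linspace(-np.pi/2, np.pi/2, 7):
    q = ik_for_hand_orientation(phi)
    if q is not None:
        err = np.linalg.norm(forward_kinematics(q) - TARGET)
        print(f"phi = {phi:+.2f} rad   q = {np.round(q, 2)}   endpoint error = {err:.1e}")
```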

Theories

Muscle synergies

Nikolai Bernstein proposed the existence of muscle synergies as a neural strategy for simplifying the control of multiple degrees of freedom.[5] A functional muscle synergy is defined as a pattern of co-activation of muscles recruited by a single neural command signal.[6] One muscle can be part of multiple muscle synergies, and one synergy can activate multiple muscles. The current method of finding muscle synergies is to measure EMG (electromyography) signals from the muscles involved in a certain movement so that specific patterns of muscle activation can be identified. Statistical analyses are applied to the filtered EMG data to determine the number of muscle synergies that best represents the original EMG. A reduced number of control elements (muscle synergies) is combined to form a continuum of muscle activation for smooth motor control during various tasks. These synergies work together to produce movements such as walking or balance control. The direction of a movement affects how the motor task is performed (e.g. walking forward vs. walking backward, each of which uses different levels of contraction in different muscles). Researchers have measured EMG signals for perturbations applied in multiple directions in order to identify the muscle synergies that are present across all directions.[7]
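
The text above does not name the statistical method, but non-negative matrix factorization is one technique commonly used for this kind of decomposition. The sketch below (using synthetic data and the scikit-learn library, both assumptions made for illustration) extracts candidate synergies from a matrix of rectified, filtered EMG and reports how much of the signal each model size explains.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic example: 8 muscles, 500 time samples, generated from 3 "true" synergies.
n_muscles, n_samples, n_true = 8, 500, 3
t = np.linspace(0.0, 1.0, n_samples)
W_true = rng.random((n_muscles, n_true))                       # muscle weightings per synergy
H_true = np.vstack([np.abs(np.sin(2*np.pi*(i + 1)*t + i))      # distinct activation profiles
                    for i in range(n_true)])
emg = W_true @ H_true + 0.05*rng.random((n_muscles, n_samples))  # rectified EMG (non-negative)

# Fit decompositions with increasing numbers of synergies and track the
# variance accounted for (VAF) by each reconstruction.
for k in range(1, 6):
    model = NMF(n_components=k, init="nndsvda", max_iter=1000)
    W = model.fit_transform(emg)      # muscle weightings
    H = model.components_             # synergy activation time courses
    vaf = 1.0 - np.sum((emg - W @ H)**2) / np.sum(emg**2)
    print(f"{k} synergies: VAF = {vaf:.3f}")

# The smallest number of synergies whose VAF exceeds a chosen threshold
# (commonly around 0.90-0.95) is taken as the number underlying the EMG.
```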

Initially, it was thought that muscle synergies eliminated redundant control by constraining the movements of certain joints or muscles to a limited number of degrees of freedom (flexion and extension synergies). However, whether these muscle synergies are a neural strategy or whether they are the result of kinematic constraints has been debated.[8] More recently, the term sensory synergy has been introduced, supporting the assumption that synergies are neural strategies for handling both the sensory and motor systems.[9]

Uncontrolled manifold hypothesis

A more recent hypothesis proposes that the central nervous system does not eliminate the redundant degrees of freedom, but instead uses all of them to ensure flexible and stable performance of motor tasks. The central nervous system makes use of this abundance in the redundant systems instead of restricting them, as previously hypothesized. The uncontrolled manifold (UCM) hypothesis provides a way to quantify a muscle synergy.[10] This hypothesis defines "synergy" somewhat differently from the definition above: a synergy represents an organization of elemental variables (degrees of freedom) that stabilizes an important performance variable. An elemental variable is the smallest sensible variable that can be used to describe a system of interest at a selected level of analysis, and a performance variable refers to a potentially important variable produced by the system as a whole. For example, in a multi-joint reaching task, the angles and positions of the joints are the elemental variables, and the performance variables are the endpoint coordinates of the hand.[10]

This hypothesis proposes that the controller (the brain) acts in the space of elemental variables (i.e. the rotations shared by the shoulder, elbow, and wrist in arm movements) and selects within the space of manifolds (i.e. sets of angular values corresponding to a final position). The hypothesis acknowledges that variability is always present in human movement, and it categorizes it into two types: (1) bad variability and (2) good variability. Bad variability affects the important performance variable and causes large errors in the final result of a motor task, whereas good variability leaves the performance variable unchanged and maintains a successful outcome. An interesting example of good variability has been observed in the movements of the tongue, which are responsible for speech production.[11] Prescribing the level of stiffness of the tongue body creates some variability (in terms of the acoustic parameters of speech, such as formants) which is, however, not significant for the quality of speech (at least within a reasonable range of stiffness levels).[12] One possible explanation is that the brain works only to decrease the bad variability that hinders the desired final result, and it does so by increasing the good variability in the redundant domain.[10]
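
In quantitative terms, UCM analysis is often carried out by linearizing the map from elemental to performance variables and splitting trial-to-trial variance into a component that leaves the performance variable unchanged ("good") and a component that changes it ("bad"). The sketch below is a minimal illustration for a planar three-joint arm; the link lengths, mean posture, and simulated trial-to-trial noise are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Planar three-joint arm: elemental variables are the three joint angles,
# the performance variable is the 2-D fingertip position. Link lengths are hypothetical.
L = np.array([0.30, 0.25, 0.15])

def jacobian(q):
    """Derivative of fingertip position with respect to the joint angles."""
    s = np.cumsum(q)                  # absolute link angles
    J = np.zeros((2, 3))
    for j in range(3):                # joint j rotates every link distal to it
        J[0, j] = -np.sum(L[j:]*np.sin(s[j:]))
        J[1, j] =  np.sum(L[j:]*np.cos(s[j:]))
    return J

q_mean = np.array([0.4, 0.8, -0.5])   # hypothetical mean posture across trials

# Linearised uncontrolled manifold = null space of the Jacobian at the mean posture.
_, _, Vt = np.linalg.svd(jacobian(q_mean))
ucm_basis = Vt[2:].T                  # joint-space direction that leaves the fingertip fixed
ort_basis = Vt[:2].T                  # directions that move the fingertip

# Simulated trial-to-trial deviations: small isotropic noise plus extra variability
# along the UCM, mimicking a controller that lets task-irrelevant combinations vary.
dq = 0.02*rng.standard_normal((200, 3)) + 0.10*rng.standard_normal((200, 1))*ucm_basis.T

v_ucm = np.mean(np.var(dq @ ucm_basis, axis=0))   # "good" variance per degree of freedom
v_ort = np.mean(np.var(dq @ ort_basis, axis=0))   # "bad" variance per degree of freedom
print(f"V_UCM = {v_ucm:.4f}   V_ORT = {v_ort:.4f}   ratio = {v_ucm/v_ort:.1f}")
# A synergy stabilising the fingertip shows up as V_UCM substantially larger than V_ORT.
```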

Types

Inter-limb

Inter-limb coordination concerns how movements are coordinated across limbs. J. A. Scott Kelso and colleagues have proposed that coordination can be modeled as coupled oscillators, a process that can be understood in the HKB (Haken, Kelso, and Bunz) model.[13] The coordination of complex inter-limb tasks relies heavily on temporal coordination. An example of such temporal coordination can be observed in free pointing movements of the eyes, hands, and arms directed at the same motor target; the coordination signals are sent to their effectors simultaneously. In bimanual tasks (tasks involving two hands), the functional segments of the two hands have been found to be tightly synchronized. One of the postulated explanations for this functionality is the existence of a higher-level "coordinating schema" that calculates the time needed to perform each individual task and coordinates them using a feedback mechanism. Several areas of the brain have been found to contribute to the temporal coordination of the limbs needed for bimanual tasks, including the premotor cortex (PMC), the parietal cortex, the mesial motor cortices, more specifically the supplementary motor area (SMA) and the cingulate motor cortex (CMC), the primary motor cortex (M1), and the cerebellum.[14]
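
In its simplest symmetric form, the HKB model describes the relative phase phi between two rhythmically moving limbs by dphi/dt = -a*sin(phi) - 2b*sin(2*phi), where the ratio b/a falls as movement frequency rises. The sketch below (parameter values chosen only for illustration) integrates this equation and reproduces the model's signature prediction: anti-phase movement is stable at low frequencies but switches to in-phase movement when b/a becomes small.

```python
import numpy as np

def hkb_rate(phi, a=1.0, b=1.0):
    """HKB equation of motion for the relative phase phi between two limbs."""
    return -a*np.sin(phi) - 2.0*b*np.sin(2.0*phi)

def settle(phi0, b_over_a, dt=0.01, steps=5000):
    """Euler-integrate the relative phase from phi0 and return where it settles."""
    phi = phi0
    for _ in range(steps):
        phi += dt*hkb_rate(phi, a=1.0, b=b_over_a)
    return phi

phi0 = np.pi - 0.1   # start slightly perturbed away from anti-phase

# At a large b/a (slow movement) anti-phase remains stable; at a small b/a
# (fast movement) it loses stability and the system switches to in-phase.
print("slow movement (b/a = 1.0):", round(settle(phi0, 1.0), 2))   # stays near pi
print("fast movement (b/a = 0.1):", round(settle(phi0, 0.1), 2))   # ends near 0
```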

Intra-limb

Intra-limb coordination involves the planning of trajectories in Cartesian space.[4] This reduces the computational load and the degrees of freedom of a given movement, and it constrains the limb to act as one unit instead of a set of muscles and joints. This concept is similar to "muscle synergies" and "coordinative structures." One example is the Hogan and Flash minimum-jerk model,[15] which proposes that the parameter the nervous system controls is the spatial path of the hand, i.e. the end-effector (implying that the movement is planned in Cartesian coordinates). Other early studies showed that the end-effector follows a regularized kinematic pattern[16] relating the curvature of the movement to its speed, and that the central nervous system is devoted to its coding.[17] In contrast to this model, the joint-space model postulates that the motor system plans movements in joint coordinates; here the controlled parameter is the position of each joint contributing to the movement. Control strategies for goal-directed movement differ according to the task the subject is assigned. This was demonstrated by testing two different conditions: (1) subjects moved a cursor held in the hand to the target, and (2) subjects moved their free hand to the target. Each condition produced a different trajectory: (1) a straight path and (2) a curved path.[18]
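
The minimum-jerk model has a well-known closed-form solution for point-to-point movements: the hand travels along a straight line with a smooth fifth-order time profile and a bell-shaped speed curve. The sketch below implements that closed form; the start point, goal, and movement duration are arbitrary illustrative values.

```python
import numpy as np

def minimum_jerk(start, goal, duration, n=9):
    """Closed-form minimum-jerk trajectory between two hand positions.
    The path is a straight line traversed with a bell-shaped speed profile."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    t = np.linspace(0.0, duration, n)
    tau = t/duration
    s = 10*tau**3 - 15*tau**4 + 6*tau**5                        # smooth 0 -> 1 blend
    pos = start + s[:, None]*(goal - start)
    vel = (goal - start)*((30*tau**2 - 60*tau**3 + 30*tau**4)/duration)[:, None]
    return t, pos, vel

# Illustrative reach: a 0.8 s movement from the origin to a nearby point.
t, pos, vel = minimum_jerk(start=(0.0, 0.0), goal=(0.30, 0.10), duration=0.8)
for ti, p, v in zip(t, pos, np.linalg.norm(vel, axis=1)):
    print(f"t = {ti:.2f} s   hand = ({p[0]:.3f}, {p[1]:.3f})   speed = {v:.3f} m/s")
# Speed is zero at both endpoints and peaks at mid-movement, the bell-shaped
# profile characteristic of point-to-point reaching.
```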

Eye–hand

Eye–hand coordination concerns how eye movements are coordinated with and affect hand movements. A typical finding is that the eye fixates on an object before the hand begins moving towards it.[19]

Learning

Bernstein proposed that individuals learn coordination by first restricting the degrees of freedom that they use. Controlling only a limited set of degrees of freedom allows the learner to simplify the dynamics of the body parts involved and the range of movement options. Once the individual has gained some proficiency, these restrictions can be relaxed, allowing the full potential of the body to be used.[5]

See also

References

  1. Lua error in package.lua at line 80: module 'strict' not found.
  2. Lua error in package.lua at line 80: module 'strict' not found.
  3. Lua error in package.lua at line 80: module 'strict' not found.
  4. Lua error in package.lua at line 80: module 'strict' not found.
  5. Bernstein N. (1967). The Coordination and Regulation of Movements. New York: Pergamon Press. OCLC 301528509.
  6. Lua error in package.lua at line 80: module 'strict' not found.
  7. Lua error in package.lua at line 80: module 'strict' not found.
  8. Lua error in package.lua at line 80: module 'strict' not found.
  9. Lua error in package.lua at line 80: module 'strict' not found.
  10. Lua error in package.lua at line 80: module 'strict' not found.
  11. More precisely, the movements of the tongue were modeled by means of a biomechanical tongue model (BTM), controlled by an optimum internal model which minimizes the length of the path traveled in the internal space during the production of the sequences of tasks (see Blagouchine & Moreau).
  12. Iaroslav Blagouchine and Eric Moreau. Control of a Speech Robot via an Optimum Neural-Network-Based Internal Model with Constraints. IEEE Transactions on Robotics, vol. 26, no. 1, pp. 142–159, February 2010.
  13. Lua error in package.lua at line 80: module 'strict' not found.
  14. Lua error in package.lua at line 80: module 'strict' not found.
  15. Lua error in package.lua at line 80: module 'strict' not found.
  16. Lua error in package.lua at line 80: module 'strict' not found.
  17. Lua error in package.lua at line 80: module 'strict' not found.
  18. Lua error in package.lua at line 80: module 'strict' not found.
  19. Lua error in package.lua at line 80: module 'strict' not found.