Use of Human Motion Data to Train Wearable Robots

Article History: Received: 11 November 2020; Accepted: 27 December 2020; Published online: 05 April 2021

Abstract: The development of wearable robots is accelerating. Walking robots mimic human behavior and must operate without accidents, so human motion data are needed to train them. We developed a system for extracting human motion data and displaying them graphically. Motion data were extracted using a Perception Neuron motion capture system, and the Unity engine was used for the simulation. Several experiments were performed to demonstrate the accuracy of the extracted motion data. Of the various methods used to collect human motion data, markerless motion capture is highly inaccurate, while optical motion capture is very expensive, requiring several high-resolution cameras and a large number of markers. Motion capture using a magnetic field sensor is subject to environmental interference. Therefore, we used an inertial motion capture system. Each movement sequence involved four motions and was repeated 10 times. The data were stored and standardized. The motions of three individuals were compared to those of a reference person; the similarity exceeded 90% in all cases. Our rehabilitation robot accurately simulated human movements; individually tailored wearable robots could be designed based on our data. Safe and stable robot operation can be verified in advance via simulation, and walking stability can be increased using walking robots trained via machine learning algorithms.


Introduction
Wearable robots are used to rehabilitate individuals suffering from neurological and musculoskeletal disorders, and to assist elderly persons with muscle weakness. The use of wearable robots developed for various purposes has increased rapidly since 2010 [1]. Walking robots may be required by the elderly, amputees, and individuals with lower-body paralysis caused by spinal cord disorders. Robots with medical and healthcare applications must effectively simulate human motion, for which data-processing software is needed [1]. In this study, we collected human motion data, simulated joint movements, and compared motions between pairs of individuals. Our data could facilitate the design and fabrication of individualized wearable robots for rehabilitative purposes. Section 2 discusses relevant studies on monitoring systems, motion capture, and data analysis. Section 3 presents a simulation of the motions of a real individual, shown via a graphical user interface (GUI). Section 4 analyzes the motion data and provides conclusions.

Related Work
Human motion has been captured in various ways. "Human feature points" have been captured by cameras tracking markers or sensors attached to the body. Humanoids imitating human movements have also been fabricated, and the similarity between their movements and those of actual humans has been assessed.
Simulations of human movements have explored the structure of the neuromuscular skeletal system and movement dynamics.

Monitoring System
OpenSim software (Simbios, Stanford, CA, USA) can be used to simulate muscle and joint movement, strength, and dynamics. Unlike a previous musculoskeletal structural model, our model considers joint angles [4]. Any study of human movement must consider how muscles interact during movement. In a previous study, only joint rotation or flexion was measured while an individual was walking, and only "muscle stimulation signals" and the energy consumed by muscles were simulated [5].

Motion Capture
One markerless motion capture study evaluated normal and abnormal human movements. However, the reference values varied according to the number of cameras used [6]. In contrast, we used a single reference value, and obtained more accurate data using our motion capture equipment. Another study extracted human motion data and applied them to humanoid robots, which then imitated human movements. The human motion data were modified prior to transfer to the robots through application of kinematic constraints [7]. In another study, human arm movements were emulated by robots capable of human-like actions. Motion data were extracted in a markerless manner. However, as only one joint angle was used, the desired motion was not achieved [8]. In this study, data were extracted much more accurately.

Motion Capture Data Analysis
Motion capture data derived from actors and 3D models have previously been compared. In one study, data were collected by several cameras, and joint angles and angular velocity were measured [9]. Another study compared two motions in terms of the spatial curve and relative joint positions [10]. However, those studies were not conducted with a view to real-world applications. In the present study, we analyzed the motions of several individuals, and compared them with those of a reference person.

Human Motion Simulation using Motion Capture Data
We used Unity software (Unity Technologies, San Francisco, CA, USA) for the simulation. The Unity engine can be used to create 3D and 2D virtual environments and animations [11]. A Perception Neuron motion capture system (Beijing, China) was used; each of its sensors has a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetic field sensor [12].

Equipment
The Perception Neuron motion capture equipment is worn on the body, with a sensor chip attached at each joint axis. The accompanying Axis Neuron software (Perception Neuron) runs on a PC, which receives the sensor data via a hub.

Data Collection
After the participant put on the motion capture device and Axis Neuron was initialized, the sensor positions were adjusted as necessary and the motions were then performed. Normal operation was confirmed visually, and recalibration was performed when needed.

Figure 4 Conversion of motion data files
The 'bvh' files were converted to 'csv' files in Linux using the Python 'bvh2csv' command. After conversion, 'rot.csv' and 'pos.csv' files were available; we used the 'pos.csv' files for data analysis.
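A minimal sketch of loading the converted position data follows. The column layout and names (e.g., 'RightHand-X') are assumptions for illustration, since the actual 'pos.csv' schema depends on the converter:

```python
import csv
import io

# Hypothetical excerpt of a converted 'pos.csv' file: one row per frame,
# with X/Y/Z position columns per sensor (column names are assumptions).
POS_CSV = """frame,RightHand-X,RightHand-Y,RightHand-Z
0,0.10,1.20,0.05
1,0.12,1.45,0.06
2,0.11,1.80,0.05
"""

def load_positions(text, sensor="RightHand"):
    """Parse the per-frame (x, y, z) positions of one sensor from a pos.csv."""
    reader = csv.DictReader(io.StringIO(text))
    cols = [f"{sensor}-{axis}" for axis in ("X", "Y", "Z")]
    return [tuple(float(row[c]) for c in cols) for row in reader]

frames = load_positions(POS_CSV)
print(frames[0])  # the first frame's (x, y, z) tuple
```

In practice, each of the 59 sensor positions would be loaded the same way, one column triple per sensor.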

Analysis of Motion Similarities
The 'Skeleton' function in Axis Neuron can extract data from a total of 59 positions [12]. Four individuals each repeated a movement sequence of four motions 10 times; the data were averaged and visualized. The individual showing the most consistent motions across trials served as the reference person, against whom the data of the other three individuals were compared. The similarity of the motions between each individual and the reference person was analyzed using the Euclidean distance method, based on the raw sensor position data.
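The averaging and Euclidean-distance comparison described above can be sketched as follows. The normalization used to turn distances into a percentage score is our own assumption; the study states only that the Euclidean distance method was applied to the raw sensor positions:

```python
import math

def average_trials(trials):
    """Average repeated recordings of one motion, frame by frame.
    Each trial is an equal-length list of (x, y, z) sensor positions."""
    n = len(trials)
    return [tuple(sum(trial[i][k] for trial in trials) / n for k in range(3))
            for i in range(len(trials[0]))]

def euclidean_similarity(reference, other):
    """Compare two equal-length position traces via per-frame Euclidean
    distance, mapped to a 0-100% score. Normalizing by the reference
    trace's spatial range is an illustrative assumption."""
    dists = [math.dist(p, q) for p, q in zip(reference, other)]
    mean_d = sum(dists) / len(dists)
    span = max(math.dist(p, reference[0]) for p in reference) or 1.0
    return max(0.0, 100.0 * (1.0 - mean_d / span))

ref = average_trials([[(0, 0, 0), (1, 1, 0)], [(0, 0, 0), (1, 1, 0)]])
print(euclidean_similarity(ref, ref))  # identical traces score 100.0
```

In the study, such a score would be computed per sensor position and per motion, then reported in the GUI described in Section 3.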

Implementation
A typical screen from the GUI of the software is shown in Figure 5.

Figure 5 The GUI
The screen is divided into four sections. Section 1 (center) shows a video of the motions of one individual and the reference person. Section 2 (top) gives the names of the two individuals shown in Section 1; these can be changed using the drop-down box on the right. Section 3 (bottom left) numerically shows the similarity in sensor positions between the two individuals. Section 4 (bottom right) shows graphs of the data described in Section 3 for each sensor.

Analysis of Motions
Each movement sequence involved four motions and was repeated 10 times.

Figure 6 Data obtained from sensors attached to the back of the hands
Figure 6 shows the locations of the sensors attached to the back of both hands over time. The solid and dotted lines correspond to the right and left hands, respectively, and the line colors correspond to the three axes. During the first motion, the "Y value" increases as the hand moves forward. During the second motion, both hands are raised above the head and the Y value peaks. When the arms are outstretched to the sides (third motion), the Y value is similar to that of the first motion; it returns to baseline during the fourth motion. Thus, the Y value changes according to the height of the sensor. As shown in Figure 6, the Y value remains constant as the hand is turned 90° to the right relative to the reference point; however, the "X value" and "Z value" become negative or positive depending on the location of the sensor (Figure 7). Figure 8 shows the data for the first movement phase of Figure 6 in the coordinate plane. The three phases, which together comprise a single movement sequence, are compared to the reference phase in a single chart showing the changes in the X and Z values. The Y-axis data are omitted because height does not affect the Euclidean distance; also, in Axis Neuron, height affects the coordinate system (Figure 7).
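The four-phase pattern of the hand-sensor Y value can be illustrated with a simple labeling pass over a Y trace. The heights and tolerance below are illustrative assumptions, not values measured in the study:

```python
def label_phases(y_values, baseline, raised, tol=0.05):
    """Assign each frame of a hand-sensor Y trace to a coarse phase:
    'baseline' near the resting height, 'overhead' near the raised height,
    'intermediate' otherwise. Heights and tolerance are illustrative."""
    labels = []
    for y in y_values:
        if abs(y - baseline) <= tol:
            labels.append("baseline")
        elif abs(y - raised) <= tol:
            labels.append("overhead")
        else:
            labels.append("intermediate")
    return labels

# A toy Y trace: hands start at rest, rise overhead, and return.
ys = [1.0, 1.3, 1.8, 1.3, 1.0]
print(label_phases(ys, baseline=1.0, raised=1.8))
```

A segmentation like this makes it easy to align the same phase across participants before computing per-phase similarity.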

Figure 9 Similarity between the motion data of each participant and the reference person for each motion
Figure 9 compares the data for H1, H2, and H3 in terms of their similarity to the data of the reference person. For the right hand, the similarity values for H1 and H2 both exceeded 90%. H3 exhibited a lower similarity than H1 and H2 due to greater movement. Markerless motion capture would have been inappropriate because our analysis considered joint angles; thus, inertial motion capture was appropriate, as reflected in the high similarity values (> 90%). Each motion can be described as a waveform; by comparing the motions of several individuals, a reference waveform can be selected. Such waveforms could help improve the motions of elderly persons with low muscle strength, and walking robots could aid persons with lower-body paralysis.
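A comparison like the one in Figure 9 can be summarized programmatically. The sketch below ranks participants against the reference person; the similarity scores are made-up placeholders, not the study's measurements:

```python
def rank_against_reference(similarities, threshold=90.0):
    """Rank participants by similarity to the reference person and flag
    those meeting the 90% threshold reported in the study. The input
    scores here are illustrative placeholders."""
    ordered = sorted(similarities, key=similarities.get, reverse=True)
    passing = [p for p in ordered if similarities[p] >= threshold]
    return ordered, passing

# Placeholder scores for the three participants compared in Figure 9.
order, passing = rank_against_reference({"H1": 95.2, "H2": 94.1, "H3": 91.3})
print(order, passing)
```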

Conclusions
As the robot industry grows, the development of wearable robots is accelerating. Data accuracy is important to ensure that robots mimic human behaviors correctly and avoid accidents. We used motion capture equipment to obtain human motion data from several individuals; the motion data were compared with those of a reference person using a GUI. These data could inform the design of individualized wearable robots and the development of walking robots. Stability can be tested via simulation before a walking robot is fitted, and can be further improved through machine learning algorithms.