Abstract: The shear elastic modulus of skeletal muscle can be
obtained by shear wave elastography (SWE) and has been shown
to be linearly related to muscle force. However, SWE is
currently implemented with array probes, and the price and
bulk of these probes and their driving equipment prevent SWE
from being used in wearable human-machine interfaces (HMIs).
Moreover, the beamforming required by array probes degrades
real-time performance. To achieve SWE with wearable HMIs, a
customized three-element probe is adopted in this work, with
one element generating the acoustic radiation force and the
other two tracking the shear wave. In-phase/quadrature (IQ)
demodulation and 2D autocorrelation are used to estimate
tissue velocities along the sound beams of the two tracking
elements. The shear wave speed is calculated from the phase
shift between the two tissue-velocity signals. Three agar
phantoms of different elasticity were made by varying the
amount of agar. The shear elastic moduli of the phantoms
were measured as 8.98, 23.06 and 36.74 kPa at a depth of
7.5 mm, respectively. This work verifies the feasibility of
measuring the shear elastic modulus with wearable devices.
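For a propagating shear wave, the speed can be recovered from the phase lag accumulated between the two tracking beams, and the shear elastic modulus then follows from mu = rho * c^2 for a purely elastic medium. A minimal sketch of this step, assuming a single-frequency component; the function name and the beam spacing, frequency and phase values below are illustrative, not taken from the paper:

```python
import math

def shear_modulus_from_phase(delta_phi, freq_hz, spacing_m, rho=1000.0):
    """Estimate shear elastic modulus from the phase shift between
    tissue-velocity signals measured at two tracking beams.

    delta_phi : phase lag (rad) of the far beam relative to the near beam
    freq_hz   : shear wave frequency component (Hz)
    spacing_m : distance between the two tracking beams (m)
    rho       : tissue density (kg/m^3), ~1000 for soft tissue
    """
    # The wave travels `spacing_m` while its phase advances by
    # `delta_phi`, so the shear wave speed is c = 2*pi*f * d / delta_phi.
    c = 2 * math.pi * freq_hz * spacing_m / delta_phi
    # For a purely elastic medium, mu = rho * c^2.
    return rho * c * c
```

For example, a 200 Hz component with a 3 mm beam spacing and a phase lag of about 1.26 rad gives a speed near 3 m/s, i.e. a modulus on the order of the softest phantom reported above.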
Abstract: With the development of IT systems, human-computer interaction is evolving ever faster, and new communication methods are becoming available for human-machine interaction. In this article, the application of a hand-gesture-controlled human-computer interface is introduced through the example of a mobile robot. The mobile robot is controlled in a realistic virtual environment, which is advantageous for running different tests and parallel examinations, so the purchase of expensive equipment is unnecessary. The usability of the implemented hand gesture control has been evaluated by test subjects. In their opinion, the system is easy to use, and they would recommend its application in other fields as well.
Abstract: New sensors and technologies – such as microphones,
touchscreens or infrared sensors – are currently making their
appearance in the automotive sector, introducing new kinds of
Human-Machine Interfaces (HMIs). The interactions with such tools
might be cognitively expensive, thus unsuitable for driving tasks.
It could for instance be dangerous to use touchscreens with a
visual feedback while driving, as it distracts the driver’s visual
attention away from the road. Furthermore, new technologies in
car cockpits modify the interactions of the users with the central
system. In particular, touchscreens are preferred to arrays of buttons
to save space and for design purposes. However, the buttons'
tactile feedback is no longer available to the driver, which
makes such interfaces harder to operate while driving. Gestures
combined with an auditory feedback might therefore constitute an
interesting alternative to interact with the HMI. Indeed, gestures can
be performed without vision, which means that the driver’s visual
attention can be fully dedicated to the driving task. Moreover,
auditory feedback can inform the driver both about the task
performed on the interface and about the gesture itself, which
might compensate for the lack of tactile information. As
audition is a relatively unused sense in automotive contexts, gesture
sonification can contribute to reducing the cognitive load thanks
to the proposed multisensory exploitation. Our approach consists
of using a virtual object (VO) to sonify the consequences of the
gesture rather than the gesture itself. This approach is motivated
by an ecological point of view: Gestures do not make sound, but
their consequences do. In this experiment, the aim was to identify
efficient sound strategies, to transmit dynamic information of VOs to
users through sound. The swipe gesture was chosen for this purpose,
as it is commonly used in current and new interfaces. We chose
two VO parameters to sonify, the hand-VO distance and the VO
velocity. Two kinds of sound parameters can be chosen to sonify the
VO behavior: Spectral or temporal parameters. Pitch and brightness
were tested as spectral parameters, and amplitude modulation as a
temporal parameter. Performances showed a positive effect of sound
compared to a no-sound situation, revealing the usefulness of sounds
to accomplish the task.
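The two mappings studied, a spectral parameter (e.g. pitch) for the hand-VO distance and a temporal parameter (amplitude-modulation rate) for the VO velocity, can be sketched as a simple linear parameter mapping. All ranges, defaults and names below are illustrative assumptions, not the experiment's actual values:

```python
def sonify(distance_m, velocity_mps,
           f_min=220.0, f_max=880.0, d_max=0.5,
           am_min=2.0, am_max=20.0, v_max=2.0):
    """Map VO state to two sound parameters (illustrative values only).

    Pitch (spectral parameter) encodes hand-VO distance: a closer
    hand yields a higher pitch. Amplitude-modulation rate (temporal
    parameter) encodes VO velocity: a faster VO yields faster tremolo.
    """
    # Clamp inputs to the assumed working ranges.
    d = min(max(distance_m, 0.0), d_max)
    v = min(abs(velocity_mps), v_max)
    pitch_hz = f_max - (f_max - f_min) * d / d_max
    am_rate_hz = am_min + (am_max - am_min) * v / v_max
    return pitch_hz, am_rate_hz
```

The returned parameters would then drive a synthesizer; the linear ramps here merely illustrate the distance-to-spectral and velocity-to-temporal couplings described above.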
Abstract: In this paper, we present a low-cost design for a smart glove that performs sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements corresponding to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks are used to classify an 11-element feature vector obtained from the sensors on the glove into one of 27 classes: the ASL alphabet letters plus a predefined gesture for space. Three types of features are used: bending, measured by six bend sensors; orientation in three dimensions, measured by accelerometers; and contacts at vital points, measured by contact sensors. To gauge the performance of the presented design, a training database was prepared using five volunteers. On this dataset, the accuracy of the current version reaches up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning to build a low-cost wearable glove that is accurate, elegant and portable.
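The Bayes-classifier stage described above can be sketched with a generic Gaussian naive Bayes over fixed-length feature vectors. This is a stand-in for illustration only, not the paper's exact classifier, and the toy training vectors in the usage note are synthetic, not glove data:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian naive Bayes for fixed-length feature vectors."""

    def fit(self, X, y):
        # Group training vectors by class label.
        groups = defaultdict(list)
        for x, label in zip(X, y):
            groups[label].append(x)
        # Per class: log prior, per-feature mean and variance.
        self.stats = {}
        n = len(X)
        for label, rows in groups.items():
            means = [sum(col) / len(rows) for col in zip(*rows)]
            vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-6
                     for col, m in zip(zip(*rows), means)]
            self.stats[label] = (math.log(len(rows) / n), means, vars_)
        return self

    def predict(self, x):
        # Pick the class maximizing log prior + Gaussian log likelihood.
        def log_post(label):
            prior, means, vars_ = self.stats[label]
            return prior + sum(
                -0.5 * (math.log(2 * math.pi * s) + (v - m) ** 2 / s)
                for v, m, s in zip(x, means, vars_))
        return max(self.stats, key=log_post)
```

Training on a few 11-element vectors per gesture class and calling `predict` on a new sensor reading illustrates the classification step; the real system would use the bend, orientation and contact features as inputs.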
Abstract: Time-based maintenance (TBM) is conventionally applied by power utilities to maintain circuit breakers (CBs), transformers, bus bars and cables, which may result in under-maintenance or over-maintenance. As the information and communication technology (ICT) industry develops, the maintenance policies of many power utilities have gradually shifted from TBM to condition-based maintenance (CBM) to improve system operating efficiency, reduce operating cost and enhance power supply reliability. This paper discusses the feasibility of using intelligent electronic devices (IEDs) to construct a CB CBM management platform. CBs in power substations can be monitored using IEDs with additional logic configuration and wire connections. The CB monitoring data can be sent through an intranet to a control center, where they are analyzed and integrated by the Elipse Power Studio software. Finally, a human-machine interface (HMI) of a supervisory control and data acquisition (SCADA) system can be designed to construct a CBM management platform that provides maintenance decision information for maintenance personnel, management personnel and CB manufacturers.
Abstract: The objective of this paper is to characterize the spontaneous electroencephalogram (EEG) signals of four different motor imagery tasks and thereby to show a possible extension of the present binary communication between the brain and a machine, or Brain-Computer Interface (BCI). The processing technique used in this paper is fractal analysis evaluated by the Critical Exponent Method (CEM). The EEG signal was recorded from 5 healthy subjects, sampling 15 measuring channels at 1024 Hz. Each channel was preprocessed by Laplacian spatial filtering so as to reduce spatial blur and thereby increase the spatial resolution. The EEG of each channel was segmented and its fractal dimension (FD) calculated. The FD was evaluated in the time interval corresponding to the motor imagery and averaged over all subjects for each channel. In order to characterize the FD distribution, linear regression curves of the FD over the electrode positions were applied. The FD differences between the proposed mental tasks are quantified and evaluated for each experimental subject. The results show that the EEG signals of motor imagery tasks exhibit distinct fractal dimensions, which can be exploited in multiple-state BCI applications.
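The abstract does not detail the Critical Exponent Method itself, so as a purely illustrative alternative, the fractal dimension of a 1-D signal segment can be estimated with Higuchi's method, a common FD estimator for EEG. This sketch is not the paper's CEM pipeline:

```python
import math

def higuchi_fd(x, k_max=8):
    """Estimate the fractal dimension of a 1-D signal by Higuchi's
    method (not the paper's CEM; shown only as a common alternative)."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            # Curve length of the subsampled series x[m], x[m+k], ...
            lk = sum(abs(x[i] - x[i - k]) for i in range(m + k, n, k))
            n_seg = (n - m - 1) // k
            if n_seg > 0:
                # Normalization factor from Higuchi's definition.
                lengths.append(lk * (n - 1) / (n_seg * k * k))
        if lengths and sum(lengths) > 0:
            log_k.append(math.log(1.0 / k))
            log_l.append(math.log(sum(lengths) / len(lengths)))
    # FD is the slope of log(L(k)) versus log(1/k) (least squares).
    kbar = sum(log_k) / len(log_k)
    lbar = sum(log_l) / len(log_l)
    num = sum((a - kbar) * (b - lbar) for a, b in zip(log_k, log_l))
    den = sum((a - kbar) ** 2 for a in log_k)
    return num / den
```

A smooth ramp has FD close to 1, while a noise-like signal approaches 2, which is the kind of contrast a multi-state BCI would exploit between mental tasks.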