Abstract: A multi-fingered dexterous anthropomorphic hand is being developed by the authors. The focus of the hand is the replacement of human operators in hazardous environments and in environments where there is zero tolerance for human error. The robotic hand will comprise five fingers (four fingers and one thumb), each having four degrees of freedom (DOF), which can perform flexion, extension, abduction, adduction, and circumduction. Pneumatic muscles and springs will be used for actuation. The paper presents the mechanical design of the robotic hand and also describes different mechanical designs that have been developed to date.
Abstract: Let T and S be subspaces of C^n and C^m, respectively. Then for A ∈ C^{m×n} satisfying AT ⊕ S = C^m, the generalized inverse A^(2)_{T,S} is given by A^(2)_{T,S} = (P_{S⊥} A P_T)†. In this paper, a finite iterative formula is presented to compute the generalized inverse A^(2)_{T,S} under the concept of a restricted inner product, defined as <A, B>_{T,S} = <P_{S⊥} A P_T, B> for A, B ∈ C^{m×n}. With this iterative method, taking the initial matrix X_0 = P_T A* P_{S⊥}, the generalized inverse A^(2)_{T,S} can be obtained within at most mn iteration steps in the absence of roundoff errors. Finally, a numerical example shows that the iterative formula is quite efficient.
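The closed-form expression above can be checked numerically. The sketch below, with illustrative subspaces chosen only for demonstration, computes A^(2)_{T,S} via NumPy's pseudoinverse rather than the paper's finite iterative scheme, and verifies the {2}-inverse property X A X = X:

```python
import numpy as np

def projector(basis):
    """Orthogonal projector onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.conj().T

# Illustrative subspaces (an assumption for demonstration):
# T = span{e1, e2} in C^3, S = span{e3} in C^3.
T_basis = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
S_basis = np.array([[0.0], [0.0], [1.0]])

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

P_T = projector(T_basis)
P_S_perp = np.eye(3) - projector(S_basis)

# A^(2)_{T,S} = (P_{S^perp} A P_T)^dagger
X = np.linalg.pinv(P_S_perp @ A @ P_T)

# X is a {2}-inverse with range T and null space S
print(np.allclose(X @ A @ X, X))  # -> True
```

Here `np.linalg.pinv` plays the role of the Moore-Penrose inverse (·)† in the formula; the paper's contribution is computing the same matrix by a finite inner-product-based iteration instead.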
Abstract: The use of mechanical simulation (in particular, finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions so that the computations can be carried out and results obtained: the building of the physical model and the building of the simulation model. The simplification assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions, and loads) and mesh discretization errors. This paper proposes a mesh-adaptive method, based on an h-adaptive scheme combined with an error estimator, for choosing the mesh of the simulation model so as to control both the cost and the quality of the finite element analysis.
Abstract: OpenMP is an API for the shared-memory multiprocessor parallel programming model. Novice OpenMP programmers often produce code containing mistakes that the compiler cannot detect. We investigated how the compiler copes with the common mistakes that can occur in OpenMP code, using the latest version (4.4.3) of GCC. It was found that GCC compiled the erroneous codes without any errors or warnings. In this paper, a programming aid tool for OpenMP programs is presented. It can check for 12 common mistakes that novice programmers commit when writing OpenMP code. It was demonstrated that the programming aid tool can detect various common mistakes that GCC failed to detect.
Abstract: Wireless channels are characterized by bursty and location-dependent errors. Many packet scheduling algorithms have been proposed for wireless networks to guarantee fairness and delay bounds. However, most existing schemes do not consider the differing traffic natures of packet flows, which causes a delay-weight coupling problem. In particular, serious queuing delays may be incurred for real-time flows. In this paper, we propose a scheduling algorithm that takes the traffic types of flows into consideration when scheduling packets, and that provides scheduling flexibility by trading off video quality to meet playback deadlines.
Abstract: This paper presents a review of an 8-year study on radiation effects in commercial memory devices operating within the main on-board computer system OBC386 of the Algerian microsatellite Alsat-1. A statistical analysis of single-event upset (SEU) and multiple-bit upset (MBU) activity in these commercial memories shows that the typical SEU rate in Alsat-1's orbit is 4.04 × 10^-7 SEU/bit/day, where 98.6% of these SEUs cause single-bit errors, 1.22% cause double-byte errors, and the remaining SEUs result in multiple-bit and severe errors.
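The reported rate can be turned into an expected upset count with simple arithmetic. The sketch below uses the abstract's rate figure; the 4 Mbit memory size is an assumption for illustration only, since the abstract does not state the device capacity:

```python
# Back-of-the-envelope expected-upset count over the 8-year study.
seu_rate = 4.04e-7          # SEU / bit / day, from the abstract
bits = 4 * 1024 * 1024      # hypothetical 4 Mbit device (assumption)
days = 8 * 365.25           # 8-year study duration

expected_seus = seu_rate * bits * days
single_bit = expected_seus * 0.986    # 98.6% single-bit errors
double_byte = expected_seus * 0.0122  # 1.22% double-byte errors
print(round(expected_seus))           # on the order of a few thousand upsets
```

Such a calculation shows why error detection and correction is essential for commercial (non-hardened) memories in orbit: even a modest device accumulates thousands of upsets over a mission.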
Abstract: The captured gel electrophoresis image represents the output of a DNA computing algorithm. Before this image is captured, DNA computing involves parallel overlap assembly (POA) and polymerase chain reaction (PCR), which are the main steps of the computing algorithm. However, the design of the DNA oligonucleotides to represent a problem is quite complicated and prone to errors. In order to reduce these errors during the design stage, before the actual in-vitro experiment is carried out, simulation software capable of simulating the POA and PCR processes has been developed. The capability of this simulation software is not limited: problems of any size and complexity can be simulated, thus saving the cost of possible errors during the design process. Information regarding the DNA sequences during the computing process, as well as the computing output, can be extracted at the same time using the simulation software.
Abstract: The design problem of Infinite Impulse Response (IIR) digital filters is usually expressed as the minimization of a complex magnitude error that includes both magnitude and phase information. However, the group delay of the filter obtained by solving such a design problem may be far from the desired group delay. In this paper, we propose a design method for stable IIR digital filters with prespecified maximum group delay errors. In the proposed method, the approximation problems for the magnitude-phase and the group delay are defined separately, and the two problems are solved alternately using successive projections. As a result, the proposed method can design IIR filters that satisfy prespecified allowable errors not only for the complex magnitude but also for the group delay, by alternately executing the coefficient updates for the magnitude-phase and group delay approximations. The usefulness of the proposed method is verified through several examples.
Abstract: A design method for fractional-delay FIR filters based on the differential evolution algorithm is presented. Differential evolution is an evolutionary algorithm for solving global optimization problems in a continuous search space. In the proposed approach, the evolutionary algorithm is used to determine the coefficients of a fractional-delay FIR filter based on the Farrow structure. Basic differential evolution is enhanced with a restricted mating technique, which improves the algorithm's performance in terms of convergence speed and the quality of the obtained solution. The evolutionary optimization minimizes an objective function based on the amplitude response and phase delay errors. Experimental results show that the proposed algorithm reduces the amplitude response and phase delay errors relative to those achieved with the least-squares method.
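The overall idea can be sketched with off-the-shelf differential evolution. The example below is a simplified stand-in, not the paper's method: it optimizes the taps of a single fixed-delay FIR filter directly (rather than the Farrow structure), uses SciPy's standard `differential_evolution` (no restricted mating), and all design parameters (filter length, delay, band) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

N = 9                                     # filter length (illustrative)
D = 4.3                                   # desired delay in samples (illustrative)
w = np.linspace(0.05, 0.7 * np.pi, 64)    # design band (illustrative)
H_ideal = np.exp(-1j * w * D)             # ideal fractional-delay response
E = np.exp(-1j * np.outer(w, np.arange(N)))  # DTFT matrix: H = E @ h

def objective(h):
    """Combined amplitude-response and complex (phase/delay) error."""
    H = E @ h
    amp_err = np.max(np.abs(np.abs(H) - 1.0))
    cplx_err = np.max(np.abs(H - H_ideal))
    return amp_err + cplx_err

res = differential_evolution(objective, bounds=[(-1.5, 1.5)] * N,
                             maxiter=200, popsize=20, tol=1e-8, seed=1)
h_opt = res.x   # optimized filter coefficients
```

A trivial all-zero filter scores an objective of about 1 on this band, so any reasonable run should end well below that; the paper's restricted-mating variant targets faster, more reliable convergence on the full Farrow-structure parameterization.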
Abstract: This paper describes a computer-aided design approach for a concave globoidal cam with cylindrical rollers and a swinging follower. Four models are built with different modeling methods from the same input data. The input data are the angular input and output displacements of the cam and the follower and some other geometrical parameters of the globoidal cam mechanism. The best cam model is the one that shows no interference with the rollers when their motions are simulated under assembly conditions. The angular output displacement of the follower for the best cam is also compared with that in the input data to check for errors. In this study, Pro/ENGINEER® Wildfire 2.0 is used for modeling the cam, simulating the motions, and checking the interference and errors of the system.
Abstract: This paper describes a method to measure and compensate a 4-axis ultra-precision machine tool that generates micro patterns on large surfaces. Such a grooving machine is typically used to make micro molds for electrical parts such as light guide plates for LCDs and fuel cells. The ultra-precision machine tool has three linear axes and one rotational table, and shaping is usually used to generate the micro patterns. For machining a pyramid pattern of 50 μm pitch and 25 μm height with a 90° wedge-angle bite, one linear axis is used for long-stroke motion to achieve high cutting speed, and the other linear axes are used for feeding. Triangular patterns are generated by many long strokes of one axis; a 90° rotation of the workpiece is then needed to make pyramid patterns by superposing the two machined triangular patterns.
For two-dimensional positioning accuracy, the out-of-plane straightness of the two axes and the squareness between the axes are important. Positioning errors, straightness, and squareness were measured with a laser interferometer system, then compensated and confirmed according to ISO 230-6. One difficult error motion to measure is the squareness (or parallelism) between the rotational table and a linear axis; it was investigated by simultaneously moving the rotary table and the XY axes. This compensation method is introduced in this paper.
Abstract: In this paper we investigate the influence of external noise on the inference of network structures. The purpose of our simulations is to gain insights into the experimental design of microarray experiments for inferring, e.g., transcription regulatory networks. Here, external noise means that the dynamics of the system under investigation, e.g., temporal changes of mRNA concentration, are affected by measurement errors. In addition to external noise, another problem occurs in the context of microarray experiments: practically, it is not possible to monitor the mRNA concentration over an arbitrarily long time period, as demanded by the statistical methods used to learn the underlying network structure. For this reason, we use only short time series to make our simulations more biologically plausible.
Abstract: The SOM has several beneficial features which make it a useful method for data mining. One of the most important is its ability to preserve topology in the projection. Several measures can be used to quantify the goodness of the map in order to obtain the optimal projection, including the average quantization error and various topological errors. Many researchers have studied how topology preservation should be measured. One option is the topographic error, which considers the ratio of data vectors for which the first and second best matching units (BMUs) are not adjacent. In this work we present a study of the behaviour of the topographic error in different kinds of maps. We have found that this error undervalues rectangular maps, and we have studied the reasons why this happens. Finally, we suggest a new topological error that remedies this deficiency of the topographic error.
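The topographic error described above is straightforward to compute. The sketch below is a minimal version under one common assumption: "adjacent" means Chebyshev distance ≤ 1 on the map grid (8-neighborhood for a rectangular lattice); other adjacency definitions (e.g. 4-neighborhood) appear in the literature, and the map data here are toy values:

```python
import numpy as np

def topographic_error(data, codebook, grid, neighborhood=1):
    """Fraction of data vectors whose two best matching units (BMUs)
    are not adjacent on the map grid.

    data:     (n_samples, dim) data vectors
    codebook: (n_units, dim) prototype vectors
    grid:     (n_units, 2) unit coordinates on the map
    Adjacency = Chebyshev distance <= `neighborhood` (an assumption).
    """
    errors = 0
    for x in data:
        d = np.linalg.norm(codebook - x, axis=1)
        bmu1, bmu2 = np.argsort(d)[:2]          # two best matching units
        if np.max(np.abs(grid[bmu1] - grid[bmu2])) > neighborhood:
            errors += 1
    return errors / len(data)

# Toy 2x2 map with 1-D prototypes (illustrative values)
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
codebook = np.array([[0.0], [1.0], [2.0], [3.0]])
data = np.array([[0.4], [2.6]])
print(topographic_error(data, codebook, grid))  # both BMU pairs adjacent -> 0.0
```

A non-zero topographic error signals a topology violation: inputs that are close in data space map to units that are far apart on the grid, which is exactly what the paper argues is measured unevenly across map shapes.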
Abstract: Accurate and comprehensive thermodynamic properties of pure refrigerants and refrigerant mixtures are in demand by both producers and users of these materials. Information about thermodynamic properties is important initially to qualify potential candidates for working fluids in refrigeration machinery. From a practical point of view, refrigerants and refrigerant mixtures are widely used as working fluids in many industrial applications, such as refrigerators, heat pumps, and power plants. The present work is devoted to evaluating seven cubic equations of state (EOS) in predicting gas- and liquid-phase volumetric properties of nine ozone-safe refrigerants in both the super- and sub-critical regions. The evaluations in the sub-critical region show that the TWU and PR EOS are capable of predicting the PVT properties of refrigerant R32 within 2%; R22, R134a, R152a, and R143a within 1%; and R123, R124, and R125. The deviations of the TWU and PR EOS predictions from literature data are 0.5% for R22, R32, R152a, R143a, and R125; 1% for R123, R134a, and R141b; and 2% for R124. Moreover, the SRK EOS predicts the PVT properties of R22, R125, and R123 to within the aforementioned errors. The remaining EOSs predict the volumetric properties of this class of fluids with higher errors, of at most 8%. In general, the results favor the TWU and PR EOS over the remaining EOSs in predicting the densities of all the mentioned refrigerants in both the super- and sub-critical regions. Typically, this refrigerant is known to offer advantages such as an ozone depletion potential of zero, a global warming potential of 140, and no toxicity.
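A cubic EOS evaluation of the kind described reduces, at each state point, to solving the cubic in the compressibility factor Z. The sketch below implements the standard Peng-Robinson (PR) form for the gas phase; the critical constants used for R134a are approximate literature-style values included only for illustration, and this is not the paper's evaluation procedure:

```python
import numpy as np

R = 8.314462  # universal gas constant, J/(mol K)

def pr_Z(T, P, Tc, Pc, omega):
    """Gas-phase compressibility factor from the Peng-Robinson EOS
    (largest real root of the cubic in Z)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha   # attraction parameter
    b = 0.07780 * R * Tc / Pc                 # covolume
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B,
              -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-6].real
    return real.max()   # largest real root = vapor phase

# Approximate critical constants for R134a (indicative values only):
Tc, Pc, omega = 374.2, 4.059e6, 0.327
Z = pr_Z(300.0, 1.0e5, Tc, Pc, omega)   # superheated vapor at 300 K, 1 bar
```

The molar volume then follows as V = Z R T / P, which is the quantity compared against literature PVT data in evaluations like the one above.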
Abstract: The study of the defects generated on manufactured parts shows the difficulty of maintaining parts in position during the machining process and of estimating these defects during process planning. This work presents a contribution to the development of 3D models for the optimization of manufacturing tolerances. An experimental study allows the measurement of part-positioning defects for the determination of ε and the choice of an optimal setup of the part. A 3D tolerancing approach based on the small-displacement method permits the upstream determination of the manufacturing errors. A developed tool allows the automatic generation of the tolerance intervals along the three axes.
Abstract: Visualizing sound and noise often helps us to determine an appropriate control over source localization. Near-field acoustic holography (NAH) is a powerful tool for this ill-posed problem. However, in practice, due to the small finite aperture size, discrete-Fourier-transform (FFT) based NAH cannot predict the active region of interest (AROI) over the edges of the plane. A few approaches have been theoretically proposed for solving the finite-aperture problem, but most of them are not well suited to practical implementation, especially near the edges of the source. In this paper, a zip-stuffing extrapolation approach with a 2D Kaiser window is suggested. It operates in the complex wavenumber space to localize the predicted sources. We numerically construct a practical environment with touch-impact databases to test the localization of the sound source. It is observed that zip-stuffing aperture extrapolation and the 2D window with evanescent components provide greater accuracy, especially for small apertures.
Abstract: In this paper, we explore the applicability of the Sinc-Collocation method to a three-dimensional (3D) oceanography model. The model describes a wind-driven current with depth-dependent eddy viscosity in the complex-velocity system. In general, Sinc-based methods excel over traditional numerical methods due to their exponentially decaying errors, rapid convergence, and ability to handle problems with singularities at the end-points. Together with these advantages, the Sinc-Collocation approach that we utilize exploits first-derivative interpolation, whose integration is much less sensitive to numerical errors. We present several model problems to demonstrate the accuracy, stability, and computational efficiency of the method. The approximate solutions determined by the Sinc-Collocation technique are compared to exact solutions and to those obtained by the Sinc-Galerkin approach in earlier studies. Our findings indicate that the Sinc-Collocation method outperforms the other Sinc-based methods of past studies.
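The "exponentially decaying errors" of Sinc methods come from the cardinal (sinc) expansion that underlies both the Galerkin and Collocation variants. The sketch below demonstrates this behavior on a smooth test function of my own choosing, using a truncated sinc expansion; it is an illustration of the approximation rate, not the paper's oceanography solver:

```python
import numpy as np

def sinc_approx(f, h, N, x):
    """Truncated sinc (cardinal) expansion:
    C(f, h)(x) = sum_{k=-N}^{N} f(kh) * sinc((x - kh) / h).
    np.sinc(t) is sin(pi t)/(pi t), matching the cardinal basis."""
    k = np.arange(-N, N + 1)
    return np.sum(f(k * h) * np.sinc((x - k * h) / h))

f = lambda t: np.exp(-t**2)   # smooth, rapidly decaying test function (illustrative)
x = 0.3
err_coarse = abs(sinc_approx(f, 1.0, 20, x) - f(x))
err_fine = abs(sinc_approx(f, 0.5, 40, x) - f(x))
# Halving the step h shrinks the error roughly exponentially in 1/h,
# far faster than the algebraic rates of standard finite differences.
```

The collocation variant used in the paper enforces the governing equations at the sinc nodes kh, inheriting this convergence rate.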
Abstract: Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is that it is cost-effective, requiring no extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference factors and the environment, it is challenging to design RFID location techniques based on an integrated positioning algorithm. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines the factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase uses the coordinates obtained from the LANDMARC algorithm, which uses the RSSI values, together with the real coordinates of the reference tags, as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of the tracking tags, which are then used as BPN inputs to obtain location estimates. The results show that the proposed scheme estimates locations more accurately than LANDMARC, without extra devices.
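The LANDMARC step that feeds the BPN is a k-nearest-neighbor estimate in RSSI space. The sketch below implements that standard LANDMARC computation; the reader counts, RSSI values, and reference-tag layout are hypothetical values for illustration, and the BPN refinement stage is omitted:

```python
import numpy as np

def landmarc(rssi_track, rssi_refs, ref_coords, k=4, eps=1e-9):
    """LANDMARC k-nearest-reference-tag location estimate.

    rssi_track: (n_readers,) RSSI vector of the tracking tag
    rssi_refs:  (n_refs, n_readers) RSSI vectors of the reference tags
    ref_coords: (n_refs, 2) known reference-tag coordinates
    """
    E = np.linalg.norm(rssi_refs - rssi_track, axis=1)  # RSSI-space distances
    nearest = np.argsort(E)[:k]                         # k nearest reference tags
    w = 1.0 / (E[nearest]**2 + eps)                     # inverse-square weights
    w /= w.sum()
    return w @ ref_coords[nearest]                      # weighted coordinate average

# Hypothetical 4-reader, 4-reference-tag setup (values are illustrative):
rssi_refs = np.array([[-50., -60., -70., -65.],
                      [-62., -48., -66., -60.],
                      [-70., -64., -52., -58.],
                      [-66., -59., -60., -50.]])
ref_coords = np.array([[0., 0.], [0., 2.], [2., 0.], [2., 2.]])
est = landmarc(np.array([-50., -60., -70., -65.]), rssi_refs, ref_coords, k=2)
```

In the proposed scheme, coordinates produced this way (together with the known reference-tag positions) form the training pairs for the BPN, which then corrects the LANDMARC estimates in the online phase.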
Abstract: In this paper, to optimize the "Characteristic Straight Line Method", which is used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. For the measurement of elevation differences, we have used the most modern leveling equipment available, and observational procedures have been designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of the air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of height systems is introduced, under which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity.
The inclination of the landslide is given by the inverse of the distance from reference point O to the "Characteristic Straight Line"; its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevations of carefully selected points before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test in an area where very interesting land-surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
Abstract: Falls are the primary cause of accidents in people over the age of 65, and frequently lead to serious injuries. Since the early detection of falls is an important step toward alerting and protecting the aging population, a variety of research on detecting falls has been carried out, including the use of accelerometers, gyroscopes, and tilt sensors. In existing studies, falls were detected using a single accelerometer, with errors. In this study, the proposed method uses two accelerometers to reject false fall detections. As falls are accompanied by the acceleration of gravity and rotational motion, falls were detected using the z-axial acceleration differences between two sites: the difference was calculated between the signals of accelerometers placed at two different positions on the chest of the subject. The maximum difference of the accelerations (diff_Z) and the integration of the acceleration differences over a defined region (Sum_diff_Z) were used as the parameters of the fall detection algorithm. Falls and activities of daily living (ADL) could be distinguished using the proposed parameters without errors, in spite of impacts and changes in the positions of the accelerometers. By comparing the individual axial accelerations, the directions of falls and the condition of the subject afterwards could also be determined. In this study, by using two accelerometers attached to two sites, falls were detected without errors, confirming the usefulness of the proposed fall detection parameters diff_Z and Sum_diff_Z.
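The two parameters above can be sketched directly from the dual z-axis signals. In the minimal version below, the sampling rate, integration window, thresholds, and synthetic signals are all assumptions for illustration; the abstract does not report the paper's actual values:

```python
import numpy as np

def fall_features(z1, z2, fs, window_s=0.5):
    """diff_Z and Sum_diff_Z from the z-axis signals of two chest-mounted
    accelerometers (signals in g, sampled at fs Hz). The integration
    window length is an assumed parameter."""
    d = np.abs(z1 - z2)
    i = int(np.argmax(d))
    diff_Z = d[i]                          # maximum z-axis difference
    half = int(window_s * fs / 2)
    lo, hi = max(0, i - half), min(len(d), i + half)
    sum_diff_Z = np.sum(d[lo:hi]) / fs     # integral of |z1 - z2| around the peak
    return diff_Z, sum_diff_Z

def is_fall(diff_Z, sum_diff_Z, t_diff=0.8, t_sum=0.15):
    # Thresholds are illustrative placeholders, not the paper's values.
    return diff_Z > t_diff and sum_diff_Z > t_sum

fs = 100
t = np.arange(0, 2, 1 / fs)
# Synthetic signals: a fall involves rotation, so the two chest sites see
# different z-axis accelerations; during ADL both sites move together.
z_top = np.where((t > 1.0) & (t < 1.3), 2.5, 1.0)
z_bottom = np.where((t > 1.0) & (t < 1.3), 0.5, 1.0)
dZ, sZ = fall_features(z_top, z_bottom, fs)
```

Requiring both a large peak difference and a large integrated difference is what lets the two-sensor scheme reject single-site impacts that would fool a one-accelerometer detector.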