Abstract: This paper introduces the concept and principles of data
cleaning, analyzes the types and causes of dirty data, and proposes
the key steps of a typical cleaning process. It puts forward a data
cleaning framework with good scalability and versatility. For data
with attribute dependency relations, it designs several
violation-discovery algorithms expressed as formal formulas, which can
identify inconsistent data across all target columns with conditional
attribute dependencies, whether the data is structured (SQL) or
unstructured (NoSQL), and presents six data cleaning methods based on
these algorithms.
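As an illustration of dependency-based violation discovery, the following sketch (not the paper's algorithm; the majority-vote rule and column names are invented) flags rows whose dependent attribute conflicts with other rows sharing the same determinant:

```python
# Hypothetical sketch: find rows violating an attribute dependency
# (e.g. zip -> city); the majority rule and field names are illustrative.
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Return rows whose `rhs` value conflicts with the majority value
    observed for the same `lhs` key (a simple violation-discovery check)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[lhs]].append(row)
    violations = []
    for key, group in groups.items():
        values = [r[rhs] for r in group]
        majority = max(set(values), key=values.count)
        violations.extend(r for r in group if r[rhs] != majority)
    return violations

records = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "Boston"},   # inconsistent with the others
]
print(fd_violations(records, "zip", "city"))
```

The same grouping logic applies whether the rows come from a SQL result set or a NoSQL document collection, since it only needs key access to the two attributes.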
Abstract: The system of ordinary nonlinear differential
equations describing sliding velocity during impact with friction for a
three-dimensional rigid-multibody system is developed. No analytical
solutions have been obtained before for this highly nonlinear system.
Hence, a power series solution is proposed. Since the validity of this
solution is limited to its convergence zone, a suitable time step is
chosen and at the end of it a new series solution is constructed. For a
case study, the trajectory of the sliding velocity using the proposed
method is built using 6 time steps, which coincides with a
Runge-Kutta solution using 38 time steps.
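The stepwise power-series idea can be illustrated on a one-dimensional equation (a simplification, not the paper's multibody system): a truncated Taylor series is valid only near the expansion point, so a new series is constructed at the end of each step.

```python
import math

# One-dimensional illustration of restarting a truncated power series at
# each time step, here for y' = -y whose exact solution is exp(-t).
def taylor_step(y0, h, terms=8):
    # Coefficients of y = sum c_k t^k satisfy c_{k+1} = -c_k / (k + 1).
    c, total = y0, 0.0
    for k in range(terms):
        total += c * h ** k
        c = -c / (k + 1)
    return total

y, h = 1.0, 0.5
for _ in range(6):        # six series restarts, echoing the case study
    y = taylor_step(y, h)
print(y, math.exp(-3.0))  # the stepped series tracks the exact solution
```

Each restart re-centers the expansion inside its convergence zone, which is why a few large series steps can match many steps of a conventional integrator.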
Abstract: The current tools for real-time management of sewer
systems are based on two kinds of software: weather-forecast software
and hydraulic-simulation software. The former is an important source
of imprecision and uncertainty; the latter imposes long decision steps
because of its computation time. As a consequence, the obtained
results generally differ from those expected. The main idea of this
project is to change the basic paradigm by approaching the problem
from the automatic-control side rather than the hydrology side. The
objective is to make feasible a large number of simulations in very
short times (a few seconds), allowing weather forecasts to be replaced
by directly using real-time measured rainfall data. The aim is to
reach a system where decision-making is based on reliable data and
where error correction is permanent. A first model of control laws was
built and tested with rainfalls of different return periods. The gains
in rejected volume vary from 19 to 100%. A new algorithm was then
developed to optimize calculation time and thus overcome the
combinatorial problem of our first approach. Finally, this new
algorithm was tested with a 16-year rainfall series. The obtained
gains are 40% of the total volume discharged to the natural
environment and 65% in the number of discharges.
Abstract: Objective: Sharing devastating news with patients is
often considered the most difficult task of doctors. This study aimed
to explore patients’ perceptions of receiving bad news including
which features improve the experience and which areas need refining. Methods: A questionnaire was written based on the steps of the
SPIKES model for breaking bad news. Twenty patients receiving
treatment for a hematological malignancy completed the questionnaire. Results: Overall, the results are promising as most patients praised
their consultation. ‘Poor’ was more commonly rated by women and
participants aged 45-64. The main differences between the ‘excellent’
and ‘poor’ consultations include the doctor’s sensitivity and checking
the patients’ understanding. Only 35% of patients were asked their
existing knowledge and 85% of consultations failed to discuss the
impact of the diagnosis on daily life. Conclusion: This study agreed with the consensus of existing
literature. The commended aspects include consultation set-up and
information given. Areas patients felt needed improvement include
doctors determining the patient’s existing knowledge and checking
new information has been understood. Doctors should also explore
how the diagnosis will affect the patient’s life. With a poorer
prognosis, doctors should work on conveying appropriate hope. The
study was limited by a small sample size and potential recall bias.
Abstract: Patient compliance and stability, in combination with
controlled drug delivery and biocompatibility, form the core of
current research and development on sustained-release biodegradable
patch formulations intended for wound healing. The aim was to impart
sustained degradation, sterile formulation, significant folding
endurance, elasticity, biodegradability, bio-acceptability, and
strength. The optimized formulation comprised the polymers
hydroxypropyl methylcellulose, ethyl cellulose, and gelatin, together
with citric acid-PEG-citric acid (CPEGC) triblock dendrimers and
active curcumin. The polymeric mixture was dissolved in geometric
order in a suitable medium under continuous stirring at ambient
conditions. With continued stirring, curcumin was added with the aid
of DCM and methanol in an optimized ratio to obtain a homogeneous
dispersion. The dispersion was sonicated at optimum frequency for a
given time and later cast to form a patch. All steps were carried out
under strict aseptic conditions. The formulations in the acceptable
working range were selected based on thickness, uniformity of drug
content, smooth texture, flexibility, and brittleness. The patch, kept
on stability using butter paper in a sterile pack, displayed folding
endurance in the range of 20 to 23 folds without any evidence of
cracking in the optimized formulation at room temperature (RT)
(24 ± 2°C). The patch displayed acceptable parameters after a
stability study conducted in refrigerated conditions (8 ± 0.2°C) and
at RT (24 ± 2°C) for up to 90 days. Further, no significant changes
were observed in critical parameters such as elasticity,
biodegradability, drug release, and drug content during the stability
study conducted at RT (24 ± 2°C) for 45 and 90 days. The drug content
was in the range of 95 to 102%, the moisture content did not exceed
19.2%, and the patch passed the content uniformity test. The
percentage cumulative drug release was found to be 80% in 12 h and
matched the biodegradation rate, with a correlation coefficient
R2 > 0.9. The biodegradable patch formulation developed shows
promising results in terms of stability and release profiles.
Abstract: Land reallocation is one of the most important steps in
land consolidation projects. Many different models have been proposed
for land reallocation in the literature, such as fuzzy logic,
block-priority-based land reallocation, and spatial decision support
systems. A four-part model is considered for automatic block
reallocation with the genetic algorithm method in land consolidation
projects. These stages are, respectively: preparing data tables for a
project area, determining the conditions and constraints of land
reallocation, designing the command steps and logical flow chart of
the reallocation algorithm, and finally writing the program code of
the genetic algorithm. In this study, we designed the first three
steps of this four-step model.
Abstract: Speaker Identification (SI) is the task of establishing
the identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though
performance of speaker identification systems has improved due to
recent advances in speech processing techniques, there is still room
for improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficient
(MFCC) as feature extraction and suitable combination of vector
quantization (VQ) and Gaussian Mixture Model (GMM) together
with Expectation Maximization algorithm (EM) for speaker
modeling. The use of Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and Statistical
Modeling of Background Noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. Investigation of the Linde-Buzo-Gray (LBG)
clustering algorithm for initializing the GMM, whose underlying
parameters are then estimated in the EM step, improved the convergence
rate and the system's performance. The system also uses a relative
index as a confidence measure in case of contradiction between the
GMM and VQ identification results. Simulation results carried out on the voxforge.org
speech database using MATLAB highlight the efficacy of the
proposed method compared to earlier work.
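A minimal version of LBG codebook training, of the kind used here to initialize the GMM, can be sketched as follows (the 2-D data and parameters are invented; this is not the paper's implementation):

```python
# Illustrative LBG (Linde-Buzo-Gray) codebook training on 2-D points:
# the codebook is grown by splitting each codeword into a perturbed
# pair, then refining with k-means style updates.
def lbg(data, n_codewords=4, eps=0.01, iters=20):
    mean = [sum(x[d] for x in data) / len(data) for d in range(2)]
    codebook = [mean]
    while len(codebook) < n_codewords:
        codebook = [[c[0] * (1 + s), c[1] * (1 + s)]
                    for c in codebook for s in (eps, -eps)]
        for _ in range(iters):
            clusters = [[] for _ in codebook]
            for x in data:
                i = min(range(len(codebook)),
                        key=lambda j: (x[0] - codebook[j][0]) ** 2
                                      + (x[1] - codebook[j][1]) ** 2)
                clusters[i].append(x)
            for i, cl in enumerate(clusters):
                if cl:  # recenter each codeword on its cluster
                    codebook[i] = [sum(p[0] for p in cl) / len(cl),
                                   sum(p[1] for p in cl) / len(cl)]
    return codebook

# Two tight point clouds: with n_codewords=2 the codebook should land
# near (0, 0) and (10, 10).
data = [(0.0, 0.1), (0.1, 0.0), (10.0, 9.9), (9.9, 10.0)]
print(lbg(data, n_codewords=2))
```

In the MFCC setting, each data point would be a cepstral feature vector, and the resulting codewords would seed the GMM means before EM refinement.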
Abstract: The design of Reverse logistics Network has attracted
growing attention with the stringent pressures from both
environmental awareness and business sustainability. Reverse
logistics activities, which include the return, remanufacture,
disassembly, and disposal of products, can be quite complex to manage.
In addition, demand can be difficult to predict, and decision making
is one of the challenging tasks in such networks. This complexity has amplified the
need to develop an integrated architecture for product return as an
enterprise system. The main purpose of this paper is to design Multi
Agent System (MAS) architecture using the Prometheus
methodology to efficiently manage reverse logistics processes. The
proposed MAS architecture includes five types of agents: Gate
keeping Agent, Collection Agent, Sorting Agent, Processing Agent
and Disposal Agent which act respectively during the five steps of
reverse logistics Network.
Abstract: The synthesis of CuFe2O4 spinel powders by an
optimized combustion-like process followed by calcination is
described herein. The samples were characterized using X-ray
diffraction (XRD), differential thermal analysis (TG/DTA), scanning
electron microscopy (SEM), dilatometry and 4-probe DC methods.
Different glycine to nitrate (G/N) ratios of 1 (fuel-deficient), 1.48
(stoichiometric) and 2 (fuel-rich) were employed. Calcining the
as-prepared powders at 800 and 1000°C for 5 hours showed that the G/N
ratio of 2 results in the formation of the desired copper spinel single
phase at both calcination temperatures. For G/N=1, formation of
CuFe2O4 takes place in three steps. First, iron and copper nitrates
decompose to iron oxide and pure copper. Then, copper transforms to
copper oxide and finally, copper and iron oxides react with each other
to form a copper ferrite spinel phase. The electrical conductivity and
the coefficient of thermal expansion of the sintered pelletized
samples were 2 S.cm-1 (800°C) and 11×10-6 °C-1 (25-800°C),
respectively.
Abstract: DNA barcodes provide a good source of the information
needed to classify living species. The classification problem has
to be supported by reliable methods and algorithms. To analyze
species regions or entire genomes, it becomes necessary to use
sequence-similarity methods. A large set of sequences can be
simultaneously compared using multiple sequence alignment, which
is known to be NP-complete. However, all the methods in use are still
computationally very expensive and require significant computational
infrastructure. Our goal is to build predictive models that are highly
accurate and interpretable. In fact, our method avoids the
complex problem of form and structure in different classes of
organisms. The empirical data and their classification performances
are compared with other methods. In this study, we present
our system, which consists of three phases. The first, called
transformation, is composed of three sub-steps: Electron-Ion
Interaction Pseudopotential (EIIP) codification of DNA barcodes,
Fourier transform, and power spectrum signal processing. The second
phase is an approximation step; it is powered by the use of
Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third,
the classification of DNA barcodes, is realized by applying a
hierarchical classification algorithm.
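The first-phase pipeline (EIIP codification followed by a power spectrum) can be sketched as follows, using the standard published EIIP nucleotide values and an invented example sequence:

```python
import cmath

# Sketch of the transformation phase: EIIP numeric codification of a DNA
# sequence followed by a DFT power spectrum (a naive DFT keeps the
# example dependency-free; FFT would be used in practice).
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(seq):
    signal = [EIIP[b] for b in seq]
    n = len(signal)
    spectrum = []
    for k in range(n):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(signal))
        spectrum.append(abs(s) ** 2)
    return spectrum

# A period-4 repeat of length 12 concentrates power at multiples of n/4.
ps = power_spectrum("ATGCATGCATGC")
print([round(p, 4) for p in ps])
```

The resulting spectral coefficients are the features that the later approximation and classification phases would consume.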
Abstract: The steepest descent method is a simple gradient method
for optimization. It converges slowly toward the optimal solution
because of the zigzag form of its steps. Barzilai and Borwein modified
the algorithm so that it performs well for problems with large
dimensions. The Barzilai and Borwein method's results have sparked a
lot of research on the steepest descent method, including the
alternate minimization gradient method and Yuan's method. Inspired by
these works, we modified the step size of the steepest descent method.
We then compare the modified method against the Barzilai and Borwein
method, the alternate minimization gradient method, and Yuan's method
on quadratic function cases in terms of the number of iterations and
the running time. The average results indicate that the steepest
descent method with the new step sizes provides good results for small
dimensions and can compete with the Barzilai and Borwein method and
the alternate minimization gradient method for large dimensions. The
new step sizes yield faster convergence than the other methods,
especially for cases with large dimensions.
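For reference, the Barzilai and Borwein step size that serves as the baseline here can be sketched on a small quadratic (an illustrative implementation with an invented test problem, not the paper's code):

```python
# Barzilai-Borwein gradient descent on f(x) = 0.5 x^T A x - b^T x with a
# diagonal A; the BB1 step alpha = s^T s / s^T y replaces a line search.
def grad(x, diag_a, b):
    return [a * xi - bi for a, xi, bi in zip(diag_a, x, b)]

def bb_descent(diag_a, b, x0, tol=1e-8, max_iter=500):
    x = x0[:]
    g = grad(x, diag_a, b)
    alpha = 1e-3                        # small initial step
    for it in range(1, max_iter + 1):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new, diag_a, b)
        s = [xn - xi for xn, xi in zip(x_new, x)]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy != 0:
            alpha = sum(si * si for si in s) / sy   # BB1 step size
        x, g = x_new, g_new
        if max(abs(gi) for gi in g) < tol:
            return x, it
    return x, max_iter

# Minimizer of the quadratic with diag(A) = [1, 100], b = [1, 100] is [1, 1].
x, iters = bb_descent([1.0, 100.0], [1.0, 100.0], [0.0, 0.0])
print(x, iters)
```

The step size adapts to local curvature without any extra function evaluations, which is what lets BB avoid the zigzag behavior of fixed-step steepest descent on ill-conditioned quadratics.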
Abstract: In Electric Power Steering (EPS), spoke-type
Brushless AC (BLAC) motors offer distinct advantages over other
electric motor types in terms of torque smoothness, reliability, and
efficiency. This paper deals with the shape optimization of a
spoke-type BLAC motor in order to reduce cogging torque. It examines
three steps: skewing the rotor angle, optimizing the rotor core edge,
and adjusting the rotor overlap length. The methods were applied to
existing machine designs and their performance was calculated using
finite-element analysis (FEA). Prototypes of the machine designs were
constructed and experimental results obtained. It is shown that the
FEA predicted the cogging torque to be nearly eliminated using these
methods.
Abstract: Neural activity in the human brain starts from the
early stages of prenatal development. This activity or signals
generated by the brain are electrical in nature and represent not only
the brain function but also the status of the whole body. At the
present moment, three methods can record functional and
physiological changes within the brain with high temporal resolution
of neuronal interactions at the network level: the
electroencephalogram (EEG), the magnetoencephalogram (MEG),
and functional magnetic resonance imaging (fMRI); each of these has
advantages and shortcomings. EEG recording with a large number of
electrodes is now feasible in clinical practice. Multichannel EEG
recorded from the scalp surface provides very valuable but indirect
information about the source distribution. However, deep electrode
measurements yield more reliable information about the source
locations. Intracranial recordings and scalp EEG are used with
source imaging techniques to determine the locations and strengths of
the epileptic activity. As a source localization method, Low
Resolution Electro-Magnetic Tomography (LORETA) is solved for
the realistic geometry based on both forward methods, the Boundary
Element Method (BEM) and the Finite Difference Method (FDM). In
this paper, we review the EEG-LORETA findings on epilepsy.
Abstract: Rice bran is normally used as a raw material for rice
bran oil production or sold as feed with a low price. Conventionally,
the protein in defatted rice bran is extracted using alkaline
extraction and acid precipitation, which involves chemical usage and
degrades some nutritious components. This study was conducted in order
to extract rice bran protein concentrate (RBPC) from defatted rice
bran using enzymes and employing polysaccharides in the precipitating
step. The properties of the RBPC obtained were compared to those of a
control sample extracted using the conventional method. The results
showed that extraction of protein from rice bran using enzymes
exhibited higher protein recovery than alkaline extraction. The
extraction conditions using alcalase 2% (v/w) at 50°C, pH 9.5 gave the
highest protein (2.44%) and yield (32.09%) in the extracted solution
compared to other enzymes. Rice bran
protein concentrate powder prepared by a precipitation step using
alginate (protein in solution: alginate 1:0.016) exhibited the highest
protein (27.55%) and yield (6.84%). Precipitation using alginate was
better than that of acid. RBPC extracted with alkaline (ALK) or
enzyme alcalase (ALC), then precipitated with alginate (AL)
(samples RBP-ALK-AL and RBP-ALC-AL) yielded the precipitation
rate of 75% and 91.30%, respectively. Therefore, protein
precipitation using alginate was then selected. The amino acid
profiles of the control sample and of the sample precipitated with
alginate, compared with casein and soy protein isolate, showed that
the control sample had the highest content among all samples. A
functional property study of the RBPC showed that the highest nitrogen
solubility occurred at pH 8-10. There was no statistically significant
difference in emulsion capacity or emulsion stability between the
control and the sample precipitated with alginate. However, the
control sample showed higher foaming capacity and foaming stability
than the sample precipitated with alginate. The approach was
successful in minimizing the chemicals used in the extraction and
precipitation steps of rice bran protein concentrate preparation. This
research yields a value-added product: doubling the amount of protein
(28%) compared to the original amount (14%) contained in rice bran
could be beneficial when the concentrate is added to food products,
e.g., a healthy drink with high protein and fiber. In addition, basic
knowledge of the functional properties of rice bran protein
concentrate was obtained, which can be used to appropriately select
applications of this value-added product from rice bran.
Abstract: Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events, respectively. In summary, we have explored a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
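The occlusion-bridging role of the Kalman filter can be illustrated with a one-coordinate constant-velocity sketch (the noise values and observations are invented; the paper's filter is not reproduced):

```python
# Toy constant-velocity Kalman filter for one coordinate of a tracked
# object; during an occlusion (observation None) it predicts only,
# keeping the trajectory complete.
def kalman_track(observations, dt=1.0, q=1e-3, r=0.25):
    x, v = observations[0], 0.0        # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]       # 2x2 state covariance
    track = []
    for z in observations:
        # Predict with the constant-velocity model.
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        if z is not None:              # update only when the object is visible
            s = P[0][0] + r
            k0, k1 = P[0][0] / s, P[1][0] / s
            resid = z - x
            x, v = x + k0 * resid, v + k1 * resid
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        track.append(x)
    return track

# Object moving at +1 px/frame; frames 4-6 are occluded.
obs = [0.0, 1.0, 2.0, 3.0, None, None, None, 7.0, 8.0]
print([round(p, 2) for p in kalman_track(obs)])
```

During the occluded frames the filter coasts on its velocity estimate, so the trajectory (and its derivatives, used for the speed-based event analysis) stays continuous.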
Abstract: Image segmentation and color identification are
important processes used in various emerging fields such as
intelligent robotics. A method is proposed for a manipulator to grasp
a colored object and place it in the correct location. Existing
methods such as PSO have problems with slow convergence and with
converging to local minima, leading to suboptimal performance. To
improve performance, we use the watershed algorithm for segmentation
and EPSO for color identification. The EPSO method is used to reduce
the probability of being stuck in a local minimum and offers the
particles a more powerful global exploration capability. EPSO can
detect particles stuck in a local minimum and can also enhance
learning speed, as particle movement is faster.
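For context, a plain PSO baseline (which EPSO extends with local-minimum escape mechanisms, not reproduced here) can be sketched on an invented 2-D test function:

```python
import random

random.seed(0)

# Plain particle swarm optimization on a 2-D sphere function; standard
# inertia/cognitive/social weights, all values illustrative.
def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    f = lambda p: p[0] ** 2 + p[1] ** 2
    pos = [[random.uniform(-5, 5) for _ in range(2)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal bests
    gbest = min(pos, key=f)[:]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
    return gbest, f(gbest)

best, val = pso()
print(best, val)
```

An enhanced variant would add logic here to detect stagnating particles and reseed them, which is the kind of mechanism the abstract attributes to EPSO.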
Abstract: This article introduces the meaning and form of the
social quality moving process as indicated by members of two suburban
communities with different social and cultural contexts. The form of
the social quality moving process is very significant for community
and social development, because it enables people to live together
with sustainable happiness.
This is a qualitative study involving 30 key informants from two
suburban communities. Data were collected through key-informant
interviews and analyzed using logical content description and
descriptive statistics.
This research found that, on the social quality component, the
people in both communities stressed the procedure for social
quality-making. This includes generosity, sharing, and assisting among
people in the communities. These practices helped people to live
together with sustainable happiness. Living as a family, or appearing
to be a family, is the major social characteristic of these two
communities.
This research also found that the form of the social quality moving
process of both communities stresses the relation between humans and
nature; the "nature overpowers humans" paradigm and the influence of
religious doctrine emphasize relations among humans. Both make the
form of the social moving process simple, adaptive to nature, and
caring of opinion sharing and mutual understanding before action. This
form of the social quality moving process is composed of 4 steps:
(1) awareness building, (2) motivation to change, (3) participation
from every concerned party, and (4) self-reliance.
Abstract: Through use of novel modern/rapid processing
techniques such as screen printing and Near-Infrared (NIR) radiative
curing, the process time for the sintering of nickel plaques,
applicable to alkaline nickel battery chemistries, has been
drastically reduced from in excess of 200 minutes with conventional
convection methods to below 2 minutes using NIR curing. Steps have
also been taken to remove the need for forming gas as a reducing
agent by implementing carbon as an in-situ reducing agent, within the
ink formulation.
Abstract: Background: Worldwide, at least 2.8 million people
die each year as a result of being overweight or obese, and 35.8
million (2.3%) of global DALYs are caused by overweight or
obesity. Obesity is acknowledged as one of the burning public
health problems reducing life expectancy and quality of life. The
body composition analysis of the university population is essential
in assessing the nutritional status, as well as the risk of developing
diseases associated with abnormal body fat content so as to make
nutritional recommendations. Objectives: The main aim was to
determine the prevalence of obesity and overweight in University
students using Anthropometric analysis and BIA methods. Material
and Methods: In this cross-sectional study, 283 university students
participated. The body composition analysis was undertaken by
using mainly: i) Anthropometric Measurement: Height, Weight,
BMI, waist circumference, hip circumference and skin fold
thickness, ii) Bio-electrical impedance was used for analysis of
body fat mass, fat percent and visceral fat which was measured by
Tanita SC-330P Professional Body Composition Analyzer. The
data so collected were compiled in MS Excel and analyzed for
males and females using SPSS 16. Results and Discussion: The
mean age of the male (n= 153) studied subjects was 25.37 ±2.39
years and of the females (n=130) was 22.53 ±2.31 years. The BIA data
revealed very high mean fat per cent of the female subjects i.e.
30.3±6.5 per cent whereas mean fat per cent of the male subjects
was 15.60±6.02 per cent indicating a normal body fat range. The
findings showed high visceral fat of both males (12.92±3.02) and
females (16.86±4.98). BF% and WHR were higher among
females, and BMI was higher among males. The most evident
correlation was verified between BF% and WHR for female
students (r=0.902; p
Abstract: Waste load allocation (WLA) policies may use multiobjective
optimization methods to find the most appropriate and
sustainable solutions. These usually intend to simultaneously
minimize two criteria, total abatement costs (TC) and environmental
violations (EV). If other criteria, such as inequity, need to be
minimized as well, more binary optimizations through different
scenarios must be introduced. In order to reduce the
calculation steps, this study presents value index as an innovative
decision making approach. Since the value index contains both the
environmental violation and treatment costs, it can be maximized
simultaneously with the equity index. It implies that the definition of
different scenarios for environmental violations is no longer required.
Furthermore, the solution is not necessarily the point with minimized
total costs or environmental violations. This idea is tested for the
Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of
the river is simulated by the Streeter-Phelps equation in MATLAB. The
WLA is determined for fish farms using multi-objective particle
swarm optimization (MOPSO) in two scenarios. In the first, the
trade-off curves of TC-EV and TC-Inequity are plotted separately, as
in the conventional approach. In the second, the Value-Equity curve is
derived. The comparative results show that the solutions are in a
similar range of inequity with lower total costs. This is due to the
freedom of environmental violation attained in value index. As a
result, the conventional approach can well be replaced by the value
index particularly for problems optimizing these objectives. This
reduces the process of achieving the best solutions and may provide a
better classification for scenario definition. It is also concluded
that decision makers should focus on the value index and weight its
components to find the most sustainable alternatives based on their
requirements.
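The classical Streeter-Phelps deficit equation underlying the DO simulation can be sketched as follows (the rate constants and loads are illustrative, not those of the Haraz River model):

```python
import math

# Classical Streeter-Phelps DO sag: deficit D(t) from deoxygenation rate
# kd, reaeration rate ka, initial BOD L0 and initial deficit D0.
# All parameter values below are invented for illustration.
def streeter_phelps(t, kd=0.35, ka=0.7, L0=10.0, D0=1.0):
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

# Critical time at which the deficit peaks (standard closed form):
tc = (1.0 / (0.7 - 0.35)) * math.log(
    (0.7 / 0.35) * (1 - 1.0 * (0.7 - 0.35) / (0.35 * 10.0)))

profile = [round(streeter_phelps(t), 3) for t in range(0, 8)]
print(profile, round(tc, 2))
```

The DO level at each fish farm would be the saturation concentration minus this deficit, which is the quantity the environmental-violation term of the optimization penalizes.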