Abstract: Landslides are slope processes that occur every year in Iran and other parts of the world and cause considerable human and financial losses. There are many methods for stabilizing landslides in soil and rock slopes, and applying the best method at the lowest cost and in the shortest time is important for researchers. In this research, the selection of the best stabilization method is investigated using a Decision Support System (DSS). A DSS was built for this purpose and applied to the Hasan Salaran area in Kurdistan. Field-study data on topography, slope, geology, landslide geometry and related features were used, and these data were entered into the decision-management program (DSS/ALES). Analysis of the mass stability indicated a potential for instability under present conditions. The results show that surface and subsurface drainage is the best stabilization method, and the stability analysis shows that drainage yields an acceptable increase in the factor of safety.
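As a hedged illustration only (not taken from the paper), the stabilizing effect of drainage can be seen in the standard infinite-slope expression for the factor of safety, where lowering the pore water pressure u raises FS:

\[
FS = \frac{c' + (\gamma z \cos^2\beta - u)\tan\phi'}{\gamma z \sin\beta \cos\beta}
\]

Here c' and \phi' are the effective cohesion and friction angle, \gamma the unit weight, z the depth of the slip surface and \beta the slope angle; surface and subsurface drainage reduce u and therefore increase FS.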
Abstract: The objective of this research is to develop operational-level performance indicators (PIs) for the pre-hospital Emergency Medical Service (EMS) system employed in Thailand. The research started by ascertaining the current pre-hospital care system. The team analyzed the strategies of Narerthorn, a government unit under the Ministry of Public Health, and the existing PIs of pre-hospital care. Afterwards, the current National Strategic Plan of EMS Development (2008-2012) of the Emergency Medical Institute of Thailand (EMIT) was examined using strategic analysis to develop a Strategy Map (SM) and identify the Success Factors (SFs). The results of the strategy map and SF analysis were used to develop the Performance Indicators (PIs). To verify the set of PIs, the team interviewed the relevant practitioners about the feasibility of implementing them. The aim of this paper is to ascertain that all the developed PIs support the objectives of the strategic plan. Nevertheless, the results showed that the operational-level PIs suited only the first dimension of the National Strategic Plan (infrastructure and information technology development). Accordingly, the SF was infrastructure development (to provide the EMS system to people nationwide with standard quality and efficiency in both normal and disaster conditions). Finally, twenty-nine indicators were developed from the analysis results of the SM and SFs.
Abstract: Soil moisture content is an important property of the soil. The mean weekly gravimetric soil moisture content, measured for three soil layers within the A horizon, was higher for the top 5 cm layer over the whole monitoring period (15/7/2004 to 10/11/2005), with the variation becoming greater during winter. This reflects the pattern of rainfall in Ireland, which is spread over the whole year, and shows that light rainfall events during summer were compensated by losses through evapotranspiration, but only in the top 5 cm of soil. This layer had the highest porosity and the highest moisture-holding capacity due to its high organic matter content. The gravimetric soil moisture contents of the top 5 cm and the underlying 5-15 cm and 15-25 cm layers show that the bottom site of the Hill Field had a higher soil moisture content than the middle and top sites during the whole monitoring period.
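As a hedged reminder of the standard definition (not stated in the abstract itself), the gravimetric soil moisture content reported here is normally computed from the wet and oven-dry sample masses as

\[
w = \frac{m_{wet} - m_{dry}}{m_{dry}} \times 100\%
\]

so a layer rich in organic matter, with high porosity, can hold proportionally more water per unit dry mass.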
Abstract: This is an applied research study presenting the improvement of production quality using six sigma methodology and an analysis of the benefit-cost ratio. The case of interest is the production of concrete tiles. The production had faced a problem of a high rate of nonconforming products caused by inappropriate surface coating, and had low process capability with respect to the strength of the tile. Surface coating and tile strength are the characteristics most critical to the quality of this product. The improvements followed the five stages of six sigma. After the improvement, the production yield reached the required target of 80%, and the proportion of defective products from the coating process was remarkably reduced from 29.40% to 4.09%. The process capability based on strength quality was increased from 0.87 to 1.08, as required by the customer. The improvement saved material losses of 3.24 million baht, or 0.11 million dollars. The benefits from the improvement were evaluated from (1) the reduction in the number of nonconforming tiles, valued at factory price, for the surface coating improvement, and (2) the materials saved through the increase in process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment showed no return during the define, measure, analyze and early improve stages, after which the ratio kept increasing. This is because there are no benefits in the define, measure and analyze stages of six sigma, since these three stages mainly determine the causes of the problem and its effects rather than improve the process. The benefit-cost ratio starts to appear in the improve stage and grows from there on. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, because costs accumulate from the first stage of six sigma. Considering the benefit-cost ratio during an improvement project helps in making cost-saving decisions for similar activities during the improvement and for new projects. In conclusion, examining the behavior of the benefit-cost ratio throughout the six sigma implementation period provides useful data for managing quality improvement with optimal effectiveness. This is an additional outcome beyond the regular conduct of six sigma.
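As a hedged reference (standard definitions, not details taken from the paper), the two figures tracked above are usually computed as a capability index and a benefit-cost ratio of the form

\[
C_p = \frac{USL - LSL}{6\sigma}, \qquad BCR = \frac{\sum \text{benefits}}{\sum \text{costs}}
\]

where the capability index is shown in its two-sided form (a one-sided form is used when only a lower strength limit applies). On these definitions, a rise from 0.87 to 1.08 means the strength tolerance band now spans more than six process standard deviations, and a BCR of 7.03 means the accumulated benefits were about seven times the accumulated project cost.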
Abstract: The above-elbow level is one of the most common sites of upper limb amputation or limb absence. This research covers modelling techniques for an upper limb prosthesis and the design of a high-torque, lightweight and compact elbow actuator. The proposed actuator consists of a DC motor, a planetary gear set and a harmonic drive. The calculations show that the actuator is good enough to be used in a real-life powered prosthetic upper limb or rehabilitation exoskeleton.
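As a hedged sketch of the sizing logic (the symbols and the notion of per-stage efficiency are illustrative assumptions, not the paper's calculation), the torque available at the elbow joint from such a two-stage reduction is roughly

\[
T_{elbow} \approx T_{motor}\; N_{planetary}\; N_{harmonic}\; \eta_{planetary}\; \eta_{harmonic}
\]

where N denotes each stage's reduction ratio and \eta its efficiency; the high ratio of the harmonic drive is what allows a small, light DC motor to deliver joint-level torque.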
Abstract: The objective of this paper is to study the discrepancy in electrical resistivity between field and laboratory measurements, in order to improve the effectiveness of data interpretation for geophysical ground resistivity surveys. A geological outcrop in Penang, Malaysia, with an obvious layering contact was chosen as the study site. Two-dimensional geoelectrical resistivity imaging was used to map the resistivity distribution of the subsurface, while a few subsurface samples were obtained for further laboratory work. The resistivity of the samples in their original condition was measured in the laboratory using a time-domain low-voltage technique for the granite core samples and a soil resistivity measuring set for the soil samples. The experimental results from both schemes were studied, analyzed, calibrated and verified, including their basis and correlation, degree of tolerance and the characteristics of the substances. The significant difference between the two schemes is then explained comprehensively within this paper.
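As a hedged note on the laboratory side (a standard relation, not a detail reported in the abstract), the sample resistivity is normally recovered from the measured resistance R and the sample geometry as

\[
\rho = R\,\frac{A}{L}
\]

with A the cross-sectional area and L the length of the core or soil specimen, whereas the field imaging inverts apparent resistivities measured over a much larger, heterogeneous volume; this difference in sampled volume alone can lead to differences between the two schemes.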
Abstract: In this paper, transversal vibration of buried pipelines
during loading induced by underground explosions is analyzed. The
pipeline is modeled as an infinite beam on an elastic foundation, so
that soil-structure interaction is considered by means of transverse
linear springs along the pipeline. The pipeline behavior is assumed to be ideally elasto-plastic, with an ultimate strain value limiting the plastic behavior. The blast loading is modeled as a point load, representing the affected length at some point of the pipeline, whose magnitude decreases exponentially with time. A closed-form solution for the quasi-static problem is derived for both elastic and elastic-perfectly plastic behavior of the pipe material. Finally, a comparative study on steel and polyethylene pipes of different sizes, buried in various soil conditions and affected by a predefined underground explosion, is conducted, in which the effect of each parameter is discussed.
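As a hedged illustration of the setup described above (the symbols are assumptions, not the paper's notation), the beam-on-elastic-foundation model with an exponentially decaying point load takes the form

\[
EI\,\frac{\partial^4 w}{\partial x^4} + k\,w = P_0\, e^{-\lambda t}\, \delta(x - x_0)
\]

where EI is the pipe's flexural rigidity, k the transverse soil spring stiffness per unit length, w the transverse deflection, and P_0, \lambda and x_0 the initial magnitude, decay rate and location of the blast load.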
Abstract: Parallel programming models exist as an abstraction of hardware and memory architectures. Several parallel programming models are in common use: the shared memory model, the thread model, the message passing model, the data parallel model, the hybrid model, Flynn's models, the embarrassingly parallel computations model and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper presents a model program for a concurrent approach to the data parallel model using Java programming.
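As a hedged illustration (a minimal sketch, not the paper's model program), the data parallel idea of applying the same operation to every element of a collection can be expressed in Java with a parallel stream, which partitions the index range across worker threads and combines the partial results:

import java.util.Arrays;
import java.util.stream.IntStream;

public class DataParallelSum {
    public static void main(String[] args) {
        // Hypothetical input: the integers 1..1,000,000.
        int[] data = IntStream.rangeClosed(1, 1_000_000).toArray();

        // Data parallel step: the same map operation (squaring) is applied
        // to every element; the runtime splits the array across threads.
        long sumOfSquares = Arrays.stream(data)
                .parallel()
                .mapToLong(x -> (long) x * x)
                .sum();

        System.out.println("Sum of squares = " + sumOfSquares);
    }
}

The same decomposition could equally be written with explicit threads or a Fork/Join pool; the stream form simply keeps the sketch short.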
Abstract: This study performs a comparative analysis of the 21 Greek universities in terms of the public funding awarded to cover their operating expenditure. First it introduces a DEA/MCDM model that allocates the funds among four expenditure factors in the most favorable way for each university. Then it presents a common, consensual assessment model to reallocate the amounts while remaining at the same level of total public budget. The analysis shows that a number of universities cannot justify their public funding in terms of their size and operational workload. For these universities, a sufficient reduction of their public funding is estimated as a future target. Due to the lack of precise data for a number of expenditure criteria, the analysis is based on a mixed crisp-ordinal data set.
Abstract: Unintentional islanding remains a challenge in grid-connected photovoltaic (PV) systems. This paper presents an overview of the anti-islanding detection methods most commonly applied in practice in grid-connected PV systems. Anti-islanding methods can generally be classified into four major groups: passive methods, active methods, hybrid methods and communication-based methods. Active methods have been the preferred detection technique over the years because of their very small non-detection zone (NDZ) in small-scale distributed generation. Passive methods are comparatively simpler than active methods in terms of circuitry and operation; however, they suffer from a large NDZ that significantly reduces their performance. Communication-based methods inherit the advantages of active and passive methods with fewer drawbacks. Hybrid methods, which evolved from the combination of active and passive methods, have been shown by many researchers to achieve accurate anti-islanding detection. For each of the studied anti-islanding methods, the operation is analyzed and the advantages and disadvantages are compared and discussed. It is difficult to pinpoint a generic method for a specific application, because most of the methods discussed are governed by the nature of the application and system-dependent elements. This study concludes that setup and operating cost is the vital factor in selecting an anti-islanding method, in order to minimize the compromise between cost and system quality.
Abstract: The blood pulse is an important human physiological signal commonly used for understanding an individual's physical health. Current methods of non-invasive blood pulse sensing require direct contact with, or access to, the human skin. As such, the performance of these devices tends to vary with time and is affected by human body fluids (e.g. blood, perspiration and skin oil) and environmental contaminants (e.g. mud and water). This paper proposes a simulation model for a novel method of non-invasive acquisition of the blood pulse using the disturbance created by blood flowing through a localized magnetic field. The simulation model geometry represents a blood vessel, a permanent magnet, a magnetic sensor, the surrounding tissues and air in two dimensions. In this model, the velocity and pressure fields in the blood stream are described by the Navier-Stokes equations, and the walls of the blood vessel are assumed to satisfy a no-slip condition. The blood is assumed to have a parabolic velocity profile, consistent with laminar flow in a major artery near the skin, and the inlet velocity follows a sinusoidal equation. This allows the computational software to compute the interactions between the magnetic vector potential generated by the permanent magnet and the magnetic nanoparticles in the blood. These interactions are simulated based on Maxwell's equations at the location where the magnetic sensor is placed. The simulated magnetic field at the sensor location is found to assume sinusoidal waveform characteristics similar to the inlet velocity of the blood. The amplitudes of the simulated waveforms at the sensor location are compared with physical measurements on human subjects and found to be highly correlated.
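As a hedged restatement of the flow model described above (standard forms, with the inlet amplitude and frequency written as assumed symbols), the incompressible Navier-Stokes equations and a sinusoidal inlet velocity can be written as

\[
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu \nabla^2 \mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0, \qquad u_{in}(t) = u_0 \sin(2\pi f t)
\]

with \rho and \mu the blood density and viscosity, u_0 the peak inlet velocity and f the pulse frequency; the no-slip condition sets \mathbf{u} = 0 on the vessel walls.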
Abstract: The objective of this paper is to estimate realistic principal extrusion process parameters by means of an artificial neural network. Conventionally, finite element analysis is used to derive the process parameters. However, finite element analysis of the extrusion model does not consider the manufacturing process constraints in its modeling. Therefore, the process parameters obtained through such an analysis remain highly theoretical. Alternatively, process development in industrial extrusion is to a great extent based on trial and error and often involves full-size experiments, which are both expensive and time-consuming. An artificial neural network-based estimation of the extrusion process parameters prior to plant execution helps make the actual extrusion operation more efficient, because more realistic parameters are obtained. It thus bridges the gap between simulation and the real manufacturing execution system. In this work, a suitable neural network is designed and trained using an appropriate learning algorithm. The trained network is then used to predict the manufacturing process parameters.
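As a hedged sketch of what such a trained network computes (the layer count and symbols are illustrative assumptions, not the paper's architecture), a feed-forward network mapping input features x to estimated process parameters \hat{p} has the form

\[
\hat{p} = f_2\!\left(W_2\, f_1\!\left(W_1 x + b_1\right) + b_2\right)
\]

where the weight matrices W_i and biases b_i are fitted to example extrusion cases by the learning algorithm, and f_1, f_2 are the activation functions.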
Abstract: Sharing a manufacturing facility through remote operation and monitoring of a machining process is a challenge for the effective use of production facilities. Several automation tools, in terms of both hardware and software, are necessary for successful remote operation of a machine. This paper presents a prototype workpiece-holding attachment for remote operation of a milling process through self-configuration of the workpiece setup. The prototype is designed with a mechanism that reorients the work surface towards the machining spindle direction with high positioning accuracy. A variety of part geometries can be held by the attachment to perform single-setup machining. Pins arranged in an array pattern additionally clamp the workpiece surface from two opposite directions to increase machining rigidity. The optimum pin configuration for conforming to the workpiece geometry with minimum deformation is determined through hybrid algorithms: Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). The prototype with this intelligent optimization technique is able to hold a wide variety of workpiece geometries, which makes it suitable for machining low-volume repetitive production in remote operation.
Abstract: Electrospinning is a broadly used technology for obtaining polymeric nanofibers ranging from several micrometers down to several hundred nanometers for a wide range of applications. It offers unique capabilities to produce nanofibers with a controllable porous structure. With smaller pores and higher surface area than regular fibers, electrospun fibers have been successfully applied in various fields such as nanocatalysis, tissue engineering scaffolds, protective clothing, filtration, biomedicine, pharmaceuticals, optical electronics, healthcare, biotechnology, defense and security, and environmental engineering. In this study, polyurethane nanofibers were obtained under different electrospinning parameters. Fiber morphology and diameter distribution were investigated in order to understand them as a function of the process parameters.
Abstract: Statistical learning theory was developed by Vapnik. It is a learning theory based on the Vapnik-Chervonenkis dimension and has been used as a good analytical tool in learning models. In general, learning theories face several problems, among them local optima and over-fitting. Statistical learning theory shares these problems, because the kernel type, the kernel parameters and the regularization constant C are determined subjectively by the art of the researcher. We therefore propose an evolutionary statistical learning theory to settle the problems of the original statistical learning theory. Our theory is constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup.
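As a hedged reference (the standard soft-margin formulation, not a result of the paper), the role of the quantities tuned by the evolutionary search can be seen in the support vector machine objective

\[
\min_{w,\, b,\, \xi}\; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i\left(w \cdot \phi(x_i) + b\right) \ge 1 - \xi_i,\; \xi_i \ge 0,
\]

where the feature map \phi (equivalently the kernel K(x_i, x_j) = \phi(x_i)\cdot\phi(x_j)) and the constant C are exactly the choices that are otherwise left to the researcher's judgment.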
Abstract: An optical fiber Fabry-Perot interferometer (FFPI) is proposed and demonstrated for dynamic measurements on a mechanically vibrating target. A polished metal surface with a low reflectance, adhered to a mechanical vibrator, was excited via a function generator at various excitation frequencies. Output interference fringes were generated by modulating the reference and sensing signals at the output arm. A fringe-counting technique was used to interpret the displacement information on a dedicated computer. The fiber interferometer was found to be capable of displacement measurements from 1.28 μm to 96.01 μm. A commercial displacement sensor was employed as a reference sensor to investigate the measurement errors of the fiber sensor. A maximum measurement error of approximately 1.59% was obtained.
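As a hedged note on the fringe-counting step (the standard relation, not a detail given in the abstract), each full interference fringe corresponds to a half-wavelength change in the optical path, so a count of N fringes at source wavelength \lambda gives a displacement of

\[
d = \frac{N \lambda}{2},
\]

which is how the counted fringes are converted into the micrometre-range displacements reported above.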
Abstract: This paper introduces a temporal epistemic logic, CBCTL, based on computation tree logic (CTL), that updates agents' belief states through communications between them. In practical environments, communication channels between agents may not be secure, and in the worst case agents might suffer blackouts. In this study, we provide an inform* protocol based on the ACL of FIPA, and declare the presence of secure channels between two agents as dependent on time. Thus, the belief state of each agent is updated as time progresses. We present a prover, that is, a reasoning system for a given formula in a given situation of an agent; if the formula is directly provable, or if it can be validated through chains of communications, the system returns the proof.
Abstract: Software Development Risks Identification (SDRI) using Fault Tree Analysis (FTA) is a proposed technique for identifying not only the risk factors but also the causes of the appearance of those risk factors in the software development life cycle. The method is based on analyzing the probable causes of software development failures before they become problems and adversely affect a project. It uses fault tree analysis to determine the probability of particular system-level failures, which are defined by A Taxonomy for Sources of Software Development Risk, and to derive a failure analysis in which an undesired state of a system is analyzed by using Boolean logic to combine a series of lower-level events. The major purpose of this paper is to use the probabilistic calculations of the fault tree analysis approach to determine all possible causes that lead to the occurrence of software development risk.
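As a hedged sketch (the event names and probabilities below are hypothetical, not taken from the paper), the probabilistic combination of lower-level events through Boolean AND/OR gates that FTA relies on can be illustrated as follows, assuming independent basic events:

/** Minimal fault-tree gate calculation; all event names and values are hypothetical. */
public class FaultTreeSketch {

    // AND gate: the output event occurs only if all independent inputs occur.
    static double andGate(double... p) {
        double result = 1.0;
        for (double pi : p) result *= pi;
        return result;
    }

    // OR gate: the output event occurs if at least one independent input occurs.
    static double orGate(double... p) {
        double none = 1.0;
        for (double pi : p) none *= (1.0 - pi);
        return 1.0 - none;
    }

    public static void main(String[] args) {
        double unclearRequirements = 0.10;   // hypothetical basic event
        double keyStaffTurnover    = 0.05;   // hypothetical basic event
        double scheduleOverrun     = orGate(unclearRequirements, keyStaffTurnover);
        double inadequateTesting   = 0.08;   // hypothetical basic event
        double releaseFailure      = andGate(scheduleOverrun, inadequateTesting);
        System.out.printf("P(schedule overrun) = %.4f%n", scheduleOverrun);
        System.out.printf("P(release failure)  = %.4f%n", releaseFailure);
    }
}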
Abstract: The authors present an algorithm for order reduction of linear dynamic systems using the combined advantages of the stability equation method and error minimization by a genetic algorithm. The denominator of the reduced-order model is obtained by the stability equation method, and the numerator terms of the lower-order transfer function are determined by minimizing the integral square error between the transient responses of the original and reduced-order models using the genetic algorithm. The reduction procedure is simple and computer-oriented. It is shown that the algorithm has several advantages; for example, the reduced-order models retain the steady-state value and stability of the original system. The proposed algorithm has also been extended to the order reduction of linear multivariable systems. Two numerical examples, including one multivariable system, are solved to illustrate the superiority of the algorithm over some existing ones.
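As a hedged restatement of the criterion described above (a standard form; y and y_r are assumed symbols for the transient responses of the original and reduced models), the genetic algorithm searches for the numerator coefficients that minimize the integral square error

\[
ISE = \int_0^{\infty} \left[\, y(t) - y_r(t) \,\right]^2 dt,
\]

while the denominator fixed by the stability equation method keeps the reduced model stable.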
Abstract: In this paper we compare the responses of linear and nonlinear neural network-based prediction schemes for predicting the received Signal-to-Interference power Ratio (SIR) in Direct Sequence Code Division Multiple Access (DS/CDMA) systems. The nonlinear predictor is a Multilayer Perceptron (MLP) and the linear predictor is an Adaptive Linear (Adaline) predictor. We address the problem of complexity by using the Minimum Mean Squared Error (MMSE) principle to select the optimal predictors. The optimized Adaline predictor is compared to the optimized MLP using noisy Rayleigh fading signals with a 1.8 GHz carrier frequency in an urban environment. The results show that the Adaline predictor can estimate the SIR with the same error as the MLP when the user moves at 5 km/h or 60 km/h, but when the velocity increases to 120 km/h the mean squared error of the MLP is twice that of the Adaline predictor. This makes the Adaline predictor (with its lower complexity) more suitable than the MLP for closed-loop power control, where efficient and accurate identification of the time-varying inverse dynamics of the multipath fading channel is required.
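As a hedged sketch (illustrative only; the tap count, step size and SIR values are assumptions, not the paper's settings), an Adaline one-step SIR predictor driven by the standard LMS update rule can be outlined as:

/** Minimal Adaline (LMS) one-step predictor sketch; all numeric values are illustrative. */
public class AdalineSirPredictor {
    private final double[] w;   // tap weights
    private final double mu;    // LMS step size (assumed value)

    AdalineSirPredictor(int taps, double mu) {
        this.w = new double[taps];
        this.mu = mu;
    }

    /** Linear prediction of the next SIR sample from the most recent 'taps' samples. */
    double predict(double[] x) {
        double y = 0.0;
        for (int i = 0; i < w.length; i++) y += w[i] * x[i];
        return y;
    }

    /** LMS update: w <- w + mu * e * x, with e = desired - predicted. */
    double update(double[] x, double desired) {
        double e = desired - predict(x);
        for (int i = 0; i < w.length; i++) w[i] += mu * e * x[i];
        return e;
    }

    public static void main(String[] args) {
        AdalineSirPredictor p = new AdalineSirPredictor(4, 0.01);
        double[] sir = {10.2, 10.5, 10.1, 9.8, 10.0, 10.4, 10.6};  // hypothetical SIR samples (dB)
        for (int n = 4; n < sir.length; n++) {
            double[] x = {sir[n - 1], sir[n - 2], sir[n - 3], sir[n - 4]};
            double err = p.update(x, sir[n]);
            System.out.printf("n=%d prediction error=%.4f%n", n, err);
        }
    }
}

The low complexity noted in the abstract is visible here: each Adaline step is a single dot product plus a weight correction, far cheaper than propagating errors through an MLP.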