Abstract: In this paper, we generalize several techniques for
developing fault-tolerant software. We introduce the property
"Correctness" for evaluating N-version systems and compare it to
commonly used properties such as reliability and availability.
We also derive the relation between this property and the number of
versions in the system. Experiments verifying the correctness and the
applicability of this relation are also presented.
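Under the common independence assumption, the correctness of a majority-voting N-version system grows with the number of versions whenever each version is more likely right than wrong. The abstract does not give its relation explicitly, so the sketch below shows only the standard binomial form of this idea:

```python
from math import comb

def majority_correctness(n: int, p: float) -> float:
    """Probability that a strict majority of n independent versions,
    each correct with probability p, produces the correct output."""
    k_min = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# With p > 0.5, adding versions increases system correctness;
# with p < 0.5, voting makes things worse.
for n in (1, 3, 5, 7):
    print(n, round(majority_correctness(n, 0.9), 4))
```

The independence assumption is the usual caveat: correlated faults across versions weaken the binomial relation.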
Abstract: Sputum smear conversion after one month of antituberculosis
therapy in new smear-positive pulmonary tuberculosis
patients (PTB+) is a vital indicator of treatment success. The
objective of this study is to determine the rate of sputum smear
conversion in new PTB+ patients after one month of treatment at the
National Institute of Diseases of the Chest and Hospital (NIDCH).
Sputum smear conversion was analyzed by clinical re-examination
with a sputum smear microscopy test after one month.
Socio-demographic and hematological parameters were evaluated to
assess their correlation with disease status. Among all enrolled
patients, only 33.33% were available for follow-up diagnosis, and of
these only 42.86% converted to smear-negative, probably owing to
non-adherence to proper disease management. Low haemoglobin and
packed cell volume levels were reported in 66.67% and 78.78% of
patients respectively, whereas 80% and 93.33% of patients showed
elevated platelet counts and erythrocyte sedimentation rates
correspondingly.
Abstract: Evolutionary algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, many real-life optimization problems
require finding optimal solutions to complex, high-dimensional,
multimodal problems involving computationally very expensive
fitness function evaluations. The use of evolutionary algorithms in
such problem domains is thus practically prohibitive. An attractive
alternative is to build meta-models, i.e., approximations of the
actual fitness functions to be evaluated. These meta-models are orders
of magnitude cheaper to evaluate than the actual function.
Many regression and interpolation tools are available for
building such meta-models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary algorithm
frameworks that use meta-models for fitness function
evaluation. The first framework, the Dynamic Approximate
Fitness based Hybrid EA (DAFHEA) model [14], reduces
computation time by the controlled use of meta-models (in this case
an approximate model generated by support vector machine
regression) to partially replace actual function evaluations with
approximate ones. However, the underlying
assumption in DAFHEA is that the training samples for the meta-model
are generated from a single uniform model, which does not
account for uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the
frameworks on several benchmark functions demonstrate their
efficiency.
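The controlled use of a meta-model inside an EA loop can be sketched as follows. This is not the DAFHEA algorithm itself: a Gaussian-kernel ridge regressor stands in for the SVM regression the paper uses, and the sphere function stands in for an expensive fitness; population size, mutation scale and the exact-evaluation fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_fitness(x):
    # Sphere function as a stand-in for a costly simulation.
    return float(np.sum(x ** 2))

class RBFSurrogate:
    """Gaussian-kernel ridge regression: a cheap stand-in for the
    SVM-regression meta-model that DAFHEA itself uses."""
    def __init__(self, gamma=0.5, lam=1e-4):
        self.gamma, self.lam = gamma, lam
    def fit(self, X, y):
        K = np.exp(-self.gamma * np.sum((X[:, None] - X[None]) ** 2, -1))
        self.X = X
        self.w = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
    def predict(self, X):
        K = np.exp(-self.gamma * np.sum((X[:, None] - self.X[None]) ** 2, -1))
        return K @ self.w

def surrogate_ea(dim=3, pop=20, gens=30, exact_frac=0.3):
    P = rng.uniform(-5, 5, (pop, dim))
    arch_X, arch_y = [], []
    model = RBFSurrogate()
    for _ in range(gens):
        # Controlled use of the true function on a subset of the population...
        n_exact = max(2, int(exact_frac * pop))
        exact = rng.choice(pop, n_exact, replace=False)
        fit = np.empty(pop)
        for i in exact:
            fit[i] = expensive_fitness(P[i])
            arch_X.append(P[i].copy()); arch_y.append(fit[i])
        # ...and the meta-model, trained on all exact samples so far,
        # for everyone else.
        model.fit(np.array(arch_X), np.array(arch_y))
        rest = [i for i in range(pop) if i not in set(exact)]
        if rest:
            fit[rest] = model.predict(P[rest])
        # Truncation selection plus Gaussian mutation.
        parents = P[np.argsort(fit)[: pop // 2]]
        P = np.vstack([parents, parents + rng.normal(0, 0.3, parents.shape)])
    return min(expensive_fitness(x) for x in P)
```

The key point the sketch illustrates is the budget arithmetic: only `exact_frac` of each generation pays the expensive evaluation cost, while the rest is scored by the meta-model.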
Abstract: This conference paper discusses a risk allocation problem for subprime investing banks involving investment in subprime structured mortgage products (SMPs) and Treasuries. In order to solve this problem, we develop a Lévy process-based model of jump diffusion-type for investment choice in subprime SMPs and Treasuries. This model incorporates subprime SMP losses for which credit default insurance in the form of credit default swaps (CDSs) can be purchased. In essence, we solve a mean swap-at-risk (SaR) optimization problem for investment which determines optimal allocation between SMPs and Treasuries subject to credit risk protection via CDSs. In this regard, SaR is indicative of how much protection investors must purchase from swap protection sellers in order to cover possible losses from SMP default. Here, SaR is defined in terms of value-at-risk (VaR). Finally, we provide an analysis of the aforementioned optimization problem and its connections with the subprime mortgage crisis (SMC).
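The mean-SaR optimization itself cannot be reproduced from the abstract, but the empirical VaR that SaR is defined from can be sketched under a Merton-style jump diffusion, a simple member of the Lévy family the paper works with. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def jump_diffusion_losses(s0=100.0, mu=0.02, sigma=0.25, lam=0.8,
                          jump_mu=-0.15, jump_sigma=0.10, T=1.0, n=100_000):
    """Terminal losses of an SMP position under a Merton-style jump
    diffusion: Brownian log-returns plus a Poisson number of normally
    distributed (here mostly negative) jumps."""
    z = rng.standard_normal(n)
    n_jumps = rng.poisson(lam * T, n)
    jump_part = jump_mu * n_jumps + jump_sigma * np.sqrt(n_jumps) * rng.standard_normal(n)
    log_ret = (mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z + jump_part
    s_T = s0 * np.exp(log_ret)
    return s0 - s_T            # positive values are losses

def var(losses, alpha=0.95):
    """Empirical value-at-risk: the alpha-quantile of the loss distribution."""
    return float(np.quantile(losses, alpha))
```

In the paper's terms, the CDS protection quantified by SaR would be sized against a VaR figure of this kind; the connection between SaR and VaR is stated in the abstract, the simulation model above is only an assumed stand-in.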
Abstract: The conjugate gradient optimization algorithm
usually used for nonlinear least squares is presented and is
combined with the modified back-propagation algorithm, yielding
a new fast training algorithm for multilayer perceptrons (MLP),
CGFR/AG. The approach presented in the paper consists of
three steps: (1) modification of the standard back-propagation
algorithm by introducing a gain variation term of the activation
function, (2) calculation of the gradient descent of the error with
respect to the weights and gain values, and (3) determination
of the new search direction by exploiting the information
calculated by gradient descent in step (2) as well as the previous
search direction. The proposed method improves the training
efficiency of the back-propagation algorithm by adaptively modifying
the initial search direction. Performance of the proposed method
is demonstrated by comparison with the conjugate gradient algorithm
from the neural network toolbox on the chosen benchmark. The
results show that the number of iterations required by the
proposed method to converge is less than 20% of that required
by the standard conjugate gradient and neural network toolbox
algorithms.
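The core of step (3), building each new search direction from the current gradient and the previous direction, can be sketched with the classical linear conjugate gradient method using the Fletcher-Reeves coefficient on a quadratic model. The paper applies this idea to MLP error surfaces with an adaptive gain term, which this generic sketch does not reproduce:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, iters=None):
    """Linear CG for min 0.5 x'Ax - b'x with A symmetric positive
    definite. Each new direction combines the current (negative)
    gradient with the previous search direction, as in step (3)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    r = b - A @ x                # residual = negative gradient
    d = r.copy()                 # first direction: steepest descent
    for _ in range(iters or n):
        if r @ r < 1e-30:        # already converged
            break
        alpha = (r @ r) / (d @ A @ d)     # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        d = r_new + beta * d              # reuse the previous direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)   # for a 2x2 SPD system, 2 steps suffice
```

On an n-dimensional quadratic, this converges in at most n steps, which is the property the nonlinear variant tries to inherit on MLP error surfaces.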
Abstract: In this paper we present a method for gene ranking
from DNA microarray data. More precisely, we calculate correlation
networks, which are unweighted and undirected graphs, from
microarray data of cervical cancer, where each network represents
a tissue of a certain tumor stage and each node in the network
represents a gene. From these networks we extract one tree for
each gene by a local decomposition of the correlation network. The
interpretation of a tree is that its n-th level contains the n-nearest-neighbor
genes, measured by the Dijkstra distance,
and, hence, it gives the local embedding of a gene within the correlation
network. For the obtained trees we measure the pairwise similarity
between trees rooted at the same gene, from normal to cancerous
tissues. This evaluates the modification of the tree topology due to
progression of the tumor. Finally, we rank the obtained similarity
values from all tissue comparisons and select the top-ranked genes.
For these genes the local neighborhood in the correlation networks
changes most between normal and cancerous tissues. As a result
we find that the top-ranked genes are candidates suspected to be
involved in tumor growth; hence, our method
captures essential information from the underlying DNA microarray
data of cervical cancer.
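The construction of the correlation network and the per-gene neighbor levels can be sketched as follows. The correlation threshold, the use of absolute correlation, and the mean-Jaccard tree similarity are all assumptions; the abstract does not fix these choices.

```python
import numpy as np

def correlation_network(data, threshold=0.7):
    """Unweighted, undirected graph: connect genes whose expression
    profiles (rows of `data`) correlate above `threshold` in absolute
    value. Self-loops are removed."""
    c = np.corrcoef(data)
    return (np.abs(c) >= threshold) & ~np.eye(len(c), dtype=bool)

def neighbor_levels(adj, root, max_level=3):
    """BFS from `root`: level n holds the n-nearest neighbors (in an
    unweighted graph the Dijkstra distance equals the hop count)."""
    levels, seen, frontier = [], {root}, {root}
    for _ in range(max_level):
        nxt = set()
        for u in frontier:
            nxt |= {v for v in np.flatnonzero(adj[u]) if v not in seen}
        levels.append(frozenset(nxt))
        seen |= nxt
        frontier = nxt
    return levels

def tree_similarity(lv_a, lv_b):
    """Mean per-level Jaccard similarity between two neighbor trees
    rooted at the same gene (one plausible similarity, assumed here)."""
    sims = []
    for a, b in zip(lv_a, lv_b):
        union = a | b
        sims.append(len(a & b) / len(union) if union else 1.0)
    return sum(sims) / len(sims)
```

Ranking then amounts to computing `tree_similarity` between the normal-tissue and tumor-tissue trees of each gene and sorting ascending: the genes whose neighborhoods changed most come out on top.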
Abstract: It is known that if the harmonic spectrum is reduced,
acoustic noise also decreases. Hence, this paper presents a new
random switching strategy, implemented on a DSP TMS320F2812, to
decrease the harmonic spectrum of a single-phase switched reluctance
motor. The proposed method combines a random turn-on/turn-off angle
technique with a random pulse width modulation technique. A
harmonic spread factor (HSF) is used to evaluate the random
modulation scheme. Experimental results confirm the effectiveness of
the new method: the harmonic intensity of the output voltage for the
proposed method is lower than that of conventional methods.
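The HSF can be computed from FFT magnitudes. The abstract does not give its formula, so the definition below, the deviation of the harmonic magnitudes from their mean, normalized by the mean, is an assumption: a flat, well-spread spectrum gives an HSF near zero, while a spectrum concentrated in a few harmonics gives a large HSF.

```python
import numpy as np

def harmonic_spread_factor(signal, n_harmonics=50):
    """One plausible harmonic spread factor: RMS deviation of the first
    N harmonic magnitudes from their mean, normalized by the mean.
    Lower values indicate a flatter (better-spread) spectrum."""
    H = np.abs(np.fft.rfft(signal))[1 : n_harmonics + 1]  # skip DC
    H0 = H.mean()
    return float(np.sqrt(np.mean((H - H0) ** 2)) / H0)
```

Under this definition, a random modulation scheme is judged better when it drives the HSF of the output voltage down, i.e., spreads the switching energy over many harmonics instead of a few tonal peaks.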
Abstract: A mobile agent is software that performs an
action autonomously and independently on behalf of a person or an
organization. Mobile agents are used for information search and
retrieval, filtering, intruder recognition in
networks, and so on. One of the important issues of mobile agents is
their security: different security issues must be considered for
effective and secure usage of mobile agents. One of those issues is
the protection of mobile agent integrity.
In this paper, after reviewing the existing methods, the advantages
and disadvantages of each are examined. Since each method has its own
advantages and disadvantages, it appears that by combining these
methods one can reach a better method for protecting the integrity of
mobile agents. Therefore, such a combined method is proposed in this
paper and evaluated against the existing methods. Finally, the method
is simulated, and the results indicate an improved likelihood of
protecting mobile agent integrity.
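One representative integrity-protection technique among those commonly reviewed in this area is hash chaining of the agent's partial results: each visited host binds its result to everything collected so far, so a later host cannot rewrite earlier results without breaking every subsequent digest. This is a generic sketch, not the specific combined method the paper proposes:

```python
import hashlib
import json

def chain_result(prev_digest: bytes, host: str, result: dict) -> bytes:
    """Extend the integrity chain with one host's partial result."""
    payload = json.dumps({"host": host, "result": result},
                         sort_keys=True).encode()
    return hashlib.sha256(prev_digest + payload).digest()

def verify_chain(records, seed=b"agent-id"):
    """At the origin, recompute the chain from the agent's seed value
    and compare against the digests each host reported."""
    d = seed
    for host, result, digest in records:
        d = chain_result(d, host, result)
        if d != digest:
            return False
    return True
```

In practice such chains are combined with signatures or trusted hardware to stop a host from truncating the chain entirely, which is exactly the kind of complementary strength a combined method exploits.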
Abstract: Quality control charts indicate out-of-control
conditions if any nonrandom pattern of points is observed or any
point is plotted beyond the control limits. Nonrandom patterns of
Shewhart control charts are tested with sensitizing rules. When
processes are defined with fuzzy set theory, traditional sensitizing
rules are insufficient for defining all out-of-control conditions,
because fuzzy numbers increase the number of out-of-control
conditions. The purpose of this study is to develop a set of
fuzzy sensitizing rules, which increase the flexibility and sensitivity
of fuzzy control charts. Fuzzy sensitizing rules simplify the
identification of out-of-control situations, resulting in a decrease in
the calculation time and the number of evaluations in the fuzzy
control chart approach.
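For reference, the classical sensitizing rules that the fuzzy rules generalize can be sketched as follows. Here fuzzy samples (triangular fuzzy numbers) are reduced by centroid defuzzification before the crisp rule check; this is only one possible treatment and not the paper's fuzzy rule set, which operates on the fuzzy numbers directly.

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (a, b, c) by its centroid."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def sensitizing_rules(samples, mu, sigma):
    """Check two classical sensitizing rules on defuzzified samples:
    R1: one point beyond 3-sigma;
    R2: two of three successive points beyond 2-sigma, same side."""
    x = [centroid(s) for s in samples]
    if any(abs(v - mu) > 3 * sigma for v in x):
        return "R1: point beyond 3-sigma"
    for i in range(len(x) - 2):
        w = x[i : i + 3]
        if sum(v > mu + 2 * sigma for v in w) >= 2 or \
           sum(v < mu - 2 * sigma for v in w) >= 2:
            return "R2: 2 of 3 beyond 2-sigma"
    return "in control"
```

The defuzzify-then-check shortcut illustrates the paper's motivation: collapsing a fuzzy number to one point discards the spread information that creates the additional out-of-control conditions, which is what dedicated fuzzy sensitizing rules are meant to capture.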
Abstract: All practical real-time scheduling algorithms for multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed correctly and on time. Finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although some optimal algorithms have been employed in uniprocessor systems, they fail when applied to multiprocessor systems. Practical scheduling algorithms in real-time systems do not have deterministic response times, yet deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload. In contrast, our approach balances the task loads of the processors successfully while providing starvation prevention and fairness, so that higher-priority tasks have a higher probability of running. A simulation is conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also takes task priorities into account, which yields higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule of uniprocessor systems.
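A fuzzy priority assignment of the kind described can be illustrated with a tiny Mamdani-style rule base. The membership functions, rules and inputs (normalized task laxity and processor load) below are invented for illustration; the abstract does not give the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(laxity, load):
    """Illustrative rule base (inputs normalized to [0, 1]):
      - low laxity (urgent)            -> high priority
      - high laxity AND high load      -> low priority
      - otherwise                      -> medium priority
    Output: crisp priority in [0, 1] via weighted-average defuzzification
    over singleton outputs {0.1, 0.5, 0.9}."""
    low_lax   = tri(laxity, -0.5, 0.0, 0.5)
    high_lax  = tri(laxity,  0.5, 1.0, 1.5)
    high_load = tri(load,    0.5, 1.0, 1.5)
    r_high = low_lax                     # rule strengths (min/max inference)
    r_low  = min(high_lax, high_load)
    r_med  = max(0.0, 1.0 - r_high - r_low)
    num = 0.9 * r_high + 0.5 * r_med + 0.1 * r_low
    den = r_high + r_med + r_low
    return num / den if den else 0.5
```

A dispatcher would evaluate this for every ready task and run the highest-priority ones first, which is how urgent tasks get a higher running probability while loaded processors shed low-urgency work.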
Abstract: Today we tend to return to our roots, to our relation
with nature. Therefore, in the search for friendly spaces, elements
of the natural environment are introduced as elements of spatial
composition. Though reinvented through the use of new substances
such as greenery and water, made possible by state-of-the-art
technologies, in principle they remain the same. As a result,
sustainable design, based upon the recognized means of composition
as well as the relation of architecture and urbanism to nature,
introduces new aesthetic values into architectural and urban
space.
Abstract: This paper presents a novel two-phase hybrid optimization algorithm with hybrid genetic operators to solve the optimal control problem of a single-stage hybrid manufacturing system. The proposed hybrid real-coded genetic algorithm (HRCGA) is developed in such a way that a simple real-coded GA acts as a base-level search, which makes a quick decision to direct the search towards the optimal region, and a local search method is then employed for fine tuning. The hybrid genetic operators involved in the proposed algorithm improve both the quality of the solution and the convergence speed. Phase 1 uses a conventional real-coded genetic algorithm (RCGA), while phase 2 employs optimization by direct search with systematic reduction of the size of the search region. A typical numerical example of an optimal control problem, with the number of jobs varying from 10 to 50, is included to illustrate the efficacy of the proposed algorithm. Several statistical analyses are performed to compare the validity of the proposed algorithm with the conventional RCGA and PSO techniques. A hypothesis t-test and an analysis of variance (ANOVA) test are also carried out to validate the effectiveness of the proposed algorithm. The results clearly demonstrate that the proposed algorithm not only improves solution quality but also converges to the optimal value faster. It outperforms the conventional real-coded GA (RCGA) and the efficient particle swarm optimization (PSO) algorithm both in the quality of the optimal solution and in convergence to the actual optimum value.
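The two phases can be sketched generically: a real-coded GA locates the promising region, then a direct search with a shrinking step refines the result. The operators, rates and the quadratic test function below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def rcga_phase1(f, bounds, pop=30, gens=40):
    """Phase 1: plain real-coded GA (truncation selection plus a
    BLX-style blend crossover) to locate the promising region."""
    lo, hi = bounds
    dim = len(lo)
    P = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        parents = P[np.argsort(fit)[: pop // 2]]
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.uniform(-0.5, 1.5, dim)     # BLX-0.5-style blend
            kids.append(np.clip(a + alpha * (b - a), lo, hi))
        P = np.vstack([parents, kids])
    fit = np.array([f(x) for x in P])
    return P[np.argmin(fit)]

def phase2_refine(f, x, step=0.5, shrink=0.5, iters=40):
    """Phase 2: coordinate direct search with systematic reduction of
    the step (i.e., of the search region) whenever no move improves."""
    x = x.copy()
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= shrink
    return x
```

The division of labor mirrors the abstract: phase 1 is cheap and global, phase 2 is local and systematically narrows the region it searches.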
Abstract: This paper proposes a stiffness analysis method for a
3-PRS mechanism for welding thick aluminum plate using FSW
technology. In the modeling process, the elastic deformation of the
lead-screws and links is taken into account. The method is based on
the virtual work principle. Following a survey of commonly used
stiffness performance indices, the minimum and maximum eigenvalues
of the stiffness matrix are used to evaluate the stiffness of the
3-PRS mechanism. Furthermore, an FEA model has been constructed to
verify the method. Finally, we redefine the workspace using the
stiffness analysis method.
Abstract: Heat-powered solid sorption is a feasible alternative to
electrical vapor compression refrigeration systems. In this paper,
adsorption cooling cycles based on activated carbon (powder-type
Maxsorb and fiber-type ACF-A10) with CO2 are studied using the
pressure-temperature-concentration (P-T-W) diagram. The specific
cooling effect (SCE) and the coefficient of performance (COP) of
these two cooling systems are simulated for driving heat source
temperatures ranging from 30 °C to 90 °C at different cooling load
temperatures, with a cooling source temperature of 25 °C. The present
analysis shows that the Maxsorb-CO2 pair gives the higher cooling
capacity and COP. The maximum COPs of the Maxsorb-CO2 and
ACF(A10)-CO2 based cooling systems are found to be 0.15 and 0.083,
respectively. The main innovative feature of this cooling cycle is
its ability to utilize low-temperature waste heat or solar energy
using CO2 as the refrigerant, which makes it one of the best
alternatives for applications where flammability and toxicity are
not permitted.
Abstract: This paper evaluates the performance of an adaptive noise
cancelling (ANC) based target detection algorithm on a set of real
test data provided by the Defence Evaluation and Research Agency
(DERA UK) for a multi-target wideband active sonar echolocation
system. The proposed hybrid algorithm combines an adaptive ANC
neuro-fuzzy scheme in the first instance, followed by an iterative
optimum target motion estimation (TME) scheme. The neuro-fuzzy
scheme is based on the adaptive noise cancelling concept, with
ANFIS (adaptive neuro-fuzzy inference system) as the core processor,
to provide an effectively fine-tuned signal. The resulting output is
then fed to the optimum TME scheme, composed of two-gauge
trimmed-mean (TM) levelization, discrete wavelet denoising
(WDeN), and an optimal continuous wavelet transform (CWT), for
further denoising and target identification. Its aim is to recover the
contact signals effectively and efficiently and then determine
the Doppler motion (radial range, velocity and acceleration) at very
low signal-to-noise ratio (SNR). Quantitative results show that
the hybrid algorithm has excellent performance in predicting targets'
Doppler motion across various target strengths, with a maximum
false detection rate of 1.5%.
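The adaptive noise cancelling concept at the core of the scheme can be illustrated with a plain LMS filter standing in for the ANFIS processor the paper uses. Filter length, step size and the synthetic noise path below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def lms_anc(primary, reference, taps=8, mu=0.01):
    """Adaptive noise cancelling with an LMS filter: the filter learns
    the path from the reference noise to the noise component of the
    primary input; the residual e approximates the clean signal."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps + 1 : n + 1][::-1]  # newest sample first
        e = primary[n] - w @ x      # subtract the estimated noise
        w += mu * e * x             # LMS weight update
        out[n] = e
    return out

# Synthetic demonstration: a sinusoidal "contact signal" buried in
# filtered noise, with the raw noise available as the reference input.
t = np.arange(4000)
signal = np.sin(2 * np.pi * t / 50)
noise = rng.standard_normal(t.size)
primary = signal + np.convolve(noise, [0.6, 0.3], mode="same")
clean = lms_anc(primary, noise)
```

Because the reference is correlated with the interference but not with the contact signal, the filter converges to the noise path and the residual retains the signal, which is the property the neuro-fuzzy version exploits at much lower SNR.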
Abstract: Control chart pattern recognition is one of the most important tools for identifying the process state in statistical process control. An abnormal process state can be classified by recognizing the unnatural patterns that arise from assignable causes. In this study, a wavelet-based neural network approach is proposed for the recognition of control chart patterns with various characteristics. The proposed control chart pattern recognizer comprises three stages. First, multi-resolution wavelet analysis is used to generate time-shape and time-frequency coefficients that carry detailed information about the patterns. Second, distance-based features are extracted by a bi-directional Kohonen network to produce reduced and robust information. Third, a back-propagation network classifier is trained on these features. The accuracy of the proposed method is shown by performance evaluation with numerical results.
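The first stage of such a recognizer can be sketched with a Haar wavelet, the simplest choice; the abstract does not commit to a specific wavelet, so Haar here is an assumption:

```python
import numpy as np

def haar_dwt(x):
    """One Haar analysis step: pairwise averages (approximation) and
    pairwise differences (detail), orthonormally scaled."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def multiresolution_features(x, levels=3):
    """Stage 1 of the recognizer: stack the detail coefficients from
    several resolutions (plus the final approximation) as the raw
    feature vector fed to the later stages."""
    feats = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(d)
    feats.append(a)
    return np.concatenate(feats)
```

Because the transform is orthonormal, the feature vector preserves the energy of the chart segment while separating slow pattern shape (approximation) from local fluctuations (details), which is what makes unnatural patterns such as trends and shifts easier to classify downstream.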
Abstract: This paper studies questions of continuous data dependence and uniqueness for solutions of initial boundary value problems in linear micropolar thermoelastic mixtures. Logarithmic convexity arguments are used to establish results with no definiteness assumptions upon the internal energy.
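A logarithmic convexity argument typically has the following schematic shape; this is a generic sketch, not the specific functional used in the paper for micropolar thermoelastic mixtures:

```latex
% Let u be the difference of two solutions and choose a nonnegative
% functional built from it, e.g.
F(t) = \int_0^t \| u(s) \|^2 \, ds + \text{(data terms)}, \qquad F(t) \ge 0 .
% Using the evolution equations -- but no definiteness assumption on
% the internal energy -- one establishes the convexity inequality
F(t)\,F''(t) - \bigl(F'(t)\bigr)^2 \ge 0 ,
% so that \log F is convex and, for 0 \le t \le T,
F(t) \le F(0)^{\,1 - t/T}\, F(T)^{\,t/T} .
% Continuous dependence on the data follows from this bound, and
% F(0) = 0 forces F \equiv 0 on [0, T), i.e. uniqueness.
```

The point of the technique, reflected in the abstract, is that the convexity inequality can often be derived without assuming the internal energy is sign-definite.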
Abstract: The wind resource at the Italian site of Lendinara
(RO) is analyzed through a systematic anemometric campaign
performed at the top of the bell tower, at an altitude of over 100 m
above the ground. Both the average wind speed and the Weibull
distribution are computed. The resulting average wind speed is in
accordance with the numerical predictions of the Italian Wind Atlas,
confirming the accuracy of the extrapolation of wind data adopted for
the evaluation of wind potential at altitudes higher than the
commonly placed measurement stations.
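A Weibull fit of anemometric data can be sketched with the method of moments, one standard estimation route; the campaign's actual fitting procedure is not specified in the abstract:

```python
import math

def weibull_fit_moments(speeds):
    """Method-of-moments Weibull fit: solve the shape k from the
    squared coefficient of variation, then the scale c from the mean,
    using  mean = c * Gamma(1 + 1/k)  and
           CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1 ."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((v - mean) ** 2 for v in speeds) / n
    cv2 = var / mean ** 2

    def cv2_of(k):
        g1 = math.gamma(1 + 1 / k)
        return math.gamma(1 + 2 / k) / g1 ** 2 - 1

    lo, hi = 0.2, 20.0              # bracket for the shape parameter
    for _ in range(80):             # bisection: cv2_of decreases in k
        mid = 0.5 * (lo + hi)
        if cv2_of(mid) > cv2:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    c = mean / math.gamma(1 + 1 / k)
    return k, c
```

For wind assessment, k near 2 (the Rayleigh case) is typical, and the scale c tracks the site's mean wind speed.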
Abstract: The purpose of this study was to evaluate and
compare new indices based on the discrete wavelet transform
with other spectral parameters proposed in the literature, such
as mean average voltage, median frequency and ratios between
spectral moments, applied to estimate acute exercise-induced
changes in power output, i.e., to assess peripheral muscle
fatigue during a dynamic fatiguing protocol. Fifteen trained
subjects performed 5 sets of 10 leg presses, with 2 minutes of
rest between sets. Surface electromyography was recorded from
the vastus medialis (VM) muscle. Several surface
electromyographic parameters were compared for detecting
peripheral muscle fatigue: mean average voltage (MAV),
median spectral frequency (Fmed), the Dimitrov spectral
index of muscle fatigue (FInsm5), and five further
parameters obtained from the discrete wavelet transform
(DWT) as ratios between different scales. The new wavelet
indices achieved the best Pearson correlation coefficients
with power output changes during acute dynamic contractions,
and their regressions were significantly different from those
of MAV and Fmed. They also showed the highest robustness
in the presence of additive white Gaussian noise at different
signal-to-noise ratios (SNRs). Therefore, peripheral impairments
assessed by sEMG wavelet indices may be a relevant factor
involved in the loss of power output after a dynamic
high-load fatiguing task.
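Two of the reference parameters, MAV and the median spectral frequency, have standard definitions that can be sketched directly; the epoch length and sampling rate below are arbitrary, and the paper's wavelet indices are not reproduced here:

```python
import numpy as np

def mav(x):
    """Mean average voltage: mean absolute value of the sEMG epoch."""
    return float(np.mean(np.abs(x)))

def median_frequency(x, fs):
    """Median spectral frequency (Fmed): the frequency that splits the
    power spectrum of the epoch into two halves of equal power."""
    P = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(P)
    idx = int(np.searchsorted(cum, cum[-1] / 2.0))
    return float(freqs[idx])
```

During fatigue, MAV typically rises while Fmed shifts downward, and the study's wavelet indices are compared against exactly these reference behaviors.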
Abstract: The coalescer process is one of the methods for oily water treatment: it increases the oil droplet size in order to enhance the separation velocity and thus achieve effective separation. However, the presence of surfactants in an oily emulsion can limit these mechanisms, because surfactant-stabilized emulsions have small oil droplet sizes. The purpose of this research is therefore to improve the efficiency of the coalescer process for treating stabilized emulsions. The effects of bed type, bed height, liquid flow rate and staged coalescers (step-bed) on the treatment efficiencies, in terms of COD values, were studied. The treatment efficiency obtained experimentally was estimated using COD values and the oil droplet size distribution. The study has shown that the plastic media is more effective at attaching oil particles than the stainless one, due to its hydrophobic properties. Furthermore, a suitable bed height (3.5 cm) and step bed (3.5 cm with 2 steps) were necessary to obtain good coalescer performance. The application of the step-bed coalescer process in the reactor provided higher treatment efficiencies, in terms of COD removal, than the classical process. The proposed model for predicting the area under the curve, and thus the treatment efficiency, based on the single collector efficiency (ηT) and the attachment efficiency (α), shows relatively good agreement between the experimental and predicted values of treatment efficiency in this study.
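The paper's model is not given in the abstract; as a plausible stand-in built from the same two quantities, the classical deep-bed filtration relation expresses capture over a packed bed from the single collector efficiency and the attachment efficiency. Treat the formula and all values below as assumptions for illustration only:

```python
import math

def removal_efficiency(alpha, eta_t, bed_height, d_collector, porosity=0.4):
    """Classical deep-bed filtration relation (Yao-type), used here as a
    stand-in: fraction of droplets captured over a bed of height L packed
    with collectors of diameter d_c and porosity eps,
        1 - C/C0 = 1 - exp(-1.5 * alpha * eta_T * (1 - eps) * L / d_c)."""
    exponent = -1.5 * alpha * eta_t * (1 - porosity) * bed_height / d_collector
    return 1.0 - math.exp(exponent)
```

A relation of this form reproduces the qualitative trends the abstract reports: efficiency grows with bed height (and with staged beds, which add effective height) and with the attachment efficiency, which is higher for the hydrophobic plastic media.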