Abstract: Electrochemical oxidation of Reactive Black 5 (RB-5) was conducted for degradation using a DSA-type Ti/RuO2-SnO2-Sb2O5 electrode. For the electro-oxidation study, the electrode was fabricated in-house using titanium as the substrate, which was coated with the metal oxides RuO2, Sb2O5 and SnO2 by the thermal decomposition method. A laboratory-scale batch reactor was used for the degradation and decolorization studies at pH 2, 7 and 11. The current density (50 mA/cm2) and the distance between electrodes (8 mm) were kept constant for all experiments. Under identical conditions, the removal of color, COD and TOC at initial pH 2 was 99.40%, 55% and 37%, respectively, for an initial RB-5 concentration of 100 mg/L. The surface morphology and composition of the fabricated electrode coatings were characterized using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), respectively, and the coating microstructure was analyzed by X-ray diffraction (XRD). The results further revealed that almost 90% of the oxidation occurred within 5-10 minutes.
Abstract: The Support Vector Machine (SVM) is a statistical
learning tool initially developed by Vapnik in 1979 and later
extended into the broader framework of structural risk minimization
(SRM). SVMs play an increasing role in detection problems across
engineering fields, notably in statistical signal processing, pattern
recognition, image analysis, and communication systems. In this
paper, an SVM was applied to the detection of synthetic aperture
radar (SAR) images in the presence of partially developed speckle
noise. Simulations were run for both single-look and multi-look
speckle models to give a complete overview of, and insight into, the
proposed SVM-based detector. The
structure of the SVM was derived and applied to real SAR images
and its performance in terms of the mean square error (MSE) metric
was calculated. We showed that the SVM-detected SAR images have
a very low MSE and are of good quality. The quality of the
processed speckled images improved for the multi-look model.
Furthermore, the contrast of the SVM-detected images was higher
than that of the original non-noisy images, indicating that the SVM
approach increased the distance between the pixel reflectivity levels
(the detection hypotheses) in the original images.
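The abstract does not give the detector's internals; as a rough illustration of the idea (an SVM separating two pixel reflectivity levels, the detection hypotheses, under multiplicative speckle), the sketch below trains a linear soft-margin SVM with the Pegasos sub-gradient solver on simulated multi-look intensities. The reflectivity levels, the number of looks, and the choice of solver are all our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two reflectivity levels (the detection hypotheses); illustrative values.
levels = np.array([1.0, 4.0])
y = rng.integers(0, 2, 2000)                   # true class per pixel
L = 4                                          # number of looks
# Multi-look speckle: average of L single-look (exponential) intensities
speckle = rng.exponential(1.0, (2000, L)).mean(axis=1)
x = (levels[y] * speckle).reshape(-1, 1)       # observed intensity

def pegasos(X, t, lam=1e-3, epochs=20):
    """Pegasos: stochastic sub-gradient solver for the linear soft-margin SVM."""
    n, d = X.shape
    w = np.zeros(d + 1)                        # last entry acts as the bias
    Xb = np.hstack([X, np.ones((n, 1))])
    step = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            step += 1
            eta = 1.0 / (lam * step)
            margin = t[i] * (Xb[i] @ w)
            w *= (1 - eta * lam)               # regularization shrinkage
            if margin < 1:                     # hinge-loss sub-gradient step
                w += eta * t[i] * Xb[i]
    return w

t = 2.0 * y - 1.0                              # labels in {-1, +1}
w = pegasos(x, t)
pred = ((np.hstack([x, np.ones((len(x), 1))]) @ w) > 0).astype(int)
accuracy = (pred == y).mean()
```

With a single intensity feature the learned hyperplane reduces to an intensity threshold between the two hypotheses; the paper's detector additionally operates on real SAR images and reports an MSE metric.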
Abstract: In this paper, we consider a risk model involving two independent classes of insurance risks and random premium income. We assume that the premium income process is a Poisson process and that the claim number processes are independent Poisson and generalized Erlang(n) processes, respectively. Both the Gerber-Shiu functions with zero initial surplus and the probability generating functions (p.g.f.) of the Gerber-Shiu functions are obtained.
Abstract: This research presents a system for the post-processing of
mined data that takes flat rules as input and discovers crisp as well
as fuzzy hierarchical structures using a Learning Classifier System
approach. A Learning Classifier System (LCS) is a machine
learning technique that combines evolutionary computing,
reinforcement learning, supervised or unsupervised learning, and
heuristics to produce adaptive systems. An LCS learns by
interacting with an environment from which it receives feedback in
the form of a numerical reward; learning is achieved by trying to
maximize the amount of reward received. A crisp description of a
concept usually cannot represent human knowledge completely or
practically. In the proposed Learning Classifier System, the initial
population is constructed as a random collection of HPR-trees
(related production rules), and crisp/fuzzy hierarchies are evolved.
A fuzzy subsumption relation is suggested for the proposed system
and, based on a Subsumption Matrix (SM), a suitable fitness
function is proposed. Suitable genetic operators are proposed for the
chosen chromosome representation, and to implement
reinforcement a suitable reward and punishment scheme is also
proposed. Experimental results are presented to demonstrate the
performance of the proposed system.
Abstract: The tasks of this work were to study possible E. coli
contamination in red deer meat, identify pathogenic strains among
the isolated E. coli, determine their incidence in red deer meat, and
determine the presence of the VT1, VT2 and eaeA genes in the
pathogenic E. coli. Eight (10%) of 80 analysed E. coli isolates were
randomly selected and a PCR reaction was performed on them.
PCR was done both on the initial materials (samples of red deer
meat) and on the already isolated cultures. Two of the analysed
venison samples contained verotoxin-producing strains of E. coli,
which means that this meat is not safe for consumers. Sequencing of
the E. coli isolates and comparison of the obtained results with the
microorganism genome databases available on the internet proved
that the isolated culture corresponds to the 16S rDNA region of
E. coli, thus confirming the correctness of the microbiological
methods.
Abstract: The performance of modified Fenton (MF) treatment
in promoting PAH oxidation in artificially contaminated soil was
investigated in a packed soil column with a hydrogen peroxide
(H2O2) delivery system simulating in situ injection. Soil samples
were spiked with phenanthrene (a low-molecular-weight PAH) and
fluoranthene (a high-molecular-weight PAH) to an initial
concentration of 500 mg/kg dried soil each. The effects of the
process parameters (the H2O2/soil, iron/soil and chelating agent/soil
weight ratios and the reaction time) were studied using a
four-factor, three-level factorial design of experiments. Statistically
significant quadratic models for degrading PAHs from the soil
samples were developed using Response Surface Methodology
(RSM). The optimum operating conditions were achieved at mild
ranges of the H2O2/soil, iron/soil and chelating agent/soil weight
ratios, indicating a cost-efficient method for treating highly
contaminated land.
Abstract: Because customers in the new century express globally
increasing demands, networks of interconnected businesses have
been established, and the management of such networks appears to
be a major key to gaining competitive advantage. Supply chain
management encompasses such managerial activities. Within a
supply chain, quality plays a critical role. QFD is a widely utilized
tool that serves not only to bring quality to the ultimate provision of
the products or service packages required by the end customer or
the retailer, but also to initiate a satisfactory relationship with the
initial customer, that is, the wholesaler. However, the wholesalers'
cooperation is largely based on capabilities that are heavily
dependent on their locations and existing circumstances. Therefore,
it is undeniable that for every company each wholesaler possesses a
specific importance ratio which can heavily influence the figures
calculated in the House of Quality (HOQ) in QFD. Moreover, owing
to the competitiveness of today's marketplace, it has been widely
recognized that consumers' expressed demands are highly volatile
over production periods. Such instability and proneness to change
are tangible, and taking them into account during the analysis of the
HOQ is widely influential and doubtless required. For a more
reliable outcome, this article demonstrates the viability of applying
the Analytic Network Process to account for the wholesalers'
reputation and simultaneously introduces a mortality coefficient for
the reliability and stability of the consumers' expressed demands
over time. Following this, the paper elaborates on the relevant
contributory factors and approaches through the calculation of such
coefficients. In the end, the article concludes that an empirical
application is needed to achieve broader validity.
Abstract: The focal spot of a high-intensity focused ultrasound
transducer is small, so heating a large target volume requires
multiple treatment spots. If the power of each treatment spot is
fixed, thermal diffusion can cause insufficient heating of the initial
spots and over-heating of the later ones. Hence, to produce a
uniformly heated volume, the delivered energy of each treatment
spot should be properly adjusted. In this study, we propose an
iterative extrapolation technique to adjust the required ultrasound
energy of each treatment spot. Three different scanning pathways
were used to evaluate the performance of this technique. The results
indicate that the proposed technique yields a uniformly heated
volume.
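As a toy illustration of the iterative adjustment idea (not the paper's acoustic-thermal simulator), the sketch below assumes each spot's peak temperature rise is proportional to its own energy plus a residual diffusion term from the previously treated spot, and iteratively rescales each spot's energy toward a uniform target rise. The model coefficients are our assumptions.

```python
import numpy as np

n_spots = 8
T_target = 10.0                     # desired peak temperature rise (deg C)

def peak_rise(E):
    """Toy thermal model: each spot's peak rise is its own energy plus
    residual heat diffused from the previously treated spot."""
    T = np.zeros(n_spots)
    for i in range(n_spots):
        T[i] = 1.0 * E[i]
        if i > 0:
            T[i] += 0.3 * E[i - 1]  # assumed diffusion carry-over
    return T

E = np.full(n_spots, T_target)      # fixed-power starting point
for _ in range(50):                 # iterative correction of each spot's energy
    T = peak_rise(E)
    E *= T_target / T               # raise under-heated, lower over-heated spots
    if np.max(np.abs(T - T_target)) < 1e-6:
        break
```

In this model later spots end up needing less energy than the first one, mirroring the over-heating of later spots that the abstract describes for fixed power.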
Abstract: Traditionally, wind tunnel models are made of metal
and are very expensive. In recent years, everyone has been looking
for ways to do more with less. Under the right test conditions, a
rapid prototype part can be tested in a wind tunnel. Using rapid
prototype manufacturing techniques and materials in this way
significantly reduces the time and cost of producing wind tunnel
models. This study examined fused deposition modeling (FDM)
and its ability to make components for wind tunnel models in a
timely and cost-effective manner. This paper discusses the
application of a wind tunnel model configuration constructed using
FDM for transonic wind tunnel testing. A study was undertaken
comparing a rapid prototyping model constructed with FDM
technology using polycarbonate to a standard machined steel
model. Testing covered the range of Mach 0.3 to Mach 0.75 at an
angle-of-attack range of -2° to +12°. The results show relatively
good agreement between the two models, and the rapid prototyping
method reduces the time and cost of producing wind tunnel models.
It can be concluded from this study that wind tunnel models
constructed using rapid prototyping methods and materials can be
used in wind tunnel testing for initial baseline aerodynamic
database development.
Abstract: This paper describes an Independent Component Analysis (ICA) based fixed-point algorithm for the blind separation of convolutive mixtures of speech picked up by a linear microphone array. The proposed algorithm extracts independent sources by non-Gaussianizing the Time-Frequency Series of Speech (TFSS) in a deflationary way, with the degree of non-Gaussianization measured by negentropy. The relative performances of the algorithm under random initialization and Null-beamformer (NBF) based initialization are studied. It has been found that an NBF-based initial value gives speedy convergence as well as better separation performance.
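The deflationary, negentropy-driven fixed point described here is essentially the FastICA one-unit iteration. As a minimal sketch, the code below applies it to an instantaneous (not convolutive, and not time-frequency) mixture of two synthetic non-Gaussian signals, using tanh as the negentropy-related contrast; the instantaneous mixing, the toy sources and the contrast choice are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two non-Gaussian sources, instantaneously mixed (a simplification of
# the paper's time-frequency convolutive setting).
n = 5000
S = np.vstack([np.sign(rng.standard_normal(n)) * rng.exponential(1.0, n),
               rng.uniform(-1, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whitening: zero mean, identity covariance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

def fastica_deflation(Z, n_comp=2, iters=200):
    """One-unit fixed-point iteration with g(u) = tanh(u) (a negentropy
    proxy), extracting components one by one (deflation)."""
    dim, n = Z.shape
    W = np.zeros((n_comp, dim))
    for k in range(n_comp):
        w = rng.standard_normal(dim)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            u = w @ Z
            w_new = (Z * np.tanh(u)).mean(axis=1) - (1 - np.tanh(u) ** 2).mean() * w
            w_new -= W[:k].T @ (W[:k] @ w_new)   # deflate against found rows
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-9
            w = w_new
            if converged:
                break
        W[k] = w
    return W

W = fastica_deflation(Z)
Y = W @ Z                        # estimated independent sources
```

Each row of Y should match one true source up to sign and permutation, which is the usual ICA indeterminacy.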
Abstract: In this paper, gradient-based iterative algorithms are presented to solve the following four types of linear matrix equations: (a) AXB = F; (b) AXB = F, CXD = G; (c) AXB = F s.t. X = X^T; (d) AXB + CYD = F, where X and Y are unknown matrices and A, B, C, D, F, G are given constant matrices. It is proved that if the equation considered has a solution, then the unique minimum-norm solution can be obtained by choosing a special kind of initial matrix. The numerical results show that the proposed method is reliable and attractive.
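For case (a), the gradient-based iteration takes the well-known form X(k+1) = X(k) + mu * A^T (F - A X(k) B) B^T, and starting from the zero matrix (one choice of "special" initial matrix) yields the minimum-norm solution. A minimal numerical sketch with illustrative matrices:

```python
import numpy as np

# Case (a): solve AXB = F by the gradient iteration
# X <- X + mu * A^T (F - A X B) B^T, starting from X = 0.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
X_true = np.array([[1.0, 0.0], [2.0, 1.0]])
F = A @ X_true @ B                      # a consistent right-hand side

# Convergence requires mu < 2 / (lmax(A^T A) * lmax(B B^T));
# this choice satisfies the bound with room to spare.
mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)

X = np.zeros_like(X_true)               # zero initial matrix
for _ in range(50000):
    X = X + mu * A.T @ (F - A @ X @ B) @ B.T
```

Since A and B here are nonsingular, the solution is unique and the iterates converge to it linearly; for rank-deficient cases the same iteration from X = 0 converges to the minimum-norm solution.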
Abstract: Web services provide significant new benefits for
SOA-based applications, but they also expose significant new
security risks. There is a huge number of WS security standards and
processes, yet there is still a lack of a comprehensive approach that
offers methodical development in the construction of secure
WS-based SOA. Thus, the main objective of this paper is to address
this need by presenting a comprehensive method for guaranteeing
Web Services Security in SOA. The proposed method defines
three stages: Initial Security Analysis, Architectural Security
Guaranty, and WS Security Standards Identification. These facilitate,
respectively, the definition and analysis of WS-specific security
requirements, the development of a WS-based security architecture
and the identification of the related WS security standards that the
security architecture must articulate in order to implement the
security services.
Abstract: In this paper, we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation-noise conditions. In the first case, we assume that the network contains some sensors with high observation-noise variance (noisy sensors). In the second case, the observation-noise variance is assumed to differ among sensors, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present, the performance of conventional distributed adaptive estimation algorithms, such as the incremental distributed least mean square (IDLMS) algorithm, decreases drastically, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation-noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
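A minimal sketch of the second idea follows: incremental LMS over a ring of sensors, with each sensor's step size shrunk according to its observation-noise variance. The network size, signal model, and the particular step-size rule are illustrative assumptions, and the true variances are used directly here rather than estimated from the data as in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, T = 10, 2, 3000               # sensors, filter length, iterations
w_true = np.array([0.7, -0.4])      # unknown parameter to estimate
sigma2 = np.full(N, 0.01)
sigma2[[2, 7]] = 1.0                # two sensors with high noise variance

# Step size reduced for noisy sensors (the exact rule is our assumption;
# the paper adjusts it from an estimated noise variance).
mu = 0.05 / (1.0 + 50.0 * sigma2)

w = np.zeros(M)
for _ in range(T):
    for k in range(N):              # incremental cycle around the ring
        u = rng.standard_normal(M)                            # regressor
        d = u @ w_true + np.sqrt(sigma2[k]) * rng.standard_normal()
        w = w + mu[k] * u * (d - u @ w)                       # LMS update
```

Because the noisy sensors receive a much smaller step size, their observations barely perturb the shared estimate, which is the effect the modified algorithm exploits.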
Abstract: This paper focuses on a critical component of
situational awareness (SA): the control of autonomous vertical
flight for a tactical unmanned aerial vehicle (TUAV). Within the SA
strategy, we propose a two-stage flight control procedure using two
autonomous control subsystems to address the variation in
dynamics and the difference in performance requirements between
the initial and final stages of the flight trajectory for a nontrivial
nonlinear eight-rotor helicopter model. This control strategy for the
chosen mini-TUAV model has been verified by simulation of
hovering maneuvers using the software package Simulink and
demonstrated good performance for fast stabilization of the engines
in hovering; consequently, fast SA with economical use of battery
energy can be achieved during search-and-rescue operations.
Abstract: The characteristics of ad hoc networks, and even their existence, depend on the nodes forming them. Thus, services and applications designed for ad hoc networks should adapt to this dynamic and distributed environment. In particular, multicast algorithms with reliability and scalability requirements should abstain from centralized approaches. We aspire to define a reliable and scalable multicast protocol for ad hoc networks, and our target is to utilize epidemic techniques for this purpose. In this paper, we present a brief survey of epidemic algorithms for reliable multicasting in ad hoc networks and describe formulations and analytical results for simple epidemics. Then, the P2P anti-entropy algorithm for content distribution and our prototype simulation model are described, together with our initial results demonstrating the behavior of the algorithm.
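The growth of a simple push epidemic, which underlies anti-entropy dissemination, can be simulated in a few lines; the round count grows roughly logarithmically in the group size. The uniform peer choice and synchronous rounds below are simplifying assumptions, not the paper's protocol.

```python
import random

random.seed(4)
n = 200

def anti_entropy_rounds(n):
    """Push-style gossip: each round, every informed node contacts one
    uniformly random peer, which then learns the message. Returns the
    number of rounds until full dissemination."""
    informed = {0}                   # the multicast source
    rounds = 0
    while len(informed) < n:
        new = set(informed)
        for _ in informed:
            new.add(random.randrange(n))   # the contacted peer is informed
        informed = new
        rounds += 1
    return rounds

r = anti_entropy_rounds(n)
```

The informed set can at most double per round, so at least ceil(log2(n)) rounds are needed; the last few uninformed nodes add a coupon-collector-like tail, giving the well-known O(log n) dissemination time of simple epidemics.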
Abstract: Maize and Indian mustard are significant crops in
semi-arid climate zones of India. Improved water management
requires precise scheduling of irrigation, which in turn requires an
accurate computation of daily crop evapotranspiration (ETc). Daily
crop evapotranspiration is the product of the reference
evapotranspiration (ET0) and growth-stage-specific crop
coefficients modified for daily variation. The first objective of the
present study is to develop crop coefficients (Kc) for maize and
Indian mustard. The estimated values of Kc for maize at the four crop
growth stages (initial, development, mid-season, and late season) are
0.55, 1.08, 1.25, and 0.75, respectively, and for Indian mustard the Kc
values at the four growth stages are 0.3, 0.6, 1.12, and 0.35,
respectively. The second objective of the study is to compute daily
crop evapotranspiration from ET0 and the crop coefficients. The
average daily ETc of maize varied from about 2.5 mm/d in the early
growing period to more than 6.5 mm/d at mid-season. The peak ETc
of maize is 8.3 mm/d, occurring 64 days after sowing at the
reproductive growth stage, when the leaf area index was 4.54. In the
case of Indian mustard, the average ETc is 1 mm/d at the initial
stage and more than 1.8 mm/d at mid-season, and it reaches a peak
value of 2.12 mm/d 56 days after sowing. Improved irrigation
schedules have been simulated based on daily crop
evapotranspiration and field-measured data. The simulation shows a
close match between the modeled and field-measured soil moisture
status during the crop season.
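Using the study's maize Kc values, daily ETc follows from ETc = Kc x ET0 with a stage-wise Kc curve. In the sketch below, the FAO-56-style linear ramps, the stage lengths, and the ET0 value are our illustrative assumptions (the study's own development-stage value of 1.08 lies on the assumed ramp between 0.55 and 1.25, but the ramp itself is not taken from the study).

```python
# Maize stage lengths in days (assumed for illustration) and the study's
# stage Kc values for initial, development, mid-season and late season.
stage_days = [20, 35, 40, 30]
kc_stage = [0.55, 1.08, 1.25, 0.75]

def daily_kc(day):
    """FAO-56-style piecewise Kc curve: constant in the initial and
    mid-season stages, linear ramps during development and late season."""
    b0 = stage_days[0]
    b1 = b0 + stage_days[1]
    b2 = b1 + stage_days[2]
    if day <= b0:
        return kc_stage[0]
    if day <= b1:                    # ramp from initial Kc up to mid-season Kc
        return kc_stage[0] + (kc_stage[2] - kc_stage[0]) * (day - b0) / stage_days[1]
    if day <= b2:
        return kc_stage[2]
    # late-season ramp down, capped at the end-of-season value
    return kc_stage[2] + (kc_stage[3] - kc_stage[2]) * min(day - b2, stage_days[3]) / stage_days[3]

et0 = 5.5                            # reference ET in mm/d (assumed value)
etc_mid = daily_kc(70) * et0         # daily crop ET at mid-season
```

With these assumptions the mid-season daily ETc is 1.25 x 5.5 = 6.875 mm/d, consistent in magnitude with the "more than 6.5 mm/d at mid-season" reported above.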
Abstract: A novel method using bearing-only SLAM to estimate the node positions of a localization network is proposed. A group of simple robots is used to estimate the position of each node. Each node has a unique ID, which it can communicate to a robot close by. Initially, the node IDs and positions are unknown. A case example using RFID technology in the localization network is introduced.
Abstract: This paper applies a fuzzy clustering algorithm to classify real estate companies in China according to general financial indexes such as income per share, share accumulation fund, net profit margin, weighted net assets yield and shareholders' equity. By constructing and normalizing the initial partition matrix, obtaining the fuzzy similar matrix with the Minkowski metric and computing its transitive closure, the dynamic fuzzy clustering analysis for real estate companies shows clearly that the clustering results change gradually as the threshold decreases; it is then shown that there is a similar relationship with the prices of those companies in the stock market. In this way, the method is of great value in comparing the real estate companies' financial condition in order to grasp good investment opportunities.
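The pipeline described (normalize the indicators, build a fuzzy similar matrix from a Minkowski distance, square it under max-min composition until it is transitive, then cut at a threshold) can be sketched as follows. The toy data, the p = 2 Minkowski metric, and the similarity transform 1/(1 + d) are our assumptions.

```python
import numpy as np

# Toy financial indicators for 6 companies (rows), two obvious groups
X = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
              [5.0, 5.1], [4.9, 5.0], [5.1, 4.9]])
Xn = (X - X.mean(axis=0)) / X.std(axis=0)        # normalize the indicators

# Fuzzy similar matrix from the Minkowski (p = 2) distance
D = np.linalg.norm(Xn[:, None, :] - Xn[None, :, :], axis=2)
R = 1.0 / (1.0 + D)

def transitive_closure(R):
    """Square R under max-min composition until R o R = R."""
    while True:
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(R2, R):
            return R2
        R = R2

def lambda_cut(Rt, lam):
    """Clusters = equivalence classes of the lambda-cut of the closure."""
    labels = [-1] * len(Rt)
    c = 0
    for i in range(len(Rt)):
        if labels[i] == -1:
            for j in range(len(Rt)):
                if Rt[i, j] >= lam:
                    labels[j] = c
            c += 1
    return labels

Rt = transitive_closure(R)
labels = lambda_cut(Rt, 0.5)       # a mid-range threshold for the toy data
```

Raising the threshold splits clusters and lowering it merges them, which is the gradual change of the clustering result with the threshold that the abstract describes.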
Abstract: The accelerated sonophotocatalytic degradation of
Reactive Red (RR) 120 dye under visible light using dye-sensitized
TiO2 activated by ultrasound has been carried out. Sonolysis,
photocatalysis and sonophotocatalysis under visible light were
examined, varying the initial substrate concentration, pH and
catalyst loading, to study their influence on the degradation rates
and to ascertain the synergistic effect among the techniques.
Ultrasonic activation contributes to degradation through cavitation,
leading to the splitting of the H2O2 produced by both photocatalysis
and sonolysis. This results in the formation of oxidative species
such as singlet oxygen (1O2) and superoxide (O2•-) radicals in the
presence of oxygen. The increase in the amount of reactive radical
species, which induces faster oxidation of the substrate and
degradation of intermediates, together with the deaggregation of the
photocatalyst, is responsible for the synergy observed under
sonication. A
comparative study of photocatalysis and sonophotocatalysis using
TiO2, Hombikat UV 100 and ZnO was also carried out.
Abstract: In this paper, we present an autoregressive model with
neural network modeling, trained with the standard error
backpropagation algorithm, in order to predict the gross domestic
product (GDP) growth rate of four countries. Specifically, we
propose a kind of weighted regression, usable for econometric
purposes, in which the initial inputs are multiplied by the neural
network's final optimum input-hidden-layer weights obtained after
training. The forecasts are compared with those of the ordinary
autoregressive model, and we conclude that the proposed
regression's forecasts significantly outperform those of the
autoregressive model in the out-of-sample period. The idea behind
this approach is to propose a parametric regression with weighted
variables in order to test the statistical significance and magnitude
of the estimated autoregressive coefficients and simultaneously to
estimate the forecasts.