Abstract: Ever since the industrial revolution began, our ecosystem has changed, and the negatives arguably outweigh the positives. Industrial waste is usually released into bodies of water such as rivers or the sea. Tempeh waste is one example of waste that carries hazardous and unwanted substances affecting the surrounding environment. Tempeh is a popular fermented food in Asia, rich in nutrients and active substances. Tempeh liquid waste in particular can cause air pollution and, if it penetrates the soil, will contaminate the groundwater, rendering that water unfit for consumption. Moreover, bacteria thrive in polluted water and are often responsible for many kinds of diseases. The treatments used for this waste are biological, such as constructed wetlands and activated sludge. These treatments are able to reduce both physical and chemical parameters, such as temperature, TSS, pH, BOD, COD, NH3-N, NO3-N, and PO4-P, and are applied before the waste is released into the water. The result is a comparison between constructed wetlands and activated sludge, determining which method is better suited to reducing the physical and chemical substances in the waste.
Abstract: A homologous series of aromatic esters, 4-n-alkanoyloxybenzylidene-4′-bromoanilines (nABBA), consisting of two 1,4-disubstituted phenyl cores and a Schiff base central linkage, was synthesized. The members differ in the number of carbon atoms in the terminal alkanoyloxy chain (CnH2n-1COO-, n = 2, 6, 18). The molecular structure of nABBA was confirmed by infrared spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and electron ionization mass spectrometry (EI-MS). Mesomorphic properties were studied using differential scanning calorimetry and polarizing optical microscopy.
Abstract: The ability of agricultural and decorative plants to absorb and detoxify TNT and RDX has been studied. All eight tested plants, grown hydroponically, were able to absorb these explosives from water solutions: alfalfa > soybean > chickpea > chickling vetch > ryegrass > mung bean > china bean > maize. Unlike TNT, RDX did not exhibit a negative influence on seed germination and plant growth; moreover, some plants exposed to RDX-containing solution increased their biomass by 20%. A study of the fate of absorbed [1-14C]-TNT revealed the label distributed in low- and high-molecular-mass compounds, both in the roots and in the above-ground parts of plants, prevailing in the latter. The content of 14C in low-molecular compounds is much higher in plant roots than in above-ground parts; in contrast, high-molecular compounds are more intensively labeled in the above-ground parts of soybean. Most (up to 70%) of the TNT metabolites, formed either by enzymatic reduction or by oxidation, are found in high-molecular insoluble conjugates. Activation of the enzymes responsible for the reduction, oxidation, and conjugation of TNT, such as nitroreductase, peroxidase, phenoloxidase, and glutathione S-transferase, has been demonstrated. Among these enzymes, only nitroreductase was shown to be induced in alfalfa exposed to RDX. The increase in malate dehydrogenase activity in plants exposed to both explosives indicates intensification of the tricarboxylic acid cycle, which generates the reduced equivalents of NAD(P)H necessary for the functioning of nitroreductase. A hypothetical scheme of TNT metabolism in plants is proposed.
Abstract: In any trust model, the two information sources that a peer relies on to predict the trustworthiness of another peer are direct experience and reputation. These two vital components evolve over time. Trust evolution is an important issue, where the objective is to observe a sequence of past values of a trust parameter and determine future estimates. Unfortunately, trust evolution algorithms have received little attention, and the algorithms proposed in the literature do not comply with the conditions and the nature of trust. This paper contributes to this important problem in the following ways: (a) it presents an algorithm that manages and models trust evolution in a P2P environment, (b) it devises new mechanisms for effectively maintaining trust values based on the conditions that influence trust evolution, and (c) it introduces a new methodology for incorporating trust-nurture incentives into the trust evolution algorithm. Simulation experiments are carried out to evaluate our trust evolution algorithm.
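The paper's own update rules are not reproduced in the abstract; as a rough illustration of the kind of recency-weighted maintenance a trust-evolution algorithm performs, here is a minimal sketch (the smoothing factor `alpha` and the neutral prior of 0.5 are assumptions for illustration, not the authors' parameters):

```python
def evolve_trust(outcomes, alpha=0.3, prior=0.5):
    """Recency-weighted trust estimate from a sequence of interaction
    outcomes in [0, 1], oldest first. Recent experience dominates older
    experience, mirroring the idea that trust evolves over time."""
    trust = prior
    for outcome in outcomes:
        trust = (1 - alpha) * trust + alpha * outcome
    return trust
```

With no history the estimate stays at the neutral prior; a long run of good interactions drives it toward 1, while a recent bad interaction pulls it down more than an old one.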
Abstract: This paper presents a new Hybrid Fuzzy (HF) PID-type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. For a fuzzy rule-based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. GAs, on the other hand, are a technique that emulates biological evolutionary theory to solve complex optimization problems, using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill-climbing method. The motivation for using the modified GA is to reduce the fuzzy system design effort and to take large parametric uncertainties into account. The proposed method also guarantees the global optimum value and greatly improves the speed of the algorithm's convergence. This newly developed control strategy combines the advantages of GAs and fuzzy control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi-Stage Fuzzy (MSF) controller, a robust mixed H2/H∞ controller, and classical PID controllers through several performance indices, to illustrate its robust performance over a wide range of system parameters and load changes.
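The modified, hill-climbing-based GA is not detailed in the abstract; the general pattern of tuning membership-function parameters with a plain real-coded GA against a fitness function can be sketched as follows (population size, operators, and the toy quadratic fitness in the usage example are illustrative assumptions, not the authors' algorithm):

```python
import random

def ga_minimize(fitness, n_params, lo, hi, pop_size=30, generations=80,
                mutation_rate=0.3, seed=0):
    """Minimal real-coded GA: elitist selection, one-point crossover,
    Gaussian mutation clipped to [lo, hi]. Returns the best parameter
    vector found (e.g. centers of fuzzy membership functions)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # lower fitness is better
        elite = pop[: pop_size // 2]           # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < mutation_rate:   # perturb one gene
                i = rng.randrange(n_params)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

For example, with three hypothetical membership-function centers and a squared-error fitness toward a target shape, the GA drives the parameters close to the target.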
Abstract: Super-resolution usually denotes producing a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources; they may be frames of videos, individual pictures, etc. In the encoder we apply downsampling via bidimensional interpolation of each frame, and in the decoder we apply upsampling to restore the original size of the image. If the compression ratio is very high, we additionally apply a convolutional mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards; in particular, the mentioned mask is coded inside the texture memory of a GPGPU.
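The pipeline described above (resample by bidimensional interpolation at the encoder and decoder, then deblur with a convolutional mask) can be sketched on the CPU with plain NumPy; the bilinear kernel and the Laplacian sharpening mask below are common choices standing in for the paper's unspecified ones:

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resample a 2-D grayscale image to (new_h, new_w) by bilinear
    interpolation (usable for both downsampling and upsampling)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def sharpen(img, amount=1.0):
    """Edge-restoring convolutional mask: add a scaled Laplacian back
    onto the image to counteract interpolation blur."""
    h, w = img.shape
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    pad = np.pad(img, 1, mode="edge")
    lap = sum(k[i, j] * pad[i:i + h, j:j + w]
              for i in range(3) for j in range(3))
    return img + amount * lap
```

On the GPU these two kernels map naturally to texture fetches, which is consistent with the abstract's note that the mask is coded inside texture memory.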
Abstract: This paper presents a simple method for estimating the additional load, as a factor of the existing load, that may be drawn before reaching the point of maximum line loadability of a radial distribution system (RDS) with different realistic load models at different substation voltages. The proposed method involves a simple line loadability index (LLI) that measures the proximity of the present operating state of a line in the distribution system to its maximum loadability. The LLI can be used to assess voltage instability and the line loading margin. The proposed method is also compared with the existing maximum loadability index method [10]. The simulation results show that the LLI can identify not only the weakest line/branch causing system instability but also the system voltage collapse point, when the index is near one. This feature enables us to set an index threshold to monitor and predict system stability on-line so that proper action can be taken to prevent the system from collapsing. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on a two-bus and a 69-bus RDS.
Abstract: The challenge in the swing-up problem of the double inverted pendulum on a cart (DIPC) is to design a controller that brings all of the DIPC's states, especially the joint angles of the two links, into the region of attraction of the desired equilibrium. This paper proposes a new method to swing up the DIPC based on a series of rest-to-rest maneuvers of the first link about its vertically upright configuration while holding the cart fixed at the origin. The rest-to-rest maneuvers are designed such that each one results in a net gain in the energy of the second link. This swings the DIPC's configuration up into the region of attraction of the desired equilibrium. A three-step algorithm is provided for swing-up control, followed by the stabilization step. Simulation results, with a comparison to experimental work reported in the literature, are presented to demonstrate the efficacy of the approach.
Abstract: In this paper a public key cryptosystem is proposed using number theoretic transforms (NTT) over a ring of integers modulo a composite number. The key agreement is similar to the ElGamal public key algorithm. The security of the system is based on the solution of multivariate linear congruence equations and on the discrete logarithm problem. In the proposed cryptosystem only a fixed number of multiplications is carried out (constant complexity), and hence encryption and decryption can be done easily. At the same time, it is very difficult to attack the cryptosystem, since the ciphertext is a sequence of interrelated integers. The system also provides authentication. The proposed algorithm is validated with a numerical example using Mathematica version 5.0.
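The transform details are not spelled out in the abstract; as background, a naive number theoretic transform over Z_p and its inverse look like the following (p = 257 with an order-8 root is a standard textbook choice — the paper instead works modulo a composite number and layers an ElGamal-style key agreement on top, which this sketch does not reproduce):

```python
def ntt(a, p, omega):
    """Naive O(n^2) number theoretic transform of `a` over Z_p, where
    `omega` is a primitive n-th root of unity mod p (n = len(a))."""
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A, p, omega):
    """Inverse transform: same sum with omega^-1, scaled by n^-1 mod p
    (both inverses computed via Fermat's little theorem, p prime)."""
    n = len(A)
    inv_n = pow(n, p - 2, p)
    inv_omega = pow(omega, p - 2, p)
    return [inv_n * sum(A[j] * pow(inv_omega, i * j, p) for j in range(n)) % p
            for i in range(n)]
```

A round trip `intt(ntt(a))` recovers the original sequence exactly, which is what makes the transform usable as an invertible encoding step.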
Abstract: Steganography is the art of hiding and transmitting data through apparently innocuous carriers in an effort to conceal the existence of the data. Many steganography algorithms have been proposed recently, and many of them use digital image data as a carrier. In the data hiding scheme based on halftoning and coordinate projection, still image data is used as a carrier, and the data of the carrier image are modified for data embedding. In this paper, we present three features for the analysis of data hiding via halftoning and coordinate projection. We also present a classifier that uses the proposed three features.
Abstract: In the current context of globalization, a large number of companies have sought to develop as groups in order to reach other markets or to meet the criteria for listing on a stock exchange. The issue of consolidated financial statements prepared by a parent, an investor or a venturer, and of the financial reporting standards guiding them, therefore becomes even more important. The aim of our paper is to expose this issue in a consistent manner, first by summarizing the international accounting and financial reporting standards applicable before the 1st of January 2013 and considering the role of the crisis in shaping the standard-setting process, and secondly by analyzing the newly issued or amended standards and the main changes they bring.
Abstract: A structural interpretation of part of eastern Potwar (Missa Keswal) has been carried out with the available seismological, seismic, and well data. The seismological data contain both source parameters and fault plane solution (FPS) parameters, and the seismic data comprise ten seismic lines that were re-interpreted using well data. The structural interpretation depicts two broad sets of faults, namely thrust and back-thrust faults. Together, these faults give rise to pop-up structures in the study area and are also responsible for many structural traps and for seismicity. The seismic interpretation includes time and depth contour maps of the Chorgali Formation, while the seismological interpretation includes focal mechanism solutions (FMS), depth, frequency, and magnitude bar graphs, and an updated seismotectonic map. The focal mechanism solutions surrounding the study area are correlated with different geological and structural maps of the area to determine the nature of the subsurface faults. The results of structural interpretation from the seismic and the seismological data show good correlation. It is hoped that the present work will lead to a better understanding of variations in the subsurface structure and can be a useful tool for earthquake prediction, oil field planning, and reservoir monitoring.
Abstract: The importance of supply chain and logistics management has been widely recognised. Effective management of the supply chain can reduce costs and lead times and improve responsiveness to changing customer demands. This paper proposes a multi-matrix real-coded Genetic Algorithm (MRGA) based optimisation tool that minimises the total costs associated with supply chain logistics. Owing to the finite capacity constraints of all parties within the chain, a Genetic Algorithm (GA) often produces infeasible chromosomes during the initialisation and evolution processes. The proposed algorithm embeds a chromosome initialisation procedure and crossover and mutation operations that always guarantee feasible solutions. The proposed algorithm was tested using three sizes of benchmark dataset of logistic chain networks, which are typical of those faced by most global manufacturing companies. A half-fractional factorial design was carried out to investigate the influence of alternative crossover and mutation operators by varying GA parameters. The analysis of the experimental results suggested that the quality of the solutions obtained is sensitive to the ways in which the genetic parameters and operators are set.
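One common way to build chromosomes that "always guarantee feasible solutions" under finite capacities is to allocate demand supplier by supplier while reserving enough downstream capacity, as sketched below (the encoding and constraint set here are simplified assumptions, not the paper's MRGA representation):

```python
import random

def feasible_allocation(demand, capacities, rng=None):
    """Randomly split `demand` across suppliers so that no capacity is
    exceeded and the total exactly meets demand: at each step the draw
    is bounded below by what the remaining suppliers could not cover."""
    rng = rng or random.Random()
    n = len(capacities)
    assert sum(capacities) >= demand, "problem is infeasible"
    order = list(range(n))
    rng.shuffle(order)                 # visit suppliers in random order
    alloc = [0] * n
    remaining = demand
    for k, i in enumerate(order):
        later = sum(capacities[j] for j in order[k + 1:])
        low = max(0, remaining - later)        # must take at least this
        high = min(capacities[i], remaining)   # may take at most this
        take = rng.randint(low, high)
        alloc[i] = take
        remaining -= take
    return alloc
```

Because the lower bound reserves exactly the capacity still needed, every chromosome generated this way is feasible by construction, so no repair step is required during initialisation.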
Abstract: Bioprocesses are recognized as difficult to control because their dynamic behavior is highly nonlinear and time-varying, in particular when they operate in fed-batch mode. The research objective of this study was to develop an appropriate control method for a complex bioprocess and to implement it on a laboratory plant. Hence, an intelligent control structure has been designed in order to produce biomass and to maximize the specific growth rate.
Abstract: This paper describes a knowledge-based system for the analysis of microscopic wear particles. Wear particles contained in lubricating oil carry important information about machine condition, in particular the state of wear. Experts (tribologists) in the field extract this information to monitor the operation of the machine and to ensure safety, efficiency, quality, productivity, and economy of operation. This procedure is not always objective, and it can also be expensive. The aim is to classify these particles according to their morphological attributes of size, shape, edge detail, thickness ratio, color, and texture, and thereby to predict wear failure modes in engines and other machinery. The attribute knowledge links human expertise to the devised Knowledge Based Wear Particle Analysis System (KBWPAS). The system provides an automated and systematic approach to wear particle identification that is linked directly to the wear processes and modes occurring in machinery. This brings consistency to wear judgment and prediction, which leads to standardization and less dependence on tribologists.
Abstract: Estimating the reliability of a computer network has been a subject of great interest. It is a well-known fact that this problem is NP-hard. In this paper we present a very efficient combinatorial approach for Monte Carlo reliability estimation of a network with unreliable nodes and unreliable edges. Its core is the computation of certain combinatorial invariants of the network. These invariants, once computed, directly provide a simple framework for the computation of network reliability. As a specific case of this approach we obtain tight lower and upper bounds for distributed network reliability (the so-called residual connectedness reliability). We also present some simulation results.
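The combinatorial invariants themselves are not given in the abstract, but the crude Monte Carlo baseline they accelerate can be sketched directly: sample node and edge states, then test whether the surviving nodes induce a connected subgraph (the residual connectedness criterion):

```python
import random
from collections import deque

def is_connected(nodes, edges):
    """BFS check that the node set `nodes` is connected via `edges`."""
    if not nodes:
        return False
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == nodes

def residual_reliability(nodes, edges, node_p, edge_p, trials=2000, seed=0):
    """Plain Monte Carlo estimate of residual connectedness reliability:
    each node survives with probability node_p[v], each edge with
    edge_p[(u, v)], and an edge also needs both endpoints up."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = {v for v in nodes if rng.random() < node_p[v]}
        live = [(u, v) for u, v in edges
                if u in up and v in up and rng.random() < edge_p[(u, v)]]
        hits += is_connected(up, live)
    return hits / trials
```

This naive estimator needs many samples when failures are rare, which is exactly the regime where precomputed combinatorial invariants pay off.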
Abstract: This paper deals with tracking and estimating the time delay between two signals. The algorithm is simulated using the Mathcad package. The algorithm we present adaptively controls and tracks the delay so as to minimize the mean square error. The algorithm thus has the task not only of seeking the minimum point of the error but also of tracking changes in its position, leading to a significant improvement in performance. A flowchart of the algorithm is presented, and several tests of different cases are carried out.
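The adaptive tracker is only described qualitatively in the abstract; the non-adaptive baseline such trackers refine — picking the lag that maximizes the cross-correlation between the two signals — can be sketched as:

```python
def estimate_delay(x, y, max_lag):
    """Return the integer lag d (|d| <= max_lag) maximizing the
    cross-correlation sum of x[i] * y[i + d]; if y is a copy of x
    delayed by d samples, that lag wins."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, -lag)
        hi = min(len(x), len(y) - lag)
        corr = sum(x[i] * y[i + lag] for i in range(lo, hi))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag
```

An adaptive scheme replaces this exhaustive lag search with a gradient-style update of the current delay estimate, which is what lets it follow a delay that changes over time.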
Abstract: In this paper we present a GP-based method for automatically evolving projections, so that data can be more easily classified in the projected spaces. At the same time, our approach can reduce dimensionality by constructing more relevant attributes. The fitness of each projection measures how easy it is to classify the dataset after applying the projection; this is quickly computed by a Simple Linear Perceptron. We have tested our approach in three domains. The experiments show that it obtains good results compared to other machine learning approaches, while reducing dimensionality in many cases.
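The separability fitness named above — "quickly computed by a Simple Linear Perceptron" — can be sketched as follows; the epoch count and the use of training accuracy as the score are assumptions, since the abstract does not fix them:

```python
def perceptron_fitness(X, y, epochs=10):
    """Train a simple linear perceptron on projected data and return
    training accuracy: a cheap proxy for how linearly separable the
    dataset is after a candidate GP projection. y entries are -1 or +1."""
    w = [0.0] * (len(X[0]) + 1)            # last weight is the bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            ext = xi + [1.0]
            pred = 1 if sum(wj * xj for wj, xj in zip(w, ext)) > 0 else -1
            if pred != yi:                  # classic perceptron update
                w = [wj + yi * xj for wj, xj in zip(w, ext)]
    correct = 0
    for xi, yi in zip(X, y):
        ext = xi + [1.0]
        pred = 1 if sum(wj * xj for wj, xj in zip(w, ext)) > 0 else -1
        correct += pred == yi
    return correct / len(y)
```

A projection that makes the classes linearly separable scores 1.0, so GP individuals are rewarded for producing projected spaces that a linear classifier can handle.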
Abstract: Accurate modeling of high-speed RLC interconnects has become a necessity for addressing signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, important in communication subsystems such as mixers, RF components, and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, into a lower dimension. Experiments carried out using the Cadence Design Simulator indicate that the proposed technique achieves greater size reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.
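Reduction by "projecting the original system described by linear differential equations into a lower dimension" follows the standard pattern below; this one-sided Krylov (moment-matching) sketch is illustrative background for time-invariant systems, not the paper's Frequency Shift Technique for time-varying ones:

```python
import numpy as np

def krylov_reduce(A, B, C, q):
    """Project dx/dt = A x + B u, y = C x onto the order-q Krylov space
    span{B, AB, ..., A^(q-1) B}; the reduced model matches the first q
    Markov parameters C A^k B of the original system."""
    K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(q)])
    V, _ = np.linalg.qr(K)          # orthonormal basis, shape (n, q)
    return V.T @ A @ V, V.T @ B, C @ V
```

The reduced triple (Ar, Br, Cr) has dimension q instead of n, yet reproduces the original input-output behavior to the matched order, which is why simulation of the reduced model is so much cheaper.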
Abstract: Microbial-induced calcite precipitation (MICP) is a relatively green and sustainable soil improvement technique. It utilizes a biochemical process that exists naturally in soil to improve the engineering properties of soils. The calcite precipitation process is promoted by injecting a high concentration of urease-positive bacteria and reagents into the soil. The main objective of this paper is to provide an overview of the factors affecting MICP in soil. Several factors were identified, including nutrients, bacteria type, geometric compatibility of bacteria, bacteria cell concentration, fixation and distribution of bacteria in soil, temperature, reagent concentration, pH, and injection method. These factors were found to be essential for promoting successful MICP soil treatment. Furthermore, a preliminary laboratory test was carried out to investigate the potential application of the technique in improving the shear strength and impermeability of a residual soil specimen. The results showed that both the shear strength and the impermeability of the residual soil improved significantly upon MICP treatment, and the improvement increased with increasing soil density.