Abstract: A high-linearity, high-speed current-mode sample-and-hold circuit is designed and simulated in a 0.25 μm CMOS technology. The design operates at low supply voltage and uses a fully differential topology. Because only two switches are used, switch-related noise is reduced, and signal-dependent error is completely eliminated by a new zero-voltage switching technique. The circuit has a linearity error of ±0.05 μA, i.e. 12-bit accuracy, for a ±160 μA differential output at an input signal frequency of 5 MHz and a sampling frequency of 100 MHz. The third harmonic is –78 dB.
Abstract: Operating a device at high power and high frequency is a major problem because wall losses greatly reduce the efficiency of the device. In the present communication, the authors analytically examine how the ohmic/RF efficiency (the fraction of the total generated power that is extracted as output power) of a gyrotron cavity depends on the conductivity of the copper walls for the second-harmonic TE0,6 mode. This study shows a rapid fall in RF efficiency as the quality (conductivity) of the copper degrades. Starting from an RF efficiency near 40% at the conductivity of ideal copper (5.8 × 10^7 S/m), the RF efficiency decreases by up to 8% as the copper quality degrades. Assuming a conductivity half that of ideal copper, the RF efficiency has been studied as a function of the diffractive quality factor, Qdiff; the RF efficiency decreases rapidly with increasing diffractive Q. Ohmic wall losses as a function of frequency have also been analyzed for a 460 GHz gyrotron cavity excited in the TE0,6 mode. For the 460 GHz cavity, the extracted power is reduced to 32% of the generated power due to ohmic losses in the walls of the cavity.
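The trend described above can be sketched with the standard cavity power-balance relation, P_out/P_gen = Q_ohm/(Q_ohm + Q_diff), together with the fact that the ohmic quality factor degrades as the square root of the wall conductivity (via the skin depth). The abstract does not give the authors' exact formulas or cavity parameters, so the function and the sample Q values below are illustrative assumptions only:

```python
import math

SIGMA_IDEAL = 5.8e7  # conductivity of ideal copper, S/m

def extracted_fraction(sigma, q_diff, q_ohm_ideal):
    """Fraction of generated power extracted as RF output.

    Uses P_out / P_gen = Q_ohm / (Q_ohm + Q_diff), where the ohmic Q
    scales as sqrt(sigma / sigma_ideal) because the skin depth (and
    hence the wall loss) grows as the conductivity falls.
    """
    q_ohm = q_ohm_ideal * math.sqrt(sigma / SIGMA_IDEAL)
    return q_ohm / (q_ohm + q_diff)

# Illustrative Q values only: a cavity whose ideal-copper efficiency is 40%
q_diff, q_ohm_ideal = 3000.0, 2000.0
eta_ideal = extracted_fraction(SIGMA_IDEAL, q_diff, q_ohm_ideal)
eta_half = extracted_fraction(SIGMA_IDEAL / 2, q_diff, q_ohm_ideal)
```

With these assumed Q values the extracted fraction drops from 40% at ideal conductivity to roughly 32% at half the ideal conductivity, qualitatively illustrating the kind of degradation the abstract reports.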
Abstract: The many feasible alternatives and conflicting objectives make equipment selection in materials handling a complicated task. This paper presents the use of Monte Carlo (MC) simulation combined with the Analytic Hierarchy Process (AHP) to evaluate and select the most appropriate Material Handling Equipment (MHE). The proposed hybrid model was built on the basis of the material handling equation to identify the main criteria and sub-criteria critical to MHE selection. The criteria describe the properties of the material to be moved, the characteristics of the move, and the means by which the materials will be moved. Using MC simulation alongside the AHP is powerful because it allows the decision maker to represent his or her possible preference judgments as random variables. This reduces the uncertainty of the single-point judgments of conventional AHP and provides more confidence in the results of the decision problem. A small pharmaceutical business is used as an example to illustrate the development and application of the proposed model.
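The combination of MC sampling with AHP priority derivation can be sketched as follows: instead of a single pairwise-comparison value, each judgment is drawn from a range supplied by the decision maker (a uniform distribution here). The paper's actual distributions, criteria and scales are not given in the abstract, so everything below is a hypothetical illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def ahp_weights(A):
    """Priority vector of a pairwise-comparison matrix:
    the normalised principal eigenvector."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

def mc_ahp(n, judgment_ranges, n_samples=2000):
    """Monte Carlo AHP: each upper-triangle judgment a[i][j] (i < j) is
    a uniform random variable over the decision maker's (low, high)
    range; returns the mean priority weights over all sampled matrices."""
    acc = np.zeros(n)
    for _ in range(n_samples):
        A = np.ones((n, n))
        for (i, j), (lo, hi) in judgment_ranges.items():
            a = rng.uniform(lo, hi)
            A[i, j], A[j, i] = a, 1.0 / a
        acc += ahp_weights(A)
    return acc / n_samples

# Hypothetical 3-criteria example: criterion 0 is judged 2-4x as
# important as criterion 1 and 4-6x as important as criterion 2.
w = mc_ahp(3, {(0, 1): (2, 4), (0, 2): (4, 6), (1, 2): (1, 3)})
```

Averaging the priority vectors over many sampled matrices is one simple way to aggregate; one could equally report the full distribution of each weight to quantify the confidence the abstract refers to.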
Abstract: According to the Rostler method (ASTM D 2006), the saturates content of bitumen is determined on the basis of its reactivity to sulphuric acid, while the Corbett method (ASTM D 4124) determines it on the basis of polarity. This paper presents results from a study on the effect of saturates content, as determined by the two different fractionation methods, on the rheological and aging characteristics of bitumen. The results indicated that increasing the saturates content tended to reduce all of the rheological characteristics considered: the bitumen became less elastic, less viscous, and less resistant to plastic deformation, but more resistant to fatigue cracking. After the short- and long-term aging processes, the treatment-effect coefficients of saturates decreased, and the saturates became thicker due to aging. This study concludes that saturates are not really stable but are reactive during the aging process; therefore, the reactivity of saturates should be considered in the bitumen aging index.
Abstract: The addition of an oily waste to a co-composting process of dairy cow manure with food waste, and its influence on the final product, was evaluated. Three static composting piles with different substrate concentrations were assessed. Sawdust was added to all composting piles to attain 60% humidity at the beginning of the process. In pile 1, the co-substrates were the solid phase of dairy cow manure, food waste, and sawdust as a bulking agent. In piles 2 and 3 there was an extra input of oily waste of 7% and 11% of the total volume, respectively, corresponding to 18% and 28% in dry weight. The results showed that the co-composting process was feasible even at the highest fat content. Another positive effect of the oily waste addition was its requirement for extra humidity, due to the hydrophobic properties of this specific waste, which may imply a reduced need for a bulking agent. Moreover, this study shows that composting can be a feasible way of adding value to fatty wastes. The three final composts presented very similar properties, suitable for land application.
Abstract: Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. Software metrics are instruments for measuring all aspects of a software product. These metrics are used throughout a software project to assist in estimation, quality control, productivity assessment, and project control. Object-oriented software metrics focus on measurements applied to classes and other object-oriented characteristics. Such measurements give the software engineer insight into the behavior of the software and into how changes can be made that will reduce complexity and improve the continuing capability of the software. Object-oriented software metrics can be classified into two types, static and dynamic: static metrics are concerned with measuring the software by static analysis, while dynamic metrics are concerned with measuring the software at run time. Most earlier work focused on static metrics; some work has been done on the dynamic aspects of software measurement, but this area still demands further research. In this paper we give a set of dynamic metrics, specifically for polymorphism, in object-oriented systems.
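As a concrete illustration of a dynamic polymorphism metric (a sketch of the general idea, not of the specific metrics defined in the paper), one can instrument polymorphic call sites at run time and count how many distinct concrete receiver classes a given method name actually binds to during execution:

```python
from collections import defaultdict

class DynamicPolymorphismCounter:
    """Records, per method name, the concrete classes that actually
    received the call at run time; the count of distinct receivers is
    one possible dynamic measure of exercised polymorphism."""

    def __init__(self):
        self.receivers = defaultdict(set)

    def call(self, obj, method_name, *args):
        # Log the concrete type of the receiver, then dispatch normally.
        self.receivers[method_name].add(type(obj).__name__)
        return getattr(obj, method_name)(*args)

    def degree(self, method_name):
        return len(self.receivers[method_name])

# Hypothetical class hierarchy used only to exercise the counter.
class Circle:
    def area(self):
        return 3.14159

class Square:
    def area(self):
        return 1.0

meter = DynamicPolymorphismCounter()
for shape in (Circle(), Square(), Circle()):
    meter.call(shape, "area")
# meter.degree("area") is now 2: two distinct receiver classes observed
```

Unlike a static count of overriding subclasses, this value reflects only the polymorphism actually exercised by a particular run, which is the distinction the abstract draws between static and dynamic metrics.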
Abstract: Severe heart failure is a common problem that has a significant effect on health expenditures in industrialized countries; moreover, it reduces patients' quality of life. However, current research usually focuses either on detailed modeling of the heart or on detailed modeling of the cardiovascular system. This paper therefore presents a sophisticated model of the heart enhanced with an extensive model of the cardiovascular system. Special interest is placed on the pressure and flow values close to the heart, since these values are critical for accurately diagnosing causes of heart failure. The model is implemented in Dymola, an object-oriented, physical modeling language. Results achieved with the novel model show the overall feasibility of the approach; moreover, results are illustrated and compared to other models. The novel model shows significant improvements.
Abstract: There are multiple ways to implement a decimation filter. This paper addresses the use of a CIC (cascaded integrator-comb) filter and an HB (half-band) filter as the decimation filter to reduce the sample rate by a factor of 64, and details the implementation steps to realize this design in hardware. A low-power design approach for the CIC filter and the half-band filter is discussed. The filter design is implemented through MATLAB system modeling and an ASIC (application-specific integrated circuit) design flow, and verified using an FPGA (field-programmable gate array) board and MATLAB analysis.
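A CIC decimator of the kind referred to above consists of N integrator stages running at the input rate, a decimation by R, and N comb stages at the output rate; its attraction for low-power hardware is that it needs no multipliers. The paper's actual stage count and half-band coefficients are not given in the abstract, so the parameters below (N = 3, R = 64) are illustrative:

```python
import numpy as np

def cic_decimate(x, R=64, N=3, M=1):
    """N-stage CIC decimator: N running-sum integrators at the input
    rate, downsampling by R, then N differencing combs (delay M) at
    the output rate. DC gain is (R*M)**N."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):              # integrator section (high rate)
        y = np.cumsum(y)
    y = y[::R]                      # rate reduction by R
    for _ in range(N):              # comb section (low rate)
        delayed = np.concatenate((np.zeros(M, dtype=y.dtype), y[:-M]))
        y = y - delayed
    return y

# A constant input of 1 settles to the DC gain (R*M)**N = 64**3
out = cic_decimate(np.ones(64 * 20))
```

In fixed-point hardware the integrators are allowed to wrap in two's complement; the register width only needs to cover log2((R*M)**N) extra bits, which is why the structure is so cheap.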
Abstract: This paper presents a cold-flow simulation study of a small gas turbine combustor performed using a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex, which is responsible for the efficient mixing of fuel and air. Such models are necessary for the prediction and optimization of real gas turbine combustors. An air swirler can control combustor performance by assisting the fuel-air mixing process and by producing recirculation regions which act as flame holders and influence residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three axial air swirlers with different vane angles, i.e., 30°, 45°, and 60°, were used. Three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated with a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry was created as a solid model, and the meshing was done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for the design and analysis of real combustors. The effects of the swirlers and of the mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained from the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flowfield as well as on pressure losses.
Abstract: This paper presents a particle swarm optimization algorithm with particle reduction for global optimization problems. Particle swarm optimization is inspired by collective motion such as that of bird flocks or fish schools, and is a multi-point search algorithm that finds the best solution using multiple particles. Particle swarm optimization is so flexible that it can be adapted to many optimization problems. When an objective function has many intricately placed local minima, however, the particles may fall into a local minimum. To avoid local minima, a large number of particles is initially prepared and their positions are updated by particle swarm optimization. The particles are then reduced sequentially, based on their evaluation values, until a predetermined number remains, and particle swarm optimization continues until the termination condition is met. To show the effectiveness of the proposed algorithm, we examine the minima found on test functions and compare the results with existing algorithms. Furthermore, the influence of the initial number of particles on the best value found by our algorithm is discussed.
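The procedure in the abstract (start with many particles, update them by standard PSO, and periodically discard the worst until a predetermined count remains) can be sketched as follows; the update coefficients, reduction schedule and test function here are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple unimodal test function used only for illustration."""
    return float(np.sum(x ** 2))

def pso_with_reduction(f, dim=2, n_init=40, n_final=10, iters=200,
                       w=0.7, c1=1.5, c2=1.5, reduce_every=20):
    pos = rng.uniform(-5.0, 5.0, (n_init, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    for t in range(iters):
        g = pbest[np.argmin(pbest_val)]            # global best position
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        # Reduction step: drop the worst particle (by evaluation value)
        # on a fixed schedule until only n_final particles remain.
        if len(pos) > n_final and (t + 1) % reduce_every == 0:
            keep = np.arange(len(pos)) != np.argmax(pbest_val)
            pos, vel = pos[keep], vel[keep]
            pbest, pbest_val = pbest[keep], pbest_val[keep]
    return pbest[np.argmin(pbest_val)], float(pbest_val.min())

best_x, best_val = pso_with_reduction(sphere)
```

The point of the reduction is to spend the early iterations on broad exploration with many particles and the later iterations on cheap refinement with few; a multimodal test function such as Rastrigin would exercise the escape-from-local-minima behaviour the abstract describes.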
Abstract: Ever since the industrial revolution began, our ecosystem has changed, and indeed the negatives outweigh the positives. Industrial waste is usually released into bodies of water such as rivers or the sea. Tempeh waste is one example of a waste that carries many hazardous and unwanted substances that affect the surrounding environment. Tempeh is a popular fermented food in Asia which is rich in nutrients and active substances. Tempeh liquid waste in particular can cause air pollution, and if it penetrates the soil it contaminates the groundwater, making the water unfit for consumption. Moreover, bacteria thrive in the polluted water and are often responsible for causing many kinds of disease. The treatments used for this waste are biological, such as constructed wetlands and activated sludge. These treatments are able to reduce both physical and chemical parameters, such as temperature, TSS, pH, BOD, COD, NH3-N, NO3-N, and PO4-P, and are applied before the waste is released into the water. The result is a comparison between constructed wetlands and activated sludge, along with a determination of which method is better suited to reducing the physical and chemical substances of the waste.
Abstract: The ability of agricultural and decorative plants to absorb and detoxify TNT and RDX has been studied. All eight tested plants, grown hydroponically, were able to absorb these explosives from water solutions: Alfalfa > Soybean > Chickpea > Chickling vetch > Ryegrass > Mung bean > China bean > Maize. Unlike TNT, RDX did not exhibit a negative influence on seed germination and plant growth; moreover, some plants exposed to RDX-containing solution increased their biomass by 20%. Study of the fate of absorbed [1-14C]-TNT revealed distribution of the label in low- and high-molecular-mass compounds, both in the roots and in the above-ground parts of the plants, prevailing in the latter. The content of 14C in low-molecular compounds is much higher in the plant roots than in the above-ground parts; on the contrary, high-molecular compounds are more intensively labeled in the above-ground parts of soybean. Most (up to 70%) of the metabolites of TNT, formed either by enzymatic reduction or oxidation, are found in high-molecular insoluble conjugates. Activation of the enzymes responsible for the reduction, oxidation and conjugation of TNT, such as nitroreductase, peroxidase, phenoloxidase and glutathione S-transferase, has been demonstrated. Among these enzymes, only nitroreductase was shown to be induced in alfalfa exposed to RDX. The increase in malate dehydrogenase activities in plants exposed to both explosives indicates intensification of the tricarboxylic acid cycle, which generates the reducing equivalents NAD(P)H necessary for the functioning of the nitroreductase. A hypothetical scheme of TNT metabolism in plants is proposed.
Abstract: In this study, we developed an algorithm for detecting
seam cracks in a steel plate. Seam cracks are generated in the edge
region of a steel plate. We used the Gabor filter and an adaptive double
threshold method to detect them. To reduce the number of pseudo
defects, features based on the shape of seam cracks were used. To
evaluate the performance of the proposed algorithm, we tested 989
images with seam cracks and 9470 defect-free images. Experimental
results show that the proposed algorithm is suitable for detecting seam
cracks. However, it should be improved to increase the true positive
rate.
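The two main ingredients named in the abstract can be sketched as follows: a Gabor kernel (here only the real part, with hypothetical parameter values) to emphasise oriented crack-like texture, and a double (hysteresis) threshold that keeps weak responses only when they connect to strong ones. The paper's adaptive threshold selection and shape-based pseudo-defect features are not reproduced here:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier with wavelength lam along orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def double_threshold(response, low, high):
    """Hysteresis thresholding: a pixel survives if it exceeds `high`,
    or if it exceeds `low` and is 4-connected to a surviving pixel."""
    strong = response >= high
    weak = response >= low
    out = strong.copy()
    stack = list(zip(*np.nonzero(strong)))
    while stack:                      # flood-fill from the strong pixels
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < out.shape[0] and 0 <= nj < out.shape[1]
                    and weak[ni, nj] and not out[ni, nj]):
                out[ni, nj] = True
                stack.append((ni, nj))
    return out
```

Convolving the edge-region image with a bank of such kernels at several orientations and then applying the double threshold to the maximum response is one common arrangement; the adaptive variant would derive `low` and `high` from local statistics rather than fixing them.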
Abstract: This paper presents a new Hybrid Fuzzy (HF) PID-type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. For a fuzzy rule-based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because doing so is a computationally expensive combinatorial optimization problem. On the other hand, GAs are a technique that emulates biological evolutionary theory to solve complex optimization problems, using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill-climbing method. The motivation for using the modified GA is to reduce the fuzzy-system design effort and to take large parametric uncertainties into account. The proposed method guarantees the global optimum value, and the speed of the algorithm's convergence is greatly improved as well. This newly developed control strategy combines the advantages of GAs and fuzzy control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi-Stage Fuzzy (MSF) controller, a robust mixed H2/H∞ controller and classical PID controllers through several performance indices to illustrate its robust performance for a wide range of system parameters and load changes.
Abstract: In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control of those signals is performed in our digital baseband processor by a dedicated hardware block, the DAGC. The DAGC automatically controls the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is performed in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor of the accumulation, and applying the adjusted gain factor to the baseband values. Based on measurements of the dependence of the RSSI signal on input power, we concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple yet effective, and it reduces the complexity and power consumption of the DAGC. The DAGC was implemented and tested both in an FPGA and in an ASIC as a part of our WLAN baseband processor. Finally, we integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
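The step sequence listed above (measure the baseband power, accumulate the difference to the set point, adjust the gain) amounts to a first-order control loop in the log domain. The sketch below is a behavioural illustration only; the real DAGC operates on RSSI-linearised values in dedicated hardware, and the loop gain `mu` and the test signal here are hypothetical:

```python
import math

def dagc_step(samples, gain_db, set_point_db, mu=0.2):
    """One DAGC iteration: measure the average baseband power (dB),
    take the difference to the set point, and fold a fraction of that
    error back into the gain setting."""
    power = sum(abs(s) ** 2 for s in samples) / len(samples)
    power_db = 10.0 * math.log10(power + 1e-12)
    error_db = set_point_db - power_db      # positive error -> raise gain
    return gain_db + mu * error_db

# Behavioural simulation: a weak constant baseband signal is driven
# to the 0 dB set point; the steady-state gain is then 40 dB.
gain_db = 0.0
raw = [0.01 + 0.0j] * 64                    # |raw|^2 = -40 dB
for _ in range(100):
    g = 10 ** (gain_db / 20.0)
    gain_db = dagc_step([g * s for s in raw], gain_db, set_point_db=0.0)
```

Because the error is accumulated in decibels, the loop converges geometrically regardless of the absolute input level, which is what makes the simple RSSI linearization mentioned in the abstract sufficient.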
Abstract: An image compression method has been developed
using fuzzy edge image utilizing the basic Block Truncation Coding
(BTC) algorithm. The fuzzy edge image has been validated against classical edge detectors, taking the results of the well-known Canny edge detector as the reference, prior to its application in the proposed method. The
bit plane generated by the conventional BTC method is replaced with
the fuzzy bit plane generated by the logical OR operation between
the fuzzy edge image and the corresponding conventional BTC bit
plane. The input image is encoded with the block mean and standard
deviation and the fuzzy bit plane. The proposed method has been
tested with test images of 8 bits/pixel and size 512×512 and found to
be superior with better Peak Signal to Noise Ratio (PSNR) when
compared to the conventional BTC, and adaptive bit plane selection
BTC (ABTC) methods. The ragged, jagged appearance and the ringing artifacts at sharp edges are greatly reduced in the images reconstructed by the proposed method with the fuzzy bit plane.
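The bit-plane replacement described above can be sketched on top of classic BTC. In standard BTC a block is encoded by its mean, its standard deviation and a one-bit plane (pixel >= mean); the proposed variant ORs a fuzzy edge bit plane into that plane before encoding. The reconstruction levels below are the moment-preserving ones of conventional BTC; the fuzzy edge detector itself is not reproduced here:

```python
import numpy as np

def btc_encode_block(block, edge_bits=None):
    """Encode a block as (mean, std, bit plane); if a (fuzzy) edge bit
    plane is supplied, it is merged in with a logical OR."""
    m, s = block.mean(), block.std()
    plane = block >= m
    if edge_bits is not None:
        plane = plane | edge_bits          # fuzzy-edge bit plane via OR
    return m, s, plane

def btc_decode_block(m, s, plane):
    """Moment-preserving BTC reconstruction: pixels marked 1 get the
    high level b, pixels marked 0 get the low level a, chosen so that
    the block mean and standard deviation are preserved."""
    q, n = int(plane.sum()), plane.size
    if q in (0, n):
        return np.full(plane.shape, m)
    a = m - s * np.sqrt(q / (n - q))
    b = m + s * np.sqrt((n - q) / q)
    return np.where(plane, b, a)

block = np.array([[10.0, 20.0], [30.0, 40.0]])
rec = btc_decode_block(*btc_encode_block(block))
```

ORing edge bits into the plane biases edge pixels toward the high reconstruction level, which is the mechanism by which the method sharpens edges relative to plain BTC.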
Abstract: Along with the progress of our information society, various risks are becoming increasingly common, causing multiple social problems. For this reason, risk communication for establishing consensus among stakeholders who have different priorities has become important. However, it is not always easy for decision makers to agree on measures to reduce risks based on opposing concepts, such as security, privacy and cost. Therefore, we previously developed and proposed the "Multiple Risk Communicator" (MRC) with the following functions: (1) modeling of the support role of the risk specialist, (2) an optimization engine, and (3) display of the computed results. In this paper, MRC program version 1.0 is applied to the personal information leakage problem. The application process and the validation of the results are discussed.
Abstract: Due to environmental concerns, recent regulations on automobile fuel economy have been strengthened. The market demand for efficient vehicles is growing, and automakers have been devoting great effort to improving engine fuel efficiency. To improve fuel efficiency, it is necessary to reduce the losses or to improve the combustion efficiency of the engine. VVA (Variable Valve Actuation) technology enhances the engine's intake air flow and reduces pumping losses and mechanical friction losses. VVA technology also provides an appropriate valve lift for each of the engine's low-speed and high-speed operating regimes, improving engine performance over the entire operating range. This paper presents a design procedure for a DC motor and drive for a VVA system, and shows the validity of the design through experimental results with a prototype.
Abstract: The importance of supply chain and logistics
management has been widely recognised. Effective management of
the supply chain can reduce costs and lead times and improve
responsiveness to changing customer demands. This paper proposes a multi-matrix real-coded Genetic Algorithm (MRGA) based optimisation tool that minimises the total costs associated with supply chain logistics. Owing to the finite-capacity constraints of all parties within the chain, a Genetic Algorithm (GA) often produces infeasible chromosomes during the initialisation and evolution processes. In the proposed algorithm, a chromosome initialisation procedure and crossover and mutation operations that always guarantee feasible solutions were embedded. The proposed algorithm was tested using three sizes
of benchmarking dataset of logistic chain network, which are typical
of those faced by most global manufacturing companies. A half
fractional factorial design was carried out to investigate the influence
of alternative crossover and mutation operators by varying GA
parameters. The analysis of experimental results suggested that the
quality of solutions obtained is sensitive to the ways in which the
genetic parameters and operators are set.
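A minimal sketch of the feasibility-preserving idea, for a toy encoding (one real-valued gene per supplier; the paper's actual multi-matrix chromosome is richer): initialise by randomly spreading the demand over suppliers while never exceeding any supplier's capacity, so every generated chromosome is feasible by construction:

```python
import random

random.seed(7)

def feasible_chromosome(demand, capacities):
    """Randomly allocate `demand` across suppliers so that no gene
    exceeds its supplier's capacity (assumes sum(capacities) >= demand)."""
    genes = [0.0] * len(capacities)
    remaining = demand
    order = list(range(len(capacities)))
    random.shuffle(order)
    for i in order:                 # random partial allocations
        take = min(capacities[i], remaining) * random.random()
        genes[i] = take
        remaining -= take
    for i in order:                 # pour any leftover into spare capacity
        add = min(capacities[i] - genes[i], remaining)
        genes[i] += add
        remaining -= add
    return genes

chromo = feasible_chromosome(10.0, [5.0, 5.0, 5.0])
```

Crossover and mutation can preserve feasibility the same way, by repairing or re-normalising the child allocation against the capacity bounds instead of rejecting it.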
Abstract: This paper presents a novel Rao-Blackwellised particle filter (RBPF) for mobile robot simultaneous localization and mapping (SLAM) using monocular vision. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation, which drastically reduces the uncertainty about the robot pose. The
landmark position estimation and update are also implemented through the UKF. Furthermore, the number of resampling steps is determined adaptively, which greatly reduces the particle depletion problem, and evolution strategies (ES) are introduced to avoid particle impoverishment. The 3D natural point landmarks are structured with
matching Scale Invariant Feature Transform (SIFT) feature pairs. The
matching of the multi-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log2 N). Experimental results on a real robot in our indoor environment show the advantages of our methods over
previous approaches.
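The logarithmic-time matching mentioned above relies on a KD-tree over the descriptor vectors. Below is a minimal KD-tree with nearest-neighbour search (expected O(log2 N) per query on balanced trees built from well-distributed data); a real SIFT matcher would run this on 128-dimensional descriptors and add a distance-ratio test, which is omitted here:

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Build a KD-tree by recursively splitting at the median along a
    cycling axis."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis])]
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, q, best=None):
    """Recursive nearest-neighbour query with branch pruning."""
    if node is None:
        return best
    d = float(np.linalg.norm(node["point"] - q))
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = q[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, q, best)
    if abs(diff) < best[0]:   # the far side may still hold a closer point
        best = nearest(far, q, best)
    return best

rng = np.random.default_rng(1)
pts = rng.random((200, 4))    # low-dimensional stand-ins for descriptors
tree = build_kdtree(pts)
```

Note that for genuinely high-dimensional descriptors the pruning weakens and approximate variants (best-bin-first search) are commonly used instead; the exact tree above is only the baseline structure.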