Abstract: This paper proposes a zero-voltage transition (ZVT) PWM synchronous buck converter designed to operate at the low output voltage and high efficiency typically required for portable systems. To make the DC-DC converter efficient at low voltage, a synchronous converter is an obvious choice because of its lower conduction loss compared with a diode. The loss in the high-side MOSFET is dominated by switching losses, which are eliminated by the soft-switching technique. Additionally, the resonant auxiliary circuit is designed to be free of switching losses as well. The suggested procedure ensures an efficient converter. Theoretical analysis, computer simulation, and experimental results are presented to explain the proposed scheme.
Abstract: Fault-proneness of a software module is the probability that the module contains faults. Different techniques have been proposed to predict the fault-proneness of modules, including statistical methods, machine learning techniques, neural network techniques, and clustering techniques. The aim of the proposed study is to explore whether metrics available early in the lifecycle (i.e. requirement metrics), metrics available late in the lifecycle (i.e. code metrics), and the combination of the two can be used to identify fault-prone modules using a Genetic Algorithm technique. This approach has been tested on real-time defect datasets of NASA software projects written in the C programming language. The results show that the fusion of requirement and code metrics yields the best prediction model for detecting faults, compared with the commonly used code-based model.
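The abstract does not give the Genetic Algorithm's encoding or fitness function; as an illustration only, a minimal GA over bit strings (each bit could, for example, select one requirement or code metric for the prediction model) might look like:

```python
import random

def genetic_search(fitness, n_bits, pop_size=20, generations=50,
                   p_mut=0.05, seed=0):
    """Minimal genetic algorithm over bit strings (illustrative only;
    the paper's actual encoding and fitness are not given in the
    abstract). Maximizes `fitness` over lists of 0/1 genes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)    # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with probability p_mut per gene
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = elite + children                # elitism keeps the best half
    return max(pop, key=fitness)
```

With `fitness=sum` this reduces to the classic OneMax toy problem; a real fault-prediction fitness would instead score a classifier trained on the metrics selected by the bit string.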
Abstract: In this paper, based on flume experimental data, the velocity distribution in open channel flows is re-investigated. From the analysis, it is proposed that the wake layer in the outer region may be divided into two regions: the relatively weak outer region and the relatively strong outer region. Combining the log law for the inner region with the parabolic law for the relatively strong outer region, an explicit equation for the mean velocity distribution of steady, uniform turbulent flow through straight open channels is proposed and verified with the experimental data. It is found that the sediment concentration has a significant effect on the velocity distribution in the relatively weak outer region.
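The two laws the authors combine take standard forms in the open-channel literature; written with conventional symbols (friction velocity $u_*$, von Kármán constant $\kappa$, flow depth $h$; the constants $A$ and $C$ are generic here, not taken from the abstract):

```latex
% Log law for the inner region:
\frac{u}{u_*} = \frac{1}{\kappa}\ln\frac{y\,u_*}{\nu} + A
% Parabolic (velocity-defect) law for the relatively strong
% outer region, with u_{\max} the maximum velocity near y = h:
\frac{u_{\max}-u}{u_*} = C\left(1-\frac{y}{h}\right)^{2}
```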
Abstract: This paper presents a practical scheme that can be used for allocating transmission losses to generators and loads. In this scheme, the share of a generator or load in the current through a branch is first determined using a modified Z-bus matrix. The current components are then decomposed and the branch loss allocation is obtained. A motivation of the proposed scheme is to improve the results of the Z-bus method and to reach a fairer allocation. The proposed scheme has been implemented and tested on several networks. To achieve practical and applicable results, the proposed scheme is simulated and compared on the 400 kV transmission network of the Khorasan region in Iran and on the standard IEEE 14-bus network. The results show that the proposed scheme is comprehensive and fair in allocating the energy losses of a power market to its participants.
Abstract: Disposal of health-care waste (HCW) is considered an important environmental problem, especially in large cities. Multiple criteria decision making (MCDM) techniques are apt to deal with the quantitative and qualitative considerations of health-care waste management (HCWM) problems. This research proposes a fuzzy multi-criteria group decision making approach with a multilevel hierarchical structure, including qualitative as well as quantitative performance attributes, for evaluating HCW disposal alternatives for Istanbul. Using the entropy weighting method, objective weights as well as subjective weights are taken into account to determine the importance weighting of the quantitative performance attributes. The results obtained using the proposed methodology are thoroughly analyzed.
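The entropy weighting step mentioned above is a standard, fully mechanical computation; a minimal sketch (the decision matrix passed in is illustrative, not the Istanbul data):

```python
import math

def entropy_weights(matrix):
    """Objective criteria weights via the entropy weighting method.

    matrix: m alternatives x n criteria, non-negative scores.
    Criteria whose scores vary more across alternatives carry more
    information (lower entropy) and receive larger weights.
    """
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        # Normalize the column to a probability distribution.
        p = [v / total for v in col]
        # Shannon entropy, normalized to [0, 1] by ln(m).
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(m)
        weights.append(1.0 - e)          # degree of diversification
    s = sum(weights)
    return [w / s for w in weights]
```

A criterion that scores identically for every alternative carries no discriminating information and therefore receives weight zero.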
Abstract: The main objective of this article is to present semi-active vibration control of a cantilever beam using an electro-rheological (ER) fluid embedded sandwich structure. An ER fluid is a smart material in which an applied electric field causes the suspended particles to polarize and connect to each other to form chains. The stiffness and damping coefficients of the ER fluid can be changed within about 10 microseconds; therefore, an ER fluid is suitable as the material embedded in a tunable vibration absorber, making it a smart absorber. For the ERF smart-material embedded structure, the fuzzy control law depends on an experimental expert database and the proposed self-tuning strategy. The electric field is controlled by a cRIO embedded system to implement the real application. This study investigates the different performances obtained with Type-1 fuzzy and interval Type-2 fuzzy controllers. Interval Type-2 fuzzy control is used to handle the modeling uncertainties of this ERF embedded shock absorber. Self-tuning vibration controllers using Type-1 and interval Type-2 fuzzy laws are implemented on the shock absorber system. Based on the resulting performance, interval Type-2 fuzzy control is better than traditional Type-1 fuzzy control for this vibration control system.
Abstract: This paper presents the development and application of an adaptive neuro-fuzzy inference system (ANFIS) based intelligent hybrid neuro-fuzzy controller for automatic generation control (AGC) of a two-area interconnected thermal power system with reheat nonlinearity. The dynamic response of the system has been studied for a 1% step load perturbation in area-1. The performance of the proposed neuro-fuzzy controller is compared against a conventional proportional-integral (PI) controller, a state-feedback linear quadratic regulator (LQR) controller, and a fuzzy gain scheduled proportional-integral (FGSPI) controller. Comparative analysis demonstrates that the proposed intelligent neuro-fuzzy controller is the most effective of all in improving the transients of frequency and tie-line power deviations against small step load disturbances. Simulations have been performed using Matlab®.
Abstract: This paper addresses an efficient technique to embed and detect digital fingerprint codes. Orthogonal modulation is a straightforward and widely used approach for digital fingerprinting but shows several limitations in computational cost and signal efficiency. Coded modulation can solve these limitations in theory; however, it is difficult to perform well in practice if host signals are not available while tracing colluders, other kinds of attacks are applied, or the size of the fingerprint code becomes large. In this paper, we propose a hybrid modulation method in which the merits of orthogonal modulation and coded modulation are combined so that we can achieve low computational cost and high signal efficiency. To analyze the performance, we design a new fingerprint code based on GD-PBIBD theory and modulate this code into images with our method using spread-spectrum watermarking in the frequency domain. The results show that the proposed method can efficiently handle large fingerprint codes and trace colluders against averaging attacks.
Abstract: The increasing volume of information on the Internet creates a growing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical, and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques for extracting phrases and stemming words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical, and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank has been developed to implement and test the proposed model. Test results are included at the end of the paper.
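The improved spread activation algorithm itself is not specified in the abstract; a minimal sketch of weighted spreading activation for query expansion, with the concept graph, decay factor, and threshold all assumed here for illustration, is:

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, steps=2, threshold=0.1):
    """Weighted spreading activation over a concept graph.

    graph: {concept: [(neighbor, relation_weight), ...]} -- the edge
    weights let different conceptual relationships (e.g. is-a,
    part-of) contribute differently, echoing the abstract's weighted
    combination of relationships.
    seeds: query concepts, each starting with activation 1.0.
    Returns concepts whose final activation reaches `threshold`.
    """
    activation = defaultdict(float)
    for s in seeds:
        activation[s] = 1.0
    frontier = dict(activation)
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, w in graph.get(node, ()):
                nxt[neighbor] += act * w * decay
        for node, act in nxt.items():
            activation[node] = max(activation[node], act)
        frontier = nxt
    return {c: a for c, a in activation.items() if a >= threshold}
```

The expanded concept set would then be merged into the query vector before the statistical relevance computation.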
Abstract: Conventional approaches to implementing logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, to be programmed into the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximately 1000% increase in performance), and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.
Abstract: In this work, we successfully extend the one-dimensional differential transform method (DTM), by presenting and proving some theorems, to solve nonlinear high-order multi-pantograph equations. This technique provides a sequence of functions that converges to the exact solution of the problem. Some examples are given to demonstrate the validity and applicability of the present method, and a comparison is made with existing results.
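The one-dimensional differential transform underlying the method is standard; about a point $t_0$ it reads (a multi-pantograph equation of the form in the comment is a typical target, with delays $0<q_i<1$; the symbols are conventional, not taken from the abstract):

```latex
F(k) = \frac{1}{k!}\left[\frac{d^{k}f(t)}{dt^{k}}\right]_{t=t_0},
\qquad
f(t) = \sum_{k=0}^{\infty} F(k)\,(t-t_0)^{k}
% Typical multi-pantograph equation:
% u'(t) = \lambda\, u(t) + \sum_{i=1}^{l} \mu_i\, u(q_i t)
```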
Abstract: The aim of this paper is to present the kinematic analysis and mechanism design of an assistive robotic leg for hemiplegic and hemiparetic patients. In this work, the priority is to design and develop a lightweight, effective, single-driver mechanism on the basis of experimental hip and knee angle data for a walking speed of 1 km/h. A cam-follower mechanism with three links is suggested for this purpose. The kinematic analysis is carried out in the commercial MATLAB software based on the prototype's link sizes and kinematic relationships. To verify the kinematic analysis of the prototype, the kinematic analysis data are compared with the experimental data. The good agreement between them proves that the anthropomorphic design of the lower-extremity exoskeleton follows the human walking gait.
Abstract: Full adders are important components in applications such as digital signal processor (DSP) architectures and microprocessors. In addition to its main task of adding two numbers, the full adder participates in many other useful operations such as subtraction, multiplication, division, address calculation, etc. In most of these systems the adder lies in the critical path that determines the overall speed of the system, so enhancing the performance of the 1-bit full adder cell (the building block of the adder) is a significant goal. Demands for low-power VLSI have been pushing the development of aggressive design methodologies to reduce power consumption drastically. To meet this growing demand, we propose a new low-power adder cell that, at the cost of a higher MOS transistor count, reduces the serious threshold-loss problem, considerably increases the speed, and decreases the power when compared to the static energy recovery full (SERF) adder. Accordingly, a new improved 14T CMOS 1-bit full adder cell is presented in this paper. Results show a 50% improvement in the threshold-loss problem, a 45% improvement in speed, and a considerable reduction in power consumption over the SERF adder and other types of adders with comparable performance.
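Independent of the 14-transistor realization, the cell must implement the standard full-adder truth table; a behavioural reference model of that logic, useful for verifying any implementation, is:

```python
def full_adder(a, b, cin):
    """Behavioural 1-bit full adder: returns (sum, carry_out).

    sum  = a XOR b XOR cin
    cout = majority(a, b, cin)
    """
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout
```

For every input combination the pair (cout, s) read as a 2-bit number must equal the arithmetic sum a + b + cin, which gives an exhaustive 8-row check.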
Abstract: Since 1992, when Hugo de Garis published the first paper on Evolvable Hardware (EHW), a period of intense creativity has followed. EHW has been actively researched, developed, and applied to various problems. Different approaches have been proposed, giving rise to three main classifications: extrinsic, mixtrinsic, and intrinsic EHW. Each of these solutions has real merit. Nevertheless, although extrinsic evolution generates some excellent results, intrinsic systems are not as advanced. This paper suggests three possible solutions for implementing a run-time-configuration intrinsic EHW system: an FPGA-based run-time configuration system, a JBits-based run-time configuration system, and a multi-board functional-level run-time configuration system. The main characteristic of the proposed architectures is that they are implemented on Field Programmable Gate Arrays. A comparison of the proposed solutions demonstrates that multi-board functional-level run-time configuration is superior in terms of scalability, flexibility, and ease of implementation.
Abstract: In this paper, a fibre laser at 546 nm has been studied for a signal power of -30 dB. An Er3+-doped ZBLAN fibre has been used with upconversion pumping by a 980 nm laser diode. The gain saturation effect has been investigated in detail, and the laser performance is also discussed. An efficiency of 35% has been calculated for a fibre laser length of 5 mm. The results show that Er3+-doped ZBLAN is a promising candidate for optical amplification at 546 nm.
Abstract: In this work, a radial basis function (RBF) neural network is developed for the identification of hyperbolic distributed parameter systems (DPSs). This empirical model is based only on process input-output data and is used for the estimation of the controlled variables at specific locations, without the need for an online solution of partial differential equations (PDEs). The nonlinear model obtained is suitably transformed into a nonlinear state-space formulation that also takes the model mismatch into account. A stable robust control law is implemented for the attenuation of external disturbances. The proposed identification and control methodology is applied to a long duct, a common component of thermal systems, for flow-based control of the temperature distribution. The closed-loop performance is significantly improved in comparison to existing control methodologies.
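The identification step can be illustrated with a generic Gaussian RBF network fitted to input-output data by linear least squares; the center placement and width below are assumptions for the sketch, not the paper's procedure:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Design matrix: Gaussian activations plus a constant bias column."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.hstack([Phi, np.ones((X.shape[0], 1))])

def fit_rbf(X, y, centers, width):
    """Fit the output weights of a Gaussian RBF network by least squares.

    X: (n_samples, n_inputs) process input data
    y: (n_samples,) measured output at one sensor location
    centers: (n_centers, n_inputs) fixed RBF centers, assumed chosen
    beforehand (e.g. on a grid or by clustering -- an assumption here).
    """
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```

Because the basis functions are fixed, training reduces to one linear least-squares solve, which is what makes such empirical models cheap compared with online PDE solution.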
Abstract: Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination, observation angles, and variation in atmospheric effects.
This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e. the subset of non-changing pixels, using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
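A common form of empirical relative normalization, consistent with the statistics reported (R2 and MSE over invariant pixels), is a per-band linear gain/offset fit; the sketch below assumes the non-changing pixel mask has already been found, whereas the paper derives it with a frequency-based correlation technique:

```python
import numpy as np

def relative_normalize(subject, reference, invariant_mask):
    """Linear relative radiometric normalization of one band.

    subject, reference: 2-D arrays of the same band from two dates.
    invariant_mask: boolean array marking the non-changing pixels
    (given as an input here; finding it is the paper's contribution).
    Returns the normalized subject band plus (R2, MSE) quality
    statistics computed over the invariant pixels.
    """
    x = subject[invariant_mask].astype(float)
    y = reference[invariant_mask].astype(float)
    # Least-squares gain/offset mapping subject -> reference.
    gain, offset = np.polyfit(x, y, 1)
    normalized = gain * subject + offset
    fit = gain * x + offset
    ss_res = ((y - fit) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    mse = ((y - fit) ** 2).mean()
    return normalized, r2, mse
```

An R2 near 1 and an MSE near 0 over the invariant pixels indicate a successful normalization, which is exactly how the quality is assessed above.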
Abstract: There are reports of gas and oil well fires due to different accidents. Many different methods are used for firefighting in the gas and oil industry. Traditional fire-extinguishing techniques face many problems: they are usually time consuming, need a great deal of equipment, cause damage to facilities, and create health and environmental problems. This article proposes an innovative approach to fire-extinguishing techniques in the oil and gas industry, especially applicable to burning oil wells located offshore. Fire extinguishment employing a turbojet is a novel approach that can help extinguish the fire in a short period of time. Divergent and convergent turbojets modeled at laboratory scale, along with a high-pressure flame, were used. Different experiments were conducted to determine the relationship between the output discharges of the turbojet nozzle and the oil wells. The results were correlated, and the relationships between the dimensionless parameters of flame and fire-extinguishment distances, as well as the output discharges of the turbojet and the oil wells at specified distances, are demonstrated by specific curves.
Abstract: In this paper we propose a new approach for flexible document categorization according to the document type or genre instead of topic. Our approach implements two homogeneous classifiers: a contextual classifier and a logical classifier. The contextual classifier is based on the document URL, whereas the logical classifier uses the logical structure of the document to perform the categorization. The final categorization is obtained by combining the contextual and logical categorizations. In our approach, each document is assigned to all predefined categories with different membership degrees. Our experiments demonstrate that our approach performs better than other genre categorization approaches.
Abstract: This paper proposes a method that discovers sequential patterns corresponding to users' interests from sequential data. The method expresses the interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates these subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use constraint patterns.
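The abstract does not define constraint patterns formally; one minimal reading, with an illustrative item schema that is not taken from the paper, is a list of attribute predicates matched greedily as a subsequence of the data:

```python
def matches_constraint(sequence, constraint):
    """Check whether a sequence of items (attribute dicts) contains
    the constraint pattern as a subsequence.

    constraint: list of predicates on item attributes, e.g.
    lambda item: item["change"] > 0 -- the predicates can relate
    any attributes of the items, echoing the abstract's attribute
    relationships. (The dict-based item schema is illustrative.)
    """
    i = 0
    for item in sequence:
        if i < len(constraint) and constraint[i](item):
            i += 1          # this item satisfies the next subpattern
    return i == len(constraint)
```

A mining algorithm would evaluate such subpattern predicates while growing candidate patterns, pruning candidates whose prefix already fails a constraint subpattern.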