Abstract: This paper presents a new histogram equalization scheme to enhance contrast in regions where pixels have similar intensities. Conventional global equalization schemes over-equalize such regions, producing overly bright or dark pixels, while local equalization schemes introduce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within a limited extent determined by its mean and variance. The final image is formed as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operation on each sub-histogram, over-equalization is eliminated. Moreover, the resulting image does not lose feature information in low-density histogram regions, since these regions are equalized separately. The paper also describes how the segmentation points in the histogram are determined. The proposed algorithm has been tested on more than 100 images of varying contrast, and the results are compared with conventional approaches to demonstrate its superiority.
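The sub-histogram idea described above can be sketched in a few lines. The following Python fragment is an illustration only, not the authors' exact scheme (the mean/variance-based equalization limits and the weighted recombination are omitted): it splits the histogram at one brightness level and equalizes each side strictly within its own intensity range, so pixels never cross the segmentation point.

```python
import numpy as np

def sub_histogram_equalize(img, split=None):
    """Equalize an 8-bit grayscale image by segmenting its histogram at a
    brightness threshold and equalizing each sub-histogram only within its
    own intensity range (a bi-histogram-style sketch, not the paper's method)."""
    if split is None:
        split = int(img.mean())          # segment the histogram at the mean
    out = np.empty_like(img)
    for lo, hi in ((0, split), (split + 1, 255)):
        mask = (img >= lo) & (img <= hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / hist.sum()
        # map each sub-range back onto itself, so pixels never cross 'split'
        out[mask] = lo + np.round(cdf[img[mask] - lo] * (hi - lo)).astype(img.dtype)
    return out
```

Because each side is remapped onto its own range, the over-equalization of a global scheme (dark pixels pushed bright, or vice versa) cannot occur.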
Abstract: Power transformers are among the most important and expensive pieces of equipment in electric power systems; consequently, transformer protection is an essential part of system protection. This paper presents a new method for locating transformer winding faults such as turn-to-turn, turn-to-core, turn-to-transformer-body, turn-to-earth, and high-voltage-winding-to-low-voltage-winding faults. In this study, the current and voltage signals at the input and output terminals of the transformer are measured, and the Fourier transform of the measured signals together with harmonic analysis determines the fault's location.
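The signal-processing step the abstract describes (Fourier-transforming the terminal signals and examining their harmonics) can be illustrated as follows; the mapping from harmonic content to a specific fault location is the paper's contribution and is not reproduced here, and all function and parameter names below are ours.

```python
import numpy as np

def harmonic_magnitudes(signal, fs, f0, n_harmonics=5):
    """Return the magnitudes of the fundamental and its first harmonics from a
    uniformly sampled terminal current/voltage waveform.
    fs: sampling rate in Hz; f0: fundamental frequency (e.g. 50 or 60 Hz)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n   # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mags = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))        # nearest FFT bin
        mags.append(spectrum[idx])
    return mags
```

A fault-location scheme would then compare these harmonic magnitudes against signatures for the different fault types.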
Abstract: Automatic reusability appraisal helps in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. However, the issue of how to identify reusable components in existing systems has remained relatively unexplored. In this work, structural attributes of software components are captured using software metrics, and software quality is inferred by different neural-network-based approaches taking the metric values as input. The computed reusability value makes it possible to identify good-quality code automatically. The reusability values obtained are found to be close to the manual analysis traditionally performed by programmers or repository managers, so the developed system can be used to enhance the productivity and quality of software development.
Abstract: Hardware realization of a Neural Network (NN) depends to a large extent on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in implementing a multi-input neuron with linear/nonlinear excitation functions on an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL code developed is tested on a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table (LUT) method is used, and the problems involved in using an LUT for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed obtained with an LUT for a neuron are reported. A generalized formula is also derived for a multi-input neuron that allows the total resource requirement and achievable speed for a given multilayer neural network to be estimated approximately, helping the designer choose the FPGA capacity for a given application. Using the proposed implementation method, a neural-network-based application, namely a space vector modulator for a vector-controlled drive, is presented.
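The lookup-table approach to the nonlinear excitation function can be sketched in software. The table size and fixed-point format below (256 entries, Q0.8 output) are illustrative assumptions, not the paper's actual VHDL design; the point is that an FPGA stores precomputed activation values in block RAM and replaces the expensive exponential with an address lookup.

```python
import numpy as np

FRAC_BITS = 8  # Q0.8 output format -- an assumed precision, not the paper's

def build_sigmoid_lut(n_entries=256, x_range=8.0):
    """Precompute a fixed-point sigmoid table covering [-x_range, x_range)."""
    x = np.linspace(-x_range, x_range, n_entries, endpoint=False)
    y = 1.0 / (1.0 + np.exp(-x))
    return np.round(y * (1 << FRAC_BITS)).astype(np.uint16)

def lut_sigmoid(x, lut, x_range=8.0):
    """Evaluate sigmoid(x) by table lookup; saturate outside the covered range."""
    addr = int((x + x_range) / (2 * x_range) * len(lut))
    addr = min(max(addr, 0), len(lut) - 1)   # clamp the address (saturation)
    return lut[addr] / (1 << FRAC_BITS)
```

The trade-off the abstract mentions is visible here: a larger table costs more block RAM but reduces quantization error in the activation.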
Abstract: This paper presents an analysis of the relationship between business and information technology (IT) in business process reengineering (BPR). Firm-level data collected from 258 Japanese companies were analyzed using structural equation modeling. The analysis aimed to illuminate the success factors for achieving effective BPR, focusing on management factors (including organizational factors) and the management methods implemented (e.g., balanced scorecard, internal control, etc.). The results can contribute to achieving effective BPR by showing which tasks and environments should be focused on.
Abstract: With the exponential rise in the number of multimedia applications available, the best-effort service provided by the Internet today is insufficient. Researchers have been working on new architectures such as the Next Generation Network (NGN), which, by definition, will ensure Quality of Service (QoS) in an all-IP-based network [1]. For this approach to become a reality, bandwidth must be reserved per application per user. WiMAX (Worldwide Interoperability for Microwave Access) is a wireless communication technology with predefined levels of QoS that can be provided to the user [4]. IPv6 was created as the successor to IPv4 and resolves issues such as the availability of IP addresses and QoS. This paper provides a design that uses the power of WiMAX as a Network Service Provider (NSP) for NGN using IPv6. The use of the Traffic Class (TC) and Flow Label (FL) fields of IPv6 for making QoS requests and grants is explained [6], [7]; using these fields, processing time is reduced and routing is simplified. We also define the functioning of the ASN gateway and the NGN gateway (NGNG), which are edge-node interfaces in the NGN-WiMAX design. These gateways ensure QoS management through built-in functions and through certain physical resources and networking capabilities.
Abstract: Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant loss of image quality. The discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, applied over regions of interest (ROIs), is based on selecting coefficients that belong to different transforms depending on the region. This method allows: (1) coding with multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder is required. The method was applied to different types of images, and the results show better performance for the selected regions than when standard image coding methods were employed for the whole image. We believe this method is an excellent tool for future image compression research, mainly for images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown, and it is shown that the Hartley and cosine transform kernels give better performance than the other models considered.
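As a point of reference for the cosine kernel mentioned above, here is a minimal orthonormal 8×8 block DCT of the kind underlying JPEG-style coders. This is a generic illustration, not the paper's reversible blockade transform or its Hartley kernel.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix for n-point blocks."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)            # scale the DC row for orthonormality
    return C * np.sqrt(2.0 / n)

def block_dct2(block):
    """Forward 2D DCT of a square block (separable: rows then columns)."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def block_idct2(coeffs):
    """Inverse 2D DCT; exact because the basis is orthonormal."""
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C
```

A coder keeps the few large coefficients per block (the DC term and low frequencies) and quantizes or drops the rest, which is where the compression comes from.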
Abstract: Iron in groundwater is one of the problems that render water unsuitable for drinking; concentrations above 0.3 mg/L are common in groundwater. The conventional removal method is precipitation under oxic conditions. In this study, iron removal under anaerobic conditions was examined in batch experiments. The process involved purging groundwater samples with H2S to form iron sulfide. Removal of up to 83% was achieved for a 1 mg/L iron solution; the removal efficiency dropped to 82% and 75% for higher initial iron concentrations of 3.55 and 5.01 mg/L, respectively. The average residual sulfide concentration in the water after the process was 25 µg/L, and the Eh level during the process was -272 mV. The removal process was found to follow first-order kinetics with an average rate constant of 4.52 × 10⁻³, and the half-life for the concentrations to fall from their initial values was 157 minutes.
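The first-order kinetics quoted above imply simple exponential decay of the iron concentration. A minimal sketch, assuming the rate constant is in min⁻¹ (the abstract does not state its units): with the quoted average k, ln 2 / k ≈ 153 min, the same order as the ~157 min half-life reported.

```python
import math

def remaining_fraction(k, t):
    """First-order decay: C(t)/C0 = exp(-k * t)."""
    return math.exp(-k * t)

def half_life(k):
    """Time for the concentration to fall to half its initial value: ln 2 / k."""
    return math.log(2) / k
```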
Abstract: Patients with diabetes are susceptible to chronic foot
wounds which may be difficult to manage and slow to heal.
Diagnosis and treatment currently rely on the subjective judgement of
experienced professionals. An objective method of tissue assessment
is required. In this paper, a data fusion approach was taken to wound
tissue classification. The supervised Maximum Likelihood and
unsupervised Multi-Modal Expectation Maximisation algorithms
were used to classify tissues within simulated wound models by
weighting the contributions of both colour and 3D depth information.
It was found that, at low weightings, depth information could show
significant improvements in classification accuracy when compared
to classification by colour alone, particularly when using the
maximum likelihood method. However, larger weightings were
found to have an entirely negative effect on accuracy.
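The weighted colour/depth fusion idea above can be sketched as a Gaussian maximum-likelihood classifier whose depth log-likelihood is scaled by a weight before being added to the colour term. This is a generic illustration with assumed diagonal covariances and made-up parameter names, not the authors' exact models.

```python
import numpy as np

def fit_gaussians(X, y):
    """Per-class mean and diagonal variance for a maximum-likelihood classifier."""
    return {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-9)
            for c in np.unique(y)}

def classify(x, models, n_colour, w_depth=0.2):
    """Score each class by a weighted sum of colour and depth log-likelihoods.
    The first n_colour features are colour channels; the rest are depth."""
    def loglik(v, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (v - mu) ** 2 / var)
    best, best_score = None, -np.inf
    for c, (mu, var) in models.items():
        score = (loglik(x[:n_colour], mu[:n_colour], var[:n_colour])
                 + w_depth * loglik(x[n_colour:], mu[n_colour:], var[n_colour:]))
        if score > best_score:
            best, best_score = c, score
    return best
```

Setting `w_depth` low lets depth break ties without swamping the colour cue, mirroring the finding that small depth weightings helped while large ones hurt.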
Abstract: For a given problem, finding an efficient algorithm has long been a matter of study. However, an alternative, orthogonal approach exists, called reduction: for a given problem, the reduction approach studies how to convert the original problem into subproblems. This paper proposes a formal modeling language to support this reduction approach so that solvers can be built quickly. We show three examples from the wide area of learning problems. The benefit is fast prototyping of algorithms for a given new problem. Note that our formal modeling language is not intended to provide an efficient notation for data mining applications, but to assist designers who develop solvers in machine learning.
Abstract: In this paper, we present a cost-effective wireless distributed load shedding system for non-emergency scenarios. In power transformer locations where a SCADA system cannot be used, the proposed solution provides a reasonable alternative that combines microcontrollers with the existing GSM infrastructure to send early-warning SMS messages advising users to proactively reduce their power consumption before system capacity is reached and a systematic power shutdown takes place. A novel communication protocol and message set have been devised to handle the messaging between the transformer sites, where the microcontrollers are located and the measurements take place, and the central processing site, where the database server is hosted. Moreover, the system sends warning messages to the end-users' mobile devices, which are used as communication terminals. The system has been implemented and validated through various experiments.
Abstract: In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating the membership degrees of the training data and the degree values of a test web page. Six measures are used and compared in this study: the Einstein, Algebraic, Hamacher, MinMax, special-case fuzzy, and Bounded Difference approaches. These measures are applied and compared using 50 different Arabic web pages. The Einstein measure gave the best performance among the measures. An analysis of these measures and concluding remarks are also given.
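Most of the measures listed above correspond to standard fuzzy t-norms with well-known closed forms (the special-case fuzzy measure is paper-specific and omitted here). A minimal sketch of the standard definitions, which may differ from how the paper aggregates them over term-category memberships:

```python
def algebraic(a, b):
    """Algebraic product t-norm: a * b."""
    return a * b

def einstein(a, b):
    """Einstein product t-norm: ab / (2 - (a + b - ab))."""
    return (a * b) / (2 - (a + b - a * b))

def hamacher(a, b):
    """Hamacher product t-norm: ab / (a + b - ab), with T(0, 0) = 0."""
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def minmax(a, b):
    """Minimum t-norm: min(a, b)."""
    return min(a, b)

def bounded_difference(a, b):
    """Lukasiewicz (bounded difference) t-norm: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)
```

All five satisfy the t-norm boundary condition T(a, 1) = a; they differ in how sharply they penalize partial memberships, which is why they rank differently on the same data.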
Abstract: In this paper, a fast high-resolution range profile (HRRP) synthesis algorithm called orthogonal matching pursuit with a sensing dictionary (OMP-SD) is proposed. It formulates traditional HRRP synthesis as a sparse approximation problem over a redundant dictionary. Because it exploits the prior knowledge that the synthetic range profiles (SRPs) of targets are sparse, SRP synthesis can be accomplished even in the presence of data loss. Moreover, introducing the sensing dictionary (SD) reduces the computational complexity from O(MNDK) flops for OMP to O(M(N + D)K) flops for OMP-SD. Simulation experiments illustrate its advantages in both additive white Gaussian noise (AWGN) and noiseless situations.
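Plain OMP, the baseline the complexity figures above refer to, can be sketched as follows. OMP-SD differs in that the atom-selection step correlates the residual against a separately designed sensing dictionary rather than against D itself, which is where the complexity saving comes from; this sketch shows the baseline only.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit the selected atoms by least
    squares.  D: (m, n) dictionary with unit-norm columns; y: (m,) signal;
    k: sparsity level (number of atoms to select)."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # selection step -- OMP-SD would use a sensing matrix S.T here instead
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```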
Abstract: Today's Information and Knowledge Society has
placed new demands on education and a new paradigm of education
is required. Learning, facilitated by educational systems and the
pedagogic process, is globally undergoing dramatic changes. The aim
of this paper is the development of a simple Instructional Design tool
for E-Learning, named IDEL (Instructional Design for Electronic
Learning), that provides the educators with facilities to create their
own courses with the essential educational material and manage
communication with students. It offers flexibility in the mode of learning and provides ease of use and reusability of resources. IDEL is a web-based Instructional System and is designed
to facilitate course design process in accordance with the ADDIE
model and the instructional design principles with emphasis placed
on the use of technology enhanced learning. An example case of
using the ADDIE model to systematically develop a course and its
implementation with the aid of IDEL is given and some results from
student evaluation of the tool and the course are reported.
Abstract: Business process automation is an important task in enterprise software development. The requirements for processing acceleration and the level of automation inherently differ from one organization to another. We present a methodology and system for automating business process management system (BPMS) architecture through multi-agent collaboration based on SOA. Design-layer processes are modeled in a semantic markup language for web services. At the core of our system is the consideration of certain types of human tasks for further automation across multiple platform environments. An improved abnormality-processing model for automating BPMS architecture through multi-agent collaboration based on SOA is introduced. To validate the efficiency of process automation, an application based on an educational knowledge base is also described.
Abstract: This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, not a precise motion model. The observation model we have developed avoids color-filtering the entire image; this, together with the Monte Carlo techniques inside the particle filter, provides real-time performance. Experiments with two real cameras are presented and lessons learned are discussed. The approach scales easily to more than two cameras and to new sensor cues.
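The predict-weight-resample cycle at the heart of a particle filter can be sketched generically. The observation model below is a placeholder likelihood function, not the paper's colour cue, and the random-walk motion model stands in for whatever diffusion the tracker actually uses.

```python
import numpy as np

def particle_filter_step(particles, weights, observe, motion_std, rng):
    """One predict-weight-resample cycle of a particle filter.
    particles: (n, d) state hypotheses; observe(P) returns one likelihood
    per particle (e.g. from a colour cue); motion_std: diffusion scale."""
    # predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: reweight by the observation likelihood
    weights = weights * observe(particles)
    weights = weights / weights.sum()
    # resample: draw particles proportionally to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

The state estimate at each step is typically the weighted mean of the particles; no triangulation is needed because the observation likelihood itself carries the geometric information.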
Abstract: This paper investigates the application of the Particle Swarm Optimization (PSO) technique to the coordinated design of a Power System Stabilizer (PSS) and a Thyristor Controlled Series Compensator (TCSC)-based controller to enhance power system stability. The design problem of the PSS and TCSC-based controllers is formulated as a time-domain optimization problem, and the PSO algorithm is employed to search for the optimal controller parameters. By minimizing a time-domain objective function involving the deviation in the oscillatory rotor speed of the generator, the stability performance of the system is improved. To compare the capabilities of the PSS and the TCSC-based controller, both are first designed independently and then in a coordinated manner. The proposed controllers are tested on a weakly connected power system. Eigenvalue analysis and non-linear simulation results are presented to show the effectiveness of the coordinated design over the individual designs. The simulation results show that the proposed controllers effectively damp low-frequency oscillations resulting from various small disturbances, such as changes in mechanical power input and reference voltage setting.
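A minimal PSO loop of the kind used to search a controller-parameter space might look as follows. The objective here is a stand-in, since the paper's time-domain objective requires a power-system simulation; the inertia and acceleration constants (w, c1, c2) are common textbook defaults, not the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a box-constrained objective.
    bounds: (lo, hi) arrays delimiting the search box; returns the best
    position found (e.g. a controller-parameter vector)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)                             # stay in bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g
```

In the paper's setting, `objective` would run a time-domain simulation and return the integrated rotor-speed deviation for the candidate PSS/TCSC parameters.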
Abstract: This research aims to describe the application of robust regression and its advantages over the least squares method in analyzing financial data. To this end, the relationship between earnings per share, book value of equity per share, and share price (the price model), and between earnings per share, the annual change in earnings per share, and stock return (the return model), is examined using both robust and least squares regressions, and the outcomes are compared. The comparison shows that robust regression provides a better and more realistic analysis by eliminating or reducing the contribution of outliers and influential data. Therefore, robust regression is recommended for obtaining more precise results in financial data analysis.
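The abstract does not name the robust estimator used. One common choice is Huber M-estimation fitted by iteratively reweighted least squares (IRLS), sketched below as an illustration; delta = 1.345 is the conventional tuning constant for 95% efficiency under Gaussian errors.

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Robust linear regression via IRLS with Huber weights: residuals beyond
    delta robust-scale units are down-weighted, so outliers pull the fit far
    less than under ordinary least squares.  Returns [intercept, slopes...]."""
    X1 = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]    # OLS starting point
    for _ in range(n_iter):
        r = y - X1 @ beta
        # robust scale from the median absolute deviation
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.abs(r / scale)
        w = np.where(u <= delta, 1.0, delta / u)    # Huber weights
        W = np.sqrt(w)[:, None]
        beta = np.linalg.lstsq(X1 * W, y * W.ravel(), rcond=None)[0]
    return beta
```

On contaminated data the Huber fit stays close to the clean-data line while OLS is dragged toward the outliers, which is exactly the effect the comparison in the paper exploits.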
Abstract: Workflow Management Systems (WfMS) allow organizations to streamline and automate business processes and reengineer their structure. One important requirement for this type of system is the management and computation of the Quality of Service (QoS) of processes and workflows. Currently, a range of web process and workflow languages exists, and each language can be characterized by the set of patterns it supports. Developing and implementing a suitable, generic algorithm to compute the QoS of processes designed in different languages is a difficult task, because some patterns are specific to particular process languages and new patterns may be introduced in future versions of a language. In this paper, we describe an adaptive algorithm implemented to cope with these two problems; it is called adaptive because it can be changed dynamically as the patterns of a process language change.
Abstract: Malay Folk Literature in early childhood education
served as an important agent in child development that involved
emotional, thinking and language aspects. To date, not much research has been carried out in Malaysia, particularly on the teaching and learning aspects, nor has there been an effort to publish "big books". Hence this article will discuss the stance taken by
university undergraduate students, teachers and parents in evaluating
Malay Folk Literature in early childhood education to be used as big
books. The data collated and analyzed were taken from 646
respondents comprising 347 undergraduates and 299 teachers. Results
of the study indicated that Malay Folk Literature can be absorbed into
teaching and learning for early childhood with a mean of 4.25 while it
can be in big books with a mean of 4.14. Meanwhile the highest mean
value required for placing Malay Folk Literature genre as big books in
early childhood education rests on exemplary stories for
undergraduates with mean of 4.47; animal fables for teachers with a
mean of 4.38. The lowest mean value of 3.57 is given to lipurlara
stories. The most popular Malay Folk Literature found suitable for young children is Sang Kancil and the Crocodile, followed by Bawang
Putih Bawang Merah. Pak Padir, Legends of Mahsuri, Origin of
Malacca, and Origin of Rainbow are among the popular stories as
well. Overall the undergraduates show a positive attitude toward all
the items compared to the teachers. The t-test analysis revealed a non-significant difference between the undergraduate students and teachers for all the items on the teaching and learning of Malay Folk Literature.