Abstract: In this paper, we propose an easily computable proximity index for predicting voltage collapse at a load bus using only measured values of the bus voltage and power. From these measurements, a fourth-order polynomial is fitted using LES estimation algorithms. The sum of the absolute values of the polynomial coefficients serves as an indicator of the critical bus. We demonstrate the applicability of the proposed method on a 6-bus test system. The results verify its applicability, as well as its accuracy and simplicity. This indicator makes it possible to predict voltage instability or the proximity of a collapse. Results obtained from the PV curves are compared with corresponding values from QV curves and are observed to be in close agreement.
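As a rough illustration of the computation described above, the sketch below fits a fourth-order polynomial to measured bus quantities by ordinary least squares and sums the absolute coefficients into an index; the sample data and the choice of fitting V against P are assumptions for illustration, not the authors' exact formulation.

import numpy as np

# Hypothetical measured bus power (p.u.) and voltage (p.u.) samples.
P = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
V = np.array([1.02, 1.00, 0.97, 0.93, 0.87, 0.78])

# Fourth-order polynomial fitted by least squares (np.polyfit minimizes
# the squared error, standing in for the paper's LES estimation).
coeffs = np.polyfit(P, V, deg=4)

# Proximity index: sum of the absolute values of the coefficients.
index = np.abs(coeffs).sum()
print(f"collapse proximity index = {index:.3f}")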
Abstract: To improve the characterization of blood flows, we propose a method based on spectral analysis of Doppler signals. Our calculation yields a reasonable approximation; the error in the estimated speed reflects the fact that speed depends on the flow conditions as well as on measurement parameters such as the bore and the volume flow rate. Estimating the Doppler signal frequency enables us to determine the maximum Doppler frequency Fd max as well as the maximum flow speed. The results show that the difference between the estimated frequencies (Fde) and the Doppler frequencies (Fd) is small; this variation tends to zero for large θ angles and is proportional to the diameter D. The description of the friction velocity and the friction coefficient justifies the error rate obtained.
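The speed estimate presumably relies on the classical Doppler relation Fd = 2 f0 v cos(θ) / c; the short sketch below inverts it to recover the maximum flow speed from the maximum Doppler frequency. The transducer frequency and angle are hypothetical values, not taken from the paper.

import numpy as np

C = 1540.0               # speed of sound in blood/tissue, m/s (standard value)
F0 = 5.0e6               # transducer emission frequency, Hz (hypothetical)
THETA = np.deg2rad(60)   # insonation angle (hypothetical)

def max_flow_speed(fd_max):
    # Invert the classical Doppler relation fd = 2*f0*v*cos(theta)/c.
    return fd_max * C / (2 * F0 * np.cos(THETA))

print(max_flow_speed(3.2e3))  # maximum speed for Fd max = 3.2 kHz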
Abstract: Virtualization-based server consolidation has been
proven to be an ideal technique to solve the server sprawl problem by
consolidating multiple virtualized servers onto a few physical servers
leading to improved resource utilization and return on investment. In
this paper, we address this problem using existing servers, which are heterogeneous and for which IT managers have diverse preferences. Five practical consolidation rules are introduced, and a decision model is proposed to optimally allocate source services to physical target servers while maximizing the average resource utilization and preference value. Our model can be regarded as a multi-objective multi-dimensional bin-packing (MOMDBP) problem with constraints, which is strongly NP-hard. An improved grouping genetic algorithm (GGA) is introduced for the problem. Extensive simulations were performed and the results are reported.
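To make the bin-packing view concrete, the sketch below packs services with two resource dimensions (CPU, memory) onto identical servers with a first-fit-decreasing greedy; this is only a baseline illustration of the MOMDBP structure, and the paper's GGA, consolidation rules and preference values are not reproduced.

def first_fit(services, capacity):
    # services: list of (cpu, mem) demands; capacity: (cpu, mem) per server.
    # Placements are reported in size-sorted order, not input order.
    servers = []     # remaining (cpu, mem) capacity per opened server
    placement = []
    for cpu, mem in sorted(services, reverse=True):
        for i, (c, m) in enumerate(servers):
            if cpu <= c and mem <= m:
                servers[i] = (c - cpu, m - mem)
                placement.append(i)
                break
        else:
            servers.append((capacity[0] - cpu, capacity[1] - mem))
            placement.append(len(servers) - 1)
    return placement, len(servers)

print(first_fit([(0.5, 0.3), (0.4, 0.6), (0.3, 0.2)], (1.0, 1.0)))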
Abstract: To identify discriminative sequence features between exons and introns, a new paradigm, rescaled-range frameshift analysis (RRFA), is proposed. By RRFA, two new sequence features, the frameshift sensitivity (FS) and the accumulative penta-mer complexity (APC), were discovered and further integrated into a larger-scale feature, the persistency in anti-mutation (PAM). Feature-validation experiments were performed on six model organisms to test the discriminative power. All the experimental results strongly support that FS, APC and PAM are distinguishing features between exons and introns. These newly identified sequence features provide new insights into the sequence composition of genes and have great potential to form a new basis for recognizing exon-intron boundaries in gene sequences.
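The paper's FS, APC and PAM features are not defined in the abstract; the sketch below only shows the classical rescaled-range (R/S) statistic that RRFA presumably builds on, applied to a toy numeric encoding of a sequence window. Both the encoding and the connection drawn here are assumptions for illustration.

import numpy as np

def rescaled_range(x):
    # Classical R/S statistic: range of the cumulative mean-adjusted
    # sums divided by the standard deviation. The paper's frameshift
    # variant is not reproduced here.
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

# Placeholder encoding: purines (A, G) -> 1, pyrimidines (C, T) -> 0.
encode = {"A": 1, "G": 1, "C": 0, "T": 0}
window = "ATGGCGTACGTTAGCCATGA"
print(rescaled_range([encode[b] for b in window]))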
Abstract: Searching for similar documents and managing documents are important topics in text mining. One of the most important parts of similar-document research is the process of classifying or clustering the documents. In this study, a similar-document search approach that addresses the case of documents belonging to multiple categories (the multiple-categories problem) has been developed. The proposed method, based on Fuzzy Similarity Classification (FSC), has been compared with the Rocchio algorithm and the naive Bayes method, both widely used in text mining. Empirical results show that the proposed method is quite successful and can be applied effectively. In the second stage, a multiple-categories vector method based on how frequently categories are seen together has been used. Empirical results show that performance is almost doubled when the proposed method is compared with the classical approach.
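For reference, the sketch below shows the Rocchio baseline named above in its simplest form, centroid classification of term-frequency vectors by cosine similarity; the FSC method itself and the multiple-categories vectors are not reproduced, and the toy data is invented.

import numpy as np

def rocchio_train(X, y):
    # Rocchio baseline: one centroid per category (positive examples only).
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def rocchio_predict(centroids, x):
    # Assign the category whose centroid is most cosine-similar to x.
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(centroids, key=lambda c: cos(centroids[c], x))

X = np.array([[3, 0, 1], [2, 1, 0], [0, 4, 2], [1, 3, 3]], dtype=float)
y = np.array([0, 0, 1, 1])  # toy term-frequency vectors and labels
print(rocchio_predict(rocchio_train(X, y), np.array([0.0, 2.0, 2.0])))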
Abstract: Traditional optical networks are gradually evolving towards intelligent optical networks due to the need for faster bandwidth provisioning, protection and restoration of the network, which can be accomplished with devices such as optical switches, add-drop multiplexers and cross-connects. Since dense wavelength division multiplexing forms the physical layer for intelligent optical networking, the role of high-speed all-optical switches is important. This paper analyzes such an ultra-high-speed polymer electro-optic switch. The performance of the 2x2 optical waveguide switch with rectangular, triangular and trapezoidal grating profiles is analyzed with respect to various device parameters. The simulation results show that the trapezoidal grating is the optimized structure, with a coupling length of 81 μm and a switching voltage of 11 V at an operating wavelength of 1550 nm. The switching time for the proposed switch is 0.47 picosecond. This makes the proposed switch an important element in the intelligent optical network.
Abstract: Extensive rainfall disaggregation approaches have been developed and applied in climate change impact studies such as flood risk assessment and urban storm water management. In this study, five rainfall models capable of disaggregating daily rainfall data into hourly data were investigated for the rainfall record at Changi Airport, Singapore. The objectives of this study were (i) to study the temporal characteristics of hourly rainfall in Singapore, and (ii) to evaluate the performance of various disaggregation models. The models used included: (i) the Rectangular Pulse Poisson Model (RPPM), (ii) the Bartlett-Lewis Rectangular Pulse Model (BLRPM), (iii) the Bartlett-Lewis model with 2 cell types (BL2C), (iv) the Bartlett-Lewis Rectangular model with cell depth distribution dependent on duration (BLRD), and (v) the Neyman-Scott Rectangular Pulse Model (NSRPM). All of these models were fitted using hourly rainfall data from 1980 to 2005 obtained from the Changi meteorological station. The study results indicated that a weighting scheme with weights inversely proportional to the variance could deliver more accurate outputs for fitting rainfall patterns in tropical areas, and that BLRPM performed relatively better than the other disaggregation models.
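The weighting scheme the study favors can be read as a weighted least-squares fitting objective over summary statistics; the sketch below shows that reading, with the particular statistics (mean, variance, lag-1 autocovariance) and numbers being illustrative assumptions rather than the study's actual fitting targets.

import numpy as np

def weighted_objective(model_stats, obs_stats, obs_var):
    # Weights inversely proportional to the variance of each observed
    # statistic, matching the weighting scheme described above.
    w = 1.0 / np.asarray(obs_var)
    diff = np.asarray(model_stats) - np.asarray(obs_stats)
    return float(np.sum(w * diff ** 2))

# Hypothetical hourly statistics: mean, variance, lag-1 autocovariance.
print(weighted_objective([0.31, 2.9, 0.82], [0.30, 3.1, 0.90], [0.01, 0.6, 0.2]))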
Abstract: One of the major problems in liberalized power
markets is loss allocation. In this paper, a different method for
allocating transmission losses to pool market participants is
proposed. The proposed method is fundamentally based on the decomposition of the loss function and the current projection concept. The method has been implemented and tested on several networks, and one sample case is summarized in the paper. The results show that the method is comprehensive and fair in allocating the energy losses of a power market to its participants.
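One simple reading of the current projection idea is sketched below for a single branch: each participant's current phasor is projected onto the total branch current, so the shares sum exactly to the branch loss R|I|^2. This is an illustrative interpretation, not the paper's full decomposition of the network loss function.

import numpy as np

def allocate_branch_loss(R, currents):
    # Project each participant's current phasor onto the total branch
    # current; the resulting shares sum exactly to R*|I|^2.
    I = sum(currents)
    return [R * np.real(Ik * np.conj(I)) for Ik in currents]

currents = [0.8 + 0.2j, 0.5 - 0.1j]  # hypothetical participant currents, p.u.
print(allocate_branch_loss(0.05, currents))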
Abstract: Learning programming is difficult for many learners. Some studies have found that the main difficulty relates to cognitive load. Cognitive overload happens in programming because the subject is intrinsically demanding on working memory; it stems from the complexity of the subject itself. The problem is made worse by the poor instructional design methodology used in the teaching and learning process. Various efforts have been proposed to reduce the cognitive load, e.g. visualization software and the part-program method. Many computer-based systems have also been tried to tackle the problem. However, little progress has been made in alleviating it, and more has to be done to overcome this hurdle. This research attempts to understand how cognitive load can be managed so as to reduce the problem of overloading. We propose a mechanism to measure the cognitive load during the pre-instruction, post-instruction and in-instruction stages of learning. This mechanism is used to guide the instruction: as the load changes, the instruction adapts itself to remain cognitively viable. The mechanism could be incorporated as a subdomain in the student model of various computer-based instructional systems to facilitate the learning of programming.
Abstract: A synchronous network-on-chip using wormhole packet switching
and supporting guaranteed-completion best-effort with low-priority (LP)
and high-priority (HP) wormhole packet delivery service is presented in
this paper. Both our proposed LP and HP message services deliver a good
quality of service in terms of lossless packet completion and in-order message data delivery. However, the LP message service does not guarantee a minimal completion bound. The HP packets will use 100% of the bandwidth of their reserved links if they are injected from the source node at the maximum injection rate. Hence, the HP service is suitable for small messages (less than a hundred bytes); otherwise, other HP and LP messages that also require those links will experience relatively high latency depending on the size of the HP message. The LP packets are routed using a minimal adaptive
routing, while the HP packets are routed using a non-minimal adaptive routing
algorithm. Therefore, an additional 3-bit field, identifying the packet type,
is introduced in their packet headers to classify and to determine the type
of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
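The 3-bit packet-type field can be pictured as a simple header bitfield; the sketch below shows one possible encoding. The bit positions, flit width and type codes are illustrative assumptions, not the prototype's actual header layout.

TYPE_BITS = 3
TYPE_SHIFT = 29  # put the type field in the top bits of a 32-bit flit

def make_header(pkt_type, payload):
    # Pack a 3-bit packet type above a 29-bit payload field.
    assert 0 <= pkt_type < (1 << TYPE_BITS)
    return (pkt_type << TYPE_SHIFT) | (payload & ((1 << TYPE_SHIFT) - 1))

def packet_type(header):
    return header >> TYPE_SHIFT

HP, LP = 0b001, 0b000  # hypothetical codes for the two services
h = make_header(HP, 0x1ABCD)
print(packet_type(h) == HP)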
Abstract: In the last decade, energy-based control theory has made a significant breakthrough in dealing with underactuated mechanical systems through two successful and similar tools, controlled Lagrangians and controlled Hamiltonians (IDA-PBC). However, because of the complexity of these tools, successful case studies are lacking, in particular MIMO cases. The seminal theoretical paper on controlled Lagrangians by Bloch and his colleagues presented a benchmark example, a 4-d.o.f. underactuated pendulum on a cart, but a detailed and complete design was omitted. To fill this gap, this note revisits their design idea by presenting explicit control functions for a similar device motivated by a vector-thrust body hovering in the air. To the best of our knowledge, this system is the first MIMO underactuated example stabilized using energy-based tools along the lines of the original design idea. Some observations are given based on computer simulations.
Abstract: Line sleeves on power transmission lines connect two conductors during line construction. However, line sleeves sometimes cause transmission line breakdowns because they are deteriorated and decayed by acid rain. When a transmission line breaks, the economic loss is huge. Therefore, the line sleeves on power transmission lines should be inspected periodically to prevent power failures. In this paper, the Korea Electric Power Research Institute reviews several robots for inspecting line status and proposes a robot that inspects line sleeves by measuring the magnetic field around them. The developed inspection tool can reliably move along a transmission line and overcome several obstacles on it. The developed system was also applied to a power transmission line, which verified the efficiency of the robot.
Abstract: Data mining is the extraction of knowledge from the large sets of data generated by various data processing activities. Frequent pattern mining is a very important task in data mining. Previous approaches to generating frequent sets generally adopt candidate generation and pruning techniques to achieve the desired objective. This paper shows how the different approaches achieve the objective of frequent pattern mining, along with the complexity each requires to perform the job. This paper also examines a hardware approach based on cache coherence to improve the efficiency of the process. Data mining is helpful in building support systems for Management, Bioinformatics, Biotechnology, Medical Science, Statistics, Mathematics, Banking, Networking and other computer-related applications. This paper proposes the use of both the upward and downward closure properties for the extraction of frequent itemsets, which reduces the total number of scans required for the generation of candidate sets.
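As background for the closure properties mentioned above, the sketch below shows a level-wise Apriori pass that uses the downward-closure property to prune candidates; the paper's combined upward/downward scheme and the cache-coherence hardware support are not reproduced here.

from itertools import combinations

def frequent_itemsets(transactions, minsup):
    # Level-wise search: every subset of a frequent itemset must itself
    # be frequent (downward closure), so candidates with an infrequent
    # subset are pruned before counting.
    items = {frozenset([i]) for t in transactions for i in t}
    level = {s for s in items if sum(s <= t for t in transactions) >= minsup}
    result, k = set(level), 2
    while level:
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in level for s in combinations(c, k - 1))}
        level = {c for c in candidates
                 if sum(c <= t for t in transactions) >= minsup}
        result |= level
        k += 1
    return result

T = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
print(frequent_itemsets(T, minsup=3))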
Abstract: We propose a method for the discrimination and classification of ovarian tissue as benign, malignant or normal using independent component analysis and neural networks. The method was tested on a set of proteomic patterns from a database using, among others, radial basis function neural networks. The best performance was obtained with probabilistic neural networks, resulting in a 99% success rate, with 98% specificity and 100% sensitivity.
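A minimal pipeline in that spirit is sketched below with scikit-learn: ICA feature extraction followed by a neural network on synthetic stand-in data. scikit-learn has no probabilistic neural network, so an MLP stands in for the PNN/RBF networks the paper used, and none of the data or settings come from the paper.

from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the proteomic pattern set (3 tissue classes).
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)

clf = make_pipeline(FastICA(n_components=10, random_state=0),
                    MLPClassifier(max_iter=1000, random_state=0))
clf.fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))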
Abstract: In this paper, we propose an effective relay communication scheme for layered video transmission as an alternative that makes the most of limited resources in a wireless communication network where losses often occur. Relaying brings stable multimedia services to end clients, compared to multiple description coding (MDC). Also, retransmitting only parity data for one or more video layers, produced by a channel coder, from the relay device to the end client is paramount to robustness against loss. Using these methods in resource-constrained environments, such as real-time user created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment. Minimal services are also possible. The mathematical analysis shows that the proposed method reduces the GOP loss rate compared to MDC and a raptor code without relay: the GOP loss rate is about zero, while MDC and the raptor code without relay have GOP loss rates of 36% and 70%, respectively, at a 10% frame loss rate.
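For intuition about GOP loss under frame losses, the sketch below evaluates an idealized erasure-coding model in which a GOP spread over n symbols survives if any k of them arrive; the paper's raptor and relay analysis is more detailed, so the model and numbers here are illustrative assumptions only.

from math import comb

def gop_loss_prob(n, k, p):
    # GOP is lost when fewer than k of its n symbols arrive, with
    # independent symbol loss probability p (idealized erasure code).
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k))

# e.g. 10% loss rate, 20 symbols of which any 16 suffice to decode
print(gop_loss_prob(20, 16, 0.10))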
Abstract: This paper considers inference under progressive type II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When the two parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator has been obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
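For reference, the compound Rayleigh lifetime model referred to above is commonly parameterized as below, with shape \alpha and scale \beta; the paper's own parameterization may differ slightly.

f(t) = 2\alpha\beta^{\alpha}\, t\, (\beta + t^{2})^{-(\alpha + 1)}, \quad t > 0,
\qquad
R(t) = \left( \frac{\beta}{\beta + t^{2}} \right)^{\alpha},
\qquad
h(t) = \frac{f(t)}{R(t)} = \frac{2\alpha t}{\beta + t^{2}}.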
Abstract: Medical applications are among the most impactful
areas of microrobotics. The ultimate goal of medical microrobots is
to reach currently inaccessible areas of the human body and carry out
a host of complex operations such as minimally invasive surgery
(MIS), highly localized drug delivery, and screening for diseases at
their very early stages. Miniature, safe and efficient propulsion
systems hold the key to maturing this technology but they pose
significant challenges. A new type of propulsion developed recently uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms. Efficient methods for designing this type of propulsion system are lacking. The goal of this paper is to fill this gap; to that end, a numerical strategy is proposed to design multi-flagella propulsion systems. The strategy is based on the implementation of regularized stokeslet and rotlet theory, resistive force theory (RFT) and a new "local corrected velocity" approach. The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the robot's net forces and moments are considered. Then a multi-layer perceptron artificial neural network is designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method was applied successfully to a sample configuration, and useful demonstrative results were obtained.
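The regularized-stokeslet building block the strategy rests on can be sketched directly; the function below evaluates the standard Cortez (2001) kernel for the velocity induced by a single regularized point force. The paper's rotlet terms and "local corrected velocity" modification are not reproduced, and the sample values are invented.

import numpy as np

def regularized_stokeslet_velocity(x, x0, f, eps, mu=1.0):
    # Velocity at x due to a point force f regularized at x0:
    # u = [f*(r^2 + 2*eps^2) + (f.r) r] / (8*pi*mu*(r^2 + eps^2)^(3/2)).
    r = np.asarray(x, dtype=float) - np.asarray(x0, dtype=float)
    r2 = r @ r
    denom = 8 * np.pi * mu * (r2 + eps ** 2) ** 1.5
    return (f * (r2 + 2 * eps ** 2) + (f @ r) * r) / denom

f = np.array([0.0, 0.0, 1.0])  # hypothetical force exerted by one flagellum
print(regularized_stokeslet_velocity([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], f, eps=0.05))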
Abstract: Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymous dataset may have higher identification risks than the others. Individuals are more concerned about risks higher than the average and are more interested in knowing whether they may be under higher risk. The notion of overall risk in the above measurement method does not indicate whether some of the involved entities have a higher identity disclosure risk than the others. In this paper, we introduce an identity disclosure risk measurement method that not only conveys the overall risk, but also indicates whether some of the members are at higher risk than the others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records that have a risk value higher than the average, and how much larger the higher risk values are compared to the average. We have analyzed the disclosure risks for different disclosure control techniques applied to original microdata and present the results.
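One way to combine the three quantities the method is said to use is sketched below; the combination rule (a simple product) is an assumption for illustration, since the abstract does not give the paper's exact formula.

import numpy as np

def overall_risk(risks):
    # Combine the mean individual risk, the fraction of records whose
    # risk exceeds the average, and how much larger those risks are.
    r = np.asarray(risks, dtype=float)
    avg = r.mean()
    high = r[r > avg]
    frac_high = high.size / r.size
    excess = (high / avg).mean() if high.size else 1.0
    return avg * frac_high * excess

print(overall_risk([0.01, 0.02, 0.02, 0.30, 0.45]))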
Abstract: In this research, we have developed a new efficient
heuristic algorithm for the dynamic facility layout problem with
budget constraint (DFLPB). This heuristic algorithm combines two methods, discrete event simulation and integer programming (IP), to obtain a near-optimum solution. In the proposed algorithm, the non-linear model of the DFLP is first transformed into a pure integer programming (PIP) model. Then, the optimal solution of the PIP model is used in a simulation model, designed to mimic the DFLP, for determining the probability of assigning a facility to a location. After a sufficient number of runs, the simulation model
obtains near optimum solutions. Finally, to verify the performance of
the algorithm, several test problems have been solved. The results
show that the proposed algorithm is more efficient in terms of speed
and accuracy than other heuristic algorithms presented in previous
works found in the literature.
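The probability-estimation step can be pictured as simple frequency counting over simulation runs; the sketch below does this with a toy random simulator standing in for the paper's PIP-seeded DFLP simulation model.

import random
from collections import Counter

def assignment_probabilities(n_runs, simulate_layout):
    # Estimate P(facility -> location) as the frequency of that
    # assignment across repeated simulation runs.
    counts = Counter()
    for _ in range(n_runs):
        for facility, location in simulate_layout().items():
            counts[(facility, location)] += 1
    return {k: v / n_runs for k, v in counts.items()}

# Toy simulator: two facilities placed at random on two locations.
toy = lambda: {f: random.choice(["L1", "L2"]) for f in ["F1", "F2"]}
print(assignment_probabilities(1000, toy))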
Abstract: This paper proposes an easy-to-use instruction hiding
method to protect software from malicious reverse engineering
attacks. Given a source program (original) to be protected, the proposed method (1) takes a modified version (fake) of it as input, (2) analyzes the differences in assembly code instructions between the original and the fake, and (3) introduces self-modification routines so that the fake instructions are turned back into the correct (i.e., original) instructions before they are executed and revert to the fake ones after they are executed. The proposed method can add a certain amount of security to a program, since the fake instructions in the resultant program confuse attackers, and discovering and removing all the fake instructions and self-modification routines requires significant effort. The method is also easy to use, because all a user has to do is prepare a fake source code by modifying the original source code.
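Conceptually, steps (2) and (3) amount to recording a patch table of differing instruction bytes and flipping them at run time; the byte-level sketch below illustrates that idea in Python, whereas a real implementation would patch executable memory. All bytes shown are invented.

def diff_patches(original, fake):
    # Step (2): record positions where the fake code differs, together
    # with the original byte to restore.
    return [(i, o) for i, (o, f) in enumerate(zip(original, fake)) if o != f]

def apply_patches(code, patches):
    # Step (3): restore original bytes before execution; applying the
    # inverse table afterwards would play the reverting role.
    code = bytearray(code)
    for i, b in patches:
        code[i] = b
    return bytes(code)

original = b"\x55\x89\xe5\x31\xc0\x5d\xc3"  # hypothetical instruction bytes
fake     = b"\x55\x89\xe5\x31\xdb\x5d\xc3"
print(apply_patches(fake, diff_patches(original, fake)) == original)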