Abstract: Behavioral aspects of experience such as will power
are rarely subjected to quantitative study owing to the numerous
complexities involved. Will is a phenomenon that has puzzled
humanity for a long time. It is commonly believed that an
individual's will power affects the success they achieve in life. It is
also thought that a person endowed with great will power can
overcome even the most crippling setbacks, while a person with a
weak will cannot make the most of even the greatest assets in life.
This study is an attempt
to subject the phenomenon of will to the test of an artificial neural
network through a computational model. The claim being tested is
that will power of an individual largely determines success achieved
in life. It is proposed that data pertaining to success of individuals
be obtained from an experiment and the phenomenon of will be
incorporated into the model, through data generated recursively using
a relation between will and success characteristic of the model.
An artificial neural network trained using part of the data, could
subsequently be used to make predictions regarding data points in
the rest of the model. The procedure would be tried for different
models and the model where the networks predictions are found to
be in greatest agreement with the data would be selected; and used
for studying the relation between success and will.
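The model-selection loop this abstract describes can be illustrated in a few lines. Everything below is an illustrative assumption, not the study's actual design: the candidate will–success relations, the synthetic "experiment" data, and a simple least-squares line standing in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the "experiment": success scores of 200 individuals.
true_will = rng.uniform(0, 1, 200)
observed_success = 2.0 * true_will + rng.normal(0, 0.05, 200)

# Hypothetical candidate will->success relations characteristic of each model.
candidate_models = {
    "linear":    lambda w: 2.0 * w,
    "quadratic": lambda w: 4.0 * w ** 2,
}

def agreement(model, will, success):
    """Generate model data recursively-in-spirit from the candidate relation,
    fit a least-squares line (stand-in for the ANN) on half of it, and score
    the predictions against the held-out observed success values."""
    generated = model(will)
    half = len(will) // 2
    a, b = np.polyfit(will[:half], generated[:half], 1)
    predictions = a * will[half:] + b
    return np.mean((predictions - success[half:]) ** 2)

errors = {name: agreement(m, true_will, observed_success)
          for name, m in candidate_models.items()}
best = min(errors, key=errors.get)   # model in greatest agreement with data
```

Because the synthetic data were generated from the linear relation, the selection step recovers it; with real experimental data the same loop would rank arbitrary candidate models.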
Abstract: This paper proposes techniques such as MTCMOS,
power gating, dual stack, GALEOR and LECTOR to reduce leakage
power. A full adder has been designed using these techniques, its
power dissipation is calculated, and the results are compared with a
full adder in conventional CMOS logic.
Simulation results show that the proposed techniques are effective
in reducing power dissipation and increasing the speed of operation
of the circuits to a large extent.
Abstract: The exponential growth of social media has drawn much
attention to public opinion information. Online forums, blogs and
micro-blogs are proving to be extremely valuable resources holding
a large volume of information. However, most social media data is in
unstructured or semi-structured form, which makes it difficult to
decipher automatically. It is therefore essential to understand and
analyze these data to support sound decision making. Hotspot
detection in online forums is a promising research field in web
mining that helps users make the right decision at the right time. The
proposed system uses a novel approach to detect a hotspot forum for
any given time period: aging theory to find the hot terms and
E-K-means to detect the hotspot forum. Experimental results
demonstrate that the proposed approach outperforms k-means in
detecting hotspot forums, with improved accuracy.
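A minimal sketch of the two ingredients named above. The exponential-decay form of the aging score, the plain k-means (the paper's E-K-means variant is not reproduced), and all the toy counts are illustrative assumptions.

```python
import numpy as np

def hot_term_scores(counts_per_day, decay=0.5):
    """Aging-theory-style score: recent occurrences of a term contribute
    more 'energy' than old ones (exponential decay is an assumption here)."""
    counts = np.asarray(counts_per_day, dtype=float)
    ages = np.arange(len(counts))[::-1]          # 0 = most recent day
    return float(np.sum(counts * np.exp(-decay * ages)))

def kmeans(points, k=2, iters=20, seed=0):
    """Plain 1-D k-means, used here in place of the paper's E-K-means."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(pts[:, None] - centers[None, :]), axis=1)
        centers = np.array([pts[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Four forums scored by aggregate term hotness over four days, then
# clustered; the cluster with the higher scores is the "hotspot" cluster.
forum_scores = [hot_term_scores(c) for c in
                [[0, 1, 9, 12], [1, 1, 1, 0], [0, 2, 8, 10], [2, 0, 1, 1]]]
labels = kmeans(forum_scores, k=2)
```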
Abstract: In the past few years, the amount of malicious software
has increased exponentially and, therefore, machine learning
algorithms have become instrumental in identifying clean and
malware files through (semi-)automated classification. When working with very large
datasets, the major challenge is to reach both a very high malware
detection rate and a very low false positive rate. Another challenge
is to minimize the time needed for the machine learning algorithm to
do so. This paper presents a comparative study between different
machine learning techniques such as linear classifiers, ensembles,
decision trees and various hybrids thereof. The training dataset consists
of approximately 2 million clean files and 200,000 infected files,
which is a realistic quantitative mixture. The paper investigates the
above-mentioned methods with respect to both their performance
(detection rate and false positive rate) and their practicability.
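The two performance figures named above follow directly from the confusion counts; a minimal sketch (the ten-file toy dataset is an illustrative assumption):

```python
def detection_metrics(y_true, y_pred):
    """Detection rate (fraction of malware caught) and false positive rate
    (fraction of clean files wrongly flagged). 1 = malware, 0 = clean."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# 10 files: 4 malware, 6 clean; this classifier catches 3 of 4 malware
# and wrongly flags 1 of 6 clean files.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
dr, fpr = detection_metrics(truth, preds)
```

On a corpus with 2 million clean files, even a 0.1 % false positive rate means thousands of wrongly flagged files, which is why both rates must be optimized together.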
Abstract: One of the crucial aspects of digital cryptographic
systems is the selection and distribution of the keys used. Key
randomness has a strong impact on a system's security strength,
since random keys are difficult to predict, guess, reproduce, or
discover by a cryptanalyst. Therefore, adequate key randomness
generation is still sought for the benefit of stronger cryptosystems.
This paper suggests an algorithm designed to generate and test
pseudo random number sequences intended for cryptographic
applications. This algorithm is based on mathematically manipulating
a publically agreed upon information between sender and receiver
over a public channel. This information is used as a seed for
performing some mathematical functions in order to generate a
sequence of pseudorandom numbers that will be used for
encryption/decryption purposes. This manipulation involves
permutations and substitutions that fulfill Shannon’s principle of
“confusion and diffusion”. ASCII code characters were utilized in the
generation process instead of using bit strings initially, which adds
more flexibility in testing different seed values. Finally, the obtained
results indicate that the generated keys are considerably difficult for
attackers to guess.
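The substitution-and-permutation idea can be sketched generically. This toy generator is NOT the paper's algorithm and is not cryptographically secure; the affine byte map, the rotation step, and the seed strings are illustrative assumptions, included only to show the shape of a seed-driven keystream plus a simple statistical check.

```python
def keystream(seed, length):
    """Toy generator: an ASCII seed drives repeated substitution
    (affine byte map) and permutation (rotation) of an internal state.
    Illustrative only -- not the paper's algorithm, not secure."""
    state = [ord(c) for c in seed]
    out = []
    i = 0
    while len(out) < length:
        state = [(5 * b + 113) % 256 for b in state]       # substitution
        r = state[i % len(state)] % len(state)
        state = state[r:] + state[:r]                      # permutation
        out.append(state[0] ^ state[-1])
        i += 1
    return out

def monobit_balance(stream):
    """Fraction of 1-bits; a sound generator should sit near 0.5."""
    bits = "".join(f"{b:08b}" for b in stream)
    return bits.count("1") / len(bits)

ks = keystream("shared public seed", 256)
balance = monobit_balance(ks)
```

Working on ASCII characters rather than raw bit strings, as the abstract notes, makes it easy to rerun the same pipeline with many different human-readable seed values.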
Abstract: This paper proposes the application of the Smart
Security Concept in the East Mediterranean. Smart Security aims to
secure critical infrastructure, such as hydrocarbon platforms, against
asymmetrical threats. The concept is based on Anti Asymmetrical
Area Denial (A3D), which necessitates limiting the freedom of action
of maritime terrorists and pirates by establishing safe and secure
maritime areas along sea lines of communication using short-range
capabilities.
Abstract: DC motors, long known as the workhorse of industrial
systems, were widely used until the invention of the AC induction
motor revolutionized industry. Since then, the use of DC machines
has decreased; despite factors in their favor such as reliability and
robustness, they lost their fame owing to their losses and complexity.
In this paper a new methodology is proposed to model a DC motor
through simulation in LabVIEW, to get an idea of its real-time
performance and of whether a change in parameters might bring
larger improvements in losses and reliability.
Abstract: This paper deals with the modelling and analysis of an induction motor based on mathematical expressions using the graphical programming environment of Laboratory Virtual Instrument Engineering Workbench (LabVIEW). Modelling the induction motor with mathematical expressions enables the motor to be simulated with the various required parameters. With the advent of variable-speed drives, the study of induction motor characteristics has become complex. In this simulation, internal motor parameters such as stator resistance and reactance, rotor resistance and reactance, phase voltage, frequency and losses are given as input. By varying the motor speed, the corresponding quantities can be obtained: input power, output power, efficiency, induced torque, slip and current.
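The input/output mapping described above can be sketched with the standard simplified per-phase equivalent circuit (magnetizing branch and core losses neglected). The machine parameters below are illustrative assumptions, not values from the paper.

```python
import math

def induction_motor_point(v_phase, f, poles, n_rpm, r1, x1, r2, x2):
    """Steady-state quantities from the simplified per-phase equivalent
    circuit: slip, referred rotor current, developed torque, efficiency."""
    n_sync = 120.0 * f / poles                      # synchronous speed, rpm
    s = (n_sync - n_rpm) / n_sync                   # slip
    z = math.hypot(r1 + r2 / s, x1 + x2)            # series impedance, ohm
    i2 = v_phase / z                                # rotor current (referred)
    w_sync = 2.0 * math.pi * n_sync / 60.0          # synchronous speed, rad/s
    torque = 3.0 * i2 ** 2 * r2 / (s * w_sync)      # developed torque, N*m
    p_airgap = 3.0 * i2 ** 2 * r2 / s               # air-gap power, W
    p_out = (1.0 - s) * p_airgap                    # mechanical power, W
    p_in = p_out + 3.0 * i2 ** 2 * (r1 + r2)        # add stator+rotor Cu loss
    return {"slip": s, "current": i2, "torque": torque,
            "efficiency": p_out / p_in}

# Illustrative parameters: 231 V phase voltage, 50 Hz, 4-pole machine
# running at 1440 rpm (synchronous speed 1500 rpm).
pt = induction_motor_point(v_phase=231, f=50, poles=4, n_rpm=1440,
                           r1=0.5, x1=1.5, r2=0.6, x2=1.5)
```

Sweeping `n_rpm` in such a model is exactly the "vary the speed, read off the corresponding quantities" procedure the abstract describes.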
Abstract: PhilSHORE is a multi-site, multi-device and
multi-criteria decision support tool designed to support the development of
tidal current energy in the Philippines. Its platform is based on
Geographic Information Systems (GIS) which allows for the
collection, storage, processing, analyses and display of geospatial
data. Combining GIS tools with open source web development
applications, PhilSHORE becomes a webGIS-based marine spatial
planning tool. To date, PhilSHORE displays output maps and graphs
of power and energy density, site suitability and site-device analysis.
It gives stakeholders and the public easy access to the results of
tidal current energy resource assessments and site suitability
analyses. Results of the initial development show that PhilSHORE is
a promising decision support tool for ORE project developments.
Abstract: Grid is an environment with millions of resources
which are dynamic and heterogeneous in nature. A computational
grid is one in which the resources are computing nodes, and it is
meant for applications that involve large computations. A scheduling
algorithm is said to be efficient only if it performs good
resource allocation even in the case of resource failure. Resource
allocation is a tedious issue since it has to consider several
requirements such as system load, processing cost and time, the
user's deadline and resource failure. This work attempts to design a
resource allocation algorithm which is cost-effective and also targets
load balancing, fault tolerance and user satisfaction by considering
the above requirements. The proposed Budget Constrained Load
Balancing Fault Tolerant algorithm with user satisfaction (BLBFT)
reduces the schedule makespan, schedule cost and task failure rate
and improves resource utilization. Evaluation of the proposed
BLBFT algorithm is done using Gridsim toolkit and the results are
compared with algorithms that concentrate separately on these
factors. The comparison results show that the proposed algorithm
works better than its counterparts.
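A greedy sketch of the kind of allocation the abstract targets. This is not the paper's BLBFT algorithm; the earliest-finish rule, the per-unit budget, the skip-dead-resources fault handling, and all numbers are illustrative assumptions.

```python
def greedy_schedule(tasks, resources, budget_per_unit):
    """Each task (a length) goes to the live resource with the earliest
    finish time whose cost stays within the task's budget; failed
    resources are simply skipped (naive fault tolerance)."""
    ready = {r["name"]: 0.0 for r in resources}        # busy-until times
    plan = {}
    for length in sorted(tasks, reverse=True):         # largest tasks first
        best = None
        for r in resources:
            if not r["alive"]:
                continue                               # fault tolerance
            run = length / r["speed"]
            if run * r["cost"] > budget_per_unit * length:
                continue                               # budget constraint
            finish = ready[r["name"]] + run
            if best is None or finish < best[1]:
                best = (r["name"], finish)             # load balancing
        name, finish = best                            # assumes one fits
        plan.setdefault(name, []).append(length)
        ready[name] = finish
    return plan, max(ready.values())                   # plan and makespan

resources = [{"name": "fast", "speed": 4.0, "cost": 2.0, "alive": True},
             {"name": "slow", "speed": 1.0, "cost": 0.5, "alive": True},
             {"name": "down", "speed": 8.0, "cost": 1.0, "alive": False}]
plan, makespan = greedy_schedule([8, 4, 4, 2], resources, budget_per_unit=1.0)
```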
Abstract: Transmission system performance analysis is vital to
proper planning and operations of power systems in the presence of
deregulation. Key performance indicators (KPIs) are often used as a
measure of the degree of performance. This paper presents a novel
method to determine transmission efficiency by evaluating the real
power losses incurred in a specified transfer direction.
Available Transmission Transfer Efficiency (ATTE) expresses the
percentage of real power received resulting from inter-area available
power transfer. The Tie line (Rated system path) performance is seen
to differ from system wide (Network response) performance and
ATTE values obtained are transfer direction specific. The required
sending end quantities with specified receiving end ATC and the
receiving end power circle diagram are obtained for the tie line
analysis. The amount of real power loss relative to the available
transfer capability gives a measure of the transmission grid
efficiency.
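The ATTE definition above reduces to a simple ratio; a minimal sketch, with the exact formula shape and the 500 MW / 478 MW figures being illustrative assumptions rather than values from the paper:

```python
def atte(p_sent_mw, p_received_mw):
    """Available Transmission Transfer Efficiency: percentage of the real
    power sent in a transfer direction that is actually received, the
    difference being the real power losses along the path."""
    loss = p_sent_mw - p_received_mw
    return 100.0 * p_received_mw / p_sent_mw, loss

# Illustrative transfer: 500 MW scheduled, 478 MW arrives at the receiving end.
efficiency_pct, loss_mw = atte(500.0, 478.0)
```

Because losses depend on which lines carry the flow, the same two areas can show different ATTE values for opposite transfer directions, which is the direction-specific behavior the abstract notes.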
Abstract: The effects of the pumping wavelengths and their powers
on the gain flattening of a fiber Raman amplifier (FRA) are
investigated. A multi-wavelength pumping scheme is utilized to
achieve gain flatness in the FRA. It is shown that gain flatness
improves as the number of pumping wavelengths applied increases.
We have achieved flat gain with 0.27 dB fluctuation over the
spectral range 1475-1600 nm for a Raman fiber length of 10 km by
using six pumps with wavelengths within the 1385-1495 nm interval.
The effect of the multi-wavelength pumping scheme on gain saturation
in the FRA is also studied. It is shown that the gain saturation
condition is improved by this scheme and that the scheme is more
useful for longer spans of Raman fiber.
Abstract: Image enhancement is a challenging issue in many applications, and various filters have been developed over the last two decades. This paper proposes a novel method that removes Gaussian noise from grayscale images. The proposed technique is compared with the Enhanced Fuzzy Peer Group Filter (EFPGF) at various noise levels. Experimental results show that the proposed filter achieves a better peak signal-to-noise ratio (PSNR) than the existing techniques, with a 1.736 dB gain in PSNR over the EFPGF technique.
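PSNR, the figure of merit quoted above, is standard; a minimal sketch for 8-bit grayscale images (the eight-pixel example is an illustrative assumption):

```python
import math

def psnr(original, restored, max_value=255):
    """Peak signal-to-noise ratio in dB between two equally sized
    grayscale images given as flat lists of pixel intensities:
    PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

clean = [52, 55, 61, 59, 79, 61, 76, 61]   # toy 8-pixel "image"
noisy = [54, 55, 60, 58, 79, 63, 76, 60]
value = psnr(clean, noisy)
```

A "1.736 dB gain" then simply means the proposed filter's PSNR, computed this way against the clean reference, exceeds the EFPGF result by that amount.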
Abstract: Association rule mining is one of the most important fields of data mining and knowledge discovery. In this paper, we propose an efficient multiple-support frequent pattern growth algorithm, called "MSFP-growth", that enhances the FP-growth algorithm with an infrequent-child-node pruning step using multiple minimum supports under maximum constraints. The algorithm is implemented and compared with two common algorithms: Apriori with multiple minimum supports using maximum constraints, and FP-growth. The experimental results show that the rules mined by the proposed algorithm are interesting and that our algorithm achieves better performance than the other algorithms without sacrificing accuracy.
Abstract: Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called "lightpaths", are routed throughout the network. This requires efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum and hence to the refusal of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation follows a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
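The wavelength-continuity constraint at the heart of RWA can be shown with a first-fit greedy sketch. This is a generic baseline, not the paper's ILP or heuristic; the four-node chain topology and precomputed paths are illustrative assumptions.

```python
def first_fit_rwa(links, paths, requests, n_wavelengths):
    """Each request takes its precomputed path and the lowest-index
    wavelength free on EVERY link of that path (wavelength continuity);
    a request is refused if no single wavelength fits end to end."""
    used = {link: set() for link in links}     # wavelengths in use per link
    assignment = {}
    for req in requests:
        path = paths[req]
        for w in range(n_wavelengths):
            if all(w not in used[l] for l in path):
                for l in path:
                    used[l].add(w)
                assignment[req] = w
                break
        else:
            assignment[req] = None             # blocked request
    return assignment

# Chain topology A-B-C-D with two wavelengths per fiber.
links = ["AB", "BC", "CD"]
paths = {("A", "C"): ["AB", "BC"],
         ("B", "D"): ["BC", "CD"],
         ("A", "D"): ["AB", "BC", "CD"]}
assignment = first_fit_rwa(links, paths, list(paths), n_wavelengths=2)
```

The third request is refused even though capacity remains on individual links, which is precisely the wavelength-fragmentation effect the paper's formulation sets out to minimize.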
Abstract: In this paper, a static scheme of under-frequency-based load shedding is considered for chemical and petrochemical industries with islanded distribution networks relying heavily on the primary commodity, so as to ensure minimum production loss, plant downtime or critical equipment shutdown. A simple methodology is proposed for in-house implementation of this scheme using under-frequency relays, and a step-by-step guide is provided, including techniques to calculate maximum percentage overloads, frequency decay rates, the time-based frequency response and the frequency-based time response of the system. A case study of the FFL electrical system is presented, giving the actual system parameters and the employed load-shedding settings following the same series of steps. The arbitrary settings are then verified for worst-case overload conditions (loss of a generation source in this case) and the comprehensive system response is investigated.
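The frequency-decay-rate step above is commonly approximated from the swing equation with load damping neglected; a minimal sketch, where the 20 % overload, 50 Hz base, inertia constant and relay threshold are illustrative assumptions, not the FFL system's actual parameters:

```python
def frequency_decay_rate(overload_pu, f_nominal=50.0, inertia_h=5.0):
    """Initial rate of frequency decline from the swing equation with
    load damping neglected: df/dt = -dP * f0 / (2 * H), in Hz/s."""
    return -overload_pu * f_nominal / (2.0 * inertia_h)

def time_to_threshold(f_threshold, overload_pu, f_nominal=50.0, inertia_h=5.0):
    """Time for frequency to fall linearly from nominal to a relay
    threshold at the initial decay rate (a conservative approximation)."""
    rate = frequency_decay_rate(overload_pu, f_nominal, inertia_h)
    return (f_threshold - f_nominal) / rate

# Losing a source that leaves a 20 % overload on a 50 Hz islanded system:
rate = frequency_decay_rate(0.20)        # Hz per second
t_trip = time_to_threshold(48.5, 0.20)   # seconds to reach a 48.5 Hz stage
```

Such estimates let each under-frequency relay stage be placed so that load is shed before the frequency reaches equipment-trip limits.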
Abstract: This article deals with a Matlab GUI tool designed to
analyse the sensitivity and tolerance of a mechatronic system. In the
analysed mechatronic system, a torque is transferred
from the drive to the load through a coupling containing flexible
elements. Different methods of control system design are used. The
classic form of the feedback control is proposed using Naslin method,
modulus optimum criterion and inverse dynamics method. The
cascade form of the control is proposed based on a combination of
the modulus optimum criterion and the symmetric optimum criterion. The
sensitivity is analysed on the basis of absolute and relative sensitivity
of system function to the change of chosen parameter value of the
mechatronic system, as well as the control subsystem. The tolerance
is analysed in the form of determining the range of allowed relative
changes of selected system parameters in the field of system stability.
The tool allows analysis of the influence of torsion stiffness, torsion
damping, the inertia moments of the motor and the load, and the
controller(s) parameters. The sensitivity and tolerance are monitored in terms of
the impact of parameter change on the response in the form of system
step response and system frequency-response logarithmic
characteristics. The Symbolic Math Toolbox was used to express the
final form of the analysed system functions. The sensitivity
and tolerance are graphically represented as 2D graph of sensitivity
or tolerance of the system function and 3D/2D static/interactive graph
of step/frequency response.
Abstract: The distribution of a single global clock across a chip
has become the major design bottleneck for high performance VLSI
systems owing to power dissipation, process variability and
multi-cycle cross-chip signaling. A Network-on-Chip (NoC) architecture
partitioned into several synchronous blocks has become a promising
approach for attaining fine-grain power management at the system
level. In a NoC architecture the communication between the blocks is
handled asynchronously. To interface these blocks on a chip
operating at different frequencies, an asynchronous FIFO interface is
inevitable. However, these asynchronous FIFOs are not required if
adjacent blocks belong to the same clock domain. In this paper, we
have designed and analyzed a 16-bit asynchronous micropipelined
FIFO of depth four, with the awareness of place and route on an
FPGA device. We have used a commercially available Spartan 3
device and designed a high speed implementation of the
asynchronous 4-phase micropipeline. In simulation under worst-case
operating conditions (voltage = 0.95 V, room temperature), the
asynchronous FIFO implemented on the FPGA device shows a
throughput of 76 Mb/s and a handshake cycle of 109 ns for write and
101.3 ns for read on a working chip.
Abstract: We address a new integer frequency offset (IFO)
estimation scheme with the aid of pilots for orthogonal frequency
division multiplexing systems. After correlating each continual pilot
with a predetermined scattered pilot, the correlation value is again
correlated to alleviate the influence of the timing offset. From
numerical results, it is demonstrated that the influence of the timing
offset on the IFO estimation is significantly decreased.
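The core of any IFO search is a trial-offset correlation; a simplified single-stage stand-in for the paper's two-stage pilot correlation (the ±1 pilot pattern, noise level, and search range are illustrative assumptions):

```python
import numpy as np

def estimate_ifo(received, pilot, max_offset):
    """Shift the known pilot by each candidate integer offset and pick
    the shift whose correlation with the received block is largest."""
    best, best_metric = 0, -np.inf
    for ifo in range(-max_offset, max_offset + 1):
        metric = np.abs(np.vdot(np.roll(pilot, ifo), received))
        if metric > best_metric:
            best, best_metric = ifo, metric
    return best

rng = np.random.default_rng(1)
pilot = rng.choice([-1.0, 1.0], size=64)            # known pilot pattern
true_ifo = 3                                        # offset to be recovered
received = np.roll(pilot, true_ifo) + 0.1 * rng.normal(size=64)
estimate = estimate_ifo(received, pilot, max_offset=8)
```

The paper's second correlation stage serves to suppress the timing-offset term that this simplified sketch ignores.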
Abstract: This work proposes data-driven multiscale quantitative
measures to reveal the underlying complexity of the
electroencephalogram (EEG), applied to a rodent model of
hypoxic-ischemic brain injury and recovery. Since real
EEG recordings are nonlinear and non-stationary over different
frequencies or scales, an approach more suitable than the
conventional single-scale tools is needed for analyzing the EEG data.
Here, we present a new framework of complexity measures
considering changing dynamics over multiple oscillatory scales. The
proposed multiscale complexity is obtained by calculating entropies of
the probability distributions of the intrinsic mode functions extracted
by the empirical mode decomposition (EMD) of EEG. To quantify
EEG recording of a rat model of hypoxic-ischemic brain injury
following cardiac arrest, the multiscale version of Tsallis entropy is
examined. To validate the proposed complexity measure, actual EEG
recordings from rats (n=9) experiencing 7 min cardiac arrest followed
by resuscitation were analyzed. Experimental results demonstrate that
the use of the multiscale Tsallis entropy leads to better discrimination
of the injury levels and improved correlations with the neurological
deficit evaluation after 72 hours after cardiac arrest, thus suggesting an
effective metric as a prognostic tool.
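The entropy at the core of the measure is standard; a minimal sketch of a per-scale Tsallis entropy profile, where EMD itself is not reproduced and two synthetic components of different regularity stand in for the intrinsic mode functions (the histogram binning and q = 2 are also illustrative assumptions):

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1) of a probability
    distribution; it reduces to Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def signal_distribution(x, bins=16):
    """Amplitude histogram of one oscillatory component, normalized to a
    probability distribution (a stand-in for an IMF's distribution)."""
    hist, _ = np.histogram(x, bins=bins)
    return hist / hist.sum()

# Multiscale sketch: one entropy value per component. Two synthetic
# "modes" stand in for the IMFs that EMD would extract from real EEG.
t = np.linspace(0, 1, 1024)
rng = np.random.default_rng(0)
regular_mode = np.sin(2 * np.pi * 10 * t)     # regular oscillation
noisy_mode = rng.normal(size=t.size)          # irregular activity
profile = [tsallis_entropy(signal_distribution(m))
           for m in (regular_mode, noisy_mode)]
```

In the study's setting, such per-scale entropy profiles computed on EMD modes of EEG are what is compared across injury levels.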