Abstract: Let {Xi} be a lacunary system. We establish a large
deviations inequality for lacunary systems. Furthermore, we obtain a
Marcinkiewicz law of large numbers for sequences of dependent
random variables.
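For background, the classical Marcinkiewicz-Zygmund law of large numbers in the independent, identically distributed case (the benchmark that results for dependent sequences extend) can be stated as:

```latex
% Classical i.i.d. Marcinkiewicz-Zygmund law (stated as background only):
% for $0 < p < 2$ and $S_n = X_1 + \dots + X_n$,
\frac{S_n - n\mu}{n^{1/p}} \xrightarrow{\;\text{a.s.}\;} 0
\quad\Longleftrightarrow\quad
\mathbb{E}\,|X_1|^p < \infty,
% where $\mu = \mathbb{E}X_1$ if $1 \le p < 2$, and $\mu = 0$ if $0 < p < 1$.
```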
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. After a preprocessing
step, the documents are typically represented as large sparse vectors.
When training classifiers on large collections of documents, both the
time and memory restrictions can be quite prohibitive. This justifies
the application of feature selection methods to reduce the
dimensionality of the document-representation vector. In this paper,
we present three feature selection methods: Information Gain,
Support Vector Machine feature selection (called SVM_FS) and
Genetic Algorithm with SVM (called GA_SVM). We show that the
best results were obtained with the GA_SVM method for a relatively
small dimension of the feature vector.
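As a sketch of the first of these methods, information gain scores a term by how much knowing its presence reduces class entropy. The minimal Python below (binary class, hypothetical document counts) illustrates the computation:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a class distribution given as counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def information_gain(n_pos_with, n_neg_with, n_pos_without, n_neg_without):
    """Information gain of a term for a binary class: H(C) - H(C | term).
    Arguments are document counts (positive/negative class) that do or
    do not contain the term."""
    n_with = n_pos_with + n_neg_with
    n_without = n_pos_without + n_neg_without
    n = n_with + n_without
    h_c = entropy([n_pos_with + n_pos_without, n_neg_with + n_neg_without])
    h_c_given_t = (n_with / n) * entropy([n_pos_with, n_neg_with]) \
                + (n_without / n) * entropy([n_pos_without, n_neg_without])
    return h_c - h_c_given_t

# A perfectly discriminating term gains the full class entropy (1 bit here),
# while an uninformative term gains nothing:
print(information_gain(50, 0, 0, 50))    # 1.0
print(information_gain(25, 25, 25, 25))  # 0.0
```

Terms are then ranked by this score and only the top-scoring ones are kept in the document-representation vector.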
Abstract: This paper deals with the formulation of Maxwell's equations in a cavity resonator in the presence of the gravitational field produced by a black hole. The metric of space-time due to the black hole is the Schwarzschild metric. Conventionally, this is expressed in spherical polar coordinates. In order to adapt this metric to our problem, we have considered it in a small region close to the black hole and expressed it in a Cartesian system locally.
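For reference, the Schwarzschild metric in the conventional spherical polar coordinates (the form that is then re-expressed locally in Cartesian coordinates) is:

```latex
ds^2 = \left(1 - \frac{2GM}{c^2 r}\right) c^2\,dt^2
     - \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
     - r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)
```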
Abstract: A novel behavioral detection framework is proposed
to detect zero-day buffer overflow vulnerabilities (based on network
behavioral signatures) using zero-day exploits, instead of the
signature-based or anomaly-based detection solutions currently
available for IDPS techniques. First, we present a detection
model that uses a shadow honeypot. Our system is used for the online
processing of network attacks and generating a behavior detection
profile. The detection profile represents the dataset of 112 types of
metrics describing the exact behavior of malware in the network. In
this paper, we present examples of generating behavioral
signatures for two attacks: a buffer overflow exploit on an FTP server
and the well-known Conficker worm. We demonstrate the visualization
of important aspects by showing the differences between valid
behavior and the attacks. Based on these metrics, we can detect
attacks with a very high probability of success; the detection
process is, however, very expensive.
Abstract: We evaluate the average energy consumption per bit
in Optical Packet Switches equipped with BENES switching fabric
realized in Semiconductor Optical Amplifier (SOA) technology. We
also study the impact that the Amplified Spontaneous Emission
(ASE) noise generated by a transmission system has on the power
consumption of the BENES switches due to the gain saturation of the
SOAs used to realize the switching fabric. As an example, for
32×32 switches supporting 64 wavelengths and an offered traffic of
0.8, the average energy consumption per bit is 2.34 × 10⁻¹ nJ/bit,
and it increases as the ASE noise introduced by the transmission
systems increases.
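As a rough illustration of how such a figure arises, the Python sketch below counts the SOA gates in a Benes fabric and divides an assumed total power draw by the switch throughput. The gates-per-element count, per-SOA power, and bit rate are illustrative assumptions, not the paper's saturation-aware power model:

```python
from math import log2

def benes_soa_count(n_ports, soas_per_element=4):
    """An N x N Benes network has 2*log2(N) - 1 stages of N/2 2x2 elements.
    soas_per_element = 4 assumes a gate-based 2x2 element (an assumption)."""
    stages = 2 * int(log2(n_ports)) - 1
    elements = (n_ports // 2) * stages
    return elements * soas_per_element

def energy_per_bit_nj(n_ports, wavelengths, bitrate_gbps, offered_traffic,
                      soa_power_w=0.1):
    """Average switch energy per transported bit under a simple linear
    power model (soa_power_w is a placeholder value, not the paper's)."""
    total_power_w = benes_soa_count(n_ports) * soa_power_w
    throughput_gbps = n_ports * wavelengths * bitrate_gbps * offered_traffic
    return total_power_w / throughput_gbps  # W / (Gbit/s) = nJ/bit

print(benes_soa_count(32))                            # 576 SOA gates
print(round(energy_per_bit_nj(32, 64, 10, 0.8), 4))   # 0.0035 nJ/bit
```

Note that lower offered traffic raises the energy per bit, since the fabric's power draw is amortized over fewer transported bits.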
Abstract: Defect prevention is the most vital but often
neglected facet of software quality assurance in any project. If
applied at all stages of software development, it can reduce the
time, overhead and resources required to engineer a high-quality
product. The key challenge for the IT industry is to engineer a
software product with minimum post-deployment defects.
This work is an analysis based on data obtained for five selected
projects from leading software companies of varying software
production competence. The main aim of this paper is to provide
information on various methods and practices supporting defect
detection and prevention, leading to successful software development.
The defect prevention techniques studied unearth 99% of defects.
Inspection is found to be an essential technique for generating
high-quality software through enhanced methodologies of aided
and unaided inspection schedules. On average, 13% to 15% of whole
project effort on inspection and 25% to 30% on testing
are required to eliminate 99% to 99.75% of defects.
A comparison of the end results for the five selected projects
across the companies is also presented, shedding light on the
possibility of a particular company positioning itself with an
appropriate complementary ratio of inspection to testing.
Abstract: Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, to certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretic techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while maintaining a controlling hand over them. In essence, the proposed game-theory-based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretic auctions and genetic algorithms.
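One standard auction that encapsulates agents' selfishness while steering it toward a system-wide goal is the second-price (Vickrey) auction, under which truthful bidding is a dominant strategy. The sketch below is purely illustrative and does not reproduce the paper's specific bidding protocol; the site names and valuations are hypothetical:

```python
def vickrey_winner(bids):
    """Second-price (Vickrey) sealed-bid auction: the highest bidder wins
    (e.g. the right to host a replica) but pays the second-highest bid,
    which makes truthful bidding a dominant strategy for selfish agents."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Agents bid their true valuation of hosting a popular object:
bids = {"site_A": 12.0, "site_B": 9.5, "site_C": 14.0}
print(vickrey_winner(bids))   # ('site_C', 12.0)
```

Because the price paid is independent of the winner's own bid, no agent can gain by misreporting its valuation, which is precisely the "controlling hand" such mechanisms provide.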
Abstract: An electric utility's main concern is to plan, design, operate and maintain its power supply to provide an acceptable level of reliability to its users. This clearly requires that standards of reliability be specified and used in all three sectors of the power system, i.e., generation, transmission and distribution. That is why the reliability of a power system is always a major concern to power system planners. This paper presents the reliability analysis of the Bangladesh Power System (BPS). The reliability index loss of load probability (LOLP) of BPS is evaluated using a recursive algorithm, considering no de-rated states of generators. BPS has sixty-one generators and a total installed capacity of 5275 MW. The maximum demand of BPS is about 5000 MW. The relevant data of the generators and hourly load profiles are collected from the National Load Dispatch Center (NLDC) of Bangladesh, and the reliability index LOLP is assessed for the last ten years.
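The recursive algorithm referred to here is typically the capacity-outage probability table built by recursive convolution, from which LOLP follows. The Python sketch below shows the recursion on a tiny hypothetical system (the unit data are illustrative, not NLDC data):

```python
def outage_table(units):
    """Capacity-outage probability table by recursive convolution: adding
    a unit of capacity C with forced outage rate q updates
    P_new(X) = (1-q) * P_old(X) + q * P_old(X - C).
    units: list of (capacity_mw, forced_outage_rate); no de-rated states,
    matching the paper's assumption."""
    table = {0: 1.0}                      # outage of X MW -> probability
    for cap, q in units:
        new = {}
        for x, p in table.items():
            new[x] = new.get(x, 0.0) + (1 - q) * p        # unit available
            new[x + cap] = new.get(x + cap, 0.0) + q * p  # unit on outage
        table = new
    return table

def lolp(units, installed_mw, load_mw):
    """LOLP at one load level: probability that available capacity < load,
    i.e. that the outage exceeds the reserve (installed - load)."""
    reserve = installed_mw - load_mw
    return sum(p for x, p in outage_table(units).items() if x > reserve)

units = [(100, 0.05)] * 3                 # three hypothetical 100 MW units
print(lolp(units, 300, 250))              # P(outage > 50 MW reserve)
```

In practice the single-load-level probability is evaluated against each hour of the load profile and averaged to obtain the annual index.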
Abstract: This paper presents software that displays different chaotic generators, in both continuous and discrete time. The software provides options for obtaining the different signals using different parameters and initial condition values, and it displays the critical parameters for each model. All of these models are capable of encrypting information, which the software also demonstrates.
Abstract: In this study, we propose a chaotic cipher combined with the Mersenne Twister, an extremely good pseudo-random number generator, for secure communications. We investigate the Lyapunov exponent of the proposed system and evaluate its randomness performance by comparing it with RC4. The results show that our proposed system exhibits strong chaotic properties and better randomness than conventional ciphers.
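The Lyapunov exponent measures how quickly nearby trajectories diverge; a positive value indicates chaos. As a minimal sketch of how it is estimated for a one-dimensional map (using the standard logistic map as a stand-in, since the paper's specific chaotic map is not given here):

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the long-run average of ln|f'(x)| = ln|r*(1 - 2x)|."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov_logistic(4.0))   # positive (≈ ln 2 ≈ 0.693): chaotic regime
print(lyapunov_logistic(2.5))   # negative: orbit settles on a fixed point
```

The same averaging of the log-derivative along an orbit applies to any smooth one-dimensional chaotic map used in a cipher.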
Abstract: Ontologies play an important role in semantic web
applications; they are often developed by different groups and
continue to evolve over time. The knowledge in ontologies changes
so rapidly that applications become outdated if they continue to
use old versions, or unstable if they jump to new versions. Temporal
frames using frame versioning and slot versioning are used to
handle the dynamic nature of ontologies. The paper proposes new
tags and restructured OWL format enabling the applications to work
with the old or new version of ontologies. Gene Ontology, a very
dynamic ontology, has been used as a case study to explain the OWL
Ontology with Temporal Tags.
Abstract: Reliable water level forecasts are particularly
important for warning against dangerous flood and inundation. The
current study aims at investigating the suitability of the adaptive
network based fuzzy inference system for continuous water level
modeling. A hybrid learning algorithm, which combines the least
square method and the back propagation algorithm, is used to
identify the parameters of the network. For this study, water level
data are available for the hydrological year 2002 with a sampling
interval of 1 hour. The number of antecedent water levels that should
be included in the input variables is determined by two statistical
methods, i.e., the autocorrelation function and partial autocorrelation
function between the variables. Forecasting was done for 1 hour up to
12 hours ahead in order to compare the models' generalization at
higher horizons. The results demonstrate that the adaptive network-based
fuzzy inference system model can be applied successfully and
provides high accuracy and reliability for river water level estimation.
In general, the adaptive network-based fuzzy inference system
provides accurate and reliable water level prediction for 1 hour ahead,
where a MAPE of 1.15% and a correlation of 0.98 were achieved. Up to
12 hours ahead, the model still shows relatively good
performance, with a prediction error of less than
9.65%. The information gathered from these preliminary results
provides useful guidance for the design of flood early warning
systems in which the magnitude and the timing of a potential
extreme flood are indicated.
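The two accuracy measures quoted above are standard and easy to compute. A minimal Python sketch (the water level series below is illustrative, not the paper's data):

```python
import math

def mape(observed, predicted):
    """Mean absolute percentage error, the measure reported for the
    1-hour-ahead forecast (MAPE = 1.15% in the study)."""
    return 100.0 * sum(abs((o - p) / o)
                       for o, p in zip(observed, predicted)) / len(observed)

def correlation(observed, predicted):
    """Pearson correlation coefficient between observed and predicted levels."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (so * sp)

obs = [2.0, 2.1, 2.3, 2.6, 3.0]       # illustrative water levels (m)
pred = [2.02, 2.08, 2.35, 2.55, 3.05]
print(round(mape(obs, pred), 2), round(correlation(obs, pred), 3))
```

Computing both per forecast horizon (1 hour up to 12 hours) is what allows the degradation with lead time to be quantified.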
Abstract: According to previous studies, some muscles present a non-homogeneous spatial distribution of their muscle fiber types and motor unit types. However, available muscle models only deal with muscles with homogeneous distributions. In this paper, a new muscle model architecture is proposed to permit the construction of non-uniform distributions of muscle fibers within the muscle cross section. The idea behind it is the use of a motor unit placement algorithm that controls the spatial overlapping of the motor unit territories of each motor unit type. Results show the capabilities of the new algorithm to reproduce arbitrary muscle fiber type distributions.
Abstract: The IVE toolkit has been created for facilitating research, education and development in the field of virtual storytelling and computer games. Primarily, the toolkit is intended for modelling action selection mechanisms of virtual humans, investigating level-of-detail AI techniques for large virtual environments, and exploring joint behaviour and the role-passing technique (Sec. V). Additionally, the toolkit can be used as an AI middleware without any changes. The main facility of IVE is that it serves for prototyping both the AI and the virtual worlds themselves. The purpose of this paper is to describe IVE's features in general and to present our current work, including an educational game, on this platform. Keywords: AI middleware, simulation, virtual world.
Abstract: The central recirculation zone (CRZ) in a swirl
stabilized gas turbine combustor has a dominant effect on the fuel air
mixing process and flame stability. Most state-of-the-art swirlers
share one disadvantage: a fixed swirl number for a given swirler
configuration. Thus, in a mathematical sense, the Reynolds number
becomes the sole parameter for controlling the flow characteristics
inside the combustor. As a result, at low-load operation, the
generated swirl is likely to become too feeble, affecting flame
stabilization and the mixing process. This paper introduces a new
swirler concept which overcomes this weakness of modern
configurations. The new swirler introduces air tangentially and
axially to the combustor through tangential and axial vanes,
respectively. Therefore, it provides different swirl numbers for the
same configuration by regulating the ratio between the axial and
tangential flow momenta. The swirler aerodynamic performance was
investigated using four CFD simulations in order to demonstrate the
impact of tangential to axial flow rate ratio on the CRZ. It was found
that the length of the CRZ is directly proportional to the tangential to
axial air flow rate ratio.
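The controlling quantity here is the swirl number, the ratio of the axial flux of tangential momentum to the axial flux of axial momentum times the characteristic radius. The Python sketch below uses lumped momentum-flux expressions to show how shifting flow from the axial to the tangential supply raises the swirl number; the numbers and the simplified flux model are illustrative assumptions, not the paper's CFD results:

```python
def swirl_number(m_dot_tangential, v_tangential, m_dot_axial, v_axial, radius):
    """Swirl number S = G_theta / (G_x * R), with G_theta the axial flux of
    tangential momentum and G_x the axial flux of axial momentum. The lumped
    expressions below are a simplification for illustration only."""
    g_theta = m_dot_tangential * v_tangential * radius   # tangential momentum flux x arm
    g_x = (m_dot_tangential + m_dot_axial) * v_axial     # total axial momentum flux
    return g_theta / (g_x * radius)

# Same total mass flow, different tangential-to-axial split (hypothetical values):
low = swirl_number(0.02, 30.0, 0.08, 25.0, 0.05)
high = swirl_number(0.06, 30.0, 0.04, 25.0, 0.05)
print(round(low, 3), round(high, 3))   # 0.24 0.72
```

This is the mechanism the abstract describes: regulating the ratio of tangential to axial flow momenta changes S, and hence the strength and length of the CRZ, without altering the geometry.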
Abstract: MM-Path, an acronym for Method/Message Path, describes the dynamic interactions between methods in object-oriented systems. This paper discusses the classification of MM-Paths based on the characteristics of object-oriented software. We categorize them according to the generation reasons, the effect scope and the composition of the MM-Path. A formalized representation of MM-Path is also proposed, which considers the influence of state on the response method sequences of messages. Moreover, an automatic MM-Path generation approach based on UML Statechart diagrams is presented, which resolves the difficulties in identifying and generating MM-Paths. As a result, it provides a solid foundation for further research on test case generation based on MM-Path.
Abstract: The steady coupled dissipative layers, called
Marangoni mixed convection boundary layers, in the presence of a
magnetic field and solute concentration that are formed along the
surface of two immiscible fluids with uniform suction or injection
effects are examined. The similarity boundary layer equations are
solved numerically using the Runge-Kutta-Fehlberg method with a
shooting technique. The Marangoni, buoyancy and external pressure
gradient effects that are generated in mixed convection boundary layer
flow are assessed. The velocity, temperature and concentration boundary
layer thicknesses decrease with increasing magnetic field
strength and from injection to suction. For buoyancy-opposed flow, the
Marangoni mixed convection parameter enhances the velocity
boundary layer but decreases the temperature and concentration
boundary layers. However, for the buoyancy-assisted flow, the
Marangoni mixed convection parameter decelerates the velocity but
increases the temperature and concentration boundary layers.
Abstract: Software reusability is a primary attribute of software
quality. There are metrics for identifying the quality of reusable
components, but the function that uses these metrics to determine the
reusability of software components is still not clear. These metrics, if
identified in the design phase or even in the coding phase can help us
to reduce the rework by improving quality of reuse of the component
and hence improve the productivity due to probabilistic increase in
the reuse level. In this paper, we have devised a framework of
metrics that uses McCabe's Cyclomatic Complexity Measure for
complexity measurement, a Regularity Metric, the Halstead Software
Science Indicator for volume indication, a Reuse Frequency metric
and a Coupling Metric of the software component as input
attributes to calculate the reusability of the software component. Here, a
comparative analysis of the fuzzy, Neuro-fuzzy and Fuzzy-GA
approaches is performed to evaluate the reusability of software
components, and the Fuzzy-GA approach outperforms the others. The
developed reusability model has produced high-precision results, as
expected by the human experts.
Abstract: This paper presents the use of three-dimensional finite
elements coupled with infinite elements to investigate the ground
vibrations at the surface in terms of the peak particle velocity (PPV)
due to construction of the first bore of the Dublin Port Tunnel. This
situation is analysed using a commercially available general-purpose
finite element package ABAQUS. A series of parametric studies is
carried out to examine the sensitivity of the predicted vibrations to
variations in the various input parameters required by the finite
element method, including the stiffness and the damping of the
ground. The results of this study show that the stiffness has a more
significant effect on the PPV than the damping of the ground.
Abstract: The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. In this work, we design a neural network that includes an additional mechanism adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second handles output generation dynamically, with tuning capability for any newly applied input; this tuning takes the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold setting mechanism, while the other implements an auxiliary net with a traditional architecture that dynamically adjusts the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs have given quite satisfactory results. The supportive layer approach achieved a recognition rate of over 90%, while the multiple-network technique shows a more effective and acceptable level of recognition; however, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by combining all the innate structures with intelligent capabilities, at the cost of further, more advanced learning phases.