Abstract: This paper presents a Geographic Information System (GIS) approach for qualifying and monitoring broadband lines in an efficient way. The methodology used for interpolation is the Delaunay Triangular Irregular Network (TIN). The method is applied in a case study at an ISP in Greece, monitoring 120,000 broadband lines.
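The TIN interpolation step described above can be sketched as follows; SciPy's `LinearNDInterpolator` builds a Delaunay triangulation internally and interpolates linearly inside each triangle, which is exactly a TIN surface. The measurement coordinates and line-speed values below are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical measurement sites (x, y) and a measured line-quality
# metric at each site (illustrative values only).
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
speeds = np.array([2.0, 4.0, 6.0, 8.0])

# Delaunay-triangulate the sites and interpolate linearly per triangle.
tin = LinearNDInterpolator(points, speeds)
estimate = float(tin(0.5, 0.5))   # value at an unmeasured location
```

Because the sample values here lie on a plane, the TIN surface reproduces it exactly inside the convex hull.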
Abstract: In this paper, a new approach based on the extent of
friendship between the nodes is proposed, which induces the nodes to
cooperate in an ad hoc environment. The extended DSR protocol is
tested under different scenarios by varying the number of malicious
nodes and the node moving speed. It is also tested while varying the
number of nodes in the simulation. The results indicate that the
throughput achieved by the extended DSR is greater than that of the
standard DSR, and that the percentage of malicious drops over total
drops is lower for the extended DSR than for the standard DSR.
Abstract: The response surface methodology (RSM) is a
collection of mathematical and statistical techniques useful in
modeling and analyzing problems in which a dependent variable is
influenced by several independent variables, with the aim of
determining the conditions under which those variables should
operate to optimize a production process. RSM estimates a
first-order regression model and sets the search direction using the
method of maximum/minimum slope up/down (MMS U/D).
However, this method selects the step size intuitively, which can
affect the efficiency of the RSM. This paper assesses how the step
size affects the efficiency of this methodology. The numerical
examples are carried out through Monte Carlo experiments
evaluating three response variables: the efficiency of the gain
function, the distance to the optimum, and the number of iterations.
The simulation experiments showed that the gain-function efficiency
and the distance to the optimum were not affected by the step size,
while the number of iterations was affected by both the step size
and the type of test function used.
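The steepest-ascent move at the heart of this procedure can be sketched as follows: a first-order model y = b0 + b1*x1 + b2*x2 is fitted by least squares on a small factorial design, and the next operating point is taken along the gradient with step size h, the quantity whose intuitive choice the abstract questions. The design points, responses, and h are illustrative assumptions.

```python
import numpy as np

# 2^2 factorial design plus a centre point, with made-up responses.
X = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([76.0, 80.0, 78.0, 84.0, 79.5])

# Fit the first-order model y = b0 + b1*x1 + b2*x2 by least squares.
A = np.column_stack([np.ones(len(X)), X])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

# Path of steepest ascent: move along the gradient (b1, b2), scaled to
# step size h (chosen here arbitrarily, mirroring the abstract's point).
grad = np.array([b1, b2])
h = 0.5
step = h * grad / np.linalg.norm(grad)
next_point = np.zeros(2) + step
```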
Abstract: Concerns about low levels of children's physical activity and motor skill development prompted the Ministry of Education to trial a physical activity pilot project (PAPP) in 16 New Zealand primary schools. The project comprised professional development and training in physical education for lead teachers and introduced four physical activity coordinators to liaise with the pilot schools and increase physical activity opportunities in them. A survey of generalist teachers (128 baseline, 155 post-intervention) from these schools looked at timetabled physical activity sessions and issues related to teaching physical education. The authors calculated means and standard deviations of data relating to timetabled PE sessions and used a one-way analysis of variance to determine significant differences. Results indicated that the time devoted to physical-activity-related subjects significantly increased over the course of the intervention. Teachers reported improved confidence and competence, which resulted in higher-quality physical education being delivered more often.
Abstract: The Facility Layout Problem (FLP) is one of the essential
problems in several types of manufacturing and service sectors. It is
an optimization problem in which the main objective is to obtain
efficient locations, arrangement and ordering of the facilities. In the
literature, numerous facility layout studies have been presented
that use meta-heuristic approaches to achieve an optimal facility
layout design. This paper presents a genetic algorithm to solve the
facility layout problem by minimizing a total cost function. The
performance of the proposed approach was verified and compared
using problems from the literature.
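A genetic algorithm for a layout problem can be sketched minimally as below: permutations of facilities are evolved with truncation selection, order crossover and swap mutation, minimizing a flow-times-distance cost for a single-row layout. The flow matrix and GA settings are made-up assumptions; this is a generic sketch, not the paper's algorithm or its test problems.

```python
import random

random.seed(42)
# Illustrative flow matrix between 5 facilities (symmetric, made up).
F = [[0, 5, 2, 4, 1],
     [5, 0, 3, 0, 2],
     [2, 3, 0, 0, 0],
     [4, 0, 0, 0, 5],
     [1, 2, 0, 5, 0]]

def cost(perm):
    # Single-row layout: slot distance between positions i and j is |i-j|.
    n = len(perm)
    return sum(F[perm[i]][perm[j]] * abs(i - j)
               for i in range(n) for j in range(i + 1, n))

def order_crossover(p1, p2):
    # OX: copy a slice of p1, fill the rest in p2's order.
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def mutate(perm):
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]

pop_size, generations = 30, 60
pop = [random.sample(range(5), 5) for _ in range(pop_size)]
for _ in range(generations):
    pop.sort(key=cost)
    parents = pop[:pop_size // 2]          # truncation selection (elitist)
    children = []
    while len(children) < pop_size - len(parents):
        c = order_crossover(*random.sample(parents, 2))
        if random.random() < 0.2:
            mutate(c)
        children.append(c)
    pop = parents + children
best = min(pop, key=cost)
```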
Abstract: Time-varying network-induced delays in networked
control systems (NCS) are known to degrade the control system's
quality of performance (QoP) and to cause stability problems. In the
literature, control methods that model communication delays as
probability distributions have proved to work better. This paper
focuses on modeling network-induced delays as probability
distributions.
CAN and MIL-STD-1553B are extensively used to carry periodic
control and monitoring data in networked control systems.
In the literature, only methods to estimate worst-case delays for
these networks are available. In this paper, probabilistic network
delay models for CAN and MIL-STD-1553B networks are given.
A systematic method to estimate model parameter values from
network parameters is given. A method to predict the network delay
in the next cycle based on the present network delay is presented.
The effect of active network redundancy, and of redundancy at the
node level, on network delay and system response time is also
analyzed.
Abstract: The aim of this paper is to characterize a larger set of
wavelet functions for implementation in a still-image compression
system using the SPIHT algorithm. The paper discusses important
features of the wavelet functions and filters used in subband coding
to convert an image into wavelet coefficients in MATLAB. Image
quality is measured objectively using the peak signal-to-noise ratio
(PSNR) and its variation with bit rate (bpp). The effect of different
parameters on different wavelet functions is studied. Our results
provide a good reference for application designers of wavelet-based
coders.
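The objective quality metric named above can be sketched directly: PSNR is the peak signal power over the mean squared error, in dB. The 8x8 test image below is synthetic, not one of the paper's test images.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    # Mean squared error between the 8-bit reference and reconstruction,
    # then peak-power-to-MSE ratio in decibels.
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

original = np.full((8, 8), 100, dtype=np.uint8)
degraded = original.copy()
degraded[0, 0] = 110          # one wrong pixel -> MSE = 100/64
value = psnr(original, degraded)
```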
Abstract: Independent component analysis (ICA) is a computational method for finding underlying signals or components in multivariate statistical data. The ICA method has been successfully applied in many fields, e.g. vision research, brain imaging, geological signals and telecommunications. In this paper, we apply the ICA method to the analysis of mass spectra of oligomeric species emerging from aluminium sulphate. Mass spectra are typically complex, because they are linear combinations of spectra from different types of oligomeric species. The results show that ICA can decompose the spectra into components carrying useful information. This information is essential in developing the coagulation phases of water treatment processes.
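The linear-mixture model this abstract assumes can be demonstrated on synthetic signals: two independent non-Gaussian sources are mixed linearly, and FastICA (one common ICA algorithm; the paper does not name its implementation) recovers them up to order, sign and scale. The sources below stand in for pure component spectra and are not the paper's data.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent, non-Gaussian "pure components".
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.column_stack([s1, s2])

# Observations are linear mixtures of the sources, as with mass spectra
# that combine spectra of different oligomeric species.
A = np.array([[1.0, 0.5], [0.4, 1.0]])   # unknown mixing matrix
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)             # recovered components
```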
Abstract: Within the domain of Systems Engineering the need
to perform property aggregation to understand, analyze and manage
complex systems is unequivocal. This can be seen in numerous
domains such as capability analysis, Mission Essential Competencies
(MEC) and Critical Design Features (CDF). Furthermore, the need
to consider uncertainty propagation, as well as the sensitivity of
related properties within such analysis, is equally important when
determining a set of critical properties within such a system.
This paper describes this property breakdown in a number of
domains within Systems Engineering and, within the area of CDFs,
emphasizes the importance of uncertainty analysis. As part of this, a
section of the paper describes techniques that may be used for
uncertainty propagation; in conclusion, an example is described that
applies one of these techniques to property and uncertainty
aggregation within an aircraft system, to aid the determination of
Critical Design Features.
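One common technique for the uncertainty propagation mentioned above is Monte Carlo sampling, sketched below for a simple aggregated property. The property, its breakdown, and all distributions are hypothetical illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
# Hypothetical aircraft-level property aggregated from two uncertain
# sub-system properties (assumed independent normal distributions).
m_wing = rng.normal(1000.0, 50.0, N)       # kg
m_fuselage = rng.normal(3000.0, 120.0, N)  # kg
m_total = m_wing + m_fuselage              # aggregated property

mean, std = m_total.mean(), m_total.std()
# Analytically, independent normals give std = sqrt(50^2 + 120^2) = 130,
# so the sampled std should sit close to that value.
```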
Abstract: This research was carried out to determine the
possible effects of low electromagnetic field (EMF) exposure on
developing mouse fetuses. Pregnant mice were exposed to EMF
at 0 mT (sham) and 1.2 mT for six hours per session, carried
out on gestation days 3, 6, 9, 12 and 15. Samples from the stillborn
offspring were examined for morphological defects. The heart did
not show progressive cellular damage; the lungs were congested and
emphysematous; the bones were in an advanced stage of hypertrophy.
A spectrum of morphological defects was observed in over 70% of
the surviving offspring. These results indicate that even low-level
EMF exposure is enough to induce morphological defects in
prenatal mice.
Abstract: Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared among multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerance Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flowing through the network contains personal, private or confidential information. In banking transactions, a minor change to the transaction can cause a great loss to the user. In this paper we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used Triple DES for encryption/decryption and the MD5 algorithm to check message integrity.
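The MD5 integrity check mentioned above works as follows: the sender attaches a digest of the message and the receiver recomputes it to detect any change in transit. The payload below is an illustrative banking-style message, not a format from the paper.

```python
import hashlib

# Sender side: compute the digest of the outgoing request.
message = b"TRANSFER 100.00 FROM ACC-1 TO ACC-2"   # illustrative payload
digest = hashlib.md5(message).hexdigest()

# Receiver side: a tampered copy no longer matches the digest.
tampered = b"TRANSFER 900.00 FROM ACC-1 TO ACC-2"
unchanged = hashlib.md5(tampered).hexdigest() == digest   # change detected
```

Note that MD5 detects modification of the message but is no longer considered collision-resistant; the paper pairs it with Triple DES encryption for confidentiality.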
Abstract: The challenge for software development houses in
Bangladesh is to find a path that uses a minimal process rather than
gigantic practice-and-process-area frameworks such as CMMI or ISO.
Small and medium-sized organizations in Bangladesh want to ensure
minimum basic Software Process Improvement (SPI) in day-to-day
operational activities, and these basic practices should help them
realize their companies' improvement goals. This paper focuses on
the key issues in basic software practices for small and medium-sized
software organizations that cannot afford CMMI, ISO, ITIL, etc.
compliance certifications. This research also suggests a basic
software process practice model for Bangladesh and maps our
suggestions to international best practice. In this competitive IT
world, small and medium-sized software companies require
collaboration and strengthening to fit into the inseparable global IT
scenario. This research performed investigations and analysis of
several projects' life cycles, current good practices, effective
approaches, and the realities and pain points of practitioners. We
carried out reasoning, root cause analysis, and comparative analysis
of various approaches, methods and practices, with justifications
drawn from CMMI and real life. We avoided reinventing the wheel;
our focus is on a minimal practice set that ensures a dignified
satisfaction between organizations and software customers.
Abstract: Octree compression techniques have been used
for several years for compressing large three dimensional data
sets into homogeneous regions. This compression technique
is ideally suited to datasets which have similar values in
clusters. Oil engineers represent reservoirs as a three dimensional
grid where hydrocarbons occur naturally in clusters. This
research looks at the efficiency of storing these grids using
octree compression techniques where grid cells are broken
into active and inactive regions. Initial experiments yielded
high compression ratios, as only active leaf nodes and their
ancestor (header) nodes are stored as a bitstream in a file on
disk. Savings in computational time and memory were possible
at decompression, as only active leaf nodes are sent to the
graphics card, eliminating the need to reconstruct the original
matrix. This results in a more compact vertex table, which can
be loaded into the graphics card more quickly, generating shorter
refresh delay times.
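The active/inactive octree encoding can be sketched as follows: homogeneous octants collapse to a single leaf, while mixed octants emit a header and recurse into their eight children. The 8x8x8 grid is synthetic, and the symbol list stands in for the real on-disk bitstream.

```python
import numpy as np

def compress(grid):
    # Homogeneous block: one leaf symbol ('0' inactive, '1' active).
    vals = np.unique(grid)
    if len(vals) == 1:
        return ["1" if vals[0] else "0"]
    # Mixed block: header symbol, then recurse into the eight octants.
    h = grid.shape[0] // 2
    out = ["H"]
    for xs in (slice(0, h), slice(h, None)):
        for ys in (slice(0, h), slice(h, None)):
            for zs in (slice(0, h), slice(h, None)):
                out += compress(grid[xs, ys, zs])
    return out

# 8x8x8 "reservoir" grid with one active 2x2x2 cluster (made-up data).
grid = np.zeros((8, 8, 8), dtype=np.uint8)
grid[0:2, 0:2, 0:2] = 1
stream = compress(grid)
ratio = grid.size / len(stream)   # 512 cells encoded in 17 symbols
```

Clustered data compresses well here precisely because large inactive regions collapse to single leaves, mirroring the abstract's observation about hydrocarbons occurring in clusters.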
Abstract: The aim of this paper is to provide empirical
evidence about the effects that the management of continuous
training has on employability (or employment stability) in the
Spanish labour market. For this purpose, a binary logit model with
an interaction effect is used. The dependent variable distinguishes
two situations of active workers: continuous and discontinuous
employability. To distinguish between them, an Employability
Stability Index (ESI) was calculated taking into account two factors:
time worked and job security. Various aspects of continuous training
and personal worker data are used as independent variables. The
data, obtained from a survey of a sample of 918 employed workers,
reveal a relationship between the likelihood of continuous
employability and the continuous training received. The empirical
results support a positive and significant relationship between
various aspects of the training provided by firms and the
employability likelihood of the workers, as postulated from a
theoretical point of view.
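A binary logit with an interaction effect, of the kind described above, can be sketched on simulated data. Everything below is simulated for illustration (the sample size matches the survey's 918, but the variables, coefficients and labels are invented, not the survey data).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 918
hours = rng.uniform(0, 100, n)    # continuous training received (hours)
firm = rng.integers(0, 2, n)      # 1 if training was provided by the firm

# Simulated rule: employability rises with training, with an extra
# boost when firm-provided training interacts with hours.
score = -1.0 + 0.03 * hours + 0.5 * firm + 0.01 * hours * firm
stable = (score > 0).astype(int)  # 1 = continuous employability

# The interaction term enters the design matrix as an extra column.
X = np.column_stack([hours, firm, hours * firm])
model = LogisticRegression(max_iter=1000).fit(X, stable)
coef_hours = float(model.coef_[0][0])   # expected positive
```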
Abstract: In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework. In particular, we consider the Adversarial, Quasi-Static Slowdown Queueing Theory model, where each link slowdown may take on values in the two-valued set of integers {1, D} with D > 1, values that remain fixed for a long time, under a (w, p)-adversary. In this framework, we present an innovative systematic construction for the estimation of adversarial injection rate lower bounds which, if exceeded, cause instability in networks that use the LIS (Longest-in-System) protocol for contention resolution. In addition, we show that a network that uses the LIS protocol for contention resolution may see its instability bound drop to injection rates p > 0 when the network size and the high slowdown D take large values. This is the best instability lower bound known so far for LIS networks.
Abstract: Whole genome duplication (WGD) increased the
number of chromosomes of the yeast Saccharomyces cerevisiae from
8 to 16. Although the number of chromosomes in the genome of this
organism has been retained since the WGD, chromosomal
rearrangement events have created an evolutionary distance between
the current genome and its ancestor. Studies using evolutionary-based
approaches on eukaryotic genomes have shown that the
rearrangement distance is an approximable problem. In the case of
S. cerevisiae, we describe how the rearrangement distance is
accessible using a dedoubled adjacency graph drawn for 55 large
paired chromosomal regions originating from the WGD. We then
provide a program, extracted from a C program database, to draw a
dedoubled genome adjacency graph for S. cerevisiae. From a
bioinformatics perspective, using the duplicated blocks of the current
genome of S. cerevisiae, we infer that the genomic organization of
eukaryotes has the potential to provide valuable detailed information
about their ancestral genome.
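A simplified flavour of rearrangement-distance reasoning can be sketched by counting breakpoints of a permutation against the identity genome. This is not the paper's dedoubled-adjacency-graph construction, only a minimal illustration of how a distance bound is read off gene order.

```python
def breakpoints(perm):
    # Frame the permutation with 0 and n+1, then count adjacent pairs
    # that are not consecutive in the identity order; breakpoints/2 is
    # a classical lower bound on reversal distance.
    extended = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(extended, extended[1:]) if b - a != 1)

d = breakpoints([3, 1, 2, 4])   # breakpoints at (0,3), (3,1), (2,4)
```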
Abstract: The paper presents the potential of fuzzy logic (FL-I)
and neural network (ANN-I) techniques for predicting the
compressive strength of SCC mixtures. Six input parameters, i.e. the
contents of cement, sand, coarse aggregate and fly ash, the
superplasticizer percentage, and the water-to-binder ratio, and one
output parameter, i.e. the 28-day compressive strength, are used for
modeling with ANN-I and FL-I. The fuzzy logic model showed
better performance than the neural network model.
Abstract: Estimation of voltage stability based on an optimal
filtering method is presented. The PV curve is used as a tool for
voltage stability analysis, and dynamic voltage stability estimation is
performed using a particle filter. The optimum value (nose point) of
the PV curve can be estimated by estimating the parameters of the
PV curve equation; this optimal value represents the critical voltage
and the maximum loading condition at the specified point of
measurement. Voltage stability is then estimated dynamically by
analyzing the loading margin condition from the estimated equation.
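Locating the nose point from an estimated PV-curve equation can be sketched as follows. In the paper the curve parameters are tracked online with a particle filter; here an ordinary least-squares quadratic fit stands in, and the (V, P) samples are synthetic illustrative values.

```python
import numpy as np

# Synthetic PV-curve samples: bus voltage (p.u.) vs loading.
V = np.array([1.00, 0.95, 0.90, 0.85, 0.80, 0.75])
P = np.array([0.8328, 0.9888, 1.1048, 1.1808, 1.2168, 1.2128])

# Fit P(V) = a*V^2 + b*V + c and locate the nose, where dP/dV = 0.
a, b, c = np.polyfit(V, P, 2)
V_nose = -b / (2 * a)                  # critical voltage
P_max = np.polyval([a, b, c], V_nose)  # maximum loading margin
```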
Abstract: Power cables are vulnerable to failure due to aging or
defects that occur with the passage of time under continuous
operation and loading stresses. Partial discharge (PD) detection and
characterization provide information on the location, nature, form
and extent of the degradation. As a result, PD monitoring has
become an important part of condition-based maintenance (CBM)
programs among power utilities. Online PD localization of defect
sources in power cable systems is possible using the time-of-flight
method.
The information regarding the time difference between the main and
reflected pulses and cable length can help in locating the partial
discharge source along the cable length. However, if the length of
the cable is not known and the defect source is located at the extreme
ends of the cable or in the middle of the cable, then double ended
measurement is required to indicate the location of the PD source.
The use of multiple sensors can also help in discriminating cable PD
from local or external PD. This paper presents the experience and
results from
online partial discharge measurements conducted in the laboratory
and the challenges in partial discharge source localization.
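The single-ended time-of-flight calculation described above can be sketched directly: the PD pulse reaches the sensor once directly and again after reflecting from the far cable end, so the extra path length is 2*(L - x). All numbers below are illustrative assumptions.

```python
L = 500.0      # cable length, metres (assumed)
v = 1.7e8      # pulse propagation velocity, m/s (typical order for XLPE)
dt = 2.5e-6    # measured time between direct and reflected pulses, s

# dt = 2*(L - x)/v  =>  x = L - v*dt/2
x = L - v * dt / 2   # PD distance from the measuring end, metres
```

This is why the length must be known: with L unavailable, or a defect near an extreme end, the abstract's double-ended measurement is needed instead.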
Abstract: In this study, effects of premixed and equivalence
ratios on CO and HC emissions of a dual fuel HCCI engine are
investigated. Tests were conducted on a single-cylinder engine with
compression ratio of 17.5. Premixed gasoline is provided by a
carburetor connected to intake manifold and equipped with a screw
to adjust premixed air-fuel ratio, and diesel fuel is injected directly
into the cylinder through an injector at a pressure of 250 bar. A
heater placed at the inlet manifold is used to control the intake charge
temperature. Optimal intake charge temperature results in better
HCCI combustion due to formation of a homogeneous mixture,
therefore, all tests were carried out over the optimum intake
temperature of 110-115 ºC. Timing of diesel fuel injection has a great
effect on stratification of in-cylinder charge and plays an important
role in HCCI combustion phasing. Experiments indicated 35° BTDC
as the optimum injection timing. Varying the coolant temperature in
a range of 40 to 70 ºC, better HCCI combustion was achieved at 50
ºC. Therefore, the coolant temperature was maintained at 50 ºC during all
tests. Simultaneous investigation of effective parameters on HCCI
combustion was conducted to determine optimum parameters
resulting in a fast transition to HCCI combustion. One of the
advantages of the studied method is the feasibility of easy and fast
conversion of a typical diesel engine to a dual-fuel HCCI engine.
Results show that increasing the premixed ratio, while keeping
EGR rate constant, increases unburned hydrocarbon (UHC)
emissions due to quenching phenomena and trapping of premixed
fuel in crevices, but CO emission decreases due to increase in CO to
CO2 reactions.
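The premixed ratio varied in these experiments is commonly defined in dual-fuel HCCI work as the share of total fuel energy supplied by the premixed fuel (the abstract uses the term without defining it). The masses and heating values below are illustrative assumptions, not measurements from the paper.

```python
# Lower heating values (J/kg) and per-cycle fuel masses (kg), assumed.
LHV_gasoline = 44.0e6   # premixed fuel, via the carburettor
LHV_diesel = 42.5e6     # direct-injected fuel
m_gasoline = 8.0e-6
m_diesel = 6.0e-6

# Premixed (energy) ratio: premixed fuel energy over total fuel energy.
E_premixed = m_gasoline * LHV_gasoline
E_total = E_premixed + m_diesel * LHV_diesel
r_p = E_premixed / E_total
```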