Abstract: Intuitively, we expect distributed software development to increase the risks associated with achieving cost, schedule, and quality goals. Compounding this problem, agile software development (ASD) holds that one of the main ingredients of its success is the cohesive communication afforded by collocation of the development team. The following study identified
the degree of communication richness needed to achieve comparable
software quality (reduce pre-release defects) between distributed and
collocated teams. This paper explores the relevance of communication richness in various development phases and its
impact on quality. Through examination of a large distributed agile
development project, this investigation seeks to understand the levels
of communication required within each ASD phase to produce
comparable quality results achieved by collocated teams. Obviously,
a multitude of factors affects the outcome of software projects.
However, within distributed agile software development teams, the
mode of communication is one of the critical components required to
achieve team cohesiveness and effectiveness. As such, this study
constructs a distributed agile communication model (DAC-M) for potential application to similar distributed agile development efforts by measuring the suitable level of communication. The
results of the study show that less rich communication methods, in
the appropriate phase, might be satisfactory to achieve equivalent
quality in distributed ASD efforts.
Abstract: Ultra-wide band (UWB) communication is one of
the most promising technologies for high data rate wireless networks
for short-range applications. This paper proposes a blind channel estimation method, an Interacting Multiple Model (IMM) based Kalman algorithm, for UWB OFDM systems. The IMM-based Kalman filter is proposed to estimate the frequency-selective, time-varying channel. In the proposed method, two Kalman filters concurrently estimate the channel parameters. The first, the Static Model Filter (SMF), gives accurate results when the user is static, while the second, the Dynamic Model Filter (DMF), gives accurate results when the receiver is moving. The state transition matrix in the SMF is assumed to be an identity matrix, whereas in the DMF it is computed using the Yule-Walker equations. The
resultant filter estimate is computed as a weighted sum of individual
filter estimates. The proposed method is compared with other existing
channel estimation methods.
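As a sketch of the two-filter blending described above, the fragment below runs a static-model and a dynamic-model Kalman filter in parallel on a scalar channel tap and forms the final estimate as a likelihood-weighted sum. It is a simplified illustration rather than the paper's implementation: the tap is observed directly, the dynamic transition coefficient F_dyn is fixed instead of being derived from the Yule-Walker equations, and the full IMM state-mixing step is omitted.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, R):
    """One predict/update cycle of a scalar Kalman filter (H = 1);
    returns the new state, covariance, and the likelihood of z."""
    x_pred = F * x
    P_pred = F * P * F + Q
    innov = z - x_pred               # innovation
    S = P_pred + R                   # innovation covariance
    K = P_pred / S                   # Kalman gain
    x_new = x_pred + K * innov
    P_new = (1.0 - K) * P_pred
    lik = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    return x_new, P_new, lik

def imm_estimate(observations, F_dyn=0.95, Q=1e-3, R=1e-2):
    """Run the static-model filter (F = 1, identity transition) and the
    dynamic-model filter (F = F_dyn, standing in for the Yule-Walker
    estimate) in parallel, blending them by model probability."""
    mu = np.array([0.5, 0.5])        # model probabilities
    x = np.zeros(2)
    P = np.ones(2)
    estimates = []
    for z in observations:
        liks = np.empty(2)
        for m, F in enumerate((1.0, F_dyn)):
            x[m], P[m], liks[m] = kalman_step(x[m], P[m], z, F, Q, R)
        mu = mu * liks + 1e-12       # reweight models by likelihood
        mu /= mu.sum()
        estimates.append(mu @ x)     # weighted sum of filter estimates
    return np.array(estimates)
```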
Abstract: Adopting Zakowski's upper approximation operator C and lower approximation operator C, this paper investigates granularity-wise separations in covering approximation spaces. Some characterizations of granularity-wise separations are obtained by means of Pawlak rough sets, and some relations among granularity-wise separations are established, which makes it possible to study covering approximation spaces by logical and mathematical methods in computer science. The results of this paper give further
applications of Pawlak rough set theory in pattern recognition and
artificial intelligence.
Abstract: Investigations of the unimolecular decomposition of
vinyl ethyl ether (VEE), vinyl propyl ether (VPE) and vinyl butyl
ether (VBE) have shown that activation of an ether molecule results in the formation of a cyclic structure, the transition state (TS), which may shift the thermodynamic equilibrium towards the reaction products. The TS is obtained by energy minimization relative to the ground state of an ether with the MM2 program, taking into account hydrogen bond formation between a hydrogen atom of the alkyl residue and the terminal carbon atom of the vinyl group. The dissociation of the TS into the products is studied by an energy minimization procedure using the Gaussian program. The calculated data for VEE indicate that the decomposition of this ether may be conditioned by hydrogen bond formation in two possible ways: with the α- or β-hydrogen atoms of the ethyl group bound to a carbon atom of the vinyl group. Applying the same calculation methods to the other ethers (VPE and VBE) shows that only hydrogen bonding between the α-hydrogen atom of the alkyl residue and the terminal carbon atom of the vinyl group (αH---C) results in decay of these ethers.
Abstract: Pulp and paper mill effluent is one of the most polluting industrial effluents. All the available methods for the treatment of pulp and paper mill effluent have certain drawbacks. Coagulation is one of the cheapest processes for the treatment of various organic effluents. Thus, the
removal of chemical oxygen demand (COD) and colour of paper mill
effluent is studied using coagulation process. The batch coagulation
process was performed using various coagulants like: aluminium
chloride, poly aluminium chloride and copper sulphate. The initial pH of the effluent (the coagulation pH) has a strong effect on COD and colour removal. With poly aluminium chloride (PAC) as coagulant, COD was reduced by 84 % and 92 % of the colour was removed at an optimum pH of 5 and a coagulant dose of 8 ml l-1. With aluminium
chloride at an optimum pH = 4 and coagulant dose of 5 g l-1, 74 %
COD and 86 % colour removal were observed. The results using
copper sulphate as coagulant (a less commercial coagulant) were
encouraging. At an optimum pH 6 and mass loading of 5 g l-1, 76 %
COD reduction and 78 % colour reduction were obtained. It was also
observed that after addition of coagulant, the pH of the effluent
decreases. The decrease in pH was highest for AlCl3, which was
followed by PAC and CuSO4. A significant amount of COD reduction was obtained by the coagulation process. Since coagulation is the first stage of effluent treatment, some of the coagulant cations usually remain in the treated effluent; thus, a cation such as copper may be a good catalyst for a second-stage treatment process such as wet oxidation. Copper has been found to be a better oxidation catalyst than iron and aluminium.
Abstract: Image compression is one of the most important applications of digital image processing. Advanced medical imaging
requires storage of large quantities of digitized clinical data. Due to
the constrained bandwidth and storage capacity, however, a medical
image must be compressed before transmission and storage. There
are two types of compression methods, lossless and lossy. In lossless compression, the original image is recovered without any distortion. In lossy compression, the reconstructed image contains some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods.
This work shows that lossy compression methods can be chosen for
medical image compression without significant degradation of the
image quality. In this work DCT and Fractal Compression using
Partitioned Iterated Function Systems (PIFS) are applied on different
modalities of images like CT Scan, Ultrasound, Angiogram, X-ray
and mammogram. Approximately 20 images are considered in each
modality and the average values of compression ratio and Peak
Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results, it can be concluded that DCT yields higher PSNR values and FIC yields higher compression ratios. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC wherever compression for storage and transmission is the priority, without losing diagnostically relevant picture quality.
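For reference, the two figures of merit used above can be reproduced with a minimal block-DCT codec. The sketch below is illustrative rather than the paper's pipeline: it compresses a grayscale image (dimensions assumed to be multiples of 8) by zeroing all but a fraction of the 8x8 DCT coefficients, and evaluates the result with PSNR.

```python
import numpy as np
from scipy.fft import dctn, idctn

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def dct_compress(image, keep=0.1):
    """Block-wise 8x8 DCT: keep only the largest `keep` fraction of
    coefficients per block and reconstruct. Zeroing small coefficients
    is a crude stand-in for the quantisation used in real codecs."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            block = image[i:i+8, j:j+8].astype(float)
            coeffs = dctn(block, norm='ortho')
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
            coeffs[np.abs(coeffs) < thresh] = 0.0   # discard small coefficients
            out[i:i+8, j:j+8] = idctn(coeffs, norm='ortho')
    return np.clip(out, 0, 255).astype(np.uint8)
```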
Abstract: Steganography means 'covered writing'; it is the concealment of information within computer files [1]. In other words, it is secret communication that hides the very existence of the message. In this paper, we use the term cover image for an image that does not yet contain a secret message, and stego image for an image with an embedded secret message; the secret message itself is referred to as the stego-message or hidden message. We propose a technique called the RGB intensity based steganography model, since the RGB model is commonly used in this field to hide data. The methods used here are based on the manipulation of the least significant bits of pixel values [3][4], or on the rearrangement of colours to create least-significant-bit or parity-bit patterns that correspond to the message being hidden. The proposed technique attempts to overcome the problems of embedding in a sequential fashion and of using a stego-key to select the pixels.
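A minimal sketch of the baseline LSB embedding that such models build on (the paper's RGB intensity based pixel selection itself is not reproduced here): each message bit replaces the least significant bit of one channel value, in sequential order.

```python
def embed_lsb(pixels, message_bits):
    """Hide one bit per colour channel value in the least significant
    bit. `pixels` is a flat list of 0-255 channel values; a real scheme
    would pick non-sequential positions, e.g. via a stego-key."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(stego, n_bits):
    """Recover the hidden bits from the least significant bits."""
    return [stego[i] & 1 for i in range(n_bits)]

# the message 1011 hidden in four channel values survives a round trip
assert extract_lsb(embed_lsb([200, 13, 77, 90], [1, 0, 1, 1]), 4) == [1, 0, 1, 1]
```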
Abstract: This paper presents a new heuristic algorithm for the classical symmetric traveling salesman problem (TSP). The idea of the algorithm is to cut a TSP tour into overlapping blocks and then improve each block separately. It is conjectured that the chance of improving a good solution by moving a node to a position far away from its original one is small. By searching intensively within each block, it is possible to further improve a TSP tour that cannot be improved by other local search methods. To test the performance of the proposed algorithm, computational experiments are carried out on benchmark problem instances. The computational results show that the algorithm proposed in this paper is efficient for solving TSPs.
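A minimal sketch of the block-improvement idea, assuming a 2-opt move as the within-block local search (the abstract does not commit to a specific move): the tour is cut into overlapping blocks and each block is improved exhaustively while the rest of the tour stays fixed.

```python
def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def improve_block(tour, dist, start, size):
    """Exhaustive 2-opt inside one block: reverse a segment whenever
    that shortens the tour, leaving nodes outside the block fixed."""
    improved = True
    while improved:
        improved = False
        for i in range(start, start + size - 1):
            for j in range(i + 1, start + size):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = tour[i:j + 1][::-1]   # 2-opt move
                    improved = True
    return tour

def block_search(tour, dist, size=10, overlap=5):
    """Slide overlapping blocks along the tour and improve each one;
    the overlap lets improvements propagate across block boundaries."""
    for start in range(1, len(tour) - size, size - overlap):
        improve_block(tour, dist, start, size)
    return tour
```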
Abstract: The volume of XML data exchange is increasing explosively, and the need for efficient mechanisms of XML data management is vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database
systems. Benchmarking is the best way to highlight pros and cons of
different approaches. In this study, we use a common benchmarking
scheme, known as XMark, to compare the most cited and newly
proposed DTD-independent methods in terms of logical reads,
physical I/O, CPU time and duration. We show the effect of the label path, of extracting values into a separate table, and of the type of join needed by each method for query answering.
Abstract: Several methods are available for weight and shape
optimization of structures, among which Evolutionary Structural
Optimization (ESO) is one of the most widely used methods. In ESO,
however, the optimization criterion is completely case-dependent.
Moreover, only the improving solutions are accepted during the
search. In this paper a Simulated Annealing (SA) algorithm is used
for the structural optimization problem. This algorithm differs from other random search methods in that it also accepts non-improving solutions. The SA algorithm is implemented in a way that reduces the number of finite element analyses (function evaluations).
Computational results show that SA can efficiently and effectively
solve such optimization problems within short search time.
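A generic sketch of the acceptance rule that distinguishes SA from pure descent methods such as ESO; the `neighbour` and `weight` callables stand in for the design perturbation and the finite element analysis, and objective values are cached so that each candidate costs exactly one analysis.

```python
import math
import random

def simulated_annealing(x0, neighbour, weight, t0=1.0, cooling=0.95, iters=2000):
    """Generic SA loop: non-improving designs are accepted with
    probability exp(-delta / T), which lets the search escape local
    optima that trap methods accepting only improving solutions."""
    current, f_cur = x0, weight(x0)          # one analysis for the start design
    best, f_best = current, f_cur
    T = t0
    for _ in range(iters):
        cand = neighbour(current)
        f_cand = weight(cand)                # one analysis per candidate
        delta = f_cand - f_cur
        if delta < 0 or random.random() < math.exp(-delta / T):
            current, f_cur = cand, f_cand    # accept (possibly worse) design
            if f_cur < f_best:
                best, f_best = current, f_cur
        T *= cooling                         # geometric cooling schedule
    return best, f_best
```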
Abstract: This paper presents an application of an Artificial Neural Network (ANN) to forecast the actual cost of a project based on the earned value management system (EVMS). For this purpose, some projects were randomly selected from a standard data set, and the necessary progress data, such as actual cost, actual percent complete, baseline cost, and percent complete, were produced for five periods of each project. Then an ANN with five inputs, five outputs, and one hidden layer was trained to forecast actual costs. The comparison between real and forecasted data shows good performance based on the Mean Absolute Percentage Error (MAPE) criterion. This approach could improve project cost forecasting, decrease the risk of project cost overrun, and is therefore beneficial for planning preventive actions.
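A minimal sketch of the described network on synthetic data (the standard data set used in the paper is not reproduced here): a five-input, five-output multi-layer perceptron with one hidden layer, evaluated with the MAPE criterion. The hidden layer size of 10 is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical EVMS progress data: each row holds five periods of a
# project's observed indicators; the targets are the five actual-cost
# figures to be forecast.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(40, 5))        # 40 synthetic projects
y = X * rng.uniform(0.9, 1.2, size=(40, 5))    # synthetic actual costs

model = MLPRegressor(hidden_layer_sizes=(10,), # one hidden layer, as in the paper
                     max_iter=5000, random_state=0).fit(X, y)

forecast = model.predict(X[:1])
mape = np.mean(np.abs((y[:1] - forecast) / y[:1])) * 100  # MAPE criterion
print(f"MAPE on a sample project: {mape:.1f}%")           # in-sample, illustrative
```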
Abstract: Researchers have been applying artificial/computational intelligence (AI/CI) methods to computer games. In this research field, further studies are required to compare AI/CI methods with respect to each game application. In this paper, we report our experimental results on the comparison of evolution strategy (ES), genetic algorithm (GA) and their hybrids, applied to evolving controller agents for Mario AI. The GA revealed its advantage in our experiment, whereas the expected ability of ES to exploit (fine-tune) solutions was not clearly observed. The blend crossover operator and the mutation operator of the GA might contribute well to exploring the vast search space.
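A sketch of the two GA operators credited above, under the usual real-coded formulations (the abstract does not give the parameter settings, so alpha, sigma and rate below are assumptions): BLX-alpha blend crossover and per-gene Gaussian mutation over a real-valued controller genome.

```python
import random

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha: each child gene is drawn uniformly from an interval
    stretched `alpha` beyond the parents' gene values, which promotes
    exploration of the search space."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def gaussian_mutation(genome, sigma=0.1, rate=0.05):
    """Perturb each gene with probability `rate` by Gaussian noise."""
    return [g + random.gauss(0.0, sigma) if random.random() < rate else g
            for g in genome]
```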
Abstract: Several numerical schemes utilizing central difference
approximations have been developed to solve the Goursat problem.
In recent years, however, compact discretization methods, which lead to high-order finite difference schemes, have been used, since they are capable of achieving better accuracy as well as preserving certain features of the equation, e.g. linearity. The basic idea of the new
scheme is to find the compact approximations to the derivative terms
by differentiating centrally the governing equations. Our primary
interest is to study the performance of the new scheme when applied
to two Goursat partial differential equations against the traditional
finite difference scheme.
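For context, the traditional central difference scheme that such compact schemes are tested against can be sketched as follows for a Goursat problem u_xy = f(x, y, u) with data on the characteristics x = 0 and y = 0; the explicit midpoint evaluation of f is one common variant, assumed here for simplicity.

```python
import numpy as np

def goursat_cd(f, g1, g2, X=1.0, Y=1.0, n=64):
    """March the classical scheme
        u[i+1,j+1] = u[i+1,j] + u[i,j+1] - u[i,j] + h*k*f(midpoint)
    for u_xy = f(x, y, u) with u(x,0) = g1(x) and u(0,y) = g2(y),
    evaluating f at the average of the two known neighbours."""
    h, k = X / n, Y / n
    x = np.linspace(0.0, X, n + 1)
    y = np.linspace(0.0, Y, n + 1)
    u = np.zeros((n + 1, n + 1))
    u[:, 0] = g1(x)                  # characteristic data on y = 0
    u[0, :] = g2(y)                  # characteristic data on x = 0
    for i in range(n):
        for j in range(n):
            u_mid = 0.5 * (u[i + 1, j] + u[i, j + 1])
            u[i + 1, j + 1] = (u[i + 1, j] + u[i, j + 1] - u[i, j]
                               + h * k * f(x[i] + h / 2, y[j] + k / 2, u_mid))
    return u

# u_xy = u with u(x,0) = e^x and u(0,y) = e^y has exact solution e^(x+y)
u = goursat_cd(lambda x, y, u: u, np.exp, np.exp)
print(abs(u[-1, -1] - np.exp(2.0)))  # small discretisation error at (1,1)
```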
Abstract: Independent component analysis can estimate unknown
source signals from their mixtures under the assumption that the
source signals are statistically independent. In a real environment, however, the separation performance often deteriorates because the number of source signals differs from the number of sensors. In this paper, we propose a method for estimating the number of sources based on the joint distribution of the observed signals under a two-sensor configuration. From several simulation results, it is found that the number of sources coincides with the number of peaks in the histogram of the distribution. The proposed method can estimate the number of sources even if it is larger than the number of observed signals, and it has been verified by several experiments.
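One common way to realize the peak-counting idea under a two-sensor configuration, assumed here since the abstract does not fix the histogram construction: for sparse sources, the observations cluster along the mixing-column directions, so counting local maxima in the histogram of observation angles estimates the number of sources.

```python
import numpy as np

def estimate_source_count(x1, x2, bins=180, rel_height=0.2):
    """Count peaks in the histogram of observation directions. Each
    source tends to produce one ridge along its mixing direction in
    the joint distribution of the two observed signals."""
    angles = np.mod(np.arctan2(x2, x1), np.pi)   # fold directions to [0, pi)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    thresh = rel_height * hist.max()             # ignore small fluctuations
    peaks = 0
    for i in range(bins):
        left, right = hist[(i - 1) % bins], hist[(i + 1) % bins]
        if hist[i] > thresh and hist[i] >= left and hist[i] > right:
            peaks += 1                           # one local maximum per source
    return peaks
```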
Abstract: Intrusion detection is a mechanism used to protect a
system and analyse and predict the behaviours of system users. An
ideal intrusion detection system is hard to achieve due to nonlinearity and irrelevant or redundant features in the data. This study
introduces a new anomaly-based intrusion detection model. The
suggested model is based on particle swarm optimisation and
nonlinear, multi-class and multi-kernel support vector machines.
Particle swarm optimisation is used for feature selection by applying
a new formula to update the position and the velocity of a particle;
the support vector machine is used as a classifier. The proposed
model is tested and compared with the other methods using the KDD
CUP 1999 dataset. The results indicate that this new method achieves
better accuracy rates than previous methods.
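For reference, the canonical position and velocity update that the paper's new formula modifies (the modified formula itself is not given in the abstract) looks as follows; for feature selection, each dimension of the position would be thresholded to decide whether the corresponding feature enters the SVM.

```python
import random

def pso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical per-particle PSO update:
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v
    where pbest/gbest are the particle's and the swarm's best positions."""
    new_x, new_v = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = random.random(), random.random()
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_v.append(v)
        new_x.append(x + v)
    return new_x, new_v
```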
Abstract: Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time-consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, in order to estimate
saturated hydraulic conductivity from readily available soil
properties e.g. sand, silt, and clay contents, bulk density, and organic
matter. The objective of this study was to develop neural networks
(NNs) model to estimate saturated hydraulic conductivity from
available parameters such as sand and clay contents, bulk density,
van Genuchten retention model parameters (i.e. θ_r, α, and n) as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s - θ_FC, and (2) φ_eff = θ_s - θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at -33 kPa matric potential, and θ_inf is the water content at the inflection point.
A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing the NN model. A commercial
neural network toolbox of the MATLAB software, with a multi-layer perceptron model and the back-propagation algorithm, was used for the
training procedure. Statistical parameters such as the coefficient of determination (R2) and the mean square error (MSE) were also used to evaluate the developed NN model. The best numbers of neurons in the hidden layer of the NN model for methods (1) and (2) were found to be 44 and 6, respectively. The R2 and MSE values of the test
phase were determined for method (1), 0.94 and 0.0016, and for
method (2), 0.98 and 0.00065, respectively, which shows that method
(2) estimates saturated hydraulic conductivity better than method (1).
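A sketch of the two effective porosity definitions used as NN inputs, computed from the van Genuchten parameters; the inflection point water content follows Dexter's closed form for the inflection of the θ(ln h) curve, and α is assumed to be in kPa⁻¹ so the -33 kPa point can be evaluated directly.

```python
def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve with m = 1 - 1/n; h in kPa (> 0),
    alpha in 1/kPa (an assumption about the parameter units)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def effective_porosity(theta_r, theta_s, alpha, n, method=1):
    """Method (1): phi_eff = theta_s - theta_FC, with theta_FC taken at
    33 kPa suction. Method (2): phi_eff = theta_s - theta_inf, using
    Dexter's form for the water content at the inflection point."""
    m = 1.0 - 1.0 / n
    if method == 1:
        return theta_s - vg_theta(33.0, theta_r, theta_s, alpha, n)
    theta_inf = theta_r + (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-m)
    return theta_s - theta_inf
```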
Abstract: Variational methods for optical flow estimation are
known for their excellent performance. The method proposed by Brox
et al. [5] exemplifies the strength of that framework. It combines several concepts into a single energy functional that is then minimized according to a clear numerical procedure. In this paper we propose a modification of that algorithm starting from the spatiotemporal gradient constancy assumption. The numerical scheme allows us to establish the connection between our model and the CLG(H) method introduced in [18]. Experimental evaluation carried out on synthetic sequences shows the significant superiority of the spatial variant of the proposed method. A comparison between the methods on a real-world sequence is also included.
Abstract: This paper describes the interfacing of C and the TMS320C6713 assembly language, which is crucially important for many real-time applications. Similarly, the interfacing of C with the assembly language of a conventional microprocessor such as the MC68000 is presented for comparison. It should be noted, however, that the way the C compiler passes arguments among functions in the TMS320C6713-based environment is totally different from the way arguments are passed on a conventional microprocessor such as the MC68000. Therefore, it is very
important for a user of the TMS320C6713-based system to properly
understand and follow the register conventions when interfacing C
with a TMS320C6713 assembly language subroutine. It should also be noted that in some cases (examples 6-9) the endian mode of the
board needs to be taken into consideration. In this paper, one method
is presented in great detail. Other methods will be presented in the
future.
Abstract: Understanding patient factors related to physical activity behavior is important in the management of Type 2 diabetes. This study applied the Theory of Planned Behavior model to understand physical activity behavior among sampled Type 2 diabetics in Kenya. The study was conducted within the diabetic clinic at Kisii Level 5 Hospital and adopted a sequential mixed-methods design, beginning with a qualitative phase and ending with a quantitative phase. Qualitative data were analyzed using the grounded theory method. Structural equation modeling with maximum likelihood estimation was used to analyze the quantitative data. The common fit indices revealed that the Theory of Planned Behavior fitted the data acceptably well among the Type 2 diabetics and within physical activity behavior (χ² = 213, df = 84, n = 230, p = .061, χ²/df = 2.53; TLI = .97; CFI = .96; RMSEA (90% CI) = .073 (.029, .08)). The theory proved to be useful in understanding physical activity behavior among Type 2 diabetics.
Abstract: Severe symptoms, such as dissociation, depersonalization, self-mutilation, and suicidal ideations and gestures, are the main reasons for a person to be diagnosed with Borderline Personality Disorder (BPD) and admitted to an inpatient psychiatric hospital. However, these symptoms are also indicators of a severe traumatic history, as indicated by the extensive research on the topic. Unfortunately, patients with such a clinical presentation are often treated repeatedly only for their symptomatic behavior, while the main cause of their suffering, the trauma itself, is usually left unaddressed therapeutically. All of the highly structured, replicable, and manualized treatments lack recognition of the uniqueness of the person and fail to respect his/her right to experience and react in an idiosyncratic manner. Thus the communicative and adaptive meaning of such symptomatic behavior is missed. Only its pathological side is recognized and subjected to correction and stigmatization, and the message that the person is damaged goods in need of fixing is conveyed once again. This time, however, the message is even more convincing for the victim, because it is sent by mental health providers, who have the credibility to make such a judgment. The result is a revolving door of very expensive hospitalizations for only a temporary and patchy fix. In this way the patients, once victims of abuse and hardship, are left invalidated, and their re-victimization is perpetuated in their search for understanding and help. Keywords: borderline personality disorder (BPD), complex PTSD, integrative treatment of trauma, re-victimization of trauma victims.