Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection plays a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that use traditional metrics. To enable comparison, two types of data have been used: the first part is taken from the Constructive Cost Model (COCOMO'81) data set, which is commonly used in previous studies, and the second part was collected according to the new metrics at a leading international company in Turkey. The suitability of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the collected samples, the k-fold cross-validation method is also applied. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
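The k-fold procedure itself is simple to reproduce; a minimal index-partitioning sketch (the MLP training step on each split is omitted, and the fold assignment shown is one simple choice):

```python
def k_fold_splits(n_samples, k):
    """Partition sample indices into k interleaved folds; each fold
    serves once as the held-out test set while the remaining folds
    form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

# Example: 10 project records, 5 folds -> five train/test splits of 8/2.
splits = list(k_fold_splits(10, 5))
```

Each record is thus used for testing exactly once, which makes the most of a small, expensively collected sample.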
Abstract: On the basis of the linearized Phillips-Heffron model of a single-machine power system, a novel method for designing a unified power flow controller (UPFC) based output feedback controller is presented. The design problem of the output feedback controller for the UPFC is formulated as an optimization problem with a time-domain objective function, which is solved by iteration particle swarm optimization (IPSO), a variant with a strong ability to locate near-optimal solutions. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results demonstrate the effectiveness and robustness of the proposed method in achieving a high-performance power system. The simulation study also shows that the controller designed by iteration PSO outperforms one designed by classical PSO.
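The classical PSO baseline follows the standard personal-best/global-best update rule; a minimal sketch on a toy quadratic objective (standing in for the time-domain damping cost; the IPSO modification is not reproduced, and all coefficients are illustrative):

```python
import random

def pso(objective, dim, n_particles=20, iters=200, seed=1,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Classical PSO: each particle tracks its personal best, and the
    swarm tracks a global best; both attract every velocity update."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for the time-domain damping cost.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

IPSO layers an outer iteration scheme on top of this loop; the inner dynamics are the same.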
Abstract: Network security remains a priority for almost all companies. Existing security systems have shown their limits; thus a new type of security system was born: honeypots. Honeypots are defined as programs or dedicated servers intended to attract attackers in order to study their behaviour. It is in this context that the leurre.com project, gathering about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, on a given platform, the evolution of attacks according to the hour at which they occur. Afterwards, we identify the most attacked services by studying the attacks observed on the various ports. It should be noted that this article was elaborated within the framework of the research projects on honeypots within the LABTIC (Laboratory of Information Technologies and Communication).
Abstract: In this paper we present a novel approach to face image coding. The proposed method makes use of features of video encoders such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame, of I-frame type; the face being encoded is placed as the second frame, of P-frame type. Information about the feature positions, colour change, the selected prototype and the data flow of the P frame is sent to the decoder. A precondition is that both encoder and decoder hold the same database of prototypes. We ran an experiment with the H.264 video encoder and compared the results with those achieved by JPEG and JPEG2000. The results show that our approach achieves three times lower bitrate and twice the PSNR compared with JPEG. Compared with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
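The PSNR figure used in the comparison is computed in the standard way from the mean squared error between original and decoded pixels; a minimal sketch:

```python
import math

def psnr(original, decoded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    pixel sequences: 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A decoded block differing slightly from the original (toy values).
ref = [100, 120, 130, 140]
dec = [101, 119, 131, 139]
quality_db = psnr(ref, dec)
```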
Abstract: Robot manipulators are highly coupled nonlinear systems; therefore the real system and the mathematical model of its dynamics used for control system design are never identical. Hence, fine-tuning of the controller is always needed, and for effective tuning a fast simulation is desirable. Since Matlab incorporates LAPACK to speed up matrix computation, the dynamics and the forward and inverse kinematics of the PUMA 560 are modeled in Matlab/Simulink in such a way that all operations are matrix based, which greatly reduces simulation time. This paper compares PID parameter tuning using Genetic Algorithm, Simulated Annealing, Generalized Pattern Search (GPS) and hybrid search techniques. Controller performance for all these methods is compared in terms of joint-space ITSE and Cartesian-space ISE for tracking circular and butterfly trajectories. A disturbance signal is added to check the robustness of the controller. The GA-GPS hybrid search technique shows the best results for tuning the PID controller parameters in terms of ITSE and robustness.
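The ITSE criterion that each tuning method minimizes can be sketched with a toy first-order plant (the PUMA 560 dynamics are not reproduced here; the plant, gains and horizon are all illustrative):

```python
def itse_for_pid(kp, ki, kd, setpoint=1.0, dt=0.001, t_end=2.0):
    """Simulate a PID loop around a first-order plant (dy/dt = -y + u)
    with forward-Euler steps and accumulate the integral of
    time-weighted squared error, ITSE = sum(t * e^2 * dt)."""
    y, integ, prev_e, itse = 0.0, 0.0, setpoint, 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        e = setpoint - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv   # PID control law
        y += (-y + u) * dt                     # plant update
        itse += t * e * e * dt
        prev_e = e
    return itse, y

cost, final_y = itse_for_pid(kp=5.0, ki=10.0, kd=0.05)
```

An optimizer (GA, SA, GPS or a hybrid) would repeatedly call such a function with candidate gains and keep the set with the lowest cost.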
Abstract: The incorporation of renewable energy sources for sustainable electricity production is assuming an increasingly prominent role in electric power systems. It is therefore inevitable that the characteristics of future power networks, including their stability, will be influenced by the properties of sustainable energy sources. One of the distinctive attributes of sustainable energy sources is their stochastic behavior. This paper investigates the impacts of this stochastic behavior on small-disturbance rotor-angle stability in upcoming electric power networks. Considering the various types of renewable energy sources and the vast variety of system configurations, sensitivity analysis can be an efficient route towards generalizing the effects of new energy sources on stability. In this paper, the definition of small-disturbance angle stability for future power systems and an iterative-stochastic way of analyzing it are presented. The effects of system parameters on this type of stability are also described by performing a sensitivity analysis for an electric power test system.
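The iterative-stochastic idea can be illustrated with a toy Monte Carlo experiment on an assumed second-order swing-equation model (this is not the paper's test system; the parameter distribution and coefficients are made up for illustration):

```python
import random

def fraction_stable(samples=2000, seed=7):
    """Sample a stochastically perturbed damping coefficient, form the
    2x2 small-signal state matrix A = [[0, 1], [-k, -d]] of a
    swing-equation model, and count the share of samples that are
    small-disturbance stable (all eigenvalues in the left half-plane,
    which for this A holds exactly when d > 0 and k > 0)."""
    rng = random.Random(seed)
    stable = 0
    for _ in range(samples):
        d = 0.5 + rng.gauss(0.0, 0.4)  # damping perturbed by renewable in-feed
        k = 8.0                        # synchronizing coefficient (assumed fixed)
        if d > 0 and k > 0:
            stable += 1
    return stable / samples

share = fraction_stable()
```

Sweeping the distribution parameters and observing how the stable share changes is the essence of the sensitivity analysis described above.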
Abstract: Trace element speciation of an integrated soil
amendment matrix was studied with a modified BCR sequential
extraction procedure. The analysis included pseudo-total
concentration determinations according to USEPA 3051A and
relevant physicochemical properties by standardized methods. Based
on the results, the soil amendment matrix possessed neutralization
capacity comparable to commercial fertilizers. Additionally, the
pseudo-total concentrations of all trace elements included in the
Finnish regulation for agricultural fertilizers were lower than the
respective statutory limit values. According to chemical speciation,
the lability of trace elements increased in the following order: Hg <
Cr < Co < Cu < As < Zn < Ni < Pb < Cd < V < Mo < Ba. The
validity of the BCR approach as a tool for chemical speciation was
confirmed by the additional acid digestion phase. Recovery of trace
elements during the procedure assured the validity of the approach
and indicated good quality of the analytical work.
Abstract: In this paper we compare four content-based objective metrics with the results of subjective tests on 80 video sequences. We also include two established objective metrics, VQM and SSIM, in our comparison to serve as "reference" metrics, because their pros and cons have already been published. Each video sequence was preprocessed by a region recognition algorithm and the particular objective video quality metrics were then calculated, i.e. mutual information, angular distance, moment of angle and the normalized cross-correlation measure. The Pearson coefficient was calculated to express each metric's relationship to the accuracy of the model, and the Spearman rank-order correlation coefficient to represent its relationship to monotonicity. The results show that the model with mutual information as the objective metric provides the best results and is suitable for evaluating the quality of video sequences.
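The two correlation figures are standard; a self-contained sketch, with made-up metric scores standing in for real subjective data (Spearman is simply Pearson applied to the ranks, so it rewards monotonicity even when the relationship is not linear):

```python
import math

def pearson(x, y):
    """Pearson linear correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation: Pearson applied to ranks
    (no tie handling in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Objective metric scores vs. subjective scores (illustrative values).
metric = [0.2, 0.4, 0.5, 0.7, 0.9]
mos = [1.0, 2.1, 2.5, 4.4, 4.9]
p, s = pearson(metric, mos), spearman(metric, mos)
```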
Abstract: Reducing river sediments through path correction and preservation of river walls leads to a considerable reduction of sedimentation at pumping stations. Path correction and preservation of walls are not limited to one particular method; depending on the conditions, a combination of several methods can be employed. In this article, we review and evaluate methods for the preservation of river banks in order to reduce sediment loads.
Abstract: The protection of parallel transmission lines has been a challenging task due to the mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying the different patterns of the associated signals. The proposed algorithm consists of a time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault while varying fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed scheme is able to classify all the faults on the parallel transmission line rapidly and correctly.
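The wavelet analysis step can be illustrated with one level of the Haar transform (the abstract does not name its mother wavelet, so Haar is an assumption; fault-generated transients show up as large detail coefficients):

```python
import math

def haar_dwt(signal):
    """One level of the orthonormal Haar wavelet transform, producing
    approximation (low-pass) and detail (high-pass) coefficients.
    The transform is energy-preserving."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A current waveform with an abrupt fault-like jump part-way through.
x = [1.0, 1.0, 1.0, 6.0, 6.0, 6.0, 6.0, 6.0]
a, d = haar_dwt(x)
```

Features derived from the detail coefficients (e.g. per-band energies) would then be fed to the neural network for fault-type classification.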
Abstract: This paper presents the development of a Bayesian
belief network classifier for prediction of graft status and survival
period in renal transplantation using the patient profile information
prior to the transplantation. The objective was to explore feasibility
of developing a decision making tool for identifying the most suitable
recipient among the candidate pool members. The dataset was
compiled from the University of Toledo Medical Center Hospital
patients as reported to the United Network for Organ Sharing (UNOS), and had
1228 patient records for the period covering 1987 through 2009. The
Bayes net classifiers were developed using the Weka machine
learning software workbench. Two separate classifiers were induced
from the data set, one to predict the status of the graft as either failed
or living, and a second classifier to predict the graft survival period.
The classifier for graft status prediction performed very well with a
prediction accuracy of 97.8% and true positive rates of 0.967 and
0.988 for the living and failed classes, respectively. The second
classifier to predict the graft survival period yielded a prediction
accuracy of 68.2% and a true positive rate of 0.85 for the class
representing those instances with kidneys failing during the first year
following transplantation. Simulation results indicated that it is
feasible to develop a successful Bayesian belief network classifier for
prediction of graft status, but not the graft survival period, using the
information in UNOS database.
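The Weka BayesNet classifier itself is not reproduced here, but the underlying idea can be sketched with naive Bayes, the simplest Bayesian network (a class node with independent feature children); the features and records below are hypothetical, not UNOS data:

```python
from collections import defaultdict

def train_naive_bayes(records, labels):
    """Fit a categorical naive Bayes classifier with add-one smoothing
    (two values per feature assumed in the smoothing denominator)."""
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for rec, c in zip(records, labels):
        for f, v in enumerate(rec):
            counts[(c, f)][v] += 1
        totals[c] += 1

    def predict(rec):
        best, best_p = None, -1.0
        for c in classes:
            p = prior[c]
            for f, v in enumerate(rec):
                p *= (counts[(c, f)][v] + 1) / (totals[c] + 2)
            if p > best_p:
                best, best_p = c, p
        return best

    return predict

# Hypothetical pre-transplant profiles: (age_band, diabetic).
recs = [("young", "no"), ("young", "no"), ("old", "yes"), ("old", "yes")]
labs = ["living", "living", "failed", "failed"]
predict = train_naive_bayes(recs, labs)
status = predict(("young", "no"))
```

A full Bayesian belief network relaxes the independence assumption by learning edges between feature nodes, which is what the Weka workbench induces from the data.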
Abstract: In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis, measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
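Discriminant-analysis separability criteria of this kind follow the spirit of Otsu's method; a minimal two-class sketch (the paper's multi-class recursive algorithms are not reproduced):

```python
def otsu_threshold(values):
    """Choose the gray-level cut that maximizes the between-class
    variance w0*w1*(m0 - m1)^2, i.e. the separability of the two
    resulting pixel classes."""
    best_t, best_sep = None, -1.0
    levels = sorted(set(values))
    for t in levels[:-1]:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / len(values), len(hi) / len(values)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        sep = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if sep > best_sep:
            best_t, best_sep = t, sep
    return best_t

# Bimodal "pixel" sample: dark text strokes vs. bright background.
pixels = [12, 15, 10, 14, 13] * 4 + [200, 210, 205, 198, 202] * 4
t = otsu_threshold(pixels)
```

Applied recursively, and with the criterion value used to decide when further splitting stops paying off, this yields the class count automatically.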
Abstract: Tracing and locating the geographical position of users (Geolocation) is used extensively in today's Internet. Whenever we, for example, request a page from Google, we are (unless a specific configuration has been made) automatically forwarded to the page in the relevant language and, among other things, shown advertisements specific to the identified location. Geolocation has a significant impact especially in the area of network security. Because of the way the Internet works, attacks can be executed from almost anywhere. For attribution, knowledge of the origin of an attack, and thus Geolocation, is therefore mandatory in order to be able to trace back an attacker. In addition, Geolocation can also be used very successfully to increase the security of a network during operation (i.e. before an intrusion has actually taken place). Similarly to greylisting for email, Geolocation makes it possible (i) to correlate detected attacks with new connections and (ii) consequently to classify traffic a priori as more suspicious (in particular allowing that traffic to be inspected in more detail). Although numerous Geolocation techniques exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods to allow improved and optimized Geolocation. Thus, we present our architecture for improved Geolocation, designing a new algorithm which combines several Geolocation techniques to increase accuracy.
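The combination algorithm itself is not specified in the abstract; one simple way to fuse the outputs of several techniques is a confidence-weighted average, sketched below with hypothetical estimates (real fusion must additionally handle outliers and the longitude wrap-around, which this sketch ignores):

```python
def combine_estimates(estimates):
    """Fuse several (lat, lon, confidence) geolocation estimates into
    one position by confidence-weighted averaging."""
    total = sum(w for _, _, w in estimates)
    lat = sum(la * w for la, _, w in estimates) / total
    lon = sum(lo * w for _, lo, w in estimates) / total
    return lat, lon

# Hypothetical outputs of three techniques for the same target host.
fused = combine_estimates([
    (48.1, 11.5, 0.2),   # low-confidence registry (whois) entry
    (48.3, 11.7, 0.5),   # delay-based measurement estimate
    (48.2, 11.6, 0.3),   # landmark-based estimate
])
```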
Abstract: This paper presents an approach for early breast cancer diagnosis that combines artificial neural networks (ANN) with multiwavelet-packet-based subband image decomposition. Since microcalcifications correspond to the high-frequency components of the image spectrum, their detection is achieved by decomposing the mammograms into different frequency subbands and reconstructing the mammograms from the subbands containing only high frequencies. For this approach we employed different types of multiwavelet packets and used the result as the input to a neural network for classification. The proposed
methodology is tested using the Nijmegen and the Mammographic
Image Analysis Society (MIAS) mammographic databases and
images collected from local hospitals. Results are presented as the
receiver operating characteristic (ROC) performance and are
quantified by the area under the ROC curve.
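The area under the ROC curve can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity; a minimal sketch with made-up scores, not values from the study:

```python
def auc_from_scores(scores, labels):
    """AUC = P(score of a random positive > score of a random
    negative), with ties counted as one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Classifier outputs for 3 microcalcification and 3 normal regions.
auc = auc_from_scores([0.9, 0.8, 0.4, 0.35, 0.3, 0.1],
                      [1, 1, 0, 1, 0, 0])
```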
Abstract: This paper proposes a new approach for image encryption
using a combination of different permutation techniques.
The main idea behind the present work is that an image can be
viewed as an arrangement of bits, pixels and blocks. The intelligible
information present in an image is due to the correlations among the
bits, pixels and blocks in a given arrangement. This perceivable information
can be reduced by decreasing the correlation among the bits,
pixels and blocks using certain permutation techniques. This paper
presents an approach for a random combination of the aforementioned
permutations for image encryption. From the results, it is observed
that the permutation of bits is effective in significantly reducing the
correlation, thereby decreasing the perceptual information, whereas the permutations of pixels and blocks are better at providing higher-level security than bit permutation. A random combination method employing all three techniques is thus observed to be
useful for tactical security applications, where protection is needed
only against a casual observer.
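The bit-permutation idea can be sketched as a keyed shuffle of bit positions and its exact inverse (a toy illustration of the principle, not the paper's scheme; the key-derived shuffle below is an assumption):

```python
import random

def keyed_permutation(n, key):
    """Derive a deterministic permutation of n positions from a key."""
    perm = list(range(n))
    random.Random(key).shuffle(perm)
    return perm

def permute_bits(data, key, inverse=False):
    """Scramble/unscramble a byte string by permuting its individual
    bits (LSB-first within each byte) under a key-derived permutation."""
    bits = [(b >> i) & 1 for b in data for i in range(8)]
    perm = keyed_permutation(len(bits), key)
    out = [0] * len(bits)
    for src, dst in enumerate(perm):
        if inverse:
            out[src] = bits[dst]   # undo the mapping
        else:
            out[dst] = bits[src]   # apply the mapping
    return bytes(
        sum(out[i + j] << j for j in range(8)) for i in range(0, len(out), 8)
    )

plain = b"pixels"
cipher = permute_bits(plain, key=1234)
recovered = permute_bits(cipher, key=1234, inverse=True)
```

Pixel and block permutations follow the same pattern with coarser units; the random combination applies a keyed sequence of the three.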
Abstract: Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designer's perspective. A multiple domain features mapping (MDFM) methodology is given for DFC; using MDFM, design features can be used to estimate the LCC. From the DFC standpoint, the design features of family cars were obtained, such as overall dimensions, engine power and emission volume. At the conceptual design stage, the cars' LCC was estimated using the back-propagation (BP) artificial neural network (ANN) method and case-based reasoning (CBR). Hamming distance was used to measure the similarity among cases in the CBR method, while the Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used for the ANN. The differences between the CBR and ANN LCC estimation models are discussed. Each method has its own shortcomings, and combining ANN and CBR improves estimation accuracy. First, the ANN is used to select the design features that affect LCC. Second, the LCC estimates of the ANN are used to raise the accuracy of the LCC estimation in the CBR method. Third, the ANN is used to estimate and correct the errors in the CBR estimation results when the accuracy is insufficient. Finally, economical family cars and a sport utility vehicle (SUV) are given as LCC estimation cases for this hybrid approach combining ANN and CBR.
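The Hamming-based retrieval step of the CBR side can be sketched as follows (the feature discretization and case values below are hypothetical, not the paper's data):

```python
def hamming(a, b):
    """Number of positions at which two equal-length feature vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def retrieve_case(query, case_base):
    """CBR retrieval: return the stored (features, lcc) case whose
    discretized design features are closest to the query in Hamming
    distance."""
    return min(case_base, key=lambda case: hamming(query, case[0]))

# Hypothetical discretized features: (size_class, power_class, emission_class)
cases = [
    ((0, 1, 1), 18500.0),   # small car, mid power
    ((1, 1, 0), 21000.0),   # mid-size, low emission
    ((1, 2, 2), 30500.0),   # large, high power
]
features, lcc = retrieve_case((1, 2, 1), cases)
```

In the hybrid approach, the ANN's own LCC estimate would then be used to adapt or correct the retrieved case's value.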
Abstract: Today's Wi-Fi generation utilizes the latest technology in their daily lives. Instructors at National University, the second largest non-profit private institution of higher learning in California, are incorporating these new tools to modify their Online class formats to better accommodate these new skills in their distance education delivery modes. The University provides accelerated learning in a one-course-per-month format, both Onsite and Online. Since there has been such a significant increase in Online classes over the past three years, and this is expected to grow even more over the next five years, instructors cannot afford to maintain the status quo and forgo these new options. It is at the instructors' discretion which accessories they use and how comfortable and familiar they are with the technology. This paper explores the effects and summarizes students' comments on some of the new technological options that have recently been provided to make students' online learning experience more exciting and meaningful.
Abstract: A new topology of unified power quality conditioner (UPQC) is proposed for power quality (PQ) improvement in a three-phase four-wire (3P-4W) distribution system. For neutral current mitigation, a star-hexagon transformer is connected in shunt near the load along with a three-leg voltage source inverter (VSI) based UPQC. For the mitigation of the source neutral current, the use of passive elements is advantageous over active compensation due to their ruggedness and lower control complexity. In addition, by using a star-hexagon transformer for neutral current mitigation, the overall rating of the UPQC is reduced. The performance of the proposed 3P-4W UPQC topology is evaluated for power-factor correction, load balancing, neutral current mitigation and the mitigation of voltage and current harmonics. A simple control algorithm based on the Unit Vector Template (UVT) technique is used as the UPQC control strategy for mitigating the different PQ problems. In this control scheme, the current/voltage control is applied to the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay. Moreover, no extra control is required for neutral source current compensation; hence the number of current sensors is reduced. The performance of the proposed UPQC topology is analyzed through simulation results using MATLAB software with its Simulink and Power System Blockset toolboxes.
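The core of the UVT idea, extracting a unit-amplitude template in phase with the fundamental of a distorted supply voltage, can be illustrated for one phase (a single-bin DFT stands in for the filtering stage; the actual three-phase UPQC control is not reproduced):

```python
import math

def unit_vector_template(v, samples_per_cycle):
    """Project one cycle of a distorted voltage onto its fundamental
    (single-bin DFT), then normalize to unit amplitude to obtain the
    in-phase unit vector template."""
    n = samples_per_cycle
    a = sum(v[k] * math.cos(2 * math.pi * k / n) for k in range(n)) * 2 / n
    b = sum(v[k] * math.sin(2 * math.pi * k / n) for k in range(n)) * 2 / n
    amp = math.hypot(a, b)
    return [(a * math.cos(2 * math.pi * k / n)
             + b * math.sin(2 * math.pi * k / n)) / amp for k in range(n)]

# Distorted supply: 230 V fundamental plus a 20% fifth harmonic.
N = 64
v = [230.0 * math.sin(2 * math.pi * k / N)
     + 46.0 * math.sin(10 * math.pi * k / N) for k in range(N)]
u = unit_vector_template(v, N)
```

Multiplying such templates by the desired peak values yields the reference currents/voltages against which the fundamental quantities are controlled.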
Abstract: Variable channel conditions in underwater networks, and variable distances between sensors due to water currents, lead to a variable bit error rate (BER). This variability in BER greatly affects the energy efficiency of the error-correction techniques used. In this paper an energy-efficient adaptive hybrid error-correction technique (AHECT) is proposed. AHECT adaptively switches from pure retransmission (ARQ) in low-BER cases to a hybrid technique with variable encoding rates (ARQ & FEC) in high-BER cases. An adaptation algorithm is proposed that depends on a precalculated packet acceptance rate (PAR) look-up table, the current BER, the packet size and the error-correction technique in use. Based on this adaptation algorithm, a periodic 3-bit feedback field is added to the acknowledgment packet to state which error-correction technique is suitable for the current channel conditions and distance. Comparative studies were carried out between this technique and other techniques, and the results show that AHECT is more energy efficient and has a higher probability of success than all of them.
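The adaptation step reduces to a table lookup; a sketch of the selection logic (the BER bands, table entries and coding rates below are illustrative, not the paper's precalculated PAR values):

```python
# The chosen row index would be signalled back in the 3-bit
# acknowledgment feedback field.
SCHEMES = ["ARQ", "ARQ+FEC r=3/4", "ARQ+FEC r=1/2"]

PAR_TABLE = {
    # (ber_band, packet_size): index of the scheme expected to cost
    # the least energy for that operating point (illustrative only)
    ("low", 256): 0,
    ("low", 1024): 0,
    ("mid", 256): 1,
    ("mid", 1024): 1,
    ("high", 256): 1,
    ("high", 1024): 2,
}

def ber_band(ber):
    """Quantize the measured BER into the table's coarse bands."""
    if ber < 1e-5:
        return "low"
    if ber < 1e-3:
        return "mid"
    return "high"

def choose_scheme(ber, packet_size):
    idx = PAR_TABLE[(ber_band(ber), packet_size)]
    return idx, SCHEMES[idx]

feedback, scheme = choose_scheme(ber=5e-3, packet_size=1024)
```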
Abstract: Natural convection heat transfer from a heated horizontal semi-circular cylinder (flat surface upward) has been investigated over ranges of the Grashof and Prandtl numbers. The governing partial differential equations (continuity, Navier-Stokes and energy equations) have been solved numerically using a finite volume formulation. In addition, the role of the type of thermal boundary condition imposed at the cylinder surface, namely constant wall temperature (CWT) and constant heat flux (CHF), is explored. The resulting flow and temperature fields are visualized in terms of the streamline and isotherm patterns in the proximity of the cylinder. The flow remains attached to the cylinder surface over the range of conditions spanned here, except for certain combinations of Grashof and Prandtl numbers at which a separated flow region is observed when the constant wall temperature condition is prescribed on the surface of the cylinder. The heat transfer characteristics are analyzed in terms of the local and average Nusselt numbers. The maximum value of the local Nusselt number always occurs at the corner points, whereas it is found to be minimum at the rear stagnation point on the flat surface. Overall, the average Nusselt number increases with Grashof and/or Prandtl number, in accordance with scaling considerations. The numerical results are used to develop simple correlations as functions of the Grashof and Prandtl numbers, thereby enabling interpolation of the present numerical results for intermediate values of the Prandtl or Grashof numbers for both thermal boundary conditions.
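Correlations of this kind are commonly power laws fitted in log space; a sketch that recovers assumed exponents from synthetic data (the coefficient and exponents below are illustrative, not the paper's correlation):

```python
import math

def fit_power_law(gr, pr, nu):
    """Fit Nu = c * Gr^a * Pr^b by linear least squares in log space,
    solving the 3x3 normal equations by Gauss-Jordan elimination."""
    rows = [[1.0, math.log(g), math.log(p)] for g, p in zip(gr, pr)]
    y = [math.log(v) for v in nu]
    # Normal equations: (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * v for r, v in zip(rows, y)) for i in range(3)]
    m = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    logc, a, b = (m[i][3] / m[i][i] for i in range(3))
    return math.exp(logc), a, b

# Synthetic data generated from Nu = 0.5 * Gr^0.25 * Pr^0.3
grs = [1e3, 1e4, 1e5, 1e3, 1e4, 1e5]
prs = [0.7, 0.7, 0.7, 7.0, 7.0, 7.0]
nus = [0.5 * g ** 0.25 * p ** 0.3 for g, p in zip(grs, prs)]
c, a, b = fit_power_law(grs, prs, nus)
```

Applied to the computed Nusselt numbers, such a fit gives the simple interpolating correlations described above, one per thermal boundary condition.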