Abstract: The counting and analysis of blood cells allows the
evaluation and diagnosis of a vast number of diseases. In particular,
the analysis of white blood cells (WBCs) is a topic of great interest to
hematologists. Nowadays the morphological analysis of blood cells is
performed manually by skilled operators. This involves numerous
drawbacks, such as the slowness of the analysis and a nonstandard
accuracy that depends on the operator's skill. In the literature there are
only a few examples of automated systems for analyzing white blood
cells, most of which are only partial. This paper presents a complete
and fully automatic method for white blood cell identification from
microscopic images. The proposed method first locates the white
blood cells, from which the nucleus and cytoplasm are subsequently
extracted. The whole work has been developed in the MATLAB
environment, in particular with the Image Processing Toolbox.
Abstract: A new technique based on pattern search optimization is proposed in this paper for estimating solar cell parameters. The estimated parameters are the generated photocurrent, saturation current, series resistance, shunt resistance, and ideality factor. The proposed approach is tested and validated using the double diode model. The performance of the developed approach signifies its potential as a promising estimation tool.
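Pattern (direct) search needs no gradients, which suits the non-convex parameter-estimation problem described above. The abstract gives no implementation details, so the following Python sketch makes several illustrative assumptions: a compass-search variant of pattern search, a simplified explicit evaluation of the double diode model, and synthetic parameter values.

```python
import numpy as np

def double_diode_current(V, I_meas, p, Vt=0.02585):
    # Double diode model, evaluated explicitly by inserting the measured
    # current into the series-resistance term (a common simplification
    # that avoids solving the implicit equation at every objective call).
    Iph, I01, I02, a1, a2, Rs, Rsh = p
    Vd = V + I_meas * Rs
    return (Iph
            - I01 * (np.exp(Vd / (a1 * Vt)) - 1.0)
            - I02 * (np.exp(Vd / (a2 * Vt)) - 1.0)
            - Vd / Rsh)

def sse(p, V, I_meas):
    # Sum of squared errors between measured and modeled current.
    return float(np.sum((I_meas - double_diode_current(V, I_meas, p)) ** 2))

def pattern_search(f, x0, step=0.1, shrink=0.5, tol=1e-8, max_iter=2000):
    # Basic compass (pattern) search: poll +/-step along each coordinate,
    # accept improving moves, shrink the mesh when no poll improves.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step * max(abs(x[i]), 1e-3)
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Synthetic experiment with assumed parameter values:
# [Iph, I01, I02, a1, a2, Rs, Rsh]
true_p = np.array([3.0, 1e-6, 5e-7, 1.5, 2.0, 0.02, 50.0])
V = np.linspace(0.0, 0.45, 30)
I = np.full_like(V, true_p[0])
for _ in range(60):                 # fixed-point solve for consistent data
    I = double_diode_current(V, I, true_p)

x0 = true_p * 1.3                   # deliberately poor starting point
f_before = sse(x0, V, I)
p_fit, f_after = pattern_search(lambda p: sse(p, V, I), x0)
```

Because the search only compares objective values, it tolerates the non-smooth, multi-modal error surfaces typical of diode-model fitting.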
Abstract: With the developments in the economic sector and the
financial crises occurring all over the world, volatility measurement
has become one of the most important concepts in financial time
series analysis. In this paper we therefore discuss the volatility of the
Amman stock market (Jordan) over a certain period of time. Since
the wavelet transform is one of the most popular filtering methods
and has grown very quickly over the last decade, we compare this
method with a traditional technique, the fast Fourier transform, to
decide which method is better for analyzing volatility. The comparison
is carried out on some statistical properties by using the Matlab
program.
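The two filtering methods under comparison can be illustrated compactly. The sketch below is in Python rather than the Matlab used in the paper; the Haar wavelet, the synthetic return series, and the energy-by-scale summary are illustrative assumptions, not the paper's actual data or wavelet choice.

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar wavelet transform:
    # pairwise averages (approximation) and differences (detail).
    x = np.asarray(x, dtype=float)
    if x.size % 2:
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavelet_energies(x, levels=4):
    # Detail-coefficient energy per scale: a crude scale-by-scale
    # volatility summary of the return series.
    energies = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        energies.append(float(np.sum(d ** 2)))
    return energies

# Synthetic daily log-returns with a high-volatility episode (assumed data).
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, 512)
returns[200:300] *= 4.0

power = np.abs(np.fft.fft(returns)) ** 2 / returns.size  # FFT power spectrum
scale_energy = wavelet_energies(returns)
approx, detail = haar_dwt(returns)
```

Both transforms conserve the total energy of the series (Parseval's relation), so their statistical summaries can be compared on a like-for-like basis; the wavelet's detail coefficients additionally localize the volatility burst in time, which the global FFT spectrum cannot.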
Abstract: An empirical study of web applications that use
software frameworks is presented here. The analysis is based on two
approaches. In the first, developers using such frameworks are
required, based on their experience, to assign weights to parameters
such as the database connection. In the second approach, a performance
testing tool, OpenSTA, is used to compute start time and other such
measures. From such an analysis, it is concluded that open source
software is superior to proprietary software. The motivation behind
this research is to examine ways in which a quantitative assessment
can be made of software in general and frameworks in particular.
Concepts such as metrics and architectural styles are discussed along
with previously published research.
Abstract: Reciprocating compressors are flexible enough to handle wide capacity and condition swings, offer a very efficient method of compressing almost any gas mixture over a wide range of pressures, can generate high head independent of gas density, and have numerous applications and wide power ratings. These qualities make them vital components in various units of industrial plants. In this paper the optimum reciprocating compressor configuration is presented with regard to interstage pressures, low suction pressure, non-lubricated cylinders, machine speed, the capacity control system, compressor valves, the lubrication system, piston rod coating, cylinder liner material, the barring device, pressure drops, rod load, pin reversal, discharge temperature, the cylinder coolant system, performance, flow, coupling, special tools, condition monitoring (including vibration, thermal and rod drop monitoring), commercial points, delivery and acoustic conditions.
Abstract: In recent years everything has been trending toward
digitalization, and with the rapid development of Internet technologies,
digital media need to be transmitted conveniently over the network.
Attacks, misuse and unauthorized access to information are of great
concern today, which makes the protection of documents transmitted
through digital media a priority. This urges us to devise new data
hiding techniques to protect and secure data of vital significance.
In this respect, steganography often comes to the fore as a tool for
hiding information. Steganography is a process that involves hiding
a message in an appropriate carrier, such as an image or an audio
file. The word is of Greek origin and means "covered or hidden
writing". The goal of steganography is covert communication: the
carrier can be sent to a receiver without anyone except the
authenticated receiver knowing of the existence of the hidden
information. A considerable amount of work on steganography has
been carried out by different researchers. In this work the authors
propose a novel steganographic method for hiding information within
the spatial domain of a grayscale image. The proposed approach
works by selecting the embedding pixels using a mathematical
function, finding the 8-neighborhood of each selected pixel, and
mapping each bit of the secret message to a neighbor pixel's
coordinate position in a specified manner. Before embedding, a check
is performed to determine whether the selected pixel or any of its
neighbors lies at the boundary of the image. This solution is
independent of the nature of the data to be hidden
and produces a stego image with minimal degradation.
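The embedding scheme described above can be sketched as follows. Since the abstract does not specify the actual pixel-selection function or bit-mapping rule, the grid-based selection and the least-significant-bit write into each neighbor used below are illustrative stand-ins for the paper's method.

```python
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def select_pixels(shape, n, step=3):
    # Stand-in for the paper's (unspecified) mathematical selection
    # function: a regular grid, spaced so that 8-neighborhoods do not
    # overlap, with the boundary check the abstract describes.
    coords = []
    for r in range(1, shape[0] - 1, step):
        for c in range(1, shape[1] - 1, step):
            coords.append((r, c))
            if len(coords) == n:
                return coords
    raise ValueError("cover image too small for this message")

def embed(cover, message):
    # Each selected pixel hosts one message byte: its 8 bits go into the
    # least significant bits of the pixel's 8 neighbors.
    stego = cover.copy()
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    centers = select_pixels(cover.shape, len(message))
    for (r, c), byte_bits in zip(centers, bits.reshape(-1, 8)):
        for (dr, dc), b in zip(NEIGHBORS, byte_bits):
            stego[r + dr, c + dc] = (stego[r + dr, c + dc] & 0xFE) | b
    return stego

def extract(stego, n_bytes):
    # Re-derive the same centers and read the LSBs back in order.
    centers = select_pixels(stego.shape, n_bytes)
    bits = [stego[r + dr, c + dc] & 1
            for r, c in centers for dr, dc in NEIGHBORS]
    return np.packbits(np.array(bits, dtype=np.uint8)).tobytes()

# Round trip on an assumed random grayscale cover image.
rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover, b"secret")
recovered = extract(stego, 6)
```

Only the least significant bit of each neighbor changes, which is why the stego image degrades by at most one gray level per pixel.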
Abstract: The paper deals with an application of quantitative analysis – the Data Envelopment Analysis (DEA) method – to the performance evaluation of the European Union Member States in the reference years 2000 and 2011. The main aim of the paper is to measure efficiency changes over the reference years, to analyze the level of productivity in individual countries based on the DEA method, and to classify the EU Member States into homogeneous units (clusters) according to the efficiency results. The theoretical part is devoted to the fundamental basis of performance theory and the methodology of DEA. The empirical part is aimed at measuring the degree of productivity and the level of efficiency changes of the evaluated countries with a basic DEA model – the CCR CRS model – and a specialized DEA approach – the Malmquist Index, which measures the change in technical efficiency and the movement of the production possibility frontier. Here the DEA method becomes a suitable tool for establishing the competitive or uncompetitive position of each country, because not just one factor is evaluated but a set of different factors that determine the degree of economic development.
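The basic CCR CRS model mentioned above reduces, for each country (decision-making unit, DMU), to a small linear program. A minimal input-oriented envelopment-form sketch in Python follows, using scipy's linprog; the toy data set is illustrative, not the paper's EU data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    # Input-oriented CCR (CRS) envelopment model for DMU j0:
    #   min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0
    # Decision vector z = [theta, lam_1, ..., lam_n].
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0
    A = np.zeros((m + s, 1 + n))
    b = np.zeros(m + s)
    A[:m, 0] = -X[:, j0]   # X @ lam - theta * x0 <= 0
    A[:m, 1:] = X
    A[m:, 1:] = -Y         # -Y @ lam <= -y0
    b[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + n))
    return float(res.x[0])

# Toy data (illustrative): 1 input, 1 output, 3 DMUs as columns.
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[2.0, 2.0, 4.0]])
eff = [ccr_efficiency(X, Y, j) for j in range(3)]
```

DMUs 1 and 3 lie on the constant-returns frontier (output/input ratio 1) and score 1.0, while DMU 2 achieves only half that ratio and scores 0.5.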
Abstract: Actual load, material characteristics and other
quantities often differ from the design values. This can cause
impaired function, a shorter life, or failure of a civil engineering
structure, a machine, a vehicle or another appliance. The paper
shows the main causes of these uncertainties and deviations and
presents a systematic approach and efficient tools for their
elimination or for the mitigation of their consequences. Emphasis is
put on the design stage, which is most important for ensuring
reliability. The principles of robust design and the most important
tools are explained, including FMEA, sensitivity analysis
and probabilistic simulation methods. The lifetime prediction of
long-life objects can be improved by long-term monitoring of the
load response and damage accumulation in operation. The condition
evaluation of engineering structures, such as bridges, is often based
on visual inspection and verbal description. Here, methods based on
fuzzy logic can reduce the subjective influences.
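Among the tools listed above, probabilistic simulation is the easiest to illustrate. The sketch below estimates the failure probability of a simple resistance-minus-load limit state by Monte Carlo and checks it against the closed-form value for independent normal variables; the distributions and parameter values are illustrative assumptions, not values from the paper.

```python
import math
import numpy as np

def failure_probability_mc(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=42):
    # Monte Carlo: sample resistance R and load effect S, count how often
    # the limit state g = R - S is violated (g < 0).
    rng = np.random.default_rng(seed)
    R = rng.normal(mu_r, sd_r, n)
    S = rng.normal(mu_s, sd_s, n)
    return float(np.mean(R < S))

def failure_probability_exact(mu_r, sd_r, mu_s, sd_s):
    # Closed form for independent normal R and S: Pf = Phi(-beta),
    # with reliability index beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Assumed illustrative design values.
pf_mc = failure_probability_mc(100.0, 10.0, 60.0, 15.0)
pf_exact = failure_probability_exact(100.0, 10.0, 60.0, 15.0)
```

The same sampling loop extends directly to non-normal distributions and nonlinear limit states, which is where simulation outperforms the closed-form check used here for validation.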
Abstract: The development of competences and practical
capacities of students is gaining importance within the
guidelines of the European Higher Education Area (EHEA). The
methodology applied in this work is based on education through the
directed resolution of practical cases. All cases are related to
professional tasks that the students will have to perform in their
future careers. The method is intended to build the necessary
competences of students of the Marine Engineering and Maritime
Transport Degree in the subject of "Physics".
The experience was applied in the 2011/2012 academic year. Students
were grouped, and each group was assigned a practical task to be
developed and solved within the team. The aim was for students to
learn in three ways: from their own knowledge, from the
contribution of their teammates, and from the teacher's direction. The
results of the evaluation were compared with those obtained
previously with the traditional teaching method.
Abstract: In this paper an attempt has been made to assess the usefulness of electrodes made through powder metallurgy (PM) in comparison with a conventional copper electrode during electric discharge machining. Experimental results are presented on the electric discharge machining of AISI D2 steel in kerosene with a copper tungsten (30% Cu and 70% W) tool electrode made through the powder metallurgy (PM) technique and a Cu electrode. An L18 (2¹ × 3⁷) orthogonal array of the Taguchi methodology was used to identify the effect of the process input factors (viz. current, duty cycle and flushing pressure) on the output factors, viz. material removal rate (MRR) and surface roughness (SR). It was found that the CuW electrode (made through PM) gives a better surface finish, whereas the Cu electrode is better for a higher material removal rate.
Abstract: In the last few years, three multivariate spectral
analysis techniques namely, Principal Component Analysis (PCA),
Independent Component Analysis (ICA) and Non-negative Matrix
Factorization (NMF) have emerged as effective tools for oscillation
detection and isolation. While the first method is used in determining
the number of oscillatory sources, the latter two methods
are used to identify source signatures by formulating the detection
problem as a source identification problem in the spectral domain.
In this paper, we present a critical drawback of the underlying linear
(mixing) model which strongly limits the ability of the associated
source separation methods to determine the number of sources
and/or identify the physical source signatures. It is shown that the
assumed mixing model is only valid if each unit of the process gives
equal weighting (all-pass filter) to all oscillatory components in its
inputs. This is in contrast to the fact that each unit, in general, acts
as a filter with non-uniform frequency response. Thus, the model
can only facilitate correct identification of a source with a single
frequency component, which is again unrealistic. To overcome
this deficiency, an iterative post-processing algorithm that correctly
identifies the physical source(s) is developed. An additional issue
with the existing methods is that they lack a procedure to pre-screen
non-oscillatory/noisy measurements which obscure the identification
of oscillatory sources. In this regard, a pre-screening procedure
is prescribed, based on the notion of a sparseness index, to eliminate
noisy and non-oscillatory measurements from the data set used
for analysis.
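The pre-screening idea can be sketched concretely. The abstract does not define the sparseness index, so the example below uses Hoyer's sparseness measure of the power spectrum as one plausible choice: an oscillatory signal concentrates its power in a few frequency bins (index near 1), while a noisy measurement spreads it out (index near 0).

```python
import numpy as np

def hoyer_sparseness(x):
    # Hoyer's sparseness in [0, 1]: 1 for a single nonzero entry (a pure
    # tone in the spectrum), 0 for a perfectly flat vector (white noise).
    x = np.abs(np.asarray(x, dtype=float))
    n = x.size
    l1 = x.sum()
    l2 = np.sqrt(np.sum(x ** 2))
    return float((np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1.0))

def prescreen(signals, threshold=0.5):
    # Keep the indices of signals whose power spectrum is sparse enough
    # to indicate an oscillation; drop noisy/non-oscillatory channels.
    kept = []
    for k, sig in enumerate(signals):
        spectrum = np.abs(np.fft.rfft(sig - np.mean(sig))) ** 2
        if hoyer_sparseness(spectrum) > threshold:
            kept.append(k)
    return kept

# One clean oscillation and one white-noise channel (assumed test data).
t = np.arange(512)
osc = np.sin(2 * np.pi * 25 * t / 512)   # exactly 25 cycles: one FFT bin
noise = np.random.default_rng(0).normal(size=512)
kept = prescreen([osc, noise])
```

Signals that fail the threshold never enter the PCA/ICA/NMF stage, so a flat-spectrum channel cannot obscure the identification of genuine oscillatory sources.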