Abstract: One of the most critical decision points in the design of a
face recognition system is the choice of an appropriate face representation.
Effective feature descriptors are expected to convey sufficient, invariant
and non-redundant facial information. In this work we propose a set of
Hahn moments as a new approach for feature description. Hahn moments
have been widely used in image analysis due to their invariance, non-redundancy and the ability to extract features both globally and locally.
To assess the applicability of Hahn moments to Face Recognition we
conduct two experiments on the Olivetti Research Laboratory (ORL)
database and University of Notre-Dame (UND) X1 biometric collection.
Fusion of the global features with features from local facial regions is used as input to a conventional k-NN classifier. The method reaches an accuracy of 93% correctly recognized subjects for the ORL database and 94% for the UND database.
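The fusion-plus-classification step described above can be sketched as follows. This is a minimal illustration only: it assumes the Hahn-moment feature vectors (one global vector and several per-region vectors) have already been extracted, and the function names and majority-vote details are assumptions, not the paper's exact implementation.

```python
import numpy as np

def fuse_features(global_feats, local_feats):
    """Concatenate global Hahn-moment features with features
    from local facial regions into one descriptor per image."""
    return np.concatenate([global_feats] + list(local_feats))

def knn_classify(train_X, train_y, query, k=1):
    """Conventional k-NN: label of the majority among the k
    nearest training descriptors (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In practice the descriptors would be normalized before distance computation; that step is omitted here for brevity.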
Abstract: Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though
performance of speaker identification systems has improved due to
recent advances in speech processing techniques, there is still need of
improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficient
(MFCC) as feature extraction and suitable combination of vector
quantization (VQ) and Gaussian Mixture Model (GMM) together
with Expectation Maximization algorithm (EM) for speaker
modeling. The use of Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and Statistical
Modeling of Background Noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. In addition, using the Linde-Buzo-Gray (LBG) clustering algorithm to initialize the GMM parameters before the EM step improved the convergence rate and system performance. The system also uses a relative index as a confidence measure when the GMM and VQ classifiers disagree. Simulation results carried out on the voxforge.org
speech database using MATLAB highlight the efficacy of the
proposed method compared to earlier work.
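The LBG initialization mentioned above can be sketched as follows. This is a minimal version under stated assumptions: MFCC frames are taken as already-extracted row vectors, and only the codebook-growing step is shown (the GMM/EM, VAD and scoring stages are omitted); the final codebook would seed the GMM means.

```python
import numpy as np

def lbg_codebook(frames, n_codes, eps=0.01, n_iter=20):
    """Linde-Buzo-Gray: grow a VQ codebook by repeatedly splitting
    centroids and refining them with Lloyd iterations. The final
    codebook can initialize the GMM means before running EM."""
    codebook = frames.mean(axis=0, keepdims=True)
    while len(codebook) < n_codes:
        # split each centroid into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            # assign each frame to its nearest centroid
            d = np.linalg.norm(frames[:, None, :] - codebook[None], axis=2)
            assign = d.argmin(axis=1)
            for j in range(len(codebook)):
                members = frames[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
    return codebook
```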
Abstract: Barley (Hordeum vulgare L.), vetch (Vicia villosa),
and grass pea (Lathyrus sativus L.) monocultures as well as mixtures
of barley with each of the above legumes, in three seeding ratios (i.e.,
barley: legume 75:25, 50:50 and 25:75, based on seed numbers) were
used to investigate forage yield and competition indices. The results
showed that intercropping reduced the dry matter yield of the three
component plants, compared with their respective monocrops. The
greatest value of total dry matter yield was obtained from the barley 25% : grass pea 75% mixture (5.44 t ha-1), followed by the grass pea sole crop (4.99 t ha-1). The total actual yield loss (AYL) values were positive in all mixtures, indicating an advantage of intercropping over sole crops. Intercropped barley had a higher
relative crowding coefficient (K=1.64) than intercropped legumes
(K=1.20), indicating that barley was more competitive than legumes
in mixtures. Furthermore, grass pea was more competitive than vetch
in mixtures with barley. The highest land equivalent ratio (LER),
system productivity index (SPI) and monetary advantage index
(MAI) were obtained when barley was mixed at a rate of 25% with
75% seed rate of grass pea. It is concluded that intercropping of
barley with grass pea has a good potential to improve the
performance of forage with high land-use efficiency.
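The competition indices cited above (LER, relative crowding coefficient K, total AYL) follow standard definitions based on sole-crop yields (Yaa, Ybb), intercrop yields (Yab, Yba) and sown proportions (Zab, Zba). A minimal sketch with illustrative numbers, not the paper's data:

```python
def ler(y_ab, y_aa, y_ba, y_bb):
    """Land equivalent ratio: relative land area needed as sole
    crops to match the yields obtained in the intercrop."""
    return y_ab / y_aa + y_ba / y_bb

def crowding_coefficient(y_ab, y_aa, z_ab, z_ba):
    """Relative crowding coefficient K of species a grown with b."""
    return (y_ab * z_ba) / ((y_aa - y_ab) * z_ab)

def actual_yield_loss(y_ab, y_aa, z_ab, y_ba, y_bb, z_ba):
    """Total AYL; positive values indicate an advantage of
    intercropping over sole cropping."""
    ayl_a = y_ab / (z_ab * y_aa) - 1
    ayl_b = y_ba / (z_ba * y_bb) - 1
    return ayl_a + ayl_b
```

For example, with sole-crop yields of 4.0 and 4.99 t ha-1 and intercrop yields of 1.5 and 3.94 t ha-1 at a 25:75 seeding ratio, LER exceeds 1 and total AYL is positive, matching the kind of advantage reported above.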
Abstract: A solar dish collector has been designed, fabricated
and tested for its performance on 10-03-2015 in Salem, Tamilnadu,
India. Experiments with coated and uncoated cooking vessels of 5 liters capacity were carried out for cooking rice. The results are shown in graphs. The solar cooker is capable of cooking food within the expected length of time, depending on the solar radiation levels. With minimum cooking power, the coated pressure cooker of 5 liters capacity cooks food faster. This is
due to the conductivity of the coating material provided in the cooker.
Abstract: The use of Flexible AC Transmission System
(FACTS) devices in a power system can potentially overcome
limitations of the present mechanically controlled transmission
system. Also, advances in technology make it possible to include
new energy storage devices in the electrical power system. The
integration of Superconducting Magnetic Energy Storage (SMES)
into a Static Synchronous Compensator (STATCOM) can increase its flexibility in improving power system dynamic
behaviour by exchanging both active and reactive powers with power
grids. This paper describes structure and behaviour of SMES,
specifications and performance principles of the STATCOM/SMES
compensator. Moreover, the benefits and effectiveness of integrated
SMES with STATCOM in power systems are presented. Also, the
performance of the STATCOM/SMES compensator is evaluated
using an IEEE 3-bus system through the dynamic simulation by
PSCAD/EMTDC software.
Abstract: In this paper, we report the development of the device
for diagnostics of cardiovascular system state and associated
automated workstation for large-scale medical measurement data
collection and analysis. It is shown that the optimal design for the monitoring device is a wristband, as it represents an engineering trade-off between accuracy and usability. The monitoring device is based on an
infrared reflective photoplethysmographic sensor, which allows
collecting multiple physiological parameters, such as heart rate and
pulsing wave characteristics. The developed device uses a BLE interface to transmit medical and supplementary data to a coupled mobile phone, which processes the data and sends them to the doctor's automated workstation. Results obtained with this experimental model confirmed the applicability of the proposed approach.
Abstract: Discursive practices enacted by educators in
kindergarten create a blueprint for how the educational trajectories of
students with disabilities are constructed. This two-year ethnographic
case study critically examines educators’ relationships with students
considered to present challenging behaviors in one kindergarten
classroom located in a predominantly White middle class school
district in the Northeast of the United States. Focusing on the
language and practices used by one special education teacher and
three teaching assistants, this paper analyzes how teacher responses
to students’ behaviors construct and position students over one year
of kindergarten education. Using critical discourse analysis, it shows that educators understand students’ behaviors as deficits needing
consequences. This study highlights how educators’ responses reflect
students' individual characteristics including family background,
socioeconomic status and ability. This paper offers an in-depth analysis
of two students’ stories, which evidenced that the language used by
educators amplifies the social positioning of students within the
classroom and creates a foundation for who they are constructed to
be. Through exploring routine language and practices, this paper
demonstrates that educators outlined a blueprint of kindergartners,
which positioned students as learners in ways that became the ground
for either a limited or a promising educational pathway for them.
Abstract: An approach was evaluated for the retrieval of soil
moisture of bare soil surface using bistatic scatterometer data in the
angular range of 20° to 70° at VV- and HH-polarization. The
microwave data were acquired by a specially designed X-band (10
GHz) bistatic scatterometer. The linear regression analysis was done
between scattering coefficients and soil moisture content to select the
suitable incidence angle for retrieval of soil moisture content. The 25° incidence angle was found to be the most suitable. The support vector
regression analysis was used to approximate the function described
by the input output relationship between the scattering coefficient and
corresponding measured values of the soil moisture content. The
performance of support vector regression algorithm was evaluated by
comparing the observed and the estimated soil moisture content by
statistical performance indices %Bias, root mean squared error
(RMSE) and Nash-Sutcliffe Efficiency (NSE). At HH-polarization, the values of %Bias, RMSE and NSE were found to be 2.9451, 1.0986 and 0.9214, respectively; at VV-polarization, 3.6186, 0.9373 and 0.9428, respectively.
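The three performance indices used above have standard definitions and can be sketched as follows. Note that the sign convention for %Bias varies between authors, so the element-wise definition below is an assumption rather than necessarily the paper's exact formula.

```python
import numpy as np

def percent_bias(obs, est):
    """%Bias: average tendency of the estimates to over- or
    underestimate the observations."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 100.0 * np.sum(est - obs) / np.sum(obs)

def rmse(obs, est):
    """Root mean squared error between observed and estimated values."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def nse(obs, est):
    """Nash-Sutcliffe Efficiency: 1 is a perfect match; values near 1
    indicate good predictive skill relative to the observed mean."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - obs.mean()) ** 2)
```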
Abstract: In addition to its advantages of light weight, corrosion resistance and ease of processing, aluminum is also applied to long-span spatial structures. However, the elastic modulus of
aluminum is lower than that of the steel. This paper combines the
high performance aluminum honeycomb panel with the aluminum
latticed shell, forming a new panel-and-rod composite shell structure.
Comparative analysis of static and dynamic performance leads to the conclusion that the composite shell structure is noticeably superior to the structure before combination.
Abstract: Localization of nodes is one of the key issues of
Wireless Sensor Network (WSN) that gained a wide attention in
recent years. The existing localization techniques can be generally
categorized into two types: range-based and range-free. Compared with range-based schemes, the range-free schemes are more cost-effective, because no additional ranging devices are needed. As a
result, we focus our research on the range-free schemes. In this paper
we study three types of range-free location algorithms to compare the
localization error and energy consumption of each one. The Centroid algorithm requires that a normal node have at least three neighbor anchors, while the DV-Hop algorithm does not have this requirement. The third studied algorithm is the amorphous algorithm, which is similar to the DV-Hop algorithm; the idea is to calculate the hop distance between two nodes instead of the linear distance between them. The simulation
results show that the localization accuracy of the amorphous
algorithm is higher than that of other algorithms and the energy
consumption does not increase too much.
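The two range-free ideas compared above can be sketched in a few lines: the Centroid algorithm averages the positions of heard anchors, while DV-Hop converts hop counts into distances via an average hop size learned from inter-anchor measurements. This is a minimal illustration; the multilateration step that follows DV-Hop distance estimation is omitted.

```python
import numpy as np

def centroid_localize(anchor_positions):
    """Range-free Centroid algorithm: a normal node estimates its
    position as the centroid of the anchors it can hear. Requires
    at least three neighbor anchors."""
    anchors = np.asarray(anchor_positions, float)
    if len(anchors) < 3:
        raise ValueError("Centroid needs at least three neighbor anchors")
    return anchors.mean(axis=0)

def dv_hop_distance(hop_count, anchor_pairs):
    """DV-Hop distance estimate: hop count times the average hop
    size derived from known inter-anchor distances and hop counts.
    anchor_pairs: iterable of (euclidean_distance, hops) tuples."""
    total_d = sum(d for d, _ in anchor_pairs)
    total_h = sum(h for _, h in anchor_pairs)
    return hop_count * total_d / total_h
```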
Abstract: Effects of nicotine on pre-partum body weight and preimplantation embryonic development have been reported previously. The present study was conducted to determine the effects of
annatto (Bixa orellana)-derived delta-tocotrienol (TCT) (with 10% gamma-TCT isomer present) on the nicotine-induced
reduction in body weight and 8-cell embryonic growth in mice.
Twenty-four 6-8-week-old (23-25 g) female BALB/c mice were
randomly divided into four groups (G1-G4; n=6). Those groups were
subjected to the following treatments for 7 consecutive days: G1 (control) was gavaged with 0.1 ml tocopherol-stripped corn oil. G2
was subcutaneously (s.c.) injected with 3 mg/kg/day of nicotine. G3
received concurrent treatment of nicotine (3 mg/kg/day) and 60
mg/kg/day of δ-TCT mixture (contains 90% delta & 10% gamma
isomers) and G4 was given 60 mg/kg/day of δ-TCT mixture alone.
Body weights were recorded daily during the treatment. On Day 8,
females were superovulated with 5 IU Pregnant Mare’s Serum
Gonadotropin (PMSG) for 48 hours, followed by 5 IU human Chorionic Gonadotropin (hCG), before being mated with males at a ratio of 1:1. Females were sacrificed by cervical dislocation for embryo
collection 48 hours post-coitum. Collected embryos were cultured in vitro. Results showed that from Day 1 to Day 7, the body weight of the nicotine-treated group (G2) was significantly lower
(p
Abstract: This paper is concerned with the single-item
continuous review inventory system in which demand is stochastic
and discrete. The budget consumed for purchasing the ordered items is not restricted, but an extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered
under the all-units discounts cost structure. In many actual systems,
the budget as a resource which is occupied by the purchased items is
limited, and the system can cope with a resource shortage by incurring additional costs. Thus, considering the resource shortage costs as
a part of system costs, especially when the amount of resource
occupied by the purchased item is influenced by quantity discounts,
is well motivated by practical concerns. In this paper, an optimization
problem is formulated for finding the optimal (r, Q) policy, when the
system is influenced by the budget limitation and a discount pricing
simultaneously. Properties of the cost function are investigated and
then an algorithm based on a one-dimensional search procedure is
proposed for finding an optimal (r, Q) policy which minimizes the
expected system costs.
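The one-dimensional search procedure mentioned above can be sketched generically. The paper's actual cost function (with discount breaks and budget-excess penalties) is not reproduced here; the toy convex cost in the usage below is purely illustrative, and the exhaustive inner search over r is an assumed simplification of the paper's method.

```python
def search_policy(cost, q_range, r_range):
    """One-dimensional search over order quantity Q: for each Q the
    best reorder point r is chosen, and the (r, Q) pair with minimal
    expected cost is returned. `cost(r, q)` is the expected-cost
    function of the inventory system."""
    best = None
    for q in q_range:
        r_best = min(r_range, key=lambda r: cost(r, q))
        c = cost(r_best, q)
        if best is None or c < best[0]:
            best = (c, r_best, q)
    return best  # (minimal cost, r*, Q*)
```

For instance, with the hypothetical cost `(q - 7)**2 + (r - 3)**2 + 5`, the search returns the minimizer (r, Q) = (3, 7) with cost 5.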
Abstract: The phytotoxicity of heavy metals can be expressed
in the roots and visible parts of plants and is characterized by molecular and metabolic responses at various levels of organization of the whole
plant. The present study was undertaken on two varieties of broad
bean, Vicia faba (Sidi Aïch and Super Aguadulce). The experiment was set up on a substrate prepared by mixing sand, soil and compost; the substrate was artificially contaminated with three doses of lead nitrate [Pb(NO3)2]: 0, 500 and 1000 ppm. Our objective is to follow
the behavior of the plants under stress by evaluating the
physiological parameters. The results reveal a reduction in the
parameters of the productivity (chlorophyll and proteins production)
with an increase in the osmoregulators (soluble sugars and
proline). These results show that the production of broad bean is
strongly modified by the disturbance of its internal physiology under
lead exposure.
Abstract: This study analyzes the critical gaps in the
architecture of European stability and the expected role of the
banking union as the new important step towards completing the
Economic and Monetary Union that should enable the creation of
safe and sound financial sector for the euro area market. The single
rulebook together with the Single Supervisory Mechanism and the
Single Resolution Mechanism - as two main pillars of the banking
union, should provide a consistent application of common rules and
administrative standards for supervision, recovery and resolution of
banks – with the final aim of replacing the former bail-out practice
with the bail-in system through which possible future bank failures
would be resolved by their own funds, i.e. with minimal costs for
taxpayers and real economy. In this way, the vicious circle between
banks and sovereigns would be broken. It would also reduce the
financial fragmentation recorded in the years of crisis as the result of
divergent behaviors in risk premium, lending activities and interest
rates between the core and the periphery. In addition, it should
strengthen the effectiveness of monetary transmission channels, in
particular the credit channels and liquidity spillovers on the money market which, due to the fragmentation of the common financial market, were significantly impaired in the period of crisis. However,
contrary to all the positive expectations related to the future
functioning of the banking union, major findings of this study
indicate that characteristics of the economic system in which the
banking union will operate should not be ignored. The euro area is an
integration of strong and weak entities with large differences in
economic development, wealth, assets of banking systems, growth
rates and accountability of fiscal policy. The analysis indicates that
low and unbalanced economic growth remains a challenge for the
maintenance of financial stability and this problem cannot be
resolved just by a single supervision. In many countries bank assets
exceed their GDP by several times and large banks are still a matter
of concern, because of their systemic importance for individual
countries and the euro zone as a whole. The creation of the Single
Supervisory Mechanism and the Single Resolution Mechanism is a
response to the European crisis, which has particularly affected
peripheral countries and caused the associated loop between the
banking crisis and the sovereign debt crisis, but has also influenced
banks’ balance sheets in the core countries, as the result of cross-border capital flows. The creation of the SSM and the SRM should
prevent similar episodes from happening again and should also provide
a new opportunity for strengthening of economic and financial
systems of the peripheral countries. On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism and other relevant institutions will be oriented mainly towards large and significant banks (half of which operate in the core and most important euro area countries), and it therefore remains questionable to what extent the common resolution funds will be used for the rescue of less important institutions. Recent geopolitical
developments will be the best indicator of whether the previously established mechanisms are sufficient to maintain adequate financial stability in the euro area market.
Abstract: As the silicon oxide in MOSFET technology is scaled down to a few nanometers, gate Direct Tunneling (DT) in floating-gate (FGMOSFET) devices has become a major concern for analog designers. FGMOSFETs have been used in many low-voltage and low-power applications; however, there is no accurate model that accounts for DT gate leakage at the nano-scale. This paper studied and
analyzed different simulation models for FGMOSFET using TSMC
90-nm technology. The simulation results for an FGMOSFET cascade current mirror show the impact of DT on circuit performance in terms of current and voltage without the need for fabrication. This work shows the significance of using an accurate model for FGMOSFETs in nano-scale technologies.
Abstract: Geographical routing protocols require node physical location information to make forwarding decisions. Geographical routing uses a location or position service to obtain the position of a node. The geographical information consists of geographic coordinates or can be obtained through reference points in some fixed coordinate system. A link can be formed between two nodes. Link lifetime plays a
crucial role in MANETs: it represents how long the link remains stable without failure between the nodes. Link failure may occur due to mobility, and link failures can drain node energy. Thus, this paper presents a survey of link lifetime prediction using geographical information.
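One widely used geographic link lifetime predictor (not necessarily the one adopted by every surveyed work) is the link expiration time (LET) of Su and Gerla, computed from the two nodes' positions, speeds and movement directions and the transmission range r. A minimal sketch:

```python
import math

def link_expiration_time(xi, yi, vi, ti, xj, yj, vj, tj, r):
    """Predict how long nodes i and j remain within range r, given
    positions (x, y), speeds v and movement directions t (radians)."""
    a = vi * math.cos(ti) - vj * math.cos(tj)
    b = xi - xj
    c = vi * math.sin(ti) - vj * math.sin(tj)
    d = yi - yj
    if a == 0 and c == 0:  # identical velocities: link never expires
        return math.inf
    return (-(a * b + c * d)
            + math.sqrt((a * a + c * c) * r * r - (a * d - b * c) ** 2)) / (a * a + c * c)
```

For example, a node moving at 1 m/s toward a stationary neighbor 10 m away with a 20 m range stays connected for 30 s (it closes the 10 m gap, passes the neighbor, and leaves range 20 m beyond it).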
Abstract: Life cycle assessment is a technique to assess the
environmental aspects and potential impacts associated with a
product, process, or service, by compiling an inventory of relevant
energy and material inputs and environmental releases; evaluating the
potential environmental impacts associated with identified inputs and
releases; and interpreting the results to help make a more informed decision. In this paper, the life cycle assessment of
aluminum and beech wood as two commonly used materials in Egypt
for window frames is presented, highlighting their benefits and weaknesses. Window frames of the two materials have been assessed
on the basis of their production, energy consumption and
environmental impacts. It has been found that the climate change impacts of the aluminum and the beech wood windows, for a reference window (1.2 m × 1.2 m), are 81.7 mPt and -52.5 mPt, respectively. Among the most important results are: fossil fuel
consumption, potential contributions to the greenhouse effect and
quantities of solid waste tend to be minor for wood products
compared to aluminum products; incineration of wood products can
cause higher impacts of acidification and eutrophication than
aluminum, whereas thermal energy can be recovered.
Abstract: The purpose of this study is the discrimination of 28 postmenopausal women with osteoporotic femoral fractures from an age-matched control group of 28 women using texture analysis based on
fractals. Two pre-processing approaches are applied on radiographic
images; these techniques are compared to highlight the choice of the
pre-processing method. Furthermore, the values of the fractal
dimension are compared to those of the fractal signature in terms of
the classification of the two populations. In a second analysis, the BMD measured at the proximal femur was compared to the fractal analysis; the latter, a non-invasive technique, allowed better discrimination. The results confirm that the fractal analysis of
texture on calcaneus radiographs is able to discriminate osteoporotic
patients with femoral fracture from controls. This discrimination was
efficient compared to that obtained by BMD alone. It was also
present in comparing subgroups with overlapping values of BMD.
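The fractal descriptors above are computed on calcaneus radiographs; as a generic illustration of fractal-dimension estimation (the paper's actual estimator and the fractal signature differ in detail), a box-counting sketch on a binarized texture:

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes):
    """Estimate the fractal dimension as the slope of log(count)
    versus log(1/box size), where count is the number of boxes of
    each size containing at least one foreground pixel."""
    counts = []
    h, w = binary_image.shape
    for s in box_sizes:
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary_image[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, float)),
                          np.log(counts), 1)
    return slope
```

A fully filled 2-D region yields a dimension of 2; textured bone patterns fall between 1 and 2, which is what makes the measure discriminative.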
Abstract: Workflow scheduling is an important part of cloud computing; based on different criteria, it determines cost, execution time, and performance. A cloud workflow system is a platform
service facilitating automation of distributed applications based on
new cloud infrastructure. An aspect which differentiates a cloud workflow system from others is its market-oriented business model, an innovation which challenges conventional workflow scheduling strategies. The Time and Cost optimization algorithm for scheduling Hybrid Clouds (TCHC), which decides which resources should be chartered from public providers, is combined with a new De-De algorithm so that every instance of single and multiple workflows works without deadlocks. To this end, two new concepts - the De-De Dodging Algorithm and the Priority Based Decisive Algorithm - address conventional deadlock avoidance issues in a single algorithm that maximizes active (not just allocated) resource use and reduces makespan.
Abstract: Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles
under the influence of electric field in Electrical Mobility
Spectrometer (EMS) reveals the size distribution of these particles.
The accuracy of this measurement is influenced by flow conditions,
geometry, electric field and particle charging process, therefore by
the transfer function (transfer matrix) of the instrument. In this work,
a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles
was numerically simulated as a prerequisite for the study of a
multichannel EMS. The resulting cloud of particles, with a non-uniform charge distribution, was introduced into the EMS. The flow pattern and
electric field in the EMS were simulated using Computational Fluid
Dynamics (CFD) to obtain particle trajectories in the device and
therefore to calculate the reported signal by each electrometer.
According to the output signals (resulting from the bombardment of particles transferring their charges, read as currents), we proposed a
modification to the size of detecting rings (which are connected to
electrometers) in order to evaluate particle size distributions more
accurately. Based on the capability of the system to transfer
information contents about size distribution of the injected particles,
we proposed a benchmark for the assessment of optimality of the
design. This method applies the concept of Von Neumann entropy
and borrows the definition of entropy from information theory
(Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the ''average amount of information contained in
an event, sample or character extracted from a data stream''.
Evaluating the responses (signals) obtained with various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified one; it was also the configuration with the maximum entropy. A reasonable consistency was also
observed between the accuracy of the predictions and the entropy
content of each configuration. In this method, entropy is extracted
from the transfer matrix of the instrument for each configuration.
Ultimately, various clouds of particles were introduced to the
simulations and predicted size distributions were compared to the
exact size distributions.
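The entropy benchmark described above can be illustrated with a Shannon entropy computed from a transfer matrix. The exact normalization used in the study may differ; this element-wise version is an assumed simplification for illustration.

```python
import numpy as np

def transfer_matrix_entropy(T):
    """Shannon entropy (in bits) of a transfer matrix, computed from
    its normalized elements; a higher value suggests the instrument
    spreads information across more channel/size combinations."""
    p = np.asarray(T, float).ravel()
    p = p / p.sum()
    p = p[p > 0]  # drop zero entries, since 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A uniform 2×2 matrix gives the maximum of 2 bits, while a matrix with a single nonzero entry gives 0 bits, capturing the intuition that a more informative instrument response has higher entropy.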