Abstract: Cole-Cole parameters of 40 post-menopausal women
are compared with their DEXA bone mineral density measurements.
Impedance characteristics of the four extremities are compared; the left and
right extremities are statistically indistinguishable, but the lower
extremities differ significantly from the upper ones because of their
different fat content. The correlation of the Cole-Cole impedance parameters
with bone mineral density (BMD) is higher for the dominant arm.
Within this post-menopausal population, ANOVA shows that the characteristic
frequency of the dominant arm is a highly significant predictor of the
DEXA-classified osteopenic and osteoporotic groups at the lumbar spine. When
used for total lumbar spine osteoporosis diagnosis, the area under the
Receiver Operating Characteristic (ROC) curve for the characteristic frequency
is 0.830, suggesting that the Cole-Cole plot characteristic frequency could be
a useful diagnostic parameter when integrated into standard screening methods
for osteoporosis.
Moreover, the characteristic frequency can be measured directly by monitoring
the frequency-driven angular behavior of the dominant arm, without any complex
calculation.
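The area under the ROC curve for a single scalar predictor, such as the characteristic frequency above, can be computed with the rank-based identity AUC = P(score of a random positive > score of a random negative). The sketch below illustrates the computation; the frequencies and DEXA labels are hypothetical, not the study's data:

```python
# Sketch: AUC for a single-feature classifier via the rank-sum
# (Mann-Whitney U) identity, counting ties as 1/2.

def roc_auc(scores, labels):
    """AUC = P(score of a positive > score of a negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical characteristic frequencies (kHz) and DEXA labels
# (1 = osteoporotic, 0 = normal).
freqs  = [31.0, 28.5, 40.2, 25.1, 38.8, 27.3, 42.0, 30.4]
labels = [1,    1,    0,    1,    0,    1,    0,    0]
# Assume a lower characteristic frequency indicates osteoporosis,
# so the score is the negated frequency.
print(round(roc_auc([-f for f in freqs], labels), 3))  # 0.938
```

With real measurements, the same function applied to dominant-arm characteristic frequencies and DEXA labels would yield the kind of AUC value reported in the abstract.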
Abstract: In this paper we present a classification of the various technologies applied to the portfolio selection problem, according to the discipline and the methodological framework followed. We give a concise presentation of the resulting categories and try to identify which methods are considered obsolete and which lie at the heart of the current debate. In addition, we provide a comparative study of the different technologies applied to efficient portfolio construction and suggest potential paths for future work that lie at the intersection of the presented techniques.
Abstract: The growing number of computer viruses and the
detection of zero-day malware have long been concerns for security
researchers. Existing antivirus products (AVs) rely on detecting virus
signatures, which does not provide a full solution to the problems
associated with these viruses. The use of logic formulae to model the
behaviour of viruses is one of the most encouraging recent developments
in virus research, as it provides an alternative to classic virus
detection methods. In this paper, we present a comparative study of
different virus detection techniques, discussing the advantages and
drawbacks of each and which technique is more effective at detecting
computer viruses.
Abstract: Providing access to relevant information adapted to a
user’s needs, preferences and environment is a challenge in many
applications, and has led to the emergence of context-aware systems.
To facilitate the development of this class of applications, the
applications need to share a common context metamodel. In this article,
we present our context metamodel, defined using the OMG Meta Object
Facility (MOF). This metamodel is based on an analysis and synthesis of
the context concepts proposed in the literature.
Abstract: The goal of the modern education system is to prepare
students to be able to adapt to ever-changing life situations. They
must be able to acquire required knowledge independently; apply
such knowledge in practice to solve various problems by using
modern technologies; think critically and creatively; competently use
information; be communicative, work in a team; and develop their
own moral values, intellect and cultural awareness. As a result, the
status of education has risen significantly and new requirements for its
quality have emerged. In recent years the competency-based approach in
education has attracted significant interest. This approach strengthens
the applied and practical character of school education and leads to the
formation of the key student competencies that define success in later
life. In this article,
the authors focus on a range of key competencies (educational,
informational and communicative) and on the possibility of developing
them via STEM education. This research
shows the change in students’ attitude towards scientific disciplines
such as mathematics, general science, technology and engineering as
a result of STEM education. Questionnaires completed by students of
forms II to IV in the Republic of Trinidad and Tobago, analyzed in two
stages, allowed the authors to assign students to one of two levels
representing their attitudes to the various disciplines. The
significance of differences between selected levels was confirmed
using Pearson’s chi-squared test. In summary, the analysis of the data
obtained makes it possible to conclude that STEM education has great
potential for developing students’ core competencies and for encouraging
a positive student attitude towards the above-mentioned scientific
disciplines.
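The significance test named above can be sketched as follows: Pearson's chi-squared statistic is computed from a contingency table of observed counts and compared with a critical value. The counts below are hypothetical, not the study's survey data:

```python
# Sketch: Pearson's chi-squared test of independence on a 2x2
# contingency table (rows: before / after STEM instruction,
# columns: low / high attitude level). Counts are made up.

def chi_squared(table):
    """Chi-squared statistic for an r x c contingency table."""
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

table = [[40, 20],
         [25, 35]]
stat = chi_squared(table)
# Critical value for df = 1 at alpha = 0.05 is 3.841.
print(stat > 3.841)  # True -> the difference between levels is significant
```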
Abstract: Financial innovations can be regarded as the cause
and the effect of the evolution of the financial system. Most
financial innovations are created by various financial institutions for
their own purposes and needs. However, due to their diversity,
financial innovations can also be applied by various business entities
(other than financial institutions).
This paper focuses on the potential application of financial
innovations by non-financial companies. It is assumed that financial
innovations may be effectively applied in all fields of corporate
financial decisions integrating financial management with the risk
management process. Appropriate application of financial
innovations may enhance the development of the company and
increase its value by improving its financial situation and reducing
the level of risk. On the other hand, misused financial innovations
may become the source of extra risk for the company threatening its
further operation.
The main objective of the paper is to identify the major types of
financial innovations offered to non-financial companies by the
banking system in Poland. It also aims at identifying the main factors
determining the creation of financial innovations in the banking
system in Poland and indicating future directions of their
development.
This paper consists of a conceptual and an empirical part. The
conceptual part, based on a theoretical study, focuses on the
determinants of the process of financial innovation and its application
by non-financial companies. The theoretical study is followed by
empirical research based on an analysis of the actual offer of the 20
biggest banks operating in Poland with regard to financial innovations
offered to SMEs and large corporations. These innovations are
classified according to the main functions of the integrated financial
management, such as financing, investment, working capital
management and risk management.
The empirical study shows that the biggest banks operating in the
Polish market offer their business customers many types and classes of
financial innovations. This offer appears vast and adequate to the needs
and purposes of Polish non-financial companies. Financial innovations
pertaining to financing decisions were observed to dominate the banks’
offer. However, due to the high diversification of the offered financial
innovations, business customers can effectively apply them in all fields
and areas of integrated financial management. It should be underlined
that the banks’ offer is highly dispersed, which may limit the
implementation of financial innovations in corporate finance. It would
also be advisable for banks operating in the Polish market to intensify
education campaigns aimed at increasing business customers’ knowledge of
financial innovations.
Abstract: Twin steel plates-concrete composite shear walls are
composed of a pair of steel plate layers and a concrete layer
sandwiched between them, which have the characteristics of both
reinforced concrete shear walls and steel plate shear walls. Twin steel
plates-composite shear walls exhibit very high ultimate bearing capacity
and ductility, and thus have great potential for application in super
high-rise buildings and special structures. In this paper, we
analyzed the basic characteristics and stress mechanism of the twin
steel plates-composite shear walls. Specifically, we analyzed the
effects of the steel plate thickness, wall thickness and concrete
strength on the bearing capacity of the twin steel plates-composite
shear walls. The analysis results indicate that: (1) the initial shear
stiffness and ultimate shear-carrying capacity are significantly
affected by the concrete class rather than by the thickness of the
concrete wall; (2) both factors significantly affect the shear
distribution within the walls at ultimate shear-carrying capacity. The
technique of twin
steel plates-composite shear walls has been successfully applied in
the construction of an 88-meter Huge Statue of Buddha located in
Hunan Province, China. The analysis results and engineering
experiences showed that the twin steel plates-composite shear walls
have great potential for future research and applications.
Abstract: Facility location is a complex real-world problem
that requires strategic management decisions. This paper provides a
general review of studies, efforts and developments in Facility
Location Problems (FLPs), classical optimization problems with
widespread applications in areas such as transportation, distribution,
production, supply chain decisions and telecommunication. Our goal is
not to review all variants of the different studies in FLPs, nor to
describe detailed computational techniques and solution approaches, but
rather to provide a broad overview of the major location problems that
have been studied, indicating how they are formulated and what
researchers have proposed to tackle them. A brief, elucidative table
grouping the studies by “General Problem Type” and “Methods Proposed”
is presented at the end of the work.
Abstract: As smartphones are equipped with various sensors,
there have been many studies focused on using these sensors to create
valuable applications. Human activity recognition is one such
application motivated by various welfare applications, such as the
support for the elderly, measurement of calorie consumption, lifestyle
and exercise patterns analyses, and so on. One of the challenges one
faces when using smartphone sensors for activity recognition is that
the number of sensors should be minimized to save battery power. In
this paper, we show that a fairly accurate classifier can be built to
distinguish ten different activities using data from only a single
sensor, the smartphone accelerometer. The approach we adopt to deal with
this multi-class problem uses various methods.
The features used for classifying these activities include not only the
magnitude of acceleration vector at each time point, but also the
maximum, the minimum, and the standard deviation of vector
magnitude within a time window. The experiments compared the
performance of four kinds of basic multi-class classifiers and the
performance of four kinds of ensemble learning methods based on
three kinds of basic multi-class classifiers. The results show that the
method with the highest accuracy is ECOC based on Random Forest.
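The window features described above (the magnitude of the acceleration vector at each time point, plus its maximum, minimum, and standard deviation over a window) can be sketched as follows; the sample values are made up:

```python
# Sketch: per-sample acceleration magnitude and simple window
# statistics, as described in the abstract. Data are hypothetical.
import math

def magnitude(sample):
    """Euclidean norm of one (x, y, z) accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def window_features(window):
    """Feature dictionary for one window of (x, y, z) samples."""
    mags = [magnitude(s) for s in window]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    return {"max": max(mags), "min": min(mags), "std": std}

# One short hypothetical window (units of g).
window = [(0.0, 0.0, 1.0), (0.3, 0.4, 1.0), (0.0, 0.0, 1.2)]
feats = window_features(window)
print(round(feats["max"], 2), round(feats["min"], 2))  # 1.2 1.0
```

In practice such feature vectors, computed over sliding windows, would be fed to the multi-class classifiers compared in the paper.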
Abstract: This paper treats different aspects of the entropy measure
in classical information theory and statistical quantum mechanics, and
presents the possibility of extending the definition of Von Neumann
entropy to image and array processing. In the first part, we generalize
the quantum entropy using the singular values of arbitrary rectangular
matrices to measure randomness and the quality of a denoising
operation; this new definition of entropy can be used to compare the
performance of filtering methods. In the second part, we apply the
concept of a pure state in the quantum formalism to generalize the
maximum entropy method for the narrowband, far-field source
localization problem. Several computer simulation results illustrate
the effectiveness of the proposed techniques.
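A minimal sketch of the first idea, under the assumption that the generalized entropy is the Shannon entropy of the normalized squared singular values: for a 2-row matrix the singular values follow from the eigenvalues of the 2x2 Gram matrix, so no linear algebra library is needed:

```python
# Sketch: a Von Neumann style entropy for rectangular matrices,
# computed from singular values. 2-row matrices keep the SVD analytic.
import math

def singular_values_2xN(A):
    """Singular values of a 2-row matrix via the 2x2 Gram matrix A A^T."""
    g11 = sum(a * a for a in A[0])
    g22 = sum(b * b for b in A[1])
    g12 = sum(a * b for a, b in zip(A[0], A[1]))
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return [math.sqrt(max(e, 0.0)) for e in eigs]

def svd_entropy(A):
    """Shannon entropy of the normalized squared singular values."""
    sq = [s * s for s in singular_values_2xN(A)]
    total = sum(sq)
    probs = [v / total for v in sq if v > 0]
    return -sum(p * math.log2(p) for p in probs) + 0.0  # + 0.0 avoids -0.0

# Rank-1 matrix -> one nonzero singular value -> entropy 0 ("pure state").
print(round(svd_entropy([[1, 2, 3], [2, 4, 6]]), 6))  # 0.0
# A matrix with energy spread evenly over both singular values.
print(round(svd_entropy([[1, 0, 0], [0, 1, 0]]), 6))  # 1.0
```

A noisier (higher-rank, more evenly spread) matrix yields higher entropy, which is the property that makes the measure usable for comparing denoising filters.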
Abstract: This work presents a decision tree-based classification for
the disbursement of scholarships. A tree-based data mining
classification technique is used in order to determine the generic
rules for disbursing the scholarship. Based on the rules defined from
the tree, the system determines the class (status) to which an
applicant belongs: Granted or Not Granted. Applicants who fall into the
Granted class are awarded the scholarship, while those in the Not
Granted class are unsuccessful. An algorithm for classifying applicants
based on the rules from the tree-based classification was also
developed. The tree-based classification was adopted for its
efficiency, effectiveness, and ease of comprehension. The
system was tested with the data of National Information Technology
Development Agency (NITDA), Abuja, a parastatal of the Federal
Ministry of Communication Technology mandated to develop and regulate
information technology in Nigeria. The system was found to work
according to specification and is therefore recommended for all
scholarship-disbursing organizations.
Abstract: Computer-aided diagnosis systems provide a vital
second opinion to radiologists in the detection of early signs of
breast cancer in mammogram images. Architectural distortions, masses
and microcalcifications are the major abnormalities. In this paper, a
computer-aided diagnosis system is proposed for distinguishing abnormal
mammograms with architectural distortion from normal mammograms. Four
types of texture features (GLCM, GLRLM, fractal and spectral) are
extracted for the regions of suspicion. A support vector machine is
used as the classifier. The proposed system yielded an overall
sensitivity of 96.47% and an accuracy of 96% for mammogram images
collected from the Digital Database for Screening Mammography (DDSM).
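As an illustration of one of the feature families named above, a minimal gray-level co-occurrence matrix (GLCM) with two Haralick-style features (contrast and energy) can be sketched as follows; the image patch, offset, and feature choice are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: GLCM for a tiny gray-level patch and two texture features.
# The patch values (gray levels 0..3) are made up.

def glcm(patch, levels, dx=1, dy=0):
    """Normalized co-occurrence counts for pixel pairs at offset (dx, dy)."""
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(patch), len(patch[0])
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dy, c + dx
            if 0 <= rr < rows and 0 <= cc < cols:
                counts[patch[r][c]][patch[rr][cc]] += 1
    total = sum(map(sum, counts))
    return [[v / total for v in row] for row in counts]

def contrast(P):
    n = len(P)
    return sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(P):
    return sum(p * p for row in P for p in row)

patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
P = glcm(patch, levels=4)
print(round(contrast(P), 3), round(energy(P), 3))  # 0.583 0.167
```

A full system would compute such features over several offsets and angles for each region of suspicion and feed them to the SVM classifier.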
Abstract: The global demand for long-tailed macaques for
medical experimentation has continued to increase. Fulfillment of
Indonesian export demands has been mostly from natural habitats,
based on a harvesting quota. This quota has been determined
according to the total catch for a given year, and not based on
consideration of any demographic parameters or physical
environmental factors with regard to the animal; hence threatening
the sustainability of the various populations. It is therefore necessary
to formulate a method for calculating a sustainable harvesting quota,
based on population parameters in natural habitats. Considering the
possibility of variations in habitat characteristics and population
parameters, a time series observation of demographic and
physical/biotic parameters, in various habitats, was performed on 13
groups of long-tailed macaques, distributed throughout the West
Java, Lampung and Yogyakarta areas of Indonesia. These provinces
were selected for comparison of the influence of human/tourism
activities. The population data collected included life expectancy by
age class, numbers of individuals by sex and age class, and the ratio
of infants to reproductive females. The estimation of population growth
was based on a
population dynamic growth model: the Leslie matrix. The harvesting
quota was calculated as being the difference between the actual
population size and the MVP (minimum viable population) for each
sex and age class. Observation indicated variations in group size
(24–106 individuals), sex ratio (1:1 to 1:1.3), life expectancy (0.30
to 0.93), and ratio of infants to reproductive females (0.23 to 1.56).
Subsequent calculations showed that sustainable harvesting quotas for
the studied groups of long-tailed macaques ranged from 29 to 110
individuals. An estimation model of the MVP for each age class was
formulated as Log Y = 0.315 + 0.884 Log Ni, where Ni is the number of
individuals in the ith age class. This study also found that life
expectancy for the juvenile age class was affected by the humidity
under tree stands and by the density of dietary plants at the sapling,
pole and tree stages (equation: Y = 2.296 - 1.535 RH + 0.002 Kpcg -
0.002 Ktg - 0.001 Kphn, R2 = 89.6%, significance value 0.001). By
contrast, for the sub-adult/adult age class, life expectancy was
significantly affected by slope (equation: Y = 0.377 + 0.012 Kml, R2 =
50.4%, significance level 0.007). The infant-to-reproductive-female
ratio was affected by the humidity under tree stands and by the density
of dietary plants at the sapling and pole stages (equation: Y = -1.432
+ 2.172 RH - 0.004 Kpcg + 0.003 Ktg, R2 = 82.0%, significance level
0.001). This research confirmed the importance
of population parameters in determining the minimum viable
population, and that MVP varied according to habitat characteristics
(especially food availability). It would be difficult therefore, to
formulate a general mathematical equation model for determining a
harvesting quota for the species as a whole.
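The Leslie-matrix projection underlying the growth estimates can be sketched as a single time step in which births enter the first age class and survivors move up one class. The fecundity and survival values below are hypothetical, not the study's parameters:

```python
# Sketch: one projection step of a Leslie matrix population model.
# All demographic rates here are made up for illustration.

def leslie_step(fecundity, survival, counts):
    """Project age-class counts one time step forward.
    fecundity[i]: offspring per individual in age class i;
    survival[i]: probability of surviving from class i to class i+1."""
    births = sum(f * n for f, n in zip(fecundity, counts))
    aged = [s * n for s, n in zip(survival, counts[:-1])]
    return [births] + aged

# Three age classes: infant, juvenile, adult.
fecundity = [0.0, 0.3, 0.8]   # hypothetical per-capita birth rates
survival  = [0.6, 0.9]        # hypothetical survival probabilities
counts    = [30, 20, 50]
print(leslie_step(fecundity, survival, counts))
```

Iterating this step gives the projected population trajectory; a sustainable quota can then be read off as the surplus of the projected population over the MVP for each sex and age class, as the abstract describes.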
Abstract: One of the most critical decision points in the design of a
face recognition system is the choice of an appropriate face representation.
Effective feature descriptors are expected to convey sufficient, invariant
and non-redundant facial information. In this work we propose a set of
Hahn moments as a new approach to feature description. Hahn moments
have been widely used in image analysis due to their invariance,
non-redundancy and ability to extract features both globally and
locally. To assess the applicability of Hahn moments to face
recognition, we conduct two experiments on the Olivetti Research
Laboratory (ORL) database and the University of Notre Dame (UND) X1
biometric collection. A fusion of the global features with features
from local facial regions is used as input to a conventional k-NN
classifier. The
method reaches an accuracy of 93% of correctly recognized subjects for
the ORL database and 94% for the UND database.
Abstract: Speaker Identification (SI) is the task of establishing the
identity of an individual based on his/her voice characteristics. The SI
task is typically achieved by two-stage signal processing: training and
testing. The training process calculates speaker specific feature
parameters from the speech and generates speaker models
accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though the
performance of speaker identification systems has improved due to
recent advances in speech processing techniques, there is still room
for improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstral Coefficients
(MFCC) for feature extraction and a suitable combination of vector
quantization (VQ) and the Gaussian Mixture Model (GMM) together
with the Expectation Maximization (EM) algorithm for speaker
modeling. The use of a Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and Statistical
Modeling of Background Noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. Using the Linde-Buzo-Gray (LBG) clustering
algorithm to initialize the GMM before estimating the underlying
parameters in the EM step also improved the convergence rate and system
performance. In addition, a relative index is used as a confidence
measure when the GMM and VQ identifications contradict each other.
Simulation results on the voxforge.org speech database, carried out
using MATLAB, highlight the efficacy of the proposed method compared to
earlier work.
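The LBG initialization mentioned above can be sketched as follows: the codebook grows by perturbing each centroid into two and refining with k-means-style updates. One-dimensional data stand in for MFCC vectors here, and all values are made up:

```python
# Sketch: Linde-Buzo-Gray (LBG) codebook growth by centroid splitting,
# on hypothetical 1-D data for brevity.

def kmeans_refine(data, centroids, iters=10):
    """A few k-means style assignment/update passes."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for x in data:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def lbg(data, codebook_size, eps=0.01):
    centroids = [sum(data) / len(data)]          # start with the global mean
    while len(centroids) < codebook_size:
        centroids = [c * (1 + eps) for c in centroids] + \
                    [c * (1 - eps) for c in centroids]   # split each centroid
        centroids = kmeans_refine(data, centroids)
    return sorted(centroids)

data = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8, 9.1, 8.9, 9.0]
print([round(c, 2) for c in lbg(data, 4)])
```

In the paper's setting, the resulting codebook centroids would seed the GMM component means before running EM, which is what improves the convergence rate.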
Abstract: This paper presents guidelines on how to design
social media into higher education courses. The research methodology
used a survey approach, with a questionnaire on designing social media
into higher education courses. Thirty-one lecturers completed the
questionnaire, and the data were analyzed using frequencies and
percentages. The results reflect the lecturers’ opinions, as follows: 1)
Lecturers deem that the most suitable learning theory is Collaborative
Learning. 2) Lecturers consider that the most important learning and
innovation skills in the 21st century are communication and
collaboration skills. 3) Lecturers think that the most suitable
evaluation technique is authentic assessment. 4) Lecturers consider
that the most appropriate portion used as blended learning should be
70% in the classroom setting and 30% online.
Abstract: Discursive practices enacted by educators in
kindergarten create a blueprint for how the educational trajectories of
students with disabilities are constructed. This two-year ethnographic
case study critically examines educators’ relationships with students
considered to present challenging behaviors in one kindergarten
classroom located in a predominantly White middle class school
district in the Northeast of the United States. Focusing on the
language and practices used by one special education teacher and
three teaching assistants, this paper analyzes how teacher responses
to students’ behaviors construct and position students over one year
of kindergarten education. Using critical discourse analysis, it shows
that educators understand students’ behaviors as deficits requiring
consequences. This study highlights how educators’ responses reflect
students' individual characteristics including family background,
socioeconomics and ability status. This paper offers an in-depth analysis
of two students’ stories, which evidenced that the language used by
educators amplifies the social positioning of students within the
classroom and creates a foundation for who they are constructed to
be. Through exploring routine language and practices, this paper
demonstrates that educators outlined a blueprint of kindergartners,
which positioned students as learners in ways that became the ground
for either a limited or a promising educational pathway for them.
Abstract: In this article, we deal with a variant of the classical
course timetabling problem that has a practical application in many
areas of education. In particular, in this paper we are interested in
high school remedial courses. The purpose of such courses is to
provide under-prepared students with the skills necessary to succeed
in their studies. A student might be under-prepared in an entire
course, or only in a part of it. The limited availability
of funds, as well as the limited amount of time and teachers at
disposal, often requires schools to choose which courses and/or which
teaching units to activate. Thus, schools need to model the training
offer and the related timetabling, with the goal of ensuring the
highest possible teaching quality, by meeting the above-mentioned
financial, time and resources constraints. Moreover, there are some
prerequisites between the teaching units that must be satisfied. We
first present a Mixed-Integer Programming (MIP) model to solve
this problem to optimality. However, the presence of many peculiar
constraints inevitably increases the complexity of the mathematical
model. Thus, solving it with a general-purpose solver is feasible only
for small instances, while solving real-life-sized instances requires
specific techniques or heuristic approaches. For this purpose, we also
propose a heuristic
approach, in which we make use of a fast constructive procedure
to obtain a feasible solution. To assess our exact and heuristic
approaches we perform extensive computational results on both
real-life instances (obtained from a high school in Lecce, Italy) and
randomly generated instances. Our tests show that the MIP model is
never solved to optimality, with an average optimality gap of 57%. On
the other hand, the heuristic algorithm is much faster (in about 50% of
the considered instances it converges in approximately half of the time
limit) and in many cases improves on the objective function value
obtained by the MIP model; this improvement ranges between 18% and 66%.
Abstract: Determination of genetic variation is useful for plant
breeding and hence production of more efficient plant species under
different conditions, like drought stress. In this study a sample of 28
recombinant inbred lines (RILs) of wheat developed from the cross of
Norstar and Zagross varieties, together with their parents, were
evaluated for two years (2010-2012) under normal and water stress
conditions using a split-plot design with three replications. Main plots
included two irrigation treatments of 70 and 140 mm evaporation
from Class A pan, and sub-plots consisted of 30 genotypes. The effects
of genotypes, and of the interactions of genotypes with years and water
regimes, were significant for all characters. Significant genotypic
effect implies the existence of genetic variation among the lines
under study. Heritability estimates were high for 1000 grain weight
(0.87). Biomass and grain yield showed the lowest heritability values
(0.42 and 0.50, respectively). Highest genotypic and phenotypic
coefficients of variation (GCV and PCV) belonged to harvest index.
Moderate genetic advance for most of the traits suggested the
feasibility of selection among the RILs under investigation. Some
RILs were higher yielding than either parent in both environments.
Abstract: To help the expert validate association rules
extracted from data, several quality measures have been proposed in the
literature. We distinguish two categories: objective and subjective
measures. The former depend on a fixed threshold and on the quality of
the data from which the rules are extracted. The latter consist in
providing the expert with tools for exploring and visualizing rules
during the evaluation step. However, the number of extracted rules to
validate remains high, so mining the rules manually is a very hard
task. To solve this problem, we propose in this paper a semi-automatic
method to assist the expert during association rule validation. Our
method uses rule-based classification as follows: (i) we transform
association rules into classification rules (classifiers); (ii) we use
the generated classifiers for data classification; (iii) we visualize
the association rules with their classification quality, to inform the
expert and assist him during the validation process.
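Steps (i) and (ii) of the method can be sketched as follows: an association rule whose consequent is a class label becomes a classification rule, and a record is classified by the first rule whose antecedent it contains. The rules and records below are hypothetical:

```python
# Sketch: applying association rules rewritten as classification rules.
# Rules and records are made up for illustration.

def classify(record, rules, default="unknown"):
    """Apply the first rule whose antecedent is contained in the record."""
    items = set(record)
    for antecedent, label in rules:
        if set(antecedent) <= items:
            return label
    return default

# Association rules rewritten as (antecedent, predicted class) pairs.
rules = [
    (("bread", "butter"), "breakfast_buyer"),
    (("beer",), "evening_buyer"),
]

print(classify(["bread", "butter", "milk"], rules))  # breakfast_buyer
print(classify(["beer", "chips"], rules))            # evening_buyer
print(classify(["milk"], rules))                     # unknown
```

Comparing such predictions against the true labels yields the classification quality that step (iii) visualizes for the expert.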