Abstract: STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets whose attribute values are contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
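Since finding statistically significant rules is the core of STRIM, the idea can be sketched as a one-proportion z-test on each candidate rule: a rule "if C then d" is significant when the decision distribution among samples matching C deviates strongly from the marginal distribution. The sketch below uses assumed notation and invented counts, not the paper's data:

```python
import math

def rule_z_value(n_match, n_hit, p0):
    """One-proportion z-test for a candidate rule "if C then d".

    n_match: samples whose condition part matches C
    n_hit:   of those, samples whose decision equals d
    p0:      marginal probability of decision d in the whole table
    Returns the z statistic; large positive values indicate a
    statistically significant rule.
    """
    p_hat = n_hit / n_match
    se = math.sqrt(p0 * (1 - p0) / n_match)
    return (p_hat - p0) / se

# Hypothetical example: decision d covers 1/6 of the table overall,
# but 55 of the 60 samples matching C take decision d.
z = rule_z_value(60, 55, 1 / 6)
```

Comparing z against a critical value then ties the required dataset size to the rule length, since longer condition parts match fewer samples and shrink n_match.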
Abstract: Obesity and osteoporosis are two diseases whose increasing prevalence and high impact on global morbidity and mortality over the last two decades have earned them the status of major health threats worldwide. Obesity appears to affect bone metabolism through complex mechanisms. Conflicting data on the connection between bone mineral density and fracture prevalence in obese patients are widely reported in the literature. There is evidence that the correlation between weight and fracture risk is site-specific. This study aims to determine the connection between bone mineral density (BMD) and trabecular bone score (TBS) parameters in Ukrainian women suffering from obesity. We examined 1025 women aged 40-89 years, divided into groups according to body mass index: Group A included 360 women with obesity (BMI ≥30 kg/m2), and Group B 665 women without obesity (BMI <30 kg/m2).
Abstract: This paper reviews model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising management tool for construction operations that improves the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling research consists of conceptual or process models which present general management frameworks and do not relate to acknowledged soft Operations Research methods. We particularly focus on model-based quantitative research and categorize the CSCM models by their scope, objectives, modeling approach, solution methods and software used. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system. Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude the paper by giving future research directions and emphasizing the need to develop optimization models for integrated CSCM. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM in other industries should be taken into account when modeling CSCs, but not without translating the generic concepts to the context of the construction industry.
Abstract: The present study aimed to determine potential agricultural lands (PALs) on Gokceada (Imroz) Island in Canakkale province, Turkey. Seven-band Landsat 8 OLI images acquired on July 12 and August 13, 2013, and their 14-band combination image were used to identify the current Land Use Land Cover (LULC) status. Principal Component Analysis (PCA) was applied to the three Landsat datasets in order to reduce the correlation between the bands. A total of six original and PCA images were classified using a supervised classification method to obtain LULC maps comprising six main classes (“Forest”, “Agriculture”, “Water Surface”, “Residential Area-Bare Soil”, “Reforestation” and “Other”). Accuracy assessment was performed by checking the accuracy of 120 randomized points for each LULC map. The best overall accuracy and Kappa statistic values (90.83% and 0.8791, respectively) were found for the PCA images generated from the 14-band combined image, called 3-B/JA.

A Digital Elevation Model (DEM) with 15 m spatial resolution (ASTER) was used to account for topographical characteristics. Soil properties were obtained by digitizing 1:25000-scale soil maps of the Rural Services Directorate General. Potential Agricultural Lands (PALs) were determined using Geographic Information Systems (GIS). The procedure was applied on the assumption that the “Other” class of the LULC map may be used for agricultural purposes in the future. Overlay analysis was conducted using Slope (S), Land Use Capability Class (LUCC), Other Soil Properties (OSP) and Land Use Capability Sub-Class (SUBC) properties.

A total of 901.62 ha within the “Other” class (15798.2 ha) of the LULC map was determined as PALs. These lands were ranked as “Very Suitable”, “Suitable”, “Moderate Suitable” and “Low Suitable”. It was determined that 8.03 ha were classified as “Very Suitable”, 18.59 ha as “Suitable” and 11.44 ha as “Moderate Suitable” for PALs. In addition, 756.56 ha were found to be “Low Suitable”. The results obtained from this preliminary study can serve as a basis for further studies.
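The overall accuracy and Kappa statistic reported above are both derived from a confusion matrix of reference vs. classified points. A stdlib Python sketch, using an invented 3-class matrix rather than the study's 120-point assessment:

```python
def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = classified)."""
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(len(cm)))
    po = diag / n                        # observed agreement
    pe = sum(                            # chance agreement
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / n**2
    return po, (po - pe) / (1 - pe)

# Hypothetical 3-class confusion matrix (not the paper's data)
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 2, 41]]
acc, kappa = overall_accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported as a plain coefficient rather than a percentage.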
Abstract: The boiling process is characterized by the rapid formation of vapour bubbles at the solid–liquid interface (nucleate boiling) with pre-existing vapour or gas pockets. Computational fluid dynamics (CFD) is an important tool for studying bubble dynamics. In the present study, a CFD simulation has been carried out to determine the bubble detachment diameter and terminal velocity. The volume-of-fluid method is used to model the bubble and its surroundings by solving a single set of momentum equations and tracking the volume fraction of each fluid throughout the domain. In the simulation, the bubble is generated by allowing water vapour to enter a cylinder filled with liquid water through an inlet at the bottom. Once fully formed, the bubble detaches from the surface and rises, accelerating under the net balance between the buoyancy force and viscous drag. When these forces exactly balance each other, the bubble attains a constant terminal velocity. The bubble detachment diameter and terminal velocity are captured by the monitor function provided in FLUENT, and are compared with established results based on the shape of the bubble; good agreement is found between the simulation and the established correlations.
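The terminal-velocity argument above, buoyancy exactly balancing drag, can be written out directly. A sketch with assumed property values and an assumed constant drag coefficient; the paper's simulation resolves the drag from the flow field instead:

```python
import math

def terminal_velocity(d, rho_l, rho_g, cd, g=9.81):
    """Terminal rise velocity of a spherical bubble of diameter d
    from the balance of buoyancy against drag:
        (rho_l - rho_g) * g * V = 0.5 * rho_l * Cd * A * U**2
    with volume V = pi*d**3/6 and projected area A = pi*d**2/4,
    which rearranges to
        U = sqrt(4*g*d*(rho_l - rho_g) / (3*Cd*rho_l)).
    """
    return math.sqrt(4 * g * d * (rho_l - rho_g) / (3 * cd * rho_l))

# Hypothetical values: a 5 mm vapour bubble in water, Cd assumed 0.95
u = terminal_velocity(5e-3, 998.0, 0.6, 0.95)
```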
Abstract: In this paper, we present a model-based regression test suite reduction approach that uses EFSM model dependence analysis and a probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and then the probability-driven greedy algorithm is adopted to select the minimum set of test cases from the reduced regression test suite that covers all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.
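The selection step, picking a minimum set of test cases that covers all interaction patterns, is a set-cover problem, and a probability-driven greedy heuristic can be sketched as follows. The pattern sets and weights here are invented, and the paper's algorithm details may differ:

```python
def greedy_reduce(test_patterns, weight=None):
    """Greedy selection of a small test subset covering all
    interaction patterns.  test_patterns maps test-case id -> set of
    patterns it covers; the optional weight maps pattern -> probability
    used to prioritise tests covering the most likely patterns."""
    weight = weight or {}
    uncovered = set().union(*test_patterns.values())
    selected = []
    while uncovered:
        # score = (weighted) number of still-uncovered patterns hit
        best = max(
            test_patterns,
            key=lambda t: sum(weight.get(p, 1)
                              for p in test_patterns[t] & uncovered),
        )
        selected.append(best)
        uncovered -= test_patterns[best]
    return selected

# Hypothetical suite: 4 test cases, 5 interaction patterns
suite = {
    "t1": {"p1", "p2", "p3"},
    "t2": {"p3", "p4"},
    "t3": {"p4", "p5"},
    "t4": {"p1"},
}
reduced = greedy_reduce(suite)
```

Greedy set cover is not guaranteed optimal, but it is the standard practical choice since exact minimum cover is NP-hard.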
Abstract: Academicians at the Arab Open University have always voiced their concern about the efficacy of the blended learning process. Based on 75% independent study and 25% face-to-face tutorials, it poses the challenge of a predisposition to adjustment. Being used to the psychology of traditional educational systems, AOU students cannot easily be weaned from being spoon-fed, and hence lack the motivation to plunge into self-study. For better involvement of AOU students in the learning practices, it is imperative to diagnose the factors that impede or increase their motivation. This is done through an empirical study grounded upon observations and tested hypotheses, aimed at monitoring and optimizing the students’ learning outcomes. Recommendations of the research follow from the findings.
Abstract: Designing cost-efficient, secure network protocols for Wireless Sensor Networks (WSNs) is a challenging problem because sensors are resource-limited wireless devices. Security services such as authentication and improved pairwise key establishment are critical to highly efficient sensor networks. For sensor nodes to communicate with each other securely and efficiently, the use of cryptographic techniques is necessary. In this paper, we present two key predistribution schemes that enable a mobile sink to establish a secure data-communication link, on the fly, with any sensor node. The intermediate nodes along the path to the sink are able to verify the authenticity and integrity of incoming packets using a predicted value of the key generated by the sender. The proposed schemes are based on pairwise keys with the mobile sink, and our analytical results clearly show that the schemes perform better in terms of network resilience to node capture than existing schemes when used in wireless sensor networks with mobile sinks.
Abstract: With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics of opinion-mining research, merges with behavior analysis for determining political affiliation in texts, which constitutes the subject of this paper. This study aims to classify text in news and blogs as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which were Linguistic Inquiry and Word Count (LIWC) features, were tested against 14 benchmark classification algorithms. In later experiments, the dimensionality of the feature vector was reduced using 7 feature selection algorithms. The results show that the “Decision Tree”, “Rule Induction” and “M5 Rule” classifiers, when used with the “SVM” and “IGR” feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic-based feature sets showed similar results. The feature “Function”, an aggregate feature of the linguistic category, was found to be the most discriminating of the 68 features, with an accuracy of 81% in classifying articles as either Republican or Democrat.
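Of the feature-selection methods mentioned, information gain ratio (IGR) is straightforward to sketch: it is the information gain of a feature normalized by the feature's split information. A toy example with invented feature values and labels, not the study's LIWC data:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum(
        (c / n) * math.log2(c / n)
        for c in (labels.count(v) for v in set(labels))
    )

def info_gain_ratio(feature, labels):
    """Information gain ratio of a discrete feature for class labels:
    (H(labels) - H(labels | feature)) / SplitInfo(feature)."""
    n = len(labels)
    gain = entropy(labels)
    split_info = 0.0
    for v in set(feature):
        subset = [y for f, y in zip(feature, labels) if f == v]
        p = len(subset) / n
        gain -= p * entropy(subset)
        split_info -= p * math.log2(p)
    return gain / split_info if split_info else 0.0

# Hypothetical data: a binned feature vs. Republican/Democrat labels
f = ["hi", "hi", "hi", "lo", "lo", "lo"]
y = ["R", "R", "R", "D", "D", "R"]
igr = info_gain_ratio(f, y)
```

Ranking the 68 features by such a score and keeping the top few is the essence of reducing the feature vector before classification.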
Abstract: In this paper, the goal programming methodology is applied to the multiple-objective problem of optimizing technological variants and the production plan. The optimization criteria are determined, and a multiple-objective linear programming model for the problem is formulated and solved. The obtained results are then analysed; they point to the possibility of efficiently applying the goal programming methodology to the problem of technological variant and production plan optimization. The paper highlights the advantages of the goal programming methodology compared to the Surrogate Worth Trade-off method for this problem.
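In generic goal programming form (a standard textbook formulation, not necessarily the exact model of the paper), each objective f_k is given a target g_k, and the weighted deviations from the targets are minimized:

```latex
\begin{aligned}
\min \quad & \sum_{k=1}^{K} \left( w_k^{-} d_k^{-} + w_k^{+} d_k^{+} \right) \\
\text{s.t.} \quad & f_k(\mathbf{x}) + d_k^{-} - d_k^{+} = g_k, \qquad k = 1,\dots,K, \\
& \mathbf{x} \in X, \quad d_k^{-},\, d_k^{+} \ge 0 .
\end{aligned}
```

Here $d_k^{-}$ and $d_k^{+}$ are the under- and over-achievement deviations from goal $g_k$, and the weights $w_k^{\pm}$ encode the priorities among the technological-variant and production-plan objectives.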
Abstract: Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effects of such noise and interference. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and superior performance over convolutional codes. Multimedia elements, which involve ample amounts of data, are best protected by Turbo codes. A Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which all the data in an information message are protected uniformly. Some applications require Unequal Error Protection (UEP), in which the level of protection is higher for important information bits than for other bits. In this work, the traditional Log-MAP decoding algorithm is enhanced by using optimized scaling factors for both component decoders. The error-correcting performance with UEP in an Additive White Gaussian Noise (AWGN) channel and under Rayleigh fading is analyzed for the transmission of images, with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log-MAP, Modified Log-MAP (MlogMAP) and Enhanced Log-MAP (ElogMAP) algorithms for image transmission. The MlogMAP algorithm is found to be best at lower Eb/N0 values, but at higher Eb/N0 ElogMAP performs better with optimized scaling factors. The performance comparison of the AWGN and fading channels indicates the robustness of the proposed algorithm. According to the performance of the three message classes, class 3 is protected more strongly than the other two. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for image transmission compared to the Log-MAP and MlogMAP decoding algorithms.
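The difference between Log-MAP and its approximations hinges on the Jacobian logarithm (max*) and on scaling the extrinsic information exchanged between the two component decoders. A minimal sketch; the 0.75 scaling factor is a commonly quoted Max-Log-MAP value, not one of the optimized factors of this paper:

```python
import math

def max_star(a, b):
    """Jacobian logarithm at the heart of Log-MAP decoding:
    log(e**a + e**b) = max(a, b) + log(1 + e**(-|a - b|)).
    Max-Log-MAP drops the correction term and keeps only max(a, b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def scaled_extrinsic(llr_ext, s=0.75):
    """Scaling applied to the extrinsic LLR passed between the two
    component decoders; it partially compensates the optimism of the
    dropped correction term in the max approximation."""
    return s * llr_ext

exact = max_star(2.0, 2.5)
```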
Abstract: The purpose of the study was to find out the effects of Aquatic and Land plyometric training on selected physical variables in intercollegiate male handball players. To achieve this purpose, forty-five handball players of Sardar Vallabhbhai National Institute of Technology, Surat, Gujarat were selected at random; their ages ranged from 18 to 21 years. The selected players were divided into three equal groups of fifteen players each. Group I underwent Aquatic plyometric training and Group II underwent Land plyometric training, three days per week for twelve weeks, while Group III served as the control group and did not participate in any special training programme apart from the regular activities of their curriculum. The physical fitness variables of speed, leg explosive power and agility were selected as dependent variables, and all players of the three groups were tested on them prior to and immediately after the training programme. Analysis of covariance was used to test for significant differences among the groups. Since three groups were compared, whenever the obtained ‘F’ ratio for the adjusted post-test was found to be significant, Scheffé’s test was applied to find the paired mean differences, if any. The 0.05 level of confidence was fixed as the level of significance for testing the ‘F’ ratio obtained by the analysis of covariance, which was considered appropriate. The results of the study indicate that speed, leg explosive power and agility improved significantly due to Aquatic and Land plyometric training.
Abstract: This paper examines the connection between emoticons and politeness in written computer-mediated communication. It studies whether there are differences in the use of emoticons between tweets written in Czech and in English. The assumptions about the use of emoticons were based on the use of greetings and thanks in real, face-to-face situations. The first assumption, that a welcome greeting phrase would be accompanied by a positive emoticon, was correct. For farewell greetings, however, both positive and negative emoticons are possible, though the results show a lower frequency of negative emoticons in this context. Positive and negative emoticons were also quite often found together in the same tweet. The expression of gratitude is associated with positive emotions. The results show that emoticons very often accompany polite phrases of greeting and thanks, in both Czech and English, and their use with the studied polite phrases shows that emoticons have become an integral part of these phrases.
Abstract: The main function of Medium Access Control (MAC) is to share the channel efficiently between all nodes. In a real-time scenario, a certain amount of bandwidth is wasted in back-off periods: more bandwidth is wasted in the idle state if the back-off period is very high, while collisions may occur if the back-off period is small, so the back-off period must be optimized. The main objective of this work is to reduce the delay due to the back-off period, thereby reducing collisions and increasing throughput. A method called the virtual back-off algorithm (VBA) is used to optimize the back-off period, increasing throughput and reducing collisions. The main idea is to optimize the number of transmissions for every node; a counter, whose value represents a sequence number, is introduced at each node to implement this idea. VBA comes in two variants, VBA with counter sharing (VBA-CS) and VBA with no counter sharing (VBA-NCS), which are compared for various parameters. Simulation is done in the NS-2 environment, and the results obtained are promising.
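As a rough illustration of the counter idea (the exact VBA rules, and the CS/NCS difference, go beyond what the abstract states), nodes can be made to transmit when the channel's sequence number matches their own counter, so that no slot is lost to a random back-off and no two nodes collide:

```python
def virtual_backoff_schedule(node_counters, slots):
    """Minimal sketch of counter-based scheduling in the spirit of a
    virtual back-off algorithm: in each slot, the node whose counter
    matches the slot's sequence number transmits, so transmissions are
    ordered deterministically instead of drawn from a random back-off
    window."""
    schedule = []
    n = len(node_counters)
    for slot in range(slots):
        seq = slot % n
        sender = next(node for node, c in node_counters.items() if c == seq)
        schedule.append(sender)
    return schedule

# Hypothetical 3-node network with counters assigned 0, 1, 2
sched = virtual_backoff_schedule({"A": 0, "B": 1, "C": 2}, 6)
```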
Abstract: Venture capital has become an increasingly advanced and effective source of financing for innovation projects, which are associated with a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and in creating the prosperity of the modern economy. In Russia, many of the preconditions necessary for an effective venture investment system are present: a network of public institutes for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, the current system does not show the necessary level of efficiency in practice, which can be substantially explained by the absence of an accurate plan of action for forming the national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry, using the example of the IT sector in Russia. The choice of the sector is based on the fact that this segment is the main driver of venture capital market growth in Russia, and that the necessary data exist. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the previous (first-round) investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. Since the most important determinant of the value of the second-round investment is the value of the first-round investment, the most competitive start-up teams on the Russian market are those that can attract more money at the start, and target market growth is not a factor of crucial importance. This supports the view that VC in Russia is driven by endogenous factors rather than by exogenous ones based on global market growth.
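The regression described, second-round size on first-round volume plus a reputable-investor dummy, is ordinary least squares. A self-contained sketch via the normal equations, with invented numbers rather than the paper's data:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y,
    solved with Gaussian elimination (stdlib only)."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                       # forward elimination
        piv = A[i][i]
        for j in range(i + 1, k):
            f = A[j][i] / piv
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k                         # back substitution
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

# Hypothetical rows: [intercept, first-round size, reputable-investor
# dummy]; second-round sizes invented to fit beta = [1, 2, 1] exactly.
X = [[1.0, 2.0, 0.0],
     [1.0, 3.0, 1.0],
     [1.0, 5.0, 0.0],
     [1.0, 4.0, 1.0]]
y = [5.0, 8.0, 11.0, 10.0]
beta = ols(X, y)
```

The coefficient on the dummy is then the estimated premium on second-round size from a high-reputation first-round investor, holding first-round volume fixed.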
Abstract: There are few effective guidelines for selecting design parameters to control springback of advanced high-strength steel sheet metal in the U-channel cold forming process. This paper presents the development of a predictive model for springback in the U-channel forming of advanced high-strength steel sheet, employing Response Surface Methodology (RSM). Experiments were performed on dual-phase steel sheet, DP590, in the U-channel forming process, and a design of experiments (DoE) approach was used to investigate the effects of four input factors, namely blank holder force (BHF), clearance (C), punch travel (Tp) and rolling direction (R), each at two levels, using a full factorial design (2⁴). Analysis of variance (ANOVA) showed that blank holder force (BHF), clearance (C) and punch travel (Tp) have a significant effect on the springback of the flange angle (β2) and the wall opening angle (β1), while the rolling direction (R) factor is insignificant. The significant parameters were optimized in order to reduce the springback behavior using a Central Composite Design (CCD) in RSM, and the optimum parameters were determined. A regression model for springback was developed, and the effects of the individual parameters and their responses were also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
Abstract: The present study is concerned with the problem of determining the shape of the free-surface flow in a hydraulic channel with an uneven bottom. For the mathematical formulation of the problem, the two-dimensional, steady, irrotational flow of water is assumed inviscid and incompressible. Solutions of the nonlinear problem are obtained using the usual conformal mapping theory and Hilbert’s technique. An experimental study, for comparison with the obtained results, has been conducted in a hydraulic channel in both the subcritical and supercritical regimes.
Abstract: This study addresses the concept of a Sustainable Building Environmental Model (SBEM) developed to optimize energy consumption in air conditioning and ventilation (ACV) systems without any deterioration of indoor environmental quality (IEQ). The SBEM incorporates two main components: an adaptive comfort temperature control module (ACT) and a new carbon dioxide demand control module (nDCV). These two modules take an innovative approach to maintaining satisfaction with indoor environmental quality at optimum energy consumption, and provide a rational basis for effective control. A total of 2133 sets of measurements of indoor air temperature (Ta), relative humidity (Rh) and carbon dioxide concentration (CO2) were collected in Hong Kong offices to investigate the potential of integrating the SBEM. A simulation was used to evaluate the dynamic performance of the energy and air conditioning system with the SBEM integrated in an air-conditioned building; it gives a clear picture of the control strategies and allows the controllers to be pre-tuned before use in real systems. With the integration of the SBEM, the simulation showed savings of up to 12.3% of overall electricity consumption, while maintaining the average carbon dioxide concentration within 1000 ppm and occupant dissatisfaction within 20%.
Abstract: Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. On a database of 60 video sequences, our system achieved classification accuracies of 80% for interaction events and 95% for non-interaction events. In summary, we have explored a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting).
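The occlusion handling described, predicting through frames where an object is hidden, is the standard behaviour of a constant-velocity Kalman filter: when no measurement arrives, only the prediction step runs. A 1-D sketch with invented measurements (the paper's tracker works on image coordinates, and its noise parameters are not given):

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def kalman_step(x, P, z=None, dt=1.0, q=1e-3, r=0.25):
    """One constant-velocity Kalman filter step for a 1-D coordinate.
    State x = [[pos], [vel]].  When the measurement z is None (object
    occluded), only the prediction runs, so the trajectory stays
    complete through the occlusion."""
    F = [[1.0, dt], [0.0, 1.0]]
    x = mat_mul(F, x)                                    # predict state
    P = mat_add(mat_mul(mat_mul(F, P), transpose(F)),    # predict cov.
                [[q, 0.0], [0.0, q]])
    if z is not None:
        s = P[0][0] + r                    # innovation covariance (H = [1 0])
        k = [P[0][0] / s, P[1][0] / s]     # Kalman gain
        y = z - x[0][0]                    # innovation
        x = [[x[0][0] + k[0] * y], [x[1][0] + k[1] * y]]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x, P

# Track a point moving at ~2 px/frame, occluded for frames 6 and 7
x, P = [[0.0], [0.0]], [[1.0, 0.0], [0.0, 1.0]]
for z in [2.0, 4.1, 5.9, 8.0, 10.1, None, None, 16.0]:
    x, P = kalman_step(x, P, z)
```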
Abstract: To explore how the brain may recognise objects in its general, accurate and energy-efficient manner, this paper proposes the use of a neuromorphic hardware system formed from a Dynamic Vision Sensor (DVS) silicon retina in concert with the SpiNNaker real-time Spiking Neural Network (SNN) simulator. As a first step in the exploration of this platform, a recognition system for dynamic hand postures is developed, enabling the study of the methods used in the visual pathways of the brain. Inspired by the behaviour of the primary visual cortex, Convolutional Neural Networks (CNNs) are modelled using both linear perceptrons and spiking Leaky Integrate-and-Fire (LIF) neurons.

In this study’s largest configuration using these approaches, a network of 74,210 neurons and 15,216,512 synapses is created and operated in real time using 290 SpiNNaker processor cores in parallel, achieving 93.0% accuracy. A smaller network using only 1/10th of the resources is also created, again operating in real time, and is able to recognise the postures with an accuracy of around 86.4%, only 6.6% lower than the much larger system. The recognition rate of the smaller network developed on this neuromorphic system is sufficient for a successful hand posture recognition system, and demonstrates a much improved cost-to-performance trade-off in its approach.
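The spiking Leaky Integrate-and-Fire neuron used in these SNN models follows a simple membrane equation; a minimal discrete-time sketch, whose time constant, threshold and drive are illustrative rather than those of the SpiNNaker configuration:

```python
def lif_simulate(input_current, tau=20.0, r=1.0, v_rest=0.0,
                 v_thresh=1.0, dt=1.0):
    """Leaky Integrate-and-Fire neuron: the membrane potential leaks
    toward v_rest while integrating the input current,
        dv/dt = (-(v - v_rest) + r * i) / tau,
    and a spike is emitted (and the potential reset) whenever it
    crosses v_thresh."""
    v, spikes = v_rest, []
    for t, i in enumerate(input_current):
        v += dt * (-(v - v_rest) + r * i) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest
    return spikes

# A constant drive above threshold produces regular spiking
spikes = lif_simulate([1.5] * 100)
```

On SpiNNaker, many such update rules run in parallel on the ARM cores, one population per core, which is what allows the 74,210-neuron network to operate in real time.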