Abstract: Even though most researchers would agree that in
symbiotic relationships, like the one between parent and child,
influences become reciprocal over time, empirical evidence
supporting this claim is limited. The aim of the current study was to
develop and test a model describing the reciprocal influence between
characteristics of the parent-child relationship, such as closeness and
conflict, and the child's bullying and victimization experiences at
school. The study used data from the longitudinal Study of Early
Child Care, conducted by the National Institute of Child Health and
Human Development. The participants were dyads of early
adolescents (5th and 6th graders during the two data collection waves)
and their mothers (N=1364). Supporting our hypothesis, the findings
suggested a reciprocal association between bullying and positive
parenting, although this association was only significant for boys.
Victimization and positive parenting were not significantly
interrelated.
Abstract: In this paper we propose a new traffic simulation
package, TDMSim, which supports both macroscopic and
microscopic simulation on free-flowing and regulated traffic systems.
Both simulators are based on travel demands, which specify the
numbers of vehicles departing from origins to arrive at different
destinations. The microscopic simulator implements a car-following
model given the pre-defined routes of the vehicles but also
supports the rerouting of vehicles. We also propose a macroscopic
simulator which is built in integration with the microscopic simulator
to allow the simulation to be scaled for larger networks without
sacrificing the precision achievable through the microscopic
simulator. The macroscopic simulator also enables the reuse of
previous simulation results when simulating traffic on the same
networks at a later time. Validations have been conducted to show the
correctness of both simulators.
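A car-following model of the kind the microscopic simulator implements can be sketched with the Intelligent Driver Model (IDM). This is an illustrative stand-in, since the abstract does not name TDMSim's exact model, and all parameter values below are assumptions:

```python
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration (m/s^2).

    v: own speed, v_lead: leader speed, gap: bumper-to-bumper gap (m).
    v0 = desired speed, T = time headway, a/b = max acceleration and
    comfortable deceleration, s0 = minimum gap -- illustrative values.
    """
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

def step(positions, speeds, dt=0.5):
    """Advance a single-lane platoon one time step (index 0 is the free leader)."""
    accels = [0.0] * len(positions)
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i] - 5.0  # assume 5 m vehicle length
        accels[i] = idm_accel(speeds[i], speeds[i - 1], gap)
    new_speeds = [max(0.0, v + acc * dt) for v, acc in zip(speeds, accels)]
    new_positions = [x + v * dt for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds
```

A fast follower approaching a slow leader brakes and settles into a safe gap, which is the qualitative behavior a microscopic validation would check.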
Abstract: The problem of frequent pattern discovery is defined
as the process of searching for patterns such as sets of features or items that appear in data frequently. Finding such frequent patterns
has become an important data mining task because it reveals associations, correlations, and many other interesting relationships
hidden in a database. Most of the proposed frequent pattern mining
algorithms have been implemented with imperative programming
languages. Such a paradigm is inefficient when the set of patterns is
large and the frequent patterns are long. We suggest applying a
high-level declarative style of programming to the problem of frequent pattern
discovery. We consider two languages: Haskell and Prolog. Our
intuitive idea is that the problem of finding frequent patterns should
be efficiently and concisely implemented via a declarative paradigm
since pattern matching is a fundamental feature supported by most
functional languages and Prolog. Our frequent pattern mining
implementations in Haskell and Prolog confirm our hypothesis about
the conciseness of the programs. Comparative studies of lines of
code, speed, and memory usage for declarative versus imperative
programming are reported in this
paper.
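The paper's implementations are in Haskell and Prolog; as a neutral illustration of the task itself, a level-wise (Apriori-style) frequent itemset miner can be sketched as follows, with function and parameter names of our own choosing:

```python
from itertools import combinations

def frequent_patterns(transactions, min_support):
    """Level-wise (Apriori-style) frequent itemset mining.

    Returns {itemset: support_count} for every itemset that appears in
    at least `min_support` transactions.
    """
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    level = {frozenset([i]) for i in items}   # candidate 1-itemsets
    result = {}
    while level:
        counted = {s: support(s) for s in level}
        frequent = {s: c for s, c in counted.items() if c >= min_support}
        result.update(frequent)
        # Candidate generation: join pairs of frequent k-itemsets into (k+1)-itemsets.
        level = {a | b for a, b in combinations(frequent, 2)
                 if len(a | b) == len(a) + 1}
    return result
```

Only supersets of frequent itemsets are ever counted, which is the pruning property that both the declarative and imperative versions rely on.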
Abstract: Tumor classification is a key area of research in the
field of bioinformatics. Microarray technology is commonly used in
the study of disease diagnosis using gene expression levels. The
main drawback of gene expression data is that it contains thousands
of genes but very few samples. Feature selection methods are used
to select the informative genes from the microarray. These methods
considerably improve the classification accuracy. In the proposed
method, Genetic Algorithm (GA) is used for effective feature
selection. Informative genes are identified based on the T-Statistics,
Signal-to-Noise Ratio (SNR) and F-Test values. The initial candidate
solutions of GA are obtained from top-m informative genes. The
classification accuracy of k-Nearest Neighbor (kNN) method is used
as the fitness function for GA. In this work, kNN and Support Vector
Machine (SVM) are used as the classifiers. The experimental results
show that the proposed work is suitable for effective feature
selection. With the help of the selected genes, the GA-kNN method
achieves 100% accuracy on 4 of the 10 datasets and the GA-SVM
method on 5. The GA combined with kNN and SVM classifiers is thus
demonstrated to be an accurate method for microarray-based tumor
classification.
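The core loop described above, GA candidates scored by kNN accuracy as the fitness function, can be sketched in miniature. This is a simplified illustration on synthetic data: the encoding, operators, and hyperparameters below are our own assumptions, not the paper's:

```python
import random

def knn_loo_accuracy(X, y, genes, k=3):
    """Leave-one-out accuracy of kNN using only the selected gene indices."""
    def dist(a, b):
        return sum((a[g] - b[g]) ** 2 for g in genes)
    correct = 0
    for i in range(len(X)):
        neighbors = sorted((j for j in range(len(X)) if j != i),
                           key=lambda j: dist(X[i], X[j]))[:k]
        votes = [y[j] for j in neighbors]
        if max(set(votes), key=votes.count) == y[i]:
            correct += 1
    return correct / len(X)

def ga_select(X, y, top_genes, subset_size=3, pop=10, generations=15, seed=0):
    """GA over subsets of `top_genes`; fitness = kNN leave-one-out accuracy."""
    rng = random.Random(seed)
    population = [rng.sample(top_genes, subset_size) for _ in range(pop)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda s: knn_loo_accuracy(X, y, s), reverse=True)
        survivors = scored[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = rng.sample(list(set(a) | set(b)), subset_size)  # crossover
            if rng.random() < 0.3:                                  # mutation
                child[rng.randrange(subset_size)] = rng.choice(top_genes)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda s: knn_loo_accuracy(X, y, s))
```

In the paper, `top_genes` would come from the T-statistics, SNR, or F-test ranking rather than being the full index range used here.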
Abstract: In both developed and developing countries,
governments play a basic role in making policies, programs and
instruments which support the development of micro, small and
medium enterprises. One of the mechanisms employed to nurture
small firms for more than two decades is technology business
incubation. The main aim of this
research was to establish influencing factors in Technology Business
Incubator's effectiveness and their explanatory model. Therefore,
among 56 Technology Business Incubators in Iran, 32 active
incubators were selected and by stratified random sampling, 528
start-ups were chosen. The validity of the research questionnaires
was determined by expert consensus, item analysis, and factor
analysis; their reliability was calculated with Cronbach's alpha.
Data analysis was then performed with the SPSS and LISREL software packages.
Both organizational procedures and entrepreneurial behaviors were
significant mediators. Organizational procedures (P < .01, β = 0.45)
were a stronger mediator of Technology Business Incubator
effectiveness than entrepreneurial behavior (P < .01, β = 0.36).
Abstract: Data mining has been integrated into application systems to enhance the quality of the decision-making process. This study focuses on the integration of data mining technology and Knowledge Management Systems (KMS), owing to the ability of data mining technology to create useful knowledge from large volumes of data, while a KMS vitally supports the creation and use of knowledge. The integration of data mining technology and KMS is widely used in business for enhancing and sustaining organizational performance. However, there is a lack of studies that apply data mining technology and KMS in the education sector, particularly to students' academic performance, which could reflect the performance of an Institution of Higher Learning (IHL). Realizing its importance, this study seeks to integrate data mining technology and KMS to promote effective management of knowledge within IHLs. Several concepts from the literature are adapted to propose the new integrative data mining technology and KMS framework for an IHL.
Abstract: Simulation is a very powerful method for
high-performance, high-quality design of distributed systems and,
given their heterogeneity, complexity, and cost, may now be the only
feasible one. In Grid environments, for example, it is
hard and even impossible to perform scheduler performance
evaluation in a repeatable and controllable manner as resources and
users are distributed across multiple organizations with their own
policies. In addition, Grid test-beds are limited and creating an
adequately-sized test-bed is expensive and time consuming.
Scalability, reliability and fault-tolerance become important
requirements for distributed systems in order to support distributed
computation. A distributed system with such characteristics is called
dependable. Large environments, like the Cloud, offer unique
advantages, such as low cost, dependability, and QoS satisfaction for
all users. Resource management in large environments addresses
performant scheduling algorithms guided by QoS constraints. This
paper presents the performance evaluation of scheduling heuristics
guided by different optimization criteria. The algorithms for
distributed scheduling are analyzed in order to satisfy user
constraints while also taking into account the independent
capabilities of resources. This analysis acts as a profiling step for algorithm
calibration. The performance evaluation is based on simulation. The
simulator is MONARC, a powerful tool for large scale distributed
systems simulation. The novelty of this paper consists in synthetic
analysis results that offer guidelines for scheduler service
configuration and support empirically based decisions. The results
could be used in decisions regarding optimizations to existing Grid
DAG Scheduling and for selecting the proper algorithm for DAG
scheduling in various actual situations.
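Scheduling heuristics guided by an optimization criterion can be illustrated with the classic min-min heuristic, which greedily minimizes completion time (a common makespan-oriented baseline; this is not a reproduction of the MONARC algorithms themselves):

```python
def min_min_schedule(etc):
    """Min-min heuristic: etc[t][r] is the estimated time of task t on resource r.

    Repeatedly picks the (task, resource) pair with the smallest completion
    time, accounting for each resource's current ready time.
    Returns (assignment dict task -> resource, makespan).
    """
    ready = [0.0] * len(etc[0])           # per-resource ready time
    unscheduled = set(range(len(etc)))
    assignment = {}
    while unscheduled:
        t, r, finish = min(
            ((t, r, ready[r] + etc[t][r])
             for t in unscheduled for r in range(len(ready))),
            key=lambda x: x[2])
        assignment[t] = r
        ready[r] = finish
        unscheduled.remove(t)
    return assignment, max(ready)
```

A profiling step of the kind the paper describes would run such heuristics over many ETC matrices and compare the resulting makespans against QoS constraints.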
Abstract: The increasing use of the cell phone as a medium of human interaction is playing a vital role in solving riddles of crime as well. A young girl went missing from her home late one evening in August 2008, after her enraged relatives and villagers physically assaulted and chased her fiancé, who often frequented her home. Two years later, her mother lodged a complaint against the relatives and the villagers alleging that, after abduction, her daughter had been either sold or killed, as she had failed to trace her. On investigation, a rusted cell phone with a partially visible IMEI number, clothes, bangles, a human skeleton, etc., recovered from an abandoned well in May 2011, were examined in the lab. All hopes were pinned on the identity of the cell phone, the only evidence capable of fixing the scene of occurrence when supported by the call detail record (CDR) and of dispelling doubts about the mode of the sudden disappearance or death, since DNA technology did not help in establishing the identity of the deceased. After conventional scientific methods failed, the international mobile equipment identity (IMEI) number of the cell phone was generated by statistical analysis followed by online verification.
Abstract: It has become crucial over the years for nations to
improve their credit scoring methods and techniques in light of the
increasing volatility of the global economy. Statistical methods or
tools have been the favoured means for this; however artificial
intelligence or soft computing based techniques are becoming
increasingly preferred due to their proficient and precise nature and
relative simplicity. This work presents a comparison between Support
Vector Machines and Artificial Neural Networks, two popular soft
computing models, when applied to credit scoring. Among the
different criteria that can be used for comparison, accuracy,
computational complexity, and processing time are the selected
criteria used to evaluate both models. Furthermore, the German
credit scoring dataset, a real-world dataset, is used to train and test
both developed models. Experimental results obtained from our study
suggest that although both soft computing models could be used with
a high degree of accuracy, Artificial Neural Networks deliver better
results than Support Vector Machines.
Abstract: Titanium oxide films with different morphologies have for the first time been fabricated through hydrothermal reactions between a titanium substrate and iodine powder in water or ethanol. SEM revealed that the iodine-supported titanium (Ti-I2) surface shows different morphologies under variable treatment conditions. The mean surface roughness (Ra) increased in the different groups, lying in the range of 0.15 μm-0.42 μm; use of a surfactant helps increase the roughness of the film. Furthermore, electrochemical examinations showed that the Ti-I2 surface fabricated in an alcoholic medium has higher corrosion resistance than that fabricated in an aqueous medium.
Abstract: Within the realm of e-government, the development has moved towards testing new means for democratic decision-making, like e-panels, electronic discussion forums, and polls. Although such new developments seem promising, they are not problem-free, and the outcomes are seldom used in the subsequent formal political procedures. Nevertheless, process models offer promising potential when it comes to structuring and supporting transparency of decision processes in order to facilitate the integration of the public into decision-making procedures in a reasonable and manageable way. Based on real-life cases of urban planning processes in Sweden, we present an outline for an integrated framework for public decision making to: a) provide tools for citizens to organize discussion and create opinions; b) enable governments, authorities, and institutions to better analyse these opinions; and c) enable governments to account for this information in planning and societal decision making by employing a process model for structured public decision making.
Abstract: Several combinations of the preprocessing algorithms,
feature selection techniques and classifiers can be applied to the data
classification tasks. This study introduces a new, accurate
classifier composed of four components: Signal-to-Noise Ratio as a
feature selection technique, a support vector machine, a Bayesian
neural network, and AdaBoost as an ensemble algorithm. To verify the
effectiveness of the proposed classifier, seven well-known
classifiers are applied to four datasets. The experiments show
that using the suggested classifier enhances the classification rates for
all datasets.
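The Signal-to-Noise Ratio feature selection component can be sketched as follows. The two-class form SNR = (μ1 − μ0)/(σ1 + σ0) is the standard one; the function names and the epsilon guard are our own:

```python
def snr_scores(X, y):
    """Signal-to-Noise Ratio per feature for a two-class problem:
    SNR = (mean_class1 - mean_class0) / (std_class1 + std_class0).
    Features with large |SNR| discriminate best between the classes.
    """
    def stats(values):
        m = sum(values) / len(values)
        var = sum((v - m) ** 2 for v in values) / len(values)
        return m, var ** 0.5
    scores = []
    for f in range(len(X[0])):
        c0 = [row[f] for row, label in zip(X, y) if label == 0]
        c1 = [row[f] for row, label in zip(X, y) if label == 1]
        (m0, s0), (m1, s1) = stats(c0), stats(c1)
        scores.append((m1 - m0) / (s1 + s0 + 1e-12))  # epsilon avoids division by zero
    return scores

def top_k_features(X, y, k):
    """Indices of the k features with the largest |SNR|."""
    scores = snr_scores(X, y)
    return sorted(range(len(scores)), key=lambda f: abs(scores[f]), reverse=True)[:k]
```

The selected features would then feed the SVM, Bayesian neural network, and AdaBoost stages of the proposed classifier.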
Abstract: We fabricated an inverted-staggered etch-stopper
structure oxide-based TFT and investigated the characteristics of the
oxide TFT under 400 nm wavelength light illumination. When 400 nm
light was illuminated, the threshold voltage (Vth) decreased and
subthreshold slope (SS) increased at forward sweep, while Vth and SS
were unaltered under longer-wavelength light, such as 650 nm, 550 nm,
and 450 nm. At reverse sweep, the transfer curve barely changed even
under 400 nm light. Our experimental results support that
photo-induced hole carriers are captured by donor-like interface
traps, causing the decrease of Vth and the increase of SS. We found
that the interface trap density increases in proportion to the
photo-induced hole concentration in the active layer.
Abstract: This paper presents dynamic voltage collapse prediction on an actual power system using support vector machines.
Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in dynamic simulation output. Simulations were carried out on a practical 87 bus test system by considering load increase as the contingency. The data collected from the time domain simulation is then used as input to the SVM in which support vector regression is used as a predictor to determine the
dynamic voltage collapse indices of the power system. To reduce training time and improve accuracy of the SVM, the Kernel function type and Kernel parameter are considered. To verify the
effectiveness of the proposed SVM method, its performance is compared with a multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction than the MLPNN.
Abstract: Proteomics is one of the largest areas of research for
bioinformatics and medical science. An ambitious goal of proteomics
is to elucidate the structure, interactions and functions of all proteins
within cells and organisms. Predicting Protein-Protein Interaction
(PPI) is one of the crucial and decisive problems in current research.
Genomic data offer a great opportunity and at the same time a lot of
challenges for the identification of these interactions. Many methods
have already been proposed in this regard. In case of in-silico
identification, most of the methods require both positive and
negative examples of protein interaction, and the quality of these
examples is crucial for the final prediction accuracy. Positive
examples are relatively easy to obtain from well-known databases,
but the generation of negative examples is not a trivial task. Current PPI
identification methods generate negative examples based on some
assumptions, which are likely to affect their prediction accuracy.
Hence, if more reliable negative examples are used, the PPI prediction
methods may achieve even more accuracy. Focusing on this issue, a
graph based negative example generation method is proposed, which
is simple and more accurate than the existing approaches. An
interaction graph of the protein sequences is created. The basic
assumption is that the longer the shortest path between two protein
sequences in the interaction graph, the lower the likelihood of their
interaction. A well-established PPI detection algorithm is employed
with our negative examples, and in most cases it increases accuracy
by more than 10% compared with the negative-pair selection method
used in that paper.
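The generation scheme described above can be sketched directly: build the interaction graph, compute shortest paths by breadth-first search, and keep pairs whose distance meets a threshold. The threshold value and function names here are assumptions for illustration:

```python
from collections import deque

def shortest_path_lengths(edges, source):
    """BFS distances from `source` in an undirected interaction graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def negative_examples(edges, min_distance=3):
    """Pairs whose shortest path is at least `min_distance` (or that are
    disconnected) are taken as likely non-interacting."""
    proteins = sorted({p for e in edges for p in e})
    negatives = []
    for i, a in enumerate(proteins):
        dist = shortest_path_lengths(edges, a)
        for b in proteins[i + 1:]:
            if dist.get(b, float("inf")) >= min_distance:
                negatives.append((a, b))
    return negatives
```

Disconnected pairs get infinite distance and are therefore always candidates, which matches the intuition that proteins with no path between them are the least likely to interact.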
Abstract: To provide a better understanding of fair share policies supported by current production schedulers and their impact on scheduling performance, a relative fair share policy supported in four well-known production job schedulers is evaluated in this study. The experimental results show that fair share indeed keeps heavy-demand users from dominating the system resources. However, the detailed per-user performance analysis shows that some types of users may suffer unfairness under fair share, possibly due to the priority mechanisms used by current production schedulers. These users typically are not heavy-demand users, but they have a mixture of jobs that is not spread out over time.
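A relative fair-share policy of the kind evaluated here can be illustrated with a minimal decayed-usage priority function. This is a textbook-style sketch, not any of the four schedulers' actual formulas; the decay and base constants are assumptions:

```python
def fair_share_priority(usage_history, decay=0.5, base=1000.0):
    """Toy relative fair-share priority: decayed historical usage lowers
    a user's priority, so heavy recent users yield to light users.

    usage_history: per-interval resource usage, most recent interval
    first; older intervals are discounted geometrically by `decay`.
    """
    decayed = sum(u * decay ** i for i, u in enumerate(usage_history))
    return base / (1.0 + decayed)
```

Because older usage is discounted, a user whose jobs are not spread out over time briefly carries a very low priority after each burst, which is one way the per-user unfairness observed in the study can arise.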
Abstract: The objective of this paper is to apply the support vector machine (SVM) approach to the classification of cancerous and normal regions of prostate images. Three kinds of textural features are extracted and used for the analysis: parameters of the Gauss-Markov random field (GMRF), the correlation function, and relative entropy. Prostate images are acquired by a system consisting of a microscope, video camera, and digitizing board. Cross-validated classification over a database of 46 images is implemented to evaluate the performance. In SVM classification, sensitivity and specificity of 96.2% and 97.0%, respectively, are achieved for the 32x32 pixel block sized data, with an overall accuracy of 96.6%. Classification performance is compared with artificial neural network and k-nearest neighbor classifiers. Experimental results demonstrate that the SVM approach gives the best performance.
Abstract: Sol-gel immobilization of enzymes, which can considerably improve their properties, is now one of the most widely used techniques. By deposition of the entrapped lipase on a solid support, a new and improved biocatalyst was obtained, which can be used with excellent results in acylation reactions. In this paper, lipase B from Candida antarctica was doubly immobilized on different adsorbents. These biocatalysts were employed in the kinetic resolution of several aliphatic secondary alcohols in organic medium. High total recovery yields of enzymatic activity, up to 560%, were obtained. For all the studied alcohols the enantiomeric ratios E were over 200. The influence of the reaction medium was studied for the kinetic resolution of 2-pentanol.
Abstract: The Ministry of Defense (MoD) spends hundreds of
millions of dollars on software to support its infrastructure, operate
its weapons and provide command, control, communications,
computing, intelligence, surveillance, and reconnaissance (C4ISR)
functions. These and other new advanced systems share a common
critical component: information technology. The defense and
aerospace environment is continuously striving to keep up with
increasingly sophisticated Information Technology (IT) in order to
remain effective in today's dynamic and unpredictable threat
environment. This makes it one of the largest and fastest growing
expenses of Defense. Hundreds of millions of dollars are spent each
year on IT projects, but too many of those millions are wasted on
costly mistakes: systems that do not work properly, new components
that are not compatible with old ones, trendy new applications that
do not really satisfy defense needs, or funds lost through poorly
managed contracts.
This paper investigates and compiles the effective strategies that
aim to end the exasperation with the low returns and high cost of
Information Technology acquisition for defense; it tries to show how
to maximize value while reducing time and expenditure.
Abstract: This paper describes an experience of research,
development, and innovation applied in the naval industry at the
Science and Technology Corporation for the Development of the
Shipbuilding Industry in Colombia (COTECMAR), particularly through
processes of research, innovation, and technological development
based on theoretical models related to organizational knowledge
management, technology management, management of human talent, and
the integration of technology platforms. It seeks ways to facilitate
the initial establishment of environments rich in information,
knowledge, and content, supported by collaborative strategies for
dynamic mission processes, seeking further development of research,
development, and innovation in Naval Engineering in Colombia and
making it a distinct basis for the generation of knowledge assets at
COTECMAR.
The integration of information and communication technologies,
supported by emerging technologies (mobile and wireless technologies,
digital content via PDA, and content delivery services on the Web 2.0
and Web 3.0), as one of the strategic thrusts of any organization,
facilitates the redefinition of processes for managing information
and knowledge. It enables the redesign of workflows, the adaptation
of new forms of organization (preferably in networks), and support
for the creation of symbolic knowledge inside the organization, which
promotes the development of new skills, knowledge, and attitudes in
the knowledge worker.