Abstract: This paper describes an experience of research,
development and innovation in the naval industry at the Science and
Technology Corporation for the Development of the Shipbuilding and
Naval Industry in Colombia (COTECMAR), particularly through
processes of research, innovation and technological development
based on theoretical models of organizational knowledge
management, technology management, human talent management
and the integration of technology platforms. It seeks ways to
facilitate the initial establishment of environments rich in
information, knowledge and content, supporting collaborative
strategies built around the organization's dynamic core processes and
seeking further development in the context of research, development
and innovation in naval engineering in Colombia, making this a
distinctive basis for the generation of knowledge assets at
COTECMAR.
The integration of information and communication technologies,
supported by emerging technologies (mobile and wireless
technologies, digital content via PDA, and content delivery services
on Web 2.0 and Web 3.0), viewed as one of the strategic thrusts of
any organization, facilitates the redefinition of processes for
managing information and knowledge. It enables the redesign of
workflows, the adoption of new forms of organization (preferably
networked) and support for the creation of symbolic knowledge
inside the organization, promoting the development of new skills,
knowledge and attitudes in the knowledge worker.
Abstract: The set of all abelian subalgebras is computationally
obtained for any given finite-dimensional Lie algebra, starting from the nonzero brackets in its law. More concretely, an algorithm
is described and implemented to compute a basis for each nontrivial abelian subalgebra with the help of the symbolic computation package MAPLE. Finally, a brief computational study of this
implementation is also shown, considering both the computing time
and the memory used.
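The core test behind such an algorithm can be illustrated with a short sketch (in Python rather than MAPLE, and with an illustrative two-dimensional algebra rather than a law supplied as input): given the structure constants, a candidate subspace is an abelian subalgebra exactly when all pairwise brackets of its spanning vectors vanish, since the span is then automatically closed under the bracket.

```python
import numpy as np
from itertools import combinations

# Structure constants: c[i, j, k] encodes [e_i, e_j] = sum_k c[i, j, k] e_k.
# Illustrative law: the 2-dimensional non-abelian algebra with [e1, e2] = e1.
c = np.zeros((2, 2, 2))
c[0, 1, 0] = 1.0
c[1, 0, 0] = -1.0

def bracket(x, y, c):
    """Bracket of two vectors written in the chosen basis."""
    return np.einsum('i,j,ijk->k', x, y, c)

def is_abelian_subalgebra(basis, c, tol=1e-12):
    """All pairwise brackets of the spanning vectors must vanish; the span
    is then automatically closed under the bracket."""
    return all(np.allclose(bracket(x, y, c), 0.0, atol=tol)
               for x, y in combinations(basis, 2))

print(is_abelian_subalgebra([np.array([1.0, 0.0])], c))   # True: span{e1}
print(is_abelian_subalgebra([np.array([1.0, 0.0]),
                             np.array([0.0, 1.0])], c))   # False: whole algebra
```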
Abstract: Technological innovation capability (TIC) is
defined as a comprehensive set of characteristics of a firm that
facilitates and supports its technological innovation strategies.
An audit to evaluate the TICs of a firm may trigger
improvement in its future practices. Such an audit can be used
by the firm for self-assessment or third-party independent
assessment to identify problems of its capability status. This
paper attempts to develop such an auditing framework that
can help to determine the subtle links between innovation
capabilities and business performance, and to enable the
auditor to determine whether good practice is in place. The
seven TICs in this study include learning, R&D, resources
allocation, manufacturing, marketing, organization and
strategic planning capabilities. Empirical data was acquired
through a survey study of 200 manufacturing firms in the
Hong Kong/Pearl River Delta (HK/PRD) region. Structural
equation modelling was employed to examine the
relationships among TICs and various performance indicators:
sales performance, innovation performance, product
performance, and sales growth. The results revealed that
different TICs have different impacts on different
performance measures. Organization capability was found to
have the most influential impact. Hong Kong manufacturers
are now facing the challenge of high-mix-low-volume
customer orders. In order to cope with this change, good
capability in organizing different activities among various
departments is critical to the success of a company.
Abstract: The equivalence class subset algorithm is a powerful
tool for solving a wide variety of constraint satisfaction problems and
is based on the use of a decision function which has a very high but
not perfect accuracy. Perfect accuracy is not required in the decision
function as even a suboptimal solution contains valuable information
that can be used to help find an optimal solution. In the hardest
problems, the decision function can break down, leading to a
suboptimal solution with more equivalence classes than necessary,
which can be viewed as a mixture of good and bad decisions. By
choosing a subset of the decisions made in reaching a suboptimal
solution, an iterative technique can lead to an optimal solution
through a series of steadily improving suboptimal solutions. The
goal is to reach an optimal solution as quickly as
possible. Various techniques for choosing the decision subset are
evaluated.
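The abstract leaves the concrete problem open; as one illustration only, graph coloring can be cast in these terms, with color classes playing the role of equivalence classes and each vertex's color assignment being a decision. The sketch below is an assumption-laden illustration of the iterative subset idea, not the paper's algorithm: it keeps a random subset of the assignments from the best solution so far, rebuilds the rest greedily, and retains the result whenever it uses fewer classes.

```python
import random

def greedy_color(graph, order, fixed=None):
    """Greedy coloring; 'fixed' carries assignments kept from a prior solution."""
    colors = dict(fixed or {})
    for v in order:
        if v in colors:
            continue
        used = {colors[u] for u in graph[v] if u in colors}
        colors[v] = next(c for c in range(len(graph) + 1) if c not in used)
    return colors

def iterated_subset_coloring(graph, iters=200, keep=0.5, seed=0):
    """Keep a random subset of the best solution's decisions, rebuild the
    rest, and retain the candidate when it uses fewer color classes."""
    rng = random.Random(seed)
    verts = list(graph)
    best = greedy_color(graph, verts)
    for _ in range(iters):
        kept = {v: c for v, c in best.items() if rng.random() < keep}
        order = verts[:]
        rng.shuffle(order)
        cand = greedy_color(graph, order, fixed=kept)
        if len(set(cand.values())) < len(set(best.values())):
            best = cand
    return best

# A 5-cycle needs 3 colors; the kept/rebuilt loop cannot do worse than greedy.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
sol = iterated_subset_coloring(c5)
print(sol, len(set(sol.values())))
```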
Abstract: In this paper, multi-processor job shop scheduling problems are solved by a heuristic algorithm based on a hybrid of priority dispatching rules driven by an ant colony optimization algorithm. The objective is to minimize the makespan, i.e. the total completion time, and the simultaneous presence of various kinds of pheromones is allowed. By using a suitable hybrid of priority dispatching rules, the process of finding the best solution is improved. The ant colony optimization algorithm not only strengthens the proposed algorithm but also decreases the total working time by reducing setup times and adjusting the production line, so that similar jobs share the same production lines. Another advantage of this algorithm is that similar (though not identical) machines can be considered, so these machines are able to process a job with different processing and setup times. To evaluate this capability of the algorithm, a number of test problems are solved and the associated results are analyzed. The results show a significant decrease in throughput time. They also show that the algorithm is able to recognize the bottleneck machine and to schedule jobs in an efficient way.
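The abstract's core mechanism, as read here, is an ant choosing among dispatching rules by pheromone level; the sketch below is a minimal illustration of that mechanism only, with the rule set, update scheme and constants all being assumptions rather than the paper's choices.

```python
import random

RULES = ['SPT', 'LPT', 'FIFO', 'EDD']   # hypothetical dispatching rule set

def pick_rule(tau, rng):
    """Roulette-wheel selection of a rule, proportional to pheromone."""
    r = rng.uniform(0, sum(tau.values()))
    acc = 0.0
    for rule in RULES:
        acc += tau[rule]
        if r <= acc:
            return rule
    return RULES[-1]

def reinforce(tau, used_rules, makespan, rho=0.1):
    """Evaporate all trails, then deposit pheromone inversely proportional
    to the makespan on the rules this ant actually used."""
    for rule in RULES:
        tau[rule] *= (1.0 - rho)
    for rule in used_rules:
        tau[rule] += 1.0 / makespan

tau = {rule: 1.0 for rule in RULES}
rng = random.Random(1)
used = [pick_rule(tau, rng) for _ in range(5)]   # one decision per operation
reinforce(tau, used, makespan=42.0)
print(used, tau)
```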
Abstract: A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run length feature extraction is employed alongside principal components and independent components. A hyperspectral image of the Indiana site acquired by AVIRIS is used for the experiment. Of the original 220 bands, a subset of 120 bands is selected. The gray level run length matrix (GLRLM) is calculated for the first forty of the selected bands, and from the GLRLMs the run length features for individual pixels are calculated. Principal components are calculated for the next forty bands, and independent components for the remaining forty. As principal and independent components have the ability to represent the textural content of pixels, they are treated as features. Together, the run length features, principal components and independent components form the combined features used for classification. An SVM with a binary hierarchical tree is used to classify the hyperspectral image. Results are validated against ground truth and accuracies are calculated.
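As a concrete reference for the run length features, a minimal horizontal GLRLM computation might look as follows (a sketch only; gray-level quantization, the other scan directions, and the run length statistics derived from the matrix are omitted).

```python
import numpy as np

def glrlm(image, levels, max_run):
    """Horizontal gray level run length matrix: entry (g, r) counts runs of
    exactly r+1 consecutive pixels with gray level g (0-degree direction
    only; a full implementation also scans 45, 90 and 135 degrees)."""
    M = np.zeros((levels, max_run), dtype=int)
    for row in image:
        run, prev = 1, row[0]
        for v in row[1:]:
            if v == prev:
                run += 1
            else:
                M[prev, min(run, max_run) - 1] += 1
                run, prev = 1, v
        M[prev, min(run, max_run) - 1] += 1   # close the final run of the row
    return M

img = np.array([[0, 0, 1, 1, 1],
                [2, 2, 2, 2, 0]])
print(glrlm(img, levels=3, max_run=5))
```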
Abstract: The acoustic and articulatory properties of fricative speech sounds are being studied using magnetic resonance imaging (MRI) and acoustic recordings from a single subject. Area functions were derived from a complete set of axial and coronal MR slices using two different methods: the Mermelstein technique and the Blum transform. Area functions derived from the two techniques were shown to differ significantly in some cases. Such differences will lead to different acoustic predictions and it is important to know which is the more accurate. The vocal tract acoustic transfer function (VTTF) was derived from these area functions for each fricative and compared with measured speech signals for the same fricative and same subject. The VTTFs for /f/ in two vowel contexts and the corresponding acoustic spectra are derived here; the Blum transform appears to show a better match between prediction and measurement than the Mermelstein technique.
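For orientation, a transfer function can be computed from an area function by chaining lossless-tube section matrices; the sketch below does this for an illustrative uniform tube. This is a textbook chain-matrix formulation under an ideal open-end termination, not either of the paper's two area-function methods.

```python
import numpy as np

c = 3.5e4      # speed of sound in air, cm/s
rho_c = 40.0   # approximate characteristic impedance factor rho*c (CGS)

def vttf(areas, lengths, freqs):
    """|U_lips / U_glottis| for chained lossless cylindrical sections,
    assuming an ideal open-end (P = 0) termination at the lips."""
    out = []
    for f in freqs:
        k = 2 * np.pi * f / c
        K = np.eye(2, dtype=complex)
        for A, l in zip(areas, lengths):
            Z = rho_c / A
            K = K @ np.array([[np.cos(k * l), 1j * Z * np.sin(k * l)],
                              [1j * np.sin(k * l) / Z, np.cos(k * l)]])
        out.append(1.0 / K[1, 1])
    return np.abs(out)

freqs = np.linspace(100, 5000, 50)
H = vttf([4.0] * 10, [1.75] * 10, freqs)   # 17.5 cm uniform tube
print(freqs[H[:10].argmax()])              # first resonance near c/4L = 500 Hz
```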
Abstract: The effect of dry milling on the carbothermic
reduction of celestite was investigated. Mixtures of celestite
concentrate (98% SrSO4) and activated carbon (99% carbon) were
milled for 1 and 24 hours in a planetary ball mill. Un-milled and
milled mixtures and their products after carbothermic reduction were
studied by a combination of XRD and TGA/DTA experiments. The
thermogravimetric analyses and XRD results showed that by milling
celestite-carbon mixtures for one hour, the formation temperature of
strontium sulfide decreased from about 720°C (in the un-milled
sample) to about 600°C, and after 24 hours of milling it decreased to
530°C. It was concluded that milling induces increasingly thorough
mixing of the reactants, leading to reduction at lower temperatures.
Abstract: In this paper an ant colony optimization algorithm is
developed to solve the permutation flow shop scheduling problem. In
the permutation flow shop scheduling problem, which has been widely
studied in the literature, there are a set of m machines and a set of n
jobs. All the jobs are processed on all the machines and the sequence
of jobs being processed is the same on all the machines. Here this
problem is optimized considering two criteria, makespan and total
flow time. The results are then compared with those obtained by
previously developed algorithms. The comparison shows that the
proposed approach outperforms the other algorithms from the
literature.
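For reference, the two criteria can be evaluated for a candidate permutation as follows (a small self-contained sketch; the processing time matrix is illustrative).

```python
def criteria(perm, p):
    """Return (makespan, total flow time) for a permutation schedule.
    p[j][m] is the processing time of job j on machine m; 'perm' is the
    common job sequence on every machine."""
    n_machines = len(p[0])
    finish = [0.0] * n_machines      # completion time of the latest job per machine
    flow = 0.0
    for j in perm:
        prev = 0.0                   # completion of this job on the previous machine
        for m in range(n_machines):
            prev = max(prev, finish[m]) + p[j][m]
            finish[m] = prev
        flow += finish[-1]
    return finish[-1], flow

p = [[3, 2], [1, 4], [2, 2]]         # 3 jobs on 2 machines (illustrative times)
print(criteria([0, 1, 2], p))        # (11, 25)
```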
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but other kernel functions can be used as well. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and to improved overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
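One common reading of such a data transformation (an assumption here, not necessarily the paper's construction) is that a diagonally weighted Gaussian kernel can be obtained at the price of the standard one by pre-scaling the data once.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Standard Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
w = np.array([0.5, 2.0, 1.0])        # hypothetical feature weights

# Weighted kernel evaluated directly ...
direct = np.exp(-((X[:, None, :] - Y[None, :, :]) ** 2 * w).sum(-1))
# ... equals the standard kernel on data pre-scaled once by sqrt(w).
scaled = gaussian_kernel(X * np.sqrt(w), Y * np.sqrt(w))
print(np.allclose(direct, scaled))   # True
```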
Abstract: Since dealing with high dimensional data is
computationally complex and sometimes even intractable, several
feature reduction methods have recently been developed to reduce
the dimensionality of the data in order to simplify the analysis in
various applications such as text categorization, signal processing,
image retrieval and gene expression analysis. Among feature
reduction techniques, feature selection is one of the most popular
methods due to its preservation of the original features.
In this paper, we propose a new unsupervised feature selection
method which will remove redundant features from the original
feature space by the use of probability density functions of various
features. To show the effectiveness of the proposed method, popular
feature selection methods have been implemented and compared.
Experimental results on several datasets from the UCI repository
illustrate the effectiveness of our proposed method in comparison
with the other methods, in terms of both classification accuracy and
the number of selected features.
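As a rough illustration of comparing features through their probability density functions (the histogram estimator and the total variation distance below are assumptions, not the paper's choices), one can estimate each feature's pdf and greedily drop features whose pdf is too close to an already kept one.

```python
import numpy as np

def feature_pdfs(X, bins=20):
    """Histogram-estimated pdf of each (min-max normalized) feature."""
    pdfs = []
    for col in X.T:
        span = col.max() - col.min() + 1e-12
        hist, _ = np.histogram((col - col.min()) / span, bins=bins, range=(0, 1))
        pdfs.append(hist / hist.sum())
    return np.array(pdfs)

def select_features(X, threshold=0.05):
    """Greedily keep a feature only if its pdf differs from every kept
    feature's pdf by more than 'threshold' in total variation distance."""
    pdfs = feature_pdfs(X)
    kept = []
    for i, p in enumerate(pdfs):
        if all(0.5 * np.abs(p - pdfs[k]).sum() > threshold for k in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(size=500),
    rng.uniform(size=500),
    rng.exponential(size=500),
])
X = np.column_stack([X, X[:, 0] * 1.05 + 0.01])   # nearly redundant copy of feature 0
print(select_features(X))                          # expected: [0, 1, 2]; the copy is dropped
```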
Abstract: In this paper we propose a new method for
simultaneously generating multiple quantiles corresponding to given
probability levels from data streams and massive data sets. This
method provides a basis for the development of single-pass low-storage
quantile estimation algorithms, which differ in complexity, storage
requirement and accuracy. We demonstrate that such algorithms may
perform well even for heavy-tailed data.
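A minimal example of the single-pass, low-storage family such algorithms belong to is a stochastic-approximation estimator (a classical Robbins-Monro-style scheme, shown only for orientation; it is not the paper's method).

```python
import random

def streaming_quantiles(stream, probs, lr0=1.0):
    """One-pass estimates of several quantiles: each estimate for level p
    moves up by lr*p when the sample exceeds it, down by lr*(1-p) otherwise."""
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = [x] * len(probs)          # initialize at the first sample
            continue
        lr = lr0 / n ** 0.5               # decaying step size
        for i, p in enumerate(probs):
            q[i] += lr * (p - (x <= q[i]))
    return q

random.seed(0)
heavy_tailed = (random.paretovariate(2.0) for _ in range(100_000))
print(streaming_quantiles(heavy_tailed, [0.5, 0.9, 0.99]))
```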
Abstract: This paper proposes a novel methodology for enabling
debugging and tracing of production web applications without
affecting their normal flow and functionality. This method of
debugging enables developers and maintenance engineers to replace
a set of existing resources, such as images, server-side scripts and
cascading style sheets, with another set of resources per web session. The new
resources will only be active in the debug session and other sessions
will not be affected. This methodology will help developers in tracing
defects, especially those that appear only in production environments
and in exploring the behaviour of the system. A realization of the
proposed methodology has been implemented in Java.
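The paper's realization is in Java; purely to make the concept concrete, here is an analogous sketch as a Python WSGI middleware, where the cookie name, override directory and session check are all illustrative assumptions.

```python
import os

# Illustrative names: a real deployment would use a proper session store
# and sanitize the request path before touching the file system.
OVERRIDE_DIR = '/srv/app/debug_overrides'
DEBUG_COOKIE = 'debug_session=1'

class DebugOverrideMiddleware:
    """Serve replacement resources, but only to the debugging session."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        in_debug = DEBUG_COOKIE in environ.get('HTTP_COOKIE', '')
        path = os.path.normpath(environ.get('PATH_INFO', '')).lstrip('/')
        candidate = os.path.join(OVERRIDE_DIR, path)
        if in_debug and os.path.isfile(candidate):
            with open(candidate, 'rb') as f:
                body = f.read()
            start_response('200 OK', [('Content-Length', str(len(body)))])
            return [body]
        # Every other session falls through to the unchanged application.
        return self.app(environ, start_response)
```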
Abstract: This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. One possible way of reordering the words is to try all permutations, but for a sentence of N words the number of permutations is N!. The novelty of this method is the use of an efficient confusion-matrix technique for reordering the words, designed to reduce the search space of permuted sentences. The limitation of the search space is achieved using the statistical inference of N-grams. The results of this technique are very promising and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes a test set of TOEFL sentences was used, and the results show that more than 95% of the sentences can be repaired using the proposed method.
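Stripped of the paper's confusion-matrix pruning, the underlying scoring criterion can be sketched as follows (brute force over permutations, deliberately capped at short sentences; the toy trigram set is illustrative).

```python
from itertools import permutations

def trigram_hits(words, trigrams):
    """Count trigrams of the candidate sentence found in the model."""
    return sum(tuple(words[i:i + 3]) in trigrams for i in range(len(words) - 2))

def best_reordering(words, trigrams, max_len=8):
    """Brute-force reordering by trigram hits. The paper prunes this N!
    search with its confusion-matrix technique, not reproduced here."""
    if len(words) > max_len:
        return tuple(words)               # N! explodes; leave long input alone
    return max(permutations(words), key=lambda w: trigram_hits(w, trigrams))

lm = {('the', 'cat', 'sat'), ('cat', 'sat', 'down')}
print(best_reordering(['sat', 'the', 'cat', 'down'], lm))
```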
Abstract: In this paper an algorithm for fast wavelength calibration of Optical Spectrum Analyzers (OSAs) using low-power reference gas spectra is proposed. Existing OSAs need a reference spectrum with low noise for precise detection of the reference extreme values, and generating this spectrum requires costly hardware with high optical power. With the new wavelength calibration algorithm it is possible to use a noisy reference spectrum, and hardware costs can therefore be cut. The algorithm filters the reference spectrum and extracts the key information by segmenting it and finding the local minima and maxima. Afterwards, the slope and offset of a linear correction function that best matches the measured and theoretical spectra are found by correlating the measured minima with the stored ones. With this algorithm a reliable wavelength referencing of an OSA can be implemented on a microcontroller with a calculation time of less than one second.
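Once measured minima have been matched to their theoretical positions, the final fitting step reduces to a least-squares line; the wavelength values below are invented for illustration.

```python
import numpy as np

# Matched absorption-line minima: positions as measured by the OSA (nm)
# and the corresponding theoretical reference positions (nm).
measured = np.array([1510.2, 1523.9, 1538.1, 1551.7])
reference = np.array([1510.0, 1523.6, 1537.9, 1551.4])

# Fit lambda_true = a * lambda_measured + b by least squares.
a, b = np.polyfit(measured, reference, deg=1)
corrected = a * measured + b
print(a, b, np.abs(corrected - reference).max())   # residual after correction
```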
Abstract: This paper presents an alternative approach that uses an
artificial neural network to simulate the flood level dynamics in a
river basin. The algorithm was developed in a decision support
system environment in order to enable users to process the data. The
decision support system is found to be useful due to its interactive
nature, flexibility in approach and evolving graphical feature and can
be adopted for any similar situation to predict the flood level. The
main data processing includes the gauging station selection, input
generation, lead-time selection/generation, and length of prediction.
This program enables users to process the flood level data, to
train/test the model using various inputs and to visualize results. The
program code consists of a set of files, which can as well be modified
to match other purposes. The results indicate that the decision
support system applied to the flood level gives encouraging results
for the river basin under examination. The comparison of the model
predictions with the observed data was satisfactory, with the model
able to forecast the flood level up to 5 hours in advance with
reasonable prediction accuracy. Finally, this program may also serve
as a tool for real-time flood monitoring and process control.
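The forecasting core (lagged levels in, the level several steps ahead out) can be sketched with a small feed-forward network; everything here, from the synthetic stage series to the layer size, is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(size=2000)) + 50.0   # synthetic river stage series
lags, lead = 6, 5                                  # input window and lead time (steps)

X = np.array([level[i - lags:i] for i in range(lags, len(level) - lead)])
y = level[lags + lead:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:-200], y[:-200])                      # train on the early part
print(model.score(X[-200:], y[-200:]))             # hold-out R^2 on the rest
```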
Abstract: Graph partitioning is an NP-hard problem with multiple
conflicting objectives. The graph partitioning should minimize the
inter-partition relationship while maximizing the intra-partition
relationship. Furthermore, the partition load should be evenly
distributed over the respective partitions, making this a
multi-objective optimization problem (MOO). One of the approaches
to MOO is Pareto optimization, which is used in this paper. The
methods proposed in this paper to improve performance are injecting
the best solutions of previous runs into the first generation of
subsequent runs and storing the non-dominated sets of previous
generations to combine with later generations' non-dominated sets.
These improvements prevent the GA from getting stuck in local
optima and increase the probability of finding more optimal
solutions. Finally, a simulation study is carried out to investigate
the effectiveness of the proposed algorithm. The simulation results
confirm the effectiveness of the proposed method.
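The archive-merging step the abstract describes rests on Pareto dominance; a minimal sketch, with made-up objective pairs such as cut size and load imbalance (both minimized), is shown below.

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Merge the stored front of the previous generation with the current one,
# as the abstract suggests, keeping only non-dominated solutions.
prev_front = [(10, 0.30), (12, 0.10)]             # (cut size, load imbalance)
curr_front = [(11, 0.20), (12, 0.20), (13, 0.05)]
print(non_dominated(prev_front + curr_front))      # (12, 0.20) is dominated
```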
Abstract: An ethical mandate of the social work profession in the
United States is that BSW and MSW graduates are sufficiently
prepared to both understand diverse cultural values and beliefs and
offer services that are culturally sensitive and relevant to clients. This
skill set is particularly important for social workers in the 21st Century,
given the increasing globalization of the U.S. and the world. The purpose
of this paper is to outline a pedagogical model for teaching cultural
competency that resulted in a significant increase in cultural
competency for MSW graduates at Western Kentucky University
(WKU). More specifically, this model is predicated on five specific
culturally sensitive principles and activities that were found to be
highly effective in conveying culturally relevant knowledge and skills
to MSW students at WKU. Future studies can assess the effectiveness
of these principles in other MSW programs across the U.S. and abroad.
Abstract: B2E portals represent a new class of web-based
information technologies which many organisations have introduced
in recent years to stay in touch with their distributed workforces and
enable them to perform value-added activities for organisations.
However, actual usage of these emerging systems (measured using
suitable instruments) has not been reported in the contemporary
scholarly literature. We argue that many of the instruments to
measure usage of various types of IT-enabled information systems
are not directly applicable for B2E portals because they were
developed for the context of traditional mainframe and PC-based
information systems. It is therefore important to develop a new
instrument for web-based portal technologies aimed at employees. In
this article, we report on the development and initial qualitative
evaluation of an instrument that seeks to operationalise a set of
independent factors affecting the usage of portals by employees. The
proposed instrument is useful to IT/e-commerce researchers and
practitioners alike as it enhances their confidence in predicting
employee usage of portals in organisations.
Abstract: FlexRay, as a communication protocol for automotive
control systems, is developed to fulfill the increasing demand on the
electronic control units for implementing systems with higher safety
and more comfort. In this work, we study the impact of
radiation-induced soft errors on a FlexRay-based steer-by-wire
system. We injected soft errors into the general purpose register set
of FlexRay nodes to identify the most critical registers and the
failure modes of the steer-by-wire system, and to measure the
probability distribution of failure modes when an error occurs in the
register file.
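The injection step itself can be pictured as a single bit flip in a randomly chosen register; the following is a generic sketch of soft error injection into a simulated register file, with the register width and names being assumptions.

```python
import random

def inject_soft_error(registers, rng, width=32):
    """Flip one random bit in one randomly chosen register, modelling a
    radiation-induced soft error in the register file."""
    name = rng.choice(sorted(registers))
    bit = rng.randrange(width)
    registers[name] ^= 1 << bit
    return name, bit

rng = random.Random(0)
regs = {f'r{i}': 0 for i in range(16)}   # hypothetical general purpose registers
name, bit = inject_soft_error(regs, rng)
print(name, bit, hex(regs[name]))
```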