Abstract: This paper describes an experience of research, development and innovation applied to the naval industry at the Science and Technology Corporation for the Development of the Naval Shipbuilding Industry in Colombia (COTECMAR), particularly through processes of research, innovation and technological development based on theoretical models of organizational knowledge management, technology management, human talent management and the integration of technology platforms. It seeks ways to facilitate the initial establishment of environments rich in information, knowledge and content, supported by collaborative strategies on dynamic mission processes, pursuing further development in the context of research, development and innovation in naval engineering in Colombia and making this a distinctive basis for the generation of knowledge assets at COTECMAR.
The integration of information and communication technologies, supported by emerging technologies (mobile and wireless technologies, digital content via PDA, and content delivery services on Web 2.0 and Web 3.0), viewed as one of the strategic thrusts of any organization, facilitates the redefinition of processes for managing information and knowledge. It enables the redesign of workflows and the adoption of new forms of organization, preferably in networks, and supports the creation of symbolic knowledge inside the organization, promoting the development of new skills, knowledge and attitudes in the knowledge worker.
Abstract: Global temperature has increased by about 0.5 °C over the past century, and increasing temperature leads to a loss or decrease of soil organic matter (SOM). Soil organic matter in many tropical soils is less stable than that of temperate soils and is easily affected by climate change; the conservation of soil organic matter is therefore an urgent issue. This paper presents the effect of different doses (5%, 15%) of Ca-type zeolite, applied in conjunction with organic manure to soil samples from the Philippines, Paraguay and Japan, on the decomposition resistance of soil organic matter under high temperature. Results showed that the C/N ratio of the soil remained constant or increased slightly. The percentage of humic acid (PQ) extracted with Na4P2O7 increased, while the percentage of free humus (fH) decreased after incubation. A larger relative color intensity (RF) value and a lower color coefficient (ΔlogK) value with increasing zeolite rates indicate a higher degree of humification. The aromatic condensation of humic acid (HA) increased after incubation, as indicated by the decrease of the H/C and O/C ratios of HA. These findings indicate that the use of zeolite could be beneficial with respect to SOM conservation under global warming conditions.
Abstract: A modified Saleh-Valenzuela channel model has been adapted for Ultra Wideband (UWB) systems. The proposed realistic channel is assessed by its fading-amplitude distribution and its times of arrival. Furthermore, the propagation characteristics are divided into four channel models, CM 1 to CM 4, each differentiated in terms of cluster arrival rate, ray arrival rate within each cluster, and the respective constant decay rates. This paper describes multiband OFDM system performance simulated under these multipath conditions. The simulation work described in this paper is based on the WiMedia ECMA-368 standard, which has been deployed for the practical implementation of low-cost, low-power UWB devices.
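The cluster/ray structure that distinguishes the four channel models can be sketched as a minimal Saleh-Valenzuela tap generator. The parameter values and the simple Gaussian amplitude model below are illustrative assumptions, not the exact ECMA-368 parameterization:

```python
import math
import random

def sv_channel(Lambda=0.0233, lam=2.5, Gamma=7.1, gamma=4.3,
               n_clusters=4, rays_per_cluster=10, seed=1):
    """Generate one Saleh-Valenzuela impulse-response realization.

    Cluster and ray arrival times are Poisson processes (exponential
    inter-arrival gaps with rates Lambda and lam, in 1/ns); the mean
    ray power decays exponentially with the cluster (Gamma) and ray
    (gamma) decay constants. The defaults are in the style of the
    IEEE 802.15.3a CM1 set but are illustrative only.
    """
    rng = random.Random(seed)
    taps = []                      # list of (delay_ns, amplitude)
    T = 0.0                        # running cluster arrival time
    for _ in range(n_clusters):
        T += rng.expovariate(Lambda)
        t = 0.0                    # ray delay within the cluster
        for _ in range(rays_per_cluster):
            t += rng.expovariate(lam)
            mean_power = math.exp(-T / Gamma) * math.exp(-t / gamma)
            # zero-mean Gaussian amplitude scaled to the mean ray power
            amp = rng.choice([-1, 1]) * rng.gauss(0, math.sqrt(mean_power / 2))
            taps.append((T + t, amp))
    return sorted(taps)
```

Varying the four rate/decay parameters (Lambda, lam, Gamma, gamma) is what differentiates channel models of the CM 1 to CM 4 type.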
Abstract: The purpose of this research is to develop and apply the RSCMAC to enhance the dynamic accuracy of the Global Positioning System (GPS). GPS devices provide accurate positioning, speed detection and a highly precise time standard over more than 98% of the earth's surface. The overall operation of the Global Positioning System involves 24 GPS satellites in space; signal transmission using two carrier frequencies (Link 1 and Link 2) and two sets of pseudorandom ranging codes (C/A code and P code); and ground monitoring stations or client GPS receivers. With only four satellites, the client's position and elevation can be determined rapidly, and the more satellites that are receivable, the more accurately the position can be decoded. Currently, the standard positioning accuracy of simplified GPS receivers has greatly increased, but owing to satellite clock error, tropospheric delay and ionospheric delay, current measurement accuracy is on the order of 5 to 15 m. To increase dynamic GPS positioning accuracy, most researchers mainly use an inertial navigation system (INS) or install other sensors or maps for assistance. This research exploits the RSCMAC advantages of fast learning, guaranteed learning convergence, and the capability to solve time-related dynamic system problems, together with a static positioning calibration structure, to improve GPS dynamic accuracy. The increase in GPS dynamic positioning accuracy is achieved by using the RSCMAC with GPS receivers to collect dynamic error data for error prediction, and then using the predicted error to correct the GPS dynamic positioning data. The ultimate purpose of this research is to improve the dynamic positioning error of inexpensive GPS receivers; the economic benefits are enhanced as the accuracy increases.
Abstract: A new blind symbol-by-symbol equalizer is proposed. The operation of the proposed equalizer is based on the geometric properties of the two-dimensional data constellation. An unsupervised clustering technique is used to locate the clusters formed by the received data, and the symmetry properties of the constellation are subsequently exploited to label the clusters. Following this step, the received data are compared to the clusters and decisions are made on a symbol-by-symbol basis by assigning to each data point the label of the nearest cluster. The operation of the equalizer is investigated in both linear and nonlinear channels. The performance of the proposed equalizer is compared to that of a CMA-based blind equalizer.
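The symbol-by-symbol decision step described above can be sketched as a nearest-cluster rule. The clustering and labeling stages are assumed to have already produced the labeled centers; the function name and data layout are illustrative:

```python
def nearest_cluster_decision(samples, centers):
    """Assign each received sample the label of the nearest cluster center.

    `samples` are complex received data points; `centers` maps a symbol
    label to the complex cluster center located by the unsupervised
    clustering step. Decisions are made one symbol at a time.
    """
    decisions = []
    for z in samples:
        label = min(centers, key=lambda s: abs(z - centers[s]))
        decisions.append(label)
    return decisions

# Example with a 4-QAM constellation (illustrative labels):
qam4 = {"00": 1 + 1j, "01": -1 + 1j, "10": 1 - 1j, "11": -1 - 1j}
```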
Abstract: Online trading is an alternative to conventional shopping, in which people trade goods that are either new or pre-owned. However, there are times when a user cannot find the desired items online, because the items may not have been posted yet, which ends the search. A conventional search mechanism works only by matching search criteria (requirements) with data already available in a particular database. This research aims to match current search requirements with future postings, thereby introducing the time factor into the conventional search method. A Car Matching Alert System (CMAS) prototype was developed to test the matching algorithm. When a buyer's search returns no result, the system saves the search, and the buyer is alerted when a match is found among future postings. The algorithm developed is useful, as it can also be applied in other search contexts.
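The save-then-alert mechanism can be sketched as matching each new posting against the stored searches. The field names and the `max_price` convention are illustrative assumptions, not the CMAS schema:

```python
def match_posting(saved_searches, posting):
    """Return the buyers whose saved criteria match a newly arrived posting.

    `saved_searches` maps a buyer id to a dict of attribute -> required
    value (with an optional 'max_price' upper bound); `posting` is a dict
    describing the new item. A posting matches when every stated
    requirement is satisfied, so each listed buyer would receive an alert.
    """
    alerts = []
    for buyer, criteria in saved_searches.items():
        ok = all(
            posting.get("price", 0) <= v if k == "max_price"
            else posting.get(k) == v
            for k, v in criteria.items()
        )
        if ok:
            alerts.append(buyer)
    return alerts
```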
Abstract: The transport environment in the world's cities is complex. The analysis and prevention of environmental problems require an accurate calculation of hazardous substance concentrations at each point of the investigated area. In the turbulent atmosphere of a city, the well-known methods of mathematical statistics cannot be applied to these tasks with a satisfactory level of accuracy; the apparatus of mathematical physics is therefore more appropriate for this class of problems. In such models, because of the difficulty involved, the influence of an uneven land surface on the flow of air masses in the turbulent atmosphere of the city is, as a rule, not taken into account. In this paper the influence of the surface roughness, which can be quite large, is shown mathematically. The analysis of this problem under certain conditions identified the possibility of areas appearing in the atmosphere with pressure tending to infinity, the so-called "wall effect".
Abstract: Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an efficient manner, and LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. The so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relation. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside a polyhedron. The Mathematica program DiophantineGF.m is then run; it calculates the generating function from which the number of solutions to the system of Diophantine equalities can be found, which in turn gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is also given. Keywords: generating function, lattice points in a polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.
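The quantity extracted from the generating function, the number of lattice points satisfying the linear constraints, can be checked by direct enumeration for small instances. This brute-force sketch is only a numerical check on an assumed constraint format; it is not the Omega calculus or DiophantineGF.m itself:

```python
from itertools import product

def lattice_point_count(constraints, bounds):
    """Count integer points satisfying a list of linear constraints.

    Each constraint is (coeffs, rhs), meaning sum(c * x) <= rhs, and
    `bounds` gives an inclusive (lo, hi) box per variable. The count of
    feasible lattice points inside the polyhedron is what the generating
    function encodes analytically.
    """
    count = 0
    for point in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        if all(sum(c * x for c, x in zip(coeffs, point)) <= rhs
               for coeffs, rhs in constraints):
            count += 1
    return count
```

For example, the triangle x >= 0, y >= 0, x + y <= 2 contains six lattice points.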
Abstract: The present work is concerned with the effect of turning process parameters (cutting speed, feed rate, and depth of cut) and the distance from the center of the workpiece, as input variables, on the chip micro-hardness as the response or output. Three experiments were conducted to investigate the chip micro-hardness behavior at workpiece diameters of 30 mm, 40 mm, and 50 mm. Response surface methodology (RSM) is used to determine and present the cause-and-effect relationship between the true mean response and the input control variables influencing the response, as a two- or three-dimensional hyper-surface. RSM has been used to design a three-factor, five-level central composite rotatable design in order to construct statistical models capable of accurate prediction of responses. The results obtained showed that the application of RSM can predict the effect of machining parameters on chip micro-hardness, and that five-level factorial designs can easily be employed to develop statistical models predicting chip micro-hardness from the controllable machining parameters. The results also showed that the combined effect of cutting speed at its lower level, feed rate and depth of cut at their higher values, and a larger workpiece diameter can result in increased chip micro-hardness.
Abstract: The available algorithms for blind estimation, namely the constant modulus algorithm (CMA) and the decision-directed algorithm (DDA/DFE), suffer from the problem of convergence to local minima. Moreover, if the channel drifts considerably, any DDA loses track of the channel, so their usage is limited in varying channel conditions. The primary limitation in such cases is the requirement of overhead bits in the transmit framework, which leads to wasteful use of the bandwidth. Such arrangements also fail to use channel state information (CSI), which is an important aid in improving the quality of reception. In this work, the main objective is to reduce the overhead imposed by pilot symbols, which in effect reduces system throughput. We also formulate an arrangement based on certain dynamic artificial neural network (ANN) topologies which not only lowers the overhead but also facilitates the use of CSI. A 2×2 multiple-input multiple-output (MIMO) system is simulated and the performance variation with different channel estimation schemes is evaluated. A new semi-blind approach based on a dynamic ANN is proposed for channel tracking in varying channel conditions, and its performance is compared with perfectly known CSI and least squares (LS) based estimation.
Abstract: In this paper, we propose a modified version of the Constant Modulus Algorithm (CMA) tailored for the blind Decision Feedback Equalizer (DFE) of first-order Markovian time-varying channels. The proposed NonStationary CMA (NSCMA) is designed so that it explicitly takes into account the Markovian structure of the channel nonstationarity. Hence, unlike the classical CMA, the NSCMA is not blind with respect to the channel time variations. This greatly helps the equalizer in the case of realistic channels and avoids frequent transmission of training sequences.
This paper develops a theoretical analysis of the steady-state performance of the CMA and the NSCMA for DFEs in a time-varying context, and approximate expressions for the mean square errors are derived. We prove that in the steady state the NSCMA exhibits better performance than the classical CMA. These new results are confirmed by simulation.
Through an experimental study, we demonstrate that the Bit Error Rate (BER) is reduced by the NSCMA-DFE, and that the improvement in BER achieved by the NSCMA-DFE grows with the severity of the channel time variations.
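For reference, the classical CMA that the NSCMA modifies updates the equalizer taps by a stochastic gradient step on the constant-modulus cost E[(|y|^2 - R2)^2]. The sketch below shows that baseline update only; the Markovian channel-variation model that distinguishes the NSCMA is not represented:

```python
def cma_update(w, x, mu=0.01, R2=1.0):
    """One stochastic-gradient step of the classical CMA.

    `w` are the complex equalizer taps and `x` the regressor (the most
    recent received samples, same length as w). The instantaneous cost
    is (|y|^2 - R2)^2, whose gradient with respect to the taps gives
    the update below.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))          # equalizer output
    e = (abs(y) ** 2 - R2) * y                        # CMA error term
    return [wi - mu * e * xi.conjugate() for wi, xi in zip(w, x)]
```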
Abstract: Most of the commonly used blind equalization algorithms are based on the minimization of a nonconvex and nonlinear cost function, and a neural network gives a smaller residual error than a linear structure. The efficacy of complex-valued feedforward neural networks for the blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present two neural network models for the blind equalization of time-varying channels, for M-ary QAM and PSK signals. Complex-valued activation functions suitable for these signal constellations in a time-varying environment are introduced, and learning algorithms based on the CMA cost function are derived. The improved performance of the proposed models is confirmed through computer simulations.
Abstract: The dynamic or complex modulus test is considered to be a mechanistically based laboratory test to reliably characterize the strength and load resistance of Hot-Mix Asphalt (HMA) mixes used in the construction of roads. The most common observation is that the data collected from these tests are often noisy and somewhat non-sinusoidal, which hampers accurate analysis of the data to obtain engineering insight. The goal of the work presented in this paper is to develop and compare automated evolutionary computational techniques for filtering test noise from the data collected in the HMA complex modulus test. The results showed that the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) approach is computationally efficient for filtering data obtained from the HMA complex modulus test.
Abstract: Schema matching plays a key role in many different applications, such as schema integration, data integration, data warehousing, data transformation, e-commerce, peer-to-peer data management, ontology matching and integration, the semantic Web, and semantic query processing. Manual matching is expensive and error-prone, so it is important to develop techniques to automate the schema matching process. In this paper, we present a solution to the automated XML schema matching problem which produces semantic mappings between corresponding schema elements of given source and target schemas. This solution contributes to solving the automated XML schema matching problem more comprehensively and efficiently. Our solution is based on combining the linguistic similarity, data-type compatibility and structural similarity of XML schema elements. After describing our solution, we present experimental results that demonstrate the effectiveness of this approach.
Abstract: In this paper an algorithm is used to detect the color defects of ceramic tiles. First, the image of a normal tile is clustered using the Genetic C-means Clustering Algorithm (GCMA), which yields the best cluster centers. C-means is a common clustering algorithm which optimizes an objective function based on a distance measure between data points and the cluster centers in the data space; here the objective function describes the mean square error. After the best centers are found, each pixel of the image is assigned to the cluster with the closest center. Then the maximum error of each cluster is computed: for each cluster, the maximum error is the largest distance between its center and the pixels which belong to it. After the errors are computed, all the pixels of the defective tile image are clustered based on the centers obtained from the normal tile image in the previous stage. Pixels whose distance from their cluster center exceeds the maximum error of that cluster are considered defective pixels.
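The final detection rule can be sketched as follows, using scalar pixel intensities for simplicity (the clustering itself, i.e. the GCMA stage, is assumed to have already produced the centers and per-cluster maximum errors from the normal tile):

```python
def find_defects(pixels, centers, max_errors):
    """Flag pixels farther from their nearest center than that cluster's radius.

    `centers` are cluster-center values learned from a normal tile and
    `max_errors[i]` is the largest distance observed in cluster i on the
    normal tile. Pixels of the inspected tile whose distance to their
    nearest center exceeds that cluster's maximum error are reported as
    defect indices.
    """
    defects = []
    for idx, p in enumerate(pixels):
        i = min(range(len(centers)), key=lambda k: abs(p - centers[k]))
        if abs(p - centers[i]) > max_errors[i]:
            defects.append(idx)
    return defects
```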
Abstract: A frictionless contact problem for a two-layer orthotropic elastic medium loaded through a rigid flat stamp is considered. It is assumed that tensile tractions are not allowed and that only compressive tractions can be transmitted across the interface. The effect of gravity is taken into consideration in the solution. If the external load on the rigid stamp is less than or equal to a critical value, continuous contact between the layers is maintained. The problem is expressed in terms of a singular integral equation by using the theory of elasticity and Fourier transforms. Numerical results for the initial separation point, the critical separation load and the contact stress distribution are presented.
Abstract: In this study, a robust intelligent backstepping tracking control (RIBTC) system, combining adaptive output recurrent cerebellar model articulation control (AORCMAC) with an H∞ control technique, is proposed for the real-time control of wheeled inverted pendulums (WIPs) whose exact system dynamics are unknown. A robust H∞ controller is designed to attenuate the effect of the residual approximation errors and external disturbances to a desired attenuation level. The experimental results indicate that the WIPs can stand upright stably under the proposed RIBTC.
Abstract: The aim of this article is to describe the utility of a novel simulation approach, the convolution method, for predicting the blood concentration of a drug from the dissolution data of salbutamol sulphate microparticulate formulations with different release patterns (1:1, 1:2 and 1:3 drug:polymer). USP dissolution apparatus II (2007) with 900 ml of double-distilled water stirred at 50 rpm was employed for the dissolution analysis. From the dissolution data, the blood drug concentration was determined, and in turn the predicted blood drug concentration data were used to calculate the pharmacokinetic parameters Cmax, Tmax, and AUC. Convolution is a good biowaiver technique; however, realizing its full utility requires its application under conditions where biorelevant dissolution media are used.
Abstract: This study compares three metaheuristics for minimizing the makespan (Cmax) of the Hybrid Flow Shop (HFS) scheduling problem with parallel machines, which is known to be NP-hard. This study applies three improvement heuristic searches: Genetic Algorithm (GA), Simulated Annealing (SA), and Tabu Search (TS). SA and TS are known as deterministic improvement heuristic searches, while GA is known as a stochastic improvement heuristic search. A comprehensive comparison of these three improvement heuristic searches is presented. The results of the experiments conducted show that TS is effective and efficient for solving HFS scheduling problems.
Abstract: In this research, an aerobic composting method is studied to reuse organic waste from a rubber factory as soil fertilizer and to study the effect of a cellulolytic microbial activator (CMA) in the composting of rubber factory waste. The performance of the composting process was monitored as a function of the carbon and organic matter decomposition rate, temperature and moisture content. The results indicate that the rubber factory waste is better composted with water hyacinth and sludge than composted alone. In addition, the CMA is more effective when mixed with the rubber factory waste, water hyacinth and sludge, since a good fertilizer is achieved. When CMA is added to rubber factory waste composted alone, the finished product does not meet the standard for fertilizer, especially the C/N ratio.
Finally, the finished products of composting rubber factory waste with water hyacinth and sludge (both with and without CMA) can be an environmentally friendly alternative for solving the disposal problems of rubber factory waste, since the C/N ratio, pH, moisture content, temperature and nutrients of the finished products are acceptable for agricultural use.