Abstract: Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the remainder for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the maintained feature points are insufficiently accurate for further modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly reducing the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, which further reduces the computational cost. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated in a simple video stabilizer based on our proposed algorithm.
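A minimal sketch of this pipeline in Python with OpenCV, assuming grayscale frames; the maintained-set size threshold, detector parameters, and the simple residual studentization are illustrative assumptions, not the paper's exact settings:

```python
import cv2
import numpy as np

MIN_POINTS = 40  # assumed threshold for "insufficiently accurate for modeling"

def estimate_motion(prev_pts, next_pts, t_crit=2.0):
    """Fit the simplified affine model x' = a*x - b*y + tx, y' = b*x + a*y + ty
    by least squares, repeatedly dropping outliers flagged by (approximately)
    studentized residuals until none remain."""
    while True:
        x, y = prev_pts[:, 0], prev_pts[:, 1]
        A = np.empty((2 * len(x), 4))
        A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
        A[1::2] = np.column_stack([y, x, np.zeros_like(x), np.ones_like(x)])
        b = next_pts.reshape(-1)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        resid = (b - A @ params).reshape(-1, 2)
        t = np.linalg.norm(resid, axis=1) / (resid.std() + 1e-12)  # crude studentization
        keep = t < t_crit
        if keep.all() or keep.sum() < 4:
            return params, prev_pts, next_pts
        prev_pts, next_pts = prev_pts[keep], next_pts[keep]

def motion_models(frames):
    """Yield one motion model per frame pair; inliers are carried forward as the
    maintained set, so corner detection runs only when too few points survive."""
    frames = iter(frames)
    prev, maintained = next(frames), None
    for frame in frames:
        if maintained is None or len(maintained) < MIN_POINTS:
            maintained = cv2.goodFeaturesToTrack(
                prev, maxCorners=200, qualityLevel=0.01, minDistance=8)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, maintained, None)
        ok = status.ravel() == 1
        model, p0, p1 = estimate_motion(
            maintained[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2))
        maintained = p1.reshape(-1, 1, 2).astype(np.float32)  # reuse inliers
        prev = frame
        yield model
```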
Abstract: Xanthan gum is one of the major commercial biopolymers. Due to its excellent rheological properties, xanthan gum is used in many applications, mainly in the food industry. Commercial production of xanthan gum uses glucose as the carbon substrate; consequently, the price of xanthan production is high. One way to decrease the price of xanthan is to use cheaper substrates such as agricultural wastes. Iran is one of the biggest date-producing countries; however, approximately 50% of the date production is wasted annually. The goal of this study is to produce xanthan gum from waste dates using Xanthomonas campestris PTCC1473 by submerged fermentation. In this study, the effects of three variables, namely phosphorus amount, nitrogen amount, and agitation rate, were studied at three levels using response surface methodology (RSM). Statistical analysis with the Design-Expert 7.0.0 software showed that xanthan production increased with increasing phosphorus level, while a low level of nitrogen led to higher xanthan production. Increasing agitation also had a positive influence on the xanthan amount. The statistical model identified the optimum conditions for xanthan production as a nitrogen amount of 3.15 g/l, a phosphorus amount of 5.03 g/l, and an agitation rate of 394.8 rpm. To validate the model, experiments were carried out under these optimum conditions. The mean result for xanthan was 6.72±0.26, which was close to the value predicted by RSM.
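As a hedged illustration of the response-surface step, the sketch below fits a second-order RSM model to a three-level, three-factor design with NumPy and locates its optimum by grid search; all response values are invented placeholders, not the study's measurements:

```python
import numpy as np
from itertools import product

# Hypothetical coded factor levels (-1, 0, +1): nitrogen, phosphorus, agitation
X = np.array(list(product([-1.0, 0.0, 1.0], repeat=3)))   # full 3^3 design
rng = np.random.default_rng(0)
# Placeholder responses -- NOT the study's data
y = (5.0 - 0.5 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2]
     - 0.4 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, len(X)))

def design_matrix(X):
    """Second-order RSM model: intercept, linear, two-way interaction, quadratic."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Locate the optimum of the fitted surface with a coarse grid search
grid = np.array(list(product(np.linspace(-1, 1, 41), repeat=3)))
best = grid[(design_matrix(grid) @ beta).argmax()]
print("coded optimum:", best)
```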
Abstract: Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the database of the Federal Bureau of Investigation (FBI), which contains more than 70 million fingerprints. Compression of this database is very important because of its high volume. The performance of existing image coding standards generally degrades at low bit-rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and a higher compression ratio through the multiwavelet transform and embedded coding of multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. A comparison of the best known multiwavelets is made to the best known scalar wavelets. Both quantitative and qualitative measures of performance are examined for fingerprints.
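The sketch below illustrates only the transform-and-threshold stage of such a coder, using PyWavelets' scalar wavelets as a stand-in; it is neither a true multiwavelet transform nor the SPIHT coder, and the kept-coefficient ratio is an assumed parameter:

```python
import numpy as np
import pywt

def compress(img, wavelet="db4", levels=4, keep_ratio=0.05):
    """Decompose, keep only the largest `keep_ratio` fraction of coefficients
    (a crude stand-in for embedded SPIHT coding), and reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr[np.abs(arr) < thresh] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

def psnr(orig, recon):
    """Quantitative quality measure commonly used for such comparisons."""
    recon = recon[:orig.shape[0], :orig.shape[1]]
    mse = np.mean((orig.astype(float) - recon) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```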
Abstract: Early detection of lung cancer through chest radiography is a widely used method due to its relatively affordable cost. In this paper, an approach to improve lung nodule visualization on chest radiographs is presented. The approach uses a linear-phase high-frequency emphasis filter for digital filtering and histogram equalization for contrast enhancement. Results obtained indicate that a filtered image can reveal sharper edges and provide more detail. In addition, contrast enhancement offers a way to further enhance the global (or local) visualization by equalizing the histogram of the pixel values within the whole image (or a region of interest). The work aims to improve lung nodule visualization on chest radiographs to aid detection of lung cancer, which is currently the leading cause of cancer deaths worldwide.
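A minimal NumPy sketch of the two stages, assuming an 8-bit grayscale radiograph; the Gaussian high-pass cutoff and the emphasis gains are assumed values, not the paper's:

```python
import numpy as np

def high_freq_emphasis(img, d0=30.0, a=0.5, b=1.5):
    """Frequency-domain high-frequency emphasis: H = a + b * (Gaussian high-pass).
    The filter is real and symmetric, hence zero (linear) phase."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1) * rows
    v = np.fft.fftfreq(cols).reshape(1, -1) * cols
    hp = 1.0 - np.exp(-(u**2 + v**2) / (2.0 * d0**2))   # Gaussian high-pass
    out = np.fft.ifft2(np.fft.fft2(img) * (a + b * hp)).real
    return np.clip(out, 0, 255)

def hist_equalize(img):
    """Global histogram equalization on an 8-bit grayscale image."""
    hist, _ = np.histogram(img.astype(np.uint8), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img.astype(np.uint8)]
```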
Abstract: Since feasibility studies of R&D programs were initiated in 2008 to promote efficient public R&D investment, these studies have improved in precision. Although experience with such studies has accumulated to a certain point, methodological improvement is still required. Feasibility studies of R&D programs comprise various viewpoints, such as technology, policy, and economics. This research provides improvement methods for the economic perspective, especially the cost estimation process of R&D activities. First, the fundamental concepts of cost estimation are reviewed. Then, statistical and econometric analysis methods are applied as an empirical analysis. Finally, limitations and further research directions are provided.
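As a hedged illustration of the empirical step, the sketch below fits a log-linear cost model by ordinary least squares; the explanatory variables and all data are hypothetical placeholders, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Hypothetical program attributes (placeholders): duration, staff size, scope score
duration = rng.uniform(1, 6, n)
staff = rng.uniform(5, 120, n)
scope = rng.uniform(1, 10, n)
cost = np.exp(1.0 + 0.3 * np.log(staff) + 0.2 * duration + 0.1 * scope
              + rng.normal(0.0, 0.2, n))

# Log-linear OLS: log(cost) = b0 + b1*log(staff) + b2*duration + b3*scope
X = np.column_stack([np.ones(n), np.log(staff), duration, scope])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
print("estimated coefficients:", beta)
```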
Abstract: Internet infrastructures in most places of the world
have been supported by the advancement of optical fiber technology,
most notably wavelength division multiplexing (WDM) system.
Optical technology by means of WDM system has revolutionized
long distance data transport and has resulted in high data capacity,
cost reductions, extremely low bit error rate, and operational
simplification of the overall Internet infrastructure. This paper analyses and compares the system impairments that occur at data transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel in our proposed optical WDM system for Internet infrastructure in Tanzania. The results show that the data transmission rate of 2.5 Gb/s exhibits fewer system impairments than the rate of 10 Gb/s per wavelength channel and achieves sufficient system performance to provide a good Internet access service.
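To make such comparisons concrete, a standard Gaussian-noise approximation relates the received Q factor to the bit error rate; the sketch below applies it to assumed Q values, since the measured figures are not reproduced here:

```python
import math

def q_to_ber(q):
    """Standard Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Assumed example Q factors for the two line rates (illustrative only)
for rate, q in [("2.5 Gb/s", 7.0), ("10 Gb/s", 5.5)]:
    print(f"{rate}: Q = {q} -> BER ~ {q_to_ber(q):.2e}")
```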
Abstract: The security of power systems against malicious cyber-physical data attacks has become an important issue. The adversary always attempts to manipulate the information structure of the power system and inject malicious data to deviate state variables while evading the existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they might be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition for a trustworthy power system to evade malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it does not require any physical effort at either the microgrid or the network level. An identification scheme for finding the meters under attack is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.
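A minimal sketch of the residual test that such attacks try to evade, using a linearized state-estimation model z = Hx + e; the noise level and chi-squared cutoff are assumed placeholders. The comment at the end notes why a coordinated injection a = Hc is invisible to this test:

```python
import numpy as np

def residual_test(H, z, sigma=0.01, tau=None):
    """State estimate by least squares and residual-norm bad-data test."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    r = z - H @ x_hat                               # measurement residual
    stat = float(r @ r) / sigma**2                  # ~ chi^2 with m - n dof under H0
    m, n = H.shape
    if tau is None:
        tau = (m - n) + 3.0 * np.sqrt(2.0 * (m - n))   # rough chi^2 cutoff
    return x_hat, stat, stat > tau

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 4))                        # assumed measurement matrix
x = rng.normal(size=4)
z = H @ x + 0.01 * rng.normal(size=10)
print(residual_test(H, z))

# A coordinated injection a = H @ c shifts the estimate by c but leaves the
# residual r unchanged, so this test alone cannot flag such stealthy attacks.
z_attacked = z + H @ np.array([1.0, 0.0, 0.0, 0.0])
print(residual_test(H, z_attacked))                 # same statistic, undetected
```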
Abstract: Access to information is the key to the empowerment of everybody, regardless of where they live. This research is carried out in respect of people living in developing countries, considering the complex geographical, demographic, and socio-economic conditions surrounding the areas they live in, which hinder access to information both for residents and for professionals providing services, such as medical workers, and which have led to high death rates and development stagnation. Research on a Unified Communications and Integrated Collaborations (UCIC) system in the health sector of developing countries offers a possible solution for bridging the digital divide among these communities. The aim is to deliver services in a seamless manner so that health workers situated anywhere can be reached easily and can access the information needed for service delivery. The proposed UCIC provides an immersive telepresence experience for one-to-one or many-to-many meetings. Extending to locations anywhere in the world, the transformative platform delivers ultra-low operating costs through the use of general-purpose networks, special lenses, and tracking systems.
Abstract: The fuel cost of a motor vehicle operating on its common route is an important part of its operating cost; therefore, the importance of fuel saving is increasing day by day. One of the parameters that improves fuel saving is the regulation of driving characteristics. The number and duration of stops increase with heavy traffic load. It is possible to improve fuel saving by regulating traffic flow and driving characteristics. Research shows that regulating traffic flow decreases fuel consumption, but this is not enough to improve fuel saving without also regulating driving characteristics. This study analyses the fuel consumption of two trips of a city bus operating on its common route and determines the effect of traffic density and driving characteristics on fuel consumption. Finally, it offers some suggestions on regulating driving characteristics to improve fuel saving. Fuel saving is determined according to the results obtained from a simulation program. When the experimental and simulation results are compared, it is found that fuel savings of up to 40 percent can be reached.
Abstract: Coastal resource management, community empowerment, and socio-economic development are the cornerstones for uplifting the lives of coastal area inhabitants. This paper aims to identify the positive impacts of coastal management projects on fishermen's economic well-being, to analyze the role of fishermen and their families in effecting economic change, and to analyze the roles of stakeholders in managing coastal resources. Structured and semi-structured questionnaires were prepared to obtain qualitative data, and interviews were conducted with fishermen. Findings show that community empowerment and conservation of coastal resources through local and central government projects have exerted a positive impact on the coastal community. Some activities involved women, who are particularly active in the "off-fishing" season. Traditionally, local fishermen together with local stakeholders have set up a zoning system to minimize conflicts between fishermen. In addition, zoning is used to protect certain ecosystems that can provide benefits well into the future.
Abstract: Historic religious buildings located in seismic areas have developed different failure mechanisms. Simulation of failure modes is done with computer programs, either through a nonlinear dynamic analysis or, in simplified form, using the method of failure blocks. Currently, simulation methodologies for failure modes based on the rigid failure blocks method exist only for Roman Catholic churches. Due to differences in plan shape, elevation, and construction systems between Orthodox and Catholic churches, research was initiated for the first time to develop this simulation methodology for Orthodox churches. This article presents the first results of that research. The theoretical results were compared with the real failure modes recorded at an Orthodox church in the Banat region that was severely damaged by earthquakes in 1991. The simulated seismic response, obtained with a computer program based on the finite element method, was confirmed by the cracks observed after the earthquakes. The consolidation of the church was carried out according to these theoretical results, creating a rigid floor that connects all the failure blocks.
Abstract: Introducing electromagnetic interference and electromagnetic compatibility, or "The Art of Black Magic", to engineering students can be a terrifying experience for both students and tutors. Removing the obstacle of large, expensive facilities such as a fully fitted EMC laboratory and hours of complex theory, this paper presents the design of a laboratory setup for student exercises that gives students experience with the basics of the EMC/EMI problems that may challenge the functionality and stability of embedded system designs. This is done using a simple laboratory installation and basic measurement equipment, such as a medium-cost digital storage oscilloscope, at the cost of not knowing the exact magnitude of the noise components, only whether the noise is significant and what its source is. A group of students performed a trial exercise with good results and feedback.
Abstract: SDMA (Space-Division Multiple Access) is a MIMO (Multiple-Input, Multiple-Output) based wireless communication network architecture that has the potential to significantly increase spectral efficiency and system performance. Maximum likelihood (ML) detection provides the optimal performance, but its complexity increases exponentially with the constellation size of the modulation and the number of users. The QR decomposition (QRD) MUD can be a substitute for ML detection due to its low complexity and near-optimal performance. The minimum mean-squared-error (MMSE) multiuser detection (MUD) minimises the mean square error (MSE), which does not guarantee that the BER of the system is also minimised. In contrast, the minimum bit error rate (MBER) MUD performs better than the classic MMSE MUD in terms of minimum probability of error by directly minimising the BER cost function. The MBER MUD is also able to support more users than the number of receiving antennas, whereas the other MUDs fail in this scenario. In this paper, the performance of various MUD techniques is verified for correlated MIMO channel models based on the IEEE 802.16n standard.
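A minimal sketch of the classic MMSE MUD referenced above for a flat SDMA channel with BPSK users; the channel and noise values are toy assumptions. An MBER detector would instead minimise the BER cost function numerically rather than the MSE:

```python
import numpy as np

def mmse_mud(H, y, sigma2):
    """Classic MMSE detector: W = (H^H H + sigma^2 I)^-1 H^H, then BPSK slicing."""
    n_users = H.shape[1]
    W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_users), H.conj().T)
    return np.sign((W @ y).real)                 # hard BPSK decisions

# Assumed toy setup: 4 receive antennas, 3 users, Rayleigh-like channel
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))) / np.sqrt(2)
bits = rng.choice([-1.0, 1.0], size=3)
sigma2 = 0.1
y = H @ bits + np.sqrt(sigma2 / 2) * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(mmse_mud(H, y, sigma2), bits)
```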
Abstract: Municipal solid waste (MSW) management in Thailand, both in general and in detail, is reviewed in this paper. Topics include MSW generation, sources, composition, and trends. The review then moves to sustainable solutions for MSW management and sustainable alternative approaches, with an emphasis on integrated MSW management. Information on waste in Thailand is given at the beginning of this paper for a better understanding of the later contents. It is clear that no single method of MSW disposal can deal with all materials in an environmentally sustainable way. As such, a suitable approach to MSW management should be an integrated approach that delivers both environmental and economic sustainability. With increasing environmental concerns, an integrated MSW management system has the potential to maximize the amount of usable waste material and to produce energy as a by-product. In Thailand, the composition of waste is mainly (86%) organic waste, paper, plastic, glass, and metal; as a result, the waste in Thailand is suitable for integrated MSW management. Currently, the Thai national waste management policy is starting to encourage local administrations to gather into clusters that establish central MSW disposal facilities with suitable technologies, reducing the disposal cost based on the amount of MSW generated.
Abstract: This paper studies the dependability of component-based applications, especially embedded ones, from the diagnosis point of view. The principle of the diagnosis technique is to implement inter-component tests in order to detect and locate faulty components without redundancy. The proposed approach for diagnosing faulty components consists of two main aspects. The first concerns the execution of the inter-component tests, which requires integrating test functionality within a component; this is the subject of this paper. The second is the diagnosis process itself, which consists of analysing the inter-component test results to determine the fault state of the whole system. The advantages of this diagnosis method compared to classical redundancy-based fault-tolerant techniques are application autonomy, cost-effectiveness, and better usage of system resources. These advantages are very important for many systems, and especially for embedded ones.
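A hedged sketch of the first aspect, integrating test functionality within a component so that neighbours can invoke it and a diagnoser can aggregate the verdicts; all interface names are illustrative, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    healthy: bool = True                          # simulated internal fault state
    neighbours: list = field(default_factory=list)

    def run_self_test(self) -> bool:
        """Test functionality integrated within the component (illustrative)."""
        return self.healthy

def inter_component_diagnosis(components):
    """Each component tests its neighbours; the diagnoser aggregates verdicts
    and reports any component that failed a test as suspected faulty."""
    verdicts = {}
    for c in components:
        for n in c.neighbours:
            verdicts[(c.name, n.name)] = n.run_self_test()
    return {tested for (_, tested), ok in verdicts.items() if not ok}

a, b, c = Component("A"), Component("B", healthy=False), Component("C")
a.neighbours, b.neighbours, c.neighbours = [b], [c], [a]
print(inter_component_diagnosis([a, b, c]))       # {'B'} located without redundancy
```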
Abstract: Sensor relocation repairs coverage holes caused by node failures. One way to repair coverage holes is to find redundant nodes to replace faulty nodes. Most previous approaches take a long time to find redundant nodes because the redundant nodes are randomly scattered around the sensing field. To record the precise positions of sensor nodes, most studies assume that GPS is installed on the sensor nodes; however, the high cost and power consumption of GPS are heavy burdens for sensor nodes. We therefore propose a fast sensor relocation algorithm that arranges redundant nodes to form redundant walls without GPS. Redundant walls are constructed at the position where the average distance to each sensor node is the shortest, so they can guide sensor nodes to redundant nodes in the minimum time. Simulation results show that our algorithm finds a proper redundant node in the minimum time and reduces the relocation time with low message complexity.
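A minimal sketch of placing a redundant wall at the point whose average distance to all sensor nodes is smallest (the geometric median), computed with the Weiszfeld iteration; the node coordinates are hypothetical:

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld's algorithm: minimises the sum (hence average) of distances."""
    p = points.mean(axis=0)               # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - p, axis=1)
        d = np.where(d < eps, eps, d)     # avoid division by zero at a data point
        p_new = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(p_new - p) < eps:
            break
        p = p_new
    return p

# Hypothetical sensor positions in a 100 x 100 field
nodes = np.random.default_rng(2).uniform(0, 100, size=(30, 2))
print("redundant wall anchor:", geometric_median(nodes))
```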
Abstract: This paper deals with a portfolio selection problem based on possibility theory under the assumption that the returns of assets are LR-type fuzzy numbers. A possibilistic portfolio model with transaction costs is proposed, in which the possibilistic mean value of the return is taken as the measure of investment return and the possibilistic variance of the return as the measure of investment risk. Because transaction costs are considered, traditional optimization algorithms usually fail to find the optimal solution efficiently, and heuristic algorithms are a better choice. Therefore, a particle swarm optimization is designed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the proposed approach.
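A minimal global-best PSO sketch for such a problem, assuming the possibilistic mean and variance have already been reduced to functions of the weight vector; the objective coefficients and the penalty that enforces fully invested weights are illustrative assumptions:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO, minimising `objective` over [0, 1]^dim."""
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, objective(g)

# Illustrative objective: variance - lambda * (mean return - transaction costs),
# with a penalty forcing the weights to sum to 1 (all values are placeholders).
mu, cost, lam = np.array([.08, .10, .12]), np.array([.002, .003, .004]), 2.0
cov = np.diag([.02, .03, .05])
def objective(wgt):
    return wgt @ cov @ wgt - lam * (wgt @ (mu - cost)) + 10 * abs(wgt.sum() - 1)

print(pso(objective, dim=3))
```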
Abstract: Removal of Methylene Blue (MB) from aqueous solution by adsorption on gypsum was investigated by the batch method. The studies were conducted at 25°C and included the effects of pH and initial concentration of Methylene Blue. The adsorption data were analyzed using the Langmuir, Freundlich, and Tempkin isotherm models. The maximum monolayer adsorption capacity was found to be 36 mg of the dye per gram of gypsum. The data were also analyzed in terms of their kinetic behavior and were found to obey the pseudo-second-order equation.
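As a hedged illustration of the isotherm analysis, the sketch below fits the linearized Langmuir form Ce/qe = Ce/qmax + 1/(KL·qmax); the equilibrium data points are placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mg/L), qe (mg/g) -- placeholders only
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.0, 14.0, 21.0, 28.0, 32.0])

# Linearized Langmuir: Ce/qe = (1/qmax) * Ce + 1/(KL * qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax = 1.0 / slope                    # maximum monolayer capacity (mg/g)
KL = 1.0 / (intercept * qmax)         # Langmuir constant (L/mg)
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```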
Abstract: In a previous work, we presented the numerical
solution of the two dimensional second order telegraph partial
differential equation discretized by the centred and rotated five-point
finite difference discretizations, namely the explicit group (EG) and
explicit decoupled group (EDG) iterative methods, respectively. In
this paper, we utilize a domain decomposition algorithm on these
group schemes to divide the tasks involved in solving the same
equation. The objective of this study is to describe the development
of the parallel group iterative schemes under OpenMP programming
environment as a way to reduce the computational costs of the
solution processes using multicore technologies. A detailed performance analysis of the parallel implementations of the point and group iterative schemes is reported and discussed.
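A hedged serial sketch of a group-wise sweep on a 2D grid, simplified to a Poisson-like model problem rather than the telegraph equation; within each 2x2 group the points are relaxed in turn, a simplification of the true explicit-group formula, and in the paper the independent groups are distributed across OpenMP threads:

```python
import numpy as np

def eg_sweep(u, f, h2):
    """One group-wise sweep over the interior in 2x2 blocks for -lap(u) = f.
    The outer block loops are the natural unit of parallel work."""
    n = u.shape[0]
    for i in range(1, n - 1, 2):
        for j in range(1, n - 1, 2):
            for di, dj in ((0, 0), (0, 1), (1, 0), (1, 1)):
                ii, jj = i + di, j + dj
                if ii < n - 1 and jj < n - 1:
                    u[ii, jj] = 0.25 * (u[ii - 1, jj] + u[ii + 1, jj]
                                        + u[ii, jj - 1] + u[ii, jj + 1]
                                        + h2 * f[ii, jj])
    return u

def solve(n=33, iters=500):
    u = np.zeros((n, n))                  # boundary fixed at zero
    f = np.ones((n, n))
    h2 = (1.0 / (n - 1)) ** 2
    for _ in range(iters):
        u = eg_sweep(u, f, h2)
    return u
```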
Abstract: Supply Chain Management (SCM) is the integration of manufacturer, transporter, and customer into one seamless chain that allows the smooth flow of raw materials, information, and products throughout the entire network, helping to minimize all related efforts and costs. The main objective of this paper is to develop a model that accepts a specified number of spare parts within the supply chain and simulates its inventory operations throughout all stages in order to minimize inventory holding costs, base stock, and safety stock, and to find the optimum inventory levels, thereby suggesting a way to adapt some Just-In-Time factors to minimize inventory costs throughout the entire supply chain. The model was developed using Microsoft Excel and Visual Basic in order to study inventory allocations in any network of the supply chain. The application and reproducibility of this model were tested by comparing the actual system implemented in the case study with the results of the developed model. The findings showed that the total inventory costs of the developed model are about 50% less than the actual costs of the inventory items within the case study.
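A minimal sketch of the kind of base-stock simulation the spreadsheet model performs, here for a single echelon; the demand distribution, lead time, and cost rates are hypothetical placeholders:

```python
import numpy as np

def simulate(base_stock, demand, lead_time=2, holding_cost=1.0, backlog_cost=5.0):
    """Single-echelon base-stock policy: order up to `base_stock` each period
    and accumulate holding and backlog costs."""
    inventory, pipeline, total_cost = base_stock, [0] * lead_time, 0.0
    for d in demand:
        inventory += pipeline.pop(0)             # receive the oldest order
        inventory -= d                           # satisfy (or backlog) demand
        position = inventory + sum(pipeline)
        pipeline.append(base_stock - position)   # order up to the base stock
        total_cost += (holding_cost * max(inventory, 0)
                       + backlog_cost * max(-inventory, 0))
    return total_cost / len(demand)

rng = np.random.default_rng(4)
demand = rng.poisson(10, size=1000)              # placeholder demand stream
# Search for the base-stock level with the lowest average cost
best = min(range(10, 41), key=lambda s: simulate(s, demand))
print("best base stock:", best)
```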