Abstract: In this paper we propose a multistage adaptive
ARQ/HARQ/HARQ scheme. This method combines a pure ARQ
(Automatic Repeat reQuest) mode at low channel bit error rates with
a hybrid ARQ method using two different Reed-Solomon codes under
middle and high error rate conditions; the scheme therefore has
three stages. The main goal is to increase the number of states in
adaptive HARQ methods and thereby achieve maximum throughput at
every channel bit error rate. We verify the proposal by
calculation and then by simulations in a land mobile satellite channel
environment. Optimization of the scheme's system parameters is described
in order to maximize the throughput over the whole defined Signal-to-
Noise Ratio (SNR) range in the selected channel environment.
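As an illustrative sketch of the three-stage selection idea, the mode with the highest expected throughput can be chosen for each channel error rate. The code below is a simplified model, not the paper's scheme: it treats the error rate as a symbol error probability and assumes two hypothetical Reed-Solomon codes, RS(255,223) and RS(255,127).

```python
import math

def arq_throughput(ber, n=255):
    # Pure ARQ: a packet is accepted only if all n symbols are error-free
    # (code rate 1, no redundancy).
    return (1.0 - ber) ** n

def rs_harq_throughput(ber, n=255, k=223):
    # HARQ with RS(n, k), which corrects up to t = (n - k) // 2 symbol errors.
    # Throughput = code rate * probability that at most t errors occur.
    t = (n - k) // 2
    p_ok = sum(math.comb(n, i) * ber**i * (1 - ber)**(n - i) for i in range(t + 1))
    return (k / n) * p_ok

def select_mode(ber, codes=((255, 223), (255, 127))):
    # Three-stage adaptive selection: pure ARQ or one of two RS-coded HARQ
    # modes, whichever yields the highest throughput at this error rate.
    best = ("ARQ", arq_throughput(ber))
    for n, k in codes:
        thr = rs_harq_throughput(ber, n, k)
        if thr > best[1]:
            best = (f"HARQ-RS({n},{k})", thr)
    return best
```

At very low error rates the uncoded ARQ mode wins; as errors increase, the stronger (lower-rate) RS code takes over, which is the behaviour the three-stage scheme exploits.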
Abstract: Sustainable development depends heavily on the
implementation of environmental education programs, whose
ultimate goal is to produce environmentally literate citizens who
undertake environmentally friendly actions. Efforts in environmental
education over the past years are now reflected in the increased
awareness of citizens in European countries and, particularly, in
Portugal. However, we still lack information on the
prevalence of specific behaviors that contribute to sustainability,
influenced by a new attitude toward the environment. Determining
the prevalence of pro-environmental behaviors among higher
education students is an important approach to understanding to what
extent the next leading generation is, in practice, committed to the
goals of sustainable development. Therefore, the present study evaluates
the prevalence of a specific set of behaviors (water savings, energy
savings, environmental criteria in shopping, and mobility) among
University of Madeira students and discusses their commitment to
sustainable development.
Abstract: This paper aims to develop a model that assists the
international retailer in selecting the country that maximizes the
degree of fit between the retailer's goals and the country
characteristics in its initial internationalization move. A two-stage
multi-criteria decision model is designed, integrating the Analytic
Hierarchy Process (AHP) and Goal Programming. Ethical, cultural,
geographic and economic proximity are identified as the relevant
constructs of the internationalization decision. The constructs are
further structured into sub-factors within the analytic hierarchy. The
model helps the retailer to integrate, rank and weigh a number of
hard and soft factors and to prioritize the countries accordingly. The
model has been implemented for a Turkish luxury goods retailer that
was planning to internationalize; the retailer's actual entry into the
selected country supports the model. Implementation for a single
retailer limits the generalizability of the results; however, the
emphasis of the paper is on construct identification and model
development. The paper enriches the existing literature by proposing
a hybrid multi-objective decision model that introduces new soft
dimensions, i.e. perceived distance, ethical proximity and humane
orientation, into the decision process and facilitates effective decision
making.
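The first stage of such a model derives priority weights for the constructs from pairwise comparisons. A minimal sketch of this AHP step, using the row geometric mean as a standard approximation of the principal eigenvector, is shown below; the pairwise comparison matrix over the four constructs is hypothetical, not taken from the paper.

```python
import math

def ahp_weights(pairwise):
    # Approximate AHP priority vector via the row geometric mean method,
    # a common stand-in for the principal eigenvector of the matrix.
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical pairwise comparisons of the four constructs
# (cultural, geographic, economic, ethical proximity) on Saaty's 1-9 scale.
matrix = [
    [1,   3,   5,   3],
    [1/3, 1,   3,   1],
    [1/5, 1/3, 1,   1/3],
    [1/3, 1,   3,   1],
]
weights = ahp_weights(matrix)
```

The resulting weights sum to one and feed the second-stage goal program, which scores candidate countries against the weighted constructs.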
Abstract: In a wide-area environment such as a Grid, data
placement is an important aspect of distributed database systems. In
this paper, we address the problem of the initial placement of
non-replicated database fragments in a Grid architecture. We propose a
graph-based approach that considers resource restrictions. The goal is to
optimize the use of computing, storage and communication
resources. The proposed approach proceeds in two phases: in the
first phase, we perform fragment grouping using knowledge about
fragment dependencies and, in the second phase, we determine an
efficient placement of the fragment groups on the Grid. We also
show, via experimental analysis, that our approach gives solutions
that are close to optimal for different database and Grid
configurations.
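The two-phase structure can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes grouping via connected components of the dependency graph and a greedy first-fit-decreasing placement under per-node storage capacities.

```python
def group_fragments(fragments, dependencies):
    # Phase 1: group fragments that depend on each other
    # (connected components of the dependency graph), via union-find.
    parent = {f: f for f in fragments}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in dependencies:
        parent[find(a)] = find(b)
    groups = {}
    for f in fragments:
        groups.setdefault(find(f), []).append(f)
    return list(groups.values())

def place_groups(groups, sizes, node_capacity):
    # Phase 2: greedy first-fit-decreasing placement of groups onto Grid
    # nodes, always choosing the node with the most free capacity.
    groups = sorted(groups, key=lambda g: sum(sizes[f] for f in g), reverse=True)
    free = dict(node_capacity)
    placement = {}
    for g in groups:
        need = sum(sizes[f] for f in g)
        node = max(free, key=free.get)
        if free[node] < need:
            raise ValueError("no node can hold group %r" % g)
        free[node] -= need
        placement[tuple(g)] = node
    return placement
```

Keeping dependent fragments in one group means queries joining them run on a single node, which is the communication saving the grouping phase is after.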
Abstract: This paper presents a methodology based on machine
learning approaches for a short-term rain forecasting system. Decision
Tree, Artificial Neural Network (ANN), and Support Vector Machine
(SVM) methods were applied to develop classification and prediction
models for rainfall forecasts. The goals of this presentation are to
demonstrate (1) how feature selection can be used to identify the
relationships between rainfall occurrences and other weather
conditions and (2) what models can be developed and deployed for
predicting accurate rainfall estimates to support decisions to
launch cloud seeding operations in the northeastern part of
Thailand. Datasets were collected during 2004-2006 from the
Chalermprakiat Royal Rain Making Research Center at Hua Hin,
Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making
Research Center at Pimai, Nakhon Ratchasima, and the Thai
Meteorological Department (TMD). A total of 179 records with 57
features were merged and matched by unique date. There are three
main parts in this work. Firstly, a decision tree induction algorithm
(C4.5) was used to classify the rain status as either rain or no-rain;
the overall accuracy of the classification tree reaches 94.41% with
five-fold cross validation. The C4.5 algorithm was also used to
classify the rain amount into three classes, no-rain (0-0.1 mm),
few-rain (0.1-10 mm), and moderate-rain (>10 mm), for which the overall
accuracy of the classification tree reaches 62.57%. Secondly, an ANN
was applied to predict the rainfall amount, and the root mean square
error (RMSE) was used to measure the training and testing errors of
the ANN. The ANN yields a lower RMSE, at 0.171, for
daily rainfall estimates when compared to next-day and next-2-day
estimation. Thirdly, the ANN and SVM techniques were also used to
classify the rain amount into the three classes above (no-rain, few-rain,
and moderate-rain). The results achieved 68.15% and 69.10%
overall accuracy of same-day prediction for the ANN and SVM
models, respectively. The obtained results illustrate the comparative
predictive power of different methods for rainfall estimation.
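Two of the building blocks above are simple enough to state directly: the RMSE used to score the ANN regressor, and the three-class rain-amount binning used by the classifiers. A minimal sketch (thresholds taken from the abstract):

```python
import math

def rmse(actual, predicted):
    # Root mean square error between observed and predicted rainfall (mm).
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def rain_class(amount_mm):
    # Class thresholds from the paper: no-rain (0-0.1 mm),
    # few-rain (0.1-10 mm), moderate-rain (>10 mm).
    if amount_mm <= 0.1:
        return "no-rain"
    if amount_mm <= 10:
        return "few-rain"
    return "moderate-rain"
```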
Abstract: Promoting safer driver behavior is the main goal of this paper. It is a fact that drivers' behavior is relatively safer when being monitored. Thus, in this paper, we propose a monitoring system that reports specific driving events, as well as potentially aggressive events, for estimation of the driving performance. Our driving monitoring system is composed of two parts. The first part is the in-vehicle embedded system, which comprises a GPS receiver, a two-axis accelerometer, a radar sensor, an OBD interface, and a GPRS modem; the design considerations that led to this architecture are described in this paper. The second part is a web server where an adaptive hierarchical fuzzy system is proposed to classify the driving performance based on the data sent by the in-vehicle embedded system and the data provided by the geographical information system (GIS). Our system is robust, inexpensive and small enough to fit inside a vehicle without distracting the driver.
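The flavour of the server-side fuzzy classification can be sketched with two toy rules over accelerometer and speed data. This is not the paper's adaptive hierarchical system; the membership function parameters and rules are illustrative assumptions.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def driving_performance(over_speed_kmh, accel_g):
    # Two toy Mamdani-style rules (max-min inference):
    #  R1: IF over-speed is high OR acceleration is harsh THEN performance is poor
    #  R2: IF over-speed is low AND acceleration is gentle THEN performance is good
    high_speed = tri(over_speed_kmh, 10, 30, 60)
    harsh = tri(accel_g, 0.3, 0.6, 1.0)
    low_speed = tri(over_speed_kmh, -10, 0, 15)
    gentle = tri(accel_g, -0.1, 0.0, 0.35)
    poor = max(high_speed, harsh)
    good = min(low_speed, gentle)
    # Crisp score in [0, 1]: weighted average of the rule outputs
    # (good maps to 1.0, poor maps to 0.0).
    if poor + good == 0:
        return 0.5
    return good / (poor + good)
```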
Abstract: With the development of technology, the growing demand
for fast and safe passenger transport, together with increasing
problems such as air pollution, traffic congestion, population growth
and the high cost of private vehicle usage, has led many cities around
the world, large and small, to build rail systems as a means of urban
transport in order to ensure economic and environmental
sustainability and more efficient use of land in the city. The
implementation phase of rail systems costs much more than that of
other public transport systems; however, their long-term social and
economic returns have made these systems the most popular
investment tool for planned and developing cities.
In our country, the purposes, goals and policies of transportation
plans lack integrity, and the problems are not clearly identified.
Moreover, undefined and incomplete assessment of transportation
systems and insufficient financial analysis are the most important
causes of failure. Addressing rail systems and other transportation
systems as a whole is seen as the main factor in increasing
efficiency; the fact that applications in our country are not yet
integrated in this way has led to the present problems.
Abstract: Many supervised induction algorithms require discrete
data, even though real data often comes in both discrete
and continuous formats. Quality discretization of continuous
attributes is an important problem that affects the speed,
accuracy and understandability of the induction models. Usually,
discretization and other types of statistical processes are applied
to subsets of the population, as the entire population is practically
inaccessible. For this reason we argue that a discretization
performed on a sample of the population is only an estimate of
that of the entire population. Most of the existing discretization methods
partition the attribute range into two or several intervals using
a single cut point or a set of cut points. In this paper, we introduce a
technique that uses resampling (such as the bootstrap) to generate
a set of candidate discretization points and thus improve the
discretization quality by providing a better estimate with respect to
the entire population. The goal of this paper is to observe
whether the resampling technique can lead to better discretization
points, which opens up a new paradigm for the construction of
soft decision trees.
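A minimal sketch of the resampling idea, assuming an entropy-minimizing binary cut as the base discretizer and the median as the aggregator over bootstrap samples (the paper's actual estimator may differ):

```python
import random
from math import log2
from statistics import median

def entropy(labels):
    # Shannon entropy of a class-label list.
    n = len(labels)
    ps = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in ps)

def best_cut(values, labels):
    # Entropy-minimizing binary cut point, searched over the midpoints
    # of consecutive sorted values.
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if score < best_score:
            best_score = score
            best = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best

def bootstrap_cut(values, labels, n_boot=200, seed=0):
    # Resample the data with replacement, find a candidate cut per
    # bootstrap sample, and aggregate with the median: a more stable
    # estimate of the population cut point than a single-sample cut.
    rng = random.Random(seed)
    n = len(values)
    cuts = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        cuts.append(best_cut([values[i] for i in idx], [labels[i] for i in idx]))
    return median(cuts)
```

The distribution of the candidate cuts (rather than a single point) is also what a soft decision tree can exploit at split time.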
Abstract: A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding presents a continuous-tone still image compression system that combines lossy and lossless compression, making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by means of a highly configurable alignment system, depending on the application, which makes it possible to reconfigure the elements of the image and to obtain different levels of importance from which the bit stream will be generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream itself constitutes a compressed still image. Moreover, the use of a packing system on the bit stream after the VBLm allows the creation of a final, highly scalable bit stream from a basic image level and one or several enhancement levels.
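A reversible (lossless) color-space transform of the kind the system relies on can be illustrated with the integer reversible color transform known from lossless JPEG 2000; whether the paper uses this exact transform is not stated, so it serves here only as an example of finite-arithmetic reversibility.

```python
def rct_forward(r, g, b):
    # Reversible color transform (integer, lossless): floor division
    # keeps the transform exactly invertible, as in lossless JPEG 2000.
    y = (r + 2 * g + b) // 4
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    # Exact inverse: recover g first, then r and b from the differences.
    g = y - (cb + cr) // 4
    return cr + g, g, cb + g  # r, g, b
```

Because both directions use only integer additions, shifts and floor divisions, the round trip reproduces every pixel bit-exactly, which is what lets one bit stream serve both lossy (truncated) and lossless (complete) decoding.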
Abstract: When designing information systems that deal with
large amounts of domain knowledge, system designers need to consider
the ambiguities of labeling terms in the domain vocabulary when
navigating users through the information space. The goal of this study
is to develop a methodology for system designers to label navigation
items, taking account of the ambiguities that stem from synonyms or
polysemes among labeling terms. In this paper, we propose a method
for concept labeling based on mappings between a domain ontology and
a thesaurus, and report the results of an empirical evaluation.
Abstract: In distributed resource allocation, a set of agents must assign their resources to a set of tasks. This problem arises in many real-world domains such as distributed sensor networks, disaster rescue, hospital scheduling and others. Despite the variety of approaches proposed for distributed resource allocation, a systematic formalization of the problem explaining the different sources of difficulty, together with a formal explanation of the strengths and limitations of key approaches, is missing. We take a step towards this goal by using a formalization of distributed resource allocation that represents both the dynamic and the distributed aspects of the problem. In this paper we present a new idea for target tracking in sensor networks and compare it with previous approaches. The central contribution of the paper is a generalized mapping from distributed resource allocation to DDCSP. This mapping is proven to correctly solve resource allocation problems of specific difficulty. This theoretical result is verified in practice by a simulation on a real-world distributed sensor network.
Abstract: Process-oriented software development is a new
software development paradigm in which software design is modeled
by a business process, which is in turn translated into a process
execution language for execution. The building blocks of this
paradigm are software units that are composed together to work
according to the flow of the business process. This new paradigm
still exhibits the characteristics of applications built with
traditional software component technology. This paper discusses an
approach to applying a traditional technique for software component
fabrication to the design of process-oriented software units, called
process components. These process components result from
decomposing a business process of a particular application domain
into subprocesses, and they can be reused to design the business
processes of other application domains. The decomposition considers
five managerial goals, namely cost effectiveness, ease of assembly,
customization, reusability, and maintainability. The paper presents
how to design or decompose process components from a business
process model and how to measure technical features of the design
that would affect the managerial goals. A comparison between the
measurement values of different designs can tell which process
component design is more appropriate for the managerial goals that
have been set. The proposed approach can be applied in a Web
Services environment, which accommodates process-oriented
software development.
Abstract: This paper introduces and discusses definitions and concepts from the supplier relationship management (SRM) area. The goal of this review is to provide readers with the basic background needed to understand the market mechanisms and technological developments of the SRM market. Further on, the work gives a picture of the current business environment in which SRM vendors operate and of the main trends in the field, based on the main SRM functionalities, i.e. e-Procurement, e-Sourcing and Supplier Enablement, indicating to users and software providers the technological developments and practices that will take place in this area in the next couple of years.
Abstract: Speed estimation is one of the important practical tasks in machine vision, robotics and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms. Numerous approaches for speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and assess these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blur in the image, and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision on noisy images. Our method was tested on many images over a wide range of SNR values and gives satisfactory results.
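The "direct relation between motion blur parameters and object speed" can be made concrete: an object moving at speed v smears over blur_px pixels during the exposure time. The sketch below also shows one simple way fuzzy sets can suppress noisy blur-length candidates; the membership parameters are illustrative assumptions, not the paper's design.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_blur_length(candidates, expected=20.0, spread=15.0):
    # Fuzzy estimate: weight each candidate blur length (pixels) by its
    # membership in a triangular set centred on the expected length,
    # which suppresses outliers produced by noise.
    weights = [tri(c, expected - spread, expected, expected + spread) for c in candidates]
    if sum(weights) == 0:
        return expected
    return sum(w * c for w, c in zip(weights, candidates)) / sum(weights)

def object_speed(blur_px, metres_per_px, exposure_s):
    # The object moves blur_px pixels during the exposure,
    # so v = blur extent (m) / exposure time (s).
    return blur_px * metres_per_px / exposure_s
```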
Abstract: The need to evaluate and understand the natural
drainage pattern in a flood-prone and fast-developing environment is
of paramount importance. This information will go a long way toward
helping town planners determine the drainage pattern, road
networks and areas where prominent structures should be located. This
research work was carried out with the aim of studying the topography
of the Bayelsa landscape using digitized topographic information, and
of modeling the natural drainage flow pattern so as to aid the
understanding and construction of workable drainages. To achieve
this, digitized elevation and coordinate data were extracted
from a global imagery map. The extracted information was
modeled into 3D surfaces. The results reveal that the average
elevation for Bayelsa State is 12 m above sea level; the highest
elevation is 28 m, and the lowest elevation is 0 m, along the coastline.
In Yenagoa, the capital city of Bayelsa, where a detailed survey was
carried out, the average elevation is 15 m, the highest elevation is
25 m and the lowest is 3 m above mean sea level. The regional
elevation in Bayelsa shows a gradational decrease from the
north-eastern zone to the south-western zone. Yenagoa shows an
observed elevation lineament, where a low depression is flanked by
high elevations running from the north-east to the south-west. Hence,
future drainages in Yenagoa should be directed from the high
elevations, from the south-east toward the north-west and from the
north-west toward the south-east, to the point of convergence at the
center. When Bayelsa is considered on a regional scale, the flow
pattern is from the north-east to the south-west, and also from north
to south. It is therefore recommended that, in the event of any large
drainage construction at the municipal scale, it be directed from
north-east to south-west or from north to south. Secondly, a detailed
survey should be carried out to ascertain the local topography and
drainage pattern before the design and construction of any drainage
system in any part of Bayelsa.
Abstract: Plastic waste is a big issue in Thailand, yet the amount of plastic recycled in Thailand is still low due to high investment and operating costs; the rest of the plastic waste is burned or sent to landfills. To be financially viable, an effective reverse logistics infrastructure is required to support product recovery activities. However, there is a conflict between reducing cost and raising the level of environmental protection. The purpose of this study is to build a goal programming (GP) model that can be used to help analyze the proper planning of Thailand's plastic recycling system, which involves multiple objectives. This study considers three objectives: reducing the total cost, increasing the amount of plastic recovered, and raising the share of desired plastic materials in the recycling process. The results from two priority structures show that it is necessary to raise the total cost budget in order to achieve the targets on the amount of recycled plastic and on the desired plastic materials.
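The mechanics of a preemptive (priority-structured) goal program can be sketched on a toy two-facility version of the problem. The capacities, costs and recovery rates below are invented for illustration; real GP models are solved with LP solvers rather than by the enumeration used here.

```python
from itertools import product

def preemptive_gp(goals, candidates):
    # Preemptive (lexicographic) goal programming by enumeration:
    # minimise the deviation of the highest-priority goal first, then
    # break ties on the next priority, and so on.
    best = candidates
    for goal in goals:  # goals ordered by descending priority
        m = min(goal(c) for c in best)
        best = [c for c in best if goal(c) == m]
    return best[0]

# Hypothetical plan: x1, x2 = tonnes of plastic waste sent to
# facility 1 and facility 2 (capacities 40 and 60 tonnes, step 5).
candidates = list(product(range(0, 41, 5), range(0, 61, 5)))

cost = lambda p: 30 * p[0] + 20 * p[1]         # operating cost
recovered = lambda p: 0.8 * p[0] + 0.6 * p[1]  # tonnes actually recycled

goals = [
    lambda p: max(0, 50 - recovered(p)),  # P1: recover at least 50 tonnes
    lambda p: max(0, cost(p) - 1800),     # P2: stay within a 1800 budget
]
plan = preemptive_gp(goals, candidates)
```

Swapping the priority order of the two goals changes the chosen plan, which is exactly the kind of trade-off the study's two priority structures expose.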
Abstract: Nowadays, the challenge in hydraulic turbine design is
the multi-objective design of the turbine runner to reach higher
efficiency. The hydraulic performance of a turbine strictly depends
on the shape of the runner blades. The present paper focuses on the
application of a multi-objective optimization algorithm to the design
of a small Francis turbine runner. The optimization exercise focuses
on efficiency improvement at the best efficiency operating point (BEP)
of the GAMM Francis turbine. A global optimization method based
on artificial neural networks (ANN) and genetic algorithms (GA),
coupled with a 3D Navier-Stokes flow solver, has been used to improve
the performance of an initial Francis runner geometry. The
results show the good ability of the optimization algorithm: the final
geometry has better efficiency than the initial geometry. The goal was
to optimize the geometry of the blades of the GAMM turbine runner
toward maximum total efficiency by changing the design parameters
of the camber line in at least 5 sections of a blade. The efficiency of
the optimized geometry is improved from 90.7% to 92.5%. Finally,
the design parameters and the way they were selected are considered
and discussed.
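The optimization loop can be sketched with a minimal real-coded genetic algorithm. Here a toy quadratic surrogate stands in for the ANN-approximated efficiency (the real fitness would come from the ANN trained on Navier-Stokes evaluations, which is far too expensive to call inside the GA loop directly); all parameters below are illustrative assumptions.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, gens=60, seed=1):
    # Minimal real-coded GA: elitism, tournament selection,
    # averaging crossover, and Gaussian mutation of one gene.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        new = scored[:2]  # keep the two best individuals unchanged
        while len(new) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]
            i = rng.randrange(dim)  # mutate one camber parameter
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            new.append(child)
        pop = new
    return max(pop, key=fitness)

# Toy surrogate: "efficiency" peaks at 92.5% for one particular setting
# of 5 normalized camber-line parameters (hypothetical values).
target = [0.3, 0.5, 0.7, 0.5, 0.3]
efficiency = lambda x: 92.5 - sum((xi - ti) ** 2 for xi, ti in zip(x, target))

best = genetic_optimize(efficiency, [(0.0, 1.0)] * 5)
```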
Abstract: The goal of this project is to design a system to
recognize voice commands. Most voice recognition systems
contain two main modules, "feature extraction" and "feature
matching". In this project, the MFCC algorithm is used to implement
the feature extraction module; using this algorithm, the cepstral
coefficients are calculated on the mel frequency scale. The VQ (vector
quantization) method is used to reduce the amount of data and thus
decrease computation time. In the feature matching stage, the Euclidean
distance is applied as the similarity criterion. Because of the high
accuracy of the algorithms used, the accuracy of this voice command
system is high: with at least 5 repetitions of each command in a
single training session, and then two repetitions in each testing
session, a zero error rate in command recognition is achieved.
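The matching stage described above can be sketched as nearest-codebook classification: each command has a VQ codebook, and an utterance is assigned to the command whose codebook gives the lowest average Euclidean distortion over its feature frames. The tiny 2-D "MFCC" vectors below are illustrative stand-ins for real cepstral coefficient frames.

```python
import math

def euclidean(u, v):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def avg_distortion(frames, codebook):
    # Average distance from each feature frame to its nearest codeword.
    return sum(min(euclidean(f, c) for c in codebook) for f in frames) / len(frames)

def recognize(frames, codebooks):
    # Feature matching: pick the command whose VQ codebook gives the
    # lowest average distortion for the utterance's frames.
    return min(codebooks, key=lambda cmd: avg_distortion(frames, codebooks[cmd]))
```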
Abstract: The main goal of this paper is to show how elliptic boundary value problems arising in 2D linear elasticity can be solved numerically using the fictitious domain method (FDM) and the Total-FETI domain decomposition method. We briefly mention the theoretical background of these methods and demonstrate their performance on a benchmark.
Abstract: Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite-precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than an IIR counterpart of comparable performance. The high order demand imposes greater hardware requirements, more arithmetic operations, larger area usage, and higher power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, reducing the arithmetic complexity needed to compute the filter output; consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is most powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high-order digital FIR filters. Here, DA eliminates the need for multipliers when implementing the multiply-and-accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficient values.
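The payoff of zeroing coefficients can be illustrated directly: once small taps are pruned, the MAC loop only touches the surviving (index, value) pairs. This sketch uses a simple magnitude threshold as the pruning rule; the paper's algorithm for choosing which coefficients to modify is more elaborate.

```python
def prune_taps(h, threshold):
    # Zero out coefficients whose magnitude is below the threshold,
    # keeping only the (index, value) pairs of the surviving taps.
    return [(i, c) for i, c in enumerate(h) if abs(c) >= threshold]

def sparse_fir(x, taps):
    # MAC loop that skips zero coefficients entirely: one multiply-add
    # per surviving tap instead of one per original coefficient.
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, c in taps:
            if n - i >= 0:
                acc += c * x[n - i]
        y.append(acc)
    return y
```

For a 5-tap pulse-shaping response with two negligible outer taps, pruning cuts the per-sample multiply-adds from 5 to 3 while leaving the impulse response essentially unchanged.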