Abstract: A glider is, in essence, an unpowered vehicle, and in this project we designed and built an oceanic glider intended to operate underwater. The glider was designed to collect ocean data such as temperature and pressure (and, in future, physical dimensions of the operating environment) and to output this data to an external source. Development of the oceanic glider required research into the various actuation systems that control buoyancy, pitch and yaw, and into the dynamics of these systems. It also involved the design and manufacture of the glider and the design and implementation of a controller that enabled the glider to navigate and move in an appropriate manner.
Abstract: Web usage currently generates a huge amount of data from
heavy user activity. In general, a proxy server is a system that
supports users' web access, and its performance can be managed
through hit rates. This research tries to improve the hit rates of a
proxy system by applying data mining techniques. The data sets were
collected from proxy servers at a university and investigated for
relationships among several features. The resulting model is used to
predict which websites will be accessed in the future. An association
rule technique is applied to obtain the relations among Date, Time,
Main Group web, Sub Group web, and Domain name for building the
model. The results show that this technique can predict web content
for the next day; moreover, the hit rate on future website accesses
increased from 38.15% to 85.57%.
Because this model can predict web page accesses, it tends to
increase the efficiency of proxy servers. In addition, the performance
of internet access will be improved, helping to reduce network
traffic.
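The association-rule step described above can be sketched in a few lines; the transaction fields (a time bucket and a domain) and the support/confidence thresholds below are illustrative assumptions, not the paper's actual data or settings:

```python
from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Mine single-antecedent rules (A -> B) from item sets."""
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    pair_counts = Counter()
    for t in transactions:
        for a, b in combinations(sorted(set(t)), 2):
            pair_counts[(a, b)] += 1
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n < min_support:
            continue  # pair not frequent enough
        for ante, cons in ((a, b), (b, a)):
            conf = c / item_counts[ante]
            if conf >= min_confidence:
                rules.append((ante, cons, c / n, conf))
    return rules

# Hypothetical proxy-log transactions: (time bucket, domain) pairs
logs = [
    ["morning", "news.example"],
    ["morning", "news.example"],
    ["morning", "news.example"],
    ["evening", "video.example"],
]
for ante, cons, sup, conf in association_rules(logs, 0.5, 0.7):
    print(f"{ante} -> {cons} (support={sup:.2f}, confidence={conf:.2f})")
```

A rule such as "morning -> news.example" would let the proxy prefetch or retain the associated content for the next day, which is the mechanism behind the hit-rate improvement reported above.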
Abstract: This research explores the relationship between leadership style and continuous improvement (CI) teams. CI teams have several features that are not always found in other types of teams, including multi-functional members, a short time period for performance, positive and actionable results, and exposure to senior leadership. There is no one best style of leadership for these teams. Instead, it is important to select the best leadership style for the situation. The leader must have the flexibility to change styles and the skill to use the chosen style effectively in order to ensure the team’s success.
Abstract: Information and Communication Technologies (ICT) are increasing in importance every day, especially since the 90’s (the last decade of birth for the Millennial generation). While social interactions involving the Millennial generation have been studied, a lack of investigation remains regarding this generation's use of ICT, as well as its impact on outcomes in education and professional training. By observing and interviewing students preparing an MSc, we aimed to characterize student-ICT interaction during courses. We found that up to 50% of the students (mainly female) used ICT during courses, at a rate of 0.84 occurrences/minute for some of them, and that they believed this involvement did not disturb learning and was even helpful. As recent research shows that multitasking leads people to think they are much better at it than they actually are, further observations with assessments are needed to conclude whether or not students' use of ICT during courses is a real strength.
Abstract: Monitoring of ecological systems is one of the major
issues in ecosystem research. The concepts and methodology of
mathematical systems theory provide useful tools to face this
problem. In many cases, state monitoring of a complex ecological
system consists of the observation (measurement) of certain state
variables, and the whole state process has to be determined from the
observed data. The solution proposed in the paper is the design of an
observer system, which makes it possible to approximately recover
the state process from its partial observation. The method is
illustrated with a trophic chain of resource – producer – primary
consumer type and a numerical example is also presented.
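The observer idea can be sketched with a discrete-time Luenberger-type observer for a linear two-state system; the matrices and gain below are illustrative assumptions, not taken from the paper's trophic-chain model:

```python
# Sketch: recover an unmeasured state variable from partial observation.
# All matrices and gains here are assumed for illustration only.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[0.9, 0.1],
     [0.05, 0.8]]   # assumed state-transition matrix
C = [1.0, 0.0]      # only the first state variable is measured
L = [0.5, 0.2]      # observer gain (assumed, not optimized)

x = [1.0, 2.0]      # true state; the second component is unobserved
xhat = [1.0, 0.0]   # observer starts from a wrong guess

for _ in range(50):
    y = C[0] * x[0] + C[1] * x[1]                    # partial observation
    innov = y - (C[0] * xhat[0] + C[1] * xhat[1])    # innovation term
    x = matvec(A, x)                                 # true system evolves
    ax = matvec(A, xhat)
    xhat = [ax[i] + L[i] * innov for i in range(2)]  # corrected estimate

print(abs(x[1] - xhat[1]))  # estimation error of the unobserved state
```

Because the error dynamics are governed by A - LC, whose eigenvalues here lie inside the unit circle, the estimate of the unmeasured second state converges to the true value, which is exactly the "approximate recovery" role the observer plays above.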
Abstract: This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular procedure to solve it. We use the intercept matrix to identify the zero-valued variables of the 01MKP, which are known as redundant variables. In this heuristic, first the dominance property of the intercept matrix of constraints is exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, we improve the solution by using the pseudo-utility ratio based on the surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with optimum solutions. The space and computational complexity of solving the 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used for finding good solutions to highly constrained NP-hard problems.
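The pseudo-utility idea can be illustrated with a simple greedy sketch; the surrogate multipliers are fixed at 1 and the instance is made up, so this is only a sketch of the ratio-based ordering, not the paper's full heuristic (the intercept-matrix reduction and improvement steps are not reproduced):

```python
# Greedy 01MKP sketch: rank items by pseudo-utility (profit per unit of
# aggregated surrogate weight), then pack feasibly in that order.

def greedy_01mkp(profits, weights, capacities):
    """weights[i][j] = consumption of resource i by item j."""
    m, n = len(weights), len(profits)
    # pseudo-utility with all surrogate multipliers set to 1 (assumed)
    ratio = [profits[j] / sum(weights[i][j] for i in range(m))
             for j in range(n)]
    order = sorted(range(n), key=lambda j: ratio[j], reverse=True)
    used = [0.0] * m
    chosen = [0] * n
    for j in order:
        if all(used[i] + weights[i][j] <= capacities[i] for i in range(m)):
            for i in range(m):
                used[i] += weights[i][j]
            chosen[j] = 1
    return chosen, sum(p * c for p, c in zip(profits, chosen))

# Tiny illustrative instance (made up, not from the paper's benchmarks)
profits = [10, 6, 8, 4]
weights = [[4, 2, 3, 1],    # resource 1
           [3, 3, 2, 2]]    # resource 2
capacities = [7, 6]
sol, value = greedy_01mkp(profits, weights, capacities)
print(sol, value)  # prints [1, 0, 1, 0] 18
```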
Abstract: Concrete performance is strongly affected by the
particle packing degree since it determines the distribution of the
cementitious component and the interaction of mineral particles. By
using packing theory designers will be able to select optimal
aggregate materials for preparing concrete with low cement content,
which is beneficial from a cost point of view. Optimum particle packing
implies minimizing porosity and thereby reducing the amount of
cement paste needed to fill the voids between the aggregate particles,
while also taking the rheology of the concrete into consideration. For
reaching good fluidity, superplasticizers are required. The results from
pilot tests at Luleå University of Technology (LTU) show various
forms of the proposed theoretical models, and the empirical approach
taken in the study seems to provide a safer basis for developing new,
improved packing models.
Abstract: Recently, many researchers have been attracted to
retrieving multimedia databases by using impression words and their
values. Ikezoe's research is one representative approach and uses
eight pairs of opposite impression words. We modified its retrieval
interface and proposed '2D-RIB'. In '2D-RIB', after a user selects a
single piece of music as a base, the system visually shows some other
pieces around the base one according to their relative positions. The
user can select the one that fits his/her intention as a retrieval
result. The purpose of this paper is to improve the user's
satisfaction with the retrieval result in 2D-RIB. One of our
extensions is to define and introduce the following two measures:
'melody goodness' and 'general acceptance'. We implement them in five
different combinations. According to an evaluation experiment, both
measures can contribute to the improvement. Another extension is
three types of customization. We have implemented them and clarified
which customization is effective.
Abstract: The present research was designed to investigate the
antimicrobial activity of aristolochic acid from the root of
Aristolochia bracteata. From the methanolic and ethyl acetate
extracts of Aristolochia bracteata, aristolochic acid I was isolated
and confirmed through IR, NMR and MS. The percentage purity of
aristolochic acid I was determined by UV and HPLC methods. The
antibacterial activity of the extracts of Aristolochia bracteata and
of the isolated compound was determined by the disc diffusion
method. The results revealed that the aristolochic acid isolated
from the methanolic extract was purer than the compound from the
ethyl acetate extract. The various extracts (500 μg/disc) of
Aristolochia bracteata showed moderate antibacterial activity, with
an average zone of inhibition of 7-18 mm by the disc diffusion
method. Among the extracts, the ethyl acetate and methanol extracts
showed good antimicrobial activity, and the growth of E. coli
(18 mm) was strongly inhibited. Microbial assays of the compound
(aristolochic acid I) isolated from the ethyl acetate and methanol
extracts showed good antimicrobial activity, and the zone of
inhibition of both at the higher concentration of 50 μg/ml was
similar to that of standard aristolochic acid. It may be concluded
that the isolated aristolochic acid I has good antibacterial
activity.
Abstract: Image searching has always been a problem, especially when images are not properly managed or are distributed over different locations. Currently, different techniques are used for image search. At one extreme, many features of the image are captured and stored to get better results; storing and managing such features is itself a time-consuming job. At the other extreme, if fewer features are stored, the accuracy rate is not satisfactory. The same image stored with different visual properties can further reduce the accuracy rate. In this paper, we present a new concept of using polynomials of the sorted histogram of the image. This approach needs less overhead and can cope with differences in the visual features of an image.
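The sorted-histogram idea can be sketched as follows; the bin count, polynomial degree and the least-squares fit shown here are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: describe an image by the coefficients of a polynomial fitted
# to its sorted, normalized grayscale histogram (parameters assumed).

def polyfit(xs, ys, deg):
    """Least-squares fit via normal equations + Gaussian elimination."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):               # back substitution
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs  # coeffs[i] multiplies x**i

def descriptor(pixels, bins=8, deg=2):
    """Polynomial coefficients of the sorted, normalized histogram."""
    hist = [0] * bins
    for p in pixels:                           # grayscale values 0..255
        hist[p * bins // 256] += 1
    sorted_hist = sorted((h / len(pixels) for h in hist), reverse=True)
    return polyfit(list(range(bins)), sorted_hist, deg)
```

A brightness shift that maps bins one-to-one permutes the histogram but leaves its sorted form, and hence the descriptor, unchanged, which is the kind of robustness to visual variation the approach aims at.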
Abstract: Many foreign and Lithuanian scientists analyzing the
evaluation of the tax system with respect to the tax burden agree
that the latter, in principle, depends on how many individuals and
which categories of residents constitute a household. Therefore, the
aim of this scientific research is to substantiate or deny the
significance of the household, rather than the individual resident,
as the statistical unit in the evaluation of a tax system, and
specifically in determining the value of the tax burden. The
performed research revealed that evaluating the tax system with
respect to the household, rather than the resident, as the
statistical unit allows one not only to evaluate the efficiency of
the tax system more objectively, but also to forecast the
practically existing poverty line and tax burden, and to enable
efficient decisions in the social and tax fields that shape the
environment in which households exist.
Abstract: EGOTHOR is a search engine that indexes the Web
and allows us to search Web documents. Its hit list contains the URL
and title of each hit, together with a snippet that tries to briefly
show a match. The snippet can almost always be assembled by an
algorithm that has full knowledge of the original document (mostly an
HTML page). This implies that the search engine is required to store
the full text of the documents as part of the index.
Such a requirement leads us to pick an appropriate compression
algorithm to reduce the space demand. One of the solutions
could be to use common compression methods, for instance gzip or
bzip2, but it might be preferable to develop a new method that
takes advantage of the document structure, or rather, the textual
character of the documents.
There already exist special text compression algorithms and
methods for the compression of XML documents. The aim of this
paper is an integration of the two approaches to achieve an optimal
compression ratio.
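As a baseline, the space savings from the common general-purpose methods mentioned above can be checked directly with Python's standard library; the document below is a made-up HTML fragment, not an EGOTHOR page:

```python
import bz2
import gzip
import zlib

# A small, repetitive HTML-like document (illustrative only; real web
# pages compress differently).
doc = (b"<html><body>"
       + b"<p>The quick brown fox jumps over the lazy dog.</p>" * 50
       + b"</body></html>")

for name, comp in [("gzip", gzip.compress),
                   ("bzip2", bz2.compress),
                   ("zlib", zlib.compress)]:
    ratio = len(comp(doc)) / len(doc)
    print(f"{name}: compressed to {ratio:.1%} of original size")
```

A structure-aware method, as proposed above, would aim to beat such baselines by modeling the markup and the textual character of the documents explicitly rather than treating them as opaque byte streams.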
Abstract: The aim of this paper is to express the input-output
matrix as a linear ordering problem, which is classified as an
NP-hard problem. We then use a Tabu search algorithm to find the best
permutation among sectors in the input-output matrix that will give
an optimal solution. This optimal permutation can be useful in
designing policies and strategies for economists and government in
their goal of maximizing the gross domestic product.
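The linear ordering objective and a minimal tabu search over it can be sketched as follows; the neighborhood (adjacent swaps), the tenure and the toy matrix are illustrative choices, not the paper's configuration:

```python
import random

# Linear ordering problem (LOP): find a permutation of sectors that
# maximizes the sum of matrix entries above the diagonal.

def lop_value(M, perm):
    return sum(M[perm[i]][perm[j]]
               for i in range(len(perm)) for j in range(i + 1, len(perm)))

def tabu_search(M, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    n = len(M)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_val = perm[:], lop_value(M, perm)
    tabu = {}                     # position -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n - 1):    # neighborhood: adjacent transpositions
            if tabu.get(i, -1) >= it:
                continue
            neigh = perm[:]
            neigh[i], neigh[i + 1] = neigh[i + 1], neigh[i]
            candidates.append((lop_value(M, neigh), i, neigh))
        if not candidates:
            continue              # all moves tabu; wait for tenure to expire
        val, i, neigh = max(candidates)
        perm = neigh              # accept best non-tabu move, even if worse
        tabu[i] = it + tenure
        if val > best_val:
            best, best_val = neigh[:], val
    return best, best_val

# Toy 3-sector flow matrix (made up, not an actual input-output table)
M = [[0, 5, 5],
     [1, 0, 5],
     [1, 1, 0]]
print(tabu_search(M))
```

Accepting the best non-tabu move even when it worsens the objective is what lets tabu search escape local optima that a pure greedy ordering of sectors would get stuck in.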
Abstract: In order to study the pressed pile test and the ultimate
bearing capacity characteristics of large-diameter steel pipe piles,
pressed pile tests and numerical simulations of three large-diameter
steel pipe piles, based on two high-piled wharfs of Zhanjiang Port,
are analyzed in this paper. The anchored pile method is used for the
pressed pile tests, and the Q-s curves and ultimate bearing
capacities are obtained. The three piles are then numerically
simulated with ABAQUS, and the results of the numerical simulations
and those of the field tests are comparatively analyzed. The results
show that the settlement value of the numerical simulation is larger
than that of the field test during loading, that the difference
widens as the load increases, and that the ultimate difference in
settlement is 20% to 30%.
Abstract: The aim of the present study is to analyze empirical
research on the social resources dimension of the occupational
status attainment process and relate it to the rational choice
approach. The analysis suggests that the existing data on the
strength-of-ties aspect of social resources are insufficient and do
not allow any implications concerning rational actors' behavior.
However, the results concerning the work relations aspect are more
encouraging.
Abstract: With the rapid development of the life sciences and the
flood of genomic information, the need for faster and more scalable
searching methods has become urgent. Indexing is one of the
approaches that have been investigated. Indexing methods have been
categorized into three classes: length-based index algorithms,
transformation-based algorithms and mixed techniques-based
algorithms. In this research, we focus on the transformation-based
methods. We embed the N-gram method into the transformation-based
method to build an inverted index table. We then apply parallel
methods to speed up the index building time and to reduce the
overall retrieval time when querying the genomic database. Our
experiments show that the use of the N-gram transformation
algorithm is an economical solution; it saves both time and space.
The results show that the size of the index is smaller than the
size of the dataset when the N-gram size is 5 or 6. The parallel
N-gram transformation algorithm's results indicate that the use of
parallel programming with large datasets is promising and can be
improved further.
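The N-gram inverted-index idea can be sketched as follows; the sequences and the N-gram size are illustrative, and the paper's parallel build is not reproduced:

```python
from collections import defaultdict

def build_index(sequences, n=3):
    """Map every n-gram to the set of sequence ids containing it."""
    index = defaultdict(set)
    for sid, seq in sequences.items():
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].add(sid)
    return index

def query(index, pattern, n=3):
    """Candidate sequences must contain every n-gram of the pattern."""
    grams = [pattern[i:i + n] for i in range(len(pattern) - n + 1)]
    if not grams:
        return set()
    result = index.get(grams[0], set()).copy()
    for g in grams[1:]:
        result &= index.get(g, set())
    return result  # candidates; verify with exact matching if needed

# Tiny made-up DNA fragments, not a genomic database
seqs = {"s1": "ACGTACGT", "s2": "TTGACCA", "s3": "ACGGTTA"}
idx = build_index(seqs)
print(sorted(query(idx, "ACGT")))  # prints ['s1']
```

Because each index build loop touches sequences independently, the per-sequence work can be distributed across workers and the partial tables merged, which is where the parallel speed-up reported above comes from.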
Abstract: Exclusive breastfeeding is the feeding of a baby on no milk other than breast milk. Exclusive breastfeeding during the first 6 months of life is of fundamental importance because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, in developed countries, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we study the factors that influence exclusive breastfeeding and use the generalized Poisson regression model to analyze the practices of exclusive breastfeeding in Mauritius. We develop two sets of quasi-likelihood equations (QLE) to estimate the parameters.
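For reference, a commonly used form of the generalized Poisson distribution (Consul's parameterization; the paper's exact parameterization may differ) is:

```latex
P(Y = y) \;=\; \frac{\theta\,(\theta + \lambda y)^{\,y-1}\, e^{-\theta - \lambda y}}{y!},
\qquad y = 0, 1, 2, \ldots, \quad 0 \le \lambda < 1,
```

with mean $\mathbb{E}[Y] = \theta/(1-\lambda)$ and variance $\mathrm{Var}(Y) = \theta/(1-\lambda)^3$, so a positive $\lambda$ captures the overdispersion that the ordinary Poisson model cannot.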
Abstract: This study was conducted to compare the effects of the
models of two countries, Taiwan and Singapore, using the TIMSS
database. The researchers used multi-group hierarchical linear
modeling techniques to compare the effects of the two country
models, and we tested our hypotheses on 4,046 Taiwanese students and
4,599 Singaporean students in 2007 at two levels: the class level
and the student (individual) level. Design quality is a class-level
variable. Student-level variables are achievement and
self-confidence. The results challenge the widely held view that
retention has a positive impact on self-confidence. Suggestions for
future research are discussed.
Abstract: Dengue disease is an infectious vector-borne viral
disease that is commonly found in tropical and sub-tropical regions,
especially in urban and semi-urban areas, around the world and
including Malaysia. There is no currently available vaccine or
chemotherapy for the prevention or treatment of dengue disease.
Therefore, prevention and treatment of the disease depend on vector
surveillance and control measures. Disease risk mapping has been
recognized as an important tool in the prevention and control
strategies for diseases. The choice of statistical model used for
relative risk estimation is important as a good model will
subsequently produce a good disease risk map. Therefore, the aim of
this study is to estimate the relative risk for dengue disease based
initially on the most common statistic used in disease mapping called
Standardized Morbidity Ratio (SMR) and one of the earliest
applications of Bayesian methodology called Poisson-gamma model.
This paper begins by providing a review of the SMR method, which
we then apply to dengue data of Perak, Malaysia. We then fit an
extension of the SMR method, which is the Poisson-gamma model.
Both results are displayed and compared using graphs, tables and
maps. The results of the analysis show that the latter method gives
better relative risk estimates than the SMR. The Poisson-gamma model
has been demonstrated to overcome the problem of the SMR when there
are no observed dengue cases in certain regions. However, covariate
adjustment in this model is difficult, and there is no possibility
of allowing spatial correlation between risks in adjacent areas. The
drawbacks of this model have motivated many researchers to propose
alternative methods for estimating the risk.
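The contrast between the two estimators can be sketched with made-up counts; the prior parameters below are illustrative, whereas in practice they would be estimated from the data:

```python
# SMR_i = O_i / E_i.  The Poisson-gamma model assumes
# O_i ~ Poisson(E_i * r_i) with r_i ~ Gamma(a, b), which gives the
# posterior mean (a + O_i) / (b + E_i) as a smoothed risk estimate.

def smr(observed, expected):
    return [o / e for o, e in zip(observed, expected)]

def poisson_gamma(observed, expected, a=1.0, b=1.0):
    # a, b are illustrative prior parameters, not estimated values
    return [(a + o) / (b + e) for o, e in zip(observed, expected)]

O = [0, 3, 7, 1, 12]            # observed cases (hypothetical regions)
E = [2.0, 2.5, 5.0, 1.5, 9.0]   # expected cases (hypothetical)

print(smr(O, E))
print(poisson_gamma(O, E))
```

For the region with zero observed cases the SMR collapses to exactly 0, while the Poisson-gamma posterior mean shrinks toward the prior instead, which illustrates the problem of the SMR noted above.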
Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training
datasets, one without any noise and the other with 20 percent noise,
were used to judge the performance of the different ensemble
approaches. Results with the noise-free data set suggest an
improvement of about 4% in classification accuracy with all ensemble
approaches in comparison to the results provided by the univariate
decision tree classifier. The highest classification accuracy,
87.43%, was achieved by the boosted decision tree. A comparison of
results with the noisy data set suggests that the bagging, DECORATE
and random subspace approaches work well with this data, whereas the
performance of the boosted decision tree degrades and a
classification accuracy of 79.7% is achieved, which is even lower
than that achieved (i.e. 80.02%) by the unboosted decision tree
classifier.
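A minimal pure-Python bagging sketch with decision stumps as base learners conveys the ensemble idea; the study's actual base classifier is a univariate decision tree on remote sensing data, so the stumps and toy data here are simplifications:

```python
import random

def train_stump(X, y):
    """Best single-feature threshold minimizing training error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[f] >= t else -sign for x in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda x: sign if x[f] >= t else -sign

def bagging(X, y, n_estimators=11, seed=0):
    """Train stumps on bootstrap samples; predict by majority vote."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        stumps.append(train_stump([X[i] for i in idx],
                                  [y[i] for i in idx]))
    def predict(x):
        vote = sum(s(x) for s in stumps)
        return 1 if vote >= 0 else -1
    return predict

# Toy 2-feature data, labels in {-1, +1} (made up, not land cover data)
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 2], [3, 2], [2, 3], [3, 3]]
y = [-1, -1, -1, -1, 1, 1, 1, 1]
clf = bagging(X, y)
print([clf(x) for x in X])
```

Averaging over bootstrap resamples is also why bagging tolerates label noise better than boosting: boosting reweights toward the hardest (often mislabeled) examples, which matches the degradation on the 20-percent-noise data set reported above.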