Abstract: The design of a high-rise building is more often dictated
by serviceability than by strength. Structural engineers constantly
strive to overcome the challenges of controlling lateral deflection
and storey drift, as well as the self-weight of the structure imposed
on the foundation.
One of the most effective techniques is the use of an outrigger and
belt truss system in composite structures, which can astutely address
both of these issues in high-rise construction.
This paper investigates deflection control through effective
utilisation of a belt truss and outrigger system on a 60-storey
composite building subjected to wind loads. A three-dimensional
finite element analysis is performed with one, two and three
outrigger levels. The reductions in lateral deflection are 34%, 42%
and 51% respectively, compared with a model without any outrigger
system. There is an appreciable decline in storey drift with the
introduction of these stiffer arrangements.
Abstract: Through the 1980s, management accounting researchers
described the increasing irrelevance of traditional control and
performance measurement systems. The Balanced Scorecard (BSC)
is a critical business tool for many organizations. It is a
performance measurement system which translates mission and
strategy into objectives. The strategy map approach is a development
of the BSC in which certain necessary causal relations must be
established. To recognize these relations, experts usually rely on
experience; it is also possible to utilize regression for the same
purpose. Structural Equation Modeling (SEM), one of the most
powerful methods of multivariate data analysis, obtains more
appropriate results than traditional methods such as regression. In
the present paper, we propose SEM for the first time to identify the
relations between objectives in the strategy map, together with a
test to measure the importance of these relations. In SEM, factor
analysis and hypothesis testing are carried out within a single
analysis. SEM is known to be better than other techniques at
supporting analysis and reporting. Our approach provides a framework
which permits experts to design the strategy map by applying a
comprehensive and scientific method together with their experience.
This scheme is therefore more reliable than previously established
methods.
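For illustration, the minimal sketch below shows how such causal
relations between strategy map objectives could be specified and
estimated as an SEM. It assumes the open-source semopy package and
hypothetical scores for four BSC perspectives; the paper does not name
a particular tool or model, so everything here is an assumption.

```python
# A minimal SEM sketch, assuming the semopy package and a hypothetical
# causal chain along the strategy map:
# learning & growth -> internal processes -> customer -> financial.
import pandas as pd
from semopy import Model

desc = """
internal ~ learning
customer ~ internal
financial ~ customer
"""

df = pd.read_csv("bsc_scores.csv")  # hypothetical per-period objective scores
model = Model(desc)
model.fit(df)
# Path coefficients with standard errors and p-values: each row tests
# the significance (importance) of one causal relation in the map.
print(model.inspect())
```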
Abstract: This paper presents an approach of hybridizing two or more
artificial intelligence (AI) techniques, which are used to fuzzify
the work stress level ranking and categorize the rating accordingly.
The use of two or more techniques (a hybrid approach) has been
considered in this case, as combining different techniques may
neutralize each other's weaknesses and generate a superior hybrid
solution. Recent research has shown that there is a need for more
valid and reliable tools for assessing work stress. Artificial
intelligence techniques have therefore been applied in this instance
to provide a solution to a psychological application. An overview of
the novel and autonomous interactive model for analysing work stress
that has been developed using multi-agent systems is also presented
in this paper. The establishment of the intelligent multi-agent
decision analyser (IMADA), using a hybridized technique of neural
networks and fuzzy logic within the multi-agent based framework, is
also described.
Abstract: In this paper we propose three- and two-stage still
gray-scale image compressors based on Block Truncation Coding (BTC).
In our schemes, we employ a combination of four techniques to reduce
the bit rate: quad-tree segmentation, bit plane omission, bit plane
coding using 32 visual patterns, and interpolative bit plane coding.
The experimental results show that the proposed schemes achieve an
average bit rate of 0.46 bits per pixel (bpp) for standard gray-scale
images with an average PSNR value of 30.25 dB, which is better than
the results of existing similar methods based on BTC.
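For context, here is a hedged sketch of the classic BTC step on which
such schemes build: each block is quantized to a bit plane plus two
reconstruction levels that preserve the block mean and variance. It
illustrates plain BTC only, not the paper's quad-tree or
visual-pattern extensions.

```python
import numpy as np

def btc_block(block):
    """Classic BTC on one block: emit a 1-bit-per-pixel bit plane plus
    two levels chosen to preserve the block mean and variance."""
    m = block.size
    mu, sigma = block.mean(), block.std()
    bitplane = block >= mu
    q = int(bitplane.sum())              # number of "high" pixels
    if q in (0, m):                      # flat block: one level suffices
        return bitplane, mu, mu
    low = mu - sigma * np.sqrt(q / (m - q))
    high = mu + sigma * np.sqrt((m - q) / q)
    return bitplane, low, high

def btc_reconstruct(bitplane, low, high):
    return np.where(bitplane, high, low)

block = np.random.default_rng(0).integers(0, 256, (4, 4)).astype(float)
bp, lo, hi = btc_block(block)
print(np.abs(btc_reconstruct(bp, lo, hi) - block).mean())  # per-pixel error
```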
Abstract: E-mail has become an important means of electronic
communication, but the viability of its usage is marred by
Unsolicited Bulk E-mail (UBE) messages. UBE comes in many types,
such as pornographic, virus-infected and 'cry-for-help' messages, as
well as fake and fraudulent offers for jobs, winnings and medicines.
UBE poses technical and socio-economic challenges to the usage of
e-mail. To meet this challenge and combat this menace, we need to
understand UBE. Towards this end, the current paper presents a
content-based textual analysis of more than 2700 body-enhancement
medicinal UBE messages. Technically, this is an application of text
parsing and tokenization of unstructured textual documents, which we
approach using Bag of Words (BOW) and Vector Space Document Model
techniques. We have attempted to identify the most frequently
occurring lexis in UBE documents that advertise various products for
body enhancement, and an analysis of the top 100 such lexis is
presented. We exhibit the relationship between the occurrence of a
word from the identified lexis set in a given UBE message and the
probability that the message advertises a fake medicinal product. To
the best of our knowledge and our survey of related literature, this
is the first formal attempt to identify the most frequently occurring
lexis in such UBE through textual analysis. Finally, this is a
sincere attempt to raise alertness against, and mitigate the threat
of, such luring but fake UBE.
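A hedged sketch of the tokenization and BOW counting step described
above; the corpus directory, file layout and token pattern are
illustrative assumptions.

```python
import re
from collections import Counter
from pathlib import Path

def tokenize(text):
    """Lowercase a message body and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical directory of UBE message bodies, one plain-text file each.
counts = Counter()
for path in Path("ube_corpus").glob("*.txt"):
    counts.update(tokenize(path.read_text(errors="ignore")))

# The most frequently occurring lexis across the corpus.
for word, freq in counts.most_common(100)[:10]:
    print(f"{word:15s} {freq}")
```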
Abstract: This paper presents two novel techniques for skew
estimation of binary document images. These algorithms are based on
connected component analysis and the Hough transform. Both methods
focus on reducing the amount of input data provided to the Hough
transform. In the first method, referred to as the word centroid
approach, the centroids of selected words are used for skew
detection. In the second method, referred to as the dilate-and-thin
approach, selected characters are blocked and dilated to obtain word
blocks, and thinning is then applied. The final image fed to the
Hough transform contains the thinned coordinates of the word blocks
in the image. The methods succeed in reducing the computational
complexity of Hough transform based skew estimation algorithms.
Promising experimental results are also provided to demonstrate the
effectiveness of the proposed methods.
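A hedged sketch of the core idea: once the input is reduced to a
sparse point set (e.g. word centroids), a point Hough transform votes
in (rho, theta) space and the dominant text-line direction gives the
skew. Parameter values and the demo data are illustrative
assumptions, not the paper's.

```python
import numpy as np

def estimate_skew(points, max_skew=15.0, step=0.1, rho_res=1.0):
    """Hough-transform skew estimate (degrees) from a reduced point set."""
    pts = np.asarray(points, dtype=float)
    # Line normal angles near 90 deg correspond to near-horizontal text.
    thetas = np.deg2rad(np.arange(90.0 - max_skew, 90.0 + max_skew, step))
    rho = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    idx = np.round(rho / rho_res).astype(int)
    idx -= idx.min()
    acc = np.zeros((idx.max() + 1, thetas.size), dtype=int)
    cols = np.broadcast_to(np.arange(thetas.size), idx.shape)
    np.add.at(acc, (idx, cols), 1)           # vote in (rho, theta) space
    best = thetas[acc.max(axis=0).argmax()]  # angle of strongest alignment
    return np.rad2deg(best) - 90.0

# Demo: word centroids lying on a text line skewed by 3 degrees.
xs = np.arange(0.0, 500.0, 25.0)
centroids = np.c_[xs, xs * np.tan(np.deg2rad(3.0))]
print(round(estimate_skew(centroids), 1))    # ~3.0
```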
Abstract: Load forecasting plays a paramount role in the operation and management of power systems. Accurate estimation of future power demand for various lead times facilitates the task of generating power reliably and economically. The forecasting of future loads for a relatively large lead time (months to a few years), i.e. long-term load forecasting, is studied here. Among the various techniques used in forecasting load, artificial intelligence techniques provide greater accuracy than conventional techniques. Fuzzy logic, a very robust artificial intelligence technique, is applied in this paper to forecast load on a long-term basis. The paper gives a general algorithm for long-term load forecasting. The algorithm extends a short-term load forecasting method to long-term forecasting and concentrates not only on the forecast values of load but also on the errors incorporated into the forecast. Hence, by correcting the errors in the forecast, forecasts with very high accuracy have been achieved. The algorithm is demonstrated with data collected for the residential sector (LT2(a) type load: domestic consumers). Load is determined for three consecutive years (April 2006 to March 2009) to demonstrate the efficiency of the algorithm, and forecasts are made for the next two years (April 2009 to March 2011).
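To make the fuzzy step concrete, the following is a minimal sketch of
how triangular membership functions and a small rule base could map a
past forecast error to a correction, with weighted-average
defuzzification. The universe, rules and numbers are illustrative
assumptions, not the paper's actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_error_correction(err_pct):
    """Map a past forecast error (%) to a correction (%) using three
    fuzzy sets and weighted-average defuzzification."""
    memberships = np.array([
        tri(err_pct, -20.0, -10.0, 0.0),   # negative: forecast too high
        tri(err_pct, -10.0, 0.0, 10.0),    # near zero: forecast adequate
        tri(err_pct, 0.0, 10.0, 20.0),     # positive: forecast too low
    ])
    consequents = np.array([-10.0, 0.0, 10.0])  # correction per rule
    return float(memberships @ consequents / max(memberships.sum(), 1e-9))

forecast_mw = 120.0                        # hypothetical base forecast
corrected = forecast_mw * (1 + fuzzy_error_correction(6.0) / 100)
print(round(corrected, 1))                 # 127.2: forecast raised by 6%
```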
Abstract: In the last decade, digital watermarking procedures have
been increasingly applied to implement the copyright protection of
multimedia digital content distributed on the Internet. To this end,
it is worth noting that many watermarking procedures for images and
videos proposed in the literature are based on spread
spectrum techniques. However, some scepticism about the robustness
and security of such watermarking procedures has arisen because
of some documented attacks which claim to render the inserted
watermarks undetectable. On the other hand, web content providers
wish to exploit watermarking procedures characterized by flexible and
efficient implementations which can be easily integrated into their
existing web services frameworks or platforms. This paper shows how
a simple spread spectrum watermarking procedure for MPEG-2 videos
can be modified for use in web contexts. To this end,
the proposed procedure has been made secure and robust against some
well-known and dangerous attacks. Furthermore, its basic scheme
has been optimized by making the insertion procedure adaptive with
respect to the terminals used to open the videos and the network transactions
carried out to deliver them to buyers. Finally, two different
implementations of the procedure have been developed: the former
is a high performance parallel implementation, whereas the latter is
a portable Java and XML based implementation. Thus, the paper
demonstrates that a simple spread spectrum watermarking procedure,
with limited and appropriate modifications to the embedding scheme,
can still represent a valid alternative to many other well-known and
more recent watermarking procedures proposed in the literature.
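For orientation, a hedged sketch of the basic spread spectrum idea
such procedures build on: a pseudo-random ±1 sequence is added to
perceptually significant transform coefficients, and detection
correlates the coefficient difference with the same sequence. This is
a generic additive scheme under assumed parameters, not the paper's
exact MPEG-2 procedure.

```python
import numpy as np

def embed(coeffs, key, alpha=0.5):
    """Additively embed a pseudo-random +/-1 watermark into transform
    coefficients; alpha is an illustrative strength, tuned in practice
    for imperceptibility."""
    wm = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.shape)
    return coeffs + alpha * wm, wm

def detect(received, original, wm):
    """Normalized correlation between the coefficient difference and the
    watermark; near 1 when the watermark is present, near 0 otherwise."""
    d = received - original
    return float(d @ wm / (np.linalg.norm(d) * np.linalg.norm(wm) + 1e-12))

rng = np.random.default_rng(0)
coeffs = rng.normal(0.0, 10.0, 1000)            # stand-in DCT coefficients
marked, wm = embed(coeffs, key=42)
attacked = marked + rng.normal(0.0, 0.2, 1000)  # mild transcoding noise
print(detect(attacked, coeffs, wm))             # high score: mark detected
print(detect(coeffs + rng.normal(0.0, 0.2, 1000), coeffs, wm))  # ~0: absent
```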
Abstract: In this paper, a model for an information retrieval
system is proposed which takes into account that knowledge about
documents and about users' information needs is dynamic. Two
methods are combined, one qualitative or symbolic and the other
quantitative or numeric, which are deemed suitable for many
clustering contexts, data analysis, concept exploration and
knowledge discovery. These two methods may be classified as
inductive learning techniques. In this model, they are introduced to
build "long-term" knowledge about past queries and concepts in a
collection of documents. The "long-term" knowledge can guide
and assist the user in formulating an initial query and can be
exploited in the process of retrieving relevant information. The
different kinds of knowledge are organized into different points of
view. This may be considered an enrichment of the exploration
level, coherent with the concept of document/query structure.
Abstract: The Institute of Product Development deals with the
development, design and dimensioning of micro components and systems
as a member of the Collaborative Research Centre 499 "Design,
Production and Quality Assurance of Molded Micro Components made of
Metallic and Ceramic Materials". Because of technological
restrictions in the miniaturization of conventional manufacturing
techniques, shape and material deviations cannot be scaled down in
the same proportion as the micro parts, rendering components with
relatively wide tolerance fields. Systems that include such
components should be designed with this particularity in mind, often
requiring large clearances. In the end, the output of such systems
is variable and prone to dynamic instability. To save production
time and resources, every study of these effects should happen early
in the product development process and be based on computer
simulation to avoid costly prototypes. A suitable method is proposed
here and applied, by way of example, to a micro technology
demonstrator developed by the CRC 499. It consists of a one-stage
planetary gear train in a sun-planet-ring configuration, with input
through the sun gear and output through the carrier. The simulation
procedure relies on ordinary multi-body simulation methods and
subsequently adds other techniques to further investigate details of
the system's behavior and to predict its response. The selection of
the relevant parameters and output functions followed the
engineering standards for regular-sized gear trains. The first step
is to quantify the variability and to reveal the most critical
points of the system, performed through a whole-mechanism
sensitivity analysis (SA). Owing to the lack of previous knowledge
about the system's behavior, different DOE methods involving small
and large numbers of experiments were selected to perform the SA. In
this particular case the parameter space can be divided into two
well-defined groups, one containing the gears' profile information
and the other the components' spatial locations. This has been
exploited to explore the different DOE techniques more promptly. A
reduced set of parameters is derived for further investigation and
to feed the final optimization process, whether as optimization
parameters or as an external perturbation collective. The 10 most
relevant perturbation factors and 4 to 6 prospective variable
parameters are considered in a new, simplified model. All of the
parameters are affected by the mentioned production variability. The
objective functions of interest are based on variability measures of
scalar outputs, so the problem becomes an optimization under
robustness and reliability constraints. The study represents an
initial step in the development of a method to design and optimize
complex micro mechanisms composed of widely toleranced elements,
accounting for the robustness and reliability of the systems'
output.
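As a hedged illustration of the whole-mechanism sensitivity analysis
step, the sketch below perturbs hypothetical gear train parameters
within their tolerance fields and ranks them by their effect on a
scalar output. All names, ranges and the surrogate output function
are illustrative assumptions standing in for the multi-body
simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tolerance fields (nominal value, half-width) for a few
# gear train parameters; names and ranges are illustrative assumptions.
params = {
    "sun_profile_dev_um": (0.0, 2.0),
    "planet_profile_dev_um": (0.0, 2.0),
    "planet_position_dev_um": (0.0, 5.0),
    "ring_position_dev_um": (0.0, 5.0),
}

def output(x):
    """Toy surrogate for the multi-body simulation output (e.g. a
    transmission error measure); NOT the demonstrator's real response."""
    return (0.8 * x["planet_position_dev_um"] ** 2
            + 0.5 * x["sun_profile_dev_um"]
            + 0.2 * x["planet_profile_dev_um"]
            + 0.1 * abs(x["ring_position_dev_um"]))

def sensitivity(n=2000):
    """One-at-a-time Monte Carlo SA: rank parameters by the output
    variance each induces when varied alone over its tolerance field."""
    ranking = {}
    for name, (nom, tol) in params.items():
        x = {k: v[0] for k, v in params.items()}   # all at nominal
        samples = []
        for _ in range(n):
            x[name] = nom + rng.uniform(-tol, tol)
            samples.append(output(x))
        ranking[name] = float(np.var(samples))
    return sorted(ranking.items(), key=lambda kv: -kv[1])

for name, var in sensitivity():
    print(f"{name:25s} {var:8.3f}")
```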
Abstract: The advancement of wireless technology and the wide use
of mobile devices have drawn the attention of the research and
technological communities towards wireless environments, such as
Wireless Local Area Networks (WLANs), Wireless Wide Area Networks
(WWANs), mobile systems and ad-hoc networks. Unfortunately, wired
and wireless networks differ markedly in terms of link reliability,
bandwidth, and propagation delay; by adopting new solutions for
these enhanced telecommunications, superior quality, efficiency, and
opportunities can be provided where wireless communications were
otherwise unfeasible. Some researchers define 4G as a significant
improvement on 3G, in which current cellular networks' issues will
be solved and data transfer will play a more significant role. For
others, 4G unifies cellular and wireless local area networks, and
introduces new routing techniques, efficient solutions for sharing
dedicated frequency bands, and increased mobility and bandwidth
capacity. This paper discusses the solutions and enhancements
proposed to improve the performance of the Transmission Control
Protocol (TCP) over different wireless networks, and investigates
each approach in terms of its advantages and disadvantages.
Abstract: This paper analyzes different techniques for fine-grained security of relational databases with respect to two variables: data accessibility and inference. Data accessibility measures the amount of data available to users after applying a security technique to a table. Inference is the proportion of information leakage after suppressing a cell containing secret data. A row containing a suppressed secret cell can become a security threat if an intruder generates useful information from the related visible information in the same row. This paper measures the data accessibility and inference associated with row, cell, and column level security techniques. Cell level security offers the greatest data accessibility, as it suppresses secret data only; on the other hand, there is a high probability of inference in cell level security. Row and column level security techniques have the least data accessibility and inference. This paper introduces the cell plus innocent security technique, which uses the cell level security method but additionally suppresses some innocent data to suggest to an intruder that a suppressed cell does not necessarily contain secret data. Four variations of the technique, namely cell plus innocent 1/4, cell plus innocent 2/4, cell plus innocent 3/4, and cell plus innocent 4/4, are introduced to suppress innocent data equal to 1/4, 2/4, 3/4, and 4/4 of the amount of true secret data inside the database. Results show that the new technique offers better control over data accessibility and inference than the state-of-the-art security techniques. This paper further discusses how the techniques can be used in combination, and shows that the cell plus innocent 1/4, 2/4, and 3/4 techniques can be used as a replacement for cell level security.
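A hedged sketch of the cell plus innocent idea: suppress every secret
cell, then suppress additional, randomly chosen innocent cells in
proportion to the secret count, so a suppressed cell no longer
implies secret content. The table, keys and fraction handling are
illustrative assumptions.

```python
import random

def cell_plus_innocent(table, secret, fraction, seed=0):
    """Suppress all secret cells plus `fraction` (e.g. 1/4 .. 4/4) times
    as many randomly chosen innocent cells. `table` is a list of dicts;
    `secret` is a set of (row_index, column_name) pairs."""
    rng = random.Random(seed)
    out = [dict(row) for row in table]
    innocent = [(i, col) for i, row in enumerate(out)
                for col in row if (i, col) not in secret]
    for i, col in secret:
        out[i][col] = "*"                    # plain cell level suppression
    k = round(fraction * len(secret))        # extra innocent suppressions
    for i, col in rng.sample(innocent, min(k, len(innocent))):
        out[i][col] = "*"
    return out

patients = [{"name": "A", "disease": "flu", "age": 30},
            {"name": "B", "disease": "hiv", "age": 41}]
print(cell_plus_innocent(patients, {(1, "disease")}, fraction=4/4))
```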
Abstract: Heart sound is an acoustic signal, and many techniques
used nowadays for heart sound based human recognition borrow from
speech recognition. One popular choice for feature extraction from
acoustic signals is the Mel Frequency Cepstral Coefficients (MFCC),
which map the signal onto a non-linear Mel scale that mimics human
hearing. However, the Mel scale is almost linear in the frequency
region of heart sounds and thus should produce results similar to
the standard cepstral coefficients (CC). In this paper, MFCC is
investigated to see whether it produces superior results for a PCG
based human identification system compared to CC. Results show that
the MFCC system is still superior to CC despite the near-linear
filter banks in the lower frequency range, giving up to 95% correct
recognition for MFCC and 90% for CC. Further experiments show that
the high recognition rate is due to the implementation of the filter
banks and not to Mel scaling.
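A quick check of the near-linearity claim: the standard Mel mapping
m(f) = 2595 log10(1 + f/700) deviates only slightly from its tangent
line at 0 Hz over the low-frequency band where heart sounds
concentrate (roughly up to 200 Hz, an assumed bound).

```python
import numpy as np

def hz_to_mel(f):
    """Standard Mel mapping; nearly linear for f much smaller than 700 Hz."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

f = np.arange(0.0, 201.0, 50.0)                 # assumed heart sound band
mel = hz_to_mel(f)
tangent = f * 2595.0 / (700.0 * np.log(10.0))   # linear approximation at 0 Hz
for fi, mi, li in zip(f, mel, tangent):
    print(f"{fi:5.0f} Hz  mel={mi:7.2f}  linear={li:7.2f}")
```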
Abstract: This work concerns the evolution and maintenance of an
ontological resource in relation to the evolution of the corpus of
texts from which it was built.
The knowledge forming a text corpus, especially in dynamic domains,
is in continuous evolution. When a change in the corpus occurs, the
domain ontology must evolve accordingly. Most methods manage
ontology evolution independently of the corpus from which the
ontology is built; in addition, they treat evolution merely as a
process of knowledge addition, without considering other kinds of
knowledge change. We propose a methodology for managing an evolving
ontology from a text corpus that evolves over time, while preserving
the consistency and persistence of the ontology.
Our methodology is based on the changes made to the corpus to
reflect the evolution of the considered domain (augmented surgery in
our case). In this context, the results of text mining techniques,
as well as a slightly modified version of the ARCHONTE method, are
used to support the evolution process.
Abstract: In this paper, a study on the application of optimization
and regression techniques for the optimal calculation of the partial
ratios of helical gearboxes with second-step double gear-sets, for
minimal cross-section dimension, is introduced. From the moment
equilibrium condition of a mechanical system including three gear
units and their regular resistance condition, models for calculating
the partial ratios of helical gearboxes with second-step double
gear-sets are given. In particular, explicit models for calculating
the partial ratios are derived by regression analysis. These models
allow the partial ratios to be determined accurately and simply.
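As a hedged sketch of the regression step: optimal partial ratios
obtained numerically for a range of total ratios can be fitted with
an explicit model. The power-law form and all data below are
synthetic placeholders, not the paper's models or results.

```python
import numpy as np

# Synthetic placeholders: total gearbox ratios and numerically optimized
# first-step partial ratios (NOT the paper's data).
u_total = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 40.0])
u1_opt = np.array([3.2, 4.1, 4.9, 5.6, 6.2, 7.4])

# Fit an explicit model u1 = a * u_total**b via log-log linear regression.
b, log_a = np.polyfit(np.log(u_total), np.log(u1_opt), 1)
a = np.exp(log_a)
print(f"u1 = {a:.3f} * u_total**{b:.3f}")

# The second-step partial ratio then follows from u_total = u1 * u2.
u2 = u_total / (a * u_total ** b)
print(u2.round(2))
```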
Abstract: In this paper three different approaches to person
verification and identification, i.e. by means of fingerprint, face
and voice recognition, are studied. Face recognition uses parts-based
representation methods and a manifold learning approach. The
assessment criterion is recognition accuracy. The techniques under
investigation are: a) Local Non-negative Matrix Factorization
(LNMF); b) Independent Component Analysis (ICA); c) NMF with
sparseness constraints (NMFsc); d) Locality Preserving Projections
(Laplacianfaces). Fingerprint detection was approached by classical
minutiae (small graphical patterns) matching through image
segmentation, using a structural approach and a neural network as
the decision block. For voice/speaker recognition, mel cepstral and
delta-delta mel cepstral analysis were used as the main methods in
order to construct a supervised, speaker-dependent voice recognition
system. The final decision (e.g. "accept/reject" for a verification
task) is taken by applying a majority voting technique to the three
biometrics. The preliminary results, obtained on medium-sized
databases of fingerprints, faces and voice recordings, indicate the
feasibility of our study, with an overall recognition precision of
about 92%, permitting the use of our system in a future complex
biometric card.
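A minimal sketch of the decision fusion step: each modality outputs
an accept/reject vote and the claim is accepted on a simple majority.
The modality names and votes are illustrative.

```python
def majority_vote(decisions):
    """Fuse per-modality verification decisions (True = accept) by simple
    majority; with three biometrics, two accepts suffice."""
    votes = list(decisions.values())
    return sum(votes) > len(votes) / 2

claim = {"fingerprint": True, "face": False, "voice": True}
print("accept" if majority_vote(claim) else "reject")   # accept (2 of 3)
```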
Abstract: Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies of the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive, and often with contradictory, results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods, namely principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
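For orientation, a hedged sketch of the appearance based pipeline
these methods share, using PCA (eigenfaces) as the example: learn a
linear subspace from vectorized training faces, project probe and
gallery images, and verify by thresholding a distance. Dimensions,
threshold and data are illustrative assumptions.

```python
import numpy as np

def fit_pca(X, n_components=50):
    """Learn an eigenface basis from rows of vectorized training faces."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # rows are eigenfaces

def project(x, mean, basis):
    return basis @ (x - mean)

def verify(probe, gallery, mean, basis, threshold=10.0):
    """Accept the identity claim if the subspace distance is small;
    the threshold would be tuned on a development set in practice."""
    d = np.linalg.norm(project(probe, mean, basis)
                       - project(gallery, mean, basis))
    return d < threshold

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 1024))          # stand-ins for 32x32 faces
mean, basis = fit_pca(train)
probe = train[0] + 0.01 * rng.normal(size=1024)   # same face, slight change
print(verify(probe, train[0], mean, basis))   # True: claim accepted
```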
Abstract: This article proposes a methodology for computer
numerical control (CNC) machine scoring. The case study company is a
manufacturer of hard disk drive parts in Thailand. In this company,
samples of parts manufactured on CNC machines are taken randomly for
quality inspection. These inspection data are used to make a
decision to shut down a machine if it shows a tendency to produce
parts that are out of specification. A large amount of data is
produced in this process, and data mining can be a very useful
technique for analyzing it. In this research, data mining techniques
were used to construct a machine scoring model called the 'machine
priority assessment model (MPAM)'. This model helps ensure that
machines with a higher risk of producing defective parts are
inspected before those with lower risk. If a defect-prone machine is
identified sooner, defective parts and rework can be reduced,
improving overall productivity. The results show that the proposed
method can be successfully implemented, and that approximately
351,000 baht of opportunity cost could have been saved in the case
study company.
Abstract: Machine-understandable data, when strongly interlinked,
constitutes the basis for the Semantic Web. Annotating web documents
is one of the major techniques for creating metadata on the Web.
Annotating websites defines the data they contain in a form which is
suitable for interpretation by machines. In this paper, we present a
new approach to annotating websites and documents by raising the
abstraction level of the annotation process to a conceptual level.
By this means, we hope to solve some of the problems of current
annotation solutions.
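To ground the idea, a minimal sketch of a machine-understandable
annotation of a web document in RDF, assuming the rdflib package and
the Dublin Core vocabulary; the URI and metadata values are
illustrative, and this is not the paper's conceptual-level mechanism.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, FOAF, RDF

g = Graph()
page = URIRef("http://example.org/report.html")    # hypothetical document
g.add((page, RDF.type, FOAF.Document))
g.add((page, DC.title, Literal("Quarterly Report")))
g.add((page, DC.creator, Literal("Example Corp")))

# Serialized Turtle: metadata a machine can interpret and interlink.
print(g.serialize(format="turtle"))
```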
Abstract: In this study, the transesterification of palm oil with methanol for biodiesel production was investigated using CaO–ZnO as a heterogeneous base catalyst prepared by incipient-wetness impregnation (IWI) and co-precipitation (CP) methods. The reaction parameters considered were the molar ratio of methanol to oil, the amount of catalyst, the reaction temperature, and the reaction time. The optimum conditions were found to be a 15:1 molar ratio of methanol to oil, a catalyst amount of 6 wt%, a reaction temperature of 60 °C, and a reaction time of 8 h. The effects of Ca loading, calcination temperature, and catalyst preparation method on the catalytic performance were studied. The fresh and spent catalysts were characterized by several techniques, including XRD, TPR, and XRF.