Abstract: This study systematizes the processes and methods of
wooden furniture design that achieve uniqueness in function and
aesthetics. The study was done by researching and analyzing the
factors designers must consider that affect function and production.
The results indicate that these factors are the design process
(planning for design, product specifications, concept design,
product architecture, industrial design, production), design evaluation,
and factors specific to wooden furniture design, i.e. art (art
style, furniture history, form), functionality (strength and
durability, placement, usage), material (suitability for the function,
mechanical properties of wood), joints, cost, safety, and social
responsibility. All of the aforementioned factors contribute to good
design. Drawing on direct experience gained through users' usage,
the designer must design wooden furniture systematically and
effectively. Accordingly, this study selected a dining armchair as a
case study involving all the factors and the entire design process
described here.
Abstract: Image target detection and tracking methods based on
target information such as intensity, shape model, histogram and
target dynamics have been proven robust to target model
variations and background clutter, as recent research has shown.
However, no definitive solution has been given for targets occluded
by countermeasures or by a limited field of view (FOV). In this paper,
we present a novel tracking method using filtering and computational
geometry. This paper has two central goals: 1) to deal with vulnerable
target measurements; and 2) to maintain target tracking outside the
FOV using non-target-originated information. The experimental results,
obtained with airborne images, show robust tracking ability with
respect to the existing approaches. In exploring the questions of target
tracking, this paper is limited to the consideration of airborne images.
Abstract: Recently, nanomaterials have been developed in the form of nano-films, nano-crystals and nano-pores. Lanthanide phosphates find extensive application in lasers, ceramics, sensors and phosphors, as well as in optoelectronics, medical and biological labels, solar cells and light sources. Among the different kinds of rare-earth orthophosphates, yttrium orthophosphate has been shown to be an efficient host lattice for rare-earth activator ions, which have become a research focus because of their important role in light display systems, lasers, and optoelectronic devices. It is in this context that the 4f^n ↔ 4f^(n-1)5d transitions of rare earths in insulating materials, lying in the UV and VUV, are the aim of a large number of studies, though there have been only a few reports on Eu3+, Nd3+, Pr3+, Er3+, Ce3+ and Tm3+ doped YPO4. Since the 4f^n ↔ 4f^(n-1)5d transitions of a rare earth depend on the host matrix, several matrices were used to study these transitions; in this work we propose to study a very specific class of inorganic materials, namely orthophosphates doped with rare-earth ions. This study focused on the effect of Ce3+ concentration on the structural and optical properties of Ce3+-doped YPO4 yttrium orthophosphate in powder form prepared by the sol-gel method.
Abstract: In recent years, with the rapid development of the Internet and the Web, more and more web applications have been deployed in many fields and organizations such as finance, the military, and government. Together with that, hackers have found ever more subtle ways to attack web applications. According to international statistics, SQL injection is one of the most common vulnerabilities of web applications. The consequences of this type of attack are quite dangerous: sensitive information could be stolen or authentication systems might be bypassed. To mitigate the situation, several techniques have been adopted. In this research, a security solution is proposed using an Artificial Neural Network to protect web applications against this type of attack. The solution has been tested on sample datasets and has given promising results. It has also been developed into a prototype web application firewall called ANNbWAF.
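The abstract does not describe ANNbWAF's actual architecture or features. Purely to illustrate the general idea of learning to flag injection-like query strings, here is a minimal sketch using a single perceptron (the simplest neural unit) over hand-crafted lexical features; the feature list, training examples, and training schedule are all invented for the example.

```python
# Hypothetical sketch: a one-neuron "network" over lexical features of SQL
# strings. ANNbWAF's real model and features are not given in the abstract.
FEATURES = ["'", "--", ";", " or ", " union ", "="]

def featurize(query: str):
    q = query.lower()
    return [1.0 if tok in q else 0.0 for tok in FEATURES] + [1.0]  # + bias

# tiny invented training set: (query, 1 = injection, 0 = benign)
TRAIN = [
    ("select name from users where id = 5", 0),
    ("select * from items where cat = 'books'", 0),
    ("update t set a = 1 where b = 2", 0),
    ("' or '1'='1' --", 1),
    ("1; drop table users --", 1),
    ("x' union select password from users --", 1),
]

w = [0.0] * (len(FEATURES) + 1)
for _ in range(20):                      # perceptron training epochs
    for q, y in TRAIN:
        x = featurize(q)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        if pred != y:                    # mistake-driven weight update
            for i in range(len(w)):
                w[i] += (y - pred) * x[i]

def is_injection(query: str) -> bool:
    x = featurize(query)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0
```

A real system would of course use a richer tokenization and a multi-layer network, but the pipeline shape (featurize, train, threshold) is the same.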
Abstract: Accurately predicting non-peak traffic is crucial to
daily traffic forecasting for all models. In this paper, least squares
support vector machines (LS-SVMs) are investigated to solve this
practical problem. It is the first time the approach has been applied,
and its forecasting performance analyzed, in this domain. For
comparison purposes, two parametric and two non-parametric
techniques are selected because their effectiveness has been proven
in past research. Having good generalization ability and
guaranteeing a global minimum, LS-SVMs perform better than the
others. The substantial improvement in stability and robustness
reveals that the approach is practically promising.
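For readers unfamiliar with LS-SVMs: unlike a standard SVM, whose training is a quadratic program, an LS-SVM is trained by solving a single linear KKT system. The sketch below shows that system being built and solved in plain Python; the RBF kernel width, regularization constant and toy 1D data (a sine curve, not real traffic volumes) are arbitrary choices for the example.

```python
import math

def rbf(a, c, s=1.0):
    # Gaussian (RBF) kernel
    return math.exp(-((a - c) ** 2) / (2.0 * s * s))

def solve(A, rhs):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(X, y, gamma=100.0):
    # LS-SVM regression training is one linear KKT system:
    #   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(X)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(X[i], X[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(y))
    return sol[0], sol[1:]              # bias b, support values alpha

def lssvm_predict(X, alpha, b, xq):
    return b + sum(a * rbf(xi, xq) for a, xi in zip(alpha, X))

# toy stand-in data, NOT traffic measurements
X = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y = [math.sin(v) for v in X]
b, alpha = lssvm_fit(X, y)
```

The "global minimum" claim in the abstract follows from this formulation: the solution of the linear system is unique, so there are no local optima to fall into.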
Abstract: In this paper we present a new method for over-height
vehicle detection in low-headroom streets and highways using digital
video processing. Its accuracy, its lower price compared to existing
detectors such as laser radars, and its capability of providing
extra information such as speed and height measurements make this
method more reliable and efficient. In this algorithm, features are
selected and tracked using the KLT algorithm. A blob extraction
algorithm is also applied using background estimation and
subtraction. Then the world coordinates of the features that lie inside
the blobs are estimated using a novel calibration method. Once the
heights of the features are calculated, we apply a threshold to select
over-height features and eliminate the others. The over-height features
are segmented using association criteria, grouped using an
undirected graph, and then tracked through sequential frames.
The obtained groups correspond to over-height vehicles in a scene.
Abstract: Cognitive science appeared about 40 years ago,
following the challenge of artificial intelligence, as common
territory for several scientific disciplines such as computer science,
mathematics, psychology, neurology, philosophy, sociology, and
linguistics. The newly born science was justified by the complexity
of the problems related to human knowledge on the one hand, and on
the other by the fact that none of the above-mentioned sciences could
explain mental phenomena alone. Based on data supplied by the
experimental sciences such as psychology or neurology, cognitive
science builds models of the operation of the human mind. These
models are implemented in computer programs and/or electronic
circuits (specific to artificial intelligence) – cognitive systems –
whose competences and performances are compared to human
ones, leading to the reinterpretation of psychological and neurological
data and to the construction of new models. In these processes,
psychology provides the experimental basis, while philosophy
and mathematics provide the level of abstraction utterly necessary
for the mediation between the sciences mentioned.
The general problematic of the cognitive approach
comprises two important types of approach: the computational one,
starting from the idea that mental phenomena can be reduced to
calculus operations on 1s and 0s, and the connectionist one, which
considers the products of thinking to be the result of the interaction
between all the component (included) systems. In the field of
psychology, measurements in the computational register use classical
questionnaires and psychometric tests, generally based on calculus
methods. Considering things from both sides represented in
cognitive science, we notice a gap in the possibilities of measuring
psychological products from the connectionist perspective, which
requires a unitary understanding of the quality–quantity whole. In
such an approach, measurement by calculus proves inefficient. Our
research, carried out over more than 20 years, leads to the conclusion
that measurement by forms properly fits the laws and principles of
connectionism.
Abstract: Robots' visual perception is a field that is gaining
increasing attention from researchers. This is partly due to emerging
trends in the commercial availability of 3D scanning systems and
devices that produce highly accurate information for a variety of
applications. In the history of mining, the mortality rate of mine
workers has been alarming, and robots exhibit a great deal of potential
to tackle safety issues in mines. However, an effective vision system
is crucial to safe autonomous navigation in underground terrains.
This work investigates robots' perception in underground terrains
(mines and tunnels) using the statistical region merging (SRM) model.
SRM reconstructs the main structural components of an image
by a simple but effective statistical analysis. An investigation is
conducted on different regions of the mine, such as the shaft, stope
and gallery, using publicly available mine frames together with a
stream of locally captured mine images. An investigation is also
conducted on a stream of underground tunnel image frames, using the
Xbox Kinect 3D sensor. The Kinect sensor produces streams of red,
green and blue (RGB) and depth images at 640 x 480 resolution and
30 frames per second. Integrating the depth information into the
drivability analysis gives a strong cue, yielding 3D results that
augment the drivable and non-drivable regions detected in 2D. The
results of the 2D and 3D experiments on different terrains, mines and
tunnels, together with the qualitative and quantitative evaluation,
reveal that good drivable regions can be detected in dynamic
underground terrains.
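For intuition about how SRM-style segmentation "reconstructs the main structural components of an image by a simple but effective statistical analysis", here is a minimal sketch: pixel pairs are visited in order of increasing intensity difference and regions are merged when their statistics are close. The merge predicate below is a plain threshold on region means rather than SRM's exact statistical bound, and the toy grayscale grid is invented for the example.

```python
# Hypothetical SRM-flavored sketch on a tiny invented grayscale image:
# union-find regions, merged in order of increasing pixel-pair difference.
W, H = 4, 4
img = [[10, 12, 200, 202],      # dark left half, bright right half
       [11, 10, 201, 199],
       [12, 11, 198, 200],
       [10, 13, 202, 201]]

parent = list(range(W * H))
size = [1] * (W * H)
total = [float(img[i // W][i % W]) for i in range(W * H)]  # intensity sums

def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def union(a, b, q=32.0):
    ra, rb = find(a), find(b)
    if ra == rb:
        return
    # simplified predicate: merge only if region means are within q
    if abs(total[ra] / size[ra] - total[rb] / size[rb]) < q:
        parent[rb] = ra
        size[ra] += size[rb]
        total[ra] += total[rb]

# 4-neighbor pixel pairs, sorted by intensity difference as SRM prescribes
pairs = []
for yy in range(H):
    for xx in range(W):
        i = yy * W + xx
        if xx + 1 < W:
            pairs.append((abs(img[yy][xx] - img[yy][xx + 1]), i, i + 1))
        if yy + 1 < H:
            pairs.append((abs(img[yy][xx] - img[yy + 1][xx]), i, i + W))
for _, a, b in sorted(pairs):
    union(a, b)

regions = {find(i) for i in range(W * H)}   # two regions remain
```

In the paper's setting, the resulting regions (here, the two halves of the toy image) would correspond to candidate drivable versus non-drivable surfaces, refined by the Kinect depth channel.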
Abstract: The ever-growing use of aspect-oriented
development methodology in the field of software engineering
requires tool support for both research environments and industry. So
far, tool support for many activities in aspect-oriented software
development has been proposed, to automate and facilitate
development. For instance, AJaTS provides a transformation
system to support aspect-oriented development and refactoring. In
particular, it is well established that the abstract interpretation of
programs, in any paradigm, pursued in static analysis is best served
by a high-level program representation, such as the Control Flow
Graph (CFG). With such a representation, the analysis can more
easily locate common programmatic idioms for which helpful
transformations are already known, and the association between the
input program and the intermediate representation can be more
closely maintained. However, although current research defines good
concepts and foundations, to some extent, for control flow analysis of
aspect-oriented programs, it does not provide a concrete tool that can
construct the CFG of these programs on its own. Furthermore, most of
these works focus on addressing other issues in Aspect-Oriented
Software Development (AOSD), such as testing or data flow
analysis, rather than on the CFG itself. Therefore, this study is
dedicated to building an aspect-oriented control flow graph
construction tool called AJcFgraph Builder. The tool can be applied in
many software engineering tasks in the context of AOSD such as
software testing, software metrics, and so forth.
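For readers new to CFGs: a control flow graph has one node per statement (or basic block) and an edge for each possible transfer of control. The sketch below builds one for a tiny imaginary instruction list; it is not AJcFgraph Builder's actual representation, which the abstract does not describe, and it ignores aspect-specific constructs such as advice weaving.

```python
# Hypothetical sketch: CFG successors for a toy straight-line/branch program.
# Each instruction falls through to the next unless it is a jump; a branch
# has both a fall-through edge and a taken edge.
program = [
    ("assign", None),   # 0
    ("branch", 3),      # 1: conditional, may jump to 3 or fall through to 2
    ("assign", None),   # 2
    ("jump", 0),        # 3: unconditional jump back to 0 (a loop)
]

def build_cfg(prog):
    succ = {i: [] for i in range(len(prog))}
    for i, (op, target) in enumerate(prog):
        if op == "jump":
            succ[i].append(target)          # only the taken edge
        else:
            if i + 1 < len(prog):
                succ[i].append(i + 1)       # fall-through edge
            if op == "branch":
                succ[i].append(target)      # taken edge
    return succ

cfg = build_cfg(program)   # {0: [1], 1: [2, 3], 2: [3], 3: [0]}
```

Analyses such as testing coverage or data flow, mentioned in the abstract, are then fixed-point computations over exactly this successor map.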
Abstract: This work has been carried out in order to provide an understanding of the physical behavior of the flow, and of the variation of pressure and temperature, in a vortex tube. A computational fluid dynamics model is used to predict the flow fields and the associated temperature separation within a Ranque–Hilsch vortex tube. The CFD model is a steady axisymmetric model (with swirl) that utilizes the standard k-ε turbulence model. Second-order numerical schemes were used to carry out all the computations. A vortex tube with a circumferential inlet stream, an axial (cold) outlet stream and a circumferential (hot) outlet stream was considered. Performance curves (temperature separation versus cold outlet mass fraction) were obtained for a specific vortex tube with a given inlet mass flow rate. Simulations have been carried out for varying cold outlet mass flow rates. The model results are in good agreement with experimental data.
Abstract: Instead of traditional (nominal) classification, we investigate
the subject of ordinal classification, or ranking. An enhanced
method based on an ensemble of Support Vector Machines (SVMs)
is proposed. Each binary classifier is trained with specific weights
for each object in the training data set. Experiments on benchmark
datasets and synthetic data indicate that the performance of our
approach is comparable to state-of-the-art kernel methods for
ordinal regression. The ensemble method, which is straightforward
to implement, provides a very good sensitivity–specificity trade-off
for the highest and lowest ranks.
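The abstract does not spell out the decomposition it uses, so as an illustration of the general ensemble idea for ordinal classification, here is the common "is rank greater than k?" decomposition on an invented 1D dataset. Simple perceptrons stand in for the paper's SVMs, and the per-object weights mentioned in the abstract are omitted for brevity.

```python
# Hypothetical sketch: ordinal classification via one binary classifier per
# rank threshold ("rank > k"); final rank = 1 + number of classifiers firing.
TRAIN = [(0.0, 1), (0.5, 1), (1.0, 1),     # invented (feature, rank) pairs
         (2.0, 2), (2.5, 2), (3.0, 2),
         (4.0, 3), (4.5, 3), (5.0, 3)]
RANKS = [1, 2, 3]

def train_binary(data):
    # perceptron on (x, bias) for labels in {0, 1}
    w = [0.0, 0.0]
    for _ in range(200):
        mistakes = 0
        for x, y in data:
            pred = 1 if w[0] * x + w[1] > 0 else 0
            if pred != y:
                w[0] += (y - pred) * x
                w[1] += (y - pred)
                mistakes += 1
        if mistakes == 0:        # converged on the training data
            break
    return w

# one binary subproblem per threshold k: label 1 iff rank > k
models = [train_binary([(x, 1 if r > k else 0) for x, r in TRAIN])
          for k in RANKS[:-1]]

def predict_rank(x):
    return 1 + sum(1 for w in models if w[0] * x + w[1] > 0)
```

With SVMs in place of the perceptrons and per-object weights in the loss, this is the shape of ensemble the abstract describes.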
Abstract: Ant colony algorithms have been applied to difficult
combinatorial optimization problems such as the travelling salesman
problem and the quadratic assignment problem. In this paper, grid-based
and random-based ant colony algorithms are proposed for
automatic 3D hose routing, and their pros and cons are discussed. The
algorithm uses the tessellated format for the obstacles and the
generated hoses in order to detect collisions. Representing the
obstacles and hoses in the tessellated format greatly helps the
algorithm handle free-form objects and speeds up
computation. The performance of the algorithm has been tested on a
number of 3D models.
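The paper's tessellation-based 3D collision handling is beyond a short sketch, but the core ant colony routing loop can be illustrated on a 2D grid with blocked cells. Everything below (grid size, obstacle wall, pheromone constants, ant counts) is invented for the example.

```python
# Hypothetical sketch: grid-based ant colony routing around an obstacle.
import random

random.seed(42)                         # fixed seed so the sketch is repeatable

N = 5                                   # N x N grid of cells
OBSTACLES = {(2, 1), (2, 2), (2, 3)}    # a wall with gaps at top and bottom
START, GOAL = (0, 2), (4, 2)
EVAPORATION, Q = 0.5, 10.0

pheromone = {}                          # edge (cell, cell) -> level, default 1.0

def neighbors(cell):
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < N and 0 <= ny < N and (nx, ny) not in OBSTACLES:
            yield (nx, ny)

def walk():
    """One ant walks from START toward GOAL without revisiting cells."""
    path, visited = [START], {START}
    while path[-1] != GOAL:
        options = [c for c in neighbors(path[-1]) if c not in visited]
        if not options:
            return None                 # dead end: abandon this ant
        # choice probability ~ pheromone x inverse-distance heuristic
        weights = [pheromone.get((path[-1], c), 1.0)
                   / (1.0 + abs(c[0] - GOAL[0]) + abs(c[1] - GOAL[1]))
                   for c in options]
        nxt = random.choices(options, weights=weights)[0]
        path.append(nxt)
        visited.add(nxt)
    return path

best = None
for _ in range(50):                     # colony iterations
    paths = [p for p in (walk() for _ in range(20)) if p]   # 20 ants
    for p in paths:
        if best is None or len(p) < len(best):
            best = p
    for e in pheromone:                 # evaporation
        pheromone[e] *= EVAPORATION
    for p in paths:                     # shorter paths deposit more pheromone
        for e in zip(p, p[1:]):
            pheromone[e] = pheromone.get(e, 1.0) + Q / len(p)
```

In the paper's setting, the grid cells become free-space samples in 3D and the obstacle test becomes a collision check against the tessellated obstacle and hose geometry.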
Abstract: The rapid pace of technological advancement and the
consequential widening digital divide have resulted in the
marginalization of the disabled, especially the communication-challenged.
The dearth of suitable technologies for the development
of assistive technologies has served to further marginalize the
communication-challenged user population and widen this chasm
even further. The varying levels of disability also carry an
associated requirement for customized solutions. This paper
explains the use of Software Development Kits (SDKs) for
bridging this communication divide: industry-popular
communications SDKs are employed to identify the requirements
of communication-challenged users as well as appropriate
frameworks for future development initiatives.
Abstract: Partial combustion of biomass in a gasifier generates producer gas that can be used for heating purposes and as a supplementary or sole fuel in internal combustion engines. In this study, virgin biomass obtained from hingan shell is used as the feedstock of a gasifier to generate producer gas. The gasifier-engine system is operated on diesel and on esters of hingan vegetable oil in liquid fuel mode, and then on a combination of liquid fuel and producer gas in dual fuel mode. The performance and emission characteristics of the CI engine are analyzed by running the engine in liquid fuel mode and in dual fuel mode at different load conditions, with respect to maximum diesel savings in dual fuel mode. It was observed that the specific energy consumption in dual fuel mode is on the higher side at all load conditions. The brake thermal efficiency of the engine using diesel or hingan oil methyl ester (HOME) is higher than that in dual fuel mode. A diesel replacement to the tune of 60% in dual fuel mode is possible with the use of hingan shell producer gas. Emission parameters such as CO, HC, NOx, CO2 and smoke are higher in dual fuel mode than in liquid fuel mode.
Abstract: Designing, implementing, and debugging concurrency
control algorithms in a real system is a complex, tedious, and
error-prone process. Further, understanding concurrency control
algorithms and distributed computations is itself a difficult task.
Visualization can help with both of these problems. Thus, we have
developed an exploratory environment in which people can prototype
and test various versions of concurrency control algorithms, study
and debug distributed computations, and view performance statistics
of distributed systems. In this paper, we describe the exploratory
environment and show how it can be used to explore concurrency
control algorithms for the interactive steering of distributed
computations.
Abstract: In the semiconductor manufacturing process, large
amounts of data are collected from various sensors of multiple
facilities. The collected data from sensors have several different characteristics
due to variables such as types of products, former processes
and recipes. In general, Statistical Quality Control (SQC) methods
assume normality of the data to detect out-of-control states of
processes. Because the collected data have different characteristics,
using them directly as inputs to SQC increases the variation of the
data, requires wide control limits, and decreases out-of-control
detection performance. Therefore, it is necessary to separate similar
data groups from the mixed data for more accurate process control. In
this paper, we propose a regression tree using a split algorithm based
on the Pearson distribution system to handle non-normal distributions
in a parametric method. The regression tree finds similar properties
of data across the different variables. Experiments using real
semiconductor manufacturing process data show improved fault
detection performance.
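The paper's Pearson-distribution split criterion is not reproduced in the abstract. To illustrate the basic mechanics of a regression-tree split that such a criterion plugs into, here is a minimal exhaustive split search using the generic CART-style sum-of-squared-errors score; the data and thresholds are invented, and the paper's method would replace the SSE score with one based on the Pearson distribution system to cope with non-normality.

```python
# Hypothetical sketch: exhaustive single-split search for a regression tree.
def sse(vals):
    # sum of squared errors around the mean
    if not vals:
        return 0.0
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def best_split(x, y):
    """Find the threshold on x that minimizes total SSE of the two children."""
    order = sorted(set(x))
    best_t, best_score = None, float("inf")
    for lo, hi in zip(order, order[1:]):
        t = (lo + hi) / 2.0                       # candidate threshold
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = sse(left) + sse(right)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# invented sensor readings with an obvious regime change between x = 5 and 6
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [0.1, 0.0, 0.2, 0.1, 0.0, 9.9, 10.1, 10.0, 9.8, 10.2]
t, score = best_split(x, y)     # t lands at 5.5, separating the two regimes
```

Applied recursively, such splits separate the mixed sensor data into similar groups, which is exactly the preprocessing the abstract argues SQC needs.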
Abstract: The aim of this article is to assess the existing
business models used by the banks operating in the CEE countries in
the period from 2006 to 2011.
In order to obtain research results, the authors performed a
qualitative analysis of the scientific literature on bank business
models, which have been grouped into clusters consisting of such
components as: 1) capital and reserves; 2) assets; 3) deposits; and 4)
loans.
In turn, the bank business models have been developed based on
the types of the banks' core activities, and have been divided into
four groups: Wholesale, Investment, Retail and Universal banks.
Descriptive statistics have been used to analyse the models,
determining the mean, minimal and maximal values of the constituent
cluster components, as well as the standard deviation. The analysis of
the data is based on such bank performance indicators as Return on
Assets (ROA) and Return on Equity (ROE).
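The descriptive statistics named above are straightforward to compute; as a minimal illustration (with invented cluster values, not the paper's CEE bank data), they reduce to:

```python
# Hypothetical sketch: per-cluster descriptive statistics as in the abstract.
import statistics

# invented example: share of deposits in total liabilities for one cluster
deposits_share = [0.42, 0.55, 0.61, 0.48, 0.59, 0.51]

summary = {
    "mean": statistics.mean(deposits_share),
    "min": min(deposits_share),
    "max": max(deposits_share),
    "stdev": statistics.stdev(deposits_share),  # sample standard deviation
}
```

The same four numbers would be reported for each cluster component (capital and reserves, assets, deposits, loans) in each business model group.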
Abstract: This research studied a comparison of inspectors' performance between regular and complex visual inspection tasks. The visual task was simulated on a DVD read control circuit, and the inspection task was performed using a computer. Subjects were 10 undergraduates, randomly selected and tested for 20/20 vision. The subjects were then divided into two groups: five for the regular inspection task (control group) and five for the complex inspection task (treatment group). Results showed that the performance of regular and complex inspectors differed significantly at the 0.05 level. Inspectors on the regular inspection task detected a high percentage of defects in the same time as those on the complex inspection task. This indicates that inspector performance is affected by the visual inspection task.
Abstract: An extensive number of engineering drawings are referred to during the planning process, and changes to them produce a good engineering design that meets the demands of producing a new model. The advantage of reusing engineering designs is that it allows continuous product development, further improving the quality of product development and thus reducing development costs. However, retrieving existing engineering drawings is time-consuming, a complex process, and prone to errors. An engineering drawing file searching system is proposed to solve this problem. It is essential for engineers and designers to have some medium that enables them to search for drawings in the most effective way. This paper lays out the proposed research project in the area of information extraction from engineering drawings.
Abstract: A recent neuro-spiking coding scheme for feature extraction from biosonar echoes of various plants is examined with a variety of stochastic classifiers. The derived feature vectors are employed in well-known stochastic classifiers, including nearest-neighbor, single Gaussian and Gaussian mixture models with EM optimization. Classifier performance is evaluated using cross-validation and bootstrapping techniques. It is shown that the various classifiers perform equivalently and that the modified preprocessing configuration yields considerably improved results.