Abstract: Many studies have addressed collision detection between real and virtual objects in 3D space. In general, these techniques require substantial computing power, so much of the related research relies on cloud, network, and distributed computing. For this reason, this paper proposes a novel, fast 3D collision detection algorithm between real and virtual objects that uses 2D intersection areas. The proposed algorithm uses four cameras and a coarse-and-fine method to improve the accuracy and speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes from all camera views; the result of this step is the set of indices of virtual sensors that may be in collision in 3D space. To decide collisions accurately, the fine step performs collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can benefit many research and application fields such as HCI, augmented reality, and intelligent spaces.
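As an illustrative sketch of the coarse step only (the silhouette representation, sensor layout, and camera count below are assumptions, not details from the paper), a virtual sensor survives the coarse test when its projected silhouette overlaps the real object's silhouette in every camera view:

```python
def coarse_collision_candidates(real_views, sensor_views):
    """Coarse step: a virtual sensor is a collision candidate only if its
    projected silhouette overlaps the real object's silhouette in EVERY view.

    real_views   : list of sets of (x, y) silhouette pixels, one per camera
    sensor_views : dict  sensor_id -> list of pixel sets (same camera order)
    """
    candidates = []
    for sensor_id, views in sensor_views.items():
        # set intersection per view; all() enforces agreement across cameras
        if all(real & virt for real, virt in zip(real_views, views)):
            candidates.append(sensor_id)
    return candidates
```

Only the candidates returned here would then be passed to the expensive visual-hull test of the fine step.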
Abstract: Preprocessing of speech signals is considered a crucial step in the development of a robust and efficient speech or speaker recognition system. In this paper, we present some popular statistical outlier-detection based strategies to segregate the silence/unvoiced part of the speech signal from the voiced portion. The proposed methods are based on the 3σ edit rule and the Hampel identifier, which are compared with the conventional techniques: (i) short-time energy (STE) based methods, and (ii) distribution based methods. The results obtained after applying the proposed strategies to some test voice signals are encouraging.
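As a rough illustration of the two outlier rules named above (not the authors' exact implementation; the window of per-frame short-time energies and the threshold factor are illustrative), both rules flag frames whose energy deviates "too far" from a location estimate, but the Hampel identifier uses the outlier-robust median and MAD instead of the mean and standard deviation:

```python
import statistics

def three_sigma_voiced(energies, k=3.0):
    """3-sigma edit rule: flag frames whose energy deviates from the
    mean by more than k standard deviations."""
    mu = statistics.mean(energies)
    sd = statistics.stdev(energies)
    return [abs(e - mu) > k * sd for e in energies]

def hampel_voiced(energies, k=3.0):
    """Hampel identifier: replace mean/std with the robust median and
    MAD (scaled so it estimates sigma for Gaussian data)."""
    med = statistics.median(energies)
    mad = statistics.median(abs(e - med) for e in energies)
    sigma = 1.4826 * mad  # MAD -> sigma under normality
    return [abs(e - med) > k * sigma for e in energies]
```

On data with a single large outlier, the 3σ rule can miss it because the outlier inflates the mean and standard deviation (the classic masking effect), while the Hampel identifier still catches it.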
Abstract: Breast cancer detection techniques have been reported
to aid radiologists in analyzing mammograms. We note that most
techniques are performed on uncompressed digital mammograms.
Mammogram images are huge in size necessitating the use of
compression to reduce storage/transmission requirements. In this
paper, we present an algorithm for the detection of
microcalcifications in the JPEG2000 domain. The algorithm is based
on the statistical properties of the wavelet transform that the
JPEG2000 coder employs. Simulations were carried out at
different compression ratios. The sensitivity of this algorithm ranges
from 92% with a false positive rate of 4.7 down to 66% with a false
positive rate of 2.1 using lossless compression and lossy compression
at a compression ratio of 100:1, respectively.
Abstract: In this paper, we propose an improved 3D star skeleton
technique, a skeletonization suited to human posture representation
that reflects the 3D information of the posture. Moreover, the
proposed technique is simple enough to be performed in real time.
Existing skeleton construction techniques, such as distance
transformation, Voronoi diagrams, and thinning, focus on the
precision of the skeleton and are therefore not applicable to
real-time posture recognition, since they are computationally
expensive and highly susceptible to boundary noise. Although the 2D
star skeleton was proposed to address these problems, it is limited
in its ability to describe the 3D information of a posture. To
represent human posture effectively, the constructed skeleton should
incorporate that 3D information. The proposed 3D star skeleton
captures 3D data of the human body and targets human action and
posture recognition. It uses eight projection maps that hold 2D
silhouette information and depth data of the human surface, and it
extracts extremal points as the features of the 3D star skeleton
without searching the whole object boundary. Consequently, our 3D
star skeleton is faster to compute than a "greedy" 3D star skeleton
that uses all boundary points on the surface. Moreover, our method
yields a more accurate posture skeleton than the existing star
skeleton, since the 3D data of the object are taken into account.
Additionally, we build a codebook, a collection of representative 3D
star skeletons for seven postures, to recognize the posture of a
constructed skeleton.
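The star-skeleton idea underlying the abstract can be sketched in 2D (the 3D variant applies the same principle to the eight projection maps; the boundary ordering and neighbourhood window below are illustrative assumptions): extremal points are local maxima of the distance from the shape centroid to the ordered boundary points.

```python
import math

def star_skeleton_extremes(boundary, window=1):
    """Star-skeleton sketch: extremal points are strict local maxima of
    the centroid-to-boundary distance along the closed contour.

    boundary : ordered list of (x, y) points along the closed contour
    """
    cx = sum(x for x, _ in boundary) / len(boundary)
    cy = sum(y for _, y in boundary) / len(boundary)
    d = [math.hypot(x - cx, y - cy) for x, y in boundary]
    n = len(d)
    extremes = []
    for i in range(n):
        # wrap around the contour when collecting neighbours
        neigh = [d[(i + k) % n] for k in range(-window, window + 1) if k]
        if d[i] > max(neigh):
            extremes.append(boundary[i])
    return extremes
```

Because only the distance profile is scanned, no exhaustive search over the whole boundary neighbourhood structure is needed, which is the source of the speed advantage the abstract claims.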
Abstract: This study was set to determine the antimicrobial
activities of brine salting, chlorinated solution, and oil frying
treatments on enteric bacteria and fungi in Rastrineobola argentea
fish from fish landing beaches within L. Victoria basin of western
Kenya. Statistical differences in the effectiveness of the different
treatment methods were determined by single-factor ANOVA, and a
paired two-tailed t-test was performed to compare the differences in
moisture contents before and after storage. Oil-fried fish recorded the
lowest microbial loads, sodium chloride at 10% concentration was
the second most effective, and chlorinated solution, even at 150 ppm,
was the least effective against the bacteria and fungi in fish. Moisture
contents of the control and treated fish were significantly lower after
storage. These results show that oil frying of fish should be adopted
for processing and preserving Rastrineobola argentea which is the
most abundant and affordable fish species from Lake Victoria.
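The two tests named above can be computed from first principles; the numbers in the check below are made-up illustrative values, not the study's data:

```python
import statistics

def one_way_anova_F(*groups):
    """Single-factor ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = statistics.mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - statistics.mean(g)) ** 2 for v in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def paired_t(before, after):
    """Paired t statistic on the per-sample differences
    (two-tailed significance is then read from the t distribution
    with n - 1 degrees of freedom)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / n ** 0.5)
```

In the study's setting, the groups would be the microbial loads under each treatment, and the before/after vectors the moisture contents of the same fish samples around storage.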
Abstract: Spectrum is a scarce commodity, and the spectrum scarcity faced by wireless service providers has led to high congestion levels. Because all networks share a common pool of channels, exhausting the available channels forces networks to block services. Researchers have found that cognitive radio (CR) technology may resolve this spectrum scarcity. A CR is a self-configuring entity in a wireless network that senses its environment, tracks changes, and frequently exchanges information with its network. However, cognitive radio networks (CRNs) face challenges, and conditions worsen while tracking changes, i.e., when reallocating other under-utilized channels as a primary network user arrives. In this paper, a channel (resource) reallocation technique for CRNs based on a DNA-inspired computing algorithm is proposed.
Abstract: In this paper, we present a new method for
incorporating global shift invariance in support vector machines.
Unlike other approaches, which incorporate a feature extraction stage,
we first scale the image and then classify it using the modified
support vector machine classifier. Shift invariance is achieved by
replacing dot products between patterns used by the SVM classifier
with the maximum cross-correlation value between them. Unlike the
normal approach, in which the patterns are treated as vectors, in our
approach the patterns are treated as matrices (or images). Cross-correlation
is computed by using computationally efficient
techniques such as the fast Fourier transform. The method has been
tested on the ORL face database. The tests indicate that this method
can improve the recognition rate of an SVM classifier.
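The kernel substitution described above can be sketched in 1D (the paper works on 2D image patterns; the circular-shift convention below is an assumption): the SVM's dot product between two patterns is replaced by their maximum cross-correlation value.

```python
def max_cross_correlation(a, b):
    """Maximum cross-correlation between two 1-D patterns over all
    circular shifts. Written in the direct O(n^2) form for clarity;
    the same values are obtained in O(n log n) via the FFT."""
    n = len(a)
    return max(
        sum(a[i] * b[(i + s) % n] for i in range(n))  # correlation at shift s
        for s in range(n)
    )
```

Note that taking a maximum over shifts does not in general yield a positive semi-definite kernel, so this is a heuristic similarity rather than a Mercer kernel; the abstract reports that it nonetheless improves recognition on the ORL face database.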
Abstract: The automatic discrimination of seismic signals is an important practical goal for earth-science observatories due to the large amount of information that they receive continuously. An essential discrimination task is to allocate the incoming signal to a group associated with the kind of physical phenomena producing it. In this paper, we present new techniques for seismic signal classification: local, regional, and global discrimination. These techniques were tested on seismic signals from the database of the National Geophysical Institute of the Centre National pour la Recherche Scientifique et Technique (Morocco) using the Moroccan software for seismic signal analysis.
Abstract: Today’s technology is heavily dependent on web applications, which users are adopting at a very rapid pace and which have made our work more efficient. They include webmail, online retail, online gaming, wikis, train and flight departure/arrival boards, and many more, and they are developed in languages such as PHP, Python, C#, and ASP.NET, together with client-side technologies such as HTML and JavaScript. Attackers develop tools and techniques to exploit web applications and legitimate websites, which has driven the rise of web application security; this can be broadly classified into declarative security and program security. The most common attacks on applications are SQL injection and XSS, which give access to unauthorized users who may damage or destroy the system. This paper presents a detailed literature description and analysis of web application security, examples of attacks, and steps to mitigate the vulnerabilities.
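One standard mitigation for the SQL injection attack mentioned above is the parameterized query; the sketch below uses Python's built-in sqlite3 module, and the table, user names, and payload are hypothetical illustrations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query.
# rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Mitigated: a parameterized query treats the payload as a plain value.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # the payload matches no user, so no rows come back
```

The driver sends the value separately from the SQL text, so the quote characters in the payload never reach the parser as syntax.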
Abstract: The self-organizing map (SOM) is a well-known data
reduction technique used in data mining. It can reveal structure in
data sets through data visualization that is otherwise hard to detect
from raw data alone. However, interpretation through visual
inspection is prone to errors and can be very tedious. There are
several techniques for the automatic detection of clusters of code
vectors found by SOM, but they generally do not take into account
the distribution of code vectors; this may lead to unsatisfactory
clustering and poor definition of cluster boundaries, particularly
where the density of data points is low. In this paper, we propose the
use of an adaptive heuristic particle swarm optimization (PSO)
algorithm for finding cluster boundaries directly from the code
vectors obtained from SOM. The application of our method to
several standard data sets demonstrates its feasibility. The PSO algorithm
utilizes the so-called U-matrix of the SOM to determine cluster boundaries;
the results of this novel automatic method compare very favorably to
boundary detection with traditional algorithms, namely k-means and
hierarchical clustering, which are normally used to interpret
the output of a SOM.
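The U-matrix the PSO operates on can be sketched as follows (a minimal version assuming a rectangular map with 4-neighbour connectivity; SOM implementations vary in the neighbourhood they use): each map unit gets the average distance from its code vector to its neighbours' code vectors, so high values mark cluster boundaries.

```python
def u_matrix(codebook):
    """U-matrix of a 2-D SOM: for each map unit, the average Euclidean
    distance from its code vector to those of its 4-neighbours.

    codebook : 2-D grid (list of lists) of code vectors (tuples)
    """
    rows, cols = len(codebook), len(codebook[0])

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    um = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neigh = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            ds = [dist(codebook[i][j], codebook[p][q])
                  for p, q in neigh if 0 <= p < rows and 0 <= q < cols]
            um[i][j] = sum(ds) / len(ds)  # border units average fewer terms
    return um
```

Ridges of large values in this matrix are what the particles then trace as cluster boundaries.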
Abstract: Laser Metal Deposition (LMD) is an additive manufacturing process whose capabilities include producing new
parts directly from a 3-Dimensional Computer Aided Design (3D CAD)
model, building new features onto existing components, and repairing existing high-value component parts that would have
been discarded in the past. Despite these capabilities and its advantages over other additive manufacturing techniques, the
underlying physics of the LMD process is not yet fully understood, probably because of the strong interaction between the processing
parameters; studying many parameters at the same time makes the
process even harder to understand. In this study, the effects of laser power
and powder flow rate on the physical properties (deposition height and
deposition width), metallurgical property (microstructure), and
mechanical property (microhardness) of the laser-deposited, most
widely used aerospace alloy, Ti6Al4V, are studied. Also, because Ti6Al4V
is very expensive and LMD is capable of reducing the buy-to-fly ratio
of aerospace parts, the material utilization efficiency is also studied.
Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW, powder flow
rates of 2.88 g/min and 5.67 g/min, and keeping the gas flow rate and
scanning speed constant at 2 l/min and 0.005 m/s, respectively. The
deposition height and width are found to increase with increasing laser
power and powder flow rate. Material utilization is favoured by higher power, while a higher powder flow rate reduces
it. The results are presented and fully discussed.
Abstract: Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, testing set, and learning rate are key components: the collection of input/output patterns used to train the network, the patterns used to assess the network's performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural network classifier that performs cross-validation for the original neural network, in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather.symbolic, weather, and labor-neg-data. It is shown that, compared with the existing neural network, training is more than 10 times faster when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than that of the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
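The cross-validation wrapper described above can be sketched as a fold generator (the classifier's training loop itself is not shown; the fold count and striding scheme are illustrative choices):

```python
def k_fold_splits(n_samples, k):
    """Split indices 0..n_samples-1 into k folds by striding and yield
    (train_indices, test_indices) pairs, one per fold."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test
```

Each pattern serves as test data exactly once across the k rounds, which is what lets accuracy ('percent correct') be estimated without a separate held-out set.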
Abstract: The problem of mapping tasks onto a computational grid with the aim of minimizing power consumption and makespan, subject to deadline and architectural constraints, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game-theoretic technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
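The bargaining concept named above (not the paper's scheduler itself) can be illustrated on a toy finite set of utility vectors: the Nash Bargaining Solution picks, among outcomes that dominate the disagreement point, the one maximizing the product of the players' gains.

```python
def nash_bargaining_point(feasible, disagreement):
    """Nash Bargaining Solution over a finite feasible set: among the
    utility vectors strictly dominating the disagreement point, return
    the one maximizing the product of gains (the Nash product)."""
    d = disagreement
    candidates = [u for u in feasible
                  if all(ui > di for ui, di in zip(u, d))]

    def nash_product(u):
        p = 1.0
        for ui, di in zip(u, d):
            p *= (ui - di)
        return p

    return max(candidates, key=nash_product)
```

In the grid-mapping setting the "players" would be objectives such as power and makespan; the feasible set, disagreement point, and utilities here are hypothetical placeholders.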
Abstract: This research studies the types of products and
services that employ 'ambient media' and its techniques in their
advertising materials. Data were collected by analyzing a total of 62 advertisements that used an ambient media
approach in Thailand during the years 2004 to 2011. The 62 advertisements were qualifying advertisements of the Adman Awards
& Symposium under the category of Outdoor & Ambience. The analysis
reveals a total of 14 products and services that
chose to utilize ambient media in their advertising. Among the ambient media techniques, 'intrusion', which exploits the value of a medium in
its representation of content, is used most often. Following intrusion is 'interaction', where consumers are invited to participate and interact
with the advertising materials. 'Illusion' ranks third in its ability to subject viewers to distortions of reality that blur the division
between reality and fantasy.
Abstract: In this paper, a new maximum power point tracking
algorithm for photovoltaic arrays is proposed. The algorithm detects
the maximum power point of the PV. The computed maximum
power is used as a reference value (set point) of the control system.
An ON/OFF power controller with a hysteresis band is used to control the
operation of a buck chopper such that the PV module always
operates at the maximum power computed by the MPPT algorithm.
The major difference between the proposed algorithm and other
techniques is that the proposed algorithm directly controls
the power drawn from the PV.
The proposed MPPT has several advantages: simplicity, high
convergence speed, and independence from the PV array characteristics. The
algorithm is tested under various operating conditions. The obtained
results prove that the MPP is tracked even under sudden
changes in irradiation level.
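The hysteresis-band ON/OFF control step can be sketched as follows; the maximum-power set point p_ref is assumed to come from the MPPT algorithm, and the switch polarity (ON raises the drawn power) is an illustrative assumption, not a detail from the paper:

```python
def hysteresis_onoff(p_measured, p_ref, band, switch_on):
    """ON/OFF power control with a hysteresis band around the MPPT set
    point: switch the chopper on when power falls below p_ref - band,
    off when it rises above p_ref + band, and otherwise hold the
    previous switch state (this hold is what limits chattering)."""
    if p_measured < p_ref - band:
        return True
    if p_measured > p_ref + band:
        return False
    return switch_on
```

Because the controller acts on power directly, a sudden irradiation change simply moves p_ref and the same rule keeps tracking it.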
Abstract: Design should be viewed concurrently in three ways:
as transformation, flow, and value generation. An innovative approach
to solving design-related problems is integrated
product-process design. As a foundation for a formal framework
consisting of organizing principles and techniques, Work Structuring
has been developed to guide integration efforts that enhance
the development of operation and process design in alignment with
product design.
Vietnamese construction projects face many delays and cost
overruns caused mostly by design-related problems. Better design
management that integrates product and process design could resolve
these problems. A questionnaire survey and in-depth interviews
were used to investigate the feasibility of applying Work Structuring
to construction projects in Vietnam.
The purpose of this paper is to present the research results and to
illustrate the possible problems and potential solutions when Work
Structuring is implemented in construction projects in Vietnam.
Abstract: An auscultation sound includes various sounds
generated in the chest. The Adaptive Noise Canceller (ANC) is a
useful technique for biomedical signals, but it is not suitable
for auscultation sounds, because the ANC needs two input channels,
a primary signal and a reference signal, whereas a stethoscope
provides just one input sound. Therefore, in this paper we
propose a Single Input ANC (SIANC) for suppressing breath
sound in a cardiac auscultation sound. For the SIANC, we
propose a reference generation system comprising a heart
sound detector, a controller, and a reference generator. Experiments
and comparisons confirmed that the proposed SIANC is
efficient for heart sound enhancement and is independent of
heartbeat variations.
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified
as a CIM metamodel level mapping to a highly expressive subset
of DLs capable of capturing all the semantics of the models. The
paper shows how the proposed mapping provides CIM diagrams with
precise semantics and can be used for automatic reasoning about the
management information models, as a design aid, by means of new-generation
CASE tools, thanks to the use of state-of-the-art automatic
reasoning systems that support the proposed logic and use algorithms
that are sound and complete with respect to the semantics. Such a
CASE tool framework has been developed by the authors and its
architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: Human-computer interaction has progressed
considerably beyond the traditional modes of interaction. Vision-based
interfaces are a revolutionary technology, allowing interaction
through human actions and gestures. Researchers have developed
numerous accurate techniques; however, with a few exceptions,
these techniques have not been evaluated using standard HCI methods. In
this paper we present a comprehensive framework to address this
issue. Our evaluation of a computer vision application shows that, in
addition to accuracy, it is vital to address human factors.
Abstract: The stability of a software system is one of the most
important quality attributes affecting the maintenance effort. Many
techniques have been proposed to support the analysis of software
stability at the architecture, file, and class level of software systems,
but little effort has been made at the feature (i.e., method and
attribute) level. Moreover, the assumptions on which the existing
techniques are based often do not hold in practice. Considering
this, we present in this paper a novel metric, Stability of Software
(SoS), to measure the stability of object-oriented software systems
through simulation-based change propagation analysis in software
dependency networks at the feature level. The approach is evaluated
through case studies on eight open-source Java programs, comparing
different software structures (one employing design patterns versus
one that does not) for the same object-oriented program. The results of the
case studies validate the effectiveness of the proposed metric. The
approach has been fully automated by a tool written in Java.
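The abstract does not give the SoS formula, so the sketch below is only a toy rendering of the general idea it names, change propagation simulated over a feature-level dependency network; the propagation probability, trial count, and the "1 minus average affected fraction" score are all assumptions for illustration.

```python
import random

def stability_of_software(deps, trials=1000, p=0.3, seed=42):
    """Toy change-propagation simulation: repeatedly change a random
    feature, let the change spread to its dependents with probability p,
    and score stability as 1 - (average fraction of features affected).

    deps : dict  feature -> list of features it depends on
    """
    rng = random.Random(seed)
    features = list(deps)
    # reverse the edges: impacted_by[f] = features that depend on f
    impacted_by = {f: [g for g in deps if f in deps[g]] for f in deps}
    total = 0.0
    for _ in range(trials):
        start = rng.choice(features)
        affected, frontier = {start}, [start]
        while frontier:
            f = frontier.pop()
            for g in impacted_by[f]:
                if g not in affected and rng.random() < p:
                    affected.add(g)
                    frontier.append(g)
        total += len(affected) / len(features)
    return 1 - total / trials
```

A system with no dependencies scores highest (only the changed feature is ever affected), while densely coupled structures score lower, which matches the intuition that the metric is meant to capture.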