Abstract: Distant-talking voice-based HCI systems suffer from
performance degradation due to the mismatch between the acoustic
speech observed at runtime and the acoustic model built at training. The mismatch is
caused by the change in the power of the speech signal as observed at
the microphones. This change is greatly influenced by the change in
distance, affecting speech dynamics inside the room before reaching
the microphones. Moreover, as the speech signal is reflected, its
acoustical characteristic is also altered by the room properties. In
general, power mismatch due to distance is a complex problem. This
paper presents a novel approach to dealing with distance-induced
mismatch by intelligently sensing instantaneous voice power variation
and compensating model parameters. First, the distant-talking speech
signal is processed through microphone array processing, and the
corresponding distance information is extracted. Distance-sensitive
Gaussian Mixture Models (GMMs), pre-trained to capture both
speech power and room properties, are used to predict the optimal
distance of the speech source. Pre-computed statistical priors
corresponding to that optimal distance are then selected to correct
the statistics of the generic model, which was frozen during training.
The combined model is thus post-conditioned to match the power
of the instantaneous speech acoustics at runtime, which results in an
improved likelihood of predicting the correct speech command at
farther distances. We experiment using real data recorded inside two
rooms. Experimental evaluation shows that voice recognition with our
method is more robust to changes in distance than the conventional
approach. In our experiment, under the most
acoustically challenging environment (i.e., Room 2: 2.5 meters), our
method achieved 24.2% improvement in recognition performance
against the best-performing conventional method.
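To make the distance-selection step concrete, the sketch below scores an observed feature vector against per-distance GMMs and returns the distance whose model gives the highest likelihood. The diagonal-covariance form, the feature layout, and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of vector x under a diagonal-covariance GMM."""
    log_probs = []
    for w, mu, var in zip(weights, means, variances):
        # log of one diagonal Gaussian component, weighted by its mixture weight
        log_det = np.sum(np.log(2.0 * np.pi * var))
        maha = np.sum((x - mu) ** 2 / var)
        log_probs.append(np.log(w) - 0.5 * (log_det + maha))
    return np.logaddexp.reduce(log_probs)   # log-sum-exp over components

def select_distance(x, distance_gmms):
    """Return the distance whose pre-trained GMM best explains feature vector x."""
    return max(distance_gmms,
               key=lambda d: gmm_log_likelihood(x, *distance_gmms[d]))
```

The chosen distance would then index the pre-computed statistical priors used to correct the generic model.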
Abstract: In this work we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG Audio Layer 3 files that operates directly in the compressed-data domain, manipulating the time and subband/channel domains. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in time comparable to the real playing time of the original audio file, depending on the MPEG compression, while the end user/audience experiences no artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM-data domain to compression/recompression attacks, as it places the watermark in the scale-factors domain and not in the digitized audio data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, lies in the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
Abstract: Statistical process control (SPC) is one of the most powerful tools developed to assist in effective quality control; it involves collecting, organizing, and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality by monitoring production, detecting deviations in the parameters that represent the process, and thereby reducing the amount of off-specification product and the costs of production. The study also conducted a technological forecast in order to characterize current research related to SPC. The survey covered the Espacenet and WIPO databases and the National Institute of Industrial Property (INPI). The United States accounts for the largest share of depositors, together with deposits filed via the PCT, and classification section F was the most frequently represented.
Abstract: In a good green building design project, designers should consider not only energy consumption but also the health and comfort needs of the inhabitants. In recent years, the Taiwanese government has paid attention to both carbon reduction and indoor air quality, as reflected in the legislation of building codes and other regulations. Taiwan is located in a hot and humid climate, where dampness in buildings leads to significant microbial pollution and building damage; the high temperature and humidity thus present a serious indoor air quality issue. The interactions between vapor transfer and energy fluxes are essential for the whole-building Heat, Air and Moisture (HAM) response. However, a simulation tool with short calculation time, proper accuracy, and a usable interface is needed for practical building design processes. In this research, we consider the vapor transfer of building materials as well as the temperature, humidity, and energy consumption of a building space. The simulation is based on the Effective Moisture Penetration Depth (EMPD) method and was performed in EnergyPlus, a simulation tool developed by the DOE, to model the indoor moisture variation in a one-zone residential unit; the EMPD method is well suited to practical building design processes.
Abstract: In the recent past, the Unified Modeling Language (UML) has become the de facto industry standard for object-oriented modeling of software systems. The syntactically and semantically rich UML has encouraged industry to develop several supporting tools, including some capable of generating deployable code from UML models. As a consequence, ensuring the correctness of the model/design has become a challenging and extremely important task. In this paper, we present an approach for the automatic verification of protocol models/designs. As a case study, a Session Initiation Protocol (SIP) design is verified for the property “the CALLER will not converse with the CALLEE before the connection is established between them". The SIP is modeled using UML statechart diagrams, and the desired properties are expressed in temporal logic. Our prototype verifier “UML-SMV" is used to carry out the verification. When we subjected an erroneous SIP model to UML-SMV, the verifier successfully detected the error (in 76.26 ms) and generated the error trace.
Abstract: This research attempts to study the feasibility of
augmenting an augmented reality (AR) image card on a Quick
Response (QR) code. The authors have developed a new visual tag,
which contains a QR code and an augmented AR image card. The new
visual tag has features of reading both of the revealed data of the QR
code and the instant data from the AR image card. Furthermore, a
handheld communicating device is used to read and decode the new
visual tag, and then the concealed data of the new visual tag can be
revealed and read through its visual display. In general, the QR code is
designed to store the corresponding data or, as a key, to access the
corresponding data from a server through the Internet. The data
revealed from the QR code are represented as text. Normally, the AR
image card is designed to store the corresponding data in
3-dimensional or animation/video forms. By exploiting the QR code's
high fault tolerance, the new visual tag can access these
two different types of data using a handheld communicating device.
The new visual tag has the advantage of carrying much more data than an
independent QR code or AR image card. The major findings of this
research are: 1) the most efficient area for the AR image card
superimposed on the QR code is 9% of the total area of the new
visual tag, and 2) the best location for the AR image card
superimposed on the QR code is the bottom-right corner of the new
visual tag.
Abstract: Copolymerization of ethylene with 1-hexene was
carried out using two ansa-fluorenyl titanium derivative complexes.
The effects of the substituents on the catalytic activity, monomer
reactivity ratios, and polymer properties were investigated. It was
found that the presence of t-Bu groups on the fluorenyl ring gave
remarkable catalytic activity and produced polymer with high
molecular weight. These catalysts produced polymer with a narrow
molecular weight distribution, which is characteristic of single-site
metallocene catalysts. Based on 13C NMR, we observed that the
monomer reactivity ratios were affected by the catalyst structure. The
rH values of complex 2 were lower than those of complex 1, which
might result from higher steric hindrance reducing the 1-hexene
insertion step.
Abstract: The commercial finite element program LS-DYNA was employed to evaluate the response and energy-absorbing capacity of cylindrical metal tubes that are externally wrapped with composite. The effects of composite wall thickness, loading conditions, and fiber ply orientation were examined. The results demonstrate that a wrapped composite can be used effectively to enhance the crushing characteristics and energy-absorbing capacity of the tubes. Increasing the thickness of the composite increases the mean force and the specific energy absorption under both static and dynamic crushing. The ply pattern affects the energy absorption capacity and the failure mode of the metal tube, and the composite material properties are also significant in determining energy absorption efficiency.
Abstract: This paper deals with the synthesis of a fuzzy controller
applied to a permanent magnet synchronous machine (PMSM) with
guaranteed H∞ performance. To design this fuzzy controller, the
nonlinear model of the PMSM is first approximated by a Takagi-Sugeno
fuzzy model (T-S fuzzy model), and the so-called parallel
distributed compensation (PDC) is then employed. Next, we derive the
H∞ norm property, which is cast in terms of linear matrix
inequalities (LMIs) while minimizing the H∞ norm of the transfer
function T_ev between the disturbance and the error. Simulation and
experimental results on a permanent magnet synchronous machine
illustrate the effects of the fuzzy modelling and of the controller
design via the PDC.
Abstract: Electrophysiological signals were recorded from primary cultures of dissociated rat cortical neurons coupled to Micro-Electrode Arrays (MEAs). The neuronal discharge patterns may change under varying physiological and pathological conditions. For this reason, we developed a new burst detection method able to identify bursts with peculiar features under different experimental conditions (i.e., spontaneous activity and under the effect of specific drugs). The main feature of our algorithm (Burst On Hurst), which is based on the self-similarity, or fractal, property of the recorded signal, is its independence from the chosen spike detection method, since it works directly on the raw data.
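The self-similarity property the algorithm relies on can be illustrated with a plain rescaled-range (R/S) estimate of the Hurst exponent. This is a generic textbook estimator offered only as a sketch of the kind of fractal measure involved; it is not the Burst On Hurst method itself, and the chunk sizes are arbitrary assumptions.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of signal x by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()               # range of the deviation
            s = chunk.std()                         # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(size)
            rs_vals.append(np.mean(rs))
        size *= 2
    # the slope of log(R/S) versus log(chunk size) estimates the Hurst exponent
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h
```

Uncorrelated noise gives an exponent near 0.5, while strongly persistent signals (such as a random walk) score markedly higher, which is the contrast a fractal-based burst detector can exploit.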
Abstract: Ontologies are widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even when an ontology schema is well prepared by domain experts, it is tedious and cost-intensive to add instances to the ontology. The most confident and trustworthy way to add instances is to gather them from tables in related Web pages. In automatic instance population, the primary task is to find the most appropriate concept among all possible concepts in the ontology for a given table. This paper proposes a novel method for this problem that defines the similarity between a table and a concept using the overlap of their properties. In a series of experiments, the proposed method achieved 76.98% accuracy. This implies that the proposed method is a plausible way to automatically populate ontologies from Web tables.
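The property-overlap similarity can be sketched as a Jaccard-style overlap between a table's column headers and a concept's property names. The paper's exact similarity definition is not reproduced here, so this formulation, and the toy ontology in the usage note, are assumptions.

```python
def property_overlap(table_headers, concept_properties):
    """Jaccard-style overlap between a table's column headers and a concept's properties."""
    t = {h.strip().lower() for h in table_headers}
    c = {p.strip().lower() for p in concept_properties}
    if not t or not c:
        return 0.0
    return len(t & c) / len(t | c)

def best_concept(table_headers, ontology):
    """Pick the ontology concept whose property set best matches the table headers."""
    return max(ontology,
               key=lambda name: property_overlap(table_headers, ontology[name]))
```

For example, a table with headers Title/Author/ISBN would map to a hypothetical Book concept with properties {title, author, isbn} rather than to a Person concept with {name, birthdate, nationality}.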
Abstract: The aim of this paper is to study the efficacy of
chitosan nanoparticles in stimulating specific antibodies against
A/H1N1 influenza antigen in mice. Chitosan nanoparticles (CSN)
were characterized by TEM. The results showed that the average size
of the CSN was from 80 nm to 106 nm. The loading efficiency of
A/H1N1 influenza antigen on the surface of the CSN was from
93.75% to 100%. The safety of the vaccine was also tested. At 10
days post-vaccination, the groups receiving A/H1N1 influenza antigen
loaded on 30 kDa and 300 kDa CSN showed an immune response rate
in mice of 100% (9/9), higher than with Al(OH)3 and the other
adjuvants. All mice in all experimental groups had an immune
response by 20 days post-vaccination. The results also showed that
the HI titer of the group using 300 kDa CSN as an adjuvant increased
significantly, up to 3971 HIU, over three-fold higher than with the
Al(OH)3 adjuvant or chitosan (CS), and one hundred-fold higher than
with the A/H1N1 antigen alone. The stability of the vaccine
formulation was also investigated.
Abstract: Training neural networks to capture an intrinsic
property of a large volume of high-dimensional data is a difficult
task, as the training process is computationally expensive. Input
attributes should be carefully selected to keep the dimensionality of
the input vectors relatively small.
Technical indexes commonly used for stock market prediction
with neural networks are investigated to determine their effectiveness
as inputs. A feedforward neural network trained with the
Levenberg-Marquardt algorithm is applied to perform one-step-ahead
forecasting of NASDAQ and Dow stock prices.
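As an illustration of preparing technical indexes as network inputs, the sketch below builds a small feature matrix from a price series, pairing two commonly used indicators (simple moving average and momentum) with next-day targets. The indicator choice, window sizes, and names are assumptions, and the Levenberg-Marquardt training step itself is omitted.

```python
import numpy as np

def make_features(prices, sma_window=5, mom_lag=3):
    """Build (X, y) pairs for one-step-ahead forecasting from a price series.

    Each row of X holds two technical indicators for day t (simple moving
    average and momentum); the target y is the price on day t + 1.
    """
    prices = np.asarray(prices, dtype=float)
    start = max(sma_window, mom_lag)        # first day with full history
    X, y = [], []
    for t in range(start, len(prices) - 1):
        sma = prices[t - sma_window + 1:t + 1].mean()   # smoothing indicator
        mom = prices[t] - prices[t - mom_lag]           # momentum indicator
        X.append([sma, mom])
        y.append(prices[t + 1])                         # one step ahead
    return np.array(X), np.array(y)
```

Keeping the feature vector this small is exactly the dimensionality-control point made above: the network sees a handful of informative indicators rather than raw high-dimensional history.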
Abstract: Weather disasters have recently been frequent in Taiwan,
causing loss of life and property. Excessive concentration of
population and a lack of integrated planning have left the Taiwanese
coastal zone facing the impacts of climate change directly. Compared
to many countries that have already set up legislation, competent
authorities, and national adaptation strategies, Taiwan's ability to
adapt its coastal management to climate change is still insufficient. Therefore, it
is necessary to establish a complete institutional arrangement for
coastal management due to climate change in order to protect
environment and sustain socio-economic development. This paper
first reviews the impact of climate change on the Taiwanese coastal
zone. Secondly, the development of the Taiwanese institutional
arrangement for coastal management is introduced. This is followed by
an analysis of the institutional arrangement along four dimensions:
legal basis, competent authority, scientific and financial support, and
international cooperation. The results show that the Taiwanese
government should: 1) integrate the climate change issue into the
Coastal Act, the Wetland Act, and the Territorial Planning Act, and
pass them; 2) establish a high-level competent authority for coastal
management; 3) set up a climate change disaster coordination
platform; 4) link scientific information with decision makers; 5)
establish a climate change adjustment fund; 6) participate actively in
international climate change organizations and meetings; and 7)
cooperate with neighboring countries to exchange experiences.
Abstract: Soil stabilization has been widely used to improve
soil strength and durability or to prevent erosion and dust generation.
Generally, additional materials are used to reduce the problems of
clayey soils in engineering work and to stabilize them. The most
common materials are lime, fly ash, and cement. Although these
materials improve soil properties, their use is limited in some cases
by cost and the need for special equipment. One of the best methods
for stabilizing clayey soils is neutralizing the clay particles, for
which ion exchange materials can be used. An ion exchange solution
such as CBR Plus can be used for soil stabilization. One of the most
important issues in using CBR Plus is determining the amount of
solution required for soils with different properties. In this study, a
laboratory experiment is conducted to evaluate the ion exchange
capacity of three soils with various plasticity indexes (PI) in order to
determine the amount of CBR Plus solution required for soil
stabilization.
Abstract: In this paper, a new efficient method for load balancing in low-voltage distribution systems is presented. The proposed method introduces an improved Leap-frog method for optimization. The proposed objective function includes the difference between the three phase currents, as well as two other terms that enforce the integer property of the variables, the latter being the status of the connection of loads to the different phases. A new algorithm is then supplemented to enforce integer values for the load connection status. Finally, the method is applied to different parts of the Tabriz low-voltage network, where the results show the good performance of the proposed method.
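The integer assignment problem behind the objective can be illustrated with a simple greedy stand-in: assign each load to the currently least-loaded phase so the three phase currents end up close together. This is not the paper's improved Leap-frog method, only a sketch of the phase-balancing goal with an integer-valued connection status.

```python
def balance_phases(loads):
    """Greedy three-phase balancing sketch: assign each load (in amperes)
    to the currently least-loaded phase; the assignment is integer-valued
    (phase index 0, 1, or 2 per load)."""
    phase_current = [0.0, 0.0, 0.0]
    assignment = []
    # place the largest loads first so later small loads even out the phases
    for idx, amps in sorted(enumerate(loads), key=lambda p: -p[1]):
        phase = phase_current.index(min(phase_current))
        phase_current[phase] += amps
        assignment.append((idx, phase))
    assignment.sort()                      # restore original load order
    return [ph for _, ph in assignment], phase_current
```

A real solver would instead minimize the current-difference objective directly while the extra penalty terms keep the connection-status variables at integer values.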
Abstract: In this paper, we introduce a new method for elliptical
object identification. The proposed method adopts a hybrid scheme
which consists of the eigenvalues of covariance matrices, the Circular
Hough Transform, and Bresenham's raster scan algorithm. In this
approach we use the fact that the large and small eigenvalues of the
covariance matrices are associated with the major and minor
axial lengths of the ellipse. The centre location of the ellipse can be
identified using the circular Hough transform (CHT). A sparse matrix
technique is used to perform the CHT: since sparse matrices squeeze
out zero elements and contain only a small number of nonzero
elements, they save matrix storage space and computational time.
A neighborhood suppression scheme is used to find the valid Hough
peaks. The accurate position of the circumference pixels is identified
using a raster scan algorithm that exploits geometrical symmetry.
This method does not require the evaluation of tangents or the
curvature of edge contours, which are generally very sensitive to
noise. The proposed method has the advantages of
small storage, high speed and accuracy in identifying the feature. The
new method has been tested on both synthetic and real images.
Several experiments have been conducted on various images with
considerable background noise to reveal the efficacy and robustness.
Experimental results about the accuracy of the proposed method,
comparisons with the Hough transform and its variants, and other
tangent-based methods are reported.
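The eigenvalue observation can be sketched directly: for edge pixels sampled uniformly in angle on an ellipse with semi-axes a and b, the coordinate variances are a²/2 and b²/2 in the axis-aligned frame, and rotation leaves the covariance eigenvalues unchanged. The function name and the uniform-in-angle sampling assumption are illustrative, not taken from the paper.

```python
import numpy as np

def ellipse_axes(edge_points):
    """Estimate the major and minor semi-axis lengths of an ellipse from the
    eigenvalues of the covariance matrix of its edge pixels (assumed to be
    sampled roughly uniformly in angle)."""
    pts = np.asarray(edge_points, dtype=float)
    cov = np.cov(pts.T)                            # 2x2 covariance of (x, y)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # variances along the principal axes are a^2/2 and b^2/2, so invert that
    return np.sqrt(2.0 * eigvals[0]), np.sqrt(2.0 * eigvals[1])
```

Because the eigenvalues are rotation-invariant, the estimate works for ellipses at any orientation, which is what makes the large/small-eigenvalue association useful before the CHT refines the centre.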
Abstract: The draw solute separation process in Forward
Osmosis desalination was simulated in Aspen Plus chemical process
modeling software, to estimate the energy consumption and compare
it with other desalination processes, mainly the Reverse Osmosis
process, which is currently the most prevalent. The electrolyte
chemistry for the system was retrieved using the ELECNRTL property
method in the Aspen Plus database. The electrical equivalent of the energy required
in the Forward Osmosis desalination technique was estimated and
compared with the prevalent desalination techniques.
Abstract: In this paper we present a new method for coin
identification. The proposed method adopts a hybrid scheme using
Eigenvalues of covariance matrix, Circular Hough Transform (CHT)
and Bresenham's circle algorithm. The statistical and geometrical
properties of the small and large Eigenvalues of the covariance
matrix of a set of edge pixels over a connected region of support are
explored for the purpose of circular object detection. A sparse matrix
technique is used to perform the CHT. Since sparse matrices squeeze
out zero elements and contain only a small number of non-zero
elements, they save matrix storage space and computational time. A
neighborhood suppression scheme is used to find the valid
Hough peaks. The accurate position of the circumference pixels is
identified using Raster scan algorithm which uses geometrical
symmetry property. After finding circular objects, the proposed
method uses the texture on the surface of the coins, described by
textons, which refer to the fundamental micro-structures in generic
natural images and are unique properties of coins. This method has been tested on
several real world images including coin and non-coin images. The
performance is also evaluated based on the noise withstanding
capability.
Abstract: An efficient parallel form in a digital signal processor can improve algorithm performance. The butterfly structure plays an important role in the fast Fourier transform (FFT), because its symmetric form is suitable for hardware implementation. Although it provides a symmetric structure, performance is reduced by the data-dependent flow characteristic. Even though recent research on novel memory reference reduction methods (NMRRM) for the FFT focuses on reducing memory references to the twiddle factors, the data-dependent property still exists. In this paper, we propose a parallel-computing approach for FFT implementation on a digital signal processor (DSP) that is based on a data-independent property and still keeps the low-memory-reference property. The proposed method combines the final two steps of the NMRRM FFT to form a novel data-independent structure, and it is very suitable for multi-operation-unit digital signal processors and dual-core systems. We have applied the proposed low-memory-reference radix-2 FFT method on a TI TMS320C64x DSP. Experimental results show that the method reduces clock cycles by 33.8% compared with the NMRRM FFT implementation while keeping the low-memory-reference property.
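As background, a minimal iterative radix-2 decimation-in-time FFT illustrates the butterfly structure and the stage-to-stage data dependence that the paper restructures; this is a textbook sketch, not the NMRRM variant or the proposed data-independent method.

```python
import cmath

def fft_radix2(x):
    """Iterative radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    bits = n.bit_length() - 1
    # bit-reversal permutation orders the data for in-place butterflies
    out = [x[int(bin(i)[2:].zfill(bits)[::-1], 2)] for i in range(n)]
    size = 2
    while size <= n:
        half = size // 2
        step = cmath.exp(-2j * cmath.pi / size)   # twiddle-factor increment
        for start in range(0, n, size):
            w = 1.0 + 0j
            for k in range(half):
                a = out[start + k]
                b = out[start + k + half] * w
                out[start + k] = a + b            # butterfly sum
                out[start + k + half] = a - b     # butterfly difference
                w *= step
        size *= 2
    return out
```

Each stage consumes the outputs of the previous one, which is exactly the data-dependent flow that limits parallelism on a multi-operation-unit DSP.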