Abstract: Electronic commerce is growing rapidly, with on-line sales already heading for hundreds of billions of dollars per year. Due to the huge amount of money transferred every day, an increased level of security is required. In this work we present the architecture of an intelligent speaker verification system, which is able to accurately verify the registered users of an e-commerce service using only their voices as input. According to the proposed architecture, a transaction-based e-commerce application should be complemented by a biometric server where each customer's unique set of speech models (voiceprint) is stored. The verification procedure asks the user to pronounce a personalized sequence of digits; speech is captured and voice features are extracted at the client side, then sent to the biometric server. The biometric server uses pattern recognition to decide whether the received features match the stored voiceprint of the customer the user claims to be, and grants verification accordingly. The proposed architecture can provide e-commerce applications with a higher degree of certainty regarding the identity of a customer, and prevent impostors from executing fraudulent transactions.
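The abstract does not fix a particular matching method; as a minimal illustration, the sketch below assumes the stored voiceprint and the client-side features can be compared as fixed-length vectors, with verification granted when their cosine similarity exceeds a threshold (the vectors, function name, and threshold value are all hypothetical):

```python
import numpy as np

def verify_speaker(features, voiceprint, threshold=0.85):
    """Grant verification if the cosine similarity between the features
    captured at the client side and the stored voiceprint exceeds a
    fixed threshold (the threshold value here is illustrative)."""
    sim = np.dot(features, voiceprint) / (
        np.linalg.norm(features) * np.linalg.norm(voiceprint))
    return sim >= threshold

# Hypothetical enrolled voiceprint and a matching / non-matching probe.
voiceprint = np.array([0.9, 0.1, 0.4])
genuine = voiceprint + 0.01              # nearly identical features
impostor = np.array([-0.5, 0.8, 0.1])    # dissimilar features
print(verify_speaker(genuine, voiceprint))   # similar -> accepted
print(verify_speaker(impostor, voiceprint))  # dissimilar -> rejected
```

In a real deployment the comparison would be made against statistical speech models rather than a single vector, but the accept/reject decision against a stored reference follows the same pattern.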
Abstract: Active research is underway on virtual touch screens
that complement the physical limitations of conventional touch
screens. This paper discusses a virtual touch screen that uses a
multi-layer perceptron to recognize and control three-dimensional
(3D) depth information from a time-of-flight (TOF) camera. This
system extracts an object's area from the input image and compares it
with the trajectory of the object, which is learned in advance, to
recognize gestures. The system enables the maneuvering of content in
virtual space by utilizing human actions.
Abstract: Transient stability is an important issue in power system planning, operation, and extension. The objective of transient stability analysis is not satisfied by mere detection or evaluation of transient instability; it is most important to complement it with fast and efficient control measures in order to ensure system security. This paper presents a new Fuzzy Support Vector Machine (FSVM) approach to investigate the stability status of power systems, and a modified generation rescheduling scheme to bring the identified unstable cases back to a more economical and stable operating point. FSVM improves the traditional Support Vector Machine (SVM) by adding a fuzzy membership to each training sample to indicate the degree to which the sample belongs to different classes. The preventive control, based on economic generator rescheduling, avoids instability of the power system with minimum change in operating cost under disturbed conditions. Numerical results on the New England 39-bus test system show the effectiveness of the proposed method.
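The abstract does not give the paper's exact membership function; a common choice in the FSVM literature assigns each training sample a membership based on its distance to its own class centre, so that outliers influence the SVM training less. A sketch under that assumption (the `delta` smoothing constant and the toy data are illustrative):

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1.0):
    """Assign each training sample a fuzzy membership in (0, 1]:
    samples far from their own class mean (likely outliers) get a lower
    membership and hence less weight in SVM training. This
    distance-to-class-centre rule is one common choice; the paper's
    exact membership function may differ."""
    s = np.empty(len(y), dtype=float)
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        s[idx] = 1.0 - d / (d.max() + delta)
    return s

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],   # class 0 (last is an outlier)
              [3.0, 3.0], [3.1, 3.0]])              # class 1
y = np.array([0, 0, 0, 1, 1])
s = fuzzy_memberships(X, y)
print(s)  # the outlier in class 0 receives the smallest membership
```

These memberships would then scale each sample's penalty term in the SVM objective, so that misclassifying a low-membership (likely noisy) sample costs less than misclassifying a confident one.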
Abstract: This paper addresses the problem of the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. It also introduces a new PAPR reduction technique based on an adaptive square-rooting (SQRT) companding process. The SQRT process of the proposed technique changes the statistical characteristics of the OFDM output signals from a Rayleigh distribution to a Gaussian-like distribution. This change in statistical distribution results in changes to both the peak and average power values of OFDM signals, and consequently reduces the PAPR significantly. For a 64-QAM OFDM system using 512 subcarriers, up to 6 dB of PAPR reduction was achieved by the square-rooting technique with a fixed degradation in bit error rate (BER) equal to 3 dB. However, the PAPR is reduced at the expense of only -15 dB of out-of-band spectral shoulder re-growth below the in-band signal level. The proposed adaptive SQRT technique is superior in BER performance to the original, non-adaptive square-rooting technique when the required reduction in PAPR is no more than 5 dB. It also provides a fixed amount of PAPR reduction, which is not available in the original SQRT technique.
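The core of the square-rooting compander can be illustrated numerically: keeping the phase of each OFDM time-domain sample while square-rooting its amplitude compresses the peaks and lowers the PAPR. The sketch below uses QPSK data rather than 64-QAM and omits the adaptive gain of the proposed technique:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)

# A 512-subcarrier OFDM symbol: random QPSK frequency-domain data,
# IFFT to the time domain (the envelope approaches a Rayleigh distribution).
N = 512
X = rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)
x = np.fft.ifft(X)

# Square-rooting compander: keep the phase, square-root the amplitude.
y = np.sqrt(np.abs(x)) * np.exp(1j * np.angle(x))

print(f"PAPR before: {papr_db(x):.2f} dB, after: {papr_db(y):.2f} dB")
```

Since the companded power equals the original amplitude, the peaks are compressed far more than the average, which is why the PAPR drops; the receiver must apply the inverse (squaring) expander, and the distortion this introduces is the source of the BER degradation quoted above.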
Abstract: Cognitive science appeared about 40 years ago, subsequent to the challenge of artificial intelligence, as common territory for several scientific disciplines such as IT, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified by the complexity of the problems related to human knowledge on the one hand, and on the other by the fact that none of the above-mentioned sciences alone could explain mental phenomena. Based on the data supplied by experimental sciences such as psychology and neurology, models of the operation of the human mind are built in cognitive science. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence) – cognitive systems – whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction strictly necessary for the interplay of the aforementioned sciences.
The cognitive approach comprises two important strands: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to result from the interaction between all the component (included) systems. In the field of psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculation methods. Considering both sides of cognitive science, we can notice a gap in the possibilities of measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality–quantity whole. In such an approach, measurement by calculation proves to be inefficient. Our research, carried out for more than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.
Abstract: In this paper we present ultra-low-voltage and high-speed CMOS domino logic. For supply voltages below 500 mV, the delay of an ultra-low-voltage NAND2 gate is approximately 10% of that of a complementary CMOS inverter. Furthermore, the delay variations due to mismatch are much smaller than for conventional CMOS. Differential domino gates for AND/NAND and OR/NOR operation are presented.
Abstract: The ability of UML to handle the modeling process of complex industrial software applications has increased its popularity to the extent of becoming the de facto language serving the design purpose. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in reporting all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to the building of tool support and thus to its application in industry. For this reason, much research has been carried out to formalize OCL expressions using a more rigorous approach. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we mainly succeed in rigorously expressing OCL predefined functions.
Abstract: In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation in a static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∪ D(mk) = { } and therefore | D(mj) ∪ D(mk) | = 0 [19]. Similarly, D(Mj) ∪ D(Mk) = { } and hence | D(Mj) ∪ D(Mk) | = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm, respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization operations. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and subsequently more power. However, this leads to a drawback in terms of the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This would consequently alter the testability properties of such circuits, i.e. it may increase, decrease, or maintain the number of test input vectors needed for their exhaustive testability, subsequently affecting their generalized test vector computation.
We do not consider the issue of design-for-testability here, but instead focus on the power consumption of the final logic implementation after realization with a conventional CMOS process technology (0.35-micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals by 39.66% and 12.98%, respectively, in comparison with the other factored RM forms.
Abstract: A scalable QoS-aware multicast deployment in DiffServ networks has become an important research dimension in recent years. Although multicasting and differentiated services are two complementary technologies, the integration of the two is a non-trivial task due to architectural conflicts between them. A popular solution is to extend the functionality of the DiffServ components to support multicasting. In this paper, we propose an algorithm to construct an efficient QoS-driven multicast tree, taking into account the available bandwidth per service class. We also present an efficient way to provision the limited available bandwidth for supporting heterogeneous users. The proposed mechanism is evaluated using simulated tests. The simulation results reveal that our algorithm can effectively minimize the bandwidth use and transmission cost.
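The abstract does not detail the tree-construction algorithm; as a simplified, hypothetical sketch, a bandwidth-constrained multicast tree can be built greedily by grafting each receiver onto the tree via a shortest path that only uses links whose remaining per-class bandwidth covers the flow's demand (all names and the toy topology below are illustrative):

```python
from collections import deque

def build_multicast_tree(adj, bw, source, receivers, demand):
    """Greedily graft each receiver onto the tree via a BFS shortest
    path whose links still have enough available bandwidth. Links
    already in the tree are shared by the multicast flow and cost no
    extra bandwidth. This is an illustration of a bandwidth-constrained
    tree, not the paper's exact algorithm."""
    tree_nodes = {source}
    tree_edges = set()
    for r in receivers:
        parent = {r: None}
        q = deque([r])
        hit = None
        while q:
            u = q.popleft()
            if u in tree_nodes:          # reached the existing tree
                hit = u
                break
            for v in adj[u]:
                e = frozenset((u, v))
                if v not in parent and (e in tree_edges or bw[e] >= demand):
                    parent[v] = u
                    q.append(v)
        if hit is None:
            raise ValueError(f"no feasible path for receiver {r}")
        u = hit                          # walk back, consuming bandwidth
        while parent[u] is not None:
            v = parent[u]
            e = frozenset((u, v))
            if e not in tree_edges:
                tree_edges.add(e)
                bw[e] -= demand
            tree_nodes.update((u, v))
            u = v
    return tree_edges

# Hypothetical topology: each undirected edge carries available bandwidth.
adj = {"S": ["A", "B"], "A": ["S", "R1"], "B": ["S", "R2"],
       "R1": ["A"], "R2": ["B"]}
bw = {frozenset(e): 10 for e in [("S", "A"), ("S", "B"), ("A", "R1"), ("B", "R2")]}
tree = build_multicast_tree(adj, bw, "S", ["R1", "R2"], demand=4)
print(sorted(tuple(sorted(e)) for e in tree))
```

Running the per-class construction once per DiffServ service class, each with its own residual-bandwidth map, would reflect the per-class provisioning described in the abstract.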
Abstract: A business process model describes the process flow of a business and can be seen as a requirement for developing a software application. This paper discusses a BPM2CD guideline which complements the Model Driven Architecture concept by suggesting how to create a platform-independent software model, in the form of a UML class diagram, from a business process model. An important step is the identification of UML classes from the business process model. A technique for object-oriented analysis called domain analysis is borrowed, and key concepts in the business process model are discovered and proposed as candidate classes for the class diagram. The paper enhances this step by using ontology search to help identify important classes for the business domain. As an ontology is a source of knowledge for a particular domain, which itself can link to ontologies of related domains, the search can give a refined set of candidate classes for the resulting class diagram.
Abstract: More and more home videos are being generated with the ever-growing popularity of digital cameras and camcorders. For many home videos, a photo rendering, whether capturing a moment or a scene within the video, provides a representation complementary to the video. In this paper, a video motion mining framework for creative rendering is presented. The user's capture intent is derived by analyzing video motions, and respective metadata is generated for each capture type. The metadata can be used in a number of applications, such as creating video thumbnails, generating panorama posters, and producing slideshows of videos.
Abstract: The Taiwan government has promoted the "Plain Landscape Afforestation and Greening Program" since 2002. A key task of the program was the payment for environmental services (PES), entitled the "Plain Landscape Afforestation Policy" (PLAP), which was approved by the Executive Yuan on August 31, 2001 and enacted on January 1, 2002. According to the policy, it was estimated that the total area of afforestation would reach 25,100 hectares by December 31, 2007. By the end of 2007, the policy had been enacted for six years in total, and the actual afforested area was 8,919.18 hectares. Among them, Taiwan Sugar Corporation (TSC) accounted for 7,960 hectares (with 2,450.83 hectares as public service area), or 86.22% of the total afforestation area; the private farmland promoted by local governments accounted for 869.18 hectares, or 9.75% of the total afforestation area. Based on the above, we observe that most of the afforestation area under this policy was executed by TSC, and that TSC's achievement ratio is better than that of others. This implies that the success of the PLAP is closely related to the execution of TSC. The objective of this study is to analyze the relevant policy planning of TSC's participation in the PLAP, suggest complementary measures, and draw up effective adjustment mechanisms, so as to improve the effectiveness of executing the policy. Our main conclusions and suggestions are summarized as follows: 1. The main reason for TSC's participation in the PLAP is its passive cooperation with the central government or company policy. Prior to TSC's participation in the PLAP, its lands were mainly used for growing sugarcane. 2. The main factors in TSC's selection of tree species are the suitability of the land and the species.
The largest proportion of tree species is allocated to economic forests, and the lack of technical instruction was the main problem during afforestation. Moreover, how to improve TSC's future development in leisure agriculture and the landscape business becomes a key topic. 3. TSC has developed short- and long-term plans for participating in the PLAP in the future. However, there is no great willingness or incentive to budget for such detailed planning. 4. Most of the people from TSC who were interviewed consider the requirements of the PLAP unreasonable. Among them, the unreasonable requirement on the number of trees accounted for the greatest proportion; furthermore, most interviewees suggested that the government should continue to provide incentives even after 20 years. 5. Since the government shares the same goals as TSC, there should be sufficient cooperation and communication to support technical instruction and the reduction of afforestation costs, which will also help to improve the effectiveness of the policy.
Abstract: This paper critiques several existing strategic international human resource management (SIHRM) frameworks and discusses the limitations of applying them directly to emerging multinational enterprises (EMNEs), especially those originating from China and other BRICS nations. To complement the existing SIHRM frameworks, key variables relevant to emerging economies are identified, and an extended model with particular reference to EMNEs is developed along with several research propositions. It is believed that the extended model will better capture the recent development of MNEs in transition, and alert emerging international managers to several human resource management challenges in the global context.
Abstract: This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify the performance of the algorithm. The presented algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. The SURF algorithm performs well under contrast, size, and rotation changes and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade algorithm is fast, but it cannot recognize objects; its optical flow compares the previous and current frames so that it can track the movement of a pixel. The fusion algorithm was created to solve the problems that occur when combining the two algorithms: a Kalman filter is used to estimate the next location of an object and to compensate for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm in three situations: driving straight, driving on a curve, and recognizing cars behind obstacles. Situations similar to actual driving are possible using a model vehicle. The proposed fusion algorithm showed superior performance and accuracy compared with existing object recognition and tracking algorithms. We will improve the performance of the algorithm so that it can be tested with images of actual road environments.
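As a minimal illustration of the Kalman-filter role described above (predicting the next location and compensating the accumulated error), the sketch below smooths noisy 1-D position measurements, such as jittery tracker output, with a constant-velocity model; the noise parameters and data are illustrative, not values from the paper:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=0.25):
    """Constant-velocity Kalman filter over noisy 1-D positions.
    State x = [position, velocity]; q and r are illustrative process
    and measurement noise levels."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # Predict the next location...
        x = F @ x
        P = F @ P @ F.T + Q
        # ...then correct it with the measurement (error compensation).
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

true_pos = np.arange(20, dtype=float)        # object moving at 1 px/frame
rng = np.random.default_rng(1)
noisy = true_pos + rng.normal(0, 0.5, 20)    # jittery tracker output
smoothed = kalman_track(noisy)
```

In the fusion scheme described above, the filter's prediction step would bridge the frames where slow SURF recognition is unavailable, while the correction step would pull the fast but drift-prone optical-flow estimate back toward the measured position.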