Abstract: This research aims to examine the key success factors
for the diffusion of mobile entertainment services in Malaysia. The
drivers and barriers observed in this research include perceived
benefit; concerns pertaining to pricing, product and technological
standardization, privacy and security; as well as influences from
peers and community. An analysis of a Malaysian survey of 384
respondents aged 18 to 25 shows that subscribers placed
greater importance on the perceived benefit of mobile entertainment
services than on the other factors. Results of the survey also show
that there are strong positive correlations between all the factors,
with pricing issue–perceived benefit showing the strongest
relationship. This paper provides an extensive study of the
drivers and barriers that could be used to derive an architecture for
entertainment service provision and to serve as a guide for telcos in
outlining suitable approaches to encourage mass-market
adoption of mobile entertainment services in Malaysia.
Abstract: With increasing circuit complexity and the demand for
portable devices, power consumption is one of the most
important design parameters today. Full adders are a basic building block of
many circuits, so reducing power consumption in full adders
is very important in low-power design. One of the most power-consuming
modules in a full adder is the XOR/XNOR circuit. This paper
presents two new full adders based on two new logic approaches. The
proposed logic approaches use one XOR or XNOR gate to implement
a full adder cell, thereby decreasing both delay and power. Using
these two approaches and XOR and XNOR gates, two new full
adders have been implemented in this paper. Simulations are carried
out in HSPICE in 0.18μm bulk technology with a 1.8V supply voltage.
The results show that the proposed ten-transistor full adder has 12%
lower power consumption and is 5% faster than the MB12T
full adder. The 9T design is more efficient in area and is 24% better than a similar
10T full adder in terms of power consumption. The main drawback of
the proposed circuits is the output threshold loss problem.
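As a minimal illustration of the Boolean function that any full adder cell must realize, and of how the sum and carry can share a single XOR stage, the following Python sketch checks the logic exhaustively; it illustrates the gate-level decomposition only, not the transistor-level circuits proposed in the paper.

```python
# Minimal sketch of full adder Boolean logic (not the transistor-level design).
# sum = a XOR b XOR cin; cout is the majority of (a, b, cin).
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    axb = a ^ b                      # XOR stage shared by sum and carry
    s = axb ^ cin                    # sum output
    cout = (a & b) | (axb & cin)     # carry output reuses the same XOR result
    return s, cout

# Exhaustive check over all 8 input combinations.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
```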
Abstract: Scheduling for the flexible job shop is very important
in both fields of production management and combinatorial
optimization. However, it is quite difficult to achieve an optimal solution
to this problem with traditional optimization approaches owing to the
high computational complexity. The combining of several
optimization criteria induces additional complexity and new
problems. In this paper, a Pareto approach to solving the multi-objective
flexible job shop scheduling problem is proposed. The
objectives considered are to minimize the overall completion time
(makespan) and total weighted tardiness (TWT). An effective
simulated annealing algorithm based on the proposed approach is
presented to solve the multi-objective flexible job shop scheduling
problem. An external memory of non-dominated solutions is
considered to save and update the non-dominated solutions during
the solution process. Numerical examples are used to evaluate and
study the performance of the proposed algorithm. The proposed
algorithm can be applied easily in real factory conditions and to
large-size problems. It should thus be useful to both practitioners and
researchers.
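A minimal sketch of the non-dominated archive update that such a Pareto-based simulated annealing relies on is given below; the (makespan, TWT) values are illustrative numbers, not results from the paper.

```python
# Illustrative Pareto archive update for two minimization objectives
# (makespan, total weighted tardiness). Not the authors' implementation.
def dominates(p, q):
    """p dominates q if it is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def update_archive(archive, candidate):
    """Insert candidate if no archived point dominates it; drop newly dominated points."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(120, 35.0), (110, 40.0), (130, 30.0), (110, 32.0)]:
    archive = update_archive(archive, point)
print(archive)   # only non-dominated (makespan, TWT) pairs remain
```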
Abstract: Virtually all existing networked system management
tools use a Manager/Agent paradigm. That is, distributed agents are
deployed on managed devices to collect local information and report
it back to some management unit. Even those that use standard
protocols such as SNMP fall into this model. Using a standard protocol
has the advantage of interoperability among devices from different
vendors. However, it may not be able to provide the customized
information that is of interest to satisfy specific management needs.
In this dissertation work, different approaches are used to
collect information regarding the devices attached to a Local Area
Network. An SNMP-aware application is being developed that will
manage the discovery procedure and will be used as a data collector.
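The sketch below shows the kind of SNMP query such a collector performs for one device, assuming the synchronous pysnmp 4.x-style high-level API and a documentation-only address (192.0.2.1); a real discovery tool would sweep a subnet and record the responding hosts.

```python
# Illustrative SNMP GET of sysDescr and sysName from one device, assuming the
# synchronous pysnmp hlapi (4.x style) and a documentation-only IP address.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def query_device(address, community='public'):
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(community, mpModel=1),           # SNMPv2c
               UdpTransportTarget((address, 161), timeout=1, retries=0),
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysName', 0))))
    if error_indication or error_status:
        return None                                            # host silent or error
    return {str(name): str(value) for name, value in var_binds}

print(query_device('192.0.2.1'))
```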
Abstract: In this study, three subtypes of influenza A viruses (pH1N1, H5N1 and H3N2) that naturally infect humans were analyzed by bioinformatic approaches to find candidate human cellular miRNAs targeting the viral genomes. There were 76 miRNAs targeting influenza A viruses. Among these candidates, 70 miRNAs were subtype-specific, each targeting a single subtype of influenza A virus: 21 miRNAs targeted subtype H1N1, 27 targeted subtype H5N1 and 22 targeted subtype H3N2. The remaining 6 miRNAs targeted multiple subtypes of influenza A viruses. Uniquely, hsa-miR-3145 is the only candidate miRNA targeting the PB1 gene of all three subtypes. Notably, most of the candidate miRNAs target the polymerase complex genes (PB2, PB1 and PA) of influenza A viruses. This study predicted potential human miRNAs targeting different subtypes of influenza A viruses, which might be useful for inhibition of viral replication and for better understanding of the interaction between virus and host cell.
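As a rough illustration of the seed-complementarity screen that such target predictions rest on, the sketch below checks whether the reverse complement of a miRNA seed region occurs in an mRNA; the sequences and the 7-mer seed rule are simplified, hypothetical examples, not the study's pipeline.

```python
# Simplified seed-match screen: does the reverse complement of a miRNA's
# seed region (positions 2-8) occur in a viral mRNA sequence?
# Sequences below are short hypothetical examples, not real genome data.
COMPLEMENT = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}

def seed_target_site(mirna, mrna):
    seed = mirna[1:8]                                      # positions 2-8 of the miRNA
    site = ''.join(COMPLEMENT[b] for b in reversed(seed))  # reverse complement
    pos = mrna.find(site)
    return pos if pos >= 0 else None

mirna = 'UAGGCAUUCAGGAGCUAUGGA'        # hypothetical miRNA (5'->3')
mrna  = 'AGGCUACCUGAAUGCCUAAGGUCA'     # hypothetical viral mRNA fragment
print(seed_target_site(mirna, mrna))   # index of the putative target site, or None
```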
Abstract: The research investigates the "impact of VLE on mathematical concepts acquisition of the special education needs (SENs) students at KS4 secondary education sector" in England. The overall aim of the study is to establish possible areas of difficulty to approach for above or below knowledge-standard requirements for KS4 students in the acquisition and validation of basic mathematical concepts. A teaching period in which a virtual learning environment (Fronter) was used to emphasise different mathematical perceptions and symbolic representations was carried out, and a task-based survey was conducted with 20 special education needs students [14 actually took part]. The results show that students were able to process information and consider images, objects and numbers within the VLE at early stages of the acquisition process. They were also able to carry out perceptual tasks, but with limiting processes of different quotient; thus they need the teacher's guidance to connect them to symbolic representations and sometimes to coach them through. The pilot study further indicates that VLE curriculum approaches for students were only minutely aligned with mathematics teaching, which does not emphasise the integration of the VLE into the existing curriculum and current teaching practice. There was also poor alignment of vision by the management regarding the use of the VLE in realising the objectives of teaching mathematics. On the part of teacher training, not much was done to develop teachers' skills in the technical and pedagogical aspects of the VLE in use at the school. The classroom observation confirmed that teaching practice relies on the VLE as an enhancer of mathematical skills, providing interaction and personalisation of learning to SEN students.
Abstract: Business and IT alignment has continued as a
top concern for business and IT executives for almost three
decades. Many researchers have conducted empirical studies on
the relationship between business-IT alignment and performance.
Yet, these approaches, lacking a social perspective, have had little
impact on sustaining performance and competitive advantage. In
addition, there is only limited alignment literature that explores
organisational learning as represented in shared understanding,
communication, cognitive maps and experiences.
Hence, this paper proposes an integrated process that enables
the social and intellectual dimensions through the concept of
organisational learning. In particular, the feedback and feedforward
processes provide value creation across dynamic,
multilevel learning. This mechanism enables ongoing
effectiveness through the development of individuals, groups and
organisations, which improves the quality of business and IT
strategies and drives performance.
Abstract: In this paper, a unified approach via block-pulse functions (BPFs) or shifted Legendre polynomials (SLPs) is presented to solve the linear-quadratic-Gaussian (LQG) control problem. A recursive algorithm is also proposed to solve the above problem via BPFs. By using the elegant operational properties of orthogonal functions (BPFs or SLPs), these computationally attractive algorithms are developed. To demonstrate the validity of the proposed approaches, a numerical example is included.
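A minimal numerical sketch of the block-pulse expansion on which such operational-matrix methods are built is shown below; the test function, interval and number of blocks are illustrative choices, not the paper's LQG example.

```python
# Illustrative block-pulse function (BPF) expansion of a signal on [0, T]:
# c_i = (m/T) * integral of f over the i-th subinterval, and the BPF
# approximation is piecewise constant with value c_i on that subinterval.
import numpy as np

def bpf_coefficients(f, T, m, samples_per_block=200):
    edges = np.linspace(0.0, T, m + 1)
    coeffs = []
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.linspace(a, b, samples_per_block)
        coeffs.append(np.trapz(f(t), t) * m / T)     # average value over the block
    return np.array(coeffs)

f = lambda t: np.exp(-t) * np.sin(2 * np.pi * t)     # illustrative test signal
c = bpf_coefficients(f, T=1.0, m=8)
print(np.round(c, 4))                                 # 8 block-pulse coefficients
```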
Abstract: Waste management in general, and MSW management in Thailand in particular, is reviewed in this paper. Topics include MSW generation, sources, composition, and trends. The review then moves to sustainable solutions for MSW management and sustainable alternative approaches, with an emphasis on integrated MSW management. Information on waste in Thailand is given at the beginning of this paper for a better understanding of the later contents. It is clear that no single method of MSW disposal can deal with all materials in an environmentally sustainable way. As such, a suitable approach to MSW management should be an integrated approach that can deliver both environmental and economic sustainability. With increasing environmental concerns, an integrated MSW management system has the potential to maximize the usable waste materials as well as produce energy as a by-product. In Thailand, the composition of waste (86%) is mainly organic waste, paper, plastic, glass, and metal. As a result, the waste in Thailand is suitable for integrated MSW management. Currently, the Thai national waste management policy has started to encourage local administrations to gather into clusters to establish central MSW disposal facilities with suitable technologies, reducing the disposal cost based on the amount of MSW generated.
Abstract: This paper deals with a portfolio selection problem
based on the possibility theory under the assumption that the returns
of assets are LR-type fuzzy numbers. A possibilistic portfolio model
with transaction costs is proposed, in which the possibilistic mean
value of the return is used as the measure of investment return, and the
possibilistic variance of the return is used as the measure of investment
risk. Because transaction costs are considered, existing traditional
optimization algorithms usually fail to find the optimal solution
efficiently, and heuristic algorithms are a better choice. Therefore,
a particle swarm optimization algorithm is designed to solve the corresponding
optimization problem. Finally, a numerical example is given to
illustrate the proposed approach.
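A compact sketch of the particle swarm update used for such a constrained portfolio search is given below; the toy mean-variance style objective and the parameter values are illustrative stand-ins, not the paper's possibilistic model with transaction costs.

```python
# Minimal particle swarm optimization over portfolio weights on a simplex.
# The objective is a toy risk-minus-return score, not the possibilistic
# mean/variance model with transaction costs described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.08, 0.12, 0.10])                  # illustrative expected returns
cov = np.diag([0.02, 0.05, 0.03])                  # illustrative covariance

def objective(w):
    return w @ cov @ w - 0.5 * (mu @ w)            # risk minus scaled return

def project(w):
    w = np.clip(w, 0.0, None)                      # no short selling
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / len(w))

n_particles, n_iter, dim = 20, 200, 3
x = np.array([project(rng.random(dim)) for _ in range(n_particles)])
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([objective(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # velocity update
    x = np.array([project(p) for p in x + v])                       # keep weights feasible
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 3), round(objective(gbest), 5))
```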
Abstract: Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks. The turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process is presented. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the turbo-based stego system are analyzed. Using some of the entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that found in one-time pads (OTPs).
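As an illustration of the halftoning step that reduces the transmitted payload, the sketch below uses Floyd-Steinberg error diffusion as one representative method; the abstract does not commit to a specific halftoning algorithm, so this is only a plausible stand-in for that stage.

```python
# Illustrative Floyd-Steinberg error-diffusion halftoning: converts an
# 8-bit grayscale image to a 1-bit image, cutting the payload before
# any turbo encoding / embedding step.
import numpy as np

def halftone(gray):
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new                               # diffuse quantization error
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return (out > 0).astype(np.uint8)                     # 1 bit per pixel

gradient = np.tile(np.linspace(0, 255, 64), (16, 1))      # synthetic test image
bits = halftone(gradient)
print(bits.shape, bits.mean())   # fraction of white pixels tracks brightness
```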
Abstract: The Artificial Immune System has been adopted as a heuristic
algorithm for solving combinatorial problems for decades.
Nevertheless, many of these applications took advantage of its benefits
but seldom proposed approaches for enhancing its efficiency.
In this paper, we continue our previous research and develop
a Self-evolving Artificial Immune System II (SEAIS II) by coordinating the T
and B cells of the immune system, and build a block-based artificial
chromosome for speeding up computation and achieving better
performance on problems of different complexity. Plasma cells and
clonal selection, which relate to the function of the immune response,
are designed so that the immune response helps
the AIS achieve global and local searching ability and prevents it from being
trapped in local optima. The experimental results show significant
performance and validate that SEAIS II is effective when solving
permutation flow-shop problems.
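A minimal sketch of the clonal selection step at the heart of such an AIS for permutation problems is given below; the flow-shop makespan function, processing times and parameters are illustrative placeholders, not the SEAIS II design.

```python
# Illustrative clonal selection on permutations: clone the best antibodies in
# proportion to affinity, hypermutate clones with swap moves, keep the best
# survivors. The makespan objective is a simple stand-in, not SEAIS II.
import random

def makespan(perm, proc):
    """Permutation flow-shop makespan for processing-time matrix proc[job][machine]."""
    m = len(proc[0])
    finish = [0.0] * m
    for job in perm:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + proc[job][k]
    return finish[-1]

def clonal_selection(proc, pop_size=20, clones=5, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: makespan(p, proc))
        offspring = []
        for rank, ab in enumerate(pop[:pop_size // 2]):
            for _ in range(max(1, clones - rank)):        # better antibodies clone more
                c = ab[:]
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]                   # swap hypermutation
                offspring.append(c)
        pop = sorted(pop + offspring, key=lambda p: makespan(p, proc))[:pop_size]
    return pop[0], makespan(pop[0], proc)

rng = random.Random(0)
proc = [[rng.uniform(1, 10) for _ in range(4)] for _ in range(6)]   # 6 jobs, 4 machines
best, cmax = clonal_selection(proc)
print(best, round(cmax, 2))
```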
Abstract: Cosmic showers, during their transit through space, produce
sub-products as a result of interactions with the intergalactic
or interstellar medium, which after entering the Earth's atmosphere generate
secondary particles called Extensive Air Showers (EAS). Detection and analysis
of high-energy particle showers involve a plethora of theoretical and
experimental work with a host of constraints resulting in inaccuracies
in measurements. Therefore, there is a need to develop a
readily available system based on soft-computational approaches
which can be used for EAS analysis. This is because soft-computational
tools such as Artificial Neural Networks (ANNs) can be
trained as classifiers to adapt to and learn the surrounding variations. However,
single classifiers fail to reach optimal decision making in many
situations, for which Multiple Classifier Systems (MCS) are preferred
to enhance the ability of the system to make decisions that adjust
to finer variations. This work describes the formation of an MCS
using a Multi Layer Perceptron (MLP), a Recurrent Neural Network
(RNN) and a Probabilistic Neural Network (PNN), with data inputs
from correlation-mapping Self Organizing Map (SOM) blocks and
the output optimized by another SOM. The results show that the setup
can be adopted for real-time practical applications for the prediction
of the primary energy and location of EAS from density values captured
using detectors in a circular grid.
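The sketch below is a simplified stand-in for the classifier-combination idea, averaging the class probabilities of two different learners on synthetic data with scikit-learn; it is not the MLP/RNN/PNN plus SOM pipeline described in this abstract.

```python
# Simplified multiple-classifier combination: average the class probabilities
# of two different learners on synthetic data. Only a stand-in for the
# MLP/RNN/PNN + SOM system described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=7).fit(X_tr, y_tr)   # local, kernel-style learner

proba = (mlp.predict_proba(X_te) + knn.predict_proba(X_te)) / 2.0
ensemble_pred = proba.argmax(axis=1)

for name, pred in [('MLP', mlp.predict(X_te)),
                   ('kNN', knn.predict(X_te)),
                   ('ensemble', ensemble_pred)]:
    print(name, round((pred == y_te).mean(), 3))
```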
Abstract: The demand for higher performance graphics
continues to grow because of the incessant desire towards realism.
And, rapid advances in fabrication technology have enabled us to
build several processor cores on a single die. Hence, it is important to
develop single chip parallel architectures for such data-intensive
applications. In this paper, we propose an efficient PIM architecture
tailored for computer graphics, which requires a large number of
memory accesses. We then address the two important tasks necessary
for maximally exploiting the parallelism provided by the architecture,
namely, partitioning and placement of graphic data, which affect
load balance and communication cost, respectively. Under the
constraints of uniform partitioning, we develop approaches for optimal
partitioning and placement, which significantly reduce search space.
We also present heuristics for identifying near-optimal placement,
since the search space for placement is impractically large despite our
optimization. We then demonstrate the effectiveness of our partitioning
and placement approaches via analysis of example scenes; simulation
results show considerable search space reductions, and our heuristics
for placement perform close to optimal: the average ratio of
communication overhead between our heuristics and the optimal was
1.05. Our uniform partitioning showed an average load-balance ratio of
1.47 for geometry processing and 1.44 for rasterization, which is
reasonable.
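A small sketch of why exhaustive placement search explodes, and of the cost model that drives it, is shown below; the traffic and distance matrices are made-up examples, not the paper's scene data or architecture parameters.

```python
# Illustrative optimal placement by exhaustive search: assign data partitions
# to processing elements so that total (traffic x hop-distance) is minimized.
# Matrices are invented examples; the point is the factorial search space.
from itertools import permutations
import numpy as np

traffic = np.array([[0, 8, 1, 2],      # traffic[i][j]: communication volume
                    [8, 0, 3, 1],      # between partitions i and j
                    [1, 3, 0, 9],
                    [2, 1, 9, 0]])
distance = np.array([[0, 1, 2, 3],     # distance[p][q]: hops between PEs p and q
                     [1, 0, 1, 2],
                     [2, 1, 0, 1],
                     [3, 2, 1, 0]])

def comm_cost(placement):
    """placement[i] = processing element hosting partition i."""
    cost = 0
    n = len(placement)
    for i in range(n):
        for j in range(i + 1, n):
            cost += traffic[i, j] * distance[placement[i], placement[j]]
    return cost

best = min(permutations(range(4)), key=comm_cost)
print(best, comm_cost(best))            # 4! placements here, n! in general
```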
Abstract: This paper introduces a new signal denoising method based on the Empirical Mode Decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called Intrinsic Mode Functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation. In this work, the Savitzky-Golay filter and soft-thresholding are investigated. For thresholding, IMF samples are shrunk or scaled below a threshold value. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
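A compact sketch of the IMF-wise soft-thresholding variant is shown below, assuming the third-party PyEMD package for the sifting step; the median-based noise estimate and universal threshold are one common choice, not necessarily the authors' exact formulation.

```python
# EMD-based denoising sketch: decompose with PyEMD, soft-threshold each IMF
# using a median-based noise estimate and the universal threshold, then sum
# the processed IMFs. PyEMD is a third-party package; the threshold rule is
# one common choice, not necessarily the paper's exact derivation.
import numpy as np
from PyEMD import EMD

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def emd_denoise(signal):
    imfs = EMD()(signal)                               # sifting: signal -> IMFs
    n = len(signal)
    clean = np.zeros_like(signal)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745        # robust noise estimate per IMF
        clean += soft(imf, sigma * np.sqrt(2.0 * np.log(n)))
    return clean

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(1000)
print(np.round(np.std(x - emd_denoise(x)), 3))          # residual after denoising
```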
Abstract: Most of the existing text mining approaches are
proposed with the transaction database model in mind. Thus, the
mined dataset is structured using just one concept, the "transaction",
whereas the whole dataset is modeled using the "set" abstract type. In
such cases, the structure of the whole dataset and the relationships
among the transactions themselves are not modeled and,
consequently, not considered in the mining process.
We believe that taking into account the structural properties of
hierarchically structured information (e.g. textual documents)
in the mining process can lead to better results. For this purpose, a
hierarchical association rule mining approach for textual documents
is proposed in this paper, and the classical set-oriented mining
approach is reconsidered in favour of a Directed Acyclic Graph (DAG)
oriented approach. Natural language processing techniques are used
in order to obtain the DAG structure. Based on this graph model, a
hierarchical bottom-up algorithm is proposed. The main idea is that
each node is mined together with its parent node.
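A toy sketch of the node-with-parent idea on a small document hierarchy is shown below; the structure, vocabulary and support threshold are invented for illustration and do not reproduce the paper's algorithm or corpus.

```python
# Toy sketch of bottom-up mining where each node's text is mined together
# with its parent's text in a document hierarchy. Structure and threshold
# are invented examples, not the paper's algorithm or corpus.
from collections import Counter
from itertools import combinations

dag = {                                    # node -> parent (None for the root)
    'doc': None,
    'section1': 'doc',
    'section2': 'doc',
    'para1': 'section1',
}
text = {
    'doc': 'data mining text',
    'section1': 'association rules mining',
    'section2': 'graph model text',
    'para1': 'association rules support',
}

def frequent_pairs(min_count=2):
    counts = Counter()
    for node, parent in dag.items():
        words = set(text[node].split())
        if parent is not None:
            words |= set(text[parent].split())   # mine the node together with its parent
        for pair in combinations(sorted(words), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

print(frequent_pairs())
```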
Abstract: In the present work, a scheme for evaluating the
transition probability in a quantum system is considered. It is based on
the path integral representation of the transition probability amplitude and its
evaluation by means of a saddle point method applied to part of the
integration variables. The whole integration process is reduced to
solving initial value problems for Hamilton's equations with a random
initial phase point. The scheme is related to the semiclassical initial
value representation approaches that use a great number of trajectories. In
contrast to them, from the total set of generated phase paths only one path
for each initial coordinate value is selected in a Monte Carlo process.
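A small numerical sketch of the trajectory-generation ingredient is given below: a random initial phase-space point is drawn and Hamilton's equations are integrated for a one-dimensional harmonic oscillator; the potential and the sampling distribution are illustrative stand-ins, not the system studied in the paper.

```python
# Illustrative ingredient of initial-value-representation schemes: draw a
# random initial phase-space point and integrate Hamilton's equations
# dq/dt = p/m, dp/dt = -dV/dq for a 1-D harmonic oscillator (a stand-in).
import numpy as np
from scipy.integrate import solve_ivp

m, omega = 1.0, 1.0

def hamilton(t, y):
    q, p = y
    return [p / m, -m * omega**2 * q]            # Hamilton's equations of motion

rng = np.random.default_rng(0)
q0, p0 = rng.normal(size=2)                      # random initial phase point
sol = solve_ivp(hamilton, (0.0, 10.0), [q0, p0], max_step=0.01)

energy = 0.5 * sol.y[1] ** 2 / m + 0.5 * m * omega**2 * sol.y[0] ** 2
print(round(energy.max() - energy.min(), 6))     # energy drift stays small
```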
Abstract: The purpose of this research is to identify and clarify
factors which have a positive effect on user satisfaction with
social networking websites. The factors examined in this
research are innovation, ease of use, trustworthiness and customer
support, which are defined as satisfaction factors. To obtain reliable
research approaches and better results, four
hypotheses were tested. The hypothesis testing was done by
correlation, regression and tests of normality using SPSS 16,
which was also used to analyze the data; the data were gathered
from a prepaid questionnaire.
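For readers without SPSS, the same style of analysis can be reproduced in Python; the sketch below uses made-up ratings, not the questionnaire data reported in this abstract.

```python
# Illustrative analogue of the SPSS workflow: Pearson correlation, simple
# linear regression, and a normality test. The ratings below are invented,
# not the survey data from this study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ease_of_use = rng.integers(1, 6, size=50).astype(float)        # 1-5 Likert ratings
satisfaction = 0.6 * ease_of_use + rng.normal(0, 0.8, size=50)

r, p_corr = stats.pearsonr(ease_of_use, satisfaction)
slope, intercept, r_reg, p_reg, stderr = stats.linregress(ease_of_use, satisfaction)
w, p_norm = stats.shapiro(satisfaction)                         # test of normality

print(f"correlation r={r:.2f} (p={p_corr:.3f})")
print(f"regression slope={slope:.2f} (p={p_reg:.3f})")
print(f"Shapiro-Wilk p={p_norm:.3f}")
```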
Abstract: One major difficulty that faces developers of
concurrent and distributed software is the analysis of concurrency-based
faults such as deadlocks. Petri nets are used extensively in the
verification of the correctness of concurrent programs. ECATNets are a
category of algebraic Petri nets based on a sound combination of
algebraic abstract types and high-level Petri nets. ECATNets have
'sound' and 'complete' semantics because of their integration in
rewriting logic and its programming language Maude. Rewriting
logic is considered one of the most powerful logics in terms of
description, verification and programming of concurrent systems. We
previously proposed a method for translating Ada-95 tasking
programs into the ECATNets formalism (Ada-ECATNet) and showed
that the ECATNets formalism provides a more compact translation for
Ada programs compared to other approaches based on simple
Petri nets or Coloured Petri nets. We also showed previously how the
ECATNet formalism offers Ada many validation and verification
tools such as simulation, model checking, accessibility analysis and
static analysis. In this paper, we describe the implementation of our
translation of Ada programs into ECATNets.
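A minimal place/transition sketch of the deadlock check that such translations make possible is shown below; the two-task, two-resource net is a textbook example, not the Ada-ECATNet encoding itself.

```python
# Minimal Petri net reachability check for deadlocks: explore all reachable
# markings (1-safe net modeled with sets) and report those where no
# transition is enabled before both tasks finish.
from collections import deque

# transition -> (consumed places, produced places)
transitions = {
    't1_get_r1': ({'task1_idle', 'r1'}, {'task1_has_r1'}),
    't1_get_r2': ({'task1_has_r1', 'r2'}, {'task1_done', 'r1', 'r2'}),
    't2_get_r2': ({'task2_idle', 'r2'}, {'task2_has_r2'}),
    't2_get_r1': ({'task2_has_r2', 'r1'}, {'task2_done', 'r1', 'r2'}),
}
initial = frozenset({'task1_idle', 'task2_idle', 'r1', 'r2'})

def enabled(marking):
    return [t for t, (pre, _) in transitions.items() if pre <= marking]

def deadlocks(start):
    seen, queue, dead = {start}, deque([start]), []
    while queue:
        m = queue.popleft()
        moves = enabled(m)
        if not moves and not {'task1_done', 'task2_done'} <= m:
            dead.append(m)                         # stuck before both tasks finish
        for t in moves:
            pre, post = transitions[t]
            nxt = frozenset((m - pre) | post)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dead

for m in deadlocks(initial):
    print(sorted(m))   # e.g. each task holding one resource and waiting for the other
```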
Abstract: The overriding goal of software engineering is to
provide a high-quality system, application or product. To achieve
this goal, software engineers must apply effective methods coupled
with modern tools within the context of a mature software process
[2]. In addition, it is also necessary to assure that high quality is realized.
Although many quality measures can be collected at the project
level, the important measures are errors and defects. Deriving a
quality measure for reusable components has proven to be a
challenging task nowadays. The results obtained from this study are
based on empirical evidence of reuse practices, as it emerged from
the analysis of industrial projects. Both large and small companies,
working in a variety of business domains and using object-oriented
and procedural development approaches, contributed to this
study. This paper proposes a quality metric that provides benefits at
both the project and process levels, namely defect removal efficiency
(DRE).
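Defect removal efficiency is commonly computed as the fraction of problems caught before delivery, DRE = E / (E + D), where E is the number of errors found before delivery and D is the number of defects found after delivery; a small worked example with invented figures is shown below.

```python
# Worked example of defect removal efficiency: DRE = E / (E + D), where
# E = errors found before delivery and D = defects found after delivery.
# The counts below are invented for illustration, not project data.
def dre(errors_before_delivery: int, defects_after_delivery: int) -> float:
    return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

print(f"DRE = {dre(45, 5):.0%}")   # 45 caught in review/test, 5 escaped -> 90%
```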