Abstract: In this paper we propose a novel RF LDMOS structure which employs a thin strained silicon layer at the top of the channel and the N-drift region. The strain is induced by a relaxed Si0.8Ge0.2 layer on top of a compositionally graded SiGe buffer. We explain the underlying physics of the device and compare it with a conventional LDMOS in terms of energy band diagrams and carrier concentration. Numerical simulations of the proposed strained-silicon laterally diffused MOS using a two-dimensional device simulator indicate improvements in saturation and linear transconductance, current drivability, cut-off frequency and on-resistance. These improvements are, however, accompanied by a reduction in the breakdown voltage.
Abstract: The objectives of this research are to investigate the
management pattern of Bang Khonthi lodging entrepreneurs with
respect to sufficiency economy principles, to identify the threats that
affect this sector, and to design a suitable arrangement model to
sustain their business in the Samut Songkhram style. What will
happen if they do not use this approach? Will they face a financial
crisis? The data and information were collected through informal
discussions with 8 managers and 400 questionnaires. A mixed-methods
design combining qualitative and quantitative research is used,
and Bent Flyvbjerg's phronesis is utilized for the analysis. Our
research aims to show that the sufficiency economy approach can help
small business firms solve their problems. We expect the results to
provide a financial model that addresses many of the entrepreneurs'
problems and can serve as a model for other provinces of Thailand.
Abstract: This paper presents a portable robot intended for use in
welding processes in a shipbuilding yard. It has six degrees of freedom
and a 3 kg payload capability. It weighs 21.5 kg, so human workers can
carry it to the work place. Its body is mainly made of magnesium alloy,
with aluminum alloy used for the few parts that require high strength.
Since the distance between the robot and its controller can be up to
50 m, the controller commands the robot through EtherCAT. RTX and
KPA are used for real-time EtherCAT control on Windows XP. The
performance of the developed robot was satisfactory in welding U-type
cells in a shipbuilding yard.
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified
as a CIM metamodel level mapping to a highly expressive subset
of DLs capable of capturing all the semantics of the models. The
paper shows how the proposed mapping provides CIM diagrams with
precise semantics and can be used for automatic reasoning about the
management information models, as a design aid, by means of new-generation
CASE tools, thanks to the use of state-of-the-art automatic
reasoning systems that support the proposed logic and use algorithms
that are sound and complete with respect to the semantics. Such a
CASE tool framework has been developed by the authors and its
architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: The study applied a combination of organisational learning models (Senge, 1994; Pedler, Burgoyne and Boydell, 1991) and later adopted fifteen organisational learning principles with one of the biggest energy providers in South East Asia. The purposes of the current study were to: a) investigate the company's practices on the fifteen organisational learning principles; b) explore the perceptions and expectations of its employees in relation to the principles; and c) compare the perceptions and expectations between management and non-management staff toward the fifteen factors. One hundred and ten employees responded to a designed questionnaire, and the results indicated that the company was practicing activities associated with organisational learning principles. Also, according to the t-test results, significant differences between management and non-management respondents were found. Research implications are also provided.
Abstract: We consider different types of aggregation operators
such as the heavy ordered weighted averaging (HOWA) operator and
the fuzzy ordered weighted averaging (FOWA) operator. We
introduce a new extension of the OWA operator called the fuzzy
heavy ordered weighted averaging (FHOWA) operator. The main
characteristic of this aggregation operator is that it deals with
uncertain information represented in the form of fuzzy numbers (FN)
in the HOWA operator. We develop the basic concepts of this
operator and study some of its properties. We also develop a wide
range of families of FHOWA operators such as the fuzzy push up
allocation, the fuzzy push down allocation, the fuzzy median
allocation and the fuzzy uniform allocation.
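As a rough illustration of the idea, the following sketch (an assumption-laden reading of the abstract, not the authors' definition) aggregates triangular fuzzy numbers with heavy OWA weights, i.e. weights whose sum may range from 1 up to n:

```python
# A hedged sketch (not the authors' definition): an OWA-style aggregation
# over triangular fuzzy numbers (l, m, u) with "heavy" weights, i.e.
# weights whose sum may range from 1 up to n as in the HOWA operator.

def fhowa(weights, fuzzy_args):
    """Aggregate triangular fuzzy numbers with heavy OWA weights.

    Arguments are reordered by their modal value m (descending), then
    each component l, m, u is combined as a weighted sum.
    """
    n = len(fuzzy_args)
    assert len(weights) == n and 1.0 <= sum(weights) <= n + 1e-9
    ordered = sorted(fuzzy_args, key=lambda f: f[1], reverse=True)
    return tuple(sum(w * f[i] for w, f in zip(weights, ordered))
                 for i in range(3))

# Example: weights concentrated on the largest arguments ("push up"
# style); the weights sum to 2, so this is a heavy operator.
result = fhowa([1.0, 0.7, 0.3], [(1, 2, 3), (4, 5, 6), (2, 3, 4)])
```

Setting the weight sum to 1 recovers an ordinary FOWA-style aggregation; the push-down, median and uniform families mentioned above would correspond to different placements of the weight mass.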
Abstract: Capacitive electrocardiogram (ECG) measurement is an attractive approach for long-term health monitoring. However, there is little literature available on its implementation, especially for multichannel systems in standard ECG leads. This paper begins from the design criteria for capacitive ECG measurement and presents a multichannel limb-lead capacitive ECG system with conductive fabric tapes pasted on a double-layer PCB as the capacitive sensors. The proposed prototype system incorporates a capacitive driven-body (CDB) circuit to reduce the common-mode power-line interference (PLI). The prototype system has been verified to be stable by theoretical analysis and long-term practical experiments. The signal quality is competitive with that acquired by commercial ECG machines. The feasible size of, and distance to, the capacitive sensor have also been evaluated by a series of tests. From the test results, a sensor size greater than 60 cm2 and a distance smaller than 1.5 mm are suggested for capacitive ECG measurement.
Abstract: Direction of Arrival (DOA) estimation refers to defining a mathematical function, called a pseudospectrum, that gives an indication of the angle at which a signal impinges on the antenna array. This estimation is an efficient method of improving the quality of service in a communication system by focusing reception and transmission only in the estimated direction, thereby increasing fidelity with a provision to suppress interferers. This improvement is largely dependent on the performance of the algorithm employed in the estimation. Many DOA algorithms exist, among which are MUSIC, Root-MUSIC and ESPRIT. In this paper, the performance of these three algorithms is analyzed in terms of complexity, accuracy as assessed and characterized by the CRLB, and memory requirements in various environments and array sizes. It is found that all three algorithms offer high resolution, with performance dependent on the operating environment and the array size.
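Of the three algorithms compared, MUSIC admits a particularly compact illustration; the following numpy sketch (a generic textbook formulation with an assumed uniform linear array, a single source and an assumed true angle, not the paper's simulation setup) shows how the pseudospectrum peaks at the arrival angle:

```python
# A hedged numpy sketch of the MUSIC pseudospectrum (a generic textbook
# formulation, not the paper's simulation setup). The uniform linear
# array, single source, true angle of 30 degrees and noise level are
# all assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
M, snapshots, true_deg = 8, 200, 30.0

def steering(theta_deg, m=M):
    # ULA steering vector with half-wavelength element spacing.
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# Simulate one narrowband source plus noise; form the sample covariance.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_deg), s)
X = X + 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
R = X @ X.conj().T / snapshots

# Noise subspace: eigenvectors of the M-1 smallest eigenvalues
# (one source assumed known); eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]

# Pseudospectrum: large where a(theta) is orthogonal to the noise subspace.
angles = np.arange(-90.0, 90.5, 0.5)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2
              for a in angles])
estimate = angles[np.argmax(p)]   # peak location = estimated DOA
```

Root-MUSIC and ESPRIT avoid this grid search, which is one source of the complexity and memory differences analyzed in the paper.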
Abstract: The three steps of the standard one-way nested grid
for a regional scale of the third-generation WAve Model Cycle 4
(WAMC4) are scrutinized. The model application is enabled to solve
the energy balance equation on a coarse resolution grid in order to
produce boundary conditions for a smaller area by the nested grid
technique. In the present study, the model takes full advantage of the
fine resolution of wind fields in space and time produced by the available
U.S. Navy Global Atmospheric Prediction System (NOGAPS)
model with 1 degree resolution. The nested grid application of the
model is developed in order to gradually increase the resolution from
the open ocean towards the South China Sea (SCS) and the Gulf of
Thailand (GoT) respectively. The model results were compared with
buoy observations at Ko Chang, Rayong and Huahin locations which
were obtained from the Seawatch project. In addition, the results were
also compared with data from the Satun-based weather station provided
by the Department of Meteorology, Thailand. The data collected from
this station showed that the significant wave height (Hs) reached 12.85
m. The results indicated that the tendency of the Hs from the model
in the spherical coordinate propagation with deep water condition in
the fine grid domain agreed well with the Hs from the observations.
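The energy balance equation solved by WAMC4 takes, in its standard schematic deep-water form (symbols as commonly used in the WAM literature, not drawn from this abstract), the shape:

```latex
% Evolution of the 2-D wave energy spectrum F(f,\theta;\mathbf{x},t):
\frac{\partial F}{\partial t} + \nabla \cdot \left( \mathbf{c}_g F \right)
  = S_{in} + S_{nl} + S_{ds},
% where \mathbf{c}_g is the group velocity and the source terms are
% wind input (S_{in}), nonlinear quadruplet wave-wave interactions
% (S_{nl}) and whitecapping dissipation (S_{ds}).
```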
Abstract: In this paper the Laplace decomposition method is developed to solve linear and nonlinear fractional integro-differential equations of Volterra type. The fractional derivative is described in the Caputo sense. The Laplace decomposition method is found to be fast and accurate. Illustrative examples are included to demonstrate the validity and applicability of the presented technique, and comparison is made with exact results.
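Under the Caputo definition assumed above, the class of problems addressed can be written (with symbols y, f and K chosen here for illustration) as:

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha \le n:
{}^{C}\!D^{\alpha} y(t) = \frac{1}{\Gamma(n-\alpha)}
  \int_0^t (t-s)^{\,n-\alpha-1}\, y^{(n)}(s)\, ds,
% and a linear Volterra-type fractional integro-differential equation:
{}^{C}\!D^{\alpha} y(t) = f(t) + \int_0^t K(t,s)\, y(s)\, ds,
% to which the Laplace transform is applied and the solution is sought
% as a decomposition series y(t) = \sum_{k \ge 0} y_k(t).
```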
Abstract: This article presents the results of research related to the assessment of the weighted cumulative expected transmission time (WCETT) protocol applied to cognitive radio networks. The development work was based on research done by different authors; we simulated a wirelessly communicating network using a licensed channel, through which other, unlicensed nodes try to transmit for a given time until the channel's licensed owner begins its transmission.
Abstract: The qualification of doctoral students and candidates for a scientific degree is evaluated by their ability to develop scientific ideas in an innovative way; consequently, as a potential of research and science, they play a significant role in the sustainability context of society. The article analyses the results of a pilot project whose aim was to study the structure of doctoral students' research competences in the sustainability context. Given the existence of a variety of theories on research competence development, the analysis focuses on the attained-aim approach. Three competence groups have been identified in this study: informative, communicative and instrumental. Within the study, the doctoral students and candidates for a scientific degree (N=64) made a self-assessment of their research competences. The study results depict their present research competence development level and its dynamics according to the aim to be attained.
Abstract: Multiple sequence alignment is a fundamental part in
many bioinformatics applications such as phylogenetic analysis.
Many alignment methods have been proposed. Each method gives a
different result for the same data set, and consequently generates a
different phylogenetic tree. Hence, the chosen alignment method
affects the resulting tree. However, in the literature there is no
evaluation of multiple alignment methods based on the comparison of
their phylogenetic trees. This work evaluates the following eight
aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN,
ProbCons and Align-m, based on their phylogenetic trees (test trees)
produced on a given data set. The Neighbor-Joining method is used
to estimate trees. Three criteria, namely, the dNNI, the dRF and the
Id_Tree are established to test the ability of the different alignment
methods to produce a test tree closer to the reference one
(true tree). Results show that the method which produces the most
accurate alignment gives the nearest test tree to the reference tree.
MUSCLE outperforms all aligners with respect to the three criteria
and for all datasets, performing particularly better when sequence
identities are within 10-20%. It is followed by T-Coffee at lower
sequence identity; at higher identity (30%), tree scores of all methods
become similar.
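Of the three criteria, the dRF (Robinson-Foulds) distance admits a particularly compact statement; the sketch below (with a hypothetical five-taxon example, not data from the study) counts the bipartitions on which two trees disagree:

```python
# A hedged sketch of the dRF (Robinson-Foulds) criterion: two unrooted
# trees are compared via their internal-edge bipartitions. Trees are
# represented here simply as sets of splits (frozensets of the leaves on
# one side of each internal edge); a real pipeline would extract splits
# from Newick files with a library such as dendropy. The five-taxon
# example is hypothetical, not data from the study.

def robinson_foulds(splits_a, splits_b):
    """dRF = number of bipartitions found in one tree but not the other."""
    return len(splits_a ^ splits_b)

# The two trees share the (A,B) split but disagree on the second split.
t1 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
t2 = {frozenset({"A", "B"}), frozenset({"A", "B", "D"})}
d = robinson_foulds(t1, t2)  # -> 2
```

A test tree identical to the reference tree has dRF = 0, so smaller values indicate an alignment whose Neighbor-Joining tree is closer to the true tree.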
Abstract: Although Model Driven Architecture has taken
successful steps toward model-based software development, this
approach still faces complex situations and ambiguous questions
when applied to real-world software systems. One of these
questions - which has attracted the most interest and focus - is how
models transform between the different abstraction levels that MDA
proposes. In this paper, we propose an approach based on Story
Driven Modeling and Aspect Oriented Programming to ease these
transformations. Service Oriented Architecture is taken as the target
model to test the proposed mechanism in a functional system.
Service Oriented Architecture and Model Driven Architecture [1]
are both considered frontiers of their own domains in the
software world. Following components - the greatest step
after object orientation - SOA was introduced, focusing on more
integrated and automated software solutions. On the other hand, and
from the designers' point of view, MDA is just initiating another
evolution: MDA is considered the next big step after UML in
the design domain.
Abstract: As the slider process in the hard disk drive industry
becomes more complex, defect diagnosis for yield improvement
becomes more complicated and time-consuming. Manufacturing data
analysis with a data mining approach is widely used for solving that
problem. The existing mining approach, combining K-means
clustering, the machine-oriented Kruskal-Wallis test and the
multivariate chart, was applied for defect diagnosis, but it is still
a semiautomatic diagnosis system. This article aims to modify the
algorithm to support automatic decisions within the existing approach.
Based on the research framework, the new approach performs an
automatic diagnosis and helps engineers find the defect factors
about 50% faster than the existing approach.
Abstract: Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space. The classical representation of a reaction-diffusion system is given by semi-linear parabolic partial differential equations, whose general form is ∂tX(x, t) = DΔX(x, t), where X(x, t) is the state vector, D is the matrix of the diffusion coefficients and Δ is the Laplace operator. If the solutes move in a homogeneous system in thermal equilibrium, the diffusion coefficients are constants that do not depend on the local concentration of solvent and solutes or on the local temperature of the medium. In this paper a new stochastic reaction-diffusion model is presented, in which the diffusion coefficients are functions of the local concentration, viscosity and frictional forces of solvent and solute. Such a model provides a more realistic description of the molecular kinetics in non-homogeneous and highly structured media such as the intra- and inter-cellular spaces. The movement of a molecule A from a region i to a region j of the space is described as a first-order reaction Ai -k-> Aj, where the rate constant k depends on the diffusion coefficient. Representing the diffusional motion as a chemical reaction allows one to assimilate a reaction-diffusion system to a pure reaction system and to simulate it with Gillespie-inspired stochastic simulation algorithms. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the specific speeds of the reaction and diffusion events. Redi is the software tool developed to implement this model of reaction-diffusion kinetics and dynamics.
It is free software that can be downloaded from http://www.cosbi.eu. To demonstrate the validity of the new reaction-diffusion model, the simulation results of chaperone-assisted protein folding in the cytoplasm obtained with Redi are reported. This case study is attracting renewed attention from the scientific community due to current interest in protein aggregation as a potential cause of neurodegenerative diseases.
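The event-selection scheme described above can be sketched as follows; this toy example (pure diffusion between two regions, with assumed rate constants and molecule counts, not the Redi implementation) treats each diffusion hop as a unimolecular reaction inside a Gillespie-style loop:

```python
# A hedged sketch (not the Redi implementation) of the event-selection
# scheme: diffusion of A between two regions is treated as a pair of
# first-order reactions A1 -k-> A2 and A2 -k-> A1, so diffusion hops and
# chemical reactions would share one waiting-time scheme. Rate constants
# and molecule counts are assumed values.
import random

random.seed(1)
state = {"A1": 100, "A2": 0}   # molecules of A in regions 1 and 2
k12 = k21 = 0.5                # "diffusion" rate constants
t, t_end = 0.0, 20.0

while t < t_end:
    # Propensities: each hop behaves like a unimolecular reaction.
    a = [k12 * state["A1"], k21 * state["A2"]]
    a0 = sum(a)
    if a0 == 0:
        break
    t += random.expovariate(a0)            # exponential waiting time
    if random.random() * a0 < a[0]:        # choose event proportionally
        state["A1"] -= 1; state["A2"] += 1 # hop region 1 -> region 2
    else:
        state["A1"] += 1; state["A2"] -= 1 # hop region 2 -> region 1
```

Adding chemical reactions amounts to appending their propensities to the same list `a`, which is exactly how the model assimilates diffusion into a pure reaction system.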
Abstract: As the dysfunctions of the information society and
social development progress, intrusion problems such as malicious
replies, spam mail, private information leakage, phishing, and
pharming, and side effects such as the spread of unwholesome
information and privacy invasion are becoming serious social
problems. Illegal access to information is also becoming a problem as
the exchange and sharing of information increases on the basis of the
extension of the communication network. On the other hand, as the
communication network has been constructed as an international,
global system, the legal response against invasion and cyber-attacks
from abroad is reaching its limits. In addition, in an environment where
the important infrastructures are managed and controlled on the basis
of the information communication network, such problems pose a
threat to national security. Countermeasures to such threats are
developed and implemented on a yearly basis to protect the major
infrastructures of information communication. As part of such
measures, we have developed a methodology for assessing the
information protection level, which can be used to establish the
quantitative objective-setting method required for improving the
information protection level.
Abstract: There is a great deal of interest in constructing Double Skin Facade (DSF) structures, which are considered a modern movement in the fields of energy conservation, renewable energies, and architectural design. This trend provides many viable alternatives frequently associated with sustainable building. In this paper a building with a Double Skin Facade in the semiarid climate of Tehran, Iran, is considered in order to assess the DSF's performance during hot seasons. Mathematical formulations calculate the solar heat gain by the external skin. Moreover, Computational Fluid Dynamics (CFD) simulations were performed on the case-study building to enhance the effectiveness of the facade. The results reveal the difference in energy gained by the cavity and the room with and without blinds and louvers. Some solutions were introduced to improve the performance of natural ventilation, thereby reducing the cooling loads in summer.
Abstract: In most popular implementations of parallel GAs,
the whole population is divided into a set of subpopulations; each
subpopulation executes a GA independently and some individuals are
migrated at fixed intervals on a ring topology. In these studies,
the migrations usually occur 'synchronously' among subpopulations.
Therefore, CPUs are not used efficiently and the communication
is not efficient either. A few studies have tried asynchronous
migration, but it is hard to implement and setting proper parameter
values is difficult.
The aim of our research is to develop a migration method which is
easy to implement, whose parameter values are easy to set, and which
reduces communication traffic. In this paper, we propose a traffic
reduction method for the Asynchronous Parallel Distributed GA by
migration of elites only. This is a server-client model. Every client
executes a GA on a subpopulation and sends its elite's information to
the server. The server manages the elite information of each client,
and migrations occur according to the evolution of the subpopulation
in each client. This facilitates the reduction in communication traffic.
To evaluate the proposed model, we apply it to many function
optimization problems. We confirm that the proposed method performs
as well as current methods, that its communication traffic is lower,
and that setting the parameters is much easier.
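A single-process sketch of the elite-only migration scheme might look as follows; the server-client split, the toy sphere fitness, the tiny GA step and all parameter values are illustrative assumptions, not the paper's implementation:

```python
# A hedged single-process sketch of elite-only migration in a
# server-client parallel GA. The sphere fitness, the (mu + lambda)-style
# GA step and all parameter values are illustrative assumptions.
import random

random.seed(0)

def fitness(x):
    return sum(v * v for v in x)          # minimize the sphere function

class EliteServer:
    """Stores one elite per client; 'migration' hands back the best elite."""
    def __init__(self):
        self.elites = {}
    def report(self, client_id, individual):
        self.elites[client_id] = individual
        return min(self.elites.values(), key=fitness)

def ga_step(pop):
    # Mutate every individual, then keep the best half of parents+children.
    children = [[v + random.gauss(0, 0.1) for v in ind] for ind in pop]
    return sorted(pop + children, key=fitness)[:len(pop)]

server = EliteServer()
pops = {c: [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
        for c in range(4)}                # four "clients" (subpopulations)

for gen in range(30):
    for c in pops:
        pops[c] = ga_step(pops[c])
        migrant = server.report(c, pops[c][0])  # send elite, receive best
        pops[c][-1] = migrant                   # replace worst with migrant

best = min((p[0] for p in pops.values()), key=fitness)
```

Only one individual per client per generation crosses the client-server boundary, which is the source of the traffic reduction; in the real system these calls would be network messages rather than method calls.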
Abstract: In the last decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Though some approaches have been proposed to automatically determine the most suitable δ for a specific application, to date no efficient and fully satisfactory solution exists. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets from the literature are shown and discussed.
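The noise-cluster mechanism the paper builds on can be sketched as follows; this is a Dave-style noise-clustering iteration on a toy data set with an assumed δ, not the paper's method for selecting δ (which analyzes the noise-cluster percentage over repeated runs with decreasing δ):

```python
# A hedged sketch of the noise-clustering idea behind robust-FCM: an
# extra noise prototype sits at a constant distance delta from every
# point, so outliers fall into the noise cluster. The toy data, the
# seeded initial centers and the delta value are all assumptions.
import numpy as np

def robust_fcm(X, V_init, delta, m=2.0, iters=50):
    V = V_init.astype(float).copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))
        # The noise cluster enters the membership denominator at the
        # fixed squared distance delta**2.
        denom = inv.sum(1) + delta ** (-2.0 / (m - 1))
        U = inv / denom[:, None]
        noise_u = 1.0 - U.sum(1)          # membership in the noise cluster
        Um = U ** m
        V = (Um.T @ X) / Um.sum(0)[:, None]
    return V, U, noise_u

# Two tight clusters plus one far outlier; centers are seeded near each
# cluster for this toy example.
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1],
              [20, 20]], dtype=float)
V, U, noise_u = robust_fcm(X, X[[0, 3]], delta=2.0)
# The outlier's noise membership is close to 1; cluster points stay low.
```

Shrinking δ pulls more points into the noise cluster, which is why the distribution of the noise-cluster percentage over decreasing δ carries the information the proposed selection method exploits.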