Abstract: The general situation of municipal solid waste (MSW) management in Thailand is reviewed in this paper. Topics include MSW generation, sources, composition, and trends. The review then moves to sustainable solutions for MSW management and sustainable alternative approaches, with an emphasis on integrated MSW management. Information on waste in Thailand is given at the beginning of this paper to aid understanding of the later contents. It is clear that no single method of MSW disposal can deal with all materials in an environmentally sustainable way. As such, a suitable approach to MSW management should be an integrated one that can deliver both environmental and economic sustainability. With increasing environmental concerns, an integrated MSW management system has the potential to maximize the recovery of usable waste materials as well as produce energy as a by-product. In Thailand, the waste is composed mainly (86%) of organic waste, paper, plastic, glass, and metal. As a result, the waste in Thailand is suitable for integrated MSW management. Currently, the Thai national waste management policy encourages local administrations to gather into clusters to establish central MSW disposal facilities with suitable technologies, reducing the disposal cost based on the amount of MSW generated.
Abstract: An approach to a multichannel ASIC for capacitive (up to 30 pF) sensors, and its implementation in a 0.18 µm CMOS process, are described in this paper. The main design aim was to study an analog data-driven architecture. The design implements an analog derandomizing function with a 128-to-16 structure: the ASIC provides a parallel front-end readout of 128 input analog sensor signals and, after fast commutation with appropriate arbitration logic, their processing by means of 16 output chains, including analog-to-digital conversion. The principal feature of the ASIC is a low power consumption within 2 mW/channel (including a 9-bit 20 MS/s ADC) at a maximum average channel hit rate of not less than 150 kHz.
Abstract: This paper studies the dependability of component-based applications, especially embedded ones, from the diagnosis point of view. The principle of the diagnosis technique is to implement inter-component tests in order to detect and locate faulty components without redundancy. The proposed approach for diagnosing faulty components consists of two main aspects. The first concerns the execution of the inter-component tests, which requires integrating test functionality within a component; this is the subject of this paper. The second is the diagnosis process itself, which consists of analyzing the inter-component test results to determine the fault state of the whole system. The advantages of this diagnosis method over classical redundancy-based fault-tolerant techniques are application autonomy, cost-effectiveness, and better usage of system resources. Such advantages are very important for many systems, especially embedded ones.
Abstract: Microparticle carrier systems made from naturally occurring polymers based on the chitosan/casein system appear to be promising carriers for the sustained release of orally and parenterally administered drugs. In the current study, we followed a microencapsulation technique based on an aqueous coacervation method to prepare chitosan/casein microparticles of compositions 1:1, 1:2, and 1:5 incorporating chloramphenicol. Glutaraldehyde was used as a chemical cross-linking agent. The microparticles were prepared by an aerosol method and studied by optical microscopy, infrared spectroscopy, thermogravimetric analysis, swelling studies, and drug release studies at various pH values. The percentage swelling of the polymers is found to be in the order pH 4 > pH 10 > pH 7, and increasing the casein composition decreases the swelling percentage. The drug release studies also follow the above order.
Abstract: The desulfurization of coal using biological methods is an emerging technology. The biodesulfurization process uses the catalytic activity of chemolithotrophic acidophiles to remove sulfur and pyrite from coal. The present study was undertaken to examine the potential of Acidithiobacillus ferrooxidans in removing pyritic sulfur and iron from a US coal with high iron and sulfur content. The experiment was conducted in a 10 L batch stirred tank reactor with a 10% pulp density of coal. The reactor was operated under mesophilic conditions, and aerobic conditions were maintained by sparging air into the reactor. After 35 days, about 64% of the pyrite and 21% of the pyritic sulfur had been removed from the coal. The findings of the present study indicate that the biodesulfurization process does have potential for treating coal with high pyrite and sulfur content. A good mass balance was also obtained, with a net loss of about 5%, showing the feasibility of the process for large-scale application.
Abstract: The aim of the current study is to develop a numerical
tool that is capable of achieving an optimum shape and design of
hyperbolic cooling towers based on coupling a non-linear finite
element model developed in-house and a genetic algorithm
optimization technique. The objective function is set to be the
minimum weight of the tower. The geometric modeling of the tower
is represented by means of B-spline curves. The finite element
method is applied to model the elastic buckling behaviour of a tower
subjected to wind pressure and dead load. The study is divided into
two main parts. The first part investigates the optimum shape of the
tower corresponding to minimum weight assuming constant
thickness. The study is extended in the second part by introducing the
shell thickness as one of the design variables in order to achieve an
optimum shape and design. Design, functionality and practicality
constraints are applied.
Abstract: The main objective of Automatic Generation Control (AGC) is to balance the total system generation against the system load and losses so that the desired frequency and power interchange with neighboring systems are maintained. Any mismatch between generation and demand causes the system frequency to deviate from its nominal value, and a high frequency deviation may lead to system collapse. This necessitates a very fast and accurate controller to maintain the nominal system frequency. This paper deals with a novel artificial intelligence (AI) technique, the Hybrid Neuro-Fuzzy (HNF) approach, for AGC. The advantage of this controller is that it can handle non-linearities and, at the same time, is faster than other conventional controllers. The effectiveness of the proposed controller in increasing the damping of local and inter-area modes of oscillation is demonstrated on a two-area interconnected power system. The results show that the intelligent controller has an improved dynamic response and is, at the same time, faster than the conventional controller.
Abstract: The well-known NP-complete Traveling Salesman Problem (TSP) is coded in genetic form. A software system is proposed to determine the optimum route for a TSP using a Genetic Algorithm technique. The system starts from a matrix of the calculated Euclidean distances between the cities to be visited by the traveling salesman and randomly chosen city orders as the initial population. New generations are then created repeatedly until a stopping criterion is reached. This search is guided by a solution evaluation function.
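The loop described in this abstract (distance matrix, random initial city orders, generational replacement guided by a tour-length evaluation function) can be sketched as below. The specific operators (order crossover, swap mutation, elitist selection) and all function names are illustrative assumptions, not the authors' exact design:

```python
import random

def route_length(route, dist):
    """Total closed-tour length for a city order (the solution evaluation function)."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [c for c in p2 if c not in middle]
    return rest[:a] + middle + rest[a:]

def mutate(route, rate=0.1):
    """Occasionally swap two cities to keep the population diverse."""
    route = route[:]
    if random.random() < rate:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return route

def tsp_ga(dist, pop_size=30, generations=200):
    """Evolve random city orders until the generation budget (stopping criterion) runs out."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))   # evaluation guides the search
        elite = pop[: pop_size // 2]                    # keep the best half
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda r: route_length(r, dist))
```

On a toy instance of four cities at the corners of a unit square, this sketch recovers the perimeter tour of length 4.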
Abstract: Network on Chip (NoC) has emerged as a promising on-chip communication infrastructure. Three-Dimensional Integrated Circuits (3D ICs) provide short interconnection lengths between layers and interconnect scalability in the third dimension, which can further improve the performance of NoC. Therefore, in this paper, a hierarchical cluster-based interconnect architecture is merged with the 3D IC. This interconnect architecture significantly reduces the number of long wires. Since this architecture has only approximately a quarter of the routers of a 3D mesh-based architecture, the average number of hops is smaller, which leads to lower latency and higher throughput. Moreover, the smaller number of routers decreases the area overhead. Meanwhile, dual links are inserted at communication bottlenecks to improve the performance of the NoC. Simulation results confirm our theoretical analysis and show the advantages of the proposed architecture in latency, throughput, and area when compared with the 3D mesh-based architecture.
Abstract: Considering payload, reliability, security, and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image; the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process is then presented, followed by the small modifications required at the turbo decoder end to extract the embedded data. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that of a one-time pad (OTP).
Abstract: In a complex project environment, project teams face multi-dimensional communication problems that can ultimately lead to project breakdown. Team performance varies between a face-to-face (FTF) environment and groups working remotely in a computer-mediated communication (CMC) environment. A brief review of the Input-Process-Output model suggested by James E. Driskell, Paul H. Radtke and Eduardo Salas in "Virtual Teams: Effects of Technological Mediation on Team Performance" (2003) has been done to develop the basis of this research. This model theoretically analyzes the effects of technological mediation on team processes such as cohesiveness, status and authority relations, counter-normative behavior, and communication. An empirical study described in this paper has been undertaken to test the cohesiveness of diverse project teams in a multi-national organization. This study uses both quantitative and qualitative techniques for data gathering and analysis, including interviews and questionnaires for data collection and graphical data representation for analyzing the collected data. Computer-mediated technology may impact team performance because of differences in cohesiveness among teams, and these differences may be moderated by factors such as the type of communication environment, the type of task, and the temporal context of the team. Based on the reviewed model, sets of hypotheses are devised and tested. This research reports on a study that compared team cohesiveness among virtual teams using CMC and non-CMC communication mediums. The findings suggest that CMC can help virtual teams increase cohesiveness among their members, making CMC an effective medium for increasing productivity and team performance.
Abstract: Many studies have focused on the nonlinear analysis
of electroencephalography (EEG) mainly for the characterization of
epileptic brain states. It is assumed that at least two states of the epileptic brain are possible: the interictal state, characterized by normal, apparently random, steady-state ongoing EEG activity; and the ictal state, characterized by the paroxysmal occurrence of synchronous oscillations, generally called a seizure in neurology.
The spatial and temporal dynamics of the epileptogenic process are still not completely clear, especially for the most challenging aspect of epileptology, which is the anticipation of the seizure. Despite all the efforts, we still do not know how, when, and why a seizure occurs. However, current studies bring strong evidence that the interictal-ictal state transition is not an abrupt phenomenon. Findings also indicate that it is possible to detect a preseizure phase.
Our approach is to use neural networks to detect interictal states and to predict from those states the upcoming seizure (ictal state). Analysis of the EEG signal based on neural networks is used to classify the EEG as either seizure or non-seizure. By applying prediction methods, it will be possible to predict the upcoming seizure from non-seizure EEG.
We will study patients admitted to the epilepsy monitoring unit for the purpose of recording their seizures; preictal, ictal, and postictal EEG recordings are available on such patients for analysis. The network will be trained on one body of samples and then validated using another. A third body of samples, distinct from the first two, is used to test the network for the achievement of optimum prediction. Several methods will be tried, including backpropagation ANN and RBF networks.
Abstract: This paper presents an integer frequency offset (IFO) estimation scheme for the 3GPP Long Term Evolution (LTE) downlink system. Firstly, the conventional joint detection method for IFO and sector cell index (CID) information is introduced. Secondly, an IFO estimation scheme that does not require explicit sector CID information is proposed, which reduces the time delay in comparison with the conventional joint method. The proposed method is also computationally efficient and has almost similar performance to the conventional method over the Pedestrian and Vehicular channel models.
Abstract: To date, English as a Second Language (ESL) educators involved in teaching language and communication to engineering students face an uphill task in developing graduate communicative competency. This challenge is accentuated by the apparent lack of English for Specific Purposes (ESP) materials for engineering students in the engineering curriculum. As such, most ESL educators are forced to play multiple roles, taking on tasks such as curriculum design, material writing, and teaching with limited knowledge of the disciplinary content. Previous research indicates that prospective professional engineers should possess several sub-sets of competency: technical, linguistic oral immediacy, meta-cognitive, and rhetorical explanatory competence. Another study revealed that engineering students need to be equipped with technical and linguistic oral immediacy competence. However, little is known about whether these competency needs are in line with educators' perceptions of communicative competence. This paper examines the best mix of communicative competence sub-sets for engineering students in technical oral presentations. For the purpose of this study, two groups of educators were interviewed: language and communication lecturers involved in teaching a speaking course, and content experts who assess students' technical oral presentations at the tertiary level. The findings indicate that these two groups differ in their perceptions.
Abstract: In this paper, we present a novel technique called the Self-Learning Expert System (SLES). Unlike an expert system, where an expert is needed to impart experience and knowledge to create the knowledge base, this technique tries to acquire the experience and knowledge automatically. To demonstrate this technique at work, a simulation of a mobile robot navigating through an environment with obstacles is implemented in Visual Basic. The mobile robot moves through this area without colliding with any obstacle and saves the path that it took. If the mobile robot has to go through a similar environment again, it applies this experience to move through more quickly, without having to check for collisions.
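The behavior described in this abstract, searching once with collision checks, saving the path, and replaying it in a similar environment, can be sketched on a grid world as below. The search strategy (breadth-first search), the class names, and the reuse criterion (saved path is still collision-free) are illustrative assumptions, not the paper's exact design:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search with explicit collision checks (the 'slow' first pass).

    grid is a 2D list; 0 = free cell, 1 = obstacle.
    """
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

class SelfLearningNavigator:
    """Saves solved paths and replays them instead of re-searching a similar environment."""
    def __init__(self):
        self.memory = {}

    def navigate(self, grid, start, goal):
        remembered = self.memory.get((start, goal))
        # Reuse the saved experience if the old path is still collision-free here.
        if remembered and all(grid[r][c] == 0 for r, c in remembered):
            return remembered
        path = find_path(grid, start, goal)
        if path:
            self.memory[(start, goal)] = path   # save the experience for next time
        return path
```

The second call to `navigate` on the same or a similar obstacle layout skips the search entirely and returns the remembered path.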
Abstract: Nowadays, scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. This research suggests the use of Service Oriented Architecture (SOA) to integrate biological data from different data sources. This work examines whether SOA can solve the problems facing the integration process and whether biologists can access the biological data more easily. There are several methods for implementing SOA, of which web services are the most popular. The Microsoft .NET Framework is used to implement the proposed architecture.
Abstract: In this paper, a novel and fast algorithm for segmental and subsegmental lung vessel segmentation is introduced using Computed Tomography Angiography (CTA) images. This process is quite important, especially in the detection of pulmonary embolism, lung nodules, and interstitial lung disease. The applied method is realized in five steps. In the first step, lung segmentation is achieved. In the second, the images are thresholded and the differences between the images are detected. In the third, the left and right lungs are combined with the differences obtained in the second step, and the Exact Lung Image (ELI) is achieved. In the fourth, the image thresholded for vessels is combined with the ELI. Lastly, identification and segmentation of segmental and subsegmental lung vessels are carried out using the image obtained in the fourth step. Radiologists judged the performance of the applied method to be quite good, and it gives results that are medically adequate for surgical use.
Abstract: In this paper, a new learning approach for network intrusion detection using a naïve Bayesian classifier and the ID3 algorithm is presented, which identifies effective attributes from the training dataset, calculates the conditional probabilities for the best attribute values, and then correctly classifies the examples of the training and testing datasets. Most current intrusion detection datasets are dynamic, complex, and contain a large number of attributes. Some of the attributes may be redundant or contribute little to detection. It has been successfully demonstrated that significant attribute selection is important for designing a real-world intrusion detection system (IDS). The purpose of this study is to identify effective attributes from the training dataset to build a classifier for network intrusion detection using data mining algorithms. The experimental results on the KDD99 benchmark intrusion detection dataset demonstrate that this new approach achieves high classification rates and reduces false positives using limited computational resources.
Abstract: In a previous work, we presented the numerical
solution of the two dimensional second order telegraph partial
differential equation discretized by the centred and rotated five-point
finite difference discretizations, namely the explicit group (EG) and
explicit decoupled group (EDG) iterative methods, respectively. In
this paper, we utilize a domain decomposition algorithm on these
group schemes to divide the tasks involved in solving the same
equation. The objective of this study is to describe the development
of the parallel group iterative schemes under OpenMP programming
environment as a way to reduce the computational costs of the
solution processes using multicore technologies. A detailed
performance analysis of the parallel implementations of the point and group iterative schemes will be reported and discussed.
Abstract: One of the popular methods for the recognition of facial expressions such as happiness, sadness, and surprise is based on the deformation of facial features. Motion vectors which show these deformations can be specified by optical flow. In this method, for detecting emotions, the resulting set of motion vectors is compared with standard deformation templates caused by facial expressions. In this paper, a new method is introduced to compute the degree of likeness in order to make decisions based on the importance of the vectors obtained from the optical flow approach. For finding the vectors, an efficient optical flow method developed by Gautama and Van Hulle [17] is used. The suggested method has been examined on the Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method correctly recognizes the facial expressions in 94% of the case studies. The results also show that only a small number of image frames (three frames) is sufficient to detect facial expressions with a success rate of about 83.3%. This is a significant improvement over the available methods.