Abstract: The purpose of this study is to investigate the
efficiency of a double-layer roof in collecting solar energy, with
applications such as raising the high-end temperature of an
organic Rankine cycle (ORC). A by-product of the solar roof is a
reduction in building air-conditioning loads. An experimental
apparatus was arranged to evaluate the effectiveness of the solar
roof in absorbing solar energy. The flow channel is basically formed
by an aluminum plate on top of a plywood plate. The geometric
configurations for which the energy-absorbing effects are analyzed
include: a bare uncovered aluminum plate, a glass-covered aluminum
plate, a glass-covered/black-painted aluminum plate, a plate with
variable lengths, a flow channel stuffed with material (in an attempt
to enhance heat conduction), and a flow channel with variable
slant angles.
slanted angles. The experimental results show that the efficiency of
energy collection varies from 0.6 % to 11 % for the geometric
configurations mentioned above. An additional study is carried out
using CFD simulation to investigate the effects of fins on the
aluminum plate. It shows that due to vastly enhanced heat conduction,
the efficiency can reach ~23 % if 50 fins are installed on the aluminum
plate. The study shows that a double-layer roof can efficiently absorb
solar energy and substantially reduce building air-conditioning
loads. On the high end of the organic Rankine cycle, a solar pond is
used in place of the warm surface seawater that drives the ORC in
ocean thermal energy conversion (OTEC). The energy collected from the
double-layer solar roof can be pumped into the pond to raise the pond
temperature; the effective pond surface area is thereby increased by
nearly one-fourth of the total area of the double-layer solar roof.
The effect of raising the solar pond temperature is especially
prominent if double-layer solar roofs are installed throughout a
community area.
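The efficiency figures above follow from the usual collector energy balance: useful heat carried off by the channel air flow divided by the solar power incident on the plate. The sketch below evaluates that ratio with made-up numbers, not the study's measurements:

```python
# Collector energy balance: efficiency = useful heat removed by the air
# flow divided by solar power incident on the plate. All numbers below
# are hypothetical, not the study's measurements.

def collection_efficiency(m_dot, cp, t_out, t_in, irradiance, area):
    """eta = m_dot * cp * (T_out - T_in) / (G * A)."""
    useful_heat = m_dot * cp * (t_out - t_in)  # W
    incident_power = irradiance * area         # W
    return useful_heat / incident_power

# Example: 0.02 kg/s of air (cp ~ 1005 J/(kg K)) heated by 4 K on a
# 2 m^2 plate under 900 W/m^2 irradiance.
eta = collection_efficiency(0.02, 1005.0, 304.0, 300.0, 900.0, 2.0)
print(f"efficiency = {eta:.3f}")  # about 0.045, i.e. ~4.5 %
```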
Abstract: This study proposes a multi-response surface
optimization problem (MRSOP) for determining the proper choices
of a process parameter design (PPD) decision problem in the noisy
environment of a grease position process in the electronics industry.
The proposed model attempts to maximize dual process responses:
the mean parts between failures on the left and right processes. The
conventional modified simplex method and its hybridization of the
stochastic operator from the hunting search algorithm are applied to
determine the proper levels of controllable design parameters
affecting the quality performances. A numerical example
demonstrates the feasibility of applying the proposed model to the
PPD problem via two iterative methods. Its advantages are also
discussed. Numerical results demonstrate that the hybridization is
superior to the conventional method. In this study, the mean parts
between failures on the left and right lines improved by
approximately 39.51%. All experimental data presented in this
research have been normalized to disguise actual performance
measures as raw data are considered to be confidential.
Abstract: In this research we show that the dynamics of the action potential in a cell can be modeled as a linear combination of the dynamics of the gating state variables, and that the modeling error is negligible. Our findings can be used to simplify cell models and reduce computational burden; that is, they are useful for simulating action-potential propagation in large-scale computations such as tissue modeling. We verified our finding using several cell models.
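The claim above reduces to an ordinary least-squares problem: find coefficients so that a linear combination of the gating traces reproduces the voltage trace. The sketch below demonstrates only the fitting machinery, on synthetic gating-like traces rather than a real cell model:

```python
# Sketch of the fitting step only: express a voltage trace as a linear
# combination V(t) ~ a*m(t) + b*h(t) + c*n(t) + d via least squares.
# The traces below are synthetic stand-ins, not a real cell model.
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

ts = [i * 0.05 for i in range(200)]
m = [1 - math.exp(-t) for t in ts]       # gating-like traces
h = [math.exp(-0.5 * t) for t in ts]
n_g = [1 - math.exp(-0.2 * t) for t in ts]
true_c = [30.0, -12.0, 25.0, -65.0]
V = [true_c[0] * mi + true_c[1] * hi + true_c[2] * ni + true_c[3]
     for mi, hi, ni in zip(m, h, n_g)]

# Normal equations X^T X c = X^T V with design matrix X = [m h n 1]
X = list(zip(m, h, n_g, [1.0] * len(ts)))
XtX = [[sum(xi[i] * xi[j] for xi in X) for j in range(4)] for i in range(4)]
XtV = [sum(xi[i] * vi for xi, vi in zip(X, V)) for i in range(4)]
coef = solve(XtX, XtV)
print([round(c, 3) for c in coef])  # recovers [30.0, -12.0, 25.0, -65.0]
```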
Abstract: The purpose of this study is to introduce a new
interface program to calculate dose distributions with the Monte Carlo
method in complex heterogeneous systems, such as organs or tissues,
in proton therapy. This interface program was developed in MATLAB
and includes a user-friendly graphical user interface with several
tools, such as image-property adjustment and results display. The
quadtree decomposition technique was used as an image segmentation
algorithm to create optimal geometries from computed tomography (CT)
images for proton-beam dose calculations. The result of this
technique is a set of non-overlapping squares of different sizes in
every image. In this way,
the resolution of image segmentation is high enough in and near
heterogeneous areas to preserve the precision of dose calculations
and is low enough in homogeneous areas to reduce the number of
cells directly. Furthermore, a cell-reduction algorithm can be used to combine neighboring cells of the same material. This method was validated in two ways: first, against experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) of Tohoku University, and second, against data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while a region of interest is selected manually, and it plots the dose distribution of the proton beam superimposed onto the CT images.
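The quadtree decomposition mentioned above can be sketched in a few lines: a square block is kept whole if it is sufficiently homogeneous and is otherwise split into four quadrants, so fine cells appear only near heterogeneities. A minimal illustration on a toy image (the intensity-range homogeneity test is an assumption; the paper's criterion may differ):

```python
# Quadtree decomposition sketch: keep a square block whole if its
# intensity range is within a threshold, otherwise split it into four
# quadrants and recurse. Fine cells then cluster near heterogeneities.

def quadtree(img, x, y, size, thresh, out):
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= thresh:
        out.append((x, y, size))  # homogeneous enough: one cell
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree(img, x + dx, y + dy, half, thresh, out)

# 8x8 toy image: uniform background with a bright 2x2 patch in a corner
img = [[0] * 8 for _ in range(8)]
for j in range(2):
    for i in range(2):
        img[j][i] = 100

blocks = []
quadtree(img, 0, 0, 8, 10, blocks)
print(len(blocks))  # 7 cells: one bright 2x2, three 2x2 and three 4x4 zeros
```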
Abstract: Motor imagery classification provides an important basis for designing brain-machine interfaces (BMI). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control their EEG through imaginary mental tasks enables them to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on the Elman recurrent neural network and the functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional backpropagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance, it is observed that the BP algorithm has a higher average classification rate of 93.5%, while the PSO algorithm has better training time and maximum classification. The proposed methods promise to provide a useful alternative general procedure for motor imagery classification.
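The principal-feature step can be illustrated independently of the EEG data: the first principal component is the dominant eigenvector of the covariance matrix, obtainable by power iteration. The sketch below uses illustrative 2-D points, not recorded EEG:

```python
# Sketch of the feature-extraction idea: project samples onto their
# first principal component. Power iteration on the 2x2 covariance
# matrix stands in for a full PCA; the data points are illustrative.
import math

pts = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.1)]
n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
centered = [(x - mx, y - my) for x, y in pts]

# sample covariance matrix entries
cxx = sum(x * x for x, _ in centered) / n
cyy = sum(y * y for _, y in centered) / n
cxy = sum(x * y for x, y in centered) / n

# power iteration for the dominant eigenvector
v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = math.hypot(*w)
    v = (w[0] / norm, w[1] / norm)

print(v)  # close to (0.707, 0.707): the diagonal direction
scores = [x * v[0] + y * v[1] for x, y in centered]  # 1-D principal features
```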
Abstract: An edge-based local search algorithm, called ELS, is proposed for the maximum clique problem (MCP), a well-known combinatorial optimization problem. ELS is a two-phase local search method that effectively finds near-optimal solutions for the MCP. A parameter 'support' of vertices defined in ELS greatly reduces the number of random selections among vertices, as well as the number of iterations and the running time. Computational results on BHOSLIB and DIMACS benchmark graphs indicate that ELS is capable of achieving state-of-the-art performance for the maximum clique problem with reasonable average running times.
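For readers unfamiliar with local search on cliques, the sketch below shows a much-simplified relative of such a method (greedy construction plus a (1,2)-swap improvement step); it does not implement the ELS 'support' mechanism itself:

```python
# Much-simplified clique local search: greedily grow a clique, then try
# to improve it by removing one vertex and adding two. This sketches
# the local-search idea only, not the ELS 'support' mechanism.
import itertools

def is_clique(g, nodes):
    return all(b in g[a] for a, b in itertools.combinations(nodes, 2))

def local_search_clique(g):
    # greedy construction: repeatedly add the highest-degree candidate
    clique = set()
    cand = set(g)
    while cand:
        v = max(cand, key=lambda u: len(g[u]))
        clique.add(v)
        cand = {u for u in cand if u != v and v in g[u]}
    # (1,2)-swap improvement: drop one vertex, try to add two others
    improved = True
    while improved:
        improved = False
        for v in list(clique):
            rest = clique - {v}
            adds = [u for u in g if u not in clique
                    and all(w in g[u] for w in rest)]
            for a, b in itertools.combinations(adds, 2):
                if b in g[a]:
                    clique = rest | {a, b}
                    improved = True
                    break
            if improved:
                break
    return clique

# toy graph: a 4-clique {0,1,2,3} plus a pendant path 3-4-5
g = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3},
     3: {0, 1, 2, 4}, 4: {3, 5}, 5: {4}}
c = local_search_clique(g)
print(sorted(c), is_clique(g, c))  # [0, 1, 2, 3] True
```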
Abstract: In this paper, we first describe the negative
hypergeometric distribution and its properties. We then use the
w-function and the Stein identity to give a result on the Poisson
approximation to the negative hypergeometric distribution, in terms
of the total variation distance between the negative hypergeometric
and Poisson distributions and its upper bound.
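The quantity being bounded can be computed directly for small parameters: the total variation distance between a negative hypergeometric distribution and the Poisson distribution with the same mean. A numerical sketch (the parameters are examples, not the paper's):

```python
# Total variation distance between the negative hypergeometric
# distribution and the Poisson distribution matched to its mean.
# The parameters below are examples, not taken from the paper.
import math

def nhg_pmf(N, K, r):
    """P(X = k): k of the K 'successes' drawn from a population of N
    before the r-th 'failure', sampling without replacement."""
    denom = math.comb(N, K)
    return [math.comb(k + r - 1, k) * math.comb(N - r - k, K - k) / denom
            for k in range(K + 1)]

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

N, K, r = 50, 10, 5
p = nhg_pmf(N, K, r)
lam = r * K / (N - K + 1)  # mean of the negative hypergeometric
q = [poisson_pmf(lam, k) for k in range(K + 1)]
# Poisson mass beyond the finite support also contributes to the distance
tv = 0.5 * (sum(abs(pi - qi) for pi, qi in zip(p, q)) + (1 - sum(q)))
print(f"lambda = {lam:.4f}, d_TV = {tv:.4f}")
```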
Abstract: This research was conducted in the Pua Watershed, which is located in the Upper Nan River Basin in Nan province, Thailand. The Nan River Basin originates in Nan province and comprises many tributary streams that provide the inflow to the Sirikit Dam, whose large reservoir has a storage capacity of 9,510 million cubic meters. The problems common to most watersheds were found here, i.e., water-supply shortages for consumption and agricultural use, deteriorating water quality, floods and landslides including debris flows, and unstable riverbanks. The Pua Watershed is one of several small river basins that flow into the Nan River Basin. The watershed covers 404 km2, representing 61.5% of the Pua District, 18.2% of the Upper Nan Basin, and 1.2% of the whole Nan River Basin. The Pua River is a main stream producing year-round streamflow that supplies the Pua District and provides an inflow to the Upper Nan Basin. Its length is approximately 56.3 km, with a measured average channel slope of 1.9%. A diversion weir, the Pua weir, bounds the plain and mountainous areas; the upstream watershed, with a drainage area of 149 km2, has a very steep riverbed slope of 2.9%, while a mild riverbed slope of 0.2% is found in a 20.3 km river reach downstream of this weir, which is considered a gauged basin. However, the major branch streams of the Pua River, namely the Nam Kwang and Nam Koon, are ungauged catchments with drainage areas of 86 and 35 km2, respectively. These upstream watersheds produce runoff through the three streams downstream of the Pua, Jao, and Kang weirs, with an average annual runoff of 578 million cubic meters. They were analyzed using both statistical data at the Pua weir and data simulated with the hydrologic modeling system (HEC-HMS), which was applied to the remaining ungauged basins, since the Kwang and Koon catchments lack hydrological data, including streamflow and rainfall.
Therefore, HEC-HMS modeling with Snyder's synthetic hydrograph and transposition methods was applied to those areas, using hydrological parameters calibrated for the area upstream of the Pua weir, where streamflow and rainfall were recorded daily and continuously during 2008-2011. The results showed that the simulated daily streamflow, summed to annual runoff for 2008, 2010, and 2011, fitted the observed annual runoff at the Pua weir using simple linear regression, with satisfactory correlations (R2 of 0.64, 0.62, and 0.59, respectively). The sensitivity of the simulation results stems from the difficulty of using the calibrated parameters, i.e., lag time, peaking coefficient, initial losses, and uniform loss rates, and from some missing daily observations. These calibrated parameters were then applied to simulate the two ungauged catchments and the downstream catchments.
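The goodness-of-fit step above is a simple linear regression of observed on simulated annual runoff, with R2 as the criterion. A sketch with made-up runoff values (not the study's data):

```python
# Simple linear regression R^2 between simulated and observed runoff.
# The runoff values below are hypothetical, not the study's data.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

simulated = [520.0, 610.0, 555.0, 590.0]  # hypothetical MCM/year
observed = [540.0, 600.0, 570.0, 575.0]
print(f"R^2 = {r_squared(simulated, observed):.2f}")
```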
Abstract: Multi-user interference (MUI) is the main cause of performance degradation in the spectral amplitude coding optical code division multiple access (SAC-OCDMA) system. MUI increases with the number of simultaneous users, resulting in a higher bit-error probability and limiting the maximum number of simultaneous users. Phase-induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source and is aggravated by MUI, also severely limits system performance and should be addressed as well. Since MUI is caused by the interference of simultaneous users, keeping MUI as small as possible is desirable. In this paper, an extensive study of system performance in terms of MUI and PIIN reduction is presented. Vector combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, as the received power increases, the PIIN noise for all codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit-error probability. A comparison between the proposed code and existing codes such as modified frequency hopping (MFH) and modified quadratic congruence (MQC) has been carried out.
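In SAC-OCDMA performance analyses, the bit-error probability is commonly obtained from the SNR through the Gaussian approximation BER = (1/2) erfc(sqrt(SNR/8)). The sketch below evaluates only that mapping; it does not reproduce this paper's PIIN/MUI noise model:

```python
# Gaussian-approximation BER mapping commonly used in SAC-OCDMA
# analyses: BER = 0.5 * erfc(sqrt(SNR / 8)). This only evaluates the
# mapping; it is not the paper's PIIN/MUI noise model.
import math

def ber_from_snr(snr):
    return 0.5 * math.erfc(math.sqrt(snr / 8))

for snr_db in (10, 15, 20):
    snr = 10 ** (snr_db / 10)  # convert dB to linear SNR
    print(f"SNR = {snr_db} dB -> BER = {ber_from_snr(snr):.2e}")
```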
Abstract: An adaptive spatial Gaussian mixture model is proposed for clustering-based color image segmentation. A new clustering objective function that incorporates spatial information is introduced in the Bayesian framework. The weighting parameter controlling the importance of spatial information is made adaptive to the image content, to augment smoothness in piecewise-homogeneous regions and diminish the edge-blurring effect; hence the name adaptive spatial finite mixture model. The proposed approach is compared with the spatially variant finite mixture model for pixel labeling. Experimental results on synthetic images and the Berkeley dataset demonstrate that the proposed method is effective in improving segmentation and can be employed in various practical image-content understanding applications.
Abstract: In steady-state operation, voltage stability is
maintained by switching various controllers scattered throughout
the power network. When a contingency occurs, whether forced or
unforced, the dispatcher must alleviate the problem with minimum
time, cost, and effort; a persistent problem may lead to a blackout.
The dispatcher must choose the appropriate switching of controllers,
in terms of type, location, and size, to remove the contingency and
maintain voltage stability; wrong switching may worsen the problem
and itself lead to a blackout. This work proposes and applies Fuzzy
C-Means Clustering (FCMC) to assist the dispatcher in decision
making. FCMC is used in static voltage stability to map a contingency
instantaneously to a set of controllers, from which the types,
locations, and amounts of switching are induced.
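The FCMC step can be sketched with the standard fuzzy c-means update equations (fuzzifier m = 2): alternate between membership and center updates. The 1-D data below are illustrative, not power-system measurements:

```python
# Minimal fuzzy c-means (fuzzifier m = 2) on 1-D data, standing in for
# the FCMC that groups contingencies with controller sets. The data
# points and initialisation are illustrative, not power-system values.

def fcm(data, c, iters=50, m=2.0):
    centers = data[:c]  # naive initialisation: first c points
    u = []
    for _ in range(iters):
        # membership of each point in each cluster
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                for k in range(c)) for j in range(c)])
        # centers as membership-weighted means
        centers = [sum(u[i][j] ** m * data[i] for i in range(len(data)))
                   / sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centers, u

data = [1.0, 5.0, 1.2, 0.8, 5.3, 4.7]  # two obvious groups near 1 and 5
centers, u = fcm(data, c=2)
print([round(v, 2) for v in sorted(centers)])  # near [1.0, 5.0]
```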
Abstract: The aim of this research is to design a collaborative
framework that integrates risk analysis activities into the geospatial
database design (GDD) process. Risk analysis is rarely undertaken
iteratively as part of the present GDD methods in conformance to
requirement engineering (RE) guidelines and risk standards.
Accordingly, when risk analysis is performed during the GDD, some
foreseeable risks may be overlooked and not reach the output
specifications, especially when user intentions are not systematically
collected. This may lead to ill-defined requirements and, ultimately,
to higher risks of geospatial data misuse. The adopted approach consists
of 1) reviewing risk analysis process within the scope of RE and
GDD, 2) analyzing the challenges of risk analysis within the context
of GDD, and 3) presenting the components of a risk-based
collaborative framework that improves the collection of the
intended/forbidden usages of the data and helps geo-IT experts to
discover implicit requirements and risks.
Abstract: A lightning surge causes traveling waves and temporary overvoltages in the transmission-line system. Lightning is the most damaging cause of destruction of transmission lines and installed equipment, so it is necessary to study and analyze temporary overvoltages when designing and placing surge arresters. This analysis describes the shape of the lightning wave on a 115 kV transmission line in Thailand, using the ATP/EMTP program to model the transmission line and the lightning surge. Because of the program's limitations, the transmission-line geometry and surge parameters must be calculated manually from the handbook to obtain the closest parameter values. In addition, to study the effects on the surge protector when lightning strikes, the surge-arrester model must be correct and standardized according to the Metropolitan Electricity Authority's standard. The calculated results were also compared with field data. The analysis shows that the temporary overvoltage rises to 326.59 kV on the struck line when no surge arrester is installed in the system, whereas it is 182.83 kV when a surge arrester is installed, with the duration of the traveling wave also reduced. The surge arrester should be installed as near to the transformer as possible. Moreover, it is necessary to know the right installation distance and the size of the surge arrester to prevent temporary overvoltages effectively.
Abstract: Testable software has two inherent properties: observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows the creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, for creating testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build in the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions, and the observation of internal execution details at the unit, integration, and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test-coverage reports.
Abstract: For higher-order multiplications, a huge number of
adders or compressors is required to perform the partial product
addition. We have reduced the number of adders by introducing
special kinds of adders capable of adding five/six/seven bits per
decade; these adders are called compressors. The binary counter
property has been merged with the compressor property to develop
high-order compressors. Use of these compressors permits the
reduction of the vertical critical paths. A 16×16-bit multiplier has
been developed using these compressors. These compressors make the
multiplier faster than the conventional design, which uses 4-2 and
3-2 compressors.
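Behaviorally, such a compressor is a population counter: it counts the ones among up to seven equally weighted input bits and re-encodes the count in three output bits of weights 4, 2, and 1. A bit-level model:

```python
# Behavioral model of a 7:3 compressor: count the ones among up to
# seven input bits of equal weight and re-encode the count as three
# bits (weights 4, 2, 1), exactly like a binary counter.

def compressor_7_3(bits):
    assert len(bits) <= 7 and all(b in (0, 1) for b in bits)
    count = sum(bits)  # population count of the column
    return (count >> 2) & 1, (count >> 1) & 1, count & 1  # (c2, c1, s)

print(compressor_7_3([1, 1, 1, 0, 1, 1, 1]))  # 6 ones -> (1, 1, 0)
```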
Abstract: The mechanical and tribological properties of WC-Co
coatings are strongly affected by their hardness and elasticity.
The results revealed the effect of spraying distance on the
microhardness and elastic modulus of the coatings. Metallurgical
studies were made on the coated samples using optical microscopy
and scanning electron microscopy (SEM).
Abstract: In this paper we investigate a number of Internet
congestion control algorithms that have been developed in recent
years. We found that many of these algorithms were designed to treat
Internet traffic merely as a train of consecutive packets. A few
other algorithms were specifically tailored to handle congestion
caused by media traffic carrying audiovisual content; this latter
set of algorithms is considered aware of the nature of the media
content. In this context, we briefly explain a number of congestion
control algorithms and categorize them into two groups: i) media
congestion control algorithms, and ii) common congestion control
algorithms. We recommend the use of media congestion control
algorithms because they are media content-aware, rather than the
common type of algorithms that manipulate such traffic blindly. We
show that the spread of such media content-aware algorithms over the
Internet will lead to better congestion control in the coming years,
owing to the emerging era of digital convergence, in which media
traffic will form the majority of Internet traffic.
Abstract: Bridges are one of the main components of
transportation networks. They should remain functional before and
after an earthquake for emergency services. Therefore, we need to
assess the seismic performance of bridges under different seismic
loadings. The fragility curve is a popular tool in seismic evaluation. The
fragility curves are conditional probability statements, which give the
probability of a bridge reaching or exceeding a particular damage
level for a given intensity level. In this study, the seismic
performance of a two-span simply supported concrete bridge is
assessed. Owing to the usual lack of empirical data, an analytical
fragility curve was developed from the results of dynamic analyses
of the bridge subjected to different near-fault time histories.
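Analytical fragility curves of this kind are commonly expressed as a lognormal CDF of the intensity measure. The sketch below evaluates such a curve with hypothetical median and dispersion values, not the study's fitted parameters:

```python
# Fragility curve as a lognormal CDF of the intensity measure:
# P(damage >= state | IM) = Phi(ln(IM / median) / beta).
# Median and dispersion below are illustrative, not the study's values.
import math

def fragility(im, median, beta):
    return 0.5 * (1 + math.erf(math.log(im / median) / (beta * math.sqrt(2))))

median, beta = 0.45, 0.6  # hypothetical PGA median (g) and dispersion
for pga in (0.1, 0.45, 1.0):
    print(f"PGA = {pga:.2f} g -> P(exceed) = {fragility(pga, median, beta):.3f}")
```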
Abstract: In this paper, an improved technique for contingency
ranking using an artificial neural network (ANN) is presented. The
proposed approach is based on multi-layer perceptrons, trained by
backpropagation, applied to contingency analysis. Severity indices in dynamic
stability assessment are presented. These indices are based on the
concept of coherency and three dot products of the system variables.
It is well known that some indices work better than others for a
particular power system. This paper, along with test results on
several different systems, demonstrates that combining indices
with an ANN provides better ranking than a single index. The
presented results are obtained using the power system simulator
PSS/E and MATLAB 6.5 software.
Abstract: The heart's electric field can be measured anywhere on
the surface of the body (ECG). When individuals touch, one person's
ECG signal can be registered in the other person's EEG and elsewhere
on that person's body. The aim of this study was to test the
hypothesis that physical contact (hand-holding) between two persons
changes their heart rate variability. Subjects were sixteen healthy
females (ages 20-26) divided into eight pairs; each pair consisted
of two friends who passed J. Sternberg's intimacy test. The ECG of
the two subjects in each pair was acquired for 5 minutes before
hand-holding (control condition) and for 5 minutes while they held
hands (experimental condition). Heart rate variability signals were
then extracted from the subjects' ECGs and analyzed in linear
feature spaces (time and frequency domains) and a nonlinear feature
space. Considering the results, we conclude that physical contact
(hand-holding between two friends) increases parasympathetic
activity, as indicated by increases in SD1, SD1/SD2, HF and MF
power (p
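The nonlinear indices SD1 and SD2 cited above are the dispersions of the Poincaré plot (RR[i+1] versus RR[i]) across and along the identity line. A sketch on a synthetic RR series:

```python
# Poincare-plot HRV indices: SD1 and SD2 are the dispersions of
# (RR[i], RR[i+1]) points across and along the identity line.
# The RR series below is synthetic, not a recorded ECG.
import math
import statistics

def poincare_sd1_sd2(rr):
    diffs = [b - a for a, b in zip(rr, rr[1:])]    # successive differences
    sums = [b + a for a, b in zip(rr, rr[1:])]
    sd1 = statistics.pstdev(diffs) / math.sqrt(2)  # short-term variability
    sd2 = statistics.pstdev(sums) / math.sqrt(2)   # long-term variability
    return sd1, sd2

rr = [0.80, 0.84, 0.79, 0.85, 0.78, 0.86, 0.81, 0.83]  # seconds
sd1, sd2 = poincare_sd1_sd2(rr)
print(f"SD1 = {sd1*1000:.1f} ms, SD2 = {sd2*1000:.1f} ms, "
      f"SD1/SD2 = {sd1/sd2:.2f}")
```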