Abstract: The evaporator is an important and widely used heat
exchanger in the air conditioning and refrigeration industries. Different
methods have been used by investigators to increase the heat transfer
rates in evaporators. One of the passive techniques to enhance heat
transfer coefficient is the application of microfin tubes. The
mechanism of heat transfer augmentation in microfin tubes is
dependent on the flow regime of the two-phase flow. Therefore, many
investigations of the flow patterns for in-tube evaporation have been
reported in the literature. The gravitational force, surface tension and
the vapor-liquid interfacial shear stress are known as three dominant
factors controlling the vapor and liquid distribution inside the tube. A
review of the existing literature reveals that the previous
investigations were concerned with the two-phase flow pattern for
flow boiling in horizontal tubes [12], [9]. Therefore, the objective of
the present investigation is to obtain information about the two-phase
flow patterns for evaporation of R-134a inside horizontal smooth and
microfin tubes. Heat transfer during flow boiling of R-134a inside
horizontal microfin and smooth tubes has also been investigated
experimentally. The heat transfer coefficients for annular flow in the
smooth tube are shown to agree well with Gungor and Winterton's
correlation [4]. All the flow patterns observed in the tests can be
divided into three dominant regimes, i.e., stratified-wavy flow,
wavy-annular flow and annular flow. Experimental data are plotted in
two kinds of flow maps, i.e., a Weber number for the vapor versus
Weber number for the liquid flow map, and a mass flux versus vapor
quality flow map. The transitions from wavy-annular flow to annular
or stratified-wavy flow are identified in the flow maps.
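The Weber-number coordinates of the first flow map can be sketched as below. The definitions We_v = (Gx)²D/(ρ_v σ) and We_l = (G(1−x))²D/(ρ_l σ) and the rough R-134a property values are illustrative assumptions, not figures taken from the study.

```python
# Sketch of Weber-number coordinates for a two-phase flow map.
# Definitions and the rough R-134a property values are illustrative
# assumptions, not values from the study.

def weber_numbers(G, x, D, rho_v, rho_l, sigma):
    """Vapor- and liquid-phase Weber numbers for flow-map plotting.

    G: mass flux [kg/m^2 s], x: vapor quality [-], D: tube diameter [m],
    rho_v/rho_l: vapor/liquid densities [kg/m^3], sigma: surface tension [N/m].
    """
    we_v = (G * x) ** 2 * D / (rho_v * sigma)
    we_l = (G * (1.0 - x)) ** 2 * D / (rho_l * sigma)
    return we_v, we_l

# Example point: G = 200 kg/m^2 s, x = 0.5 in an 8 mm tube, with rough
# R-134a saturation properties near 0 degC (assumed).
we_v, we_l = weber_numbers(G=200.0, x=0.5, D=0.008,
                           rho_v=14.4, rho_l=1295.0, sigma=0.0115)
```

Since the liquid density far exceeds the vapor density, the vapor Weber number dominates at equal quality, which is why the two axes separate the regimes.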
Abstract: The occurrence of missing values in databases is a serious problem for data mining tasks, degrading data quality and the accuracy of analyses. The area lacks standardized experimental procedures for treating missing values, which makes it difficult to compare evaluations across studies because common parameters are not used. This paper proposes a testbed intended to facilitate the implementation of experiments and to provide unbiased parameters, using available datasets and suitable performance metrics, in order to streamline the evaluation and comparison of state-of-the-art missing-value treatments.
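One way such a testbed can score a treatment is to hide known entries, impute them, and compare against the ground truth. The sketch below uses naive mean imputation and RMSE purely as illustrative stand-ins for whichever treatments and metrics the testbed would plug in; all names are hypothetical.

```python
# Minimal sketch of a missing-value evaluation loop: hide known entries,
# apply a treatment (naive mean imputation as a stand-in), and score
# against ground truth with RMSE. All names are illustrative only.
import math
import random

def inject_missing(values, fraction, seed=0):
    """Return a copy with a fraction of entries replaced by None."""
    rng = random.Random(seed)
    masked = list(values)
    hidden = rng.sample(range(len(values)), int(fraction * len(values)))
    for i in hidden:
        masked[i] = None
    return masked, hidden

def mean_impute(values):
    """Replace every None with the mean of the observed entries."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def rmse(truth, imputed, hidden):
    """Root mean square error on the artificially hidden positions only."""
    errs = [(truth[i] - imputed[i]) ** 2 for i in hidden]
    return math.sqrt(sum(errs) / len(errs))

truth = [float(i % 10) for i in range(100)]
masked, hidden = inject_missing(truth, fraction=0.2)
score = rmse(truth, mean_impute(masked), hidden)
```

Because the missingness is injected artificially, the ground truth is known and the same hidden positions can be reused to compare treatments under identical conditions.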
Abstract: Many works have been carried out to compare the
efficiency of several goodness of fit procedures for identifying
whether or not a particular distribution could adequately explain a
data set. In this paper a study is conducted to investigate the power
of several goodness of fit tests such as Kolmogorov-Smirnov (KS),
Anderson-Darling (AD), Cramér-von Mises (CV) and a proposed
modification of the Kolmogorov-Smirnov goodness of fit test which
incorporates a variance stabilizing transformation (FKS). The
performances of these selected tests are studied under simple
random sampling (SRS) and Ranked Set Sampling (RSS). This
study shows that, in general, the Anderson-Darling (AD) test
performs better than the other GOF tests. However, there are some
cases where the proposed test performs as well as the AD test.
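The baseline KS statistic compared in such power studies can be computed as below. The standard-normal null and the erf-based CDF are assumptions for illustration; the paper's FKS variant and the ranked set sampling designs are not reproduced here.

```python
# One-sample Kolmogorov-Smirnov statistic against a standard normal
# null. This is only the plain KS baseline; the FKS modification and
# ranked set sampling designs from the study are not reproduced here.
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ks_statistic(sample):
    """D = sup_x |F_n(x) - F_0(x)| for the empirical CDF F_n."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f0 = normal_cdf(x)
        # The supremum is attained just before or at each order statistic.
        d = max(d, abs(i / n - f0), abs((i - 1) / n - f0))
    return d

d = ks_statistic([-1.2, -0.4, 0.1, 0.3, 0.9, 1.5])
```

A power study then repeats this over many simulated samples from an alternative distribution and counts how often D exceeds the critical value.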
Abstract: Preliminary results for a new flat plate test
facility are presented here in the form of Computational Fluid Dynamics (CFD) simulations, flow visualisation, pressure measurements and thermal anemometry. The results from the CFD and flow
visualisation show the effectiveness of the plate design, with the trailing edge flap anchoring the stagnation point on the working surface and reducing the extent of the leading edge separation. The flow visualisation technique demonstrates the
two-dimensionality of the flow at the location where the
thermal anemometry measurements are obtained.
Measurements of the boundary layer mean velocity profiles compare favourably with the Blasius solution, thereby allowing for comparison of future measurements with the
wealth of data available on zero pressure gradient Blasius
flows. Results for the skin friction, boundary layer thickness,
frictional velocity and wall shear stress are shown to agree well with the Blasius theory, with a maximum experimental deviation from theory of 5%. Two turbulence generating grids
have been designed and characterised, and it is shown that the turbulence decay downstream of both grids agrees with established correlations. It is also demonstrated that there is
little dependence of turbulence on the freestream velocity.
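The Blasius quantities used for the comparison can be estimated from the standard laminar flat-plate correlations; the formulas below are textbook relations and the air properties are assumed values, not figures from the facility.

```python
# Laminar flat-plate (Blasius) estimates used as a comparison baseline:
# delta ~ 5.0 x / sqrt(Re_x) and Cf ~ 0.664 / sqrt(Re_x).
# Air properties at roughly 20 degC are assumed for illustration.
import math

def blasius(x, U, nu=1.5e-5, rho=1.2):
    """Boundary-layer thickness, skin-friction coefficient, wall shear
    stress and friction velocity at distance x [m], freestream U [m/s]."""
    re_x = U * x / nu
    delta = 5.0 * x / math.sqrt(re_x)          # boundary-layer thickness
    cf = 0.664 / math.sqrt(re_x)               # local skin friction
    tau_w = 0.5 * rho * U ** 2 * cf            # wall shear stress
    u_tau = math.sqrt(tau_w / rho)             # friction velocity
    return delta, cf, tau_w, u_tau

# Example station: 0.5 m from the leading edge at 10 m/s.
delta, cf, tau_w, u_tau = blasius(x=0.5, U=10.0)
```

Measured profiles that sit within a few percent of these values, as reported above, indicate the plate is producing a clean zero-pressure-gradient laminar layer.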
Abstract: This research is designed to help a WAP-based mobile phone user analyze traffic logistics in an area by designing and applying the processes through which the mobile user accesses the server databases. The design comprises a MySQL 4.1.8-nt database system as the server, containing three sub-databases: traffic-light timings at intersections during different periods of the day, road distances between the area blocks into which the main sample area is divided, and speeds of sample vehicles (motorcycle, personal car and truck) during different periods of the day. For interconnection between the server and the user, PHP is used to calculate distances and travelling times from the starting point to the destination, while XHTML is applied for receiving, sending and displaying data from PHP on the user's mobile. The main sample area is the Huakwang-Ratchada area of Bangkok, Thailand, a habitually congested point, together with a 6.25 km2 surrounding area split into 25 blocks of 0.25 km2 each. For simulating the results, the designed server database and all communication models of this research were uploaded to www.utccengineering.com/m4tg, and a mobile phone supporting a WAP 2.0 XHTML/HTML multimode browser was used to observe values and displayed pictures. According to the simulated results, the user can check route pictures from the requested starting point to the destination, along with analyzed travel times when sample vehicles travel in various periods of the day.
Abstract: Quality costs are the costs associated with preventing,
finding, and correcting defective work. Since the main language of
corporate management is money, quality-related costs act as means of
communication between the staff of quality engineering departments
and the company managers. The objective of quality engineering is to
minimize the total quality cost across the life of the product. Quality
costs provide a benchmark against which improvement can be
measured over time. They provide a rupee-based report on quality
improvement efforts and are an effective tool to identify, prioritize and
select quality improvement projects. A review of the literature
revealed that a simplified methodology for data collection of quality
costs in a manufacturing industry was required. A quantified standard
methodology is proposed for collecting data on the various elements of
the quality cost categories for the manufacturing industry. In the light
of the research carried out so far, it is also felt necessary to
standardise the cost elements in each of the prevention, appraisal,
internal failure and external failure cost categories. Here an attempt
is made to standardise the various cost elements applicable to the
manufacturing industry, and data are collected using the proposed
quantified methodology. This paper discusses a case study carried out
in the luggage manufacturing industry.
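A minimal sketch of how collected cost elements might be rolled up under the four standard prevention-appraisal-failure categories; the element names and rupee figures are invented for illustration and are not from the case study.

```python
# Hypothetical roll-up of quality cost elements into the four standard
# categories (prevention, appraisal, internal failure, external
# failure). Element names and rupee figures are invented, not taken
# from the case study.

quality_costs = {
    "prevention": {"quality planning": 12000, "training": 8000},
    "appraisal": {"incoming inspection": 15000, "final testing": 9000},
    "internal failure": {"scrap": 22000, "rework": 11000},
    "external failure": {"warranty claims": 18000, "returns": 5000},
}

# Sum each category, the grand total, and the share spent on failure,
# which is the benchmark a cost-of-quality program tries to drive down.
category_totals = {cat: sum(elems.values())
                   for cat, elems in quality_costs.items()}
total_quality_cost = sum(category_totals.values())
failure_share = (category_totals["internal failure"]
                 + category_totals["external failure"]) / total_quality_cost
```

Tracking these totals period over period yields the rupee-based improvement report the abstract describes.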
Abstract: The nearly 21-year-old Jiujiang Bridge, which suffers from an uneven line shape, persistent large downward deflection of the main beam and cracking of the box girder, needs reinforcement and cable adjustment. It has undergone cable adjustment twice, with incomplete data. Therefore, identifying the initial internal force state of the Jiujiang Bridge is the key to the cable adjustment project. Based on parameter identification from static load test data, this paper suggests determining the initial internal force state of the cable-stayed bridge by the cable force-displacement relationship parameter identification method: by measuring the displacements and the changes in cable forces twice, one can identify the parameters concerned by means of optimization. This method is applied to the cable adjustment, replacement and reinforcement project for the Jiujiang Bridge as guidance for the cable adjustment and reinforcement of the bridge.
Abstract: Hydrate phase equilibria for the binary CO2+water and
CH4+water mixtures in silica gel pores of nominal diameter 6, 30, and
100 nm were measured and compared with the calculated results based
on van der Waals and Platteeuw model. At a specific temperature,
three-phase hydrate-liquid water-vapor (HLV) equilibrium curves for pore
hydrates were shifted to the higher-pressure condition depending on
pore sizes when compared with those of bulk hydrates. Notably,
hydrate phase equilibria for the 100 nm nominal pore size were
nearly identical to those of bulk hydrates. The activities of water in
porous silica gels were modified to account for the capillary effect, and
the calculation results were generally in good agreement with the
experimental data. The structural characteristics of gas hydrates in
silica gel pores were investigated through NMR spectroscopy.
Abstract: The purpose of this study is to identify and evaluate
the scale of implementation of Just-In-Time (JIT) in the different industrial sectors in the Middle East. This study analyzes the empirical data collected by a questionnaire survey distributed to
companies in three main industrial sectors in the Middle East, which
are: food, chemicals and fabrics. The following main hypothesis is
formulated and tested: the requirements of JIT application differ
according to the type of industrial sector. Descriptive statistics and
box plot analysis were used to examine the hypothesis. This study finds
reasonable evidence for accepting the main hypothesis. It reveals that
there is no standard way to adopt JIT as a production system; rather,
each industrial sector should concentrate its investment on the critical
requirements that differ according to the nature and strategy of
production followed in that sector.
Abstract: This study focuses on bureau management
technologies and information systems in developing countries.
Developing countries use such systems which facilitate executive and
organizational functions through the utilization of bureau
management technologies and provide the executive staff with
necessary information.
The concepts of data and information differ from each other in
developing countries, and thus the concepts of data processing and
information processing are different. Symbols represent ideas,
objects, figures, letters and numbers. A data processing system is an
integrated system which deals with the processing of the data related
to the internal and external environment of the organization in order
to make decisions, create plans and develop strategies; it goes
without saying that this system is composed of both human beings
and machines. Information is obtained through the acquisition and
the processing of data. On the other hand, data are raw
communicative messages. Within this framework, data processing
amounts to producing plausible information out of raw data.
Organizations in developing countries need to obtain information
relevant to them because rapid changes in the organizational arena
require rapid access to accurate information. The most significant
role of the directors and managers who work in the organizational
arena is to make decisions. Making a correct decision is possible only
when the directors and managers are equipped with sound ideas and
appropriate information. Therefore, acquisition, organization and
distribution of information gain significance. Today's organizations
make use of computer-assisted "Management Information Systems"
in order to obtain and distribute information.
A Decision Support System, which is closely related to practice, is an
information system that facilitates the director's task of making
decisions. It integrates human intelligence, information technology
and software in order to solve complex problems. With the support of
computer technology and software systems, a Decision Support
System produces information relevant to the decision to be made by
the director and provides the executive staff with supportive ideas
about the decision.
Artificial Intelligence programs which transfer people's knowledge
and experience to the computer are called expert systems.
An expert system stores expert information in a limited area and can
solve problems by deriving rational consequences.
Bureau management technologies and information systems in
developing countries create a kind of information society and
information economy, which give those countries a place in the global
socio-economic structure and enable them to play a reasonable and
fruitful role; therefore, it is of crucial importance to make use of
information and management technologies in order to work together
with innovative and enterprising individuals, and it is also significant
to create "scientific policies" based on information and technology in
the fields of economy, politics, law and culture.
Abstract: This paper develops models to analyze the
relationship between leisure time and wage changes. Using Thailand's
Time Use Survey and Labor Force Survey data, the estimation of
wage changes in response to leisure time change indicates that media
receiving, personal care and social participation and volunteer
activities are the ones that significantly raise hourly wages. The
findings thus suggest encouraging time spent on media access to
enhance knowledge and productivity, on personal care for
attractiveness and health in order to raise productivity, and on social
activities to develop connections for possible future opportunities,
including wage increases. These activities should be promoted for
productive leisure time and for welfare improvement.
Abstract: Security has been an important issue and concern in smart
home systems. Smart home networks consist of a wide range of wired
or wireless devices, so there is a possibility that illegal access to
restricted data or devices may occur. Password-based authentication
is widely used to identify authorized users, because this method is
cheap, easy and quite accurate. In this paper, a neural network is
trained to store the passwords instead of using a verification table.
This method is useful in solving security problems that arise in some
authentication systems. The conventional way to
train the network using Backpropagation (BPN) requires a long
training time. Hence, a faster training algorithm, Resilient
Backpropagation (RPROP), is embedded into the MLP neural
network to accelerate the training process. For the data part, 200
sets of UserIDs and passwords were created and encoded into binary
as the input. A simulation was carried out to evaluate the
performance for different numbers of hidden neurons and combinations
of transfer functions. Mean Square Error (MSE), training time and
number of epochs are used to determine the network performance.
From the results obtained, using Tansig and Purelin in the hidden and
output layers with 250 hidden neurons gave the best performance. As
a result, a password-based user authentication system for smart homes
using a neural network has been developed successfully.
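RPROP's core idea, adapting each weight's step size from the sign of successive gradients rather than their magnitude, can be sketched on a toy quadratic. The step constants below are the commonly cited RPROP defaults, and the objective merely stands in for the MLP's MSE surface; this is not the paper's network.

```python
# Sketch of the RPROP update rule on a toy quadratic objective.
# Per-weight step sizes grow (x1.2) while the gradient keeps its sign
# and shrink (x0.5) when it flips; only the gradient's sign is used.
# The constants are the commonly cited RPROP defaults; the quadratic
# stands in for the MLP's MSE surface.

def grad(w):
    """Gradient of f(w) = sum((w_i - t_i)^2) with targets t = (3, -2)."""
    targets = (3.0, -2.0)
    return [2.0 * (wi - ti) for wi, ti in zip(w, targets)]

def rprop(w, steps=60, d0=0.1, d_min=1e-6, d_max=50.0):
    delta = [d0] * len(w)          # per-weight step sizes
    prev_g = [0.0] * len(w)
    for _ in range(steps):
        g = grad(w)
        for i in range(len(w)):
            if g[i] * prev_g[i] > 0:        # same sign: accelerate
                delta[i] = min(delta[i] * 1.2, d_max)
            elif g[i] * prev_g[i] < 0:      # sign flip: back off
                delta[i] = max(delta[i] * 0.5, d_min)
            if g[i] > 0:
                w[i] -= delta[i]
            elif g[i] < 0:
                w[i] += delta[i]
        prev_g = g
    return w

w = rprop([0.0, 0.0])   # converges toward the minimizer (3, -2)
```

Because the update ignores gradient magnitude, RPROP avoids the tiny steps plain backpropagation takes on flat error regions, which is the source of the speedup the abstract reports.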
Abstract: An adaptive software reliability prediction model
using evolutionary connectionist approach based on Recurrent Radial
Basis Function architecture is proposed. Based on the currently
available software failure time data, a Fuzzy Min-Max algorithm is
used to globally optimize the number of Gaussian nodes, k. The
corresponding optimized neural network architecture is iteratively
and dynamically reconfigured in real-time as new actual failure time
data arrive. The performance of the proposed approach has been
tested using sixteen real-time software failure datasets. Numerical
results show that the proposed approach is robust across different
software projects and has better performance with respect to
next-step predictability compared to existing neural network models
for failure time prediction.
Abstract: Fuzzy logic can be used when knowledge is
incomplete or when ambiguity of data exists. The purpose of
this paper is to propose a proactive fuzzy set-based model for
reacting to the risk inherent in investment activities relative to
a complete view of portfolio management. Fuzzy rules are
given where, depending on the antecedents, the portfolio size
may be slightly or significantly decreased or increased. The
decision maker specifies acceptable bounds on the proportions
of risk and return. The Fuzzy Controller model
allows learning to be achieved as 1) the firing strength of each
rule is measured, 2) fuzzy output allows rules to be updated,
and 3) new actions are recommended as the system continues
to loop. An extension is given to the fuzzy controller that
evaluates potential financial loss before adjusting the
portfolio. An application is presented that illustrates the
algorithm and extension developed in the paper.
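The firing-strength step of such a fuzzy controller can be sketched as follows. The triangular membership functions, the two rules and the centroid-style defuzzification are illustrative assumptions, not the paper's rule base.

```python
# Sketch of one pass of a fuzzy controller for portfolio adjustment:
# fuzzify the inputs, measure each rule's firing strength with min-AND,
# and defuzzify to a portfolio-size change. The membership functions
# and the two rules are illustrative, not the paper's rule base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def controller_step(risk, ret):
    # risk and ret are scaled to [0, 1]. Consequents are crisp
    # singletons: fractional change in portfolio size (+ = increase).
    rules = [
        # IF risk is high AND return is low THEN decrease significantly
        (min(tri(risk, 0.5, 1.0, 1.5), tri(ret, -0.5, 0.0, 0.5)), -0.20),
        # IF risk is low AND return is high THEN increase slightly
        (min(tri(risk, -0.5, 0.0, 0.5), tri(ret, 0.5, 1.0, 1.5)), +0.10),
    ]
    num = sum(strength * action for strength, action in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den > 0 else 0.0   # weighted-average defuzzification

change = controller_step(risk=0.8, ret=0.2)   # high risk, low return
```

Recording the firing strengths on each loop iteration is what allows the rules to be updated and new actions recommended, as the abstract describes.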
Abstract: Employing a recently introduced unified adaptive filter
theory, we show how the performance of a large number of important
adaptive filter algorithms can be predicted within a general framework
in a nonstationary environment. This approach is based on energy conservation
arguments and does not need to assume a Gaussian or white
distribution for the regressors. This general performance analysis can
be used to evaluate the mean square performance of the Least Mean
Square (LMS) algorithm, its normalized version (NLMS), the family
of Affine Projection Algorithms (APA), the Recursive Least Squares
(RLS), the Data-Reusing LMS (DR-LMS), its normalized version
(NDR-LMS), the Block Least Mean Squares (BLMS), the Block
Normalized LMS (BNLMS), the Transform Domain Adaptive Filters
(TDAF) and the Subband Adaptive Filters (SAF) in a nonstationary
environment. We also establish general expressions for the
steady-state excess mean square error in this environment for all these
adaptive algorithms. Finally, we demonstrate through simulations that
these results are useful in predicting the adaptive filter performance.
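The simplest member of the analyzed family, LMS, is sketched below identifying an unknown FIR system. The toy plant coefficients, input and step size are assumptions for illustration, not the paper's simulation setup.

```python
# Sketch of the LMS recursion w <- w + mu * e * x, the simplest member
# of the family analyzed above, shown identifying a toy 3-tap FIR
# plant. Plant coefficients, input and step size are illustrative.
import random

def lms_identify(plant, n_samples=4000, mu=0.02, seed=1):
    rng = random.Random(seed)
    taps = len(plant)
    w = [0.0] * taps
    x_buf = [0.0] * taps               # regressor: most recent input first
    for _ in range(n_samples):
        x_buf = [rng.uniform(-1, 1)] + x_buf[:-1]
        d = sum(p * x for p, x in zip(plant, x_buf))   # desired output
        y = sum(wi * x for wi, x in zip(w, x_buf))     # filter output
        e = d - y                                      # a priori error
        w = [wi + mu * e * x for wi, x in zip(w, x_buf)]
    return w

w = lms_identify(plant=[0.5, -0.3, 0.1])   # weights approach the plant
```

The energy-conservation analysis mentioned above predicts the mean square behavior of exactly this recursion (and its normalized, block and transform-domain relatives) without assuming a particular regressor distribution.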
Abstract: In this paper, we implement a modern serial backplane
platform for telecommunication inter-rack systems. To combine high
reliability with low cost, we applied the high-level data link control
(HDLC) protocol over a low-voltage differential signaling (LVDS) bus
for card-to-card communication across the backplane. The HDLC
protocol offers high performance, supports several operation modes
and is widely used in telecommunication systems. The LVDS bus
offers high reliability, with strong immunity against electromagnetic
interference (EMI) and noise.
Abstract: Global Positioning System (GPS) technology is widely used today in the areas of geodesy and topography, as well as in aeronautics, mainly for military purposes. Due to the military usage of GPS, full access to this technology is denied to the civilian user, who must work with a less accurate version. In this paper we focus on the estimation of the receiver coordinates (X, Y, Z) and its clock bias (δtr) at a fixed point, based on pseudorange measurements of a single GPS receiver. Utilizing the instantaneous coordinates of just 4 satellites and their clock offsets, and taking into account the atmospheric delays, we derive a set of pseudorange equations. The estimation of the four unknowns (X, Y, Z, δtr) is achieved by introducing an extended Kalman filter that processes, off-line, all the data collected from the receiver. Higher positioning accuracy is attained by appropriate tuning of the filter noise parameters and by including other forms of biases.
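The geometry behind the four pseudorange equations, ρ_i = ||s_i − p|| + c·δtr, can be illustrated with a simple iterative least-squares solve. The satellite positions below are invented, atmospheric delays are omitted, and the Gauss-Newton iteration merely stands in for the paper's extended Kalman filter.

```python
# Sketch of solving the four pseudorange equations
#   rho_i = ||s_i - p|| + b,   with b = c * dt_r,
# for receiver position p = (X, Y, Z) and clock-bias range b via
# Gauss-Newton iteration. Satellite positions are invented; the
# iterative solve stands in for the paper's extended Kalman filter,
# and atmospheric delays are omitted.
import math

def solve4(A, r):
    """Solve the 4x4 system A dx = r by Gauss-Jordan elimination."""
    M = [row[:] + [ri] for row, ri in zip(A, r)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(4):
            if i != col:
                f = M[i][col] / M[col][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[col])]
    return [M[i][4] / M[i][i] for i in range(4)]

def position_fix(sats, rhos, iters=10):
    X, Y, Z, b = 0.0, 0.0, 0.0, 0.0       # initial guess at the origin
    for _ in range(iters):
        A, r = [], []
        for (sx, sy, sz), rho in zip(sats, rhos):
            d = math.dist((sx, sy, sz), (X, Y, Z))
            # Jacobian row: d(rho)/d(X, Y, Z, b)
            A.append([(X - sx) / d, (Y - sy) / d, (Z - sz) / d, 1.0])
            r.append(rho - (d + b))       # pseudorange residual
        dX, dY, dZ, db = solve4(A, r)
        X, Y, Z, b = X + dX, Y + dY, Z + dZ, b + db
    return X, Y, Z, b

# Invented satellite positions [km] and a known receiver state; the
# pseudoranges are generated from that state to test the recovery.
sats = [(20200.0, 0.0, 0.0), (0.0, 20200.0, 0.0),
        (0.0, 0.0, 20200.0), (11660.0, 11660.0, 11660.0)]
truth = (1000.0, 2000.0, -500.0, 30.0)
rhos = [math.dist(s, truth[:3]) + truth[3] for s in sats]
X, Y, Z, b = position_fix(sats, rhos)
```

An extended Kalman filter performs essentially this linearization at each epoch, but additionally propagates a state covariance so that noisy measurements from many epochs are fused into one estimate.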
Abstract: The aim of this study was to estimate the frequency of
EBV infection in Hodgkin's lymphoma (HL) and non-Hodgkin's
lymphoma (NHL) occurring in Jordanian patients. A total of 55
patients with lymphoma were examined in this study. Of 55 patients,
30 and 25 were diagnosed as HL and NHL, respectively. The four
HL subtypes were observed, with the majority of cases exhibiting
the mixed cellularity (MC) subtype, followed by nodular sclerosis
(NS). The high grade was found to be the commonest subtype of
NHL in our sample, followed by the low grade. The presence of EBV
virus was detected by immunostaining for expression of latent
membrane protein-1 (LMP-1). LMP-1 expression occurred more
frequently in patients with HL (60.0%) than in patients with NHL
(32.0%). The frequency of LMP-1 expression was also higher in
patients with the MC subtype (61.11%) than in those with NS
(28.57%). No age or gender difference in the occurrence of EBV
infection was observed among patients with HL. By contrast, the
prevalence of EBV infection in NHL patients aged below 50 was
lower (16.66%) than in NHL patients aged 50 or above (46.15%). In
addition, EBV infection was more frequent in females with NHL
(38.46%) than in males with NHL (25%). In NHL cases, the
frequency of EBV infection in the intermediate grade (60.0%) was
high compared with that in the low (25%) and high grades (25%).
In conclusion, analysis of LMP-1 expression indicates an important
role for this viral oncogene in the pathogenesis of EBV-associated
malignant lymphomas. These data also support the previous findings
that people with EBV may develop lymphoma, and that efforts to
keep lymphoma risk low should be considered for people with EBV
infection.
Abstract: Sleep stage scoring is the process of classifying the
sleep stage the subject is in. Sleep is classified into
two states based on the constellation of physiological parameters.
The two states are the non-rapid eye movement (NREM) and the
rapid eye movement (REM). The NREM sleep is also classified into
four stages (1-4). These states and the state wakefulness are
distinguished from each other based on the brain activity. In this
work, a classification method for automated sleep stage scoring
based on a single EEG recording using wavelet packet decomposition
was implemented. Thirty-two polysomnographic recordings from the
MIT-BIH database were used for training and validation of the
proposed method. A single EEG recording was extracted and
smoothed using a Savitzky-Golay filter. Wavelet packet
decomposition up to the fourth level based on a 20th-order Daubechies
filter was used to extract features from the EEG signal. A feature
vector of 54 features was formed. It was reduced to a size of 25 using
the gain ratio method and fed into a classifier of regression trees. The
regression trees were trained using 67% of the records available. The
records for training were selected based on cross validation of the
records. The remaining records were used for testing the
classifier. The overall correct rate of the proposed method was found
to be around 75%, which is acceptable compared to the techniques in
the literature.
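The wavelet packet step can be illustrated with the simplest (Haar) filter pair. The study uses a 20th-order Daubechies filter, so the Haar version below is only a structural sketch, with subband energies as the kind of features one might extract.

```python
# Structural sketch of a wavelet packet decomposition using the Haar
# filter pair (the study uses a 20th-order Daubechies filter; Haar is
# substituted here for brevity). Each level splits every subband into
# low-pass and high-pass halves; subband energies serve as features.
import math

def haar_split(x):
    """Orthonormal Haar analysis: pairwise sums and differences."""
    lo = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return lo, hi

def wavelet_packets(x, levels):
    """Full wavelet packet tree: 2**levels subbands at the last level."""
    bands = [list(x)]
    for _ in range(levels):
        nxt = []
        for band in bands:
            lo, hi = haar_split(band)
            nxt.extend([lo, hi])
        bands = nxt
    return bands

def band_energies(bands):
    """One energy feature per subband."""
    return [sum(v * v for v in band) for band in bands]

# Toy "EEG" segment: 64 samples of a sinusoid.
signal = [math.sin(2 * math.pi * k / 8) for k in range(64)]
bands = wavelet_packets(signal, levels=4)   # 16 subbands at level 4
feats = band_energies(bands)
```

Because the Haar pair is orthonormal, the subband energies sum exactly to the signal energy, which makes them well-behaved inputs for a feature-selection step such as the gain ratio method.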
Abstract: The purpose of this research is to develop and apply the
RSCMAC to enhance the dynamic accuracy of Global Positioning
System (GPS). GPS devices provide accurate positioning, speed
detection and a highly precise time standard over 98% of the earth's
surface. The overall operation of the Global Positioning System
involves 24 GPS satellites in space; signal transmission using 2
carrier frequencies (Link 1 and Link 2) and 2 sets of pseudorandom
codes (C/A code and P code); and on-earth monitoring stations or
client GPS receivers. With only 4 satellites, the client's position and
elevation can be determined rapidly; the more satellites received, the
more accurately the position can be decoded. Currently, the standard
positioning accuracy of the simplified GPS receiver is greatly
increased, but, owing to satellite clock error, tropospheric delay and
ionospheric delay, the measurement accuracy is at the level of
5~15 m. To increase dynamic GPS
positioning accuracy, most researchers mainly use an inertial
navigation system (INS) and install other sensors or maps for
assistance. This research utilizes the RSCMAC advantages of fast
learning, guaranteed learning convergence and the capability of
solving time-related dynamic system problems, together with a static
positioning calibration structure, to improve the GPS dynamic
accuracy. The increase in GPS dynamic positioning accuracy is
achieved by using the RSCMAC system with GPS receivers to collect
dynamic error data for error prediction, and then using the predicted
error to correct the GPS dynamic positioning data. The ultimate
purpose of this research is to reduce the dynamic positioning error of
cheap GPS receivers; the economic benefits will be enhanced while
the accuracy is increased.