Abstract: Addition of milli- or micro-sized particles to the heat
transfer fluid is one of the many techniques employed for improving
the heat transfer rate. Though this looks simple, the method has
practical problems such as high pressure loss, clogging and erosion
of the material of construction. These problems can be overcome by
using nanofluids, which are dispersions of nanosized particles in a
base fluid. Nanoparticles increase the thermal conductivity of the
base fluid severalfold, which in turn increases the heat transfer rate.
Nanoparticles also increase the viscosity of the base fluid, resulting in
a higher pressure drop for the nanofluid compared to the base fluid. It
is therefore imperative that the Reynolds number (Re) and the volume
fraction be optimal for better thermal-hydraulic
effectiveness. In this work, heat transfer enhancement using
aluminium oxide nanofluids at low and high volume fractions
in turbulent pipe flow with constant wall temperature has
been studied by computational fluid dynamics modeling of the
nanofluid flow adopting the single-phase approach. Nanofluid up to
a volume fraction of 1% is found to be an effective heat transfer
enhancement technique. The Nusselt number (Nu) and friction factor
predictions for the low volume fractions (i.e. 0.02%, 0.1% and 0.5%)
agree very well with the experimental values of Sundar and Sharma
(2010), while predictions for the high volume fraction nanofluids
(i.e. 1%, 4% and 6%) show reasonable agreement with
both experimental and numerical results available in the literature.
The computationally inexpensive single-phase approach can therefore be
used for heat transfer and pressure drop prediction of new nanofluids.
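The single-phase approach above rests on effective nanofluid properties. A minimal sketch using the standard Maxwell and Brinkman mixture models with textbook turbulent-pipe correlations (Dittus-Boelter for Nu, Blasius for the friction factor); the property values and correlations below are generic assumptions, not the paper's CFD setup:

```python
import numpy as np

# Assumed property values for water and Al2O3 (illustrative, not from the paper)
k_bf, mu_bf, cp_bf = 0.613, 1.0e-3, 4179.0   # water near 25 C: W/m.K, Pa.s, J/kg.K
k_p = 40.0                                   # Al2O3 thermal conductivity, W/m.K

def maxwell_k(phi, k_bf, k_p):
    """Maxwell model for effective thermal conductivity of a dilute suspension."""
    return k_bf * (k_p + 2*k_bf + 2*phi*(k_p - k_bf)) / (k_p + 2*k_bf - phi*(k_p - k_bf))

def brinkman_mu(phi, mu_bf):
    """Brinkman model for effective viscosity."""
    return mu_bf / (1.0 - phi)**2.5

def dittus_boelter_nu(Re, Pr):
    """Dittus-Boelter correlation for turbulent pipe flow (heating)."""
    return 0.023 * Re**0.8 * Pr**0.4

def blasius_f(Re):
    """Blasius friction factor for smooth turbulent pipe flow."""
    return 0.316 * Re**-0.25

phi = 0.01                       # 1% volume fraction
k_nf = maxwell_k(phi, k_bf, k_p)
mu_nf = brinkman_mu(phi, mu_bf)
Pr_nf = mu_nf * cp_bf / k_nf     # specific heat taken as that of the base fluid at low phi
Re = 10000.0
print(f"k ratio {k_nf/k_bf:.3f}, mu ratio {mu_nf/mu_bf:.3f}, "
      f"Nu {dittus_boelter_nu(Re, Pr_nf):.1f}, f {blasius_f(Re):.4f}")
```

The two competing effects in the abstract are visible directly: the conductivity ratio raises Nu while the viscosity ratio raises the pressure drop at fixed flow rate.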
Abstract: This paper describes the design of a programmable
FSK modulator based on a VCO and its implementation in a 0.35 µm
CMOS process. The circuit is used to transmit digital data at a
100 kbps rate in the frequency range of 400-600 MHz. The design
and operation of the modulator are discussed briefly. Further, the
characteristics of the PLL, frequency synthesizer, VCO and the whole
design are elaborated. The variation between the proposed and tested
specifications is presented. Finally, the layout of sub-modules, pin
configurations, the final chip and test results are presented.
Abstract: Dynamic Causal Modeling (DCM) of functional
Magnetic Resonance Imaging (fMRI) is a promising technique for
studying the connectivity among brain regions and the effects of stimuli
by modeling neuronal interactions from time-series
neuroimaging data. The aim of this study is to characterize the
mirror neuron system (MNS) in an elderly group (age: 60-70 years).
Twenty volunteers underwent MRI scanning with visual stimuli to study a
functional brain network. DCM was employed to determine the
mechanism of mirror neuron effects. The results revealed major
activated areas including the precentral gyrus, inferior parietal lobule,
inferior occipital gyrus, and supplementary motor area. When visual
stimuli were presented, the feed-forward connectivity from the visual
area to the conjunction area increased and was forwarded to the motor area.
Moreover, the connectivity from the conjunction areas to the premotor
area also increased. Such findings can be useful for future
diagnostic processes for elderly patients with diseases such as Parkinson's and
Alzheimer's.
Abstract: In comparison to the original SVM, which involves a
quadratic programming task, LS-SVM simplifies the required
computation, but unfortunately the sparseness of the standard SVM is
lost. Another problem is that LS-SVM is only optimal if the training
samples are corrupted by Gaussian noise. In Least Squares SVM
(LS-SVM), the nonlinear solution is obtained by first mapping the
input vector to a high-dimensional kernel space in a nonlinear
fashion, where the solution is calculated from a linear equation set. In
this paper a geometric view of the kernel space is introduced, which
enables us to develop a new formulation to achieve a sparse and
robust estimate.
Abstract: This paper considers the problem of finding a low-cost
chip set for minimum-cost partitioning of large logic circuits. Chip
sets are selected from a given library. Each chip in the library has a
different price, area, and I/O pin count. We propose a low-cost chip-set
selection algorithm. Inputs to the algorithm are a netlist and the chip
information in the library. The output is a list of chip sets that satisfy the
area and maximum partition-count constraints, sorted by cost. The
algorithm finds the sorted list of chip sets from minimum cost to
maximum cost. We used MCNC benchmark circuits for experiments.
The experimental results show that all of the chip sets found satisfy the
multiple partitioning constraints.
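The abstract's input/output contract (chip library in, cost-sorted feasible chip sets out) can be sketched as a brute-force enumeration; the library values and the area-coverage feasibility rule below are hypothetical stand-ins, not the paper's algorithm:

```python
from itertools import combinations_with_replacement

# Hypothetical chip library: (name, price, area, io_pins); values are illustrative only
library = [("A", 10, 100, 40), ("B", 25, 300, 80), ("C", 60, 900, 120)]

def feasible_chip_sets(required_area, max_partitions):
    """Enumerate multisets of chips whose total area covers the circuit,
    using at most max_partitions chips, returned from minimum to maximum cost."""
    results = []
    for n in range(1, max_partitions + 1):
        for combo in combinations_with_replacement(library, n):
            if sum(chip[2] for chip in combo) >= required_area:
                cost = sum(chip[1] for chip in combo)
                results.append((cost, [chip[0] for chip in combo]))
    return sorted(results)

for cost, chips in feasible_chip_sets(required_area=500, max_partitions=3):
    print(cost, chips)
```

A real implementation would additionally check I/O pin limits against the netlist cut sizes, which this sketch omits.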
Abstract: The utilization of renewable energy sources is becoming
more crucial and, interestingly, the wider application of renewable
energy devices at domestic, commercial and industrial levels reflects not
only stronger awareness but also significantly increased installed
capacities. Moreover, biomass, principally in the form of wood,
has been converted into energy by humans for a long time.
Gasification is a process of conversion of solid carbonaceous fuel
into combustible gas by partial combustion. Gasifier models
have various operating conditions because the parameters kept in
each model differ. This study applied experimental
data comprising three input variables (biomass consumption,
temperature at the combustion zone, and ash discharge rate) and gas flow
rate as the only output variable. In this paper, response surface
methods were applied to identify the gasifier system
equation best suited to the experimental data. The results showed that the linear
model gave the best fit.
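A first-order response surface over the three inputs named above is just an ordinary least-squares fit. A minimal sketch with entirely synthetic numbers (the paper's measurements are not reproduced here):

```python
import numpy as np

# Hypothetical gasifier data: columns = biomass consumption (kg/h),
# combustion-zone temperature (C), ash discharge rate (kg/h).
X = np.array([[10.0,  850.0, 0.50],
              [12.0,  905.0, 0.55],
              [14.0,  940.0, 0.72],
              [16.0, 1010.0, 0.78],
              [18.0, 1045.0, 0.93]])
y = np.array([20.0, 24.5, 28.8, 33.1, 37.6])   # gas flow rate (m^3/h), illustrative

# First-order response surface: y = b0 + b1*x1 + b2*x2 + b3*x3, fitted by OLS
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print("coefficients:", np.round(coef, 3), "R^2:", round(r2, 4))
```

A full response-surface study would also fit a second-order (quadratic plus interaction) model and compare fit statistics; the abstract reports that the linear form won that comparison.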
Abstract: In this paper, we present C@sa, a multiagent system aimed at modeling, controlling and simulating the behavior of an intelligent house. The developed system aims at providing architects, designers and psychologists with a simulation and control tool for understanding the impact of embedded and pervasive technology on people's daily life. In this vision, the house is seen as an environment made up of independent and distributed devices, controlled by agents, interacting to support the user's goals and tasks.
Abstract: Sparse representation, which can represent high-dimensional
data effectively, has been successfully used in computer vision
and pattern recognition problems. However, it does not consider the
label information of data samples. To overcome this limitation,
we develop a novel dimensionality reduction algorithm, namely
discriminatively regularized sparse subspace learning (DR-SSL), in this
paper. The proposed DR-SSL algorithm can not only make use of
the sparse representation to model the data, but can also effectively
employ the label information to guide the procedure of dimensionality
reduction. In addition, the presented algorithm can effectively deal
with the out-of-sample problem. The experiments on gene-expression
data sets show that the proposed algorithm is an effective tool for
dimensionality reduction and gene-expression data classification.
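To make "sparse representation" concrete, here is a generic orthogonal matching pursuit sketch; it illustrates sparse coding over a dictionary only and is unrelated to DR-SSL's specific discriminative regularizer. All data below is synthetic:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select at most k dictionary atoms
    and least-squares fit y on them. Returns the sparse coefficient vector."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # refit on the whole selected support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(200); x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true                            # 2-sparse synthetic signal
x_hat = omp(D, y, k=2)
print("recovered support:", np.nonzero(x_hat)[0])
```

For a well-conditioned random dictionary and a 2-sparse signal, OMP recovers the support exactly, which is the behavior sparse-coding-based methods rely on.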
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo vision based obstacle detection is an algorithm that aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. For matching, we use a Normalized Cross Correlation (NCC) function with a 5 × 5 window: we prepare an empty matching table τ and grow disparity components by drawing a seed s from the seed set S, computed using a Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one that appear in the disparity map of phase two enter the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
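The 5 × 5 NCC matching score used above can be sketched in a few lines; this is the generic zero-mean NCC formula, not the authors' full seed-growing scheme, and the patches below are synthetic:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches.
    Returns a score in [-1, 1]; invariant to affine intensity changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

left_patch = np.arange(25, dtype=float).reshape(5, 5)
right_patch = 2.0 * left_patch + 7.0       # same structure, different gain/offset
print(round(ncc(left_patch, right_patch), 3))   # -> 1.0
```

The gain/offset invariance is why NCC is preferred over plain SSD for stereo pairs with differing exposure between cameras.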
Abstract: The study aimed to evaluate the reproductive performance response to short-term oestrus synchronization during the transition period. One hundred and sixty-five indigenous multiparous non-lactating goats were subdivided into the following six treatment groups for oestrus synchronization: NT control group (N = 30); Fe-21d, FGA vaginal sponge for 21 days + eCG on day 19; FPe-11d, FGA 11 days + PGF2α and eCG on day 9; FPe-10d, FGA 10 days + PGF2α and eCG on day 8; FPe-9d, FGA 9 days + PGF2α and eCG on day 7; PFe-5d, PGF2α on day 0 + FGA 5 days + eCG on day 5. The goats were naturally mated (1 male/6 females). Fecundity rates (no. births / no. females treated × 100) were statistically higher (P < 0.05) in the short-term FPe-9d (157.9%), FPe-11d (115.4%), FPe-10d (111.1%) and PFe-5d (107.7%) groups compared to the NT control group (66.7%).
Abstract: Little attention has been paid to information
transmission between the portfolios of large stocks and small stocks in the Korean stock market. This study investigates the return and volatility transmission mechanisms between large and small stocks in
the Korea Exchange (KRX). This study also explores whether bad news in the large stock market leads to a volatility of the small stock
market that is larger than the good news volatility of the large stock market. By employing the Granger causality test, we found
unidirectional return transmissions from the large stocks to medium
and small stocks. This evidence indicates that past information about
the large stocks has a better ability to predict the returns of the medium and small stocks in the Korean stock market. Moreover, by using the
asymmetric GARCH-BEKK model, we observed the unidirectional relationship of asymmetric volatility transmission from large stocks to
the medium and small stocks. This finding suggests that volatility in
the medium and small stocks following a negative shock in the large
stocks is larger than that following a positive shock in the large stocks.
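The Granger causality test mentioned above compares a restricted autoregression of returns against one augmented with the other portfolio's lags. A minimal single-lag numpy sketch on synthetic data (not the paper's GARCH-BEKK model or its KRX data):

```python
import numpy as np

def granger_f(x, y, p=1):
    """F-statistic testing whether one lag of x helps predict y beyond one lag of y.
    A minimal OLS-based sketch of the Granger causality test."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Y = y[p:]
    n = len(Y)
    Xr = np.column_stack([np.ones(n), y[:-p]])        # restricted: constant + y lag
    Xu = np.column_stack([Xr, x[:-p]])                # unrestricted: adds x lag
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0])**2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    q = 1                                             # one restriction (the x lag)
    dof = n - Xu.shape[1]
    return (rss_r - rss_u) / q / (rss_u / dof)

rng = np.random.default_rng(1)
x = rng.standard_normal(500)                          # "large stock" returns (synthetic)
y = np.zeros(500)                                     # "small stock" returns driven by x
for t in range(1, 500):
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + 0.1 * rng.standard_normal()
print("F statistic:", round(granger_f(x, y), 1))
```

A large F relative to the F(1, n-3) critical value rejects non-causality, mirroring the unidirectional large-to-small transmission the study reports.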
Abstract: Nowadays there are several grid-connected converters
in the grid system. These grid-connected converters are generally the
converters of renewable energy sources, industrial four-quadrant
drives and other converters with a DC link. These converters are
connected to the grid through a three-phase bridge. The standards
prescribe the maximal harmonic emission, which could easily be
limited with a high switching frequency. The increased switching
losses can be halved with the utilization of the well-known
flat-top modulation. The suggested control method is an
extension of flat-top modulation, with which the losses can be
halved again compared to flat-top modulation.
Compared to traditional control, these requirements can be
simultaneously satisfied much better with the DLF (DC Link
Floating) method.
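Conventional flat-top modulation, the baseline the DLF method extends, clamps one phase to a DC rail each interval so that phase does not switch. A sketch of the reference generation only (the modulation index and clamping-to-the-top variant are assumptions; the DLF extension itself is not reproduced):

```python
import numpy as np

# Three-phase sinusoidal references in [-1, 1]
theta = np.linspace(0, 2*np.pi, 360, endpoint=False)
m = 0.9                                        # modulation index (assumed)
u = np.stack([m * np.cos(theta),
              m * np.cos(theta - 2*np.pi/3),
              m * np.cos(theta + 2*np.pi/3)])

# Flat-top (discontinuous PWM) zero-sequence injection: shift all three
# references so the instantaneous maximum sits at the positive rail (+1).
offset = 1.0 - u.max(axis=0)
u_ft = u + offset

# At every instant one phase is held at +1, so it produces no switching
# losses during that interval, roughly halving total switching loss.
print(np.allclose(u_ft.max(axis=0), 1.0))      # -> True
```

The injected zero sequence cancels in the line-to-line voltages, so the output fundamental is unchanged while one leg at a time stops switching.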
Abstract: The primary purpose of this article is to examine the
implications of globalization for education. Globalization plays
an important role as a process in the economic, political, cultural
and technological dimensions of contemporary human life.
Education has its effects in this
process: while influencing it by educating global citizens
with universal human features and characteristics, it has been
influenced by this phenomenon too. Nowadays, the role of education
is not just to develop in students the knowledge and skills
necessary for new kinds of jobs. If education is to help
students be prepared for the new global society, it has to make them
engaged, productive and critical citizens for the global era, so that
they can reflect on their roles as key actors in a dynamic, often
uneven matrix of economic and cultural exchanges. If education
is to reinforce and raise national identity and the value system
of children and teenagers, it should make them ready for living
in the global era of this century. The method used in this research is
documentary analysis. Studies in this field show that
globalization influences the processes of production,
distribution and consumption of knowledge. The occurrence of this
phenomenon in the information era has not only provided the necessary
opportunities for educational exchange worldwide but also offers
advantages to the developing countries, enabling them to
strengthen the educational bases of their societies and take an important
step toward their future.
Abstract: In today's world, where everything is rapidly changing
and information technology is highly developed, many features of culture, society, politics and the economy have changed. The advent of
information technology and electronic data transmission has led to easy communication, and fields like e-learning and e-commerce are
easily accessible to everyone. One of these technologies is virtual
training. The "quality" of such education systems is critical. 131 questionnaires were prepared and distributed among university
students at Toba University. The research examined the factors that affect the quality of learning from the perspective of the staff, students, professors and this type of university. It is concluded that the important factors in virtual training are the quality of the professors, the
quality of the staff, and the quality of the university. These factors were the highest-priority factors in this education system and are
necessary for improving virtual training.
Abstract: This study aims to examine the determinants of
purchase intention in C2C e-commerce. Specifically, the role of
instant messaging in the C2C e-commerce context is investigated. In
addition to instant messaging, we brought in two antecedents of
purchase intention - trust and customer satisfaction - to establish a
theoretical research model. Structural equation modeling using
LISREL was used to analyze the data. We discuss the research
findings and suggest some implications for researchers and
practitioners.
Abstract: A key requirement for e-learning materials is
reusability and interoperability, that is, the possibility of using at least
part of the contents in different courses and of delivering them through
different platforms. These features make it possible to limit the cost of
new packages, but require the development of material according to
proper specifications. SCORM (Sharable Content Object Reference
Model) is a set of guidelines suitable for this purpose. A specific
adaptation project has been started to make it possible to reuse existing
materials. The paper describes the main characteristics of the SCORM
specification and the procedure used to modify the existing material.
Abstract: A learning management system (commonly
abbreviated as LMS) is a software application for the administration,
documentation, tracking, and reporting of training programs,
classroom and online events, e-learning programs, and training
content (Ellis 2009). (Hall 2003) defines an LMS as "software that
automates the administration of training events. All Learning
Management Systems manage the log-in of registered users, manage
course catalogs, record data from learners, and provide reports to
management". Evidence of the worldwide spread of e-learning in
recent years is easy to obtain. In April 2003, no fewer than 66,000
fully online courses and 1,200 complete online programs were listed
on the TeleCampus portal from TeleEducation (Paulsen 2003). According to the
report "The US Market in Self-paced eLearning Products and
Services: 2010-2015 Forecast and Analysis", the number of students
taking classes exclusively online will be nearly equal (1% less) to the
number taking classes exclusively on physical campuses. The number of
students taking online courses will increase from 1.37 million in 2010 to
3.86 million in 2015 in the USA. In another report, by The Sloan
Consortium, three-quarters of institutions report that the economic
downturn has increased demand for online courses and programs.
Abstract: This paper presents a hand vein authentication system
using fast spatial correlation of hand vein patterns. In order to
evaluate the system performance, a prototype was designed and a
dataset was acquired from 50 persons of different ages above 16 and of
different gender, with 10 images per person taken at different
intervals, 5 images for the left hand and 5 images for the right hand. In
the verification testing analysis, we used 3 images to represent the
templates and 2 images for testing; each of the 2 images is matched
against the existing 3 templates. A FAR of 0.02% and an FRR of 3.00%
were reported at threshold 80. The system efficiency at this threshold
was found to be 99.95%. The system can operate at a 97% genuine
acceptance rate and a 99.98% genuine reject rate at the corresponding
threshold of 80. The EER was reported as 0.25% at threshold 77. We
verified that no similarity exists between right and left hand vein
patterns for the same person over the acquired dataset sample.
Finally, this dataset of 100 distinct hand vein patterns can be
accessed by researchers and students upon request for testing other
methods of hand vein matching.
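The FAR/FRR/EER figures above come from sweeping a decision threshold over genuine and impostor match scores. A minimal sketch of that evaluation on synthetic similarity scores (the score distributions below are invented, not the paper's dataset):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted; FRR: fraction of genuine
    scores rejected. Scores are similarities; accept when score >= threshold."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor >= threshold) * 100.0
    frr = np.mean(genuine < threshold) * 100.0
    return far, frr

rng = np.random.default_rng(2)
genuine = rng.normal(88, 5, 1000)    # own-template match scores (synthetic)
impostor = rng.normal(60, 8, 1000)   # cross-person match scores (synthetic)

# sweep thresholds; the EER is where FAR and FRR cross
ts = np.linspace(0, 100, 1001)
eer_t = min(ts, key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                  - far_frr(genuine, impostor, t)[1]))
print("approximate EER threshold:", round(float(eer_t), 1))
```

Reporting FAR/FRR at an operating threshold plus the EER, as the abstract does, summarizes the whole trade-off curve in three numbers.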
Abstract: This study presents a mechanical
performance evaluation of the dynamic hip screw (DHS) for
trochanteric fracture by means of the finite element method. The
analyses were performed with stainless steel and titanium
implant material definitions at various stages of bone healing,
including implant removal. The assessment of mechanical
performance used two parameters: von Mises stress, to evaluate the
strength of the bone and implant, and elastic strain, to evaluate fracture
stability. The results show several critical aspects of dynamic hip
screw stabilization of trochanteric fractures. In the initial stage of the
bone healing process, partial weight bearing should be applied to
avoid implant failure. In the late stage of bone healing, the stainless
steel implant should be removed.
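The von Mises criterion used above reduces a 3D stress state to one scalar comparable against material strength. The standard formula in terms of principal stresses, as a small sketch (the numbers are illustrative, not the study's FE results):

```python
import numpy as np

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from three principal stresses (same units in/out)."""
    return np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

# Sanity checks: pure uniaxial stress returns that stress; a purely
# hydrostatic state produces zero equivalent stress (no yielding predicted).
print(von_mises(200.0, 0.0, 0.0))      # uniaxial 200 MPa
print(von_mises(100.0, 100.0, 100.0))  # hydrostatic
```

In an implant study, the computed equivalent stress is compared against the yield strength of stainless steel or titanium to flag failure risk.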
Abstract: This paper presents a real-time defect detection
algorithm for high-speed steel bar in coil. Because the target speed is
very high, the proposed algorithm must quickly process large
volumes of image data for real-time operation. Therefore, the defect detection
algorithm should satisfy two conflicting requirements: reducing the
processing time and improving the efficiency of defect detection. To
enhance detection performance, an edge-preserving method is
suggested for noise reduction of the target image. Finally, experimental
results show that the proposed algorithm guarantees real-time
processing and detection accuracy.
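A classic example of edge-preserving noise reduction is the median filter, which suppresses impulse noise without blurring step edges the way a mean filter would. This is a generic sketch on a synthetic image, not the paper's exact method:

```python
import numpy as np

def median_filter(img, size=3):
    """Square-window median filter with edge padding. Medians remove
    salt-and-pepper impulses while keeping step edges sharp."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Synthetic test image: a vertical step edge plus one bright noise pixel
img = np.zeros((8, 8))
img[:, 4:] = 100.0        # step edge between columns 3 and 4
img[2, 1] = 255.0         # impulse ("salt") noise
den = median_filter(img)
print(den[2, 1], den[3, 3], den[3, 4])   # noise removed, edge intact
```

In a real-time pipeline, this per-pixel loop would be replaced by a vectorized or hardware implementation, but the edge-preserving behavior is the same.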