Abstract: The present paper considers the steady free
convection boundary layer flow of a viscoelastic fluid with constant
temperature in the presence of heat generation. The boundary layer
equations are an order higher than those for the Newtonian (viscous)
fluid and the adherence boundary conditions are insufficient to
determine the solution of these equations completely. The governing
boundary layer equations are first transformed into non-dimensional
form using a special dimensionless group. Computations are
performed numerically using the Keller-box method, augmented with
an extra boundary condition at infinity, and the results are displayed
graphically to illustrate the influence of the viscoelastic parameter K,
the heat generation parameter γ, and the Prandtl number Pr on the velocity
and temperature profiles. The results of the surface shear stress in
terms of the local skin friction and the surface rate of heat transfer in
terms of the local Nusselt number for a selection of the heat
generation parameter γ (= 0.0, 0.2, 0.5, 0.8, 1.0) are obtained and
presented in both tabular and graphical formats. In the absence of
internal heat generation inside the fluid domain (γ = 0.0), the present
numerical results show excellent agreement with previously published
results.
Abstract: This paper studies the use of nonparametric
models for Gross National Product data in Turkey and the Stanford
heart transplant data. Two nonparametric techniques, smoothing
spline and kernel regression, are discussed. The main goal is to
compare the predictive performance of the nonparametric regression
models. The numerical studies indicate that smoothing spline
regression estimators perform better than kernel regression
estimators.
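As an illustration of one of the two techniques compared above, a minimal Nadaraya-Watson kernel regression estimator can be sketched as follows (a Gaussian kernel and made-up data; this is a generic sketch, not the paper's implementation):

```python
import math

def nadaraya_watson(xs, ys, x, h):
    """Nadaraya-Watson kernel regression estimate at point x,
    using a Gaussian kernel with bandwidth h."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    total = sum(weights)
    return sum(w * yi for w, yi in zip(weights, ys)) / total

# Toy data: noiseless samples of y = x^2 on [0, 1]
xs = [i / 10 for i in range(11)]
ys = [xi ** 2 for xi in xs]
print(round(nadaraya_watson(xs, ys, 0.5, 0.1), 2))  # close to 0.25
```

The bandwidth h plays the same bias-variance role as the smoothing parameter of a smoothing spline, which is what makes the two estimators directly comparable in studies like the one above.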
Abstract: Histogram plays an important statistical role in digital
image processing. However, the existing quantum image models are
inadequate for this kind of image statistical processing because
different gray scales are not distinguishable. In this paper, a novel
quantum image representation model is first proposed in which pixels
with different gray scales can be distinguished and operated on
simultaneously. Based on the new model, a fast quantum algorithm for
constructing histograms of quantum images is designed. Performance
comparison reveals that the new quantum algorithm achieves an
approximately quadratic speedup over its classical counterpart. The
proposed quantum model and algorithm are significant for future
research in quantum image processing.
Abstract: This paper proposes a view-point-insensitive human
pose recognition system using a neural network. The recognition
system consists of a silhouette image capturing module, a data-driven
database, and a neural network. The advantages of our system are:
first, multiple view-point silhouette images of a 3D human model can
be captured automatically, which reduces the time-consuming task of
database construction. Second, we develop a large feature database to
provide view-point insensitivity in pose recognition. Third, we use a
neural network to recognize human pose from multiple views, because
every pose from each model has similar feature patterns, even though
each model has a different appearance and view-point. To construct
the database, we create 3D human models using 3D modeling tools.
Contour shape is used to convert each silhouette image into a
12-dimensional feature vector. This extraction task is processed
semi-automatically, with the benefit that capturing images and
converting them to silhouettes in a real capture environment is
unnecessary. We demonstrate the effectiveness of our approach with
experiments in a virtual environment.
Abstract: Biochemical Oxygen Demand (BOD) is a measure of
the oxygen used in bacteria mediated oxidation of organic substances
in water and wastewater. Theoretically an infinite time is required for
complete biochemical oxidation of organic matter, but the
measurement is made over a 5-day test period at 20 °C or a 3-day
test period at 27 °C, with or without dilution. Researchers have worked to further
reduce the time of measurement.
The objective of this paper is to review advances made in
BOD measurement, primarily to minimize the measurement time and
overcome the measurement difficulties. The literature on four such
techniques is surveyed: BOD-BART™, biosensors, the
ferricyanide-mediated approach, and the luminous bacteria
immobilized chip method. The basic principle, method of
determination, data validation, and advantages and disadvantages of
each method are covered.
In the BOD-BART™ method, the time lag for the
system to change from an oxidative to a reductive state is calculated.
Biosensors are biological sensing elements coupled with a transducer
that produces a signal proportional to the analyte concentration. Each
microbial species has its own metabolic deficiencies; co-immobilization
of bacteria in a sol-gel biosensor increases the range of substrates. In
the ferricyanide-mediated approach, ferricyanide is used as the
electron acceptor instead of oxygen. In the luminous bacterial
cells-immobilized chip method, bacterial bioluminescence caused by
lux genes is observed; physiological responses are measured and
correlated to BOD through the reduction or emission of light.
There is a scope to further probe into the rapid estimation of BOD.
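For context, the conventional test periods discussed above rest on the standard first-order BOD model, BOD_t = L0(1 − e^(−kt)); a minimal evaluation with illustrative (assumed) values, not data from any of the reviewed papers:

```python
import math

def bod_t(L0, k, t):
    """First-order BOD model: oxygen demand (mg/L) exerted after t days,
    given ultimate BOD L0 (mg/L) and rate constant k (1/day)."""
    return L0 * (1 - math.exp(-k * t))

# Illustrative values only: ultimate BOD 200 mg/L, k = 0.23/day,
# evaluated for the standard 5-day test at 20 °C
print(round(bod_t(200, 0.23, 5), 1))
```

The model makes the reviewed rapid methods' goal concrete: since complete oxidation is only reached as t → ∞, any practical test measures a fraction of L0, and shorter tests must infer that fraction faster.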
Abstract: Hepatocellular carcinoma, also called hepatoma, most
commonly appears in a patient with chronic viral hepatitis. In
patients with a higher suspicion of HCC, such as a small or subtle
rise in serum enzyme levels, the best method of diagnosis
involves a CT scan of the abdomen, though at high cost. The aim of
this study was to improve the physician's ability to detect HCC early,
using a probabilistic neural network-based approach, in order
to save time and hospital resources.
Abstract: Web 2.0 (social networking, blogging and online
forums) can serve as a data source for social science research because
it contains a vast amount of information from many different users.
The volume of that information has been growing at a very high rate,
becoming a network of heterogeneous data; this heterogeneity makes
information difficult to find and therefore less useful. We have
proposed a novel theoretical model for gathering and processing data
from Web 2.0 that better reflects the semantic content of web pages.
This article deals with the analysis part of the model and its use for
content analysis of blogs. The introductory part of the article
describes the methodology for gathering and processing data from
blogs. The next part of the article focuses on the evaluation and
content analysis of blogs that write about a specific trend.
Abstract: In this paper, we apply the FM methodology to the
cross-section of Romanian-listed common stocks and investigate the
explanatory power of market beta on the cross-section of common
stock returns from the Bucharest Stock Exchange. Various assumptions
are empirically tested, such as linearity, market efficiency, the "no
systematic effect of non-beta risk" hypothesis, and the positive
expected risk-return trade-off hypothesis. We find that the Romanian
stock market shows the same properties as the other emerging
markets in terms of efficiency and significance of the linear
risk-return models. Our analysis included weekly returns from January
2002 until May 2010 and the portfolio formation, estimation and
testing was performed in a rolling manner using 51 observations (one
year) for each stage of the analysis.
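The market beta whose explanatory power is tested above is the slope of the one-factor market model, beta = cov(R_i, R_m) / var(R_m); a minimal sketch with made-up weekly returns (not data from the study):

```python
def beta(stock_returns, market_returns):
    """Market beta: cov(R_i, R_m) / var(R_m), the slope of the
    one-factor market model regression of stock on market returns."""
    n = len(stock_returns)
    m_mean = sum(market_returns) / n
    s_mean = sum(stock_returns) / n
    cov = sum((s - s_mean) * (m - m_mean)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - m_mean) ** 2 for m in market_returns) / n
    return cov / var

# Hypothetical weekly returns for one stock and the market index
print(round(beta([0.02, -0.01, 0.03, 0.00],
                 [0.01, -0.005, 0.02, 0.001]), 2))
```

In a Fama-MacBeth-style test, betas like this are first estimated per portfolio over a window (here, 51 weekly observations), then used as regressors in cross-sectional return regressions.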
Abstract: This research focused on the development of a degradable starch-based packaging film with enhanced mechanical properties. A series of low-density polyethylene (LDPE)/tapioca starch compounds with various tapioca starch contents were prepared by twin-screw extrusion, with maleic anhydride grafted polyethylene added as a compatibilizer. Palm cooking oil was used as a processing aid to ease the blown-film process, so the degradable film can be processed on a conventional blown-film machine. Their characteristics, mechanical properties, and biodegradation were studied by Fourier Transform Infrared (FTIR) spectroscopy and optical measurements, tensile testing, and exposure to a fungal environment, respectively. High starch content had an adverse effect on the tensile properties of the LDPE/tapioca starch blends. However, the addition of the compatibilizer improved the interfacial adhesion between the two materials and hence the tensile properties of the films. High starch content was also found to increase the rate of biodegradation of the LDPE/tapioca starch films, as shown by exposing the films to a fungal environment: the growth of microbial colonies on the surface of the LDPE/tapioca starch film indicates that the granular starch present on the polymer surface is attacked by microorganisms until most of it is assimilated as a carbon source.
Abstract: An efficient parallel form in a digital signal processor can improve algorithm performance. The butterfly structure plays an important role in the fast Fourier transform (FFT), because its symmetric form is suitable for hardware implementation. Although it offers a symmetric structure, performance is reduced by the data-dependent flow characteristic. Even though recent research on novel memory reference reduction methods (NMRRM) for FFT focuses on reducing memory references for twiddle factors, the data-dependent property still exists. In this paper, we propose a parallel-computing approach for FFT implementation on a digital signal processor (DSP) that is based on a data-independent property and still retains low memory reference. The proposed method combines the final two steps of the NMRRM FFT into a novel data-independent structure, which makes it well suited to multi-operation-unit digital signal processors and dual-core systems. We applied the proposed low-memory-reference radix-2 FFT method on a TI TMS320C64x DSP. Experimental results show the method reduces clock cycles by 33.8% compared with the NMRRM FFT implementation while keeping the low-memory-reference property.
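The butterfly structure underlying the algorithms above can be sketched as a plain iterative radix-2 decimation-in-time FFT (a textbook Cooley-Tukey version for illustration only; the data-independent, NMRRM-based variant proposed in the paper is not reproduced here):

```python
import cmath

def fft_radix2(x):
    """Iterative radix-2 decimation-in-time FFT; len(x) must be a
    power of two. Each stage applies the symmetric butterfly."""
    n = len(x)
    x = list(x)
    # Bit-reversal permutation reorders the input for in-place stages
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # Butterfly stages: combine pairs at stride size/2 with twiddles
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                a = x[start + k]
                b = x[start + k + size // 2] * w
                x[start + k] = a + b
                x[start + k + size // 2] = a - b
                w *= w_step
        size *= 2
    return x

print([round(abs(v), 3) for v in fft_radix2([1, 1, 1, 1, 0, 0, 0, 0])])
# → [4.0, 2.613, 0.0, 1.082, 0.0, 1.082, 0.0, 2.613]
```

The data dependence the paper targets is visible here: every butterfly reads two values written by the previous stage, which serializes the stages unless the structure is reorganized.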
Abstract: To simulate heating systems in buildings, a research-oriented computer code has been developed at Sharif University of Technology in Iran, in which the climate, the existing heating equipment in buildings, consumer behavior, and their interactions are considered to simulate energy consumption in conventional systems such as heaters, radiators, and fan-coils. To validate the computer code, the available data of five buildings was used, and the computed energy consumption was compared with the energy estimated from monthly bills. The initial heating system was then replaced by an alternative system and the effect of this change on energy consumption was observed. As a result, the effect of changing heating equipment on energy consumption was investigated in different climates. Changing from heaters to radiators yields energy savings of up to 50% in all climates, and changing from radiators to fan-coils decreases energy consumption in climates with cold, dry winters.
Abstract: User interaction components of Augmented Reality (AR) systems have to be tested with users in order to find and fix usability problems as early as possible. In this paper we will report on a user-centered design approach for AR systems following the experience acquired during the design and evaluation of a software prototype for an AR-based educational platform. In this respect we will focus on the re-design of the user task based on the results from a formative usability evaluation. The basic idea of our approach is to describe task scenarios in a tabular format, to develop a task model in a task modeling environment and then to simulate the execution.
Abstract: Group work, projects and discussions are important
components of teacher education courses whether they are face-to-face,
blended or exclusively online formats. This paper examines the
varieties of tasks and challenges of this learning format in a
face-to-face teacher education class, providing specific examples of
both failure and success from both the student and instructor
perspectives. The discussion begins with a brief history of
collaborative and cooperative learning, moves to an exploration of the
promised benefits, and then takes a look at some of the challenges that
can arise specifically from the use of new technologies. The discussion
concludes with guidelines and specific suggestions.
Abstract: There has been much research on detecting collisions between real and virtual objects in 3D space. In general, these techniques require huge computing power, so many systems are built using cloud computing, network computing, and distributed computing. For these reasons, this paper proposes a novel fast 3D collision detection algorithm between real and virtual objects using 2D intersection areas. The proposed algorithm uses 4 cameras and a coarse-and-fine method to improve the accuracy and speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes from all camera views; the result of this step is the set of indices of virtual sensors with a possibility of collision in 3D space. To decide collisions accurately, in the fine step the system performs collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can help many other research areas and application fields such as HCI, augmented reality, and intelligent spaces.
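The coarse step described above — ruling out a 3D collision unless the real and virtual silhouettes overlap in every camera view — can be illustrated with axis-aligned bounding boxes standing in for full silhouettes (a deliberate simplification of the paper's method, for illustration only):

```python
def bbox(points):
    """Axis-aligned bounding box (x0, y0, x1, y1) of a 2D point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_intersect(a, b):
    """True if two axis-aligned boxes (x0, y0, x1, y1) overlap."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def coarse_collision(real_views, virtual_views):
    """Coarse step: a 3D collision is possible only if the 2D
    silhouettes overlap in EVERY camera view (visual-hull logic)."""
    return all(boxes_intersect(bbox(r), bbox(v))
               for r, v in zip(real_views, virtual_views))

# Toy silhouettes of one real and one virtual object in 4 camera views
real = [[(0, 0), (2, 0), (2, 2), (0, 2)]] * 4
virt = [[(1, 1), (3, 1), (3, 3), (1, 3)]] * 4
print(coarse_collision(real, virt))  # overlap in all views -> True
```

Only candidates that pass this cheap 2D test need the expensive fine step (visual-hull reconstruction and collision testing in 3D).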
Abstract: This study extends research on the relationship
between marketing strategy and market segmentation by
investigating market segments in the cement industry.
Competitive strength and rivals' distance from the factory were used
as business environment variables. Three segments (positive, neutral
or indifferent, and zero zones) were identified as strategic segments.
For each segment a marketing strategy (aggressive, defensive, or
decline) was developed. This study employed data from the cement
industry to fulfill two objectives: first, to give a framework for
the segmentation of the cement industry, and second, to develop
marketing strategies for varying competitive strength. Fifty-six
questionnaires containing closed- and open-ended questions were
collected and analyzed. Results supported the theory that segments
tend to be more aggressive than defensive when competitive strength
increases. It is concluded that high-strength segments follow total
market coverage, concentric diversification, and frontal attack on
their competitors. With decreased competitive strength, businesses
tend to follow a multi-market strategy, product
modification/improvement, and flank attack on direct competitors.
Segments with weak competitive strength followed focus and decline
strategies.
Abstract: Efficient utilization of existing water is a pressing
need for Pakistan, due to a rising population, reduction in present
storage capacity, and poor canal delivery efficiency of 30 to 40%.
A study was conducted to evaluate an irrigation system in the
cotton-wheat zone of Pakistan after watercourse lining. The
evaluation is made on the basis of cropping pattern and salinity.
This study employed an index-based approach combining a
geographic information system with field data. Satellite images
from different years were used to examine the effective area. Several
combinations of the ratios of signals received in different spectral
bands were used to develop this index. The near-infrared and
thermal IR spectral bands proved to be most effective, as this
combination allowed easy detection of the salt-affected area and the
cropping pattern of the study area. Results showed that 9.97% of the
area was under salinity in 1992, 9.17% in 2000, and 2.29% in 2005.
Similarly, 45% of the area was under vegetation in 1992, improving
to 56% and 65% in 2000 and 2005, respectively. On the basis of these
results, the evaluation shows a 30% increase in performance after the
watercourse improvement.
Abstract: Context awareness is a capability whereby mobile
computing devices can sense their physical environment and adapt
their behavior accordingly. The term context-awareness, in
ubiquitous computing, was introduced by Schilit in 1994 and has
become one of the most exciting concepts in early 21st-century
computing, fueled by recent developments in pervasive computing
(i.e. mobile and ubiquitous computing). These include computing
devices worn by users, embedded devices, smart appliances, sensors
surrounding users and a variety of wireless networking technologies.
Context-aware applications use context information to adapt
interfaces, tailor the set of application-relevant data, increase the
precision of information retrieval, discover services, make the user
interaction implicit, or build smart environments. For example, a
context-aware mobile phone will know that the user is currently in a
meeting room and reject any unimportant calls. One of the major
challenges in providing users with context-aware services lies in
continuously monitoring their contexts based on numerous sensors
connected to the context aware system through wireless
communication. A number of context-aware frameworks based on
sensors have been proposed, but many of them have neglected the
fact that monitoring with sensors imposes heavy workloads on
ubiquitous devices with limited computing power and battery capacity. In this
paper, we present CALEEF, a lightweight and energy efficient
context aware framework for resource limited ubiquitous devices.
Abstract: Knowledge discovery from text and ontology learning
are relatively new fields. However, they are used in many
fields like Information Retrieval (IR) and its related domains. Human
Plausible Reasoning (HPR) based IR systems, for example, need a
knowledge base as their underlying system, which is currently built
by hand. In this paper we propose an architecture based on ontology
learning methods to automatically generate the needed HPR
knowledge base.
Abstract: The current study describes a multi-objective optimization technique for positioning houses in a residential neighborhood. The main task is the placement of residential houses in a favorable configuration satisfying a number of objectives. Solving the house layout problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g. energy efficiency, sky view, daylight, road network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor spaces. Hence, most of the residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g. daylight and direction to favorite views). This investigation introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique explores the search space for possible solutions. This study considers two-dimensional house planning problems; however, the technique can be extended to solve three-dimensional cases.
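One simple way to trade off competing placement objectives like those listed above is a weighted-sum scalarization; the toy 1D sketch below uses hypothetical daylight and privacy objective functions (the paper's actual multi-objective technique is not specified here and may instead keep objectives separate, e.g. as a Pareto front):

```python
def score(x, objectives, weights):
    """Weighted-sum scalarization of competing objectives,
    each mapping a candidate position to a value in [0, 1]."""
    return sum(w * f(x) for f, w in zip(objectives, weights))

# Hypothetical toy objectives over a 1D lot coordinate x in [0, 1]
def daylight(x):
    return x          # more daylight toward the open end of the lot

def privacy(x):
    return 1 - x      # more privacy away from the street

# Exhaustive search over a coarse grid of candidate positions
best = max((i / 100 for i in range(101)),
           key=lambda x: score(x, [daylight, privacy], [0.7, 0.3]))
print(best)
```

Changing the weights moves the optimum, which mirrors how the design requirements above shift from one project, location, or client to another.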
Abstract: Finger spelling is an art of communicating by signs
made with the fingers, and has been introduced into sign language to
serve as a bridge between sign language and verbal language.
Previous approaches to finger spelling recognition are classified into
two categories: glove-based and vision-based approaches. The
glove-based approach is simpler and more accurate in recognizing
hand posture than the vision-based one, yet its interface requires the
user to wear a cumbersome device and carry a load of cables
connecting the device to a computer. In contrast, vision-based
approaches provide an attractive alternative to this cumbersome
interface and promise more natural and unobtrusive human-computer
interaction. Vision-based approaches generally consist of two steps,
hand extraction and recognition, which are processed independently.
This paper proposes a real-time vision-based Korean finger spelling
recognition system that integrates hand extraction into recognition.
First, we tentatively detect a hand region using the CAMShift
algorithm. Then the fill factor and aspect ratio estimated from the
width and height given by CAMShift are used to choose candidates
from the database, which reduces the number of matches in the
recognition step. To recognize the finger spelling, we use DTW
(dynamic time warping) based on modified chain codes, to be robust
to scale and orientation variations. In this procedure, since accurate
hand regions, without holes and noise, should be extracted to improve
precision, we use the graph cuts algorithm, which globally minimizes
an energy function elegantly expressed by Markov random fields
(MRFs). In the experiments, the computational time is less than
130 ms and does not depend on the number of finger spelling
templates in the database, since candidate templates are selected in
the extraction step.
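The DTW matching over chain codes described above can be sketched as follows (standard DTW with a cyclic distance on 8-direction chain codes; the paper's modified chain codes are not specified, so this is a generic illustration):

```python
def chain_dist(c1, c2):
    """Cyclic distance between two 8-direction chain codes (0..7):
    directions 7 and 0 are adjacent, which tolerates small rotations."""
    diff = abs(c1 - c2) % 8
    return min(diff, 8 - diff)

def dtw(a, b, dist):
    """Dynamic time warping distance between sequences a and b,
    robust to local stretching/compression of a contour."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A locally stretched contour still matches its template at zero cost
print(dtw([0, 1, 2, 2, 3], [0, 1, 2, 3], chain_dist))  # 0.0
```

The stretching tolerance is what provides the scale robustness mentioned above: a larger hand produces a longer chain code tracing the same directional pattern.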