Abstract: Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an
industrially important hydrolytic enzyme with numerous applications
and considerable industrial potential. In the present study, a kinetic
model has been developed for the batch fermentation used for the
production of tannase by Aspergillus flavus MTCC 3783. A maximum
tannase activity of 143.30 U/ml was obtained at 96 hours under
optimum operating conditions of 35°C, an initial pH of 5.5 and an
inducer (tannic acid) concentration of 3% (w/v), for a fermentation
period of 120 hours. The biomass concentration reached a maximum
of 6.62 g/l at 96 hours, with no further increase until the end of the
fermentation. Various unstructured kinetic models were analyzed to
simulate the experimental values of microbial growth, tannase
activity and substrate concentration. The Logistic model for
microbial growth, the Luedeking-Piret model for tannase production
and a substrate utilization kinetic model were capable of predicting
the fermentation profile, with high coefficient of determination (R²)
values of 0.980, 0.942 and 0.983 respectively. The results indicated
that the unstructured models describe the fermentation kinetics
effectively.
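The two models named above take a simple differential form and can be integrated numerically; the sketch below uses Euler integration with illustrative placeholder parameters (only the biomass ceiling of 6.62 g/l is taken from the study; mu_max, alpha and beta are hypothetical, not the fitted values).

```python
# Illustrative Euler integration of the Logistic and Luedeking-Piret
# models. Parameter values are hypothetical placeholders, not the
# values fitted in the study.

def simulate(mu_max=0.08, x_max=6.62, alpha=20.0, beta=0.05,
             x0=0.1, p0=0.0, dt=0.1, t_end=120.0):
    """Return (times, biomass, product) profiles.

    Logistic growth:  dX/dt = mu_max * X * (1 - X / x_max)
    Luedeking-Piret:  dP/dt = alpha * dX/dt + beta * X
    """
    t, x, p = 0.0, x0, p0
    times, xs, ps = [t], [x], [p]
    while t < t_end:
        dxdt = mu_max * x * (1.0 - x / x_max)   # growth slows near x_max
        dpdt = alpha * dxdt + beta * x          # growth- and non-growth-associated terms
        x += dxdt * dt
        p += dpdt * dt
        t += dt
        times.append(t); xs.append(x); ps.append(p)
    return times, xs, ps

times, xs, ps = simulate()
print(f"final biomass ~ {xs[-1]:.2f} g/l")
```

In the Luedeking-Piret term, alpha weights growth-associated production and beta non-growth-associated production; fitting both to the data is what distinguishes the two contributions.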
Abstract: The goal of this paper is to present the diagnostic
contribution that the screening instrument Mini-Mental State
Examination-2: Expanded Version (MMSE-2:EV) brings to detecting
cognitive impairment and to monitoring the progression of
degenerative disorders. The diagnostic significance is underlined by
the interpretation of the MMSE-2:EV scores resulting from
administration of the test to patients with mild and major
neurocognitive disorders. The cases were selected from current
practice so as to cover a broad and representative range of
neurocognitive pathology: mild cognitive impairment, Alzheimer's
disease, vascular dementia, mixed dementia, Parkinson's disease, and
conversion of mild cognitive impairment into Alzheimer's disease.
The MMSE-2:EV version was applied one month after the initial
assessment, three months after the first reevaluation and then every
six months, alternating the blue and red forms. Adjusted for age and
educational level, the raw scores were converted into T scores and
then, using the mean and the standard deviation, the z scores were
calculated. The differences in raw scores between evaluations were
analyzed for statistical significance in order to establish the
progression of the disease over time. The results indicated that the
psycho-diagnostic approach to evaluating cognitive impairment with
the MMSE-2:EV is reliable and that the application interval is
optimal. In clinical settings with a high patient flow, the
MMSE-2:EV is a safe and fast psychodiagnostic solution. Clinicians
can reach objective decisions, and for patients the test demands little
time and energy, causes minimal discomfort and does not force them
to travel frequently.
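The score standardization described above follows the conventional relation between z and T scores (T scores have mean 50 and SD 10); a minimal sketch, with hypothetical normative values:

```python
def z_score(raw, mean, sd):
    """Standardize a raw score against a normative mean and SD."""
    return (raw - mean) / sd

def t_score(z):
    """T scores have mean 50 and SD 10 by convention: T = 50 + 10z."""
    return 50.0 + 10.0 * z

# Hypothetical example: raw score of 84 against an assumed normative
# mean of 90 and SD of 6 for the patient's age/education group.
z = z_score(84, 90, 6)
print(z, t_score(z))  # -1.0 40.0
```

A score one standard deviation below the normative mean thus maps to z = -1 and T = 40, which is how raw-score differences between evaluations become comparable across age and education groups.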
Abstract: The lifetime of a wireless sensor network can be
effectively increased by using scheduling operations. Once the
sensors are randomly deployed, the task at hand is to find the largest
number of disjoint sets of sensors such that every sensor set provides
complete coverage of the target area. At any instant, only one of these
disjoint sets is switched on, while all others are switched off. This
paper proposes a heuristic search method to find the maximum
number of disjoint sets that completely cover the region. A
population of randomly initialized members is made to explore the
solution space. A set of heuristics has been applied to guide the
members to a possible solution in their neighborhood. The heuristics
accelerate the convergence of the algorithm. The best solution explored
by the population is recorded and is continuously updated. The
proposed algorithm has been tested for applications which require
sensing of multiple target points, referred to as point coverage
applications. Results show that the proposed algorithm outperforms
the existing algorithms: it always finds the optimal solution, and
does so with fewer fitness function evaluations than the existing
approaches.
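The underlying partitioning task — disjoint sensor sets, each fully covering the targets — can be illustrated with a simple greedy heuristic. This is an illustrative baseline only; the paper's method is a population-based heuristic search, not this greedy rule.

```python
def disjoint_covers(coverage):
    """Greedily partition sensors into disjoint sets that each cover all targets.

    coverage: dict mapping sensor id -> set of target points it senses.
    Returns a list of disjoint sensor sets, each providing full coverage.
    """
    targets = set().union(*coverage.values())
    unused = dict(coverage)
    covers = []
    while True:
        chosen, covered = [], set()
        # Pick sensors covering the most targets first.
        for sid in sorted(unused, key=lambda s: -len(unused[s])):
            if unused[sid] - covered:      # sensor adds new coverage
                chosen.append(sid)
                covered |= unused[sid]
            if covered == targets:
                break
        if covered != targets:
            break  # remaining sensors cannot form another full cover
        for sid in chosen:
            del unused[sid]                # keep the sets disjoint
        covers.append(set(chosen))
    return covers

sensors = {1: {"a", "b"}, 2: {"c"}, 3: {"a", "c"}, 4: {"b"}}
print(disjoint_covers(sensors))  # [{1, 3}]
```

At any instant only one of the returned sets needs to be active, so the network lifetime scales roughly with the number of disjoint covers found — which is why maximizing that number is the optimization objective.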
Abstract: Distillery spentwash contains high chemical oxygen
demand (COD), biological oxygen demand (BOD), color, total
dissolved solids (TDS) and other contaminants even after biological
treatment. The effluent cannot be discharged as such into surface
water bodies or onto land without further treatment. Reverse osmosis
(RO) treatment plants have been installed at the tertiary level in
many distilleries in India, but they do not work properly due to
fouling caused by the high concentration of organic matter and other
contaminants in biologically treated spentwash. To make membrane
treatment a proven and reliable technology, proper pre-treatment is
mandatory. In the present study, ultra-filtration (UF) was performed
as pre-treatment for RO at the tertiary stage. The operating
parameters, namely initial pH (pHo: 2-10), trans-membrane pressure
(TMP: 4-20 bar) and temperature (T: 15-43°C), were varied in the
experiments with the UF system. The operating conditions were
optimized in terms of COD, color, TDS and TOC removal using
response surface methodology (RSM) with a central composite
design. The results showed that, at the optimized conditions, UF
removed 62% of COD, 93.5% of color and 75.5% of TDS, and the
permeate flux increased from 17.5 l/m²/h (RO alone) to 38 l/m²/h
(UF-RO). The performance of the RO system was greatly improved
in terms of both pollutant removal and water recovery.
Abstract: Optic disk segmentation plays a key role in the mass
screening of individuals with diabetic retinopathy and glaucoma
ailments. An efficient hardware-based algorithm for optic disk
localization and segmentation would aid the development of an
automated retinal image analysis system for real-time applications.
Herein, a pixel-intensity-based fractal analysis algorithm,
implemented on a TMS320C6416 DSK DSP board, for automatic
localization and segmentation of the optic disk is reported. The
experiment has been performed on color and
fluorescent angiography retinal fundus images. Initially, the images
were pre-processed to reduce the noise and enhance the quality. The
retinal vascular tree of the image was then extracted using the Canny
edge detection technique. Finally, a pixel intensity based fractal
analysis is performed to segment the optic disk by tracing the origin
of the vascular tree. The proposed method is examined on three
publicly available data sets of the retinal image and also with the data
set obtained from an eye clinic. The average accuracy achieved is
96.2%. To the best of our knowledge, this is the first work reporting
the use of TMS320C6416DSK DSP board and pixel intensity based
fractal analysis algorithm for an automatic localization and
segmentation of the optic disk. This will pave the way for developing
devices for detection of retinal diseases in the future.
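As background for the vessel-extraction step above, the gradient computation at the core of Canny edge detection can be sketched with Sobel kernels in pure Python (a simplified illustration only: the full Canny detector adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding).

```python
# Simplified sketch of the gradient step underlying Canny edge
# detection: Sobel gradients on a grayscale image (list of lists).
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Gradient magnitude at each interior pixel via 3x3 Sobel kernels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge: strong response in the interior columns.
img = [[0, 0, 255, 255]] * 4
mag = gradient_magnitude(img)
```

Pixels where this magnitude is large (after thresholding) form the edge map from which the vascular tree, and hence the optic disk at its origin, is traced.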
Abstract: In this paper, the cable model of dendrites has been
considered. The dendrites are treated as cylindrical cables of various
segments, with variable lengths and radii that taper from the starting
point at the synapse to the end points. When a neuron receives a
particular event signal, only some dendrites are active at a given
instant. Initial current signals with different current flows in the
dendrites are assumed. Due to the overlap and coupling of active
dendrites, they induce currents in each other's segments at a given
instant. However, how these currents are induced in the various
segments of active dendrites through mutual coupling has not been
presented in the literature. This paper presents a model for the
currents induced in active dendrite segments by mutual coupling at
the starting instant of dendritic activity.
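As background for the cable model mentioned above, the standard passive cable equation (textbook form, stated here for reference — not the paper's specific coupled-current model) is:

```latex
\lambda^{2} \frac{\partial^{2} V}{\partial x^{2}}
  = \tau_{m} \frac{\partial V}{\partial t} + V ,
\qquad
\lambda = \sqrt{\frac{r_{m}}{r_{i}}}, \quad \tau_{m} = r_{m} c_{m}
```

where V is the membrane potential along the cable, λ the length constant, τ_m the membrane time constant, r_m the membrane resistance, r_i the axial (intracellular) resistance and c_m the membrane capacitance, taken per unit length of cable. Mutual coupling between active dendrites adds induced-current source terms to this baseline equation, which is the effect the paper models.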
Abstract: This initial study is concerned with the behavior of
engineering students at Kuwait University, which has become a
concern amid global issues affecting education at all levels. A survey
was conducted to identify academic and societal issues affecting
engineering student performance. The study draws major conclusions
regarding private tutoring and the online availability of textbook
solution manuals.
Abstract: Remote arid areas of the vast expanses of the African
deserts hold huge subterranean reserves of brackish water resources
waiting for economic development. This work presents design
guidelines as well as initial performance data for new autonomous
solar desalination equipment that could help local communities
produce their own fresh water using solar energy alone and perhaps
even contribute to transforming desert lands into lush gardens. The
output of solar distillation equipment is typically low, averaging
around 3 l/m²/day. This new design, with an integrated, water-based,
environmentally friendly solar heat storage system, produced
5 l/m²/day in early spring weather. Equipment output during summer
exceeded 9 l/m²/day.
Abstract: Model predictive control is a kind of optimal feedback
control in which control performance over a finite future is optimized
with a performance index that has a moving initial time and a moving
terminal time. This paper examines the stability of model predictive
control for linear discrete-time systems with additive stochastic
disturbances. A sufficient condition for the stability of the closed-loop
system with model predictive control is derived by means of a linear
matrix inequality. Computational simulations are presented to verify
the effectiveness of the obtained stability condition.
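The moving-horizon performance index described in the opening sentence conventionally has the quadratic form below (generic notation: the weights Q ⪰ 0, R ≻ 0, terminal weight P and horizon N are illustrative, not the paper's specific choices):

```latex
J_{k} = \sum_{i=0}^{N-1} \left( x_{k+i}^{\top} Q\, x_{k+i}
      + u_{k+i}^{\top} R\, u_{k+i} \right)
      + x_{k+N}^{\top} P\, x_{k+N},
\qquad
x_{k+i+1} = A\, x_{k+i} + B\, u_{k+i} + w_{k+i}
```

where w_{k+i} is the additive stochastic disturbance. At each step k the index is minimized over the control sequence, only the first input is applied, and the horizon then moves forward — which is why both the initial and terminal times of the index are moving.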
Abstract: Floorplanning plays a vital role in the physical design
process of Very Large Scale Integrated (VLSI) chips. It is an
essential design step to estimate the chip area prior to the optimized
placement of digital blocks and their interconnections. Since VLSI
floorplanning is an NP-hard problem, many optimization techniques
have been adopted in the literature. In this work, a music-inspired
Harmony Search (HS) algorithm is used for the fixed die outline
constrained floorplanning, with the aim of reducing the total chip
area. HS draws inspiration from the musical improvisation process of
searching for a perfect state of harmony. Initially, B*-tree is used to
generate the primary floorplan for the given rectangular hard
modules, and then the HS algorithm is applied to obtain an optimal
solution for an efficient floorplan. The experimental results of the
HS algorithm are obtained for the MCNC benchmark circuits.
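The Harmony Search improvisation loop can be sketched generically as below. This is a continuous-variable toy version minimizing a sphere function to show the three HS operators (memory consideration, pitch adjustment, random selection); the paper applies HS to a B*-tree floorplan encoding, which is not reproduced here.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=1):
    """Minimize `objective` over box `bounds` with basic Harmony Search."""
    rng = random.Random(seed)
    new_vec = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    # Harmony memory, sorted best-first.
    memory = sorted((new_vec() for _ in range(hms)), key=objective)
    for _ in range(iters):
        x = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # memory consideration
                v = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                              # random selection
                v = rng.uniform(lo, hi)
            x.append(min(max(v, lo), hi))
        if objective(x) < objective(memory[-1]):
            memory[-1] = x                     # replace worst harmony
            memory.sort(key=objective)
    return memory[0]

best = harmony_search(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
```

The harmony memory consideration rate (hmcr) and pitch adjusting rate (par) balance exploitation of remembered good harmonies against exploration, mirroring a musician recalling, adjusting, or improvising a note.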
Abstract: In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and ball diameter are found to be significant factors for surface hardness, while the ball diameter and number of tool passes are significant for surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirmed the validity of the predictive model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness improved from 225 to 306 HV, an increase in near-surface hardness of about 36%, and the surface roughness improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, which correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and X-ray diffractometry were used to characterize the modified surface layer.
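The regression model fitted through RSM with a central composite design conventionally takes the full second-order form below (generic notation; the coefficients are estimated by least squares from the design points):

```latex
y = \beta_{0} + \sum_{i=1}^{k} \beta_{i} x_{i}
  + \sum_{i=1}^{k} \beta_{ii} x_{i}^{2}
  + \sum_{i<j} \beta_{ij} x_{i} x_{j} + \varepsilon
```

with k = 4 variables here (rolling force, ball diameter, initial roughness, number of tool passes) and one such model per response (surface hardness and surface roughness). The quadratic and interaction terms are what allow the design to locate an optimum inside the experimental region rather than on its boundary.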
Abstract: In recent years, there has been a decline in physical
activity (PA) among adults. Motivation has been shown to be a
crucial factor in maintaining physical activity. The purpose of this
study was to examine whether PA motives measured by the Physical
Activity and Leisure Motivation Scale (PALMS) predicted the actual
amount of PA at a later time, to provide evidence for the construct
validity of the PALMS. A quantitative, cross-sectional descriptive
research design was employed. A demographic form, the PALMS,
and the International Physical Activity Questionnaire Short form
(IPAQ-S) were used to assess motives for and amount of physical
activity in adults on two occasions. A sample of 489 male
undergraduate students aged 18 to 25 years (mean ± SD:
22.30 ± 8.13 years) took part in the study. Participants were divided
into three types of activities, namely exercise, racquet sports, and
team sports, and female participants took part in only one type of
activity, namely team sports. After 14 weeks, all 489 undergraduate
students who had completed the initial questionnaire (Occasion 1)
received the questionnaire via email (Occasion 2). Of the 489
students, 378 emailed back the completed questionnaire. The results
showed that not only were pertinent sub-scales of the PALMS
positively related to the amount of physical activity, but separate
regression analyses also showed the positive predictive effect of
PALMS motives on the amount of physical activity for each type of
activity among participants. This study supported the construct
validity of the PALMS by showing that the motives it measures did
predict the amount of PA. This information can be used to match
people with a specific sport or activity, which in turn could
potentially promote longer adherence to that activity.
Abstract: In this paper, we present a model-based regression test
suite reduction approach that uses EFSM model dependence analysis
and a probability-driven greedy algorithm to reduce software regression
test suites. The approach automatically identifies the difference
between the original model and the modified model as a set of
elementary model modifications. The EFSM dependence analysis is
performed for each elementary modification to reduce the regression
test suite, and then the probability-driven greedy algorithm is adopted
to select the minimum set of test cases from the reduced regression test
suite that cover all interaction patterns. Our initial experience shows
that the approach may significantly reduce the size of regression test
suites.
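The selection step — picking a minimal set of test cases that still covers all interaction patterns — can be sketched with a plain greedy set-cover rule. This is an illustrative baseline: the paper's probability-driven variant weights candidate tests, which is not reproduced here.

```python
def greedy_select(tests, patterns):
    """Pick a small subset of tests covering every interaction pattern.

    tests: dict mapping test id -> set of interaction patterns it covers.
    patterns: set of interaction patterns that must all be covered.
    """
    uncovered = set(patterns)
    selected = []
    while uncovered:
        # Choose the test covering the most still-uncovered patterns.
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        if not tests[best] & uncovered:
            raise ValueError("some patterns are uncoverable")
        selected.append(best)
        uncovered -= tests[best]
    return selected

suite = {"t1": {"p1", "p2"}, "t2": {"p2", "p3"}, "t3": {"p3"}}
print(greedy_select(suite, {"p1", "p2", "p3"}))  # ['t1', 't2']
```

Here t3 is dropped because t1 and t2 already cover all three patterns — the same redundancy-elimination effect that shrinks the regression suite.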
Abstract: With the evolution of technology, the expression of
opinions has shifted to the digital world. The domain of politics, one
of the hottest topics in opinion mining research, is combined here
with behavior analysis for determining political affiliation in texts,
which constitutes the subject of this paper. This study aims to
classify text in news and blogs as either Republican or Democrat
using the minimum number of features. As an initial set, 68 features,
64 of which were Linguistic Inquiry and Word Count (LIWC)
features, were tested against 14 benchmark classification algorithms.
In later experiments, the dimensionality of the feature vector was
reduced using 7 feature selection algorithms. The results show that
the “Decision Tree”, “Rule Induction” and “M5 Rule” classifiers,
when used with the “SVM” and “IGR” feature selection algorithms,
performed best, with up to 82.5% accuracy on the given dataset.
Further tests on a single feature and on linguistic-based feature sets
showed similar results. The feature “Function”, an aggregate feature
of the linguistic category, was found to be the most discriminating of
the 68 features, with an accuracy of 81% in classifying articles as
either Republican or Democrat.
Abstract: In this paper we propose a computer-aided solution based
on Genetic Algorithms to reduce the effort of drafting the FMEA
analysis and Control Plan reports required for a product launch in
manufacturing, and to improve the knowledge base of development
teams for future projects. The solution allows the design team to
enter the data required for the FMEA. The actual analysis is
performed using Genetic Algorithms to find an optimum between the
RPN risk factor and the cost of production. A feature of Genetic
Algorithms is that they can be used to find solutions to multi-criteria
optimization problems. In our case, the three specific FMEA risk
factors are considered together with the reduction of production cost.
The analysis tool generates final reports for all FMEA processes. The
data obtained in the FMEA reports are automatically integrated, with
other entered parameters, into the Control Plan. The solution is
implemented as an application running on an intranet across two
servers: one hosting the analysis and plan generation engine and the
other hosting the database where the initial parameters and results are
stored. The results can then be used as starting solutions in the
synthesis of other projects. The solution was applied to the welding,
laser cutting and bending processes used to manufacture chassis for
buses. The advantages of the solution are the efficient elaboration of
documents in the current project, by automatically generating FMEA
and Control Plan reports using multi-criteria optimization of
production, and the building of a solid knowledge base for future
projects. The solution we propose is a cheap alternative to other
solutions on the market, as it uses Open Source tools in its
implementation.
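The RPN risk factor mentioned above is conventionally the product of the three FMEA ratings (severity, occurrence, detection, each on a 1-10 scale); a GA fitness can then trade it off against production cost. The weights below are hypothetical illustrations, not the paper's values.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: product of the three FMEA ratings (1-10)."""
    return severity * occurrence * detection

def fitness(severity, occurrence, detection, cost, w_risk=1.0, w_cost=0.5):
    """Toy multi-criteria fitness to minimize: weighted RPN plus weighted cost.

    The weights w_risk and w_cost are illustrative assumptions.
    """
    return w_risk * rpn(severity, occurrence, detection) + w_cost * cost

print(rpn(7, 4, 3))           # 84
print(fitness(7, 4, 3, 120))  # 144.0
```

A GA would evolve candidate process configurations, using such a scalarized fitness (or a Pareto ranking) to balance the three FMEA risk factors against production cost.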
Abstract: Knowledge of bone mechanical properties is important
for bone substitutes design and fabrication, and more efficient
prostheses development. The aim of this study is to characterize the
viscoelastic behavior of bone specimens, through stress relaxation
and fatigue tests performed to trabecular bone samples from bovine
femoral heads. Relaxation tests consisted of preloading the samples
at five different magnitudes and monitoring them for 1020 seconds,
fitting the results to a KWW mathematical model. Fatigue tests
consisted of 700 load cycles, after which the condition of the samples
was analyzed. In conclusion, a linear relation exists between the
relaxation stress and each preload, and samples with an initial
Young's modulus greater than 1.5 GPa showed no effects from the
fatigue loading cycles.
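The KWW (Kohlrausch-Williams-Watts) model used for the fits is the stretched-exponential relaxation function; a minimal evaluation sketch (the parameter values below are illustrative, not the study's fitted values):

```python
import math

def kww(t, sigma0, tau, beta):
    """KWW stretched exponential: sigma(t) = sigma0 * exp(-(t/tau)**beta).

    beta in (0, 1] stretches the decay; beta = 1 recovers a simple
    exponential (Maxwell-type) relaxation.
    """
    return sigma0 * math.exp(-((t / tau) ** beta))

# Illustrative parameters: initial stress 10 MPa, tau = 300 s, beta = 0.5,
# evaluated over the 1020 s relaxation window used in the study.
print(kww(0.0, 10.0, 300.0, 0.5))  # 10.0
print(round(kww(1020.0, 10.0, 300.0, 0.5), 3))
```

Fitting sigma0, tau and beta to each preload level is what allows the relaxation stress to be compared across preload magnitudes.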
Abstract: A compound parabolic concentrator (CPC) is a well-known
non-imaging concentrator that concentrates solar radiation onto a
receiver (PV cell). One disadvantage of the CPC is that it is tall and
narrow compared with its entry aperture diameter. Therefore, for
economic reasons, a truncation is performed by removing the top of
the full-height CPC. This also decreases the concentration ratio, but
the loss is negligible. In this paper, the flux distribution of an
untruncated and a truncated 2-D hollow compound parabolic trough
concentrator (hCPTC) design is presented. The untruncated design
has an initial height H = 193.4 mm with a concentration ratio
C_(2-D) = 4. This paper presents the optical simulation of the
compound parabolic trough concentrator using the ray-tracing
software TracePro. Results showed that, after truncation, the height
of the CPC is reduced by 45% from its initial height while the
geometrical concentration ratio decreases by only 10%. Thus, the
cost of reflector and dielectric material can be reduced, especially at
the manufacturing stage.
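For reference, the geometry behind these numbers follows the standard 2-D CPC relations (textbook background, not derived in this paper):

```latex
C_{2\text{-}D} = \frac{1}{\sin \theta_{a}},
\qquad
H_{\text{full}} = (a + a')\,\cot \theta_{a}
```

where θ_a is the acceptance half-angle and a, a′ are the entry and exit aperture half-widths. A ratio C_(2-D) = 4 corresponds to θ_a = arcsin(1/4) ≈ 14.5°. Truncation sacrifices little concentration because the uppermost reflector sections run nearly parallel to the optical axis and contribute only marginally to the aperture.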
Abstract: In this paper, the dependence of soliton pulses on phase in
a 10 Gbps, single-channel, dispersion-uncompensated
telecommunication system was studied. The characteristic feature of
periodic soliton interaction was noted at the interaction point
(I = 6202.5 km) within one collision length of L = 12405.1 km. The
interaction point was located for the 10 Gbps system with an initial
relative soliton spacing (q0) of 5.28 using perturbation theory. It is
shown that when two in-phase solitons are launched they interact at
the point I = 6202.5 km, but the interaction can be restricted by
introducing a phase difference initially. As the phase of the input
solitons increases, the deviation of the soliton pulses at 'I' also
increases. We have successfully demonstrated this effect in a
telecommunication set-up in terms of the quality factor (Q), where
Q = 0 for in-phase solitons. The Q was noted to be 125.9, 38.63,
47.53, 59.60, 161.37, and 78.04 for phases of 10°, 20°, 30°, 45°, 60°
and 90° respectively at the interaction point (I).
Abstract: Toddy sediment (TS) was cultured on a PDA medium to
determine the initial yeast load, and it was also subjected to sun,
shade, solar, dehumidified cold air (DCA) and hot air oven (at 40, 50
and 60°C) drying with a view to preserving the viability of the yeast.
Thereafter, a study was conducted according to a two-factor factorial
design in order to determine the best preservation method. The dried
TS from the best drying method was divided into two portions. One
portion was mixed with rice flour at a TS:rice flour ratio of 3:7, and
the mixture was divided into two again: one part was kept under
in-house conditions and the other in a refrigerator. The same
procedure was followed for the remaining portion of TS, but with
corn flour at the same ratio. All treatments were vacuum packed in
triple laminate pouches, and the best preservation method was
determined in terms of the leavening index (LI). The TS obtained
from the best preservation method was used to make foods (bread
and hoppers), and their organoleptic properties were evaluated
against those of ordinary foods using a sensory panel with a
five-point hedonic scale.
Results revealed that the yeast load of fresh TS was 58×10⁶ CFU/g.
The best drying method for preserving yeast viability was DCA,
because the LI of this treatment (96%) was higher than that of the
other three treatments. The organoleptic properties of foods prepared
by the best preservation method were the same as those of ordinary
foods according to the duo-trio test.
Abstract: Driver fatigue is an important factor in the increasing
number of road accidents. A dynamic template matching method is
proposed to address the problem of real-time driver fatigue detection
based on eye tracking. An effective vision-based approach is used to
analyze the driver's eye state to detect fatigue. The driver fatigue
system consists of face detection, eye detection, eye tracking, and
fatigue detection. Initially, frames are captured from a color video
taken in a car dashboard and transformed from the RGB to the
YCbCr color space to detect the driver's face. The Canny edge
operator is used to estimate the eye region, and the locations of the
eyes are extracted. The extracted eyes are used as templates for eye
tracking. The Edge Map Overlapping (EMO) and Edge Pixel Count
(EPC) matching functions are used for eye tracking to improve the
matching accuracy. The eyeball pixels are tracked within the eye
regions to determine the fatigue state of the driver.
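The RGB-to-YCbCr transform used for face detection is commonly the ITU-R BT.601 full-range conversion; a minimal sketch (the paper does not state which variant it uses, so BT.601 is an assumption here):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr using the ITU-R BT.601 full-range
    transform. Y is luma; Cb and Cr are chroma offsets around 128."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# White maps to Y ~ 255 with neutral chroma Cb ~ Cr ~ 128.
print(rgb_to_ycbcr(255, 255, 255))
```

Separating luma from chroma is what makes skin-tone (and hence face) detection robust to lighting changes: skin pixels cluster in a narrow Cb/Cr range largely independent of brightness.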