Abstract: Automatic license plate recognition (ALPR) is a technology that recognizes the registration plate (number plate or license plate) of a vehicle. In this paper, Indian vehicle number plates are extracted and their characters are recognized in an efficient manner. ALPR involves four major stages: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, removes noise and enhances the quality of the image using morphological operations and image subtraction. The second and most challenging phase ascertains the location of the license plate using Canny edge detection, dilation and erosion. In the third phase, individual characters are segmented using the Connected Component Approach (CCA), and in the final phase each segmented character is recognized using cross-correlation template matching, a scheme especially appropriate for fixed formats. Major applications of ALPR include toll collection, border control, parking, stolen-car detection, law enforcement, access control and traffic control. A database of 500 car images taken under dissimilar lighting conditions is used, and the efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle complies with Road Transport and Highways standards).
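The final recognition stage described above can be sketched with a plain normalized cross-correlation matcher. This is a minimal NumPy illustration, not the authors' implementation; the 5×5 template shapes and the labels are invented for the example.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a character patch and a template."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def recognize(patch, templates):
    """Return the label of the template with the highest correlation score."""
    return max(templates, key=lambda label: ncc(patch, templates[label]))

# Toy templates: a vertical stroke ("I") and a ring ("O").
T_I = np.zeros((5, 5)); T_I[:, 2] = 1
T_O = np.zeros((5, 5)); T_O[0, :] = T_O[-1, :] = T_O[:, 0] = T_O[:, -1] = 1
```

In a fixed-format plate, each segmented character patch would be resized to the template size and scored against every template in this way.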
Abstract: Hydrologic models are increasingly used as tools to
predict stormwater quantity and quality from urban catchments.
However, due to a range of practical issues, most models produce
gross errors in simulating complex hydraulic and hydrologic systems.
Difficulty in finding a robust approach for model calibration is one of
the main issues. Though automatic calibration techniques are
available, they are rarely used in common commercial hydraulic and
hydrologic modelling software e.g. MIKE URBAN. This is partly
due to the need for a large number of parameters and large datasets in
the calibration process. To overcome this practical issue, a
framework for automatic calibration of a hydrologic model was
developed in the R platform and is presented in this paper. The model was
developed based on the time-area conceptualization. Four calibration
parameters, including initial loss, reduction factor, time of
concentration and time-lag were considered as the primary set of
parameters. Using these parameters, automatic calibration was
performed using Approximate Bayesian Computation (ABC). ABC is
a simulation-based technique for performing Bayesian inference
when the likelihood is intractable or computationally expensive to
compute. To test the performance and usefulness, the technique was
used to simulate three small catchments in the Gold Coast, Australia. For
comparison, simulation outcomes from the same three catchments
using the commercial modelling software MIKE URBAN were used.
The graphical comparison shows that the MIKE URBAN results fall
within the upper and lower 95% credible intervals of the posterior
predictions as obtained via ABC. Statistical validation of the posterior
runoff predictions using the coefficient of determination (CD),
root mean square error (RMSE) and maximum error (ME) was found to be
reasonable for the three study catchments. The main benefit of using
ABC over MIKE URBAN is that ABC provides a posterior
distribution for runoff flow prediction, and therefore associated
uncertainty in predictions can be obtained. In contrast, MIKE
URBAN just provides a point estimate. Based on the results of the
analysis, the developed ABC framework appears to perform well for
automatic calibration.
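The ABC step described above can be illustrated with a toy rejection sampler. The runoff model, priors and tolerance below are invented stand-ins for the paper's time-area model and four-parameter setup; only the accept/reject mechanism itself is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, rain):
    """Toy runoff model: an initial loss followed by a constant reduction
    factor. A stand-in for the time-area model; 'loss' and 'rf' mirror two
    of the paper's four calibration parameters."""
    loss, rf = params
    return np.maximum(rain - loss, 0.0) * rf

def abc_rejection(observed, rain, n_draws=5000, tol=1.0):
    """Keep prior draws whose simulated runoff lies within tol (RMSE) of
    the observations; the kept draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = (rng.uniform(0, 5), rng.uniform(0, 1))  # priors (assumed)
        rmse = np.sqrt(np.mean((simulate(theta, rain) - observed) ** 2))
        if rmse < tol:
            accepted.append(theta)
    return np.array(accepted)

rain = np.array([10.0, 20.0, 5.0, 15.0])
observed = simulate((2.0, 0.5), rain)     # synthetic "truth"
posterior = abc_rejection(observed, rain)
```

The spread of the accepted draws is exactly what yields the credible intervals on runoff that the abstract contrasts with MIKE URBAN's point estimate.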
Abstract: The adjoint method has been used as a successful tool to
obtain sensitivity gradients in aerodynamic design and optimisation
for many years. This work presents an alternative approach to the
continuous adjoint formulation that enables one to compute gradients
of a given measure of merit with respect to control parameters other
than those pertaining to geometry. The procedure is then applied to
the steady 2–D compressible Euler and incompressible Navier–Stokes
flow equations. Finally, the results are compared with sensitivities
obtained by finite differences and theoretical values for validation.
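The finite-difference comparison mentioned above is straightforward to sketch. The merit function below is a made-up quadratic with a known analytic gradient; it stands in for the aerodynamic measure of merit, and the control parameters are a generic vector.

```python
import numpy as np

def central_diff_grad(J, alpha, h=1e-6):
    """Central finite-difference gradient of a scalar merit function J
    with respect to a vector of control parameters alpha."""
    alpha = np.asarray(alpha, dtype=float)
    g = np.zeros_like(alpha)
    for i in range(alpha.size):
        e = np.zeros_like(alpha)
        e[i] = h
        g[i] = (J(alpha + e) - J(alpha - e)) / (2 * h)
    return g

# Illustrative merit function with analytic gradient 2*alpha for validation.
J = lambda a: float(np.sum(a ** 2))
alpha0 = np.array([1.0, -2.0, 0.5])
fd_grad = central_diff_grad(J, alpha0)
analytic_grad = 2 * alpha0
```

In the adjoint setting, each entry of `fd_grad` would cost one or two extra flow solves, which is precisely why the adjoint method is preferred when there are many control parameters.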
Abstract: The cities of Johannesburg and Pretoria, both located in Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model that predicts the traffic flow pattern in advance, enabling motorists to make appropriate travel decisions ahead of time. The data used was collected by Mikro's Traffic Monitoring (MTM). A Multi-Layer Perceptron (MLP) was used on its own to construct the model, and the MLP was also combined with the Bagging ensemble method to train the data. The cross-validation method was used for evaluating the models, and the results obtained from the techniques were compared using predictive performance and prediction costs. The cost was computed using a combination of the loss matrix and the confusion matrix. The prediction models show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families, and the logistics industry will save more than twice what it is currently spending.
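The cost computation from a loss matrix and a confusion matrix can be sketched as follows. The two-class matrices below (e.g. congested vs free-flow) are hypothetical, not the MTM data.

```python
import numpy as np

# Hypothetical confusion matrix: rows are true classes, columns are
# predicted classes (class 0 = congested, class 1 = free-flow).
confusion = np.array([[80, 20],
                      [10, 90]])

# Hypothetical loss matrix: cost of predicting column j when row i is true.
# Missing congestion (row 0, col 1) is costed more heavily here.
loss = np.array([[0.0, 5.0],
                 [1.0, 0.0]])

# Expected misclassification cost per sample: elementwise product of the
# two matrices, normalized by the total sample count.
expected_cost = (confusion * loss).sum() / confusion.sum()
```

Comparing models by `expected_cost` rather than raw accuracy lets asymmetric consequences (a missed congestion warning vs a false alarm) drive model selection.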
Abstract: This paper discusses the applicability of a numerical model for damage prediction of an accidental hydrogen explosion occurring in a hydrogen facility. The numerical model was based on an unstructured finite volume method (FVM) code, "NuFD/FrontFlowRed". For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous hydrogen explosion tests. The first was an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The second was a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side. The test was performed with ignition at the center of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18% vol. The results of the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.
Abstract: This paper outlines the development of an
experimental technique in quantifying supersonic jet flows, in an
attempt to avoid seeding particle problems frequently associated with
particle-image velocimetry (PIV) techniques at high Mach numbers.
Based on optical flow algorithms, the idea behind the technique
involves using high speed cameras to capture Schlieren images of the
supersonic jet shear layers, before they are subjected to an adapted
optical flow algorithm based on the Horn-Schunck method to
determine the associated flow fields. The proposed method is capable
of offering full-field unsteady flow information with potentially
higher accuracy and resolution than existing point-measurements or
PIV techniques. Preliminary study via numerical simulations of a
circular de Laval jet nozzle successfully reveals flow and shock
structures typically associated with supersonic jet flows, which serve
as useful data for subsequent validation of the optical flow based
experimental results. For experimental technique, a Z-type Schlieren
setup is proposed with supersonic jet operated in cold mode,
stagnation pressure of 4 bar and exit Mach number of 1.5. High-speed
single-frame or double-frame cameras are used to capture successive
Schlieren images. As implementation of optical flow technique to
supersonic flows remains rare, the current focus revolves around
methodology validation through synthetic images. The results of the
validation tests offer valuable insight into how the optical flow
algorithm can be further refined to improve robustness and
accuracy. Despite these challenges, this supersonic flow
measurement technique may potentially offer a simpler way to
identify and quantify the fine spatial structures within the shock shear
layer.
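A minimal Horn-Schunck solver, the method on which the adapted algorithm above is based, can be written in a few lines of NumPy. This is a textbook sketch with simple derivative estimates, not the adapted algorithm itself.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense optical flow (u, v) between two grayscale frames via the
    classic Horn-Schunck iteration."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = np.gradient(im1, axis=1)   # spatial derivatives
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1                  # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def local_avg(f):
        # 4-neighbour average used by the smoothness term (periodic borders).
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = local_avg(u), local_avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```

On a synthetic pair where the second frame is the first shifted by one pixel, the recovered `u` field should point in the shift direction — exactly the synthetic-image validation strategy the abstract describes.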
Abstract: A model to predict the plastic zone size for material
under plane stress condition has been developed and verified
experimentally. The developed model is a function of crack size,
crack angle and material property (dislocation density). Simulation
and validation results show that the developed model is in good
agreement with experimental results. Samples of low carbon steel
(0.035% C) with included surface crack angles of 45°, 50°, 60°, 70°
and 90° and crack depths of 2 mm and 4 mm were subjected to low
strain rates between 0.48 × 10⁻³ s⁻¹ and 2.38 × 10⁻³ s⁻¹. The mechanical
properties studied were ductility, tensile strength, modulus of
elasticity, yield strength, yield strain, stress at fracture and fracture
toughness. The experimental study shows that strain rate has no
appreciable effect on the size of the plastic zone, while crack depth and
crack angle play an imperative role in determining the size of the
plastic zone of mild steel materials.
Abstract: High Performance Liquid Chromatography (HPLC)
method was developed and validated for simultaneous estimation of
6-Gingerol(6G) and 6-Shogaol(6S) in joint pain relief gel containing
ginger extract. The chromatographic separation was achieved by
using a C18 column (150 × 4.6 mm i.d., 5 μm, Luna) with a mobile phase
containing acetonitrile and water (gradient elution). The flow rate
was 1.0 ml/min and the absorbance was monitored at 282 nm. The
proposed method was validated in terms of the analytical parameters
such as specificity, accuracy, precision, linearity, range, limit of
detection (LOD) and limit of quantification (LOQ), as determined
according to the International Conference on Harmonization (ICH)
guidelines. The linearity ranges of 6G and 6S were obtained over 20-
60 and 6-18 μg/ml respectively. Good linearity was observed over the
above-mentioned ranges, with linear regression equations Y = 11016x −
23778 for 6G and Y = 19276x − 19604 for 6S (x is the concentration of
analytes in μg/ml and Y is peak area). The value of correlation
coefficient was found to be 0.9994 for both markers. The limit of
detection (LOD) and limit of quantification (LOQ) for 6G were
0.8567 and 2.8555 μg/ml and for 6S were 0.3672 and 1.2238 μg/ml
respectively. The recovery ranges for 6G and 6S were found to be
91.57 to 102.36% and 84.73 to 92.85% across all three spiked levels.
The RSD values from repeated extractions for 6G and 6S were 3.43
and 3.09% respectively. The validation of developed method on
precision, accuracy, specificity, linearity, and range were also
performed with well-accepted results.
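The back-calculation from the reported 6G calibration line, together with the ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S, can be sketched as follows. The residual standard deviation `sigma` below is an illustrative value we chose, not a figure from the paper.

```python
# Regression line for 6-gingerol reported in the abstract: Y = 11016x - 23778
slope, intercept = 11016.0, -23778.0

def conc_from_area(peak_area):
    """Back-calculate analyte concentration (ug/ml) from peak area
    via the calibration line Y = slope*x + intercept."""
    return (peak_area - intercept) / slope

# ICH Q2(R1) formulas: LOD = 3.3*sigma/S, LOQ = 10*sigma/S, where sigma is
# the residual standard deviation of the regression and S is the slope.
sigma = 2860.0   # illustrative value, not taken from the paper
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
```

With this illustrative `sigma`, the computed LOD happens to land near the reported 0.8567 μg/ml; in practice σ would come from the calibration residuals or from blank measurements.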
Abstract: The purpose of the study was to examine lifelong
education for teachers as a tool for achieving effective teaching and
learning. Lifelong education enhances social inclusion, personal
development, citizenship, employability, teaching and learning,
community and the nation. It is imperative that teachers update their
knowledge regularly in order to perform optimally, since they occupy
a major position in the inculcation of desirable qualities in
students; the challenges of lifelong education were also
discussed. A descriptive survey design was adopted for the study. A
simple random sampling technique was used to select 80 teachers as
sample from a population of 105 senior secondary school teachers in
Makurdi Local Government Area of Benue State. A 20-item self-
designed questionnaire, subjected to expert validation and reliability
testing, was used to collect data; a reliability coefficient of 0.87 was
established using Cronbach's Alpha technique. Mean scores and
standard deviations were used to answer the 2 research questions,
while chi-square was used to analyse data for the 2 null hypotheses,
which stated that lifelong education for teachers is not a significant
tool for achieving effective teaching and that lifelong education for
teachers does not significantly impact effective learning. The
findings of the study revealed that, lifelong education for teachers can
be used as a tool for achieving effective teaching and learning, and
the study recommended among others that government, organizations
and individuals should in collaboration put lifelong education
programmes for teachers on the priority list. The paper concluded
that the strategic position of lifelong education for teachers towards
enhanced teaching, learning and the production of quality manpower
in society makes it imperative for all hands to be on deck to
support the programme financially and otherwise.
Abstract: Presently various computational techniques are used
in modeling and analyzing environmental engineering data. In the
present study, an intra-comparison of polynomial and radial basis
kernel functions based on Support Vector Regression and, in turn, an
inter-comparison with Multi Linear Regression has been attempted in
modeling the mass transfer capacity of vertical (θ = 90°) and inclined
multiple plunging jets (varying from 1 to 16 in number). The data set
used in this study consists of four input parameters with a total of
eighty-eight cases, forty-four each for vertical and inclined multiple
plunging jets. For testing, tenfold cross-validation was used.
Correlation coefficient values of 0.971 and 0.981 along with
corresponding root mean square error values of 0.0025 and 0.0020
were achieved by using polynomial and radial basis kernel functions
based Support Vector Regression respectively. An intra-comparison
suggests improved performance by radial basis function in
comparison to polynomial kernel based Support Vector Regression.
Further, an inter-comparison with Multi Linear Regression
(correlation coefficient = 0.973 and root mean square error = 0.0024)
reveals that radial basis kernel functions based Support Vector
Regression performs better in modeling and estimating mass transfer
by multiple plunging jets.
Abstract: The aim of this study was to determine the factor
structure and psychometric properties (i.e., reliability and convergent
validity) of the Employee Trust Scale, a newly created instrument by
the researchers. The Employee Trust Scale initially contained 82
items to measure employees’ trust toward their supervisors. A sample
of 818 (343 females, 449 males) employees were selected randomly
from public and private organization sectors in Kota Kinabalu,
Sabah, Malaysia. Their ages ranged from 19 to 67 years old with a
mean of 34.55 years old. Their average tenure with their current
employer was 11.2 years (s.d. = 7.5 years). The respondents were
asked to complete the Employee Trust Scale, as well as a managerial
trust questionnaire from Mishra. The exploratory factor analysis on
employees’ trust toward their supervisors extracted three factors,
labeled ‘trustworthiness’ (32 items), ‘position status’ (11 items) and
‘relationship’ (6 items) which accounted for 62.49% of the total
variance. The trustworthiness factor was re-categorized into three sub-
factors: competency (11 items), benevolence (8 items) and integrity
(13 items). All factors and sub-factors of the scale demonstrated
clear reliability with internal consistency of Cronbach’s Alpha above
.85. The convergent validity of the Scale was supported by an
expected pattern of correlations (positive and significant correlation)
between the score of all factors and sub factors of the scale and the
score on the managerial trust questionnaire, which measured the same
construct. The convergent validity of Employee Trust Scale was
further supported by the significant and positive inter-correlation
between the factors and sub factors of the scale. The results suggest
that the Employee Trust Scale is a reliable and valid measure.
However, further studies need to be carried out with other sample
groups to further validate the Scale.
Abstract: The work reported through this paper is an
experimental work conducted on High Performance Concrete (HPC)
with superplasticizer, with the aim of developing models suitable
for prediction of compressive strength of HPC mixes. In this study,
the effect of varying proportions of fly ash (0% to 50% @ 10%
increment) on compressive strength of high performance concrete has
been evaluated. The mix designs studied were M30, M40 and M50 to
compare the effect of fly ash addition on the properties of these
concrete mixes. In all, eighteen concrete mixes were designed: three
were conventional concretes for the three grades under
discussion, and fifteen were HPC mixes with varying
percentages of fly ash. The concrete mix design was done in
accordance with Indian standard recommended guidelines. All the
concrete mixes have been studied in terms of compressive strength at
7 days, 28 days, 90 days, and 365 days. All the materials used have
been kept the same throughout the study to allow a direct comparison of
results. The models for compressive strength prediction
have been developed using Linear Regression method (LR), Artificial
Neural Network (ANN) and Leave-One-Out Validation (LOOV)
methods.
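The leave-one-out scheme named above pairs naturally with the linear regression model: each mix is predicted by a model fitted on all the other mixes. A sketch on synthetic data (the actual mix proportions and strengths are not reproduced here):

```python
import numpy as np

def loo_validation(X, y):
    """Leave-one-out validation of an ordinary least squares model:
    fit on all rows except one, predict the held-out row, repeat."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])   # add an intercept column
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
        preds[i] = Xd[i] @ beta
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    return preds, rmse
```

With n mixes this costs n fits, which is cheap for eighteen mixes and gives an almost-unbiased estimate of out-of-sample prediction error.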
Abstract: Due to the interference effects, the intrinsic
aerodynamic parameters obtained from the individual component
testing are always fundamentally different from those obtained for
complete model testing. Consideration and limitation for such testing
need to be taken into account in any design work related to the
component buildup method. In this paper, the scaled model of a
straight rectangular canard of a hybrid buoyant aircraft is tested at 50
m/s in IIUM-LSWT (Low Speed Wind Tunnel). Model and its
attachment with the balance are kept rigid to have results free from
the aeroelastic distortion. Based on the velocity profile of the test
section’s floor, the height of the model is kept equal to the
corresponding boundary layer displacement. Balance measurements
provide valuable but limited information of overall aerodynamic
behavior of the model. Zero lift coefficient is obtained at -2.2° and
the corresponding drag coefficient was found to be less than that at
zero angle of attack. As part of the validation of a low fidelity tool,
the lift coefficient plot was verified against the experimental data;
apart from the zero-lift coefficient, the tool under-
predicted the lift coefficient overall. Based on this comparative study, a
correction factor of 1.36 is proposed for lift curve slope obtained
from the panel method.
Abstract: This paper presents the modeling approach in SBO
sequence for VVER 1000 reactors and describes the reactor core
behavior at late in-vessel phase in case of late reflooding by HPIS
and gives preliminary results for the ASTECv2 validation. The work
is focused on investigation of plant behavior during total loss of
power and the operator actions. The main goal of these analyses is to
assess the phenomena arising during the Station blackout (SBO)
followed by primary side high pressure injection system (HPIS)
reflooding of already damaged reactor core at very late “in-vessel”
phase. The purpose of the analyses is to determine how late HPIS
actuation can delay the time of vessel failure or possibly avoid
vessel failure. The times for HPP injection were chosen based on
previously performed investigations.
Abstract: In the automotive industry, sliding door systems, which also
serve as body closures, are safety-critical members. Extreme product
tests are performed to prevent failures during the design process, but
carrying out these tests experimentally results in high costs. Finite
element analysis is an effective tool for the design process. These
analyses are used before prototype production to validate the design
against customer requirements, saving a substantial amount of time
and cost. A finite element model is created for geometries designed in
3D CAD programs. Different element types, such as bar, shell and solid,
can be used to create the mesh model. A cheaper model can be created
through the choice of element type, but the combination of element
types used in the model, the number and geometry of the elements and
the degrees of freedom all affect the analysis results. The sliding door
system is a good example for applying these methods in this study.
Structural analysis of the sliding door mechanism was performed using
FE models. In addition, physical tests with the same boundary
conditions as the FE models were carried out. The element types were
compared against the test and analysis results, and the optimum
combination was identified.
Abstract: The web services applications for digital reference
service (WSDRS) model of LIS is an informal model that claims to
reduce the problems of digital reference services in libraries. It uses
web services technology to provide an efficient way of satisfying users’
needs in the reference section of libraries. The formal WSDRS model
consists of the Z specifications of all the informal specifications of
the model. This paper discusses the formal validation of the Z
specifications of WSDRS model. The authors formally verify and
thus validate the properties of the model using Z/EVES theorem
prover.
Abstract: Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammonical nitrogen under anaerobic conditions with nitrite as an electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of the Anammox Hybrid Reactor (AHR), which combines the dual advantages of suspended and attached growth media, for biodegradation of ammonical nitrogen in wastewater. The experimental unit consisted of four 5 L capacity AHRs inoculated with mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at an HRT (hydraulic retention time) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo steady state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25-3.0 d with NLR increasing from 0.4 to 4.8 kg N/m³·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. Filter media in the AHR contributed an additional 27.2% ammonium removal along with a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d⁻¹.
Model validation revealed that the Grau second-order model was more precise and predicted effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R²=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2-1.5 μm. Owing to enhanced NRE coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
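The reported first-order rate constant (k = 13.0 d⁻¹) can be turned into an effluent prediction once a reactor idealization is assumed; the completely mixed form below is our assumption for illustration, not something stated in the abstract.

```python
def first_order_effluent(Si, k, hrt):
    """Effluent substrate concentration for a completely mixed reactor with
    first-order removal: Se = Si / (1 + k*HRT). The completely mixed
    idealization is an assumption of this sketch; the abstract reports
    only the rate constant k = 13.0 d^-1."""
    return Si / (1.0 + k * hrt)

Si, k, hrt = 1200.0, 13.0, 1.0   # mg/L, d^-1, d (values from the abstract)
Se = first_order_effluent(Si, k, hrt)
removal = 100.0 * (1 - Se / Si)  # predicted percentage removal
```

The predicted removal under this idealization can then be set against the observed 95.1% to judge how well the simple first-order description captures the reactor, which is the role the kinetic models play in the study.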
Abstract: The aim of the current work was to employ the finite
element method to model a slab, with a small hole across its width,
undergoing plastic plane strain deformation. The computational
model had, however, to be validated by comparing its results with
those obtained experimentally. Since they were in good agreement,
the finite element method can therefore be considered a reliable tool
that can help gain better understanding of the mechanism of ductile
failure in structural members having stress raisers. The finite element
software used was ANSYS, and the PLANE183 element was utilized.
It is a higher order 2-D, 8-node or 6-node element with quadratic
displacement behavior. A bilinear stress-strain relationship was used
to define the material properties, with constants similar to those of the
material used in the experimental study. The model was run for
several tensile loads in order to observe the progression of the plastic
deformation region, and the stress concentration factor was
determined in each case. The experimental study involved employing the visioplasticity
technique, where a circular mesh (each circle was 0.5 mm in
diameter, with 0.05 mm line thickness) was initially printed on the
side of an aluminum slab having a small hole across its width.
Tensile loading was then applied to produce a small increment of
plastic deformation. Circles in the plastic region became ellipses,
where the directions of the principal strains and stresses coincided
with the major and minor axes of the ellipses. Next, we were able to
determine the directions of the maximum and minimum shear
stresses at the center of each ellipse, and the slip-line field was then
constructed. We were then able to determine the stress at any point in
the plastic deformation zone, and hence the stress concentration
factor. The experimental results were found to be in good agreement
with the analytical ones.
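For context, the classical analytical benchmark for a hole in a plate is the Kirsch solution, whose stress concentration factor for a small circular hole in an infinite plate under uniaxial tension is 3; the finite slab studied above will deviate from this value, which is part of what the comparison probes. A quick check of the infinite-plate limit:

```python
import numpy as np

def hoop_stress_at_hole(theta, sigma_inf=1.0):
    """Kirsch solution: hoop stress on the boundary of a circular hole in an
    infinite plate under remote uniaxial tension sigma_inf, as a function of
    the angle theta from the loading axis."""
    return sigma_inf * (1.0 - 2.0 * np.cos(2.0 * theta))

theta = np.linspace(0.0, np.pi, 181)
scf = hoop_stress_at_hole(theta).max()   # classical value: 3 at theta = 90 deg
```

The maximum occurs at the sides of the hole (θ = ±90°), while at the poles (θ = 0°) the hoop stress is compressive, equal to −σ∞.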
Abstract: In order to help the expert to validate association rules
extracted from data, some quality measures are proposed in the
literature. We distinguish two categories: objective and subjective
measures. The first depends on a fixed threshold and on the quality
of the data from which the rules are extracted. The second
provides the expert with tools to explore and
visualize rules during the evaluation step. However, the number of
extracted rules to validate remains high, so validating rules
manually is a very hard task. To solve this problem, we propose, in this
paper, a semi-automatic method to assist the expert during the
association rule validation. Our method uses rule-based
classification as follows: (i) we transform association rules into
classification rules (classifiers); (ii) we use the generated classifiers
for data classification; (iii) we visualize association rules with their
classification quality to give the expert an overview and to assist him
during the validation process.
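Steps (i) and (ii) can be sketched as a one-rule classifier: an association rule "antecedent ⇒ consequent" predicts the consequent class whenever all antecedent items are present in a record. The rule, items and class label below are hypothetical.

```python
def rule_to_classifier(antecedent, consequent):
    """Turn an association rule 'antecedent => consequent' into a classifier
    that predicts the consequent class when every antecedent item is present
    in a record, and abstains (returns None) otherwise."""
    def classify(record):
        return consequent if antecedent.issubset(record) else None
    return classify

# Hypothetical rule: {bread, butter} => buys_milk
classify = rule_to_classifier({"bread", "butter"}, "buys_milk")
```

Applying such classifiers to the data and tabulating how often their predictions are correct yields the classification quality that step (iii) visualizes for the expert.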
Abstract: In this paper, we provide a literature survey on the
artificial stock market (ASM) problem. The paper begins by exploring the
complexity of the stock market and the need for ASMs. An ASM
aims to investigate the link between individual behaviors (micro
level) and financial market dynamics (macro level). The variety of
patterns at the macro level is a function of the ASM's complexity. The
financial market system is a complex system where the relationship
between the micro and macro level cannot be captured analytically.
Computational approaches, such as simulation, are expected to
capture this connection. Agent-based simulation is a simulation
technique commonly used to build ASMs. The paper proceeds by
discussing the components of an ASM. We consider the role
of behavioral finance (BF) alongside the traditionally risk-averse
assumption in the construction of agents' attributes. Also, the
influence of social networks on the development of agent interactions is
addressed. Network topologies such as small-world, distance-based,
and scale-free networks may be utilized to outline economic
collaborations. In addition, the primary methods for developing
agents' learning and adaptive abilities are summarized.
These include approaches such as Genetic Algorithms, Genetic
Programming, Artificial Neural Networks and Reinforcement Learning.
Furthermore, the most common statistical properties (the stylized facts)
of stock markets that are used for calibration and validation of ASMs are
discussed. We also review the major related previous
studies and categorize the approaches they employ. Finally, research
directions and potential research questions
are discussed. Research directions for ASMs may focus on the macro
level by analyzing the market dynamic or on the micro level by
investigating the wealth distributions of the agents.