Abstract: Electrochemical Discharge Machining (ECDM) is an emerging hybrid machining process used in the precision machining of hard and brittle non-conducting materials. The present paper gives a critical review of materials machined by ECDM under the prevailing machining conditions; capability indicators of the process are reported. Some results obtained while performing micro-channeling experiments on soda-lime glass using ECDM are also presented. In these experiments, Tool Wear (TW) and Material Removal (MR) were studied using design of experiments with an L4 orthogonal array. Experimental results showed that the applied voltage was the most influential parameter in both the MR and TW studies. Field emission scanning electron microscopy (FESEM) results obtained on the microchannels confirmed the presence of micro-cracks, primarily responsible for MR. Chemical etching was also seen along the edges. Energy dispersive spectroscopy (EDS) results were used to detect the elements present in the debris and specimens.
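The main-effects analysis behind an L4 orthogonal array study can be sketched as follows. This is a minimal illustration of the computation only; the three factors and all response values below are hypothetical, not the experimental data of the study.

```python
# Illustrative main-effects analysis for a Taguchi L4 orthogonal array.
# Standard L4 array: 3 two-level factors, 4 runs (levels coded 0 and 1).
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(responses):
    """Mean response at each level of each factor; the effect size is the
    absolute difference between the two level means, used to rank factors."""
    effects = []
    for factor in range(3):
        level_means = []
        for level in (0, 1):
            vals = [r for run, r in zip(L4, responses) if run[factor] == level]
            level_means.append(sum(vals) / len(vals))
        effects.append(abs(level_means[1] - level_means[0]))
    return effects

# Hypothetical material-removal responses for the four runs:
mr = [12.0, 15.0, 21.0, 26.0]
effects = main_effects(mr)
ranking = sorted(range(3), key=lambda f: -effects[f])
print(effects, ranking)  # the factor with the largest effect dominates
```

With these invented responses, factor 0 (which could stand for applied voltage) has the largest effect and so would be ranked as the most influential parameter.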
Abstract: Dengue fever is prevalent in Malaysia with numerous
cases including mortality recorded over the years. Public education
on the prevention of the disease through various means has been carried out, besides the enforcement of legal measures to eradicate the breeding grounds of Aedes mosquitoes, the dengue vector. Hence, other
means need to be explored, such as predicting the seasonal peak
period of the dengue outbreak and identifying related climate factors
contributing to the increase in the number of mosquitoes. A simulation model can be employed for this purpose. In this study, we created a system dynamics simulation to predict the spread of dengue outbreaks in Hulu Langat, Selangor, Malaysia. The prototype was developed using the STELLA 9.1.2 software. The main data inputs are rainfall, temperature and dengue cases. Analysis of the resulting graphs showed that dengue cases can be predicted accurately using these two main variables: rainfall and temperature. However, the model
will be further tested over a longer time period to ensure its
accuracy, reliability and efficiency as a prediction tool for dengue
outbreak.
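The stock-and-flow structure of such a system dynamics model can be sketched as below. All coefficients, initial stocks and input series are hypothetical illustrations chosen only to show the mechanism; they are not the calibrated STELLA model from the study.

```python
# A minimal stock-and-flow sketch in the spirit of a system dynamics model:
# rainfall and temperature modulate the mosquito population, which drives
# new dengue cases.

def simulate(rainfall, temperature, weeks, dt=1.0):
    mosquitoes = 1000.0    # stock: vector population
    susceptible = 50000.0  # stock: susceptible humans
    infected = 10.0        # stock: current dengue cases
    history = []
    for t in range(weeks):
        # Flows: breeding rises with rainfall; survival improves with warmth.
        breeding = 0.001 * rainfall[t] * mosquitoes
        death = (0.1 - 0.001 * (temperature[t] - 26.0)) * mosquitoes
        bite_rate = 1e-8 * mosquitoes
        new_cases = bite_rate * susceptible * infected
        recovery = infected / 2.0  # roughly a two-week infectious period
        # Euler update of the stocks.
        mosquitoes += dt * (breeding - death)
        susceptible -= dt * new_cases
        infected += dt * (new_cases - recovery)
        history.append(infected)
    return history

rain = [120.0] * 20  # mm per week (hypothetical wet season)
temp = [28.0] * 20   # degrees Celsius
cases = simulate(rain, temp, 20)
print(round(cases[-1], 1))
```

Under this sustained wet-and-warm input, the mosquito stock grows and predicted weekly cases rise accordingly, which is the qualitative behavior the model is meant to capture.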
Abstract: This study presents a mathematical modeling approach to the planning of HIV therapies on an individual basis. The model replicates clinical data from typical progressors to AIDS for all stages of the disease with good agreement. Clinical data from rapid progressors and long-term non-progressors are also matched by estimation of immune system parameters only. The ability of the model to reproduce these phenomena validates the formulation, a fact which is exploited in the investigation of effective therapies. The therapy investigation suggests that, unlike continuous therapy, structured treatment interruptions (STIs) are able to control the increase in both the drug-sensitive and drug-resistant virus populations and, hence, prevent the ultimate progression from HIV to AIDS. The optimization results further suggest that even patients characterised by the same progression type can respond very differently to the same treatment and that the latter should be designed on a case-by-case basis. Such a methodology is presented here.
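The kind of within-host viral dynamics model that such therapy-planning studies build on can be sketched as below: target cells T, infected cells I and free virus V, integrated with Euler steps. All parameter values here are hypothetical round numbers for illustration, not the calibrated model of the paper.

```python
# Basic target-cell-limited HIV dynamics sketch (T, I, V compartments).

def simulate(days, dt=0.01):
    s, d = 100.0, 0.1   # target-cell production and natural death rates
    k = 0.00027         # infection rate
    delta = 0.3         # infected-cell death rate
    p, c = 10.0, 3.0    # virion production and clearance rates
    T, I, V = 1000.0, 0.0, 1.0
    for _ in range(int(days / dt)):
        dT = s - d * T - k * V * T
        dI = k * V * T - delta * I
        dV = p * I - c * V
        T += dt * dT
        I += dt * dI
        V += dt * dV
    return T, I, V

T_end, I_end, V_end = simulate(30.0)
print(round(T_end, 1), round(V_end, 1))
```

With these numbers the basic reproductive number s*k*p/(d*delta*c) equals 3, so the infection establishes itself: the virus grows from its small inoculum and the target-cell count falls, mimicking untreated early infection.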
Abstract: In this paper, we investigate the appearance of the giant component in random subgraphs G(p) of a given large finite graph family Gn = (Vn, En), in which each edge is present independently with probability p. We show that if the graph Gn satisfies a weak isoperimetric inequality and has bounded degree, then the probability p at which G(p) has a giant component of linear order with some constant probability is bounded away from zero and one. In addition, we prove that the probability of an abnormally large order of the giant component decays exponentially. When a contact graph is modeled as Gn, our result is of special interest in the study of the spread of infectious diseases or the identification of communities in various social networks.
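The bond-percolation setting of the abstract can be simulated directly: keep each edge of a base graph independently with probability p and measure the largest connected component. The base graph below is a simple two-dimensional grid, chosen only as an easy example of a bounded-degree family, not a graph from the paper.

```python
import random

def grid_edges(n):
    """Edges of an n x n grid graph; vertices are (row, col) pairs."""
    edges = []
    for r in range(n):
        for c in range(n):
            if r + 1 < n:
                edges.append(((r, c), (r + 1, c)))
            if c + 1 < n:
                edges.append(((r, c), (r, c + 1)))
    return edges

def largest_component(n, p, rng):
    """Size of the largest component of G(p), via union-find."""
    parent = {(r, c): (r, c) for r in range(n) for c in range(n)}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in grid_edges(n):
        if rng.random() < p:  # edge survives independently with probability p
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    sizes = {}
    for v in parent:
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

rng = random.Random(0)
n = 40
# Above the percolation threshold a component of linear order appears;
# well below it, all components stay small.
big = largest_component(n, 0.9, rng)
small = largest_component(n, 0.1, rng)
print(big, small)
```

Sweeping p between these extremes exhibits the threshold behavior the theorem bounds away from zero and one.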
Abstract: Rapid advancement in computing technology is bringing computers and humans toward seamless integration in the future. The emergence of the smartphone has driven the computing era towards ubiquitous and pervasive computing. Recognizing human activity has garnered a lot of interest and has raised significant research concerns in identifying contextual information useful for human activity recognition. Besides being unobtrusive to users in daily life, smartphones have embedded built-in sensors capable of sensing contextual information about their users, supported by a wide range of network connections. In this paper, we discuss the classification algorithms used in smartphone-based human activity recognition. Existing technologies pertaining to smartphone-based research on human activity recognition are highlighted and discussed. Our paper also presents our findings and opinions to formulate improvement ideas for current research trends. Understanding research trends will enable researchers to have a clearer research direction and a common vision of the latest work in smartphone-based human activity recognition.
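The common pipeline in such systems can be sketched as follows: extract simple statistical features from accelerometer windows, then classify with a nearest-centroid rule. The signal windows below are synthetic stand-ins for illustration; real systems use labeled sensor recordings and richer classifiers (k-NN, decision trees, SVMs).

```python
import math

def features(window):
    """Mean and standard deviation of one accelerometer-magnitude window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, math.sqrt(var))

def train_centroids(labeled_windows):
    """Average feature vector per activity label."""
    sums, counts = {}, {}
    for label, window in labeled_windows:
        f = features(window)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, x in enumerate(f):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(x / counts[lab] for x in s) for lab, s in sums.items()}

def classify(window, centroids):
    """Assign the label of the nearest centroid in feature space."""
    f = features(window)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

# Synthetic training data: "still" windows are flat near 1 g, while
# "walking" windows oscillate with much larger variance.
still = [[1.0 + 0.01 * (i % 2) for i in range(50)] for _ in range(5)]
walking = [[1.0 + 0.5 * math.sin(i / 2.0) for i in range(50)] for _ in range(5)]
data = [("still", w) for w in still] + [("walking", w) for w in walking]

centroids = train_centroids(data)
print(classify([1.0 + 0.45 * math.sin(i / 2.0) for i in range(50)], centroids))
```

The variance feature alone separates these two synthetic activities; in practice many more features (frequency-domain, correlation between axes) are used.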
Abstract: This paper considers the development of a two-point
predictor-corrector block method for solving delay differential
equations. The formulae are represented in divided difference form
and the algorithm is implemented using a variable stepsize, variable order
technique. The block method produces two new values at a single
integration step. Numerical results are compared with existing
methods and it is evident that the block method performs very well.
Stability regions of the block method are also investigated.
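The basic predict-correct idea for delay differential equations can be illustrated on a simple case. This is a one-step Euler/trapezoidal pair on y'(t) = -y(t - 1) with y(t) = 1 for t <= 0, not the paper's two-point block method in divided-difference form; it shows only the predictor-corrector loop together with interpolation of the delayed solution value.

```python
def solve_dde(t_end, h):
    ts, ys = [0.0], [1.0]

    def y_at(t):
        """Delayed value: history for t <= 0, else linear interpolation."""
        if t <= 0.0:
            return 1.0
        i = min(int(t / h), len(ys) - 2)
        frac = (t - ts[i]) / h
        return ys[i] + frac * (ys[i + 1] - ys[i])

    def f(t):
        return -y_at(t - 1.0)

    t, y = 0.0, 1.0
    while t < t_end - 1e-12:
        y_pred = y + h * f(t)  # predictor: explicit Euler
        ts.append(t + h)
        ys.append(y_pred)
        y = y + 0.5 * h * (f(t) + f(t + h))  # corrector: trapezoidal rule
        ys[-1] = y  # overwrite the predicted value with the corrected one
        t += h
    return ts, ys

ts, ys = solve_dde(2.0, 0.1)
print(round(ys[-1], 4))
```

For this equation the exact solution is y(t) = 1 - t on [0, 1] and y(2) = -0.5, which the corrector reproduces since the integrand is piecewise linear; a block method would instead produce two new values per integration step.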
Abstract: This paper discusses an active power generator scheduling method intended to raise the steady-state stability limit of power systems. Several power generator optimization methods, namely Lagrange, PLN (Indonesian electricity company) operation, and the proposed Z-Thevenin-based method, are studied and compared with respect to their steady-state aspects. The method proposed in this paper is built upon the Thevenin equivalent impedance values between each load and each generator. The steady-state stability index is obtained with the REI-DIMO method. This research reviews the 500 kV Jawa-Bali interconnection system. The simulation results show that the proposed method has the highest steady-state stability limit compared to the other optimization techniques, namely Lagrange and PLN operation. Thus, the proposed method can be used to raise the steady-state stability limit of the system, especially under peak load conditions.
Abstract: Although backpropagation ANNs generally predict
better than decision trees do for pattern classification problems, they
are often regarded as black boxes, i.e., their predictions cannot be
explained as those of decision trees. In many applications, it is
desirable to extract knowledge from trained ANNs for the users to
gain a better understanding of how the networks solve the problems.
A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract
symbolic rules from ANNs. A standard three-layer feedforward ANN
is the basis of the algorithm. A four-phase training algorithm is
proposed for backpropagation learning. Explicitness of the extracted
rules is supported by comparing them to the symbolic rules generated
by other methods. Extracted rules are comparable with other methods
in terms of number of rules, average number of conditions for a rule,
and predictive accuracy. Extensive experimental studies on several
benchmark classification problems, such as breast cancer, iris,
diabetes, and season classification problems, demonstrate the
effectiveness of the proposed approach with good generalization
ability.
Abstract: The feature extraction method(s) used to recognize
hand-printed characters play an important role in ICR applications.
In order to achieve a high recognition rate for a recognition system, the choice of a feature that suits the given script is certainly an important task. Even if a new feature needs to be designed for a given script, it is essential to know the recognition ability of the existing features for that script. The Devanagari script is used in various Indian languages besides Hindi, the mother tongue of the majority of Indians. This research examines a variety of feature extraction approaches, which have been used in various ICR/OCR applications, in the context of hand-printed Devanagari script. The study is conducted theoretically and experimentally on more than 10 feature extraction
methods. The various feature extraction methods have been evaluated
on Devanagari hand-printed database comprising more than 25000
characters belonging to 43 alphabets. The recognition ability of the
features has been evaluated using three classifiers, namely k-NN, MLP and SVM.
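One classical feature extraction method used in ICR/OCR work, zoning, can be sketched together with a k-NN classifier as below. The 8x8 binary "character" images are synthetic toys (a vertical bar vs. a horizontal bar), not samples from the Devanagari hand-printed database used in the study.

```python
def zoning_features(image, zones=4):
    """Split a square binary image into zones x zones cells and return
    the fraction of foreground pixels in each cell."""
    n = len(image)
    cell = n // zones
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            total = sum(image[r][c]
                        for r in range(zr * cell, (zr + 1) * cell)
                        for c in range(zc * cell, (zc + 1) * cell))
            feats.append(total / (cell * cell))
    return feats

def knn_classify(query, training, k=1):
    """Majority vote among the k nearest training samples (Euclidean);
    k=1 here for the toy data, larger k takes a broader vote."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(zoning_features(img), query)), label)
        for label, img in training)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Synthetic classes: a one-pixel vertical bar vs. a horizontal bar.
def vbar(col):
    return [[1 if c == col else 0 for c in range(8)] for _ in range(8)]

def hbar(row):
    return [[1 if r == row else 0 for c in range(8)] for r in range(8)]

training = [("vertical", vbar(c)) for c in (1, 3, 5)] + \
           [("horizontal", hbar(r)) for r in (1, 3, 5)]
query = zoning_features(vbar(4))
print(knn_classify(query, training))  # prints "vertical"
```

The same feature vectors could equally be fed to an MLP or SVM, which is exactly the comparison the study performs across feature extraction methods.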
Abstract: As the majority of faults are found in a few modules of a system, there is a need to investigate the modules that are severely affected compared to other modules, and proper maintenance needs to be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are non-linear, sophisticated modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of the inputs and outputs is not known. A key feature is that they learn the relationship between input and output through training. In the present work, various neural network based techniques are explored and a comparative analysis is performed for predicting the level of maintenance required, by predicting the severity level of faults present in NASA's public domain defect dataset. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
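The core of a Generalized Regression Neural Network is simple enough to sketch: the prediction is a Gaussian-kernel weighted average of the training targets. The tiny module-metric dataset below is invented for illustration; the study used NASA's public defect dataset.

```python
import math

def grnn_predict(x, X_train, y_train, sigma=0.2):
    """Nadaraya-Watson style kernel regression estimate used by GRNN."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2.0 * sigma ** 2)) for xi in X_train]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total

# Hypothetical module metrics (e.g. size, complexity) -> fault severity level.
X_train = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
y_train = [1.0, 1.0, 3.0, 3.0]

pred_low = grnn_predict((0.15, 0.15), X_train, y_train)
pred_high = grnn_predict((0.85, 0.85), X_train, y_train)

# Error measures of the kind used for the comparison in the paper.
def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

preds = [grnn_predict(x, X_train, y_train) for x in X_train]
print(round(pred_low, 2), round(pred_high, 2), round(mae(y_train, preds), 3))
```

The smoothing parameter sigma is the only quantity to tune, which is one reason GRNNs train quickly compared to backpropagation networks.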
Abstract: Inter-organizational Workflow (IOW) is commonly
used to support the collaboration between heterogeneous and
distributed business processes of different autonomous organizations
in order to achieve a common goal. E-government is considered as an
application field of IOW. The coordination of the different
organizations is the fundamental problem in IOW and remains the
major cause of failure in e-government projects. In this paper, we
introduce a new coordination model for IOW that improves the
collaboration between government administrations and that respects
IOW requirements applied to e-government. For this purpose, we
adopt a multi-agent approach, which deals more easily with the characteristics of inter-organizational digital government: distribution, heterogeneity and autonomy. Our model also integrates different technologies to deal with semantic and technological interoperability. Moreover, it preserves the existing systems of government administrations by offering a distributed coordination based on interface communication. This is especially applicable in developing countries, where administrations are not necessarily equipped with workflow systems. The use of our coordination techniques allows an easier and lower-cost migration to an e-government solution. To illustrate the applicability of the proposed
model, we present a case study of an identity card creation in Tunisia.
Abstract: One of the long-standing challenges in mobile robotics is the ability to navigate autonomously, avoiding modeled and unmodeled obstacles, especially in crowded and unpredictably changing environments. A successful way of structuring the navigation task to deal with this problem is through behavior-based navigation approaches. In this study, issues of individual behavior design and action coordination of the behaviors are addressed using fuzzy logic. A layered approach is employed in which a supervision layer, based on the context, decides which behavior(s) to process (activate) rather than processing all behaviors and then blending the appropriate ones; as a result, time and computational resources are saved.
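The supervision idea can be sketched with fuzzy memberships: triangular functions grade the obstacle distance, and the supervision layer activates only the most applicable behavior. Membership shapes, thresholds and the steering commands below are illustrative choices, not the controller from the study.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(obstacle_dist):
    return {
        "near": tri(obstacle_dist, -0.5, 0.0, 1.5),
        "far": tri(obstacle_dist, 0.5, 3.0, 100.0),
    }

def supervise(obstacle_dist):
    """Supervision layer: activate the single most applicable behavior
    instead of blending all of them, saving computation."""
    mu = memberships(obstacle_dist)
    return "avoid_obstacle" if mu["near"] >= mu["far"] else "go_to_goal"

def act(behavior, heading_to_goal_deg):
    if behavior == "avoid_obstacle":
        return 45.0               # turn away from the obstacle
    return heading_to_goal_deg    # track the goal heading

for d in (0.3, 5.0):
    b = supervise(d)
    print(d, b, act(b, 10.0))
```

A full behavior-based controller would fuzzify several inputs and defuzzify a blended command; the point here is only the context-dependent activation that the layered architecture exploits.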
Abstract: The recent global financial problem urges governments to play a role in stimulating the economy, given that the private sector has little ability to purchase during a recession. A key question is whether increased government spending crowds out private consumption and whether it helps stimulate the economy. If the government spending policy is effective, private consumption is expected to increase and can compensate for the recent extra government expense. In this study, government spending is categorized into government consumption spending and government capital spending. The study firstly examines consumer consumption in line with the demand function in microeconomic theory. Three categories of private consumption are used in the study: food consumption, non-food consumption, and services consumption. The dynamic Almost Ideal Demand System of the three categories of private consumption is estimated using the Vector Error Correction Mechanism model. The estimated model indicates substituting effects (negative impacts) of government consumption spending on the budget share of private non-food consumption, and of government capital spending on the budget share of private food consumption, respectively. Nevertheless, the result does not necessarily indicate that the negative effects on the budget shares of non-food and food consumption mean a fall in total private consumption. Microeconomic consumer demand analysis clearly indicates changes in the component structure of aggregate expenditure in the economy as a result of the government spending policy. The macroeconomic concept of aggregate demand, comprising consumption, investment, government spending (government consumption spending and government capital spending), exports, and imports, is used to estimate their relationship using the Vector Error Correction Mechanism model. The macroeconomic study found no effect of government capital spending on either private consumption or the growth of GDP, while government consumption spending has a negative effect on the growth of GDP. Therefore, no crowding-out effect of government spending on private consumption is found, but the spending is ineffective and even inefficient, as it is found to reduce GDP growth in the context of Thailand.
Abstract: Large-scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, whose objective is to ensure the availability and the reliability of the data in order to provide fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because it is necessary to maintain consistency between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with three layers and serves a dual purpose: first, it makes it possible to reduce response times compared to a completely pessimistic approach, and second, it improves the quality of service compared to an optimistic approach.
Abstract: This study investigated a strategy of blending lead-laden sludge and Al-rich precursors to reduce the release of metals from the stabilized products. Using PbO as the simulated lead-laden sludge to sinter with γ-Al2O3 at Pb:Al molar ratios of 1:2 and 1:12, PbAl2O4 and PbAl12O19, respectively, were formed as the final products of the sintering process. By firing the PbO + γ-Al2O3 mixtures with different Pb/Al molar ratios at 600 to 1000 °C, the lead transformation was determined through X-ray diffraction (XRD) data. In the Pb/Al molar ratio 1/2 system, the formation of PbAl2O4 is initiated at 700 °C, but effective formation was observed above 750 °C. An intermediate phase, Pb9Al8O21, was detected in the temperature range of 800-900 °C. However, a different incorporation behavior was observed when sintering PbO with Al-rich precursors at a Pb/Al molar ratio of 1/12, during the formation of PbAl12O19 in this system. In the sintering process, the effects of both temperature and time on the formation of the PbAl2O4 and PbAl12O19 phases were evaluated. Finally, a prolonged leaching test modified from the U.S. Environmental Protection Agency's toxicity characteristic leaching procedure (TCLP) was used to evaluate the durability of the PbO, Pb9Al8O21, PbAl2O4 and PbAl12O19 phases. Comparison of the leaching results of the four phases demonstrated the higher intrinsic resistance of PbAl12O19 against acid attack.
Abstract: This paper reports the results of an experimental work
conducted to investigate the effect of curing conditions on the
compressive strength of self-compacting geopolymer concrete
prepared by using fly ash as base material and combination of sodium
hydroxide and sodium silicate as alkaline activator. The experiments
were conducted by varying the curing time and curing temperature in
the range of 24-96 hours and 60-90°C respectively. The essential
workability properties of freshly prepared Self-compacting
Geopolymer concrete such as filling ability, passing ability and
segregation resistance were evaluated by using Slump flow,
V-funnel, L-box and J-ring test methods. The fundamental
requirements of high flowability and resistance to segregation as
specified by guidelines on Self-compacting Concrete by EFNARC
were satisfied. Test results indicate that longer curing time and curing
the concrete specimens at higher temperatures result in higher
compressive strength. Compressive strength increased with curing time; however, the increase after 48 hours was not significant. Concrete specimens cured
at 70°C produced the highest compressive strength as compared to
specimens cured at 60°C, 80°C and 90°C.
Abstract: Fecal coliform bacteria are widely used as indicators of
sewage contamination in surface water. However, these microbial techniques have some disadvantages, including being time consuming (18-48 h) and unable to discriminate between human and animal fecal material sources. Therefore, it is necessary to seek a more specific indicator of human sanitary waste. In this study, the feasibility of applying caffeine and human pharmaceutical compounds to identify human-source contamination was investigated. The
correlation between caffeine and fecal coliform was also explored.
Surface water samples were collected from upstream, middle-stream
and downstream points respectively, along Rochor Canal, as well as 8
locations of Marina Bay. Results indicate that caffeine is a suitable
chemical tracer in Singapore because of its easy detection (in the range
of 0.30-2.0 ng/mL), compared with other chemicals monitored.
Relatively low concentrations of human pharmaceutical compounds (< 0.07 ng/mL) in the Rochor Canal and Marina Bay water samples make them hard to detect and unsuitable as chemical tracers. However, their existence can help to validate sewage contamination. In addition, a high correlation was discovered between caffeine concentration and fecal coliform density in the Rochor Canal water samples, demonstrating that caffeine is highly related to human-source contamination.
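The correlation analysis behind such a finding can be sketched with a plain Pearson coefficient between caffeine concentration and fecal coliform density. The paired measurements below are invented for illustration, not the Rochor Canal data.

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

caffeine = [0.30, 0.55, 0.90, 1.40, 2.00]         # ng/mL (hypothetical)
coliform = [200.0, 450.0, 700.0, 1200.0, 1800.0]  # CFU/100 mL (hypothetical)

r = pearson(caffeine, coliform)
print(round(r, 3))  # close to 1 for strongly correlated series
```

A coefficient near 1 across sampling points is the kind of evidence the study uses to argue that caffeine tracks human-source contamination.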
Abstract: Dual phase steels (DPSs) have a microstructure consisting of a hard second phase, martensite, in a soft ferrite matrix. In recent years, there has been growing interest in dual-phase steels because of their widespread application, particularly in the automotive sector. The composite microstructure of DPSs exhibits interesting characteristic mechanical properties such as continuous yielding, low yield stress to tensile strength ratios (YS/UTS), and relatively high formability, which offer advantages compared with conventional high strength low alloy steels (HSLAS). This research dealt with the characterization of damage in DPSs. In this study, by reviewing the mechanisms of failure due to the volume fraction of the martensite second phase, a new method is introduced for identifying the mechanisms of failure in the various phases of these types of steels. In this method, the acoustic emission (AE) technique was used to detect damage progression. The failure mechanisms consist of ferrite-martensite interface decohesion and/or martensite phase fracture. For this aim, dual phase steels with different volume fractions of the martensite second phase were produced by various heat treatment methods on a low carbon steel (0.1% C), and AE monitoring was then used during tensile tests of these DPSs. From the AE measurements, an energy ratio curve was elaborated from the AE energy values (obtained as the ratio between the strain energy and the acoustic energy), which allows the detection of important events corresponding to sudden load drops. The AE signal events associated with the various failure mechanisms are classified for ferrite and for DPSs with various amounts of martensite volume fraction (Vm) and different martensite morphologies. It is found that the AE energy increases with increasing Vm. This increase in AE energy is due to the greater contribution of martensite fracture to the failure of samples with higher Vm. The final results show a good relationship between the AE signals and the mechanisms of failure.
Abstract: The hot deformation behavior of high strength low alloy (HSLA) steels with different chemical compositions under hot working conditions in the temperature range of 900 to 1100 °C and the strain rate range of 0.1 to 10 s-1 has been studied by performing a series of hot compression tests. The dynamic materials model has been employed for developing the processing maps, which show the variation of the efficiency of power dissipation with temperature and strain rate. Also, Kumar's model has been used for developing the instability map, which shows the variation of the instability for plastic deformation with temperature and strain rate. The efficiency of power dissipation increased with decreasing strain rate and increasing temperature in the steel with higher Cr and Ti content. High efficiency of power dissipation, over 20%, was obtained at a finite strain level of 0.1 under conditions of strain rate lower than 1 s-1 and temperature higher than 1050 °C. Plastic instability was expected in the regime of temperatures lower than 1000 °C and strain rates lower than 0.3 s-1. The steel with lower Cr and Ti contents showed high efficiency of power dissipation at higher strain rate and lower temperature conditions.
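In the dynamic materials model, the efficiency-of-power-dissipation values on a processing map follow from the strain rate sensitivity m via eta = 2m/(m + 1). The flow-stress values below are hypothetical, used only to show the finite-difference estimate of m and the resulting efficiency.

```python
import math

def strain_rate_sensitivity(strain_rates, stresses):
    """m = d(ln sigma)/d(ln strain_rate), estimated between adjacent points
    at a fixed temperature and strain."""
    ms = []
    for i in range(len(stresses) - 1):
        ms.append((math.log(stresses[i + 1]) - math.log(stresses[i]))
                  / (math.log(strain_rates[i + 1])
                     - math.log(strain_rates[i])))
    return ms

def efficiency(m):
    """Efficiency of power dissipation in the dynamic materials model."""
    return 2.0 * m / (m + 1.0)

rates = [0.1, 1.0, 10.0]            # strain rates, s-1
flow_stress = [80.0, 100.0, 125.0]  # MPa (hypothetical flow-stress data)

ms = strain_rate_sensitivity(rates, flow_stress)
effs = [efficiency(m) for m in ms]
print([round(m, 3) for m in ms], [round(e, 3) for e in effs])
```

Repeating this over a grid of temperatures and strain rates, and contouring eta, yields the processing map; regions with eta above roughly 20% are the favorable domains the abstract refers to.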
Abstract: Road signs are elements of the road with a strong influence on driver behavior. For signs to fulfill their function, they must meet visibility and durability requirements, particularly needed at night, when the coefficient of retroreflection becomes a decisive factor in ensuring road safety. Accepting that the visibility of signage has implications for people's safety, we understand the importance of fulfilling its function: to foster the highest standards of service and safety for drivers. The usual conditions of perception of any sign are determined by the age of the driver, the reflective material, luminosity, vehicle speed and emplacement. In this way, this paper evaluates the different signs with a view to increasing road safety.