Abstract: The Malaysia Highway Authority (MHA) was established by the Government in 1980 to design, construct and maintain toll highways in Malaysia, including the North-South Expressway and the Penang Bridge, which were procured through publicly funded traditional procurement. However, following a recession in the mid-1980s, the operation of these tolled highways was privatized so that their operational services could continue through private financing, under long-term concession agreements concluded between the Malaysian Government and private operators. This change in contract strategy for highway projects in Malaysia is likely to expose the key parties involved, particularly the Malaysian Government as project principal, to significant risk unless operational risks are clearly identified and managed through appropriate mitigation measures before a contract is signed.
This research identifies potential operational risks that may occur in highway projects in Malaysia from the perspective of public-sector clients. Since the research focuses on operational risks for Malaysian highway projects, initial results drawn from a literature review of the operational risks of highway projects in several Asian countries are then validated through interviews with a number of key individuals from the MHA. The key operational risks identified as likely to occur in Malaysian highway projects include the initial toll tariff decided by the Government, traffic congestion, changes to the road network and overloaded freight transportation, which can damage the road surface and hence affect the operation of a particular highway.
Abstract: In this paper, a near-lossless image coding scheme based on the Orthogonal Polynomials Transform (OPT) is presented. The polynomial operators and polynomial basis operators for the proposed transform coding are obtained from a set of orthogonal polynomial functions. The image is partitioned into a number of distinct square blocks and the proposed transform coding is applied to each block individually. After the transform is applied, the transformed coefficients are rearranged into a sub-band structure. The Embedded Zerotree (EZ) coding algorithm is then employed to quantize the coefficients. The proposed transform is implemented for various block sizes and its performance is compared with an existing Discrete Cosine Transform (DCT) coding scheme.
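As a rough illustration of block-based transform coding of the kind discussed here, the sketch below applies an orthonormal 8×8 DCT-II (the baseline the paper compares against, not the proposed OPT) block by block. The basis matrix is built directly from the DCT-II definition; the function names and block size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: C[k, j] = s_k * cos(pi*(2j+1)k / (2n))
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def block_transform(image, bsize=8, inverse=False):
    # Apply the 2-D separable DCT (or its inverse) to each bsize x bsize block
    C = dct_matrix(bsize)
    out = np.empty_like(image, dtype=float)
    for i in range(0, image.shape[0], bsize):
        for j in range(0, image.shape[1], bsize):
            blk = image[i:i + bsize, j:j + bsize]
            out[i:i + bsize, j:j + bsize] = (
                C.T @ blk @ C if inverse else C @ blk @ C.T
            )
    return out
```

Because the basis matrix is orthonormal, the inverse pass reconstructs the original blocks exactly; a coder would quantize the forward coefficients in between, which is where the near-lossless behaviour comes from.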
Abstract: The operating control parameters of the injection flushing type of electrical discharge machining (EDM) process on a stainless steel 304 workpiece with copper tools are optimized according to an individual machining characteristic, namely the material removal rate (MRR). A lower MRR during the EDM process decreases machining productivity, so the quality characteristic for MRR is set to higher-the-better to achieve optimum machining productivity. The Taguchi method has been used for the construction, layout and analysis of the experiment for this machining characteristic; its use saves considerable time and cost in preparing and machining the experimental samples. An L18 orthogonal array, a fundamental component in the statistical design of experiments, has been used to plan the experiments, and Analysis of Variance (ANOVA) is used to determine the optimum machining parameters for this characteristic. The control parameters selected for the optimization experiments are polarity, pulse-on duration, discharge current, discharge voltage, machining depth, machining diameter and dielectric liquid pressure. The results show that the higher the discharge voltage, the higher the MRR.
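The higher-the-better characteristic in the Taguchi method is conventionally scored with the signal-to-noise ratio S/N = -10 log10((1/n) Σ 1/y_i²), maximized over parameter levels. A minimal sketch using this standard Taguchi formula, not anything specific to this experiment:

```python
import math

def sn_larger_the_better(responses):
    # Taguchi S/N ratio for a "higher-the-better" characteristic:
    # S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )
    # Larger responses (e.g. MRR readings) give a larger S/N ratio.
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in responses) / n)
```

For each row of the L18 array one would compute this ratio from the repeated MRR measurements and then pick, per factor, the level with the highest mean S/N.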
Abstract: Aim. We have introduced the notion of order to multinormed spaces and countable union spaces and their duals. The topology of bounded convergence is assigned to the dual spaces. The aim of this paper is to develop the theory of the ordered topological linear spaces La,b, L(w, z), the dual spaces of the ordered multinormed spaces La,b and the ordered countable union spaces L(w, z), with the topology of bounded convergence assigned to the dual spaces. We apply the Laplace transformation to the ordered linear space of Laplace transformable generalized functions. We ultimately aim at finding solutions to non-homogeneous nth-order linear differential equations with constant coefficients in terms of generalized functions and at comparing different solutions evolved out of different initial conditions.
Method. The above aim is achieved by
• Defining the spaces La,b, L(w, z).
• Assigning an order relation on these spaces by identifying a positive cone on them and studying the properties of the cone.
• Defining an order relation on the dual spaces La,b, L(w, z) of La,b, L(w, z) and assigning a topology to these dual spaces which makes the order dual and the topological dual the same.
• Defining the adjoint of a continuous map on these spaces and studying its behaviour when the topology of bounded convergence is assigned to the dual spaces.
• Applying the two-sided Laplace Transformation on the ordered linear space of generalized functions W and studying some properties of the transformation which are used in solving differential equations.
Result. The above techniques are applied to solve non-homogeneous nth-order linear differential equations with constant coefficients in terms of generalized functions and to compare different solutions of the differential equation.
Abstract: It is believed that the continuously variable transmission (CVT) will dominate automotive transmissions in the future. The most popular design is Van Doorne's CVT with a single metal pushing V-belt. However, it is only applicable to low-power passenger cars because its major limitation is low torque capacity. Therefore, this research studies a novel dual-belt CVT system to overcome the limitation of the traditional single-belt CVT, so that it can be applied to heavy-duty vehicles. This paper presents the mathematical model of the design and its experimental verification. Experimental and simulated results show that the model developed is valid and that the proposed dual-belt CVT can indeed overcome the traditional limitation of the single-belt Van Doorne's CVT.
Abstract: Overhead conveyor systems are in use in many installations around the world, meeting the widest possible range of applications. They are particularly preferred in the automotive industry but are also used at post offices. Overhead conveyor systems must always be integrated with a logistical process by finding the best way for a cheaper material flow in order to guarantee precise and fast workflows. With their help, any transport can take place without wasting ground and space, without excessive company capacity, lost or damaged products, erroneous delivery, endless travels and without wasting time. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles. The vehicles can move automatically on the rope, which is realized by means of energy and signals. Crossings are realized by switches. Ultra-light overhead conveyor systems provide optimal material flow, which produces profit and saves time. This article introduces two new ultra-light overhead conveyor designs in logistics and explains their components. Based on the technical characteristics of these components, scenarios are created and, after assumptions are made for the application area, visualized with the help of CAD software. These scenarios help logistics companies achieve lower development costs as well as quicker market maturity.
Abstract: Bringing change to the housing industry requires multiple efforts from various angles, especially to overcome resistance in the form of technology, human aspects, finance and resources. The transition from a conventional to a sustainable approach takes time as it requires changes at different levels of the industry, ranging from the individual and the organisational to the industry level. In Malaysia, there have been various efforts to bring green practices into the industry, but progress is low to moderate. Will the current efforts bear larger fruits in the near future? This study examines the perceptions of developers in Malaysia on the future of the green housing sector over the next 5 years. The introduction of the GBI rating system, improved awareness and knowledge among stakeholders, support from the government and local industry, and the effect of competitive advantage would support a brighter future. Meanwhile, the status quo in rules and regulations, lack of public interest and demand, organisational disinterest, weak local authority enforcement and project cost escalation would hinder faster progress.
Abstract: Cardiac pulse-related artifacts in the EEG recorded
simultaneously with fMRI are complex and highly variable. Their
effective removal is an unsolved problem. Our aim is to develop an
adaptive removal algorithm based on the matching pursuit (MP)
technique and to compare it to established methods using a visual
evoked potential (VEP). We recorded the VEP inside the static
magnetic field of an MR scanner (with artifacts) as well as in an
electrically shielded room (artifact free). The MP-based artifact
removal outperformed average artifact subtraction (AAS) and
optimal basis set removal (OBS) in terms of restoring the EEG field
map topography of the VEP. Subsequently, a dipole model was fitted
to the VEP under each condition using a realistic boundary element
head model. The source location of the VEP recorded inside the MR
scanner was closest to that of the artifact free VEP after cleaning
with the MP-based algorithm as well as with AAS. While none of the
tested algorithms offered complete removal, MP showed promising
results due to its ability to adapt to variations of latency, frequency
and amplitude of individual artifact occurrences while still utilizing a
common template.
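Classic matching pursuit, the technique the adaptive removal algorithm builds on, greedily subtracts the best-correlated dictionary atom from the residual at each step. A minimal sketch over a generic unit-norm dictionary; the paper's artifact-template dictionary and adaptation of latency, frequency and amplitude are not modelled here:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    # Greedy matching pursuit: at each step pick the unit-norm atom
    # (column of `dictionary`) with the largest correlation to the
    # residual, subtract its contribution, and repeat.
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual
```

For artifact removal, the reconstruction built from the selected atoms (the artifact estimate) would be subtracted from the recorded EEG, leaving the residual as the cleaned signal.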
Abstract: The steady mixed convection boundary layer flow from a vertical cone in a porous medium filled with a nanofluid is numerically investigated for different types of nanoparticles: Cu (copper), Al2O3 (alumina) and TiO2 (titania). The boundary value problem is first reduced to an ordinary differential equation, which is then solved using the shooting technique. Results of interest for the local Nusselt number at various values of the mixed convection parameter and the nanoparticle volume fraction parameter are evaluated. It is found that dual solutions exist for a certain range of the mixed convection parameter.
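A generic shooting-technique sketch, assuming a simple two-point boundary value problem rather than the similarity equations of this study: the unknown initial slope is adjusted by secant iteration until the integrated solution hits the far boundary condition.

```python
def rk4_ivp(f, y0, x0, x1, n=100):
    # Classical RK4 integration of the first-order system y' = f(x, y)
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def shoot(f, a, b, ya, yb, s0=0.0, s1=1.0, tol=1e-10, max_iter=50):
    # Shooting method: find the initial slope s so that the IVP solution
    # hits y(b) = yb, via secant iteration on the miss F(s) = y_s(b) - yb.
    def miss(s):
        return rk4_ivp(f, [ya, s], a, b)[0] - yb
    f0, f1 = miss(s0), miss(s1)
    for _ in range(max_iter):
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        s0, f0, s1, f1 = s1, f1, s2, miss(s2)
        if abs(f1) < tol:
            break
    return s1

# Illustrative BVP (not the paper's equations): y'' = 6x with y(0) = 0,
# y(1) = 1 has the exact solution y = x^3, so the correct slope y'(0) is 0.
slope = shoot(lambda x, y: [y[1], 6 * x], 0.0, 1.0, 0.0, 1.0)
```

Dual solutions of the kind reported in the abstract show up in this framework as two distinct initial slopes that both satisfy the far boundary condition, found from different starting guesses.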
Abstract: In the world of Peer-to-Peer (P2P) networking
different protocols have been developed to make the resource sharing
or information retrieval more efficient. The SemPeer protocol is a
new layer on Gnutella that transforms the connections of the nodes
based on semantic information to make information retrieval more
efficient. However, this transformation causes high clustering in the
network that decreases the number of nodes reached, therefore the
probability of finding a document is also decreased. In this paper we
describe a mathematical model for the Gnutella and SemPeer
protocols that captures clustering-related issues, followed by a proposal to modify the SemPeer protocol to achieve moderate clustering. This modification is a form of link management for the individual nodes that allows the SemPeer protocol to be more efficient, because the probability of a successful query in the P2P network is noticeably increased. To validate the models, we ran a series of simulations, which supported our results.
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural network based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using augmented new metrics. The results obtained are compared with previous studies that used traditional metrics. To enable comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the collected samples, the k-fold cross validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
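K-fold cross validation stretches a small sample by letting every data point serve once for validation and k-1 times for training. A minimal sketch of the fold construction, generic and not tied to the paper's dataset or MLP:

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    # Shuffle sample indices and split them into k roughly equal folds;
    # each fold serves once as the validation set while the remaining
    # folds form the training set.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val
```

A model (here, an MLP) would be retrained on each `train` split and scored on the matching `val` split, and the k scores averaged into one estimate of generalization error.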
Abstract: Understanding customer behavior in a grocery store has long been an issue in the retailing industry. The advent of RFID has made it easier to collect movement data for an individual shopper's behavior. Most previous studies used traditional statistical clustering techniques to find the major characteristics of customer behavior, especially the shopping path. However, due to various spatial constraints in the store, standard clustering methods are not feasible: movement data such as the shopping path must be adjusted in advance of the analysis, which is time-consuming and causes data distortion. To alleviate this problem, we propose a new approach to spatial pattern clustering based on the longest common subsequence. Experimental results using real data obtained from a grocery store confirm the good performance of the proposed method in finding the hot spots, dead spots and major path patterns of customer movements.
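The longest common subsequence at the heart of the proposed similarity measure can be computed with the standard dynamic program; the normalization into a path similarity below is an illustrative assumption, not necessarily the paper's exact measure.

```python
def lcs_length(a, b):
    # Standard dynamic-programming longest common subsequence length:
    # dp[i+1][j+1] is the LCS length of prefixes a[:i+1] and b[:j+1].
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def path_similarity(p, q):
    # Normalized LCS similarity between two shopping paths, each a
    # sequence of visited zone identifiers (hypothetical representation).
    return lcs_length(p, q) / max(len(p), len(q))
```

Because a subsequence need not be contiguous, two shoppers who visit the same zones in the same order still score highly even if one makes extra detours, which is why LCS suits paths constrained by the store layout.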
Abstract: The paper evaluates the ongoing reform of VAT in the Czech Republic in terms of its impact on individual households. The main objective is to analyse the impact of the given changes on individual households. The adopted method is based on data on household consumption by individual household quintiles; the obtained data are subjected to micro-simulation analysis. Results are discussed in terms of vertical tax justice. The analysis reveals that VAT behaves regressively, and a mere consolidation of rates at a higher level only increases the regressivity of this tax in the Czech Republic.
Abstract: This paper introduces two decoders for binary linear codes based on metaheuristics. The first uses a genetic algorithm and the second is based on a combination of a genetic algorithm with a feed-forward neural network. The decoder based on genetic algorithms (DAG), applied to BCH and convolutional codes, gives good performance compared to the Chase-2 and Viterbi algorithms respectively, and reaches the performance of OSD-3 for some Residue Quadratic (RQ) codes. This algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the number of individuals per population and the number of generations. In the second algorithm, the search space, in contrast to DAG, which was limited to the codeword space, now covers the whole binary vector space. It avoids a great number of coding operations by using a neural network, which greatly reduces the complexity of the decoder while maintaining comparable performance.
Abstract: We apply a particle tracking technique to track the motion of individual pathogenic Leptospira. We observe and capture images of motile Leptospira by means of a CCD camera and a darkfield microscope. Image processing, statistical theories and simulations are used for data analysis. Based on trajectory patterns, mean square displacement, and power spectral density characteristics, we found that the motion is most likely in the directed motion mode (70%), while the rest is either normal diffusion or an unidentified mode. Our findings may help explain why leptospires are so efficient at targeting internal tissues, as their motility contributes to the virulence factor.
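Mean square displacement, one of the diagnostics named above, distinguishes directed motion (MSD growing as tau squared) from normal diffusion (MSD growing linearly in tau). A generic time-averaged MSD over a 2-D track, illustrative only and not the paper's analysis pipeline:

```python
import numpy as np

def mean_square_displacement(track):
    # Time-averaged MSD of a 2-D trajectory for every lag tau:
    # MSD(tau) = < |r(t + tau) - r(t)|^2 >, averaged over start times t.
    track = np.asarray(track, dtype=float)
    n = len(track)
    msd = np.empty(n - 1)
    for tau in range(1, n):
        disp = track[tau:] - track[:-tau]
        msd[tau - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd
```

For a particle moving ballistically at constant speed v, this yields MSD(tau) = (v*tau)^2, the quadratic signature of the directed motion mode; a random walk instead gives a linear trend.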
Abstract: In this paper, we apply a semismooth active set method to image inpainting. The method exploits primal and dual features of a proposed regularized total variation model, following the technique presented in [4]. Numerical results show that the method is fast and efficient at inpainting sufficiently thin domains.
Abstract: Titanium alloys like Ti-6Al-2Sn-4Zr-6Mo (Ti-6246) are widely used in aerospace applications. Component manufacturing, however, is difficult and expensive as their machinability is extremely poor. A thorough understanding of the chip formation process is needed to improve the related metal cutting operations. In the current study, orthogonal cutting experiments were performed and the resulting chips were analyzed by optical microscopy and scanning electron microscopy. Chips from a Ti-6246 ingot were produced at different cutting speeds and cutting depths. During the experiments, depending on the cutting conditions, continuous or segmented chips were formed. Narrow, highly deformed and grain-oriented zones, the so-called shear zones, separated individual segments. Different material properties were measured in the shear zones and the segments.
Abstract: Study of fire and explosion is very important mainly
in oil and gas industries due to several accidents which have been
reported in the past and present. In this work, we have investigated
the flammability of bio oil vapour mixtures. This mixture may
contribute to fire during the storage and transportation process. Bio
oil sample derived from Palm Kernell shell was analysed using Gas
Chromatography Mass Spectrometry (GC-MS) to examine the
composition of the sample. Mole fractions of 12 selected
components in the liquid phase were obtained from the GC-FID data
and used to calculate mole fractions of components in the gas phase
via modified Raoult-s law. Lower Flammability Limits (LFLs) and
Upper Flammability Limits (UFLs) for individual components were
obtained from published literature. However, stoichiometric
concentration method was used to calculate the flammability limits
of some components which their flammability limit values are not
available in the literature. The LFL and UFL values for the mixture
were calculated using the Le Chatelier equation. The LFLmix and
UFLmix values were used to construct a flammability diagram and
subsequently used to determine the flammability of the mixture. The
findings of this study can be used to propose suitable inherently
safer method to prevent the flammable mixture from occurring and
to minimizing the loss of properties, business, and life due to fire
accidents in bio oil productions.
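The Le Chatelier rule referred to above combines individual limits as L_mix = 1 / sum(y_i / L_i), where y_i is the mole fraction of fuel i in the fuel-only mixture and L_i its individual LFL or UFL. A minimal sketch; the fuel names and limit values in the usage note are illustrative textbook numbers, not the bio-oil data of this study.

```python
def le_chatelier(mole_fractions, limits):
    # Flammability limit of a fuel mixture by the Le Chatelier rule:
    #   L_mix = 1 / sum(y_i / L_i)
    # y_i: mole fraction of fuel i in the fuel-only mixture (sums to 1);
    # L_i: that fuel's individual LFL or UFL in vol.%.
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return 1.0 / sum(y / L for y, L in zip(mole_fractions, limits))
```

For example, an equimolar two-fuel mixture with individual LFLs of roughly 5.0 and 2.1 vol.% (typical handbook values for methane and propane) gives LFL_mix of about 2.96 vol.%.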
Abstract: We present a dual-band (Cellular & PCS) dual-path
zero-IF receiver for CDMA2000 diversity, monitoring and
simultaneous-GPS. The secondary path is a SAW-less diversity
CDMA receiver, which can also be used for advanced features such as
monitoring when supported with an additional external VCO. A GPS
receiver is integrated with its dedicated VCO allowing simultaneous
positioning during a cellular call. The circuit is implemented in a
0.25μm 40GHz-fT BiCMOS process and uses a HVQFN 56-pin
package. It consumes a maximum 300mW from a 2.8V supply in
dual-modes. The chip area is 12.8mm2.
Abstract: One of the major disadvantages of minimally invasive surgery (MIS) is the lack of tactile feedback to the surgeon.
In order to identify and avoid any damage to the grasped complex
tissue by endoscopic graspers, it is important to measure the local
softness of tissue during MIS. One way to display the measured
softness to the surgeon is a graphical display. In this paper, a new tactile sensor is reported. The sensor consists of an
array of four softness sensors, which are integrated into the jaws of a
modified commercial endoscopic grasper. Each individual softness
sensor consists of two piezoelectric polymer Polyvinylidene Fluoride
(PVDF) films, which are positioned below a rigid and a compliant
cylinder. The compliant cylinder is fabricated using a micro molding
technique. The combination of output voltages from PVDF films is
used to determine the softness of the grasped object. The theoretical
analysis of the sensor is also presented.
A graphical method has also been developed to convey the measured tactile softness to the surgeon. In this
approach, the proposed system, including the interfacing and the data
acquisition card, receives signals from the array of softness sensors.
After the signals are processed, the tactile information is displayed
by means of a color coding method. It is shown that the degrees of
softness of the grasped objects/tissues can be visually differentiated
and displayed on a monitor.