Abstract: This paper tests the degree of integration of the Malaysian and Singaporean stock markets with the world market. The Kalman filter (KF) methodology is applied to the International Capital Asset Pricing Model (ICAPM), and the pricing errors estimated within the ICAPM framework are used as a measure of market integration or segmentation. The advantage of the KF technique is that it allows for time-varying coefficients when estimating the ICAPM and is hence able to capture a varying degree of market integration. Empirical results show clear evidence of a varying degree of market integration for both Malaysia and Singapore. Furthermore, the changes in the level of market integration are found to coincide with certain economic events that have taken place. These findings provide evidence of the practicability of the KF technique for estimating stock market integration. Comparing the two markets, the trends of the market integration indices for Malaysia and Singapore look similar through time, but their magnitudes differ notably, with the Malaysian stock market showing a greater degree of integration. Finally, the significant evidence of a varying degree of market integration shows that OLS is inappropriate for estimating the level of market integration.
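The estimation idea behind this abstract can be illustrated with a minimal sketch: a random-walk time-varying coefficient in a single-factor ICAPM regression tracked by a scalar Kalman filter. Everything here is an assumption for illustration (synthetic returns, hand-picked noise variances), not the paper's specification:

```python
import random

def kalman_time_varying_beta(r_market, r_world, q=1e-5, r_obs=2.5e-5):
    """Track a random-walk beta_t in r_m,t = beta_t * r_w,t + e_t.

    q is the assumed state-noise variance and r_obs the observation-noise
    variance; both are hand-picked for the synthetic data below.
    """
    beta, p = 1.0, 1.0              # diffuse initial state and variance
    path = []
    for y, x in zip(r_market, r_world):
        p += q                      # predict: beta follows a random walk
        s = x * p * x + r_obs       # innovation (pricing-error) variance
        k = p * x / s               # Kalman gain
        beta += k * (y - x * beta)  # update with the pricing error
        p *= 1.0 - k * x
        path.append(beta)
    return path

# synthetic example: world returns and a market with true beta = 0.8
random.seed(0)
rw = [random.gauss(0.0, 0.02) for _ in range(500)]
rm = [0.8 * x + random.gauss(0.0, 0.005) for x in rw]
est = kalman_time_varying_beta(rm, rw)
```

In the paper it is the filtered pricing errors (the innovations `y - x * beta`), rather than beta itself, that serve as the integration measure; the sketch only shows how the KF delivers a time-varying coefficient that a single OLS fit cannot.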
Abstract: In the load frequency control (LFC) problem, the interconnections among areas act as disturbance inputs, so it is important to suppress these disturbances through the coordination of governor systems. In contrast, tie-line power flow control by a thyristor-controlled phase shifter (TCPS) located between two areas makes it possible to actively stabilize system frequency oscillations through the interconnection, which is also expected to provide a new ancillary service for future power systems. Thus, a control strategy that adjusts the phase angle of the TCPS is proposed in this paper to provide active control of the system frequency. In addition, the robust tuning of the PID controller's parameters under a bilateral contracted scenario, following large step load demands and disturbances with and without the TCPS, is investigated using particle swarm optimization (PSO), which has a strong ability to find near-optimal solutions. The newly developed control strategy combines the advantages of PSO and the TCPS and has a simple structure that is easy to implement and tune. To demonstrate the effectiveness of the proposed strategy, a three-area restructured power system is considered as a test system under different operating conditions and system nonlinearities. The analysis reveals that, compared to operation without the TCPS, the TCPS is quite capable of suppressing frequency and tie-line power oscillations effectively over a wide range of plant parameter changes, area load demands, and disturbances, even in the presence of system nonlinearities.
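A minimal sketch of the PSO tuning loop described above. A toy first-order plant and an ITAE cost stand in for the paper's three-area power-system model; all gains, bounds, and PSO constants are assumptions for illustration:

```python
import random

def step_cost(kp, ki, kd, dt=0.01, t_end=5.0):
    """ITAE cost of a PID loop on a toy first-order plant dy/dt = -y + u.

    Illustrative stand-in: the paper instead evaluates frequency and
    tie-line deviations of a three-area power system.
    """
    y = i = prev_e = 0.0
    cost, t = 0.0, 0.0
    while t < t_end:
        e = 1.0 - y                      # unit step reference
        i += e * dt
        d = (e - prev_e) / dt
        u = kp * e + ki * i + kd * d     # PID control law
        y += (-y + u) * dt               # Euler step of the plant
        cost += t * abs(e) * dt          # integral of time-weighted |error|
        prev_e, t = e, t + dt
    return cost

def pso(obj, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(*b) for b in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [obj(*p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            c = obj(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# tune (Kp, Ki, Kd) within assumed bounds
gains, cost = pso(step_cost, [(0.0, 10.0), (0.0, 10.0), (0.0, 1.0)])
```

The swarm converges to gains whose ITAE cost is far below that of an untuned proportional-only controller, which is the mechanism the paper exploits on its much larger model.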
Abstract: The effects of dynamic subgrid scale (SGS) models are
investigated in variational multiscale (VMS) LES simulations of bluff
body flows. The spatial discretization is based on a mixed finite
element/finite volume formulation on unstructured grids. In the VMS
approach used in this work, the separation between the largest and the
smallest resolved scales is obtained through a variational projection
operator and a finite volume cell agglomeration. Dynamic versions
of the Smagorinsky and WALE SGS models are used to account for
the effects of the unresolved scales. In the VMS approach, these
effects are only modeled in the smallest resolved scales. The dynamic
VMS-LES approach is applied to the simulation of the flow around a
circular cylinder at Reynolds numbers 3900 and 20000 and to the flow
around a square cylinder at Reynolds numbers 22000 and 175000. It
is observed, as in previous studies, that the dynamic SGS procedure
has a smaller impact on the results within the VMS approach than in
LES. However, improvements are demonstrated for important features
such as the recirculating part of the flow. The global prediction is
improved at a small extra computational cost.
Abstract: This paper presents the development of a low-cost nanomembrane fabrication system. The system is specially designed for anodic aluminum oxide membranes and is capable of performing processes such as anodization and electro-polishing. The designed machine was successfully tested for 'mild anodization' (MA) for 48 hours and 'hard anodization' (HA) for 3 hours at a constant 0 °C. The system is digitally controlled and maintains the temperature during anodization and electro-polishing. The total cost of the developed machine is 20 times less than that of the multi-cooling systems available on the market which are generally used for this purpose.
Abstract: Contaminants are most often not taken seriously into consideration, a behavior that stems directly from the lack of monitoring and professional reporting on pollution in printing facilities in Serbia. The goal of planned and systematic ozone measurements in the ambient air of screen printing facilities in Novi Sad is to examine its impact on employees' health and to track trends in concentration. In this study, ozone concentrations were determined by discontinuous and continuous methods during the automatic and manual screen printing processes. The obtained results indicate that the average ozone concentrations measured during the automatic process were almost 3 to 28 times higher for the discontinuous method and 10 times higher for the continuous method (1.028 ppm) compared to the values prescribed by OSHA. In the manual process, average ozone concentrations were within the prescribed values for the discontinuous method and almost 3 times higher for the continuous method (0.299 ppm).
Abstract: This paper attempts to solve the problem of
searching for and retrieving similar MRI images via Internet services,
using morphological features extracted from the original
image. The study aims to provide an additional tool for
search and retrieval; until now, the main search
mechanism has been syntactic, based on keywords.
The proposed technique aims to serve the new requirements of
libraries. One of these is the development of computational tools for
the control and preservation of the intellectual property of digital
objects, and especially of digital images. For this purpose, this paper
proposes the use of a serial number extracted by a previously
tested semantic properties method. This method, centered on
the multiple layers of a set of arithmetic points, assures the following
two properties: the uniqueness of the final extracted number and the
semantic dependence of this number on the image used as the
method's input. The major advantage of this method is that it can
verify, to a reliable degree, the authentication of a published image or
its partial modification. It also improves on the known hash functions
used by digital signature schemes, producing alphanumeric strings
both for authentication checking and for expressing the degree of
similarity between an unknown image and an original image.
Abstract: Speckle noise affects all coherent imaging systems
including medical ultrasound. In medical images, noise suppression
is a particularly delicate and difficult task. A tradeoff between noise
reduction and the preservation of actual image features has to be made
in a way that enhances the diagnostically relevant image content.
Even though wavelets have been extensively used for denoising
speckle images, we have found that denoising using contourlets gives
much better performance in terms of SNR, PSNR, MSE, variance and
correlation coefficient. The objective of this paper is to determine the
number of levels of Laplacian pyramidal decomposition, the number
of directional decompositions to perform on each pyramidal level, and
the thresholding schemes that yield optimal despeckling of medical
ultrasound images in particular. The proposed method consists of the
log transformed original ultrasound image being subjected to contourlet
transform, to obtain contourlet coefficients. The transformed
image is denoised by applying thresholding techniques on individual
band pass sub bands using a Bayes shrinkage rule. We quantify the
achieved performance improvement.
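The shrinkage step the abstract refers to can be sketched compactly. This shows only the BayesShrink rule applied to one subband's coefficients (the contourlet transform itself needs a dedicated library, and the noise std is assumed known here, though it is often estimated from the finest subband):

```python
import math

def bayes_shrink_threshold(coeffs, sigma_noise):
    """BayesShrink threshold T = sigma_n^2 / sigma_x for one subband.

    sigma_x is the signal std estimated from the subband, assuming
    observed variance = signal variance + noise variance.
    """
    n = len(coeffs)
    var_obs = sum(c * c for c in coeffs) / n
    sigma_x = math.sqrt(max(var_obs - sigma_noise ** 2, 0.0))
    if sigma_x == 0.0:
        # subband is essentially pure noise: threshold everything away
        return max(abs(c) for c in coeffs)
    return sigma_noise ** 2 / sigma_x

def soft_threshold(coeffs, t):
    """Soft-threshold each coefficient: shrink its magnitude by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# example subband: variance 1.0, assumed noise std 0.6
band = [1.0, -1.0, 1.0, -1.0]
t = bayes_shrink_threshold(band, 0.6)   # sigma_x = 0.8, T = 0.36/0.8
denoised = soft_threshold(band, t)
```

In the full pipeline this rule is applied per directional subband of the log-transformed image, after which the inverse contourlet transform and exponentiation recover the despeckled image.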
Abstract: Experiments have been performed to investigate the effects of radiation on mixed convection heat transfer for thermally developing airflow in vertical ducts with two differentially heated isothermal walls and two adiabatic walls. The investigation covers Reynolds numbers from Re = 800 to Re = 2900, heat fluxes from 256 W/m2 to 863 W/m2, hot wall temperatures from 27 °C to 100 °C, aspect ratios of 1 and 0.5, and internal wall emissivities of 0.05 and 0.85. In the present study, flow visualization was conducted to observe the flow patterns. The surface temperature along the walls was studied to investigate the local Nusselt number variation within the duct. The results show that the flow conditions and radiation significantly affect the total Nusselt number and tend to reduce the buoyancy effect.
Abstract: In this paper, a new method of image edge detection
and characterization is presented. The "parametric filtering" method uses
a judiciously defined filter that preserves the correlation
structure of the input signal in the autocorrelation of the output. This
reveals the evolution of the image correlation structure, as well as
various distortion measures that quantify the deviation between
two zones of the signal (the two Hamming signals), for the detection
of an image edge.
Abstract: This research studies the application of immobilized
TiO2 and Cu-TiO2 layers on a graphite substrate as a negative
electrode, or anode, for a Li-ion battery. The titania layer was produced
by the chemical bath deposition method, while the Cu particles
were deposited electrochemically. A material can be used as an
electrode if it is able to intercalate Li ions into its crystal
structure. The Li intercalation into TiO2/graphite and Cu-TiO2/graphite
was analyzed from the changes in their XRD patterns
after use as electrodes during the discharging process. The XRD
patterns were refined by the Le Bail method in order to determine the
crystal structure of the prepared materials. Specific capacity and
cycle ability measurements were carried out to study the performance
of the prepared materials as the negative electrode of the Li-ion battery.
The specific capacity was measured during the discharging process from
fully charged to the cut-off voltage. A 300 was used as a load.
The results show that the specific capacity of the Li-ion battery with
TiO2/graphite as the negative electrode is 230.87 ± 1.70 mAh.g-1, which is
higher than that of the Li-ion battery with pure graphite
as the negative electrode, i.e., 140.75 ± 0.46 mAh.g-1. Meanwhile,
deposition of Cu onto the TiO2 layer does not increase the specific
capacity; the value is even lower than that of the battery with
TiO2/graphite as the electrode. The cycle ability of the prepared battery
is only two cycles, because the Li ribbon used as the cathode
became fragile and easily broken.
Abstract: In this paper, a theoretical formula is presented to
predict the instantaneous folding force of the first fold creation in a
square column under axial loading. Calculations are based on analysis
of “Basic Folding Mechanism" introduced by Wierzbicki and
Abramowicz. For this purpose, the sum of dissipated energy rate under
bending around horizontal and inclined hinge lines and dissipated
energy rate under extensional deformations are equated to the work rate
of the external force on the structure. Final formula obtained in this
research, reasonably predicts the instantaneous folding force of the first
fold creation versus folding distance and folding angle and also predicts
the instantaneous folding force instead of the average value. Finally,
according to the calculated theoretical relation, instantaneous folding
force of the first fold creation in a square column was sketched
versus folding distance and was compared to the experimental results
which showed a good correlation.
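The energy balance this abstract describes can be written schematically as follows. The notation is assumed here, following the general form of the Wierzbicki-Abramowicz basic folding mechanism; it is a sketch of the balance, not the paper's final formula:

```latex
% external work rate = bending dissipation rate + membrane dissipation rate
\dot{W}_{\mathrm{ext}} = P(\delta)\,\dot{\delta}
  = \dot{E}_{\mathrm{bend}} + \dot{E}_{\mathrm{ext}},
\qquad
\dot{E}_{\mathrm{bend}} = \sum_{i} M_0\, L_i\, \dot{\theta}_i,
\qquad
M_0 = \frac{\sigma_0 t^2}{4},
```

where $P(\delta)$ is the instantaneous folding force at folding distance $\delta$, $L_i$ and $\dot{\theta}_i$ are the length and rotation rate of hinge line $i$, $M_0$ is the fully plastic bending moment per unit length, $\sigma_0$ the flow stress, and $t$ the wall thickness. Solving the balance for $P(\delta)$ is what yields the instantaneous force rather than the mean crushing force.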
Abstract: In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. Subsequently, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have investigated properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19], [21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight-way and seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, or down, or in the future, but not in the past, on a four-dimensional input tape.
Abstract: In recent years, the number of applications of multi-robot
systems (MRS) has been growing in various areas. In practice, however,
their design is often difficult: algorithms are proposed against a
theoretical background and do not consider the errors and noise of real
conditions, so they are not usable in a real environment. These errors
are also clearly visible in the task of target localization, where robots
try to find and estimate the position of a target with their sensors.
Target localization is possible with a single robot, but as has been
shown, a group of mobile robots can find and localize the target more
accurately and faster. Here, accurate target position estimation is
achieved by the cooperation of the MRS and particle filtering. The
advantage of using an MRS with particle filtering was tested on the
task of fixed target localization by a group of mobile robots.
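The cooperation the abstract describes can be sketched as a particle filter fusing noisy range measurements from several robots at known positions. This is an illustrative sketch, not the paper's exact filter; the robot positions, measurement model, and all constants are assumptions:

```python
import math
import random

def particle_filter_target(robots, measure, n=2000, sweeps=30,
                           noise=0.3, seed=2):
    """Estimate a fixed 2-D target from noisy ranges measured by robots.

    `robots` are known sensor positions, `measure(robot)` returns a noisy
    range to the target, `noise` is the assumed measurement std.
    """
    rng = random.Random(seed)
    parts = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(n)]
    for _ in range(sweeps):
        for r in robots:
            z = measure(r)
            # weight each particle by the Gaussian range likelihood
            w = []
            for px, py in parts:
                d = math.hypot(px - r[0], py - r[1])
                w.append(math.exp(-0.5 * ((z - d) / noise) ** 2) + 1e-12)
            total = sum(w)
            cum, acc = [], 0.0
            for wi in w:
                acc += wi / total
                cum.append(acc)
            # systematic resampling with small jitter against depletion
            start, newp, i = rng.random() / n, [], 0
            for k in range(n):
                u = start + k / n
                while i < n - 1 and cum[i] < u:
                    i += 1
                px, py = parts[i]
                newp.append((px + rng.gauss(0, 0.05),
                             py + rng.gauss(0, 0.05)))
            parts = newp
    return (sum(p[0] for p in parts) / n, sum(p[1] for p in parts) / n)

# three robots ranging a fixed target at (3, -2)
target = (3.0, -2.0)
meas_rng = random.Random(7)
robots = [(-5.0, 0.0), (5.0, 5.0), (0.0, -6.0)]
def measure(r):
    return math.hypot(target[0] - r[0], target[1] - r[1]) + meas_rng.gauss(0, 0.3)

est = particle_filter_target(robots, measure)
```

With a single robot a range measurement only constrains the target to a circle; fusing rings from several robots is what collapses the particle cloud to a point, which is the accuracy gain the abstract reports.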
Abstract: In this paper, an inventory model is developed with a finite
and constant replenishment rate, a price-dependent demand rate, the
time value of money and inflation, a finite time horizon, lead time,
and an exponential deterioration rate, with the objective of maximizing
the present worth of the total system profit. Using a
dynamic programming based solution algorithm, the optimal
sequence of cycles can be found, and different optimal
selling prices, optimal order quantities, and optimal maximum
inventories can be obtained for cycles of unequal lengths,
which has never been done before for this model. A
numerical example is used to show the accuracy of the solution
procedure.
Abstract: Passive solar systems were conceived to make the
greatest possible use of solar energy in cold climates and at high
altitudes. They spread throughout the world until the 1980s
without any attention to the specific climate or to summer
behavior; this caused the deactivation of the systems due to a series
of problems connected with summer overheating, complex
management, and the accumulation of dust.
To this day, European regulation limits only winter
consumption without any attention to summer behavior, but the
recent European standard EN 15251 underlines the relevance of indoor
comfort and the need to validate analytic studies by
monitoring case studies.
In the present paper we demonstrate that the solar wall is an
efficient system from both the thermal comfort and the energy saving
points of view, and that it is the most suitable for our temperate
climates because it can also be used as a passive cooling system. In
particular, the paper presents an experimental and numerical analysis
carried out on a case study with nine different passive solar systems
in Ancona, Italy.
We carried out a detailed study of the lodging served by the
solar wall, monitoring and evaluating the indoor
conditions.
From the analysis of the monitored data, on the basis of recognized
comfort models (ISO, ASHRAE, Givoni's BBCC), it emerged that the
solar wall behaves optimally in the middle seasons. In the winter
phase, this passive system gives more advantages in terms of energy
consumption than the other systems, because it provides greater heat
gain and therefore lower consumption. In summer, when the outside
air temperature returns to the seasonal mean value, indoor comfort
is optimal thanks to efficient cross ventilation activated by
the same wall.
Abstract: Frequent patterns are patterns, such as sets of features or items, that appear in data frequently. Finding such frequent patterns has become an important data mining task because it reveals associations, correlations, and many other interesting relationships hidden in a dataset. Most of the proposed frequent pattern mining algorithms have been implemented in imperative programming languages such as C, C++, and Java. The imperative paradigm is significantly inefficient when the itemset is large and the frequent pattern is long. We suggest a high-level, declarative style of programming using a functional language. Our supposition is that the problem of frequent pattern discovery can be efficiently and concisely implemented in a functional paradigm, since pattern matching is a fundamental feature supported by most functional languages. Our frequent pattern mining implementation in the Haskell language confirms our hypothesis about the conciseness of the program. Performance studies on speed and memory usage support our intuition about the efficiency of functional languages.
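The mining task the abstract describes can be sketched in a few lines. The paper's own implementation is in Haskell and leans on pattern matching; the illustrative version below is a plain level-wise (Apriori-style) counting loop, shown in Python for readability:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise frequent itemset mining: count k-itemsets, keep the
    frequent ones, and join them to form (k+1)-item candidates."""
    items = sorted({i for t in transactions for i in t})
    candidates = [frozenset([i]) for i in items]
    result = {}
    k = 1
    while candidates:
        # support = number of transactions containing the candidate
        counts = {s: sum(1 for t in transactions if s <= t)
                  for s in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        # join frequent k-itemsets that overlap in k-1 items
        keys = list(frequent)
        candidates = {a | b for a, b in combinations(keys, 2)
                      if len(a | b) == k + 1}
        k += 1
    return result

# four toy transactions over items a-d, minimum support of 2
txns = [frozenset("abc"), frozenset("ab"), frozenset("acd"), frozenset("bc")]
freq = frequent_itemsets(txns, 2)
```

Here {a,b}, {a,c}, and {b,c} each appear in two transactions and survive, while {a,b,c} appears only once and is pruned, which is exactly the downward-closure property such algorithms exploit.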
Abstract: School homework has been synonymous with student life in Chinese national type primary schools in Malaysia. Although many press reports claimed that students were burdened with too much of it, homework continues to be a common practice in national type schools and is believed to contribute to academic achievement. This study was conducted to identify the relationship between the burden of school homework and academic achievement among pupils in Chinese national type primary schools in the state of Perak, Malaysia. A total of 284 students (142 from urban and 142 from rural areas) were chosen as participants. The variables of gender and location (urban/rural) showed significant differences in student academic achievement: female Chinese students from rural areas showed a higher mean score than male students from urban areas. Chinese language teachers should therefore give appropriate and relevant homework to primary school students to support good academic performance.
Abstract: As the data-driven economy grows faster than
ever and the demand for energy is spurred on, we face
unprecedented challenges in improving the energy efficiency of data
centers. Effectively maximizing energy efficiency and minimizing the
cooling energy demand have become pervasive goals for data centers.
This paper investigates the overall energy consumption and the energy
efficiency of the cooling system of a data center in Finland as a case
study. The power, cooling, and energy consumption characteristics
and the operating conditions of the facilities are examined and
analysed. Potential energy and cooling saving opportunities are
identified, and further suggestions for improving the performance of
the cooling system are put forward. The results are presented as a
comprehensive evaluation of both the energy performance and good
practices of energy efficient cooling operations for the data center.
Utilization of an energy recovery concept for the cooling system is
proposed. The conclusion we can draw is that even though the analysed
data center demonstrated relatively high energy efficiency, based on
its power usage effectiveness value, there is still significant
potential for energy saving in its cooling systems.
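The power usage effectiveness (PUE) metric the abstract relies on is a simple ratio; the worked numbers below are hypothetical and are not the case-study data center's figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every kWh goes to the IT load; everything
    above 1.0 is cooling, power distribution, and other overhead.
    """
    return total_facility_kwh / it_equipment_kwh

# hypothetical annual figures: 5.2 GWh total, 4.0 GWh to IT equipment
total_kwh, it_kwh = 5_200_000.0, 4_000_000.0
value = pue(total_kwh, it_kwh)   # overheads add 30 % on top of the IT load
```

A low PUE can coexist with an inefficient cooling subsystem, which is why the paper examines the cooling chain separately rather than stopping at the headline ratio.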
Abstract: Grid computing is a form of distributed computing
that involves coordinating and sharing computational power, data
storage and network resources across dynamic and geographically
dispersed organizations. Scheduling onto the Grid is NP-complete,
so there is no best scheduling algorithm for all grid computing
systems. An alternative is to select an appropriate scheduling
algorithm for a given grid environment based on the
characteristics of the tasks, machines, and network connectivity. Job
and resource scheduling is one of the key research areas in grid
computing. The goal of scheduling is to achieve the highest possible
system throughput and to match the application's needs with the
available computing resources. The motivation of this survey is to
encourage new researchers in the field of grid computing, so
that they can easily understand the concept of scheduling and
contribute to developing more efficient scheduling algorithms. This
will help interested researchers to carry out further work in this
thrust area of research.
Abstract: Before performing polymerase chain reactions (PCR), a feasible primer set is required. Many primer design methods have been proposed for designing feasible primer sets. However, the majority of these methods require a relatively long time to obtain an optimal solution, since large quantities of template DNA need to be analyzed. Furthermore, the designed primer sets usually do not provide a specific PCR product. In recent years, evolutionary computation has been applied to PCR primer design and has yielded promising results. In this paper, a particle swarm optimization (PSO) algorithm is proposed to solve primer design problems associated with providing a specific product for PCR experiments. A test set of the gene CYP1A1, associated with a heightened lung cancer risk, was analyzed, and the accuracy and running time were compared with those of a genetic algorithm (GA) and a memetic algorithm (MA). The comparison of results indicated that the proposed PSO method for primer design finds optimal or near-optimal primer sets and effective PCR products in a relatively short time.