Abstract: Operational safety of critical systems, such as nuclear power plants, industrial chemical processes and means of transportation, is a major concern for system engineers and operators. One means of assuring it is on-line safety monitors that deliver three safety tasks: fault detection and diagnosis, alarm annunciation and fault control. While current monitors deliver these tasks, both benefits and limitations of their approaches have been highlighted. Drawing on those benefits, this paper develops a distributed monitor based on semi-independent agents, i.e. a multiagent system, with monitoring knowledge derived from a safety assessment model of the monitored system. Agents are deployed hierarchically and provided with knowledge portions and collaboration protocols to reason about, and integrate, the operational conditions of the components of the monitored system. The monitor aims to address limitations arising from the large scale, complicated behaviour and distributed nature of monitored systems, and to deliver the aforementioned three monitoring tasks effectively.
Abstract: Due to memory leaks, often-valuable system memory is wasted and denied to other processes, thereby degrading computational performance. If an application's memory usage exceeds the virtual memory size, it can lead to a system crash. Current memory leak detection techniques for clusters are reactive and display the memory leak information only after the process has finished executing (they detect a memory leak only after it occurs).
This paper presents a Dynamic Memory Monitoring Agent (DMMA) technique. The DMMA framework performs dynamic memory leak detection, identifying leaks while the application is still in its execution phase. When DMMA identifies a memory leak in any process in the cluster, it informs the end users so they can take corrective action, and it also submits the affected process to a healthy node in the system, thus providing a reliable service to the user. DMMA maintains information about the memory consumption of executing processes, and based on this information and critical states, DMMA can improve the reliability and effectiveness of cluster computing.
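The in-execution trend check that such a proactive monitor could perform can be sketched as follows; the class name, window size and 10% growth threshold are illustrative assumptions, not part of the DMMA specification:

```python
from collections import deque

class LeakDetector:
    """Illustrative sketch of proactive (in-execution) leak detection:
    flag a process whose memory samples keep growing. The names and
    thresholds here are assumptions, not DMMA specifics."""

    def __init__(self, window=5, growth_threshold=1.10):
        self.samples = deque(maxlen=window)
        self.window = window
        self.growth_threshold = growth_threshold

    def record(self, rss_bytes):
        """Record one per-process memory sample; return True if a
        leak is suspected and the process should be reported/moved."""
        self.samples.append(rss_bytes)
        if len(self.samples) < self.window:
            return False
        # Suspect a leak if every sample grew and total growth over
        # the window exceeds the threshold (here, 10%).
        pairs = zip(self.samples, list(self.samples)[1:])
        monotonic = all(b > a for a, b in pairs)
        ratio = self.samples[-1] / self.samples[0]
        return monotonic and ratio >= self.growth_threshold
```

In a real deployment the samples would come from the operating system's per-process memory accounting, polled while the application runs, rather than being passed in by hand.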
Abstract: Limited infrastructure development on peat and organic soils is a serious geotechnical issue common to many countries of the world, especially Malaysia, where about 1.5 million ha of these problematic soils are distributed. These soils have high water content and organic content, exhibit different mechanical properties, and may also change chemically and biologically with time. Constructing structures on peaty ground involves the risk of ground failure and extreme settlement. Nowadays, with increasing land use, much effort is needed to make peatlands usable for construction. The deep mixing method, employing cement as a binder, is generally used as a measure against peaty/organic ground failure problems; the technique is widely adopted because it can improve the ground considerably in a short period of time. An understanding of geotechnical properties such as shear strength, stiffness and compressibility behaviour of these soils is required before construction on them proceeds. Therefore, 1-1.5 m peat soil samples from the state of Johor and an organic soil from Melaka, Malaysia, were investigated. Cement was added to the soil in the pre-mixing stage at water-cement ratios of 3.5, 7, 14 and 140 for peats and 5, 10 and 30 for organic soils, essentially to modify the original soil textures and properties. The mixtures, in slurry form, were poured into polyvinyl chloride (PVC) tubes and cured at room temperature (25°C) for 7, 14 and 28 days. Laboratory experiments, including unconfined compressive strength and bender element tests, were conducted to monitor the improved strength and stiffness of the stabilised mixed soils. In addition, scanning electron microscopy (SEM) observations were made to investigate changes in the microstructure of the stabilised soils and to evaluate the hardening effect of cement-stabilised peat and organic soils. This preliminary effort indicates that pre-mixing peat and organic soils contributes to gaining soil strength, while helping engineers to establish a new method for improving these problematic grounds in further practical and long-term applications.
Abstract: Most of the well-known methods for generating Gaussian variables require at least one standard uniformly distributed value for each Gaussian variable generated. The period of the random number generator therefore limits the number of independent Gaussian distributed variables that can be generated, while the statistical solution of complex systems requires a large number of random numbers for its statistical analysis. We propose an alternative simple method of generating an almost unlimited number of Gaussian distributed variables using a limited number of standard uniformly distributed random numbers.
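For context, the uniform-per-Gaussian coupling that the abstract refers to can be illustrated with the classical Box-Muller transform, which consumes two independent U(0,1) samples to produce two independent standard normal samples; this is background on the standard approach, not the proposed method:

```python
import math
import random

def box_muller(u1, u2):
    """Box-Muller transform: two independent U(0,1) samples in,
    two independent standard normal N(0,1) samples out."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

# Each pair of Gaussians consumes a pair of uniforms, so the period of
# the uniform generator bounds the number of independent Gaussians --
# the limitation the proposed method aims to relax.
z1, z2 = box_muller(random.random(), random.random())
```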
Abstract: Global climate change has become the preeminent threat to human security in the 21st century. From a mitigation perspective, this study aims to evaluate the performance of a biogas renewable-energy project under Clean Development Mechanism activities (namely Korat-Waste-to-Energy) in Thailand, and to assess local perceptions of the significance of climate change mitigation and the sustainability of such a project in their community. A questionnaire was developed based on the national sustainable development criteria and was distributed among systematically selected households within the project boundaries (n=260). The majority of respondents strongly agreed that the project reduced odour problems (81%) and air pollution (76%). However, they were unsure about greenhouse gas reduction from the project and unaware of the key issues of climate change. A lesson learned suggests the need to further investigate the possible socio-psychological barriers that may significantly shape public perception and understanding of climate change in the local context.
Abstract: In this work we present some matrix operators, named circulant operators, and their action on square matrices. This study of square matrices provides new insights into the structure of the space of square matrices. Moreover, it can be useful in various fields, such as agent networking on grids or large-scale distributed self-organizing grid systems.
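A minimal sketch of the kind of object involved: a circulant matrix is determined by its first row, each subsequent row being a cyclic right-shift of the previous one, and it acts on a square matrix by ordinary multiplication. The helper names below are illustrative, not the paper's notation:

```python
def circulant(first_row):
    """Build a circulant matrix from its first row: entry (k, j) is
    first_row[(j - k) mod n], i.e. row k is the first row cyclically
    shifted k positions to the right."""
    n = len(first_row)
    return [[first_row[(j - k) % n] for j in range(n)] for k in range(n)]

def matmul(X, Y):
    """Plain n x n matrix product (no external libraries)."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The circulant built from [0, 1, 0] is a cyclic shift operator:
# left-multiplying by it permutes the rows of a matrix cyclically.
C = circulant([0, 1, 0])
A = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
B = matmul(C, A)
```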
Abstract: In this work, a radial basis function (RBF) neural network is developed for the identification of hyperbolic distributed parameter systems (DPSs). This empirical model is based only on process input-output data and is used for the estimation of the controlled variables at specific locations, without the need for online solution of partial differential equations (PDEs). The nonlinear model that is obtained is suitably transformed into a nonlinear state space formulation that also takes into account the model mismatch. A stable robust control law is implemented for the attenuation of external disturbances. The proposed identification and control methodology is applied to a long duct, a common component of thermal systems, for a flow-based control of temperature distribution. The closed-loop performance is significantly improved in comparison to existing control methodologies.
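A minimal sketch of the empirical model class mentioned above, assuming Gaussian basis functions; in the paper the centres, widths and weights would be fit from process input-output data rather than supplied by hand:

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian radial basis function (RBF) network:
    y = bias + sum_k w_k * exp(-||x - c_k||^2 / (2 * s_k^2)).
    Illustrative sketch only; parameter values are placeholders."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (2.0 * s * s))
    return y
```

Given measured input-output pairs, the weights of such a model can be fit by linear least squares once the centres and widths are chosen, which is part of what makes the RBF structure attractive for empirical identification.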
Abstract: In this paper, by using Mawhin's continuation theorem of coincidence degree and a method based on a delay differential inequality, some sufficient conditions are obtained for the existence and global exponential stability of periodic solutions of cellular neural networks with distributed delays and impulses on time scales. The results of this paper generalize previously known results.
Abstract: This paper deals with dynamic load balancing using PVM. In a distributed environment, load balancing and heterogeneity are very critical issues that need to be examined in depth in order to achieve optimal results and efficiency. Various techniques are used to distribute the load dynamically among different nodes and to deal with heterogeneity. These techniques take different approaches, with process migration as the basic concept in various optimized flavours. But process migration is not an easy job; it imposes a heavy burden and processing effort in order to track each process on the nodes. We propose a dynamic load balancing technique in which the application intelligently balances the load among different nodes, resulting in efficient use of the system with none of the overheads of process migration. It also provides a simple solution to the problem of load balancing in a heterogeneous environment.
Abstract: A topologically oriented neural network is very efficient for real-time path planning for a mobile robot in changing environments. When a recurrent neural network is used for this purpose, combining the partial differential equation of heat transfer with the distributed potential concept of the network, the obstacle-avoidance problem of trajectory planning for a moving robot can be efficiently solved. The related dimensional network represents the state variables and the topology of the robot's working space. In this paper two approaches to the problem are proposed. The first approach relies on the potential distribution of attraction around the moving target, acting as a unique local extremum in the net, with the gradient of the state variables directing the current flow toward the source of the potential heat. The second approach considers two sources, one attractive and one repulsive, to decrease the time of potential distribution. Computer simulations have been carried out to evaluate the performance of the proposed approaches.
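The heat-potential idea of the first approach can be sketched with a toy grid planner: relax a discrete heat equation with the target held at a fixed "hot" potential and obstacles held cold, then follow the steepest-ascent direction from the start. This is an illustrative sketch under those assumptions, not the paper's recurrent-network formulation:

```python
def plan_path(grid, start, goal, iters=200):
    """Toy potential-field planner on a grid (1 = obstacle, 0 = free)."""
    rows, cols = len(grid), len(grid[0])
    u = [[0.0] * cols for _ in range(rows)]
    for _ in range(iters):                      # heat-equation relaxation
        nxt = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                if (i, j) == goal:
                    nxt[i][j] = 1.0             # heat source at the goal
                elif grid[i][j] == 1:
                    nxt[i][j] = 0.0             # obstacles stay cold
                else:                           # average of in-bounds neighbours
                    nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                    vals = [u[a][b] for a, b in nbrs
                            if 0 <= a < rows and 0 <= b < cols]
                    nxt[i][j] = sum(vals) / len(vals)
        u = nxt
    # Follow the potential gradient (steepest ascent) toward the goal.
    path, cur = [start], start
    while cur != goal and len(path) < rows * cols:
        i, j = cur
        free = [(a, b) for a, b in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                if 0 <= a < rows and 0 <= b < cols and grid[a][b] == 0]
        cur = max(free, key=lambda p: u[p[0]][p[1]])
        path.append(cur)
    return path
```

Because the relaxed field has no local maxima other than the goal on connected free space, the ascent cannot get trapped, which is the property the potential-distribution approach exploits.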
Abstract: Over the past few years, a number of efforts have been exerted to build parallel processing systems that utilize the idle power of the LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in their use of available resources and are very difficult to install and set up.
In this paper, a multi-level web-based parallel processing system (MWPS) is designed (see appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, so the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components. These components are very easy to install.
Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may in turn delegate some of the load to other, lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
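The behaviour-based trust update described above might look like the following sketch; the neutral 0.5 starting trust, the ±0.1 adjustments and the field names are illustrative assumptions, not MWPS specifics:

```python
class BehaviorScheduler:
    """Sketch of a behaviour-based scheduler: nodes that meet their
    contract deadlines gain trust, nodes that miss them lose trust,
    and work is offered to the most trusted, least loaded node first."""

    def __init__(self):
        self.trust = {}   # node id -> trust score in [0, 1]
        self.load = {}    # node id -> number of active tasks

    def register(self, node):
        self.trust[node] = 0.5   # neutral starting trust (assumption)
        self.load[node] = 0

    def report(self, node, met_deadline):
        """Update trust from one completed contract, clamped to [0, 1]."""
        delta = 0.1 if met_deadline else -0.1
        self.trust[node] = min(1.0, max(0.0, self.trust[node] + delta))

    def pick_node(self):
        # Prefer high trust; break ties on the lighter current load.
        return max(self.trust, key=lambda n: (self.trust[n], -self.load[n]))
```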
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
Abstract: Multiparty voice over IP (MVoIP) systems allow a group of people to communicate freely with each other via the Internet, and have many applications such as online gaming, teleconferencing, online stock trading, etc. Peertalk is a peer-to-peer multiparty voice over IP (MVoIP) system that is more feasible than existing approaches such as p2p overlay multicast and coupled distributed processing. Since stream mixing and distribution are done by the peers, it is vulnerable to major security threats such as node misbehaviour, eavesdropping, Sybil attacks, denial of service (DoS), call tampering, man-in-the-middle attacks, etc. To thwart these security threats, a security framework called PEERTS (PEEred Reputed Trustworthy System for Peertalk) is implemented so that efficient and secure communication can be carried out between peers.
Abstract: This study reports the preparation of soft magnetic ribbons of Fe-based amorphous alloys using the single-roller melt-spinning technique. Ribbon width varied from 142 mm to 213 mm, with a thickness of approximately 22 μm ± 2 μm. The microstructure and magnetic properties of the ribbons were characterized by differential scanning calorimetry (DSC), X-ray diffraction (XRD), vibrating sample magnetometry (VSM), and electrical resistivity measurements (ERM). The dependence of the amorphous material properties on the cooling rate and nozzle pressure, which produce an uneven surface across the ribbon thickness, is investigated. Magnetic measurement results indicate that some regions of the ribbon exhibit good magnetic properties, with higher saturation induction and lower coercivity. However, due to the uneven surface of the 213 mm wide ribbon, its magnetic response is not uniformly distributed. To understand transformer magnetic performance, this study analyzes measurements of a three-phase 2 MVA amorphous-cored transformer. Experimental results confirm that the transformer with a ribbon width of 142 mm has better magnetic properties in terms of lower core loss, exciting power, and audible noise.
Abstract: A two-dimensional numerical simulation has been performed for incompressible and compressible fluid flow through microchannels in the slip flow regime. The Navier-Stokes equations have been solved in conjunction with Maxwell slip conditions to model the flow field associated with the slip flow regime. Wall roughness is simulated with triangular microelements distributed on the wall surfaces to study the effects of roughness on fluid flow. Various Mach and Knudsen numbers are used to investigate the effects of rarefaction as well as compressibility. It is found that rarefaction has a more significant effect on the flow field in microchannels with higher relative roughness. It is also found that compressibility has a more significant effect on the Poiseuille number as relative roughness increases. In addition, as in incompressible models, the increase in average fRe is more significant in low Knudsen number flows, but the increase in Poiseuille number due to relative roughness is sharper for compressible models. The numerical results have also been validated against available theoretical and experimental relations, and good agreement has been observed.
Abstract: UK breweries generate extensive by-products in the form of spent grain, slurry and yeast. Much of the spent grain is produced by large breweries and processed in bulk for animal feed. Spent brewery grains contain up to 20% protein dry weight and up to 60% fibre and are useful additions to animal feed. Bulk processing is economic and allows spent grain to be sold, providing an income to the brewery. A proportion of spent grain, however, is produced by small local breweries and is distributed more variably to farms or other users using intermittent collection methods. Such use is much less economic and may incur losses if transport costs are not carefully assessed. This study reports the economic returns of using wet brewery spent grain (WBSG) in animal feed, using the Co-product Optimizer Decision Evaluator model (Cattle CODE) developed by the University of Nebraska to predict performance and economic returns when by-products are fed to finishing cattle. The results indicated that the distance from brewery to farm had a significantly greater effect on the economics of using small-brewery spent grain, and that alternative uses beyond cattle feed may be important to develop.
Abstract: A typical definition of Computer Aided Diagnosis (CAD), found in the literature, is: a diagnosis made by a radiologist using the output of a computerized scheme for automated image analysis as a diagnostic aid. Often the expression Computer Aided Detection (CAD or CADe) is also found: this definition emphasizes the intent of CAD to support, rather than substitute for, the human observer in the analysis of radiographic images. In this article we illustrate the application of CAD systems and the aim of these definitions.
Commercially available CAD systems use computerized algorithms for identifying suspicious regions of interest. This paper describes general CAD systems as expert systems constituted of the following components: segmentation/detection, feature extraction, and classification/decision making.
As an example, this work presents the realization of a Computer-Aided Detection system that is able to assist the radiologist in identifying types of mammary tumour lesions. Furthermore, this prototype station uses a GRID configuration to work on a large distributed database of digitized mammographic images.
Abstract: One of the main problems of suspended cable structures is the change of initial shape under the action of non-uniform load. The problem can be solved by increasing the weight of the construction or by using prestressing, but these methods increase the material consumption of the suspended cable structure. Using a cable truss is another way to address the problem of shape change under non-uniform load. Cable trusses with vertical and inclined suspensions, a cross web and a single cable were analyzed as the main load-bearing structures of a suspension bridge. It was shown that using a cable truss reduces the vertical displacements by up to 32% in comparison with the single cable in the case of non-uniformly distributed load. In the case of uniformly distributed load, the single cable is preferable.
Abstract: Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretic techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism is the study of what happens when independent agents act selfishly and of how to control them to maximize overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. Comparisons are made against some well-known techniques such as greedy, branch and bound, game-theoretic auctions and genetic algorithms.
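As an illustration of the kind of auction-based mechanism the abstract describes (not necessarily the paper's exact design), a sealed-bid second-price (Vickrey) auction makes truthful bidding a dominant strategy for selfish agents, which is one classic way to encapsulate selfishness while steering toward a system-wide goal:

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction: the highest bidder
    wins but pays only the second-highest bid, so truthful bidding is
    a dominant strategy. Here a bid might represent, hypothetically,
    the access cost an agent's site would save by hosting a replica.

    bids: dict mapping agent id -> bid value.
    Returns (winning agent, price charged)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price
```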
Abstract: The purpose of Semantic Web research is to transform the Web from a linked document repository into a distributed knowledge base and application platform, thus allowing the vast range of available information and services to be exploited more efficiently. As a first step in this transformation, languages such as OWL have been developed. Although fully realizing the Semantic Web still seems some way off, OWL has already been very successful and has rapidly become a de facto standard for ontology development in fields as diverse as geography, geology, astronomy, agriculture, defence and the life sciences. The aim of this paper is to classify key concepts of the Semantic Web as well as to introduce a new practical approach which uses these concepts to outperform the World Wide Web.
Abstract: This paper presents a new method for the design of fuzzy observers for Takagi-Sugeno systems. The method is based on linear matrix inequalities (LMIs) and allows an H∞ constraint to be inserted into the design procedure. The speed of estimation can be tuned by specifying a decay rate of the observer closed-loop system. We also discuss the influence of parametric uncertainties on the stability of the output control system.