Abstract: Autoregressive Moving Average (ARMA) modeling is a parametric method of signal representation. It is suitable for problems in which the signal can be modeled by explicit, known source functions with a few adjustable parameters. Various methods have been suggested for determining the coefficients, among which are the Prony, Pade, autocorrelation and covariance methods and, most recently, the Artificial Neural Network technique. In this paper, the Artificial Neural Network (ANN) technique is compared with some known and widely accepted techniques. The comparison is based entirely on the values of the coefficients obtained. The results obtained show that the ANN is also accurate in computing the coefficients of an ARMA system.
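As a brief, hypothetical illustration of the classical autocorrelation approach mentioned above (not code from the paper), the sketch below estimates the coefficients of a pure AR(2) model, a special case of ARMA, by solving the Yule-Walker equations built from sample autocorrelations; the synthetic signal and its true coefficients (0.6, -0.2) are assumptions chosen for the demonstration.

```python
import random

def autocorr(x, lag):
    """Biased sample autocorrelation of x at the given lag."""
    n = len(x)
    return sum(x[t] * x[t + lag] for t in range(n - lag)) / n

def yule_walker_ar2(x):
    """Solve the 2x2 Yule-Walker system for AR(2) coefficients by Cramer's rule."""
    r0, r1, r2 = (autocorr(x, k) for k in range(3))
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r2 * r1) / det
    a2 = (r2 * r0 - r1 * r1) / det
    return a1, a2

# Synthetic AR(2) signal with known coefficients a1 = 0.6, a2 = -0.2
random.seed(0)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.6 * x[-1] - 0.2 * x[-2] + random.gauss(0, 1))

a1, a2 = yule_walker_ar2(x)   # estimates should be close to (0.6, -0.2)
```

With enough samples, the estimated coefficients converge to the generating values, which is the baseline against which an ANN-based estimator would be compared.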
Abstract: In this study, an experimental investigation was carried
out to fix CO2 into the electric arc furnace (EAF) reducing slag from
stainless steelmaking process under wet grinding. The slag was ground
by a vibrating ball mill with CO2 and pure water. The reaction
behavior was monitored with the constant-pressure method, and the
change of CO2 volume in the experimental system with grinding time
was measured. It was found that the CO2 absorption occurred as soon
as the grinding started. The CO2 absorption under wet grinding was
significantly larger than that under dry grinding. Generally, the
amount of CO2 absorption increased as the amount of water, the
amount of slag, the diameter of alumina ball and the initial pressure of
CO2 increased. However, the initial absorption rate was scarcely
influenced by the experimental conditions except for the initial CO2
pressure. According to this research, the CO2 reacted with the CaO
inside the slag to form CaCO3.
Abstract: In this paper, we rely on the story of the late British
weapons inspector David Kelly to illustrate how sensemaking can
inform the study of the ethics of suppression of dissent. Using
archival data, we reconstruct Dr. Kelly's key responsibilities as a
weapons inspector and government employee. We begin by clarifying
the concept of dissent and how it is a useful organizational process.
We identify the various ways that dissent has been discussed in the
organizational literature and reconsider the process of sensemaking.
We conclude that suppression of opinions that deviate from the
majority is part of the identity maintenance of the sensemaking
process. We illustrate the prevention of dissent in organizations
consists of a set of unsatisfactory trade-offs.
Abstract: Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion and, at higher severity, the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold-standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. Dermatologists usually assess the severity through their tactile sense, so direct contact between doctor and patient is required, and the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface, created by superimposing a smooth average (curve) surface with a triangular waveform. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by subtraction of the average surface from the rough surface to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height-map matrix. The roughness algorithm has been tested on 444 lesion models. From the roughness validation results, only 6 models cannot be accepted (percentage error greater than 10%); these errors occur due to the scanned image quality. The roughness algorithm is validated for roughness measurement on abrasive papers on a flat surface. Pearson's correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
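As a minimal one-dimensional sketch of the roughness pipeline described above (average-surface fitting, subtraction, average roughness), the hypothetical example below fits a degree-1 polynomial "average surface" to a height profile, subtracts it to obtain the elevation profile, and computes the average roughness Ra as the mean absolute deviation; the triangular profile data and polynomial degree are illustrative assumptions, not the paper's lesion models.

```python
def fit_line(xs, ys):
    """Least-squares straight line (degree-1 'average surface') y = m*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    return m, b

def average_roughness(xs, ys):
    """Ra: mean absolute deviation of the profile from the fitted average surface."""
    m, b = fit_line(xs, ys)
    return sum(abs(y - (m * x + b)) for x, y in zip(xs, ys)) / len(ys)

# Triangular-waveform-like profile superimposed on a flat average surface
xs = list(range(8))
ys = [0, 1, 1, 0, 0, 1, 1, 0]   # every point deviates 0.5 from the mean height

ra = average_roughness(xs, ys)  # → 0.5
```

In the paper the same idea is applied to a 2-D height-map matrix with a polynomial surface fit; the 1-D case simply makes the subtraction step easy to follow.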
Abstract: Corporate Social Responsibility (CSR) has become a
new trend in business governance. Because few research studies on
CSR have been published in Taiwanese academia, especially for
medical settings, we were interested in probing the relationship
between CSR and financial performance in medical settings in
Taiwan. The results illustrate that:
(1) a time delay effect exists with a lag between CSR effort and its
performance in the hospital foundation, (2) input into the internal
domains of CSR will be helpful to improve employee productivity in
the hospital foundation, and (3) input into the external domains of CSR
will be helpful in improving financial performance in the hospital
foundation. This study overviews CSR in the medical industry in
Taiwan and the relationship between CSR and financial performance.
Possible implications of the study results are discussed to help
organization managers translate the CSR concept into a business
strategy.
Abstract: The proper selection of the AC-side passive filter
interconnecting the voltage source converter to the power supply is
essential to obtain satisfactory performance of an active power filter
system. The use of the LCL-type filter has the advantage of
eliminating the high frequency switching harmonics in the current
injected into the power supply. This paper is mainly focused on
analyzing the influence of the interface filter parameters on the active
filtering performance. Some design aspects are pointed out. Thus,
the design of the AC interface filter starts from transfer functions by
imposing the filter performance requirements, namely significant
attenuation of the switching current harmonics without affecting the
harmonics to be compensated. A Matlab/Simulink model of the entire
active filtering system including a concrete nonlinear load has been
developed to examine the system performance. It is shown that a
gamma LC filter could accomplish the attenuation requirement of the
current provided by the converter. Moreover, the existence of an optimal
value of the grid-side inductance which minimizes the total harmonic
distortion factor of the power supply current is pointed out.
Nevertheless, a small converter-side inductance and a damping
resistance in series with the filter capacitance are absolutely needed
in order to keep the ripple and oscillations of the current at the
converter side within acceptable limits. The effect of change in the
LCL-filter parameters is evaluated. It is concluded that good active
filtering performance can be achieved with small values of the
capacitance and converter-side inductance.
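The attenuation role of the LCL filter can be illustrated numerically. The sketch below evaluates the commonly used converter-voltage-to-grid-current transfer function of a damped LCL filter, i2(s)/vi(s) = (Rd*C*s + 1) / (L1*L2*C*s^3 + Rd*C*(L1+L2)*s^2 + (L1+L2)*s), at a fundamental and a switching-range frequency; the component values are generic textbook-style assumptions, not the parameters used in the paper.

```python
import math

# Assumed parameters: converter-side L1, grid-side L2, filter capacitance C,
# damping resistance Rd in series with C (all illustrative values).
L1, L2, C, Rd = 2e-3, 0.5e-3, 10e-6, 1.0

def lcl_gain(freq_hz):
    """|i2(jw)/vi(jw)| of the damped LCL filter (grid voltage short-circuited)."""
    s = 1j * 2 * math.pi * freq_hz
    num = Rd * C * s + 1
    den = L1 * L2 * C * s**3 + Rd * C * (L1 + L2) * s**2 + (L1 + L2) * s
    return abs(num / den)

# Resonance frequency of the undamped filter
f_res = math.sqrt((L1 + L2) / (L1 * L2 * C)) / (2 * math.pi)

gain_fund = lcl_gain(50)       # fundamental: passed with little attenuation
gain_sw = lcl_gain(10_000)     # switching-range harmonic: strongly attenuated
```

For these values the switching-range gain is several orders of magnitude below the fundamental gain, which is exactly the behavior the filter design imposes, while the damping resistance tames the resonance near f_res.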
Abstract: Artificial Intelligence (AI) methods are increasingly being used for problem solving. This paper concerns using AI-type learning machines for the power quality problem, a problem of general interest to power systems, which must provide quality power to all appliances. Electrical power of good quality is essential for the proper operation of electronic equipment such as computers and PLCs. Malfunction of such equipment may lead to loss of production or disruption of critical services, resulting in huge financial and other losses. It is therefore necessary that critical loads be supplied with electricity of acceptable quality. Recognizing the presence of any disturbance and classifying it into a particular type is the first step in combating the problem. In this work, two classes of AI methods for power quality data mining are studied: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). We show that SVMs are superior to ANNs in two critical respects: SVMs train and run an order of magnitude faster, and SVMs give higher classification accuracy.
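As a minimal illustration of an ANN-type learner of the family compared in the paper, the sketch below trains a single-neuron perceptron on invented two-dimensional disturbance features (e.g., an rms-deviation and a duration measure for "sag" vs. "swell" events). It is a toy stand-in under stated assumptions, not the paper's networks; an SVM trained on the same features would play the other side of the reported comparison.

```python
# Toy (assumed) 2-D features per event: (rms deviation, duration)
X = [(0.2, 0.1), (0.3, 0.2), (0.25, 0.15),   # class -1: "sag"
     (0.8, 0.9), (0.9, 0.8), (0.85, 0.95)]   # class +1: "swell"
Y = [-1, -1, -1, 1, 1, 1]

def train_perceptron(data, labels, epochs=100):
    """Classic perceptron rule: on each mistake, move the separating
    hyperplane toward the misclassified point."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

w, b = train_perceptron(X, Y)   # converges: the toy data are separable
```

On linearly separable data the perceptron is guaranteed to converge; the SVM's advantage reported above shows up on realistic, noisy disturbance data, where margin maximization pays off.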
Abstract: If science is supposed to gain greater social
relevance and acceptance, researchers must not only relate to
the broader public, but also promote exchange within the
ivory tower itself. The latter process has been under way
successfully for a number of years in the form of
transdisciplinary research initiatives. What is still lacking is a
broad debate about the necessity to look around properly and
face up to opposing views on one and the same topic within
our own discipline.
Abstract: This paper presents a formalisation of the different existing code mutation techniques (polymorphism and metamorphism) by means of formal grammars. While very few theoretical results are known about the detection complexity of viral mutation techniques, we exhaustively address this critical issue by considering the Chomsky classification of formal grammars. This enables us to determine which families of code mutation techniques are likely to be detected or which, on the contrary, are bound to remain undetected. As an illustration we then present, on a formal basis, a proof-of-concept metamorphic mutation engine denoted PB MOT, whose detection has been proven to be undecidable.
Abstract: Knowledge sharing enables information or
knowledge to be transmitted from one source to another. This paper
demonstrates the need for an online book catalogue which
can be used to facilitate disseminating information on the textbooks
used in the university. This project aims to give access to the students
and lecturers to the list of books in the bookstore and at the same
time to allow book reviewing without having to visit the bookstore
physically. Research is carried out within boundaries that account for
the current process of new book purchasing, the current system
used by the bookstore and the current process lecturers go through
when reviewing textbooks. A questionnaire used to gather the
requirements is distributed to 100 students and 40 lecturers.
This project has enabled the improvement of a manual process to be
carried out automatically, through a web based platform. It is shown
based on the user acceptance survey carried out that the target
groups found this web service feasible to implement in
Universiti Teknologi PETRONAS (UTP), and they have shown
positive signs of interest in utilizing it in the future.
Abstract: This article discusses questions concerning the creation of small packet networks for energy companies using high-voltage power line carrier (PLC) equipment with IP traffic transmission functionality. The main idea is to create converged PLC links between substations and dispatching centers in which packet data and voice are transmitted in one data flow. The article contains a description of the basic conception of the network, an evaluation of voice traffic transmission parameters, and a discussion of header compression techniques in relation to PLC links. The results show that convergent packet PLC links can be very useful in the construction of small packet networks between substations in remote locations, such as deposits or sparsely populated areas.
Abstract: The main goal of this work is to propose a way for
combined use of two nontraditional algorithms by solving topological
problems on telecommunications concentrator networks. The
algorithms suggested are the Simulated Annealing algorithm and the
Genetic Algorithm. The Simulated Annealing algorithm unifies
the well-known local search algorithms. In addition, Simulated
Annealing allows the acceptance of moves in the search space which
lead to decisions with higher cost, in order to attempt to escape any
local minima encountered. The Genetic Algorithm is a heuristic approach
which is used in wide areas of optimization work. In recent
years this approach has also been widely applied in
telecommunications network planning. In order to solve more or
less complex planning problems, it is important to find the most
appropriate parameters for initializing the algorithm.
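The Simulated Annealing component can be sketched on a toy concentrator-placement instance: choose which candidate sites host concentrators so that the total terminal-to-concentrator distance is minimized. The coordinates, cooling schedule and swap move below are illustrative assumptions, not the authors' parameterisation.

```python
import math
import random

random.seed(42)

SITES = [(0, 0), (4, 0), (2, 3), (6, 5), (1, 5)]          # candidate sites
TERMINALS = [(0, 1), (3, 0), (2, 2), (5, 5), (6, 4), (1, 4)]

def cost(chosen):
    """Total distance from each terminal to its nearest chosen concentrator."""
    return sum(min(math.dist(t, SITES[i]) for i in chosen) for t in TERMINALS)

def anneal(k=2, steps=2000, t0=5.0, alpha=0.995):
    current = random.sample(range(len(SITES)), k)
    best, best_cost = list(current), cost(current)
    temp = t0
    for _ in range(steps):
        # Neighbour move: swap one chosen site for one unchosen site.
        out_i = random.randrange(k)
        unused = [i for i in range(len(SITES)) if i not in current]
        neighbour = list(current)
        neighbour[out_i] = random.choice(unused)
        delta = cost(neighbour) - cost(current)
        # Accept improving moves always; worsening moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
        if cost(current) < best_cost:
            best, best_cost = list(current), cost(current)
        temp *= alpha               # geometric cooling
    return best, best_cost

best_sites, best_cost = anneal()
```

The acceptance of cost-increasing moves at high temperature is exactly the mechanism described above for escaping local minima; a Genetic Algorithm would instead evolve a population of such site selections.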
Abstract: The past 20 years of dentistry were a period of
transition from a communist to a market economy, but Romanian
doctors have insufficient management knowledge. Recently, the need
for modern management has increased due to the appearance of new
technologies and superior materials, as well as patients' demands.
The research goal is to increase efficiency by evaluating dental
medical office cost categories in real pricing procedures.
Empirical research is based on a guided study that includes
information about the association between categories of cost
perception and therapeutic procedures commonly used in dental
offices.
Based on the results obtained, which identify all the labour that
makes up an established procedure, costs were determined for each
procedure.
Financial evaluation software was created with the main functions of
introducing and maintaining patient records, treatments and
appointments made, procedure costs, and monitoring office
productivity.
We believe that the study results can significantly improve the
financial management of dental offices, increasing the effectiveness
and quality of services.
Abstract: Components of a software system may be related in a
wide variety of ways. These relationships need to be represented in
the software architecture in order to develop quality software. In practice, software architecture is immensely challenging, strikingly
multifaceted, extravagantly domain based, perpetually changing,
rarely cost-effective, and deceptively ambiguous. This paper analyses
relations among the major components of software systems and
argues for using several broad categories for software architecture for
assessment purposes: strongly adequate, weakly adequate and
functionally adequate software architectures among other categories.
These categories are intended for formative assessments of
architectural designs.
Abstract: Approaches such as linear programming, network
modeling, greedy heuristics and decision support systems are
well-known in solving the irregular airline operation problem. This paper
presents an alternative approach based on Multi Objective Micro Genetic
Algorithm. The aim of this research is to introduce the concept of Multi
Objective Micro Genetic Algorithm as a tool to solve irregular airline
operation, combine and reroute problem. The experimental results
indicated that the model could obtain optimal solutions within a few
seconds.
Abstract: Rooted in the study of the social functioning of space in architecture, Space Syntax (SS) and the more recent Network Pattern (NP) research demonstrate the 'spatial structures' of the city, i.e. the hierarchical patterns of streets, junctions and alley ends. Applying SS and NP models, planners can conceptualize the real city's patterns. Although both models yield the optimal paths of the city, their underpinning displays of the city's spatial configuration differ. The Axial Map analyzes the topological, non-distance-based connectivity structure, whereas the Central-Node Map and the Shortcut-Path Map, in contrast, analyze the metrical, distance-based structures. This research contrasts and combines them to understand various forms of the city's structures. It concludes that, while they reveal different spatial structures, the Space Syntax and Network Pattern urban models support each other. Combined, they simulate the global access and the locally compact structures, namely the central nodes and the shortcuts, of the city.
Abstract: Early detection of dementia by testing the spatial
memory can be applied using a virtual environment. This paper
presents guidelines on how to design a virtual environment
specifically for elderly in early detection of dementia. The specific
design needs to be considered because the effectiveness of the
technology relies on the ability of the end user to use it. The primary
goal of these guidelines is to promote accessibility. Based on these
guidelines, a virtual simulation was developed and evaluated. The
results on usability, acceptance and satisfaction, tested on
young (control group) and elderly participants, indicate that these
guidelines are reliable and useful for elderly people.
Abstract: The introduction of haptic elements into graphical user interfaces is becoming more widespread. Since haptics are being introduced rapidly into computational tools, investigating how these models affect Human-Computer Interaction would help define how to integrate and model new modes of interaction. The interest of this paper is to discuss and investigate the issues surrounding Haptic and Graphic User Interface (GUI) designs as separate systems, as well as to understand how they work in tandem. The development of these systems is explored from a psychological perspective, based on how usability is addressed through learning and affordances, as defined by J.J. Gibson. Haptic design can be a powerful tool, aiding intuitive learning. The problem discussed within the text is how haptic interfaces can be integrated within a GUI without a sense of frivolity. Juxtaposing haptics and graphic user interfaces raises issues of motivation; GUIs tend to have a performatory process, while haptic interfaces use affordances to learn tool use. In a deeper view, it is noted that two modes of perception, foveal and ambient, dictate perception. These two modes were once thought to work in tandem; however, it has been discovered that these processes work independently of each other. Ambient modes interpret orientation in space, which provides for posture, locomotion, and motor skills, while variations of the foveal sensory information instruct perceptions of object-task performance. It is contended here that object-task performance is a key element in the use of haptic interfaces, because exploratory learning uses affordances in order to use an object without mediating the experience cognitively. It is a direct experience that, through iteration, can lead to skill-sets. It is also indicated that object-task performance will not work as efficiently without the use of exploratory or kinesthetic learning practices. Therefore, object-task performance is not explored in GUIs as congruently as it is practiced in haptic interfaces.
Abstract: This paper describes a new approach to classification
using genetic programming. The proposed technique consists of
genetically coevolving a population of non-linear transformations on
the input data to be classified, mapping them to a new space with a
reduced dimension in order to obtain maximum inter-class
discrimination. The classification of new samples is then performed
on the transformed data, and thus becomes much easier. Contrary to
existing GP-classification techniques, the proposed one uses a
dynamic repartition of the transformed data into separated intervals;
the efficacy of a given interval repartition is handled by the fitness
criterion, with maximum class discrimination. Experiments were
first performed using Fisher's Iris dataset, and then the KDD-99
Cup dataset was used to study the intrusion detection and
classification problem. The results obtained demonstrate that the
proposed genetic approach outperforms the existing GP-classification
methods [1], [2] and [3], and gives very acceptable results compared
to other existing techniques proposed in [4], [5], [6], [7] and [8].
Abstract: Most real-world systems can be expressed formally
as a set of nonlinear algebraic equations. As applications grow, the
size and complexity of these equations also increase. In this work, we
highlight the key concepts in using the homotopy analysis method
as a methodology for constructing efficient iteration formulas for
solving nonlinear equations. The proposed method is experimentally
characterized according to a set of determined parameters which
affect the systems. The experimental results show the potential and
limitations of the new method and imply directions for future work.
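As a hedged illustration only: the sketch below implements a basic Newton homotopy (continuation) for a single nonlinear equation, which conveys the general homotopy idea of deforming an easy problem into the target one. The paper's homotopy analysis method constructs its iteration formulas differently; the example equation, step schedule and starting point here are assumptions.

```python
def solve_by_homotopy(f, df, x0, steps=50, newton_iters=5):
    """Track the Newton homotopy H(x, t) = f(x) - (1 - t) * f(x0) from
    t = 0 (solved trivially by x0) to t = 1 (the target problem f(x) = 0),
    applying a few Newton correction steps at each value of t."""
    fx0 = f(x0)
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        target = (1 - t) * fx0           # H(x, t) = 0  <=>  f(x) = target
        for _ in range(newton_iters):    # Newton correction at this t
            x -= (f(x) - target) / df(x)
    return x

# Classic example equation: x^3 - 2x - 5 = 0, real root near 2.0946
f = lambda x: x**3 - 2 * x - 5
df = lambda x: 3 * x**2 - 2
root = solve_by_homotopy(f, df, x0=2.0)
```

Because each continuation step only perturbs the problem slightly, the Newton corrections start close to the current solution branch, which is what makes homotopy-based iterations robust compared with a cold-started iteration.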