Abstract: Quality evaluation of the urban environment is an integral
part of efficient urban planning and management. The development
of fuzzy set theory (FST) and its introduction into the field of urban
studies attempt to incorporate gradual variation and avoid loss of
information. Urban environmental quality assessment pertains to the
interpretation and forecasting of urban environmental quality
according to national regulations on permitted contaminant levels,
for the sake of protecting human health and the living environment.
A strategic motor vehicle control policy has to be proposed to
mitigate air pollution in the city. There is no well-defined guideline
for the assessment of urban air pollution, and no systematic study
has been reported so far for Indian cities. The methodology adopted
may be useful in similar cities of India. Remote sensing and GIS can
play a significant role in mapping air pollution.
Abstract: Game theory can be used to analyze conflicting interests
in the field of information hiding. In this paper, a two-phase game
is used to model the embedder-attacker system and to analyze the
limits of the hiding capacity of embedding algorithms: the embedder
minimizes the expected damage and the attacker maximizes it. In the
system, the embedder first spends its resource to build embedded
units (EU) and inserts the secret information into the EU. The
attacker then distributes its resource evenly over the attacked EU.
The expected equilibrium damage, which is the maximum attainable
damage from the attacker's point of view and the minimum from the
embedder's, is evaluated for the case where the attacker attacks a
subset of all the EU. Furthermore, the optimal equilibrium capacity
of hiding information is calculated through the optimal number of EU
carrying the embedded secret information. Finally, illustrative
examples of the optimal equilibrium capacity are presented.
Abstract: Authentication plays a vital role in many secure
systems. Most of these systems require the user to log in with a
secret password or passphrase before entering the system. This is to
ensure that all valuable information is kept confidential, while also
guaranteeing its integrity and availability. However, to achieve this
goal, users are required to memorize high-entropy passwords or
passphrases. Unfortunately, users often find it difficult to remember
such meaningless strings of data. This paper presents a new scheme
which assigns a weight to each personal question given to the user
for revealing the encrypted secrets or password. The focus of this
scheme is to offer fault tolerance to users by allowing them to
forget the answers to a subset of questions and still recover the
secret and achieve successful authentication. A comparison of the
security levels of the weight-based and unweighted secret recovery
schemes is also discussed. The paper concludes with a few areas that
require further investigation in this research.
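The weighted fault-tolerant recovery idea can be sketched as follows. This is a minimal illustration, not the paper's actual construction: the question weights, the threshold, and the recovery rule are assumptions.

```python
def can_recover(answers, correct, weights, threshold):
    """Recover the secret if the total weight of correctly
    answered questions reaches the threshold (illustrative rule)."""
    score = sum(w for a, c, w in zip(answers, correct, weights) if a == c)
    return score >= threshold
```

With weights 5, 3 and 2 and threshold 7, a user who forgets only the weight-2 answer still authenticates (5 + 3 = 8), while forgetting the weight-5 answer fails (3 + 2 = 5).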
Abstract: In this paper, a Gaussian multiple-input multiple-output multiple-eavesdropper (MIMOME) channel is considered, where a transmitter communicates with a receiver in the presence of an eavesdropper. We present a technique for determining the secrecy capacity of the multiple-input multiple-output (MIMO) channel under Gaussian noise. We transform the degraded MIMOME channel into multiple single-input multiple-output (SIMO) Gaussian wire-tap channels and then use a scalar approach to convert them into two equivalent multiple-input single-output (MISO) channels. The secrecy capacity model is then developed for the condition where the channel state information (CSI) of the main channel only is known to the transmitter. The results show that secret communication is possible when the eavesdropper's channel noise is greater than a cutoff noise level. The outage probability of the secrecy capacity and the effect of fading are also analyzed.
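In the scalar Gaussian wire-tap case, the secrecy capacity reduces to the positive difference of the two channel capacities, which makes the cutoff-noise behavior easy to see. A minimal numeric sketch (the power and noise values are illustrative assumptions):

```python
import math

def secrecy_capacity(P, N_main, N_eve):
    """Scalar Gaussian wire-tap channel:
    Cs = [log2(1 + P/N_main) - log2(1 + P/N_eve)]^+ (bits/use)."""
    return max(0.0, math.log2(1 + P / N_main) - math.log2(1 + P / N_eve))
```

With equal transmit power, the capacity is positive only when the eavesdropper's noise exceeds the main channel's noise, which is the cutoff level in this scalar case.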
Abstract: Binary phase-only filter digital watermarking embeds
the phase information of the discrete Fourier transform of the
image into the corresponding magnitudes for better image authentication.
This paper proposes an approach to implementing watermark
embedding by quantizing the magnitudes, discussing how to regulate
the quantization steps based on the frequencies of the magnitude
coefficients of the embedded watermark, and how to embed the
watermark with low-frequency quantization. Theoretical analysis and
simulation results show that the flexibility, security, watermark
imperceptibility and detection performance of binary phase-only
filter digital watermarking can be effectively improved with
quantization-based watermark embedding, and that robustness against
JPEG compression is also increased to some extent.
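The magnitude-quantization step can be illustrated with a generic quantization index modulation (QIM) sketch; the step size and the even/odd parity convention below are assumptions, not the paper's exact embedding rule.

```python
def embed_bit(magnitude, bit, delta):
    """Quantize a DFT magnitude with step delta so that the parity
    of its quantization index encodes one watermark bit."""
    q = round(magnitude / delta)
    if q % 2 != bit:          # shift to the index matching the bit's parity
        q += 1
    return q * delta

def extract_bit(magnitude, delta):
    """Recover the bit from the quantization index parity."""
    return round(magnitude / delta) % 2
```

A larger delta improves robustness (e.g. against JPEG compression) at the cost of a larger distortion of the magnitude, which is why step regulation by frequency matters.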
Abstract: As network-based technologies become
omnipresent, the demand to secure networks and systems against
threats increases. One of the effective ways to achieve higher
security is through the use of intrusion detection systems (IDS),
which are software tools that detect anomalies in a computer or
network. In this paper, an IDS has been developed using an improved
machine-learning-based algorithm, the Locally Linear Neuro-Fuzzy
Model (LLNF), for classification; this model was originally used for
system identification. A key technical challenge in IDS and LLNF
learning is the curse of high dimensionality. Therefore, a feature
selection phase is proposed which is applicable to any IDS. By
investigating the use of three feature selection algorithms in this
model, it is shown that adding a feature selection phase reduces the
computational complexity of our model. Feature selection algorithms
require a feature goodness measure; the use of both a linear and a
non-linear measure, the linear correlation coefficient and mutual
information respectively, is investigated.
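The two feature goodness measures named above can be sketched directly; the discrete mutual information estimate below assumes categorical feature and label columns, which is an illustrative simplification.

```python
import math
from collections import Counter

def pearson(x, y):
    """Linear correlation coefficient between a feature and the label."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete columns,
    estimated from empirical joint and marginal frequencies."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

The correlation coefficient captures only linear dependence, while mutual information also scores non-linear relationships, which is the contrast the paper investigates.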
Abstract: In this paper, we propose an efficient hierarchical DNA
sequence search method that improves the search speed while keeping
the accuracy constant. For a given query DNA sequence, a fast local
search method using histogram features is first used as a filtering
mechanism before scanning the sequences in the database. An
overlapping processing step is newly added to improve the robustness
of the algorithm. A large number of DNA sequences with low
similarity are excluded from the later search stages. The
Smith-Waterman algorithm is then applied to each remaining sequence.
Experimental results using GenBank sequence data show that the
proposed method, combining histogram information with the
Smith-Waterman algorithm, is more efficient for DNA sequence search.
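The two-stage idea can be sketched as follows; the histogram distance, the alignment scoring parameters, and the cutoff are illustrative assumptions, not the paper's exact settings.

```python
from collections import Counter

def histogram_distance(a, b):
    """L1 distance between nucleotide count histograms (cheap pre-filter)."""
    ca, cb = Counter(a), Counter(b)
    return sum(abs(ca[n] - cb[n]) for n in "ACGT")

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Classic local-alignment score via dynamic programming."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

def search(query, database, cutoff):
    """Filter by histogram distance, then align only the survivors."""
    candidates = [s for s in database if histogram_distance(query, s) <= cutoff]
    return max(candidates, key=lambda s: smith_waterman(query, s), default=None)
```

The histogram filter is linear in sequence length, so most low-similarity sequences are discarded before the quadratic-time Smith-Waterman step runs.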
Abstract: Re-entrant scheduling is an important search problem
with many constraints in the flow shop. In the literature, a number
of approaches have been investigated, from exact methods to
meta-heuristics. This paper presents a genetic algorithm that encodes
the problem as multi-level chromosomes to reflect the dependent
relationship between re-entrant possibility and resource consumption.
The novel encoding preserves the complete information of the data
and speeds up convergence to near-optimal solutions. To test the
effectiveness of the method, it has been applied to the
resource-constrained re-entrant flow shop scheduling problem.
Computational results show that the proposed GA performs better than
the simulated annealing algorithm in terms of makespan.
Abstract: The purpose of this study is to explore how the emotions at the moment of conflict escalation are expressed nonverbally and how they can be detected by the parties involved in the conflict situation. The study consists of two parts. The first part starts with the definitions of "conflict" and "nonverbal communication" and includes an analysis of emotions and the types of emotions which may lead to conflict escalation. Four types of emotions and emotion constructs are analyzed, particularly fear, anger, guilt and frustration. The second part of the study analyzes the general role of nonverbal behavior in interaction and communication, and what information it may give during communication to the person who sends or receives those signals. The study finishes with an analysis of the nonverbal expression of the analyzed emotions and of how it can be used during interaction.
Abstract: Collaborative networked learning (hereafter CNL)
was first proposed by Charles Findley in his work "Collaborative
networked learning: online facilitation and software support" as part
of instructional learning for the future knowledge worker. His
premise was that, through electronic dialogue, learners and experts
could interactively communicate within a contextual framework to
resolve problems and/or to improve product or process knowledge.
Collaborative learning has always been at the forefront of
educational technology and pedagogical research, but not in the
mainstream of operations management. As a result, there is a large
disparity in the study of CNL, and little is known about the
antecedents of network collaboration and information sharing among
diverse employees in the manufacturing environment. This paper
presents a model to bridge the gap between theory and practice. The
objective is that manufacturing organizations will be able to
accelerate organizational learning and information sharing through
various collaborative learning practices.
Abstract: One of the problems in transformer fault diagnosis
based on dissolved gas analysis is the mismatch between the
diagnosis results of different standards and the real world. In this
paper, the results of the different standards are analyzed using
fuzzy logic, and the outcome is compared with empirical tests. The
comparison between the suggested method and existing methods
indicates the capability of the suggested method for on-line fault
diagnosis of transformers. In addition, in some cases the existing
standards are not able to diagnose the fault; in these cases, the
presented method has the potential to diagnose it. Data from three
transformers are used to show the capability of the suggested method
in diagnosing faults. The results validate the capability of the
presented method in transformer fault diagnosis.
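A fuzzy treatment of one diagnostic gas ratio can be sketched as follows; the membership shapes, the choice of the CH4/H2 ratio, and the class boundaries are illustrative assumptions, not the thresholds of any particular standard.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_degree(ch4_h2):
    """Membership degrees of a CH4/H2 gas ratio in two hypothetical
    fuzzy fault classes (boundaries are illustrative only)."""
    return {"partial_discharge": tri(ch4_h2, -0.1, 0.0, 0.1),
            "thermal_fault":     tri(ch4_h2, 0.8, 1.5, 3.0)}
```

Because the memberships are gradual, a ratio near a class boundary yields partial degrees in both classes instead of the hard, sometimes contradictory verdicts of crisp standards.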
Abstract: This paper proposes a new model to support user
queries on postgraduate research information at Universiti Tenaga
Nasional. The ontology to be developed will contribute towards
shareable and reusable domain knowledge that makes knowledge
assets intelligently accessible to both people and software. This work
adapts a methodology for ontology development based on the
framework proposed by Uschold and King. The concepts and
relations in this domain are represented in a class diagram using the
Protégé software. The ontology will be used to support a
menu-driven query system for assisting students in searching for
information related to postgraduate research at the university.
Abstract: Deadlock detection is one of the important
problems in distributed systems, and different solutions have been
proposed for it. Among the many deadlock detection algorithms,
edge-chasing has been the most widely used. In an edge-chasing
algorithm, a special message called a probe is created and sent along
dependency edges. When the initiator of a probe receives the probe
back, the existence of a deadlock is revealed. But these algorithms
are not problem-free: one of the problems associated with them is
that they may fail to detect some deadlocks and may even identify
false deadlocks. A key point not mentioned in the literature is how a
process that is waiting to obtain its required resources, and whose
execution is blocked, can actually respond to probe messages in the
system. Also, the question of which process should be victimized in
order to achieve better performance when one process lies on multiple
cycles in the system has received little attention. In this paper,
one of the basic concepts of the operating system, the daemon, is
used to solve these problems. The proposed algorithm sends probe
messages to the mandatory daemons and collects enough information to
effectively identify and resolve multi-cycle deadlocks in distributed
systems.
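The basic probe mechanism (without the daemon extension proposed here) can be sketched over a wait-for graph; the process identifiers and graph shape are illustrative.

```python
def edge_chasing_deadlock(wait_for, initiator):
    """Propagate a probe along wait-for edges; a deadlock involving
    the initiator exists iff the probe returns to it.
    `wait_for[p]` lists the processes p is blocked on."""
    visited = set()
    frontier = list(wait_for.get(initiator, []))
    while frontier:
        p = frontier.pop()
        if p == initiator:
            return True          # probe came back: cycle through initiator
        if p not in visited:
            visited.add(p)
            frontier.extend(wait_for.get(p, []))
    return False
```

In a real distributed system each process forwards the probe asynchronously; the blocked-process problem discussed above is precisely that such a process may be unable to execute this forwarding step itself.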
Abstract: New graph similarity methods are proposed in this work with the aim of refining the chemical information extracted from molecule matching. For this purpose, data fusion of the isomorphic and non-isomorphic subgraphs into a new similarity measure, the Approximate Similarity, was carried out by several approaches. The application of the proposed method to the development of quantitative structure-activity relationships (QSAR) has provided reliable tools for predicting several pharmacological parameters: the binding of steroids to the corticosteroid-binding globulin receptor, the activity of benzodiazepine receptor compounds, and blood-brain barrier permeability. Acceptable results were obtained for the models presented here.
Abstract: In this paper a new approach to prioritizing urban planning projects in an efficient and reliable way is presented. It is based on environmental pressure indices and multicriteria decision methods. The paper introduces a rigorous method, of acceptable complexity, for rank-ordering urban development proposals according to their environmental pressure. The technique combines the use of Environmental Pressure Indicators, the aggregation of the indicators into an Environmental Pressure Index by means of the Analytic Network Process (ANP) method, and the interpretation of the information obtained from the experts during the decision-making process. The ANP method allows the aggregation of the experts' judgments on each of the indicators into one Environmental Pressure Index. In addition, ANP is based on utility ratio functions, which are the most appropriate for the analysis of uncertain data such as experts' estimations. Finally, unlike other multicriteria techniques, ANP allows the decision problem to be modelled using the relationships among dependent criteria. The method has been applied to the proposal for urban development of La Carlota airport in Caracas (Venezuela). The Venezuelan Government would like to see a recreational project developed on the abandoned area that would mean a significant improvement for the capital. Three options are currently under evaluation: a health club, a residential area and a theme park. The participating experts agreed that the method proposed in this paper is useful and an improvement over traditional techniques such as environmental impact studies and life-cycle analysis. They find the results obtained coherent, the process sufficiently rigorous and precise, and the use of resources significantly less than in other methods.
Abstract: One of the major challenges in the Information
Retrieval field is handling the massive amount of information
available to Internet users. Existing ranking techniques and strategies
that govern the retrieval process fall short of expected accuracy.
Often relevant documents are buried deep in the list of documents
returned by the search engine. In order to improve retrieval
accuracy, we examine the effect of language on the retrieval
process, and then propose a solution for a more user-biased,
user-centric relevance of retrieved data. The results demonstrate
that using indices based on variations of the same language enhances
the accuracy of search engines for individual users.
Abstract: Automatic reusability appraisal can be helpful in
evaluating the quality of developed or developing reusable software
components and in identifying reusable components in existing legacy
systems, which can save the cost of developing software from
scratch. But the issue of how to identify reusable components in
existing systems has remained relatively unexplored. In this paper,
we present a two-tier approach that studies the structural attributes
of a component as well as its usability or relevancy to a particular
domain. Latent semantic analysis is used for the feature vector
representation of the various software domains. It exploits the fact
that FeatureVector codes can be seen as documents containing terms
(the identifiers present in the components), so text modeling
methods that capture co-occurrence information in low-dimensional
spaces can be used. Further, we devised a neuro-fuzzy hybrid
inference system, which takes structural metric values as input and
calculates the reusability of the software component. A decision
tree algorithm is used to decide the initial set of fuzzy rules for
the neuro-fuzzy system. The results obtained are convincing enough
to propose the system for the economical identification and retrieval
of reusable software components.
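The latent semantic analysis step can be sketched with a truncated SVD of a term-by-document count matrix; the tiny matrix and the latent dimension k below are illustrative, and NumPy is assumed to be available.

```python
import numpy as np

def lsa_doc_vectors(term_doc, k):
    """Truncated SVD of a term-by-document count matrix: returns one
    k-dimensional latent vector per document (column)."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T

def cosine(u, v):
    """Cosine similarity between two latent vectors."""
    return float(u @ v) / float(np.linalg.norm(u) * np.linalg.norm(v))
```

Documents (here, components' identifier sets) that share co-occurring terms end up close in the latent space, which is what lets a component be matched to a domain.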
Abstract: Fiber optic sensor technology offers the possibility of
sensing different parameters, such as strain, temperature and
pressure, in harsh environments and remote locations. These kinds of
sensors modulate some feature of the light wave in an optical fiber,
such as intensity or phase, or use the optical fiber as a medium for
transmitting the measurement information.
The advantages of fiber optic sensors over conventional electrical
ones make them popular in different applications, and nowadays they
are considered a key component in improving industrial processes,
quality control systems, medical diagnostics, and the prevention and
control of general process abnormalities.
This paper is an introduction to fiber optic sensor technology and
some of the applications that make this branch of optics, which is
still in its infancy, an interesting field.
Abstract: As open innovation has received increasing attention
in the management of innovation, the importance of identifying
potential partnerships is growing. This paper suggests a methodology
for identifying interested parties, acting as an innovation
intermediary, to enable open innovation with a patent network. To
implement the methodology, multi-stage patent citation analysis,
such as bibliographic coupling, and information visualization
methods, such as keyword vector mapping, are utilized. The paper's
contribution is that it can present meaningful collaboration
keywords to the potential partners identified in the network, since
not only citation information but also patent textual information is
used.
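Bibliographic coupling, one stage of the citation analysis above, can be sketched as a shared-reference count; the patent identifiers and reference sets in the example are hypothetical.

```python
def bibliographic_coupling(citations):
    """Coupling strength of each patent pair = number of references
    the two patents cite in common. `citations` maps a patent id to
    the set of references it cites."""
    ids = list(citations)
    return {(a, b): len(citations[a] & citations[b])
            for i, a in enumerate(ids) for b in ids[i + 1:]}
```

Patent pairs with high coupling strength form the edges of the patent network from which potential partners are read off; keyword vector mapping then labels those edges with collaboration topics.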
Abstract: This paper presents and discusses a model that allows local segmentation using statistical information of a given image. It is based on the Chan-Vese model, curve evolution, partial differential equations and the binary level set method. The proposed model uses the piecewise-constant approximation of the Chan-Vese model to compute a Signed Pressure Force (SPF) function, which attracts the curve to the true object boundaries. The implemented model is used to extract weld defects from weld radiographic images, with the aim of calculating the perimeters and surfaces of those weld defects; encouraging results are obtained on synthetic and real radiographic images.
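The SPF computation from the piecewise-constant means can be sketched in one dimension; the normalized form below is the definition commonly used in SPF-based active contour models, shown here on illustrative data rather than on the paper's radiographic images.

```python
def spf(image, inside_mask):
    """Signed Pressure Force from the Chan-Vese piecewise-constant means:
    spf(x) = (I(x) - (c1 + c2)/2) / max|I - (c1 + c2)/2|, in [-1, 1],
    where c1, c2 are the mean intensities inside/outside the curve."""
    inside = [v for v, m in zip(image, inside_mask) if m]
    outside = [v for v, m in zip(image, inside_mask) if not m]
    c1, c2 = sum(inside) / len(inside), sum(outside) / len(outside)
    shifted = [v - (c1 + c2) / 2 for v in image]
    denom = max(abs(v) for v in shifted) or 1.0
    return [v / denom for v in shifted]
```

The sign of the SPF changes across the mean intensity level between object and background, so it pushes the evolving curve inward on one side of a defect boundary and outward on the other.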