Abstract: Every organization is continually exposed to new damages and threats, which can result from its operations or the pursuit of its goals. Methods of securing the workspace and the tools applied have changed widely with the increasing use and development of information technology (IT). From this viewpoint, information security management systems evolved to systematize proven practices and to prevent repeating past mistakes. In general, a correct response in information security management systems requires correct decision making, which in turn requires the comprehensive effort of managers and everyone involved in each plan or decision. Obviously, not all aspects of a task or decision are defined under every decision-making condition; therefore, the possible or certain risks should be considered when making decisions. This is the subject of risk management, and it can influence the decisions. Investigation of different approaches in the field of risk management demonstrates their progress from quantitative to qualitative methods with a process approach.
Abstract: Knowledge sharing in general, and contextual access to knowledge in particular, still represent a key challenge in the knowledge management framework. Researchers in the semantic web and human-machine interface communities study techniques to enhance this access. For instance, in the semantic web, information retrieval is based on domain ontologies. In human-machine interfaces, keeping track of the user's activity provides some elements of the context that can guide access to information. We suggest an approach based on these two key guidelines, whilst avoiding some of their weaknesses. The approach permits a representation of both the context and the design rationale of a project for efficient access to knowledge. In fact, the method consists of an information retrieval environment that, on the one hand, can infer knowledge modeled as a semantic network and, on the other hand, is based on the context and the objectives of a specific activity (the design). The environment we defined can also be used to gather similar project elements in order to build classifications of tasks, problems, arguments, etc. produced in a company. These classifications can show the evolution of design strategies in the company.
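Inference over a knowledge base modeled as a semantic network can be pictured as transitive traversal of typed links. A minimal sketch, with hypothetical triples and an "is-a" relation (the names are illustrative, not this project's actual vocabulary):

```python
def infer(triples, start, relation="is-a"):
    """Follow `relation` links transitively from `start` in a semantic network.

    triples: iterable of (subject, relation, object) tuples.
    Returns every node reachable from `start` through `relation` links.
    """
    found, frontier = set(), {start}
    while frontier:
        frontier = {o for s, r, o in triples
                    if r == relation and s in frontier and o not in found}
        found |= frontier
    return found

# Hypothetical design-rationale fragments
kb = {("gear design", "is-a", "design task"),
      ("design task", "is-a", "task")}
print(sorted(infer(kb, "gear design")))  # ['design task', 'task']
```

The same traversal, run over task/problem/argument triples, is what lets similar project elements be gathered into classifications.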
Abstract: Results of Chilean wine classification based on the information provided by an electronic nose are reported in this paper. The classification scheme consists of two stages: first, Principal Component Analysis is used as a feature extraction method to reduce the dimensionality of the original information; then, a Radial Basis Function Neural Network is used as the pattern recognition technique to perform the classification. The objective of this study is to classify different Cabernet Sauvignon, Merlot and Carménère wine samples from different years, valleys and vineyards of Chile.
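The first stage, PCA-based dimensionality reduction, can be sketched generically (this is an illustration, not the authors' implementation): extract the dominant principal component by power iteration and project each sample onto it.

```python
import math

def first_principal_component(X, iters=200):
    """Return (means, v): per-feature means and the dominant principal
    component of the rows of X, found by power iteration."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    # Sample covariance matrix of the centered data
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / n
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                     # power iteration
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return means, v

def project(x, means, v):
    """Score of one sample along the first principal component."""
    return sum((x[j] - means[j]) * v[j] for j in range(len(x)))
```

Further components can be obtained by deflating the covariance matrix; in practice a full eigendecomposition would be used, and the projected scores would feed the RBF network.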
Abstract: Previous algorithms for generating texture for 3D models from multi-view images and mapping it have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems due to occluded areas, such as the inner parts of the thighs. In this paper we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping directly in the GPU fragment shader, per pixel, without generating a texture map, and we resolve the occluded areas using the 3D model's depth information. Our method requires more computation on the GPU than previous works, but it shows real-time performance, and the previously mentioned problems do not occur.
Abstract: This research work is aimed at speech recognition using scaly neural networks. A small vocabulary of 11 words was established first; these words are "word, file, open, print, exit, edit, cut, copy, paste, doc1, doc2". The chosen words are associated with executing computer functions such as opening a file, printing a text document, cutting, copying, pasting, editing and exiting. The words are introduced to the computer and then subjected to a feature extraction process using LPC (linear prediction coefficients). These features are used as input to an artificial neural network in speaker-dependent mode. Half of the word samples are used for training the artificial neural network and the other half are used for testing the system; these are then used for information retrieval.
The system consists of three parts: speech processing and feature extraction, training and testing using neural networks, and information retrieval.
The retrieval process proved to be 79.5-88% successful, which is quite acceptable considering variations in the surroundings, the state of the speaker, and the microphone type.
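LPC feature extraction computes predictor coefficients from the autocorrelation of a speech frame via the Levinson-Durbin recursion. A generic sketch (not the authors' code; frame sizes and model orders are illustrative):

```python
def autocorrelation(frame, order):
    """r[k] = sum_n frame[n] * frame[n+k], for lags 0..order."""
    n = len(frame)
    return [sum(frame[i] * frame[i + k] for i in range(n - k))
            for k in range(order + 1)]

def lpc(frame, order):
    """Levinson-Durbin recursion: return `order` LPC coefficients
    a[1..order] of the all-pole model 1 / (1 + a1*z^-1 + ...)."""
    r = autocorrelation(frame, order)
    a, e = [1.0] + [0.0] * order, r[0]
    for i in range(1, order + 1):
        k = -sum(a[j] * r[i - j] for j in range(i)) / e   # reflection coefficient
        a = [a[j] + k * a[i - j] if 0 < j < i else a[j]
             for j in range(order + 1)]
        a[i] = k
        e *= 1.0 - k * k                                  # prediction error power
    return a[1:]
```

In a real front end the frame would also be windowed (e.g. Hamming) and pre-emphasized before the autocorrelation; the resulting coefficient vectors form the network's input.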
Abstract: In ad hoc networks, the main issue in designing protocols is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. In fact, protocols which minimize the power consumption of sensors receive more attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, which combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Given that processing information consumes less power than transmitting it, data aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One data aggregation technique is to use a data aggregation tree, but finding an optimal data aggregation tree to collect data in a network with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined in intermediate nodes to form one packet, so the number of packets transmitted in the network decreases, less energy is consumed, and the longevity of the network ultimately improves. Heuristic methods are used to solve this NP-hard problem; one such optimization method is Simulated Annealing. In this article, we propose a new method to build the data collection tree in wireless sensor networks using the Simulated Annealing algorithm, and we evaluate its efficiency against the Genetic Algorithm.
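As a hedged illustration of the idea (not the paper's algorithm or parameters), a Simulated Annealing loop over aggregation trees can start from the star topology rooted at the sink and repeatedly try to re-parent a random node, accepting cost-increasing moves with a temperature-dependent probability. Total parent-link length stands in for transmission energy here, which is an illustrative simplification:

```python
import math
import random

def anneal_aggregation_tree(coords, sink=0, t0=2.0, cooling=0.995,
                            steps=4000, seed=0):
    """Simulated Annealing over aggregation trees rooted at `sink`.
    coords: list of (x, y) sensor positions.  The energy proxy is the
    total Euclidean length of the parent links."""
    random.seed(seed)
    n = len(coords)
    dist = [[math.dist(a, b) for b in coords] for a in coords]
    nodes = [v for v in range(n) if v != sink]
    parent = {u: sink for u in nodes}          # start from the star topology
    cost = sum(dist[u][parent[u]] for u in nodes)
    best, best_cost, t = dict(parent), cost, t0
    for _ in range(steps):
        u, p = random.choice(nodes), random.randrange(n)
        if p == u:
            continue
        q = p                                  # reject moves that create a cycle:
        while q != sink and q != u:            # u must not lie on p's path to sink
            q = parent[q]
        if q == u:
            continue
        delta = dist[u][p] - dist[u][parent[u]]
        if delta < 0 or random.random() < math.exp(-delta / t):
            parent[u], cost = p, cost + delta  # accept (possibly uphill) move
            if cost < best_cost:
                best, best_cost = dict(parent), cost
        t *= cooling
    return best, best_cost
```

A Genetic Algorithm baseline, as compared in the article, would instead evolve a population of such parent arrays with crossover and mutation.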
Abstract: Image compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the back-propagation algorithm, which adopts the method of steepest descent for error minimization, are popular and widely adopted, and are directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, the network takes a long time to converge, because the given image may contain a number of distinct gray levels with narrow differences from their neighborhood pixels. If the gray levels of the pixels in an image and of their neighbors are mapped in such a way that the difference in gray level between neighboring pixels is minimal, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
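The mapping step can be pictured as follows: estimate the CDF from the gray-level histogram and remap every pixel through it, the familiar histogram-equalization transform. A minimal sketch, not the authors' exact estimator:

```python
def cdf_remap(pixels, levels=256):
    """Map gray levels through the image's cumulative distribution function,
    spreading narrow neighbor differences across the full gray range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    cdf, running = [0.0] * levels, 0
    for g in range(levels):
        running += hist[g]
        cdf[g] = running / n
    return [round(cdf[p] * (levels - 1)) for p in pixels]

print(cdf_remap([10, 10, 12, 12, 200, 200]))  # [85, 85, 170, 170, 255, 255]
```

The mapping is monotonic, so pixel ordering (and hence image structure) is preserved while the dynamic range presented to the network is evened out.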
Abstract: Transport and land use are two systems that are
mutually influenced. Their interaction is a complex process
associated with continuous feedback. The paper examines the
existing land use around an under construction metro station of the
new metro network of Thessaloniki, Greece, through the use of field
investigations, around the station-s predefined location. Moreover,
except from the analytical land use recording, a sampling
questionnaire survey is addressed to several selected enterprises of
the study area. The survey aims to specify the characteristics of the
enterprises, the trip patterns of their employees and clients, as well as
the stated preferences towards the changes the new metro station is
considered to bring to the area. The interpretation of the interrelationships
among selected data from the questionnaire survey takes
place using the method of Principal Components Analysis for
Categorical Data. The methodology followed and the survey's results
contribute to the enrichment of the relevant bibliography concerning
the way the creation of a new metro station can have an impact on the
land use pattern of an area, by examining the situation before the
operation of the station.
Abstract: Despite the recent surge of research in control of
worm propagation, currently, there is no effective defense system
against such cyber attacks. We first design a distributed detection
architecture called Detection via Distributed Blackholes (DDBH).
Our novel detection mechanism could be implemented via virtual
honeypots or honeynets. Simulation results show that a worm can be
detected with virtual honeypots on only 3% of the nodes. Moreover,
the worm is detected when less than 1.5% of the nodes are infected.
We then develop two control strategies: (1) optimal dynamic traffic-blocking, for which we determine the condition that guarantees the minimum number of removed nodes when the worm is contained, and (2) predictive dynamic traffic-blocking, a realistic deployment of the optimal strategy on scale-free graphs. The predictive dynamic traffic-blocking, coupled with the DDBH, ensures that more than 40% of the network is unaffected by the propagation at the time when the worm is contained.
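As an illustration of the blackhole-detection idea (a toy random-scan model, not the paper's simulation setup or its reported figures), each infected node probes random addresses per time step, and the worm is flagged as soon as a probe hits one of the honeypot addresses:

```python
import random

def simulate_scan_worm(n=1000, honeypot_frac=0.03, probes=3, seed=1):
    """Toy random-scanning worm with passive honeypot detectors.
    Returns (steps_until_detection, infected_count_at_detection)."""
    random.seed(seed)
    honeypots = set(random.sample(range(n), int(n * honeypot_frac)))
    normal = [v for v in range(n) if v not in honeypots]
    infected, step = {random.choice(normal)}, 0
    while len(infected) < len(normal):
        step += 1
        for u in list(infected):
            for _ in range(probes):
                v = random.randrange(n)
                if v in honeypots:          # a blackhole saw the scan: detected
                    return step, len(infected)
                infected.add(v)             # otherwise the probed host is infected
    return step, len(infected)
```

Because infected nodes scan the address space blindly, even a small fraction of honeypot addresses intercepts a probe early, while the infected population is still small.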
Abstract: Concrete strength evaluated from compression tests
on cores is affected by several factors causing differences from the
in-situ strength at the location from which the core specimen was
extracted. Among the factors, there is the damage possibly occurring during the drilling phase, which generally leads to an underestimate of the actual in-situ strength. In order to quantify this effect, in this study
two wide datasets have been examined, including: (i) about 500 core
specimens extracted from Reinforced Concrete existing structures,
and (ii) about 600 cube specimens taken during the construction of
new structures in the framework of routine acceptance control. The
two experimental datasets have been compared in terms of compression strength and specific weight values, accounting for the main factors affecting the concrete properties, that is, type and amount of cement, aggregate grading, type and maximum size of aggregates, water/cement ratio, placing and curing modality, and concrete age. The
results show that the magnitude of the strength reduction due to
drilling damage is strongly affected by the actual properties of
concrete, being inversely proportional to its strength. Therefore, the
application of a single value of the correction coefficient, as generally
suggested in the technical literature and in structural codes, appears
inappropriate. A set of values of the drilling damage coefficient is
suggested as a function of the strength obtained from compressive
tests on cores.
Abstract: A secure electronic payment system is presented in this paper. The electronic payment system is intended to be secure for clients such as customers and shop owners. The security architecture of the system is designed around the RC5 encryption/decryption algorithm, which eliminates the fraud that occurs today with stolen credit card numbers. The symmetric-key cryptosystem RC5 can protect conventional transaction data such as account numbers, amounts and other information. This process can be performed electronically using an RC5 encryption/decryption program written in Microsoft Visual Basic 6.0. There is no danger of any data sent within the system being intercepted and replaced. The alternative is to use the existing network and to encrypt all data transmissions. The system with encryption is acceptably secure, but the level of encryption has to be stepped up as computing power increases. To secure the system, the communication between modules is encrypted using the symmetric-key cryptosystem RC5. The system uses a simple user name, password, user ID, user type and cipher authentication mechanism for identification when the user first enters the system; this is the most common method of authentication in most computer systems.
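For concreteness, the RC5 block cipher underlying the scheme can be sketched in Python (RC5-32 with the standard magic constants; the 12-round, 16-byte-key parameters here are the common defaults, not necessarily the paper's choice):

```python
# RC5-32: 32-bit words, standard magic constants P and Q
P32, Q32, MASK = 0xB7E15163, 0x9E3779B9, 0xFFFFFFFF

def rotl(x, r):
    r %= 32
    return ((x << r) | (x >> (32 - r))) & MASK

def rotr(x, r):
    r %= 32
    return ((x >> r) | (x << (32 - r))) & MASK

def expand_key(key, rounds=12):
    """RC5 key schedule: mix the secret key bytes into the S table."""
    u, t = 4, 2 * (rounds + 1)
    c = max(1, (len(key) + u - 1) // u)
    L = [0] * c
    for i in range(len(key) - 1, -1, -1):    # pack key bytes into words
        L[i // u] = ((L[i // u] << 8) + key[i]) & MASK
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):           # main mixing loop
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def encrypt_block(A, B, S, rounds=12):
    A, B = (A + S[0]) & MASK, (B + S[1]) & MASK
    for i in range(1, rounds + 1):
        A = (rotl(A ^ B, B) + S[2 * i]) & MASK
        B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK
    return A, B

def decrypt_block(A, B, S, rounds=12):
    for i in range(rounds, 0, -1):
        B = rotr((B - S[2 * i + 1]) & MASK, A) ^ A
        A = rotr((A - S[2 * i]) & MASK, B) ^ B
    return (A - S[0]) & MASK, (B - S[1]) & MASK
```

A transaction field would be split into 64-bit blocks (two 32-bit words), encrypted before transmission, and decrypted with the same shared key on the other side.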
Abstract: One major issue that is regularly cited as a block to
the widespread use of online assessments in eLearning, is that of the
authentication of the student and the level of confidence that an
assessor can have that the assessment was actually completed by that
student. Currently, this issue is either ignored, in which case
confidence in the assessment and any ensuing qualification is
damaged, or else assessments are conducted at central, controlled
locations at specified times, losing the benefits of the distributed
nature of the learning programme. Particularly as we move towards
constructivist models of learning, with intentions towards achieving
heutagogic learning environments, the benefits of a properly
managed online assessment system are clear. Here we discuss some
of the approaches that could be adopted to address these issues,
looking at the use of existing security and biometric techniques,
combined with some novel behavioural elements. These approaches
offer the opportunity to validate the student on accessing an
assessment, on submission, and also during the actual production of
the assessment. These techniques are currently under development in
the DECADE project, and future work will evaluate and report their
use.
Abstract: Grid computing is a group of clusters connected over
high-speed networks that involves coordinating and sharing
computational power, data storage and network resources operating
across dynamic and geographically dispersed locations. Resource
management and job scheduling are critical tasks in grid computing.
Resource selection becomes challenging due to heterogeneity and
dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which is well suited to large-scale applications; the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency and load balancing ability of the grid.
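The two ideas can be sketched generically (illustrative field names, not the paper's implementation): a max-heap keyed on resource capability whose root receives the next submission, and a grouper that packs small jobs until a group reaches the chosen resource's capacity.

```python
import heapq

def build_resource_heap(resources):
    """resources: list of (mips, name).  Python's heapq is a min-heap,
    so capabilities are negated to simulate a Max Heap Tree."""
    heap = [(-mips, name) for mips, name in resources]
    heapq.heapify(heap)
    return heap

def best_resource(heap):
    """Root of the MHT: the most capable available resource."""
    mips, name = heap[0]
    return -mips, name

def group_jobs(job_lengths, capacity):
    """Pack consecutive jobs into groups whose total length
    does not exceed the selected resource's capacity."""
    groups, current, total = [], [], 0
    for j in job_lengths:
        if current and total + j > capacity:
            groups.append(current)
            current, total = [], 0
        current.append(j)
        total += j
    if current:
        groups.append(current)
    return groups
```

Grouping reduces per-job submission overhead, since each group travels to the grid as a single unit sized to the resource at the heap's root.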
Abstract: A "Web of Trust" is one of the recognized goals for Web 2.0. It aims to make it possible for people, including organizations, businesses and individual users, to take responsibility for what they publish on the web. These objectives, among others, drive most of the technologies and protocols recently standardized by the governing bodies. One of the great advantages of the Web infrastructure is the decentralization of publication. The primary motivation behind Web 2.0 is to help people add content for Collective Intelligence (CI) while providing mechanisms to link content with people for the evaluation and accountability of information. Such a structure interconnects users and content so that users can use content to find participants and vice versa. This paper proposes a conceptual information storage and linking model, based on a decentralized information structure, that links content and people together. The model uses FOAF, Atom, RDF and RDFS and can be used as a blueprint to develop Web 2.0 applications for any e-domain; however, the primary target of this paper is the online trust evaluation domain. The proposed model aims to assist individuals in establishing a "Web of Trust" in the online trust domain.
Abstract: Traffic Engineering (TE) is the process of controlling
how traffic flows through a network in order to facilitate efficient and
reliable network operations while simultaneously optimizing network
resource utilization and traffic performance. TE improves the
management of data traffic within a network and provides the better
utilization of network resources. Many research works consider intra- and inter-domain Traffic Engineering separately, but in reality one influences the other; hence the network performance across both intra- and inter-Autonomous System (AS) traffic is not optimized properly. To achieve a better joint optimization of both intra- and inter-AS TE, we propose a joint optimization technique that considers intra-AS features during inter-AS TE and vice versa. This work considers an important criterion, namely latency, both within an AS and between ASes, and proposes a Bi-Criteria Latency optimization model. Hence the overall network performance can be improved, in terms of latency, by applying this joint-optimization technique.
Abstract: The aim of this article is to explain how features of attacks can be extracted from packets. It also explains how vectors can be built and then applied to the input of any analysis stage. For the analysis, the work deploys a feedforward back-propagation neural network acting as a misuse intrusion detection system, using ten types of attacks as examples for training and testing the neural network. It explains how the packets are analyzed to extract features, and shows how selecting the right features, building correct vectors, and correctly choosing the training method and the number of nodes in the hidden layer of the neural network affect the accuracy of the system. In addition, the work shows how to obtain optimal weight values and use them to initialize the Artificial Neural Network.
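Turning packet fields into an input vector usually means one-hot encoding categorical fields and scaling numeric ones into [0, 1]. A generic sketch with hypothetical field names and scaling bounds (the actual features depend on the attack set):

```python
PROTOCOLS = ["tcp", "udp", "icmp"]  # assumed categorical field values

def packet_to_vector(pkt, max_bytes=65535, max_duration=60.0):
    """pkt: dict with hypothetical fields 'protocol', 'bytes', 'duration'.
    Returns a fixed-length feature vector normalized for a neural network."""
    onehot = [1.0 if pkt["protocol"] == p else 0.0 for p in PROTOCOLS]
    return onehot + [
        min(pkt["bytes"] / max_bytes, 1.0),       # payload size, scaled to [0, 1]
        min(pkt["duration"] / max_duration, 1.0)  # connection duration, scaled
    ]
```

Keeping every component in [0, 1] matters for back-propagation training: unscaled fields such as byte counts would otherwise dominate the gradients.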
Abstract: The performance and complexity of QoS routing depend on the complex interaction between a large set of parameters. This paper investigates the scaling properties of source-directed link-state routing in large core networks. The simulation results show that the routing algorithm, network topology, and link cost function each have a significant impact on the probability of successfully routing new connections. The experiments confirm and extend the findings of other studies, and also lend new insight into designing efficient quality-of-service routing policies in large networks.
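In source-directed link-state routing, the source computes a path over its own view of the topology, typically with Dijkstra's algorithm under some link cost function; a common choice in such studies is a cost inversely proportional to available bandwidth. A minimal sketch (the cost function shown is one illustrative option, not the paper's):

```python
import heapq

def dijkstra(adj, src):
    """adj: {node: [(neighbor, cost), ...]}.  Returns the least-cost
    distance from `src` to every reachable node."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def link_cost(available_bandwidth):
    """Illustrative cost function: favor links with more spare capacity."""
    return 1.0 / available_bandwidth
```

Because the cost function reshapes which paths look "short", swapping it changes the blocking probability for new connections even on a fixed topology, which is exactly the interaction the paper studies.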
Abstract: The aim of the present work is to study the effect of annealing on the vibration damping capacity of high-chromium (16%) ferromagnetic steel. The alloys were prepared from raw materials of 99.9% purity melted in a high-frequency induction furnace under high vacuum. The samples were heat-treated in vacuum at various temperatures (800 to 1200°C) for 1 hour, followed by slow cooling (120°C/h). The inverted torsional pendulum method was used to evaluate the vibration damping capacity. The results indicate that the vibration damping capacity of the alloys is influenced by annealing and that there exists a critical annealing temperature around 1000°C. Below the critical temperature, the damping capacity increases quickly, since the magnetic domains move more easily.
Abstract: This paper presents an intrusion detection system based on a hybrid neural network model combining RBF and Elman networks. It is used for both anomaly detection and misuse detection. The model has a memory function and can effectively detect discrete and correlated attack behaviors. The RBF network is a real-time pattern classifier, while the Elman network provides the memory of former events. The intrusion detection system based on this hybrid model is evaluated on the DARPA data set, with ROC curves used to display the test results intuitively. The experiments show that this hybrid-model intrusion detection system can effectively improve the detection rate and reduce the false alarm and failure rates.
Abstract: This paper examines the relationships between and
among the various drivers of climate change that have both climatic
and ecological consequences for vegetation and land cover change in
arctic areas, particularly in arctic Alaska. It discusses the various
processes that have created spatial and climatic structures that have
facilitated observable vegetation and land cover changes in the
Arctic. Also, it indicates that the drivers of both climatic and
ecological changes in the Arctic are multi-faceted and operate in a
system with both positive and negative feedbacks that largely results
in further increases or decreases of the initial drivers of climatic and
vegetation change, mainly at the local and regional scales. It demonstrates that the impact of arctic warming on land cover change and the Arctic ecosystems is not unidirectional and one-dimensional in nature, but rather represents multi-directional and multi-dimensional forces operating in a feedback system.