Abstract: The lack of any centralized infrastructure in mobile ad
hoc networks (MANET) is one of the greatest security concerns in
the deployment of wireless networks. Thus communication in
MANET functions properly only if the participating nodes cooperate
in routing without any malicious intention. However, some nodes
may behave maliciously by indulging in flooding attacks on their
neighbors, while others may launch active security attacks such as
denial of service. This paper reviews related work on trust evaluation
and establishment in ad hoc networks, along with related work on
flooding attack prevention. A new trust approach based on the extent
of friendship between the nodes is proposed, which makes the nodes
cooperate and prevents flooding attacks in an ad hoc environment.
The performance of the trust algorithm is tested in an ad hoc network
implementing the Ad hoc On-demand Distance Vector (AODV)
protocol.
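The abstract does not spell out the trust computation; purely as an illustration, a friendship-style trust table over AODV neighbors might look like the sketch below. The success-ratio formula, the threshold, and all names are assumptions for exposition, not the authors' algorithm.

```python
# Hypothetical sketch of a friendship-based trust table for AODV neighbors.
# Trust grows with successful cooperative interactions (e.g. packets
# forwarded) and shrinks with misbehaviour (e.g. excessive RREQ flooding).

class TrustTable:
    def __init__(self, threshold=0.5):
        self.records = {}          # neighbor id -> [successes, failures]
        self.threshold = threshold

    def observe(self, neighbor, cooperative):
        rec = self.records.setdefault(neighbor, [0, 0])
        rec[0 if cooperative else 1] += 1

    def trust(self, neighbor):
        s, f = self.records.get(neighbor, (0, 0))
        return s / (s + f) if s + f else 0.0   # unknown nodes start untrusted

    def accept_rreq(self, neighbor):
        # Route requests from low-trust (non-friend) nodes are dropped,
        # which throttles flooding attacks at their source.
        return self.trust(neighbor) >= self.threshold

t = TrustTable()
for _ in range(8):
    t.observe("A", cooperative=True)    # friendly, cooperating node
t.observe("A", cooperative=False)
for _ in range(5):
    t.observe("B", cooperative=False)   # flooding node
print(round(t.trust("A"), 2), t.accept_rreq("A"))  # 0.89 True
print(round(t.trust("B"), 2), t.accept_rreq("B"))  # 0.0 False
```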
Abstract: WiMAX (Worldwide Interoperability for Microwave Access)
is a promising technology which can offer high-speed data,
voice, and video services to the customer end, a market presently
dominated by cable and digital subscriber line (DSL) technologies.
This paper deals with the performance assessment of WiMAX systems.
The biggest advantage of broadband wireless access (BWA) over its
wired competitors is its increased capacity and ease of deployment.
The aims of this paper are to model and simulate the fixed OFDM
IEEE 802.16d physical layer under various combinations of digital
modulation (BPSK, QPSK, and 16-QAM) over diverse fading channels
(AWGN and the SUI models). The Stanford University Interim (SUI)
channel series was proposed to simulate the fixed broadband wireless
access channel environments where IEEE 802.16d is to be deployed.
It comprises six channel models grouped into three categories
according to three typical outdoor terrains, in order to give a
comprehensive picture of the effect of fading channels on the overall
performance of the system.
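As a minimal illustration of the kind of physical-layer experiment described, the BPSK-over-AWGN case can be simulated and checked against the theoretical bit error rate Q(sqrt(2·Eb/N0)); the SUI fading profiles and the higher-order modulations are beyond this sketch.

```python
# Minimal BPSK-over-AWGN bit-error-rate simulation (illustrative only; the
# paper's full model also covers QPSK/16-QAM and the SUI fading channels).
import math
import random

def q_func(x):
    # Gaussian tail probability Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db, nbits=200_000, seed=1):
    random.seed(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))      # noise std dev for unit-energy bits
    errors = 0
    for _ in range(nbits):
        bit = random.choice((-1.0, 1.0))   # BPSK symbol
        rx = bit + random.gauss(0, sigma)  # AWGN channel
        errors += (rx >= 0) != (bit > 0)   # hard-decision detection
    return errors / nbits

for snr in (0, 4, 8):
    sim = bpsk_ber(snr)
    theory = q_func(math.sqrt(2 * 10 ** (snr / 10)))
    print(f"Eb/N0={snr} dB  simulated={sim:.4f}  theory={theory:.4f}")
```

The simulated curve should track the closed-form Q-function values closely, which is the usual sanity check before adding fading.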
Abstract: The development of distributed systems has been driven by the need to accommodate an increasing degree of flexibility, adaptability, and autonomy. Mobile agent technology is emerging as an alternative for building a smart generation of highly distributed systems. In this work, we investigate the performance aspect of agent-based technologies for information retrieval. We present a comparative performance evaluation model of Mobile Agents versus Remote Method Invocation by means of an analytical approach. We demonstrate the effectiveness of mobile agents for dynamic code deployment and remote data processing, reducing total latency while producing minimum network traffic. We argue that exploiting agent-based technologies significantly enhances the performance of distributed systems in the domain of information retrieval.
Abstract: We propose an enhanced key management scheme
based on Key Infection, a lightweight scheme for tiny sensors.
The basic Key Infection scheme is perfectly secure against node
capture and eavesdropping if the initial communications after node
deployment are secure. If, however, an attacker can eavesdrop on
the initial communications, it can obtain the session key. We use
the common neighbors of each node to generate the session key. Each
node has its own secret key and shares it with its neighbor nodes. Each
node can then establish the session key using the common neighbors'
secret keys and a random number. Our scheme needs only a few
communications even though it uses neighbor nodes' information. Without
losing the lightness of the basic scheme, it improves the resistance against
eavesdropping on the initial communications by more than 30%.
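To make the idea concrete, a session key derived from common neighbors' secrets plus a random number could be sketched as below. This is a hedged illustration, not necessarily the authors' exact construction; the hash-based combination and all variable names are assumptions.

```python
# Illustrative sketch: two nodes derive a session key by hashing the secret
# keys of their common neighbors together with a fresh random number, so an
# eavesdropper on a single initial link cannot recover the session key alone.
import hashlib
import os

def session_key(common_neighbor_keys, nonce):
    h = hashlib.sha256()
    for k in sorted(common_neighbor_keys):   # order-independent combination
        h.update(k)
    h.update(nonce)                          # the scheme's random number
    return h.digest()

# Secret keys node A and node B each learned from shared neighbors N1 and N2.
n1, n2 = os.urandom(16), os.urandom(16)
nonce = os.urandom(16)
key_a = session_key([n1, n2], nonce)
key_b = session_key([n2, n1], nonce)         # neighbor order may differ
print(key_a == key_b)  # True: both sides derive the same session key
```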
Abstract: Wireless Sensor Networks (WSN) are emerging
thanks to developments in wireless communication technology and the miniaturization of hardware. A WSN consists of a large number of low-cost, low-power, multifunctional sensor nodes that monitor physical conditions such as temperature, sound, vibration, pressure,
and motion. The MAC protocol used in sensor networks must be energy efficient, aiming to conserve energy throughout its operation. In this paper, with the focus on applying the
MAC protocols used in wireless ad hoc networks to WSNs, simulation
experiments were conducted in the Global Mobile Simulator
(GloMoSim) software. The number of packets sent by regular nodes and received by the sink node under different deployment strategies, the total energy
spent, and the network lifetime were chosen as the metrics for comparison. The simulation results show that the IEEE 802.11 protocol performs better than the CSMA and MACA protocols.
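The energy metric typically used in such MAC comparisons is a sum of per-state energy draws. A hedged sketch follows; the power figures and state occupancies are illustrative placeholders, not the paper's measured values.

```python
# Energy accounting as commonly done in simulators such as GloMoSim:
# total energy = sum over radio states of (state power * time in state).
# The power numbers below are made-up illustrative values.

POWER_MW = {"tx": 81.0, "rx": 30.0, "idle": 30.0, "sleep": 0.003}

def energy_j(time_in_state_s):
    # Convert mW * s (= mJ) to joules.
    return sum(POWER_MW[s] * t for s, t in time_in_state_s.items()) / 1000.0

# Example: one node's state occupancy (seconds) under two hypothetical MACs.
always_on  = {"tx": 2.0, "rx": 5.0, "idle": 93.0, "sleep": 0.0}
duty_cycled = {"tx": 2.0, "rx": 5.0, "idle": 13.0, "sleep": 80.0}
print(energy_j(always_on), ">", energy_j(duty_cycled))
```

Idle listening dominates the budget here, which is why duty-cycling MACs conserve energy even when traffic is unchanged.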
Abstract: From the perspective of systems of systems (SoS) and
emergent behaviors, this paper describes large scale application
software systems and proposes framework methods to further depict
systems' functional and non-functional characteristics. The paper
also specifically discusses some functional frameworks. Finally, the
framework's applications in system disintegration, system
architecture, and stable intermediate forms in the building,
deployment, and maintenance of large scale software applications
are discussed.
Abstract: A wireless sensor network (WSN), an emerging technology for procuring and processing information, consists of autonomous nodes with versatile devices underpinned by applications. Nodes are equipped with different capabilities, such as sensing, computing, actuation, and wireless communication, based on application requirements. WSN applications range from military deployment on the battlefield and environmental monitoring to the health sector and emergency response and surveillance. The nodes are deployed independently to cooperatively monitor physical and environmental conditions. The architecture of a WSN differs based on the application requirements and focuses on low cost, flexibility, fault tolerance, the deployment process, and energy conservation. In this paper we present the characteristics, architecture design objectives, and architecture of WSNs.
Abstract: Wireless local area networks have become very
popular; the initial IEEE 802.11 is the standard for
providing wireless connectivity to automatic machinery, equipment,
and stations that require rapid deployment, which may be portable,
handheld, or mounted on moving vehicles within a local area. An
IEEE 802.11 wireless local area network is a shared-medium
communication network that transmits information over
wireless links for all IEEE 802.11 stations in its transmission range to
receive. When a user is moving from one location to another, how
does another user find the required station inside the WLAN? To
answer this, we designed and implemented a system to locate a mobile
user inside the wireless local area network based on RSSI, with the
help of four specially designed architectures. These architectures are
based on statistical (manual) configuration of the mapping and the
radio map of indoor and outdoor locations, using the available
sniffer-based and cluster-based techniques. The system yields a better
estimate of a mobile user's location in the WLAN. We tested this
work in indoor and outdoor environments at different locations with
the help of Pamvotis, a simulator for WLANs.
Abstract: This paper is part of a research effort in which the way
biomedical engineers carry out their work is analyzed. The goal of
this paper is to present a method for the specification of user
requirements in the medical device maintenance process. The data
gathering methods, research model phases, and descriptive
analysis are presented. These techniques and verification rules can be
implemented in the medical device maintenance management
process.
Abstract: The inherent complexity of today's business
environments is forcing organizations to be attentive to the dynamics
on several fronts. Therefore, the management of technological
innovation is continually faced with uncertainty about the future.
These issues lead to a need for a systemic perspective, able to analyze
the consequences of interactions between different factors. The field
of technology foresight has proposed methods and tools to deal with
this broader perspective. In an attempt to provide a method to analyze
the complex interactions between events in several areas, starting
from the identification of the most strategic competencies, this paper
presents a methodology based on the Delphi method and Quality
Function Deployment. This methodology is applied in a sheet metal
processing equipment manufacturer, as a case study.
Abstract: Because customers in the new century tend to express
globally increasing demands, networks of interconnected
businesses have been established, and the management of
such networks seems to be a major key to gaining competitive
advantage. Supply chain management encompasses such managerial
activities. Within a supply chain, a critical role is played by quality.
QFD is a widely utilized tool which serves the purpose of not only
bringing quality to the ultimate provision of products or service
packages required by the end customer or the retailer, but also of
establishing a satisfactory relationship with the initial customer,
that is, the wholesaler. However, the wholesalers' cooperation is
considerably based on capabilities that are heavily dependent on
their locations and existing circumstances. Therefore, it is undeniable
that for all companies each wholesaler possesses a specific
importance ratio which can heavily influence the figures calculated in
the House of Quality (HOQ) in QFD. Moreover, due to the
competitiveness of today's marketplace, it has been widely recognized
that consumers' expressed demands are highly volatile over
periods of production. Such instability and proneness to change is
tangibly noticeable, and taking it into account during the analysis
of the HOQ is widely influential and doubtlessly required. For a
more reliable outcome in such matters, this article demonstrates
the viability of applying the Analytic Network Process to account
for the wholesalers' reputation, and simultaneously introduces a
mortality coefficient for the reliability and stability of the consumers'
expressed demands over time. The paper then elaborates on the
relevant contributory factors and approaches through the calculation
of such coefficients. The article concludes that an empirical
application is needed to achieve broader validity.
Abstract: This paper proposes the requirements and design of an
RFID-based system for Shop Floor Control (SFC), in order to achieve
real-time controllability of the factory and allow the development of
an E-Manufacturing system. The detailed logical specifications of the
core functions and the design diagrams of the RFID-based system are
developed. RFID deployment in E-Manufacturing systems is then
investigated.
Abstract: An autonomous environmental monitoring system
(Smart Landfill) has been constructed for the quantitative
measurement of the components of landfill gas found at borehole
wells at the perimeter of landfill sites. The main components of
landfill gas are the greenhouse gases methane and carbon dioxide,
which have been monitored in the range 0-5% by volume. This monitoring
system has not only been tested in the laboratory but has been
deployed in multiple field trials and the data collected successfully
compared with on-site monitors. This success shows the potential of
this system for application in environments where reliable gas
monitoring is crucial.
Abstract: Partitions can play a significant role in minimising
co-channel interference in wireless LANs by attenuating signals across
room boundaries. This could pave the way towards higher density
deployments in home and office environments through spatial
channel reuse. Yet, due to protocol limitations, the latest incarnation
of the IEEE 802.11 standard is still unable to take advantage of this fact:
despite a clearly adequate Signal to Interference Ratio (SIR)
over co-channel neighbouring networks in other rooms, its goodput
falls significantly below the maximum it achieves in the absence of
co-channel interferers. In this paper, we describe how this situation
can be remedied via modest modifications to the standard.
Abstract: A new deployment of two multiple criteria decision
making (MCDM) techniques, Simple Additive Weighting
(SAW) and the Technique for Order Preference by Similarity to
Ideal Solution (TOPSIS), for portfolio allocation is demonstrated in
this paper. Rather than relying exclusively on mean and variance as in
the traditional mean-variance method, the criteria used in this
demonstration are the first four moments of the portfolio distribution.
Each asset is evaluated based on its marginal impacts to portfolio
higher moments that are characterized by trapezoidal fuzzy numbers.
Then centroid-based defuzzification is applied to convert fuzzy
numbers to the crisp numbers by which SAW and TOPSIS can be
deployed. Experimental results suggest that these MCDM approaches
are similarly efficient at selecting dominant assets for an optimal
portfolio under higher moments. The proposed approaches allow
investors to flexibly adjust their risk preferences regarding higher
moments via different schemes, adapting to various kinds of investors
(from conservative to risky). The other significant advantage is that,
compared to mean-variance analysis, the portfolio weights obtained
by SAW and TOPSIS are consistently well-diversified.
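The defuzzification-then-scoring pipeline can be sketched briefly. The centroid formula below is the standard one for a trapezoidal fuzzy number (a, b, c, d); the criteria weights and fuzzy ratings are made-up examples, not figures from the paper, and only the SAW half of the method is shown.

```python
# Centroid defuzzification of trapezoidal fuzzy numbers followed by Simple
# Additive Weighting (SAW). Data below are illustrative placeholders.

def centroid(a, b, c, d):
    # Centroid of a trapezoidal fuzzy number with a <= b <= c <= d.
    if (c + d) == (a + b):               # degenerate (crisp) case
        return (a + d) / 2
    return ((c*c + d*d + c*d) - (a*a + b*b + a*b)) / (3 * ((c + d) - (a + b)))

def saw_scores(matrix, weights):
    # SAW: normalize each benefit criterion by its column maximum, then take
    # the weighted sum per alternative (row).
    maxes = [max(col) for col in zip(*matrix)]
    return [sum(w * v / m for w, v, m in zip(weights, row, maxes))
            for row in matrix]

# Two assets rated on four fuzzy criteria (e.g. marginal moment impacts):
assets = [
    [centroid(0.2, 0.4, 0.6, 0.8), centroid(0.1, 0.3, 0.5, 0.7),
     centroid(0.5, 0.6, 0.7, 0.9), centroid(0.3, 0.4, 0.6, 0.7)],
    [centroid(0.1, 0.2, 0.4, 0.5), centroid(0.4, 0.6, 0.8, 0.9),
     centroid(0.2, 0.3, 0.5, 0.6), centroid(0.5, 0.7, 0.8, 1.0)],
]
weights = [0.4, 0.3, 0.2, 0.1]           # hypothetical moment preferences
print([round(s, 3) for s in saw_scores(assets, weights)])
```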
Abstract: Model Predictive Control (MPC) is increasingly being
proposed for real-time applications and embedded systems. However,
compared to the PID controller, the implementation of MPC in
miniaturized devices like Field Programmable Gate Arrays (FPGAs)
and microcontrollers has historically been very limited, due to its
implementation complexity and computation time requirements.
At the same time, such embedded technologies have become an
enabler for future manufacturing enterprises as well as a transformer
of organizations and markets. Recently, advances in microelectronics
and software allow such technique to be implemented in embedded
systems. In this work, we take advantage of these recent advances
in the deployment of one of the most studied and applied control
techniques in industrial engineering. Specifically, we propose an
efficient framework for the implementation of Generalized Predictive
Control (GPC) on the STM32 microcontroller. The STM32 Keil
starter kit, based on a JTAG interface and the STM32 board, was
used to implement the proposed GPC firmware. Besides the GPC, an
anti-windup PID algorithm was also implemented using the Keil
development tools designed for ARM processor-based microcontroller
devices, working with the C/C++ language. A performance comparison
study was conducted between the two firmwares, showing good
execution speed and low computational burden. These results
encourage the development of simple predictive algorithms to be
programmed on industrial standard hardware. The main features
of the proposed framework are illustrated through two examples
and compared with the anti-windup PID controller.
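The firmware itself is written in C for the STM32; as a language-neutral illustration of the anti-windup baseline the GPC is compared against, a conditional-integration PID can be sketched as follows. The gains, actuator limits, and the first-order plant are made-up examples, not values from the paper.

```python
# Anti-windup PID sketch (conditional integration): the integral term is only
# accumulated while the actuator is unsaturated, or while the error would
# drive the output back into the linear range.

class PID:
    def __init__(self, kp, ki, kd, dt, u_min, u_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.integral + self.kd * deriv
        u_sat = min(max(u, self.u_min), self.u_max)
        if u == u_sat or err * u < 0:          # anti-windup condition
            self.integral += self.ki * err * self.dt
        return u_sat

# Closed loop with a hypothetical first-order plant y[k+1] = 0.9*y + 0.1*u:
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01, u_min=-1.5, u_max=1.5)
y = 0.0
for _ in range(5000):
    y = 0.9 * y + 0.1 * pid.step(1.0, y)
print(round(y, 2))  # 1.0 (settles at the setpoint)
```

Without the saturation check, the integral would keep growing while the actuator is clamped at its limit, causing the familiar overshoot on recovery.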
Abstract: The traditional software product and process metrics
are neither suitable nor sufficient in measuring the complexity of
software components, which ultimately is necessary for quality and
productivity improvement within organizations adopting CBSE.
Researchers have proposed a wide range of complexity metrics for
software systems. However, these metrics are not sufficient for
components and component-based systems, being restricted to
module-oriented and object-oriented systems. This study proposes
to measure the complexity of JavaBean software components as a
reflection of their quality, so that a component can be adopted
accordingly to make it more reusable. The proposed metric involves
only the design issues of the component and does not consider
packaging and deployment complexity. In this way, the complexity
of software components can be kept within certain limits, which in
turn helps enhance quality and productivity.
Abstract: Quality Function Deployment (QFD) is an elaborate, multi-step planning method for delivering commodities, services, and processes to customers, both external and internal to an organization. It is a way to convert between the diverse customer languages expressing demands (the Voice of the Customer) and the organization's languages expressing results that satisfy those demands. The policy is to establish one or more matrices that inter-relate producer and consumer reciprocal expectations. Due to its visual presentation, it is called the "House of Quality" (HOQ). In this paper, we cast the HOQ as a multi-attribute decision making (MADM) problem and, through a proposed MADM method, rank the technical specifications. We then compute the satisfaction degree of the customer requirements, applying vagueness and uncertainty conditions in decision making via fuzzy set theory. This approach proposes a supervised neural network (perceptron) for solving the MADM problem.
Abstract: Dedicated Short Range Communication (DSRC) is a
key enabling technology for the next generation of
communication-based safety applications. One of the important
problems for DSRC deployment is maintaining high performance
under heavy channel load. Many studies focus on congestion control
mechanisms, simulating hundreds of physical radios deployed on
vehicles. The U.S. Department of Transportation's (DOT) Intelligent
Transportation Systems (ITS) division plans to choose prototype
on-board devices capable of transmitting basic "Here I am" safety
messages to other vehicles. The devices will be used in an IntelliDrive
safety pilot deployment of up to 3,000 vehicles. Logging the
information of 3,000 vehicles is hard. In this paper we present the
designs and issues related to a DSRC radio testbed under heavy
channel load. The details include not only the architecture of the
DSRC radio testbed, but also how the Radio Interference System is
used to help emulate the congested radio environment.
Abstract: Mobile IP has been developed to provide continuous
network access to mobile users. In IP-based
mobile networks, location management is an important component of
mobility management. This management enables the system to track
the location of a mobile node between consecutive communications. It
includes two important tasks: location update and call delivery.
Location update is associated with signaling load. Frequent updates
lead to degradation in the overall performance of the network and the
underutilization of the resources. It is, therefore, required to devise
a mechanism to minimize the update rate. Mobile IPv6 (MIPv6)
and Hierarchical MIPv6 (HMIPv6) have been the potential
candidates for deployment in mobile IP networks for mobility
management. Studies have shown that HMIPv6 performs better
than MIPv6: it reduces the signaling overhead traffic by making
the registration process local. In this paper,
we present performance analysis of MIPv6 and HMIPv6 using an
analytical model. The location update cost function is formulated
based on the fluid flow mobility model. The impact of cell residence
time, cell residence probability, and user mobility is investigated.
Numerical results are obtained and presented in graphical form. It is
shown that HMIPv6 outperforms MIPv6 only for high-mobility users;
for low-mobility users, the performance of both schemes is almost
equivalent.
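The abstract gives no closed-form expressions; as a hedged sketch of how such a fluid-flow cost comparison is commonly set up, one can use the classical boundary-crossing rate v·L/(π·S) and charge MIPv6 a home registration per cell crossing, while HMIPv6 pays a cheap local (MAP) update per crossing and a home registration only per domain crossing. All parameters and cost values below are hypothetical.

```python
# Simplified fluid-flow comparison of MIPv6 vs HMIPv6 location update cost.
# Parameters and signaling costs are illustrative assumptions only.
import math

def crossing_rate(v, perimeter, area):
    # Fluid-flow model: mean cell-boundary crossing rate for a mobile with
    # average speed v and uniformly distributed movement direction.
    return v * perimeter / (math.pi * area)

def update_cost_mipv6(rate, c_home):
    # Every cell crossing triggers a binding update to the home agent.
    return rate * c_home

def update_cost_hmipv6(rate, c_map, c_home, cells_per_domain):
    # Local updates go to the MAP; only domain crossings reach the home agent.
    return rate * c_map + (rate / cells_per_domain) * c_home

r = crossing_rate(v=20.0, perimeter=6000.0, area=2.6e6)   # crossings per second
c_mip = update_cost_mipv6(r, c_home=50)
c_hmip = update_cost_hmipv6(r, c_map=10, c_home=50, cells_per_domain=16)
print(round(r, 4), round(c_mip, 3), round(c_hmip, 3))
```

With these numbers HMIPv6's cost is lower; shrinking v (a low-mobility user) shrinks both costs proportionally, which is consistent with the two schemes converging for slow users.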