Abstract: Location-based services (LBS) exploit the known
location of a user to provide services dependent on their geographic
context and personalized needs [1].
The arrival of broadband mobile data networks, supported by mobile
terminals equipped with new location technologies such as GPS, has
finally created opportunities for the implementation of LBS
applications. On the other hand, collecting location data in general
raises privacy concerns.
This paper presents results from two surveys of LBS acceptance in
Croatia. The first survey was administered to 181 students, and the
second, extended survey involved a sample of 180 Croatian citizens.
We developed a questionnaire consisting of descriptions of 15
different applications, with a scale measuring users' perceptions of
and attitudes toward these applications.
We report the results to identify potential commercial applications
for LBS in the B2C segment. Our findings suggest that some types of
applications, such as emergency and safety services and navigation,
have a significantly higher acceptance rate than other types.
Abstract: In this research, the authors analyze network stability
using agent-based simulation. First, the authors focus on analyzing
large networks (eight agents) formed by connecting two different
stable small social networks (a small stable network consists of four
agents). Second, the authors analyze the shape of an eight-agent
network formed by adding one agent to a stable seven-agent network.
Third, the authors analyze interpersonal comparison of utility. The
"star network" was not found in the results of interaction between
the two stable small networks. On the other hand, a "decentralized
network" was formed from several combinations. When one agent was
added to a stable seven-agent network, the larger the value of c (the
maintenance cost per link), the larger the number of patterns of
stable networks. In this case, the authors identified the
characteristics of a large stable network. The authors also
discovered cases of decreasing personal utility under conditions of
increasing total utility.
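The payoff structure described above, a per-link maintenance cost c set against the benefit of reaching other agents, can be sketched with the standard connections-model utility. The decay factor, cost value, and example topologies below are illustrative assumptions, not the paper's parameters.

```python
from collections import deque

def distances(adj, src):
    """BFS hop distances from src in an undirected network given as adjacency sets."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def utility(adj, i, delta=0.5, c=0.3):
    """Connections-model payoff: decayed benefit from each reachable agent
    minus a maintenance cost c per direct link (delta and c are assumed)."""
    d = distances(adj, i)
    benefit = sum(delta ** h for j, h in d.items() if j != i)
    return benefit - c * len(adj[i])

# Four agents in a ring versus a star centred on agent 0 (hypothetical examples).
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
total_ring = sum(utility(ring, i) for i in ring)
total_star = sum(utility(star, i) for i in star)
```

Summing individual payoffs gives the total utility the abstract compares across network shapes; raising c penalizes high-degree nodes such as the star's centre.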
Abstract: This paper discusses a new, systematic approach to
the synthesis of an NP-hard class of non-regenerative Boolean
networks, described by FON[FOFF]={mi}[{Mi}], where for every
mj[Mj]∈{mi}[{Mi}], there exists another mk[Mk]∈{mi}[{Mi}], such
that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), (where
'n' represents the number of distinct primary inputs). The method
automatically ensures exact minimization for certain important
self-dual functions with 2^(n-1) points in their one-set. The elements meant for
grouping are determined from a newly proposed weighted incidence
matrix. Then the binary value corresponding to the candidate pair is
correlated with the proposed binary value matrix to enable direct
synthesis. We recommend algebraic factorization operations as a
post-processing step to reduce the literal count. The algorithm
can be implemented in any high level language and achieves best
cost optimization for the problem dealt with, irrespective of the
number of inputs. For other cases, the method is iterated to
subsequently reduce it to a problem of O(n-1), O(n-2), ..., and then
solved. In addition, it leads to optimal results for problems exhibiting
higher degree of adjacency, with a different interpretation of the
heuristic, and the results are comparable with other methods.
In terms of literal cost, at the technology independent stage, the
circuits synthesized using our algorithm enabled net savings over
AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-
Products or ESOP forms) and AND-OR-EXOR logic by 45.57%,
41.78% and 41.78% respectively for the various problems.
Circuit level simulations were performed for a wide variety of
case studies at 3.3V and 2.5V supply to validate the performance of
the proposed method and the quality of the resulting synthesized
circuits at two different voltage corners. Power estimation was
carried out for a 0.35 micron TSMC CMOS process technology. In
comparison with AOI logic, the proposed method enabled mean
savings in power of 42.46%. With respect to AND-EXOR logic, the
proposed method yielded power savings to the tune of 31.88%, while
in comparison with AND-OR-EXOR level networks, average power
savings of 33.23% were obtained.
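The pairing criterion above, grouping minterms whose Hamming distance is O(n), can be illustrated with a small sketch. The pairing routine below is a simplification for illustration only and does not reproduce the paper's weighted incidence matrix.

```python
def hamming(a, b, n):
    """Hamming distance between two n-variable minterm indices."""
    return bin((a ^ b) & ((1 << n) - 1)).count("1")

def max_distance_pairs(minterms, n):
    """Pair each minterm with a partner at maximal Hamming distance,
    mimicking the grouping criterion described in the abstract
    (an illustrative stand-in for the weighted incidence matrix)."""
    pairs = []
    for mi in minterms:
        best = max((m for m in minterms if m != mi),
                   key=lambda m: hamming(mi, m, n))
        pairs.append((mi, best, hamming(mi, best, n)))
    return pairs

# Hypothetical one-set of a 3-variable function: every minterm has its
# bitwise complement present, so each pair sits at distance n = 3.
ms = [0b000, 0b111, 0b001, 0b110]
ps = max_distance_pairs(ms, 3)
```

For a self-dual function the one-set contains a complement partner for every minterm, which is why each pair here reaches the maximal distance n.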
Abstract: By using the method of coincidence degree theory and constructing a suitable Lyapunov functional, several sufficient conditions are established for the existence and global exponential stability of anti-periodic solutions for Cohen-Grossberg shunting inhibitory neural networks with delays. An example is given to illustrate the feasibility of our results.
Abstract: Sensor network applications are often data centric and
involve collecting data from a set of sensor nodes to be delivered
to various consumers. Typically, nodes in a sensor network are
resource-constrained, and hence the algorithms operating in these
networks must be efficient. There may be several algorithms available
implementing the same service, and efficiency considerations may
require a sensor application to choose the best-suited algorithm. In
this paper, we present a systematic evaluation of a set of algorithms
implementing the data gathering service. We propose a modular
infrastructure for implementing such algorithms in TOSSIM with
separate configurable modules for various tasks such as interest
propagation, data propagation, aggregation, and path maintenance.
By appropriately configuring these modules, we propose a number
of data gathering algorithms, each of which incorporates a different
set of heuristics for optimizing performance. We have performed
comprehensive experiments to evaluate the effectiveness of these
heuristics, and we present results from our experimentation efforts.
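The modular design described above, in which task modules are swapped to obtain different data-gathering algorithms, can be sketched by making the aggregation step pluggable. The routing tree, readings, and module choices below are hypothetical and not taken from the paper's TOSSIM code.

```python
def gather(tree, readings, node, aggregate):
    """Collect readings up a routing tree, combining them at each hop
    with a pluggable aggregation module (sum, max, ...)."""
    children = tree.get(node, [])
    return aggregate([readings[node]] +
                     [gather(tree, readings, c, aggregate) for c in children])

# Hypothetical routing tree with the sink at node 0, and per-node readings.
tree = {0: [1, 2], 1: [3]}
readings = {0: 1, 1: 2, 2: 3, 3: 4}

total = gather(tree, readings, 0, sum)   # in-network summation
peak = gather(tree, readings, 0, max)    # in-network maximum
```

Exchanging only the `aggregate` argument yields a different data-gathering algorithm over the same propagation and path-maintenance machinery, which is the spirit of the configurable modules in the abstract.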
Abstract: Optical burst switching (OBS) has been proposed to
realize the next generation Internet based on the wavelength division
multiplexing (WDM) network technologies. In OBS, burst contention is
one of the major problems. Deflection routing has been designed to
resolve this problem. However, deflection routing finds it
increasingly difficult to prevent burst contentions as the network
load grows. In this paper, we introduce flow rate control methods to
reduce burst contentions. We propose new flow rate control methods
based on the leaky bucket algorithm and deflection routing: the
separate leaky bucket deflection method and the dynamic leaky bucket
deflection method. In the proposed methods, edge nodes that generate
data bursts carry out the flow rate control protocols. To verify the
effectiveness of flow rate control in OBS networks, we show through
computer simulations that the proposed methods improve network
utilization and reduce the burst loss probability.
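The leaky bucket algorithm on which the proposed methods are based can be sketched as follows; the rate, bucket depth, and burst sizes are illustrative assumptions, and the sketch omits the deflection-routing part of the proposed methods.

```python
class LeakyBucket:
    """Leaky-bucket rate controller of the kind the abstract's edge nodes
    could run; rate and depth values here are illustrative assumptions."""
    def __init__(self, rate, depth):
        self.rate = rate        # tokens replenished per time unit
        self.depth = depth      # maximum bucket size
        self.tokens = depth
        self.last = 0.0

    def admit(self, now, burst_size):
        """Admit a burst if enough tokens have accumulated; else reject it
        (a rejected burst would be deferred or deflected)."""
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if burst_size <= self.tokens:
            self.tokens -= burst_size
            return True
        return False

bucket = LeakyBucket(rate=2.0, depth=5.0)
# Three bursts of size 4: the second arrives too soon and is rejected.
decisions = [bucket.admit(t, 4.0) for t in (0.0, 0.5, 3.0)]
```

Smoothing burst departures at the edge in this way is what lets the proposed methods reduce contention inside the OBS core.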
Abstract: The analysis of electromagnetic environment using
deterministic mathematical models is characterized by the
impossibility of analyzing a large number of interacting network
stations with a priori unknown parameters, and this is characteristic,
for example, of mobile wireless communication networks. One of the
tasks of the tools used in designing, planning and optimization of
mobile wireless network is to carry out simulation of electromagnetic
environment based on mathematical modelling methods, including
computer experiment, and to estimate its effect on radio
communication devices. This paper proposes the development of a
statistical model of the electromagnetic environment of a mobile
wireless communication network by describing the parameters and
factors affecting it, including the propagation channel, together
with their statistical models.
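As one example of the kind of statistical ingredient such a model typically includes, the sketch below samples received power under log-distance path loss with log-normal shadowing; all parameter values are illustrative assumptions, not the paper's.

```python
import math
import random

def received_power_dbm(tx_dbm, d_m, d0_m=1.0, pl0_db=40.0,
                       n_exp=3.5, sigma_db=8.0, rng=random):
    """Log-distance path loss plus log-normal shadowing: a standard
    statistical building block for a mobile propagation channel.
    All parameter values are illustrative assumptions."""
    path_loss = pl0_db + 10 * n_exp * math.log10(d_m / d0_m)
    shadowing = rng.gauss(0.0, sigma_db)   # shadowing in dB is Gaussian
    return tx_dbm - path_loss - shadowing

rng = random.Random(7)
# Monte Carlo draw of received power at 100 m from a 30 dBm transmitter.
samples = [received_power_dbm(30.0, 100.0, rng=rng) for _ in range(5000)]
mean_rx = sum(samples) / len(samples)
```

Averaging many such draws recovers the deterministic path-loss component, while the spread of the samples captures the shadowing statistics a computer experiment would estimate.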
Abstract: This article presents the results of research related to the assessment of the weighted cumulative expected transmission time (WCETT) protocol metric applied to cognitive radio networks. The development work was based on research done by different authors. We simulated a network that communicates wirelessly using a licensed channel, through which unlicensed nodes try to transmit during a given time until the channel's owner station begins its transmission.
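For reference, the WCETT path metric assessed above is commonly defined as WCETT = (1 - beta) * sum_i ETT_i + beta * max_j X_j, where X_j sums the expected transmission times of the hops on channel j. A minimal sketch, with illustrative beta and hop values:

```python
def wcett(hops, beta=0.5):
    """Weighted cumulative expected transmission time of a path.
    `hops` is a list of (ett, channel) pairs; beta in [0, 1] trades the
    total path ETT against the busiest-channel term (values illustrative)."""
    total = sum(ett for ett, _ in hops)
    per_channel = {}
    for ett, ch in hops:
        per_channel[ch] = per_channel.get(ch, 0.0) + ett
    return (1 - beta) * total + beta * max(per_channel.values())

# Two hypothetical 3-hop paths with identical total ETT: the one that
# spreads its hops over two channels scores lower (better).
same_channel = wcett([(1.0, 1), (1.0, 1), (1.0, 1)])
diverse = wcett([(1.0, 1), (1.0, 2), (1.0, 1)])
```

The busiest-channel term is what rewards channel diversity along a path, which matters when unlicensed nodes must vacate a licensed channel.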
Abstract: Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking, and controlling areas such as battlefield monitoring, object tracking, habitat monitoring, and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial-of-service attacks, and the physical compromise of sensor nodes. Nodes in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the life span of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework to provide a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is achievable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. It differs from conventional authentication methods based on node identity alone: it includes both the identity of a node and the node's security time stamp in the authentication procedure. Hence the security protocols not only check the identity of each node but also distinguish between new nodes and old nodes.
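The idea of binding a node's identity together with a security time stamp into the authentication procedure can be sketched with an HMAC-based token. This is an illustrative construction under a hypothetical network-wide key, not the paper's actual framework.

```python
import hashlib
import hmac

def auth_token(key, node_id, timestamp):
    """MAC over node identity plus deployment time stamp, so a verifier can
    both authenticate a node and tell new deployments from old ones.
    Illustrative construction, not the paper's protocol."""
    msg = f"{node_id}|{timestamp}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, node_id, timestamp, token, current_epoch):
    """Accept only if the MAC checks out and the claimed deployment
    epoch is not in the future."""
    ok = hmac.compare_digest(auth_token(key, node_id, timestamp), token)
    return ok and timestamp <= current_epoch

key = b"network-wide-secret"          # hypothetical pre-shared key
tok = auth_token(key, "node-42", 3)
legit = verify(key, "node-42", 3, tok, current_epoch=5)
forged = verify(key, "node-43", 3, tok, current_epoch=5)  # wrong identity
```

Because the time stamp is inside the MAC, an old node cannot replay its token as a freshly deployed one, which is the distinction between new and old nodes the abstract describes.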
Abstract: In large Internet backbones, Service Providers
typically have to explicitly manage the traffic flows in order to
optimize the use of network resources. This process is often referred
to as Traffic Engineering (TE). Common objectives of traffic
engineering include balancing traffic distribution across the
network and avoiding congestion hot spots. Raj P H and SVK Raja
designed the Bayesian network approach to identify congestion hot
spots in
MPLS. In this approach for every node in the network the
Conditional Probability Distribution (CPD) is specified. Based on
the CPD the congestion hot spots are identified. Then the traffic can
be distributed so that no link in the network is either over utilized or
under utilized. Although the Bayesian network approach has been
implemented in operational networks, it has a number of well-known
scaling issues.
This paper proposes a new approach, which we call the Pragati
(means Progress) Node Popularity (PNP) approach to identify the
congestion hot spots with the network topology alone. In the new
Pragati Node Popularity approach, IP routing runs natively over the
physical topology rather than depending on the CPD of each node as
in Bayesian network. We first illustrate our approach with a simple
network, then present a formal analysis of the Pragati Node
Popularity approach. Our PNP approach shows that, for any given
network, it identifies exactly the same result as the Bayesian
approach with minimum effort. We further extend the result to a more
generic one: it holds for any network topology, even when the network
contains loops. A theoretical insight of our result is that the
optimal routing is always shortest-path routing with respect to some
considerations of hot spots in the network.
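One topology-only way to score node popularity, in the spirit of the PNP approach (our illustrative reading, not the paper's exact definition), is to count how often a node sits on shortest paths between other node pairs:

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, s, t):
    """One BFS shortest path from s to t in an unweighted topology."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def popularity(adj):
    """Count, for every node, how many source-destination shortest paths
    cross it as an intermediate hop: a topology-only popularity score."""
    score = {v: 0 for v in adj}
    for s, t in combinations(adj, 2):
        p = shortest_path(adj, s, t)
        if p:
            for v in p[1:-1]:
                score[v] += 1
    return score

# Hypothetical 4-node topology: node 1 bridges node 0 and the pair {2, 3}.
adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
scores = popularity(adj)
hot = max(scores, key=scores.get)
```

A node with a high score carries a large share of shortest-path traffic and is therefore a candidate congestion hot spot, using nothing beyond the physical topology.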
Abstract: The stability of a software system is one of the most
important quality attributes affecting the maintenance effort. Many
techniques have been proposed to support the analysis of software
stability at the architecture, file, and class level of software systems,
but little effort has been made at the feature (i.e., method and
attribute) level. Moreover, the assumptions on which the existing
techniques are based often do not match practice. Considering this,
in this paper we present a novel metric, Stability of Software
(SoS), which measures the stability of object-oriented software
systems by simulating software change propagation in software
dependency networks at the feature level. The approach is
evaluated by case studies on eight open source Java programs using
different software structures (one employs design patterns versus one
does not) for the same object-oriented program. The results of the
case studies validate the effectiveness of the proposed metric. The
approach has been fully automated by a tool written in Java.
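A change-propagation simulation of the kind described can be sketched as follows; the dependency graph and the averaging used here are simplified stand-ins for the paper's SoS definition.

```python
from collections import deque

def impact_set(deps, seed):
    """Features reached when a change to `seed` propagates along reversed
    dependency edges (dependents of a changed feature may change too)."""
    rdeps = {}
    for a, b in deps:                 # edge (a, b): feature a depends on b
        rdeps.setdefault(b, set()).add(a)
    seen = {seed}
    q = deque([seed])
    while q:
        u = q.popleft()
        for v in rdeps.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def stability(features, deps):
    """Average fraction of features untouched by a change, over all seeds:
    a simplified stand-in for the SoS metric described in the abstract."""
    n = len(features)
    return sum(1 - len(impact_set(deps, f)) / n for f in features) / n

feats = ["a", "b", "c", "d"]
edges = [("b", "a"), ("c", "b")]      # hypothetical: b calls a, c calls b
score = stability(feats, edges)
```

A program whose changes stay local scores close to 1, while long dependency chains, which let one change ripple widely, pull the score down.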
Abstract: Wireless Mesh Networks (WMNs) are an emerging
technology for last-mile broadband access. In WMNs, similar to ad
hoc networks, each user node operates not only as a host but also as a
router. User packets are forwarded to and from an Internet-connected
gateway in multi-hop fashion. The WMNs can be integrated with
other networking technologies, e.g., ad hoc networks, to implement a
smooth network extension. The meshed topology provides good
reliability and scalability, as well as low upfront investments. Despite
the recent start-up surge in WMNs, much research remains to be done
in standardizing the functional parameters of WMNs to exploit their
full potential. A cornerstone of the security concerns of these
networks is the authentication of a new client joining an integrated
ad hoc network; such a scenario requires the execution of a multi-hop
authentication technique. Our endeavor in this paper is to introduce
a secure authentication technique, with light overheads, that can be
conveniently implemented for the ad hoc nodes forming clients of an
integrated WMN, thus facilitating their interoperability.
Abstract: This paper proposes an analytical method for the
dynamics of generating firms' alliance networks along business
phases. Dynamics in network developments have previously been
discussed in the research areas of organizational strategy rather than in
the areas of regional cluster, where the static properties of the
networks are often discussed. The analytical method introduces the
concept of business phases into innovation processes and uses
relationships called prior experiences; this idea was developed in
organizational strategy to investigate the state of networks from the
viewpoints of tradeoffs between link stabilization and node
exploration. This paper also discusses the results of the analytical
method using five cases of the network developments of firms. The
idea of Embeddedness helps interpret the backgrounds of the
analytical results. The analytical method is useful for policymakers of
regional clusters to establish concrete evaluation targets and a
viewpoint for comparisons of policy programs.
Abstract: Bandwidth allocation in wired networks is relatively
simple, whereas allocating bandwidth in wireless networks is complex
and challenging due to the mobility of the source end system. This
paper proposes a new approach to bandwidth allocation for higher-
and lower-priority mobile nodes. In our proposal, bandwidth
allocation to a new mobile node is based on the bandwidth
utilization of existing mobile nodes. The first section of the paper
introduces bandwidth allocation in wireless networks and presents
the existing solutions available for allocating bandwidth. The
second section proposes the new solution for bandwidth allocation to
higher- and lower-priority nodes. Finally, the paper ends with an
analytical evaluation of the proposed solution.
Abstract: Estimating the lifetime distribution of computer networks in which nodes and links exist in time and are subject to failure is very useful in various applications. This problem is known to be NP-hard. In this paper we present efficient combinatorial approaches to Monte Carlo estimation of the network lifetime distribution. We also present some simulation results.
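A crude Monte Carlo sample of network lifetime (not the paper's combinatorial estimator) can be drawn by giving each link an exponential failure time and recording when a terminal pair disconnects; the topology and failure-rate choices below are illustrative.

```python
import random
from collections import deque

def connected(nodes, edges, up):
    """Check connectivity between the first and last node over links still up."""
    adj = {v: set() for v in nodes}
    for e, (a, b) in edges.items():
        if up[e]:
            adj[a].add(b)
            adj[b].add(a)
    seen = {nodes[0]}
    q = deque([nodes[0]])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return nodes[-1] in seen

def lifetime_sample(nodes, edges, rng):
    """Draw an exponential failure time per link and return the moment the
    terminal pair first disconnects: one crude Monte Carlo sample."""
    times = {e: rng.expovariate(1.0) for e in edges}
    up = {e: True for e in edges}
    for e in sorted(times, key=times.get):
        up[e] = False
        if not connected(nodes, edges, up):
            return times[e]
    return max(times.values())

rng = random.Random(1)
net = {"e1": (0, 1), "e2": (1, 2), "e3": (0, 2)}   # hypothetical triangle
samples = [lifetime_sample([0, 1, 2], net, rng) for _ in range(2000)]
mean_life = sum(samples) / len(samples)
```

Sorting the samples yields an empirical lifetime distribution; for this triangle the lifetime equals max(T_e3, min(T_e1, T_e2)), so the simulated mean should approach 7/6 under unit failure rates.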
Abstract: We consider a two-way relay network where two sources exchange information. A relay helps the two sources exchange information using the decode-and-XOR-forward protocol. We investigate the power minimization problem with minimum rate constraints. The system needs two time slots, and in each time slot the required rate pair should be achievable. The power consumption is minimized in each time slot, and we obtain the closed-form solution. The simulation results confirm that the proposed power allocation scheme consumes less total power than the conventional schemes.
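The decode-and-XOR-forward step can be sketched in a few lines: in the second slot the relay broadcasts the XOR of the two decoded messages, and each source cancels its own message to recover the other's. The messages below are arbitrary examples.

```python
def relay_broadcast(m1, m2):
    """Decode-and-XOR-forward: the relay decodes both messages and
    broadcasts their bitwise XOR in the second time slot."""
    return m1 ^ m2

def recover(own, broadcast):
    """Each source XORs out its own message to recover the other's."""
    return own ^ broadcast

m1, m2 = 0b1011, 0b0110              # hypothetical source messages
x = relay_broadcast(m1, m2)
at_s1 = recover(m1, x)               # source 1 learns m2
at_s2 = recover(m2, x)               # source 2 learns m1
```

Broadcasting one combined packet instead of two separate ones is what frees the power budget that the paper's allocation scheme then minimizes.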
Abstract: This paper presents the use of a newly created network
structure known as a Self-Delaying Dynamic Network (SDN) to
create a high resolution image from a set of time stepped input
frames. These SDNs are non-recurrent temporal neural networks
which can process time sampled data. SDNs can store input data
for a lifecycle and feature dynamic logic based connections between
layers. Several low resolution images and one high resolution image
of a scene were presented to the SDN during training by a Genetic
Algorithm. The SDN was trained to process the input frames in order
to recreate the high resolution image. The trained SDN was then used
to enhance a number of unseen noisy image sets. The quality of high
resolution images produced by the SDN is compared to that of high
resolution images generated using Bi-Cubic interpolation. The
SDN-produced images are superior in several ways to the images
produced using Bi-Cubic interpolation.
Abstract: Embedded systems need to respect stringent real
time constraints. Various hardware components included in such
systems such as cache memories exhibit variability and therefore
affect execution time. Indeed, a cache memory access from an
embedded microprocessor might result in a cache hit where the
data is available or a cache miss and the data need to be fetched
with an additional delay from an external memory. It is therefore
highly desirable to predict future memory accesses during
execution in order to appropriately prefetch data without incurring
delays. In this paper, we evaluate the potential of several artificial
neural networks for the prediction of instruction memory
addresses. Neural networks have the potential to tackle the
nonlinear behavior observed in memory accesses during program
execution, and their numerous demonstrated hardware implementations
favor this choice over traditional forecasting techniques for
inclusion in embedded systems. However, embedded applications
execute millions of instructions and therefore produce millions of
addresses to be predicted. This very
challenging problem of neural network based prediction of large
time series is approached in this paper by evaluating various neural
network architectures based on the recurrent neural network
paradigm with pre-processing based on the Self Organizing Map
(SOM) classification technique.
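The SOM pre-processing stage mentioned above can be sketched with a tiny one-dimensional map trained on scalar address deltas; the map size, learning schedule, and data are illustrative assumptions, not the paper's configuration.

```python
import random

def train_som(data, n_units=4, epochs=200, seed=0):
    """Tiny 1-D self-organizing map over scalar address deltas: the kind of
    SOM pre-processing step the abstract pairs with a recurrent predictor.
    Sizes, schedule, and data are illustrative."""
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)           # linearly decaying rate
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):              # neighbourhood: adjacent units
                if abs(i - bmu) <= 1:
                    w[i] += lr * (x - w[i])
    return sorted(w)

# Hypothetical instruction-address deltas clustered around two strides.
deltas = [4, 4, 4, 8, 8, 8, 4, 8]
codebook = train_som(deltas)
```

The trained codebook quantizes the raw address stream into a handful of classes, which is the dimensionality reduction that makes the downstream recurrent predictor tractable.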
Abstract: In modern human computer interaction systems
(HCI), emotion recognition is becoming an imperative characteristic.
The quest for effective and reliable emotion recognition in HCI has
resulted in a need for better face detection, feature extraction and
classification. In this paper we present results of feature space analysis
after briefly explaining our fully automatic vision based emotion
recognition method. We demonstrate the compactness of the feature
space and show how the 2d/3d based method achieves superior features
for the purpose of emotion classification. It is also shown that
feature normalization creates a largely person-independent feature
space. As a consequence, the classifier architecture has
only a minor influence on the classification result. This is particularly
elucidated with the help of confusion matrices. For this purpose
advanced classification algorithms, such as Support Vector Machines
and Artificial Neural Networks are employed, as well as the simple k-
Nearest Neighbor classifier.
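The simple k-Nearest Neighbor baseline mentioned above can be sketched as follows; the feature vectors and class labels are made-up stand-ins for the normalized emotion features.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance), as in the simple k-NN
    baseline the abstract compares against. Data below are made up."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical normalized 2-D feature vectors for two emotion classes.
train = [((0.1, 0.2), "happy"), ((0.2, 0.1), "happy"),
         ((0.9, 0.8), "sad"), ((0.8, 0.9), "sad"), ((0.85, 0.85), "sad")]
pred = knn_predict(train, (0.15, 0.15), k=3)
```

In a compact, person-independent feature space even this parameterless classifier separates the classes, which is why the abstract reports only a minor influence of classifier architecture on the result.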
Abstract: Latvia ranks fourth in the world in broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a phenomenon of social media, because 58.4% of the population and 83.5% of internet users use it. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other figures confirm that consumers and companies are actively using the Internet.
However, after analyzing in a number of studies how enterprises employ the e-environment, namely e-environment tools, the authors arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: why are entrepreneurs resistant to e-tools? In order to answer this question, the authors have turned to the Technology Acceptance Model (TAM). The authors analyzed each phase and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy).
The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc.
The theoretical and methodological background of the research is formed by scientific research and publications, including those from the mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the survey.