Abstract: Recent advances in wireless internetworking have produced a number of dynamic routing protocols for sensor networks. Many of these have since been revised to improve their energy efficiency, lifetime, and mobility support. However, to the best of our knowledge, no extensive survey of this particular class of protocols has been prepared, and a review of cluster-based structures for dynamic wireless networks is needed. In this paper, we examine and compare several aspects and characteristics of some extensively studied hierarchical dynamic clustering protocols in wireless sensor networks. We also discuss future research topics and the challenges of dynamic hierarchical clustering in wireless sensor networks.
Abstract: This paper describes an approach to detecting the
transmitted signals in a 2×2 Multiple Input Multiple Output (MIMO)
setup using a roulette-wheel-based ant colony optimization technique.
The results obtained are compared with the classical zero forcing and
least mean square techniques. The detection rates achieved using
this technique are consistently higher than those achieved using
the classical methods over 50 attempts with two different
antennas transmitting the input stream from a user. This paves the
way for alternative techniques to improve the throughput achieved
in advanced networks such as Long Term Evolution (LTE) networks.
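The roulette-wheel selection step at the core of such an ant colony detector can be sketched as follows. This is a minimal illustration, not the paper's detector: the QPSK symbol set, pheromone, and heuristic values are invented for the example.

```python
import random

def roulette_select(candidates, weights, rng=random):
    """Pick one candidate with probability proportional to its weight."""
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]  # guard against floating-point round-off

# Illustrative use: each ant chooses a QPSK symbol per transmit antenna,
# weighted by pheromone * heuristic desirability (values are made up).
symbols = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
pheromone = [0.9, 0.1, 0.4, 0.6]   # learned attractiveness
heuristic = [0.8, 0.2, 0.5, 0.5]   # e.g. inverse residual error
weights = [p * h for p, h in zip(pheromone, heuristic)]
chosen = roulette_select(symbols, weights)
```

An ant builds a full candidate transmit vector by one such draw per antenna; pheromone on symbols from well-fitting vectors is then reinforced across iterations.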
Abstract: This paper presents a technique for compact
three-dimensional (3D) object model reconstruction using wavelet
networks. It consists of transforming the input surface vertices
into signals and using wavelet network parameters for signal
approximation. To this end, we use a wavelet network architecture
based on several mother wavelet families. POLYnomials
WindOwed with Gaussians (POLYWOG) wavelet families are used
to maximize the probability of selecting the best wavelets, which
ensures good generalization of the network. To achieve a better
reconstruction, the network is trained over several iterations to
optimize the wavelet network parameters until the error criterion is
small enough. Experimental results show that the proposed technique
can effectively reconstruct irregular 3D object models when using
the optimized wavelet network parameters. We also show that
reconstruction accuracy depends on the choice of the mother
wavelets.
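The signal-approximation idea behind a wavelet network can be sketched in one dimension. This is a toy stand-in, not the paper's method: the polynomial-windowed-Gaussian mother wavelet, the test signal, and the network size are assumptions, and only the linear output weights are solved here (the paper also iteratively refines centers and scales, in 3D).

```python
import numpy as np

def mother_wavelet(x):
    # A polynomial-windowed Gaussian (stand-in for a POLYWOG-type
    # wavelet; the exact family used in the paper may differ).
    return x * np.exp(-0.5 * x ** 2)

def wavelet_design_matrix(t, centers, scales):
    # One column per dilated/translated wavelet psi((t - c) / s).
    return np.column_stack([mother_wavelet((t - c) / s)
                            for c, s in zip(centers, scales)])

# Treat one coordinate of the surface vertices as a 1-D signal.
t = np.linspace(-1, 1, 200)
signal = np.sin(3 * np.pi * t) * np.exp(-t ** 2)   # toy signal

centers = np.linspace(-1, 1, 15)
scales = np.full(15, 0.15)
Phi = wavelet_design_matrix(t, centers, scales)

# Optimal output weights in the least-squares sense (one "training"
# step; iterating would also adapt centers and scales).
w, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
reconstruction = Phi @ w
mse = np.mean((reconstruction - signal) ** 2)
```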
Abstract: This paper demonstrates a method of synthesizing process flowsheets using a graphical tool called the GH-plot and, in particular, examines how it can be used to compare the reactions of a combined simultaneous process with regard to their thermodynamics. The technique uses fundamental thermodynamic principles to allow the mass, energy, and work balances to locate the attainable region for chemical processes in a reactor. This provides guidance on which design decisions would be best suited to developing new processes that are more effective and make lower demands on raw materials and energy.
Abstract: In this paper, we study the Minimum Latency Broadcast
Scheduling (MLBS) problem in wireless sensor networks (WSNs).
The main issue of the MLBS problem is to compute schedules
with the minimum number of timeslots such that a base station can
broadcast data to all other sensor nodes with no collisions. Unlike
existing works that utilize the traditional omni-directional WSNs,
we target the directional WSNs where nodes can collaboratively
determine and orientate their antenna directions. We first develop
a 7-approximation algorithm for directional WSNs; to the best of
our knowledge, this ratio is currently the best. We then validate
the performance of the proposed algorithm through simulation.
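The flavor of collision-free broadcast scheduling can be illustrated with a simple layer-by-layer greedy heuristic. This is not the paper's 7-approximation algorithm and uses a simplified conflict model (two transmitters conflict if they share a neighbor); topology and slot counts are invented for the example.

```python
from collections import deque, defaultdict

def bfs_layers(adj, source):
    """Partition nodes into BFS layers rooted at the base station."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    layers = defaultdict(list)
    for node, d in dist.items():
        layers[d].append(node)
    return [layers[d] for d in sorted(layers)]

def greedy_broadcast_schedule(adj, source):
    """Assign each forwarding node a timeslot, layer by layer.

    Two nodes cannot share a slot if they have a common neighbor,
    which would hear both transmissions and suffer a collision.
    """
    layers = bfs_layers(adj, source)
    schedule = {}              # node -> slot in which it transmits
    slot = 0
    for layer in layers[:-1]:  # the last layer has nobody left to reach
        remaining = list(layer)
        while remaining:
            chosen, blocked = [], set()
            for u in remaining:
                if not (set(adj[u]) & blocked):
                    chosen.append(u)
                    blocked |= set(adj[u]) | {u}
            for u in chosen:
                schedule[u] = slot
                remaining.remove(u)
            slot += 1
    return schedule, slot      # slot == number of timeslots used

# Toy topology: node 0 is the base station.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
schedule, nslots = greedy_broadcast_schedule(adj, 0)
```

A node transmits only after its own BFS layer has been reached, so every node receives the broadcast before it is scheduled to forward it.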
Abstract: The ubiquity of natural disasters during the last few
decades has raised serious questions about the prediction of such
events and human safety. Every disaster, regardless of its magnitude,
has a precursor that manifests as a disruption of some environmental
parameter such as temperature, humidity, pressure, or vibration.
In order to anticipate and monitor those changes, in this paper
we propose an overall system for disaster prediction and monitoring
based on a wireless sensor network (WSN). Furthermore, we introduce
a modified and simplified WSN routing protocol built on top
of the trickle routing algorithm. The routing algorithm was deployed
over the Bluetooth Low Energy protocol in order to achieve low
power consumption. The performance of the WSN was analyzed
using a real-life system implementation, and estimates of WSN
parameters such as battery lifetime, network size, and packet delay
were determined. Based on this performance, the proposed
system can be utilized for disaster monitoring and prediction thanks
to its low power profile and mesh routing feature.
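The trickle timer that underlies trickle routing can be sketched as follows (a minimal model after RFC 6206; the interval bounds and redundancy constant here are illustrative defaults, not the values used in the deployed system).

```python
import random

class TrickleTimer:
    """Minimal Trickle timer (after RFC 6206), as used in trickle routing.

    Consistent traffic suppresses transmissions and doubles the interval
    (saving energy on a stable network); an inconsistency resets the
    interval to Imin for fast repair after a topology change.
    """

    def __init__(self, imin=1.0, imax=64.0, k=1, rng=random):
        self.imin, self.imax, self.k = imin, imax, k
        self.rng = rng
        self.interval = imin
        self._new_interval()

    def _new_interval(self):
        self.counter = 0
        # Fire at a random point in the second half of the interval.
        self.fire_at = self.rng.uniform(self.interval / 2, self.interval)

    def heard_consistent(self):
        self.counter += 1

    def heard_inconsistent(self):
        self.interval = self.imin
        self._new_interval()

    def should_transmit(self):
        # Suppress if k consistent messages were heard this interval.
        return self.counter < self.k

    def interval_expired(self):
        self.interval = min(2 * self.interval, self.imax)
        self._new_interval()

t = TrickleTimer()
t.heard_consistent()            # a neighbor already advertised this route
suppressed = not t.should_transmit()
t.interval_expired()            # quiet network: back off
doubled = t.interval
t.heard_inconsistent()          # topology change: reset to Imin
```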
Abstract: Efficient utilization of spectrum resources is a
fundamental issue in wireless communications due to their scarcity.
To improve the efficiency of spectrum utilization, spectrum
sharing for unlicensed bands is regarded as one of the key
technologies in next-generation wireless networks. A number
of schemes such as Listen-Before-Talk (LBT) and carrier sense
adaptive transmission (CSAT) have been suggested to this end,
but more efficient sharing schemes are required for improving
spectrum utilization efficiency. This work considers an opportunistic
transmission approach and a dynamic Contention Window (CW)
adjustment scheme for LTE-U users sharing the unlicensed spectrum
with Wi-Fi, in order to enhance the overall system throughput. The
decision criteria for the dynamic adjustment of CW are based on
the collision evaluation, derived from the collision probability of the
system. The overall performance can be improved due to the adaptive
adjustment of the CW. Simulation results show that our proposed
scheme outperforms the Distributed Coordination Function (DCF)
mechanism of IEEE 802.11 MAC.
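A dynamic contention window rule of this kind can be sketched as follows. The doubling/halving rule, threshold, and CW bounds are illustrative assumptions, not the paper's exact decision criteria.

```python
CW_MIN, CW_MAX = 16, 1024   # illustrative bounds, not standard values

def adjust_cw(cw, collision_prob, threshold=0.1):
    """Adapt the LTE-U contention window from observed collisions.

    A high collision probability means Wi-Fi neighbors are suffering,
    so back off (double CW); a low collision probability means the
    channel is underused, so shrink CW to grab more transmission
    opportunities.
    """
    if collision_prob > threshold:
        return min(cw * 2, CW_MAX)
    return max(cw // 2, CW_MIN)

cw = CW_MIN
cw = adjust_cw(cw, collision_prob=0.25)   # congested: CW grows to 32
cw = adjust_cw(cw, collision_prob=0.02)   # idle: CW shrinks back to 16
```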
Abstract: The Wireless Sensor Network (WSN) clustering architecture enables features such as network scalability, communication overhead reduction, and fault tolerance. After clustering, aggregated data is transferred to the data sink, reducing unnecessary, redundant data transfer. This lowers the number of transmitting nodes and so saves energy. It also allows scalability to many nodes, reduces communication overhead, and enables efficient use of WSN resources. Clustering-based routing methods manage network energy consumption efficiently. Building spanning trees rooted at a sink node for data collection is a fundamental data aggregation method in sensor networks, and determining the optimal number of Cluster Heads (CHs) is an NP-hard problem. In this paper, we combine cluster-based routing features for cluster formation and CH selection and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA); normalized values of mobility, delay, and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
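The simulated annealing search over normalized node metrics can be sketched for the CH-selection step. This is an illustration under our own assumptions: the cost weights, cooling schedule, and toy cluster are invented, and the paper additionally optimizes the MST itself.

```python
import math
import random

def composite_cost(node, alpha=0.4, beta=0.3, gamma=0.3):
    """Normalized mobility and delay are penalties; remaining energy
    is a reward (the coefficients are illustrative assumptions)."""
    return (alpha * node["mobility"] + beta * node["delay"]
            - gamma * node["energy"])

def anneal_cluster_head(nodes, t0=1.0, cooling=0.95, steps=200,
                        rng=random):
    """Simulated annealing over the choice of CH for one cluster."""
    current = rng.randrange(len(nodes))
    best = current
    temp = t0
    for _ in range(steps):
        candidate = rng.randrange(len(nodes))
        delta = (composite_cost(nodes[candidate])
                 - composite_cost(nodes[current]))
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = candidate
            if composite_cost(nodes[current]) < composite_cost(nodes[best]):
                best = current
        temp *= cooling
    return best

# Toy cluster: all metrics pre-normalized to [0, 1].
nodes = [
    {"mobility": 0.9, "delay": 0.7, "energy": 0.2},
    {"mobility": 0.1, "delay": 0.2, "energy": 0.9},  # clearly best CH
    {"mobility": 0.5, "delay": 0.5, "energy": 0.5},
]
rng = random.Random(42)
ch = anneal_cluster_head(nodes, rng=rng)
```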
Abstract: Wireless Sensor Networks (WSNs) have many advantages. Their deployment is easier and faster than that of wired sensor networks or other wireless networks, as they do not need fixed infrastructure. To aggregate data through network organization, nodes are partitioned into many small groups named clusters. WSN clustering helps sensor nodes achieve their performance goals: the nodes' energy consumption is reduced by eliminating redundant energy use and balancing energy use across the network. The aim of such clustering protocols is to prolong network life. Low Energy Adaptive Clustering Hierarchy (LEACH) is a popular clustering protocol for WSNs in which random rotation of local cluster heads is utilized to distribute the energy load among all sensor nodes in the network. This paper proposes Connected Dominating Set (CDS) based cluster formation. CDS-based aggregation is a promising approach for reducing routing overhead, since messages are transmitted only within the virtual backbone formed by the CDS, and aggregation also lowers the ratio of responding hosts to the hosts in the virtual backbone. The CDS approach tries to increase network lifetime by considering parameters such as sensor lifetime and remaining and consumed energy, in order to achieve nearly optimal data aggregation within the network. Experimental results show that CDS outperformed LEACH in the number of clusters formed, average packet loss rate, average end-to-end delay, network lifetime, and remaining energy.
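The virtual-backbone idea can be illustrated with a classic greedy connected dominating set construction. This is a generic heuristic (in the style of Guha-Khuller), not the paper's energy-aware variant; the toy topology is invented.

```python
def greedy_cds(adj):
    """Greedy connected dominating set (virtual backbone) construction.

    Grow a black (backbone) set from the highest-degree node; at each
    step blacken the gray node (dominated, adjacent to the backbone)
    that newly dominates the most white (uncovered) nodes. The black
    set stays connected because every blackened node was gray.
    """
    white = set(adj)
    gray, black = set(), set()

    def blacken(u):
        black.add(u)
        white.discard(u)
        gray.discard(u)
        for v in adj[u]:
            if v in white:
                white.discard(v)
                gray.add(v)

    blacken(max(adj, key=lambda u: len(adj[u])))   # seed: max degree
    while white:
        u = max(gray, key=lambda g: sum(1 for v in adj[g] if v in white))
        blacken(u)
    return black

# Toy WSN topology: a path with a hub node "c".
adj = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d", "e"],
    "d": ["c"], "e": ["c", "f"], "f": ["e"],
}
backbone = greedy_cds(adj)
```

Only backbone nodes forward aggregated traffic; every other node is one hop from the backbone.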
Abstract: In this paper, a thorough review of dual-cubes, DCn,
the related studies, and their variations is given. DCn was introduced
as a network that retains the desirable properties of the hypercube Qn
but has a much smaller diameter. In fact, it is constructed so that the
number of vertices of DCn equals the number of vertices of Q2n+1.
However, each vertex in DCn is adjacent to n + 1 neighbors, and
so DCn has (n + 1) × 2^2n edges in total, which is roughly half the
number of edges of Q2n+1. In addition, the diameter of any DCn is
2n + 2, which is of the same order as that of Q2n+1.
For self-completeness, basic definitions, construction rules, and
symbols are provided. We chronicle the results, presenting eleven
significant theorems, and include some open problems at the end.
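The counts stated above can be checked numerically; for instance, DC3 has as many vertices as Q7 but roughly half its edges.

```python
def dualcube_counts(n):
    """Vertex/edge counts and diameter of DCn versus Q_{2n+1},
    straight from the formulas stated in the abstract."""
    vertices = 2 ** (2 * n + 1)           # same as Q_{2n+1}
    edges_dc = (n + 1) * 2 ** (2 * n)     # degree n+1: (n+1)*2^(2n+1)/2
    edges_q = (2 * n + 1) * 2 ** (2 * n)  # Q_{2n+1}: (2n+1)*2^(2n) edges
    diameter_dc = 2 * n + 2               # versus 2n+1 for Q_{2n+1}
    return vertices, edges_dc, edges_q, diameter_dc

v, e_dc, e_q, d = dualcube_counts(3)      # DC3 vs. Q7
```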
Abstract: The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and model it as realistically as possible. However, there are few empirical scavenging models, and these are highly specialized; in a design optimization process they prove very restrictive, and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested depending on their fields of application: the NTF method and neural networks. Both prove highly appropriate, drastically reducing the model's size (over 90% reduction) with a low relative error rate (under 10%). Furthermore, each method produces a reduced model which can be used in a distinct specialized field of application: the distribution of a quantity (mass fraction, for example) in the cylinder at each time step (pseudo-dynamic model), or the qualification of scavenging at the end of the process (pseudo-static model).
Abstract: Fiber-Wireless (FiWi) networks are a promising candidate for future broadband access networks. These networks combine an optical network as the back end, where different passive optical network (PON) technologies are realized, and a wireless network as the front end, where different wireless technologies are adopted, e.g. LTE, WiMAX, Wi-Fi, and Wireless Mesh Networks (WMNs). The convergence of optical and wireless technologies requires designing architectures with robust, efficient, and effective bandwidth allocation schemes. Different bandwidth allocation algorithms have been proposed for FiWi networks, aiming to enhance the different segments of FiWi networks, including the wireless and optical subnetworks. In this survey, we focus on differentiating between bandwidth allocation algorithms according to the segment of the FiWi network they enhance, classifying these techniques into wireless, optical, and hybrid bandwidth allocation techniques.
Abstract: One of the leading problems in cyber security today
is the emergence of targeted attacks conducted by adversaries with
access to sophisticated tools. These attacks usually steal senior-level
employees' system privileges in order to gain unauthorized access to
confidential knowledge and valuable intellectual property. The
malware used for the initial compromise of the systems is
sophisticated and may target zero-day vulnerabilities. In this work we
exploit a common behaviour of malware called "beaconing", in which
infected hosts communicate with Command and Control (C2) servers
at regular intervals with relatively small time variations. By analysing
such beacon activity through passive network monitoring, it is
possible to detect potential malware infections. We therefore focus on
time gaps as indicators of possible C2 activity in targeted enterprise
networks. We represent DNS log files as a graph whose vertices
are destination domains and whose edges carry timestamps. Then,
using four periodicity detection algorithms for each pair of
internal-external communications, we check the timestamp sequences
to identify beacon activity. Finally, based on the graph structure, we
infer the existence of other infected hosts and malicious domains
enrolled in the attack activities.
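One simple periodicity test of this kind flags near-constant inter-arrival gaps. This sketch is only one of many possible detectors; the coefficient-of-variation threshold, minimum event count, and toy log are our assumptions, not the paper's four algorithms or values.

```python
import statistics

def looks_like_beacon(timestamps, max_cv=0.1, min_events=5):
    """Flag a destination whose contact times are near-periodic.

    Beaconing malware calls home at regular intervals with small
    jitter, so the inter-arrival gaps have a low coefficient of
    variation (CV = stddev / mean).
    """
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return False
    cv = statistics.pstdev(gaps) / mean
    return cv <= max_cv

# DNS log as a graph: internal host -> {destination domain: timestamps}.
dns_graph = {
    "10.0.0.5": {
        "evil.example.com": [0, 60, 121, 180, 241, 300],  # ~60 s beacon
        "cdn.example.net": [0, 3, 47, 200, 210, 500],     # human browsing
    }
}
flagged = [dom for dom, ts in dns_graph["10.0.0.5"].items()
           if looks_like_beacon(ts)]
```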
Abstract: Ionic liquids are finding a wide range of applications, from reaction media to separations and materials processing, and in these applications vapor-liquid equilibrium (VLE) is the most important property. VLE data for six systems at 353 K and activity coefficients at infinite dilution [(γ)_i^∞] for various solutes (alkanes, alkenes, cycloalkanes, cycloalkenes, aromatics, alcohols, ketones, esters, ethers, and water) in the ionic liquids 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [EMIM][BTI], 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [HMIM][BTI], 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [OMIM][BTI], and 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide [BMPYR][BTI] have been used to train neural networks in the temperature range from 303 to 333 K. The densities of the ionic liquids, the Hildebrand constants of the substances, and the temperature were selected as inputs of the neural networks. Networks with different hidden layers were examined; networks with seven neurons in one hidden layer showed the minimum error and good agreement with experimental data.
Abstract: Social networks have recently gained growing
interest on the web. Traditional formalisms for representing social
networks are static and suffer from a lack of semantics. In this
paper, we show how semantic web technologies can be used to
model social data. The SemTemp ontology aligns and extends
existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to
provide a temporal and semantically rich description of social data.
We also present a modeling scenario to illustrate how our ontology
can be used to model social networks.
Abstract: Mumbai has traditionally been the epicenter of India's
trade and commerce, and its existing major ports, Mumbai Port and
Jawaharlal Nehru Port (JN), situated in the Thane estuary, are
developing their waterfront facilities. Various developments in this
region over the past decades have changed the tidal flux
entering and leaving the estuary. The intake at Pir-Pau faces a
shortage of water due to the advancement of the shoreline,
while the jetty near Ulwe faces ship-scheduling problems due to
the shallow depths between JN Port and Ulwe Bunder. In
order to solve these problems, information about tide levels over a
long duration is indispensable, normally obtained by field
measurements. However, field measurement is a tedious and costly
affair; instead, artificial intelligence was applied to predict water
levels by training a network on the tide data measured over one
lunar tidal cycle. A two-layer feed-forward Artificial Neural
Network (ANN) with back-propagation training algorithms, namely
Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to
predict the yearly tide levels at the waterfront structures at Ulwe
Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe,
and Vashi over one lunar tidal cycle (2013) were used to train,
validate, and test the neural networks. These trained networks,
having high correlation coefficients (R = 0.998), were used to
predict the tide at Ulwe and Vashi for verification against the tide
measured in 2000 and 2013. The results indicate that the tide levels
predicted by the ANN are reasonably accurate. Hence, the
trained network was used to predict the yearly tide data (2015) for
Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were
predicted using the neural network trained with the
measured tide data (2000) of Apollo and Pir-Pau. The analysis of
the measured data and the study reveal the following. The measured
tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum
amplification of the tide of about 10-20 cm with a phase lag of
10-20 minutes with reference to the tide at Apollo Bunder
(Mumbai). The LM training algorithm is faster than GD, and the
performance of the network increases with the number of neurons in
the hidden layer. The tide levels predicted by the ANN at Pir-Pau
and Ulwe provide valuable information about the occurrence of high
and low water levels for planning the pumping operation at Pir-Pau
and improving the ship schedule at Ulwe.
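A two-layer feed-forward network trained with plain gradient descent can be sketched from scratch on a toy periodic signal. This is an illustration only: the sinusoid stands in for tide data, the network size and learning rate are our assumptions, and the LM variant is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for tide data: one periodic signal over a cycle.
x = np.linspace(0, 1, 64).reshape(-1, 1)
y = np.sin(2 * np.pi * x)

# Two-layer feed-forward network: 1 input -> 8 tanh hidden -> 1 linear.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):                   # plain gradient descent (GD)
    h = np.tanh(x @ W1 + b1)            # hidden activations
    out = h @ W2 + b2                   # linear output
    err = out - y                       # dLoss/dout for 0.5*MSE
    # Back-propagation of the error through both layers.
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # derivative of tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = np.tanh(x @ W1 + b1) @ W2 + b2
mse = np.mean((pred - y) ** 2)
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]  # correlation coeff. R
```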
Abstract: Energy has a prominent role in the development of
nations. Countries which have energy resources also have strategic
power in the international trade of energy, since energy is essential
for all stages of production in the economy. Thus, it is important
for countries to analyze the strengths and weaknesses of the system.
On the other hand, international trade is one of the fields that can
be analyzed as a complex network via network analysis. Complex
network analysis is a tool for studying complex systems with
heterogeneous agents and the interactions between them. A complex
network consists of nodes and the interactions between these nodes.
In complex systems, the aggregate properties that emerge from these
interactions are (more or less) distinct from the sum of the parts, so
standard approaches to international trade are too superficial to
analyze such systems. Network analysis provides a new approach in
which countries constitute the nodes and trade relations (exports or
imports) constitute the edges. It then becomes possible to analyze
the international trade network in terms of higher-order indicators
specific to complex networks, such as connectivity, clustering,
assortativity/disassortativity, and centrality. In this study, the
international trade of crude oil and coal, two types of fossil fuel, is
analyzed from 2005 to 2014 via network analysis. First, it is
analyzed in terms of topological parameters such as density,
transitivity, and clustering. Afterwards, the fit to a Pareto
distribution is tested via the Kolmogorov-Smirnov test. Finally, the
weighted HITS algorithm is applied to the data as a centrality
measure to determine the real prominence of countries in these trade
networks. The weighted HITS algorithm is a strong tool for
analyzing the network by ranking countries with regard to the
prominence of their trade partners. We calculate both an export
centrality and an import centrality by applying the w-HITS
algorithm to the data. As a result, the impacts of the trading
countries are presented in terms of these higher-order indicators.
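Weighted HITS by power iteration can be sketched as follows. Interpreting hub scores as export centrality and authority scores as import centrality is our reading of the abstract, and the toy trade volumes are invented.

```python
def weighted_hits(edges, iters=50):
    """Weighted HITS by power iteration.

    `edges` maps (exporter, importer) -> trade volume. Hub scores rank
    exporters by the prominence of their partners; authority scores
    rank importers. Scores are L2-normalized each iteration.
    """
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(w * hub[u] for (u, v), w in edges.items() if v == n)
                for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: sum(w * auth[v] for (u, v), w in edges.items() if u == n)
               for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return hub, auth

# Toy crude-oil trade network: (exporter, importer) -> volume.
edges = {("SAU", "CHN"): 5.0, ("SAU", "USA"): 3.0,
         ("RUS", "CHN"): 4.0, ("NGA", "USA"): 1.0}
export_rank, import_rank = weighted_hits(edges)
```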
Abstract: Companies face increasing challenges in research due
to higher costs and risks. The intensifying technology complexity and
interdisciplinarity require unique know-how. Therefore, companies
need to decide whether research shall be conducted internally or
externally with partners. On the other hand, research institutes face
increasing effort to secure good financing and to maintain a high
research reputation. Therefore, relevant research topics need to be
identified and specialization of competencies is necessary. However,
additional competencies for solving interdisciplinary research
projects are also often required. Secure financing can be achieved by
bonding with industry partners as well as through public funding.
The realization
of faster and better research drives companies and research institutes
to cooperate in organized research networks, which are managed by
an administrative organization. For an effective and efficient
cooperation, necessary processes, roles, tools and a set of rules need
to be determined. The goal of this paper is to present the state of
the art and to propose a governance framework for organized
research networks.
Abstract: Small-size, low-power sensors with sensing, signal
processing, and wireless communication capabilities are suitable for
wireless sensor networks. Due to limited resources and battery
constraints, complex routing algorithms used for the ad-hoc networks
cannot be employed in sensor networks. In this paper, we propose
node-disjoint multi-path hexagon-based routing algorithms in wireless
sensor networks. We present the details of the algorithm and compare
it with other works. Simulation results show that the proposed scheme
achieves better performance in terms of efficiency and message
delivery ratio.
Abstract: Meeting customers' needs and creating quality and
value while reducing costs through supply chain management
presents challenges and opportunities for companies and researchers.
In the light of these challenges, modern ideas must contribute to
countering them and exploiting the opportunities. Therefore, this
paper discusses the impact of quality costs on revenue sharing as one
of the most important incentives to configure business networks. This paper develops the quality cost approach to align with the
modern era. It develops a model to measure quality costs which
might enable firms to manage revenue sharing in a supply chain. The
developed model includes five categories; besides the well-known
four categories (namely prevention costs, appraisal costs, internal
failure costs, and external failure costs), a new category has been
developed in this research as a new vision of the relationship between
quality costs and innovations in industry. This new category is
Recycle Cost. This paper also examines whether such quality costs in
supply chains influence the revenue sharing between partners. Using the author's quality cost model, the relationship between
quality costs and revenue sharing among partners is examined using a
case study in an Egyptian manufacturing company which is a part of
a supply chain. This paper argues that the revenue-sharing
proportion allocated to the supplier increases as the supplier's
recycle cost increases, and that the proportion allocated to the
manufacturer increases as its prevention and appraisal costs increase
and as the failure costs and the recycle costs of both the
manufacturer and the suppliers decrease. However, the results also
present surprising findings. The purposes of this study are to develop
the quality cost approach and to understand the relationships
between quality costs and revenue sharing in supply chains. The
present study therefore contributes to theory and practice by
explaining how the cost of recycling can be incorporated into the
quality cost model to better understand revenue sharing among
partners in supply chains.