Abstract: Heat transfer due to forced convection of a copper-water
nanofluid has been predicted by an Artificial Neural Network
(ANN). The nanofluid is formed by dispersing copper
nanoparticles in water; the volume fractions considered here
range from 0% to 15%, and the Reynolds number is kept constant at 100.
The back-propagation algorithm is used to train the network. The
ANN is trained on input and output data obtained
from numerical simulations performed in the finite-volume-based
commercial Computational Fluid Dynamics (CFD) software
Ansys Fluent. The simulation results are compared
with the back-propagation ANN results. It is found that
forced convection heat transfer of the water-based nanofluid can be
predicted accurately by the ANN. It is also observed that the back-propagation
ANN predicts the heat transfer characteristics of the
nanofluid much more quickly than the standard CFD method.
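The prediction pipeline described above can be illustrated with a minimal back-propagation network. This is only a sketch: the hidden-layer size, learning rate, and the toy target function standing in for the CFD-generated training data are all assumptions, not the paper's actual setup.

```python
import math
import random

# Minimal one-hidden-layer network trained by back-propagation.
# Hyperparameters and the toy target (y = x^2) standing in for the
# CFD-generated (input, heat-transfer) pairs are illustrative.
random.seed(0)
H, lr = 8, 0.05                      # hidden units, learning rate (assumed)
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

data = [(x / 10.0, (x / 10.0) ** 2) for x in range(11)]

for _ in range(3000):                # epochs of plain stochastic descent
    for x, t in data:
        h, y = forward(x)
        err = y - t                  # derivative of 0.5*(y - t)^2 w.r.t. y
        for j in range(H):
            grad_h = err * w2[j] * h[j] * (1.0 - h[j])  # back-propagated
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)
```

Once trained, a forward pass is a handful of arithmetic operations, which is why an ANN surrogate evaluates far faster than rerunning a CFD solver.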
Abstract: The margin-based principle was proposed long
ago, and it has been shown, both theoretically and
practically, to reduce structural risk and improve classifier
performance. Meanwhile, the feed-forward neural network is
a traditional classifier which is currently very popular in deeper
architectures. However, the standard training algorithm for feed-forward
neural networks derives from the Widrow-Hoff principle,
which minimizes the squared error. In this paper, we propose
a new training algorithm for feed-forward neural networks based
on the margin-based principle, which effectively improves the
accuracy and generalization ability of neural network classifiers
with fewer labelled samples and a flexible network. We have conducted
experiments on four UCI open datasets and achieved good results,
as expected. In conclusion, our model can handle sparsely
labelled, high-dimensional datasets with high accuracy, while
migrating from the old ANN method to our method is easy and requires
almost no work.
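The contrast drawn above between the Widrow-Hoff (squared-error) rule and a margin-based rule can be sketched with a linear classifier trained under a hinge-style margin condition. The data, learning rate, and margin value below are illustrative assumptions, not the paper's algorithm.

```python
# Margin-based update for a linear classifier: unlike the
# Widrow-Hoff rule, which always pushes the output toward the
# target, this rule updates only when a sample falls inside the
# margin. Toy 2-D separable data with labels in {-1, +1}.
data = [((1.0, 2.0), 1), ((2.0, 1.5), 1),
        ((-1.0, -1.0), -1), ((-2.0, -0.5), -1)]
w = [0.0, 0.0]
b = 0.0
lr, margin = 0.1, 1.0                # assumed hyperparameters

for _ in range(200):
    for (x1, x2), y in data:
        score = w[0] * x1 + w[1] * x2 + b
        if y * score < margin:       # inside the margin: update
            w[0] += lr * y * x1
            w[1] += lr * y * x2
            b += lr * y

correct = sum(1 for (x1, x2), y in data
              if y * (w[0] * x1 + w[1] * x2 + b) > 0)
```

The design point is that correctly classified samples far from the boundary trigger no update, which is what drives the margin (and hence the structural-risk bound) rather than the raw squared error.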
Abstract: An efficient remanufacturing network leads to an
efficient design of a sustainable manufacturing enterprise. In a
remanufacturing network, products are collected from the customer
zone, then disassembled and remanufactured at a suitable remanufacturing
facility. In this respect, another issue to consider is how the returned
product is to be remanufactured; in other words, what is the best layout
for such a facility? In order to achieve a sustainable manufacturing
system, Cellular Manufacturing System (CMS) designs are highly
recommended, since CMSs combine the high throughput rates of line layouts
with the flexibility offered by functional layouts (job shops).
Introducing a CMS while designing a remanufacturing network
improves the utilization of such a network. This paper presents and
analyzes a comprehensive mathematical model for the design of
Dynamic Cellular Remanufacturing Systems (DCRSs). The
proposed model is, to date, the first to consider a CMS and a
remanufacturing system simultaneously. The proposed DCRS model
considers several manufacturing attributes, such as multi-period
production planning, dynamic system reconfiguration, duplicate
machines, machine capacity, available worker time, worker
assignments, and machine procurement, where demand is satisfied
entirely from returned products. A numerical example is presented
to illustrate the proposed model.
Abstract: The scheduling and mapping of tasks onto a set of
processors is considered a critical problem in parallel and
distributed computing systems. This paper deals with the problem of
dynamic scheduling on a special type of multiprocessor architecture
known as the Linear Crossed Cube (LCQ) network. This
multiprocessor is a hybrid network which combines the features of
both linear architectures and cube-based architectures.
Two standard dynamic scheduling schemes, namely Minimum
Distance Scheduling (MDS) and Two Round Scheduling (TRS),
are implemented on the LCQ network. Parallel tasks are
mapped and the load imbalance is evaluated on different sets of
processors in the LCQ network. The simulation results are evaluated,
and a thorough analysis of the results is carried out to
obtain the best solution for the given network in terms of residual load
imbalance and execution time. Other performance metrics,
such as speedup and efficiency, are also evaluated for the given
dynamic algorithms.
Abstract: The following article presents the Technology Centre
Ostrava (TCO) in the Czech Republic, describing the structure and
main research areas realized within the project ENET - Energy Units for
Utilization of Non-Traditional Energy Sources. More details are
presented from the research program dealing with the transformation,
accumulation and distribution of electric energy. The Technology Centre
has its own energy mix consisting of alternative fuel
sources that use process gases from the storage part, as well as
energy from the distribution network. The article focuses on the
properties and application possibilities of SiC semiconductor devices in
power semiconductor converters for photovoltaic systems.
Abstract: Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called “lightpaths”, are routed throughout the network. This requires the use of efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum, and hence to the refusal of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. The formulation uses a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
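As a rough illustration of greedy RWA (not the paper's exact heuristic, which also includes a post-treatment step), a first-fit rule assigns each lightpath the lowest-indexed wavelength that is free on every link of its path. The toy topology and requests below are invented for the example.

```python
# First-fit wavelength assignment sketch: each lightpath request is
# a (request_id, list_of_links) pair on a fixed route; the same
# wavelength must be free on every link of the route (wavelength
# continuity). Blocked requests get None.
def first_fit(requests, num_wavelengths):
    used = set()                           # (link, wavelength) pairs in use
    assignment = {}
    for req_id, path_links in requests:
        for w in range(num_wavelengths):
            if all((link, w) not in used for link in path_links):
                used.update((link, w) for link in path_links)
                assignment[req_id] = w
                break
        else:
            assignment[req_id] = None      # no wavelength fits: refused
    return assignment

# toy requests on a 4-node chain 1-2-3-4 (links named "a-b")
reqs = [("a", ["1-2", "2-3"]),
        ("b", ["2-3", "3-4"]),
        ("c", ["1-2"])]
assign = first_fit(reqs, 2)
```

Request "b" cannot reuse wavelength 0 because it shares link "2-3" with "a"; such forced spreading across wavelengths is exactly the fragmentation the abstract's formulation tries to minimize.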
Abstract: Typically, virtual communities exhibit the well-known
phenomenon of participation inequality, which means that only a
small percentage of users is responsible for the majority of
contributions. However, the sustainability of the community requires
that the group of active users must be continuously nurtured with new
users that gain expertise through a participation process. This paper
analyzes the time evolution of Open Source Software (OSS)
communities, considering users that join/abandon the community
over time and several topological properties of the network when
modeled as a social network. More specifically, the paper analyzes
the role of those users rejoining the community and their influence on
the global characteristics of the network.
Abstract: Cloud computing is a new technology in industry and
academia. The technology has grown and matured over the last half decade
and has proven its significant role in the changing environment of IT
infrastructure, where cloud services and resources are offered over the
network. Cloud technology enables users to use services and
resources without being concerned about the technical implications of
the technology. Substantial research work has been performed
on the usage of cloud computing in educational institutes, and the
majority of it provides cloud services over high-end blade servers
or other high-end CPUs. In contrast, this paper proposes a new stack
called “CiCKAStack” which provides cloud services over underutilized
computing resources, namely commodity computers.
“CiCKAStack” provides IaaS and PaaS using underlying commodity
computers. This not only increases the utilization of existing
computing resources but also provides an organized file system,
on-demand computing resources, and a design and development
environment.
Abstract: In a multi-cultural learning context, where ties are
weak and dynamic, combining qualitative with quantitative research
methods may be more effective. Such a combination may also allow
us to answer different types of question, such as about people’s
perception of the network. In this study, the use of observation,
interviews and photos was explored as a way of enhancing data from
social network questionnaires. Integrating all of these methods was
found to enhance the quality and accuracy of the data collected, while
also providing a richer story of the network dynamics and the factors that
shaped these changes over time.
Abstract: IEEE 802.11a/b/g standards provide multiple
transmission rates, which can be changed dynamically according to the
channel condition. Cooperative communications were introduced to
improve the overall performance of wireless LANs with the help of
relay nodes with higher transmission rates. The cooperative
communications are based on the fact that the transmission is much
faster when sending data packets to a destination node through a relay
node with higher transmission rate, rather than sending data directly to
the destination node at low transmission rate. To apply the cooperative
communications in wireless LAN, several MAC protocols have been
proposed. Some of them can result in collisions among relay nodes in a
dense network. To solve this problem, we propose a new
protocol in which relay nodes are grouped based on their transmission
rates, and only relay nodes in the highest-rate group then attempt to
gain channel access. Performance evaluation is conducted using simulation, and
shows that the proposed protocol significantly outperforms the
previous protocol in terms of throughput and collision probability.
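The grouping rule described above can be sketched in a few lines: bucket candidate relays by transmission rate and let only the highest-rate group contend for the channel. Node names and rate values below are made up for illustration.

```python
# Relay-grouping sketch: given candidate relays and their
# transmission rates (Mb/s), only the relays in the highest
# non-empty rate group are allowed to contend for channel access,
# shrinking the contention set and hence the collision probability.
def contending_relays(relays):
    """relays: dict node -> rate (Mb/s). Return the highest-rate group."""
    top_rate = max(relays.values())
    return sorted(node for node, rate in relays.items() if rate == top_rate)

relays = {"n1": 54, "n2": 48, "n3": 54, "n4": 11}   # hypothetical nodes
top_group = contending_relays(relays)
```

With four candidates but only two in the 54 Mb/s group, only those two contend, which is the mechanism the abstract credits for the reduced collision probability.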
Abstract: The Cone Penetration Test (CPT) is a common in-situ
test which generally investigates a much greater volume of soil more
quickly than possible from sampling and laboratory tests. Therefore,
it has the potential to realize both cost savings and assessment of soil
properties rapidly and continuously. The principal objective of this
paper is to demonstrate the feasibility and efficiency of using
artificial neural networks (ANNs) to predict the soil angle of internal
friction (Φ) and the soil modulus of elasticity (E) from CPT results
considering the uncertainties and non-linearities of the soil. In
addition, ANNs are used to study the influence of different
parameters and recommend which parameters should be included as
input parameters to improve the prediction. Neural networks discover
relationships in the input data sets through the iterative presentation
of the data and intrinsic mapping characteristics of neural topologies.
General Regression Neural Network (GRNN) is one of the powerful
neural network architectures which is utilized in this study. A large
amount of field and experimental data including CPT results, plate
load tests, direct shear box, grain size distribution and calculated data
of overburden pressure was obtained from a large project in the
United Arab Emirates. This data was used for the training and the
validation of the neural network. A comparison was made between
the results obtained from the ANN approach and some common
traditional correlations that predict Φ and E from CPT results, with
respect to the actual measured values in the collected data. The results
show that the ANN is a very powerful tool: very good agreement was
obtained between the ANN estimates and the actual measurements, in
comparison with other correlations available in the literature. The
study recommends some easily available parameters that should be
included in the estimation of the soil properties to improve the
prediction models. It is shown that using the friction ratio in the
estimation of Φ and the fines content in the estimation of E
considerably improves the prediction models.
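A GRNN prediction is essentially a kernel-weighted average of the training targets (the Nadaraya-Watson form), which a short sketch makes concrete. The one-dimensional toy data and the smoothing parameter below are assumptions, not the project's dataset.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.3):
    """General Regression Neural Network prediction: a Gaussian
    kernel-weighted average of training targets. sigma is the
    smoothing parameter (value here is an assumption)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# toy stand-in for (CPT-derived reading -> friction angle) pairs
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [30.0, 32.0, 35.0, 37.0, 40.0]
pred = grnn_predict(2.5, xs, ys)
```

Because prediction is a weighted average, a GRNN needs no iterative training, which is one reason the architecture is attractive for field data of this kind.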
Abstract: IEEE 802.16 (WiMAX) aims to provide high-speed
wireless access with wide coverage. The base station (BS)
and the subscriber station (SS) are the main parts of WiMAX.
WiMAX uses either Point-to-Multipoint (PMP) or mesh topologies.
In the PMP mode, the SSs connect to the BS to gain access to the
network. However, in the mesh mode, the SSs connect to each other
to gain access to the BS.
The main components of QoS management in the 802.16 standard
are the admission control, buffer management and packet scheduling.
In this paper, we use QualNet 5.0.2 to study the performance of
different scheduling schemes, such as WFQ, SCFQ, RR and SP, as the
number of SSs increases. We find that as the number of SSs
increases, the average jitter and average end-to-end delay increase
and the throughput is reduced.
Abstract: Every machine plays the roles of both client and server
simultaneously in a peer-to-peer (P2P) network. Though a P2P
network has many advantages over traditional client-server models
regarding efficiency and fault-tolerance, it also faces additional
security threats. Users/IT administrators should be aware of risks
from malicious code propagation, downloaded content legality, and
P2P software’s vulnerabilities. Security and preventative measures
are a must to protect networks from potential sensitive information
leakage and security breaches. BitTorrent is a popular and scalable
P2P file distribution mechanism which successfully distributes large
files quickly and efficiently without overloading the origin server.
According to measurement studies, BitTorrent has achieved excellent
upload utilization, but it has also raised many questions regarding
its utilization in settings other than those measured, its fairness,
and the choice of its mechanisms. This work proposes a block selection
technique using fuzzy ACO, with optimal rules selected using ACO.
Abstract: Estimation of model parameters is necessary to predict
the behavior of a system. Model parameters are estimated using
optimization criteria. Most algorithms use historical data to estimate
model parameters. The known target values (actual) and the output
produced by the model are compared. The differences between the
two form the basis for estimating the parameters. In order to compare
different models developed using the same data, different criteria are
used. The data obtained for short-scale projects are used here. We
consider the software effort estimation problem using a radial basis
function network. The accuracy comparison is made using various
existing criteria for one and two predictors. Then, we propose a new
criterion based on linear least squares for evaluation and compare
the results for one and two predictors. We have considered another
data set and evaluated prediction accuracy using the new criterion.
The new criterion is easier to comprehend than a single statistic.
Although software effort estimation is considered, this method is
applicable for any modeling and prediction.
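The radial basis function network mentioned above can be sketched by placing a Gaussian basis function at each training input and solving a linear system for the output weights. The toy predictor/target pairs and the kernel width are illustrative assumptions, not the study's project data.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(x, center, width=1.0):       # Gaussian basis (width assumed)
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

# toy (predictor, effort) pairs; centres placed at the training inputs
xs = [1.0, 2.0, 3.0, 4.0]
ys = [5.0, 9.0, 14.0, 20.0]
Phi = [[rbf(x, c) for c in xs] for x in xs]
weights = gauss_solve(Phi, ys)       # linear least squares / interpolation

def predict(x):
    return sum(w * rbf(x, c) for w, c in zip(weights, xs))
```

With centres at the training points the network interpolates the data exactly, and only the output weights are fitted linearly, which is what makes least-squares criteria natural for this model class.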
Abstract: In this paper, we have compared and analyzed the
electroabsorption properties of bulk high-purity GaAs spatial light
modulators with and without the excitonic effect, for optical fiber
communication networks. The electroabsorption properties, such as the
absorption spectra, the change in absorption spectra, the change in
refractive index, and the extinction ratio, have been calculated. We have
also compared the calculated absorption spectra and change in
absorption spectra with experimental results and found close
agreement.
Abstract: Load modeling is one of the central functions in
power systems operations. Electricity cannot be stored, which means
that for an electric utility an estimate of the future demand is necessary
to manage production and purchasing in an economically
reasonable way. A majority of recently reported approaches are
based on neural networks. The attraction of these methods lies in the
assumption that neural networks are able to learn properties of the
load. However, the development of the methods is not finished, and
the lack of comparative results on different model variations is a
problem. This paper presents a new approach in order to predict the
Tunisia daily peak load. The proposed method employs a
computational intelligence scheme based on the Fuzzy neural
network (FNN) and support vector regression (SVR). Experimental
results indicate that our proposed FNN-SVR technique gives
significantly better prediction accuracy than some classical
techniques.
Abstract: Distribution networks are often exposed to harmful
incidents which can halt the electricity supply to customers. In this
context, we studied a real case of a critical zone of the Tunisian
network which is currently characterized by the malfunction of its
protection plan. In this paper, we focus on the
harmonization of the protection plan settings in order to ensure
perfect selectivity and better continuity of service over the whole
network.
Abstract: A Distributed Denial of Service (DDoS) attack is a
major threat to cyber security. It originates from the network layer or
the application layer of compromised/attacker systems which are
connected to the network. The impact of this attack ranges from
simple inconvenience in using a particular service to major
failures at the targeted server. When there is heavy traffic flow to a
target server, it is necessary to distinguish legitimate accesses from
attacks. In this paper, a novel method is proposed to detect DDoS
attacks from traces of traffic flow. An access matrix is created
from the traces. As the access matrix is multi-dimensional, Principal
Component Analysis (PCA) is used to reduce the attributes used for
detection. Two classifiers, Naive Bayes and K-Nearest Neighbor,
are used to classify the traffic as normal or abnormal. The
performance of the classifiers with PCA-selected attributes and the
actual attributes of the access matrix is compared in terms of
detection rate and False Positive Rate (FPR).
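The classification stage can be sketched with a k-nearest-neighbour rule over rows of an access matrix (the PCA step, which needs an eigen-solver, is omitted here). The feature columns, labels, and toy values below are assumptions made for illustration.

```python
import math

def knn_classify(sample, train, k=3):
    """k-nearest-neighbour majority vote over labelled traffic
    vectors. Each training item is (feature_vector, label)."""
    dists = sorted((math.dist(sample, vec), label) for vec, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# toy access-matrix rows: (packets/s, distinct sources, avg pkt size)
train = [
    ((10, 2, 500), "normal"), ((12, 3, 480), "normal"),
    ((11, 2, 520), "normal"),
    ((900, 400, 60), "attack"), ((850, 390, 64), "attack"),
    ((950, 410, 58), "attack"),
]
label = knn_classify((880, 395, 62), train)
```

In the paper's setup the same vote would run on PCA-reduced attributes instead of the raw columns, trading a little information for a much smaller feature space.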
Abstract: Wireless mesh networking is rapidly gaining
popularity with a variety of users: from municipalities to enterprises,
from telecom service providers to public safety and military
organizations. This increasing popularity rests on two basic facts:
WMNs are easy to deploy, since they do not rely on any fixed
infrastructure, and they increase network capacity in terms of
bandwidth per unit area. Much effort has been devoted to maximizing
the throughput of multi-channel multi-radio wireless
mesh networks. Current approaches are based purely on either static or
dynamic channel allocation. In this paper, we use a
hybrid multi-channel multi-radio wireless mesh networking
architecture, in which both static and dynamic interfaces are built
into the nodes, and propose a Dynamic Adaptive Channel Allocation
(DACA) protocol which optimizes both throughput and delay in the
channel allocation. Channel assignment is made co-dependent with the
routing problem in the wireless mesh network and is based on the
traffic flow on every link. Temporal and spatial traffic variations
require recomputing the channel assignment whenever the traffic
pattern in the mesh network changes. We also propose a routing metric
that captures the available path bandwidth, and an efficient routing
protocol based on it, covering both static and dynamic links. A
consistency property guarantees that each node makes an appropriate
packet forwarding decision while balancing the control overhead of
the network, so that each data packet traverses the right path.
Abstract: The star network is one of the promising
interconnection networks for future high-speed parallel computers and
is expected to be one of the future-generation networks. The star
network is both edge- and vertex-symmetric, has been shown to have
many attractive topological properties, and possesses a hierarchical
structure. Although much research has been done on this promising
network in the literature, it still lacks sufficient algorithms for
the load balancing problem. In this paper we address this issue by
investigating and proposing an efficient algorithm for the load
balancing problem on the star network. The proposed algorithm,
called the Star Clustered Dimension Exchange Method (SCDEM), is
implemented on the star network and is based on the Clustered
Dimension Exchange Method (CDEM). The SCDEM algorithm is shown to be
efficient in redistributing the load as evenly as possible among all
nodes of different factor networks.
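The dimension-exchange primitive underlying CDEM (and hence SCDEM) can be sketched on a hypercube, where in each dimension every node averages its load with its neighbour across that dimension; SCDEM's adaptation of this idea to star graphs via clustering is not shown, and the load values below are invented.

```python
# Generic dimension-exchange sketch on a hypercube: node i's
# neighbour across dimension d is i XOR 2^d. Sweeping all dimensions
# once with exact averaging equalizes the load across all nodes.
def dimension_exchange(loads):
    n = len(loads)                   # must be a power of two
    dims = n.bit_length() - 1
    loads = loads[:]
    for d in range(dims):
        for i in range(n):
            j = i ^ (1 << d)         # neighbour across dimension d
            if i < j:                # handle each pair once
                avg = (loads[i] + loads[j]) / 2
                loads[i] = loads[j] = avg
    return loads

balanced = dimension_exchange([8, 0, 4, 0, 2, 6, 0, 4])
```

After the sweep every node holds the global average (here 3.0); a star-graph variant must first define which neighbour plays the role of the "dimension d" partner, which is what the clustering in CDEM/SCDEM provides.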