Abstract: Multiprotocol Label Switching (MPLS) is an
emerging technology that aims to address many of the existing issues
associated with packet forwarding in today's internetworking
environment. It provides a method of forwarding packets at a high
rate of speed by combining the speed and performance of Layer 2
with the scalability and IP intelligence of Layer 3. In a traditional IP
(Internet Protocol) routing network, a router analyzes the destination
IP address contained in the packet header. The router independently
determines the next hop for the packet using the destination IP
address and the interior gateway protocol. This process is repeated at
each hop to deliver the packet to its final destination. In contrast, in
the MPLS forwarding paradigm, routers on the edge of the network
(label edge routers) attach labels to packets based on their
Forwarding Equivalence Class (FEC). Packets are then forwarded through the
MPLS domain, based on their associated FECs, by label swapping
at routers in the core of the network called label switch
routers. The act of simply swapping the label instead of referencing
the IP header of the packet in the routing table at each hop provides
a more efficient manner of forwarding packets, which in turn allows
the opportunity for traffic to be forwarded at tremendous speeds and
to have granular control over the path a packet takes. This paper
describes the MPLS forwarding mechanism, the implementation of the
MPLS data path, and test results comparing the performance of MPLS
and IP routing. The discussion focuses primarily on MPLS IP packet
networks, by far the most common application of MPLS today.
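The label-swapping step described above amounts to a single table lookup at each core router. The following minimal sketch illustrates the idea; the label values, interface names, and table layout are illustrative assumptions, not taken from this paper.

```python
# Minimal sketch of MPLS label swapping at a label switch router (LSR).
# The label values and interface names are illustrative assumptions.

# LFIB: incoming label -> (outgoing label, outgoing interface)
LFIB = {
    100: (200, "eth1"),
    101: (201, "eth2"),
}

def forward(packet):
    """Swap the top label and pick the outgoing interface via one LFIB lookup.

    No IP header parsing or routing-table lookup is needed in the core:
    the label alone determines the next hop.
    """
    out_label, out_iface = LFIB[packet["label"]]
    packet = dict(packet, label=out_label)
    return packet, out_iface

pkt = {"label": 100, "payload": "ip-datagram"}
swapped, iface = forward(pkt)
print(swapped["label"], iface)  # one O(1) lookup replaces a longest-prefix match
```

This single dictionary lookup stands in for the per-hop longest-prefix match that conventional IP routing performs, which is the efficiency gain the abstract describes.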
Abstract: Image coding based on clustering provides immediate
access to targeted features of interest in a high-quality decoded
image. This approach is useful for intelligent devices, as well as for
multimedia content-based description standards. The result of image
clustering cannot be precise at some positions, especially at pixels
carrying edge information, which produces ambiguity among the clusters.
Even with a good enhancement operator based on PDE, the quality of
the decoded image will highly depend on the clustering process. In
this paper, we introduce an ambiguity cluster in image coding to
represent pixels with vagueness properties. The presence of such a
cluster allows preserving details inherent to edges as well as
uncertain pixels. It will also be very useful during the decoding phase
in which an anisotropic diffusion operator, such as Perona-Malik,
enhances the quality of the restored image. This work also offers a
comparative study to demonstrate the effectiveness of a fuzzy
clustering technique in detecting the ambiguity cluster without losing
much of the essential image information. Several experiments have been
carried out to demonstrate the usefulness of the ambiguity concept in
image compression. The coding results and the performance of the
proposed algorithms are discussed in terms of the peak signal-to-noise
ratio and the quantity of ambiguous pixels.
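The idea of an ambiguity cluster can be sketched from fuzzy clustering memberships: a pixel whose two highest cluster memberships are nearly equal is routed to the ambiguity cluster instead of being forced into one class. The threshold value below is an illustrative assumption, not the paper's parameter.

```python
# Sketch: flagging "ambiguous" pixels from fuzzy clustering memberships.
# A pixel whose two highest cluster memberships are nearly equal is assigned
# to an ambiguity cluster. The threshold is an illustrative assumption.

def assign_with_ambiguity(memberships, threshold=0.2):
    """memberships: per-cluster membership degrees for one pixel.

    Returns the winning cluster index, or -1 (ambiguity cluster) when the
    margin between the top two memberships is below `threshold`.
    """
    ranked = sorted(range(len(memberships)),
                    key=lambda k: memberships[k], reverse=True)
    best, second = ranked[0], ranked[1]
    if memberships[best] - memberships[second] < threshold:
        return -1  # ambiguity cluster: typically pixels on edges
    return best

print(assign_with_ambiguity([0.48, 0.45, 0.07]))  # -> -1 (edge-like, ambiguous)
print(assign_with_ambiguity([0.85, 0.10, 0.05]))  # -> 0 (confidently clustered)
```

Pixels placed in the ambiguity cluster can then be refined at decoding time, for example by the anisotropic diffusion step the abstract mentions.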
Abstract: The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce concrete with certain specific properties, optimum proportions of these ingredients are mixed. The important factors governing the mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. Conventional concrete mix design is based on experimentally evolved empirical relationships between these factors. Its basic drawbacks are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, the attainment of the desired strength is uncertain, and the mix may even fall below the target strength. To address this problem, a large number of cubes of standard grades were prepared and their 28-day strength determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was built from these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape and grading of the aggregates; the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward backpropagation model. Finally, the trained ANN was validated; it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
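The feed-forward backpropagation setup described above can be sketched as a one-hidden-layer network trained by gradient descent. Everything concrete here, the feature encoding (five inputs for grade, cement type, and aggregate size/shape/grading), the layer sizes, and the single training sample, is an illustrative assumption, not the paper's data.

```python
# Toy sketch of a feed-forward network with one hidden layer, trained by
# backpropagation. Feature encoding and the training sample are assumptions.
import math
import random

random.seed(0)

N_IN, N_HID, N_OUT = 5, 4, 3  # inputs -> hidden layer -> ingredient proportions

W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sum(w * hi for w, hi in zip(row, h)) for row in W2]  # linear output
    return h, y

def train_step(x, target, lr=0.1):
    h, y = forward(x)
    err = [yi - ti for yi, ti in zip(y, target)]  # dLoss/dy for 0.5 * SSE
    # hidden deltas must be computed from the pre-update output weights
    hid_delta = [
        sum(err[o] * W2[o][j] for o in range(N_OUT)) * h[j] * (1.0 - h[j])
        for j in range(N_HID)
    ]
    for o in range(N_OUT):            # output-layer weight update
        for j in range(N_HID):
            W2[o][j] -= lr * err[o] * h[j]
    for j in range(N_HID):            # hidden-layer weight update
        for i in range(N_IN):
            W1[j][i] -= lr * hid_delta[j] * x[i]
    return sum(e * e for e in err)

x = [0.3, 1.0, 0.5, 0.2, 0.8]   # encoded mix-design factors (assumed)
t = [0.15, 0.35, 0.50]          # target ingredient proportions (assumed)
losses = [train_step(x, t) for _ in range(200)]
print(losses[0], "->", losses[-1])  # backprop drives the training error down
```

A real mix-design model would be trained on the full cube-test dataset and validated on held-out mixes, as the abstract describes.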
Abstract: In this paper, the feasibility study of using a hybrid
system of ground heat exchangers (GHE) and direct evaporative
cooling system under arid weather conditions has been performed. The
model is applied to Yazd and Kerman, two cities with an arid climate
in Iran. The system is composed of three sections: a Ground-Coupled
Circuit (GCC), a Direct Evaporative Cooler (DEC) and a Cooling Coil
Unit (CCU). The GCC provides the necessary pre-cooling for the DEC
and includes four vertical GHEs arranged in a series configuration.
Simulation results show that the hybridization of the GCC and DEC can
provide comfort conditions, whereas the DEC alone cannot. Based on
the results, the cooling effectiveness of the hybrid system is
greater than unity. Thus, this novel
hybrid system could decrease the air temperature below the ambient
wet-bulb temperature. This environmentally clean and energy
efficient system can be considered as an alternative to the mechanical
vapor compression systems.
Abstract: Ventilation is a fundamental requirement for
occupant health and indoor air quality in buildings. Natural
ventilation can be used as a design strategy in free-running
buildings to:
• Renew indoor air with fresh outside air and lower room
temperatures at times when the outdoor air is cooler.
• Promote air flow to cool down the building structure
(structural cooling).
• Promote occupant physiological cooling processes
(comfort cooling).
This paper focuses on the ways in which ventilation can
provide the mechanism for heat dissipation and cooling of the
building structure. It also discusses the use of ventilation as a
means of increasing air movement to improve comfort when
indoor air temperatures are too high. The main influencing
factors, design considerations and quantitative guidelines that
help meet these design objectives are also discussed.
Abstract: Wireless Sensor Networks (WSNs) are emerging
because of developments in wireless communication technology and the miniaturization of hardware. A WSN consists of a large number of low-cost, low-power, multifunctional sensor nodes that monitor physical conditions such as temperature, sound, vibration, pressure,
motion, etc. The MAC protocol used in a sensor network must be energy efficient, conserving energy throughout its operation. In this paper, with the aim of applying the
MAC protocols used in wireless ad hoc networks to WSNs, simulation
experiments were conducted in the Global Mobile Simulator
(GloMoSim) software. The number of packets sent by regular nodes and received by the sink node under different deployment strategies, the total energy
spent, and the network lifetime were chosen as the metrics for comparison. The simulation results show that the IEEE 802.11 protocol performs better than the CSMA and MACA protocols.
Abstract: The localized corrosion behavior of laser surface
melted 304L austenitic stainless steel was studied by
potentiodynamic polarization test. The extent of improvement in
corrosion resistance was governed by the preferred orientation and
the percentage of delta ferrite present on the surface of the laser
melted sample. It was established by orientation imaging microscopy
that the highest pitting potential value was obtained when the grains
were oriented in the most close-packed [101] direction, compared with
the random orientation of the base metal and the other laser surface
melted samples oriented in the [001] direction. The sample with the
lower percentage of delta ferrite showed better pitting resistance.
Abstract: Proof-of-concept experiments were conducted to
determine the feasibility of using small amounts of Dissolved
Sulphur (DS) from the gaseous phase to precipitate platinum ions in
chloride media. Two sets of precipitation experiments were
performed in which the source of sulphur atoms was either a
thiosulphate solution (Na2S2O3) or sulphur dioxide gas (SO2). In the
liquid-liquid (L-L) system, complete precipitation of Pt was achieved
at small dosages of Na2S2O3 (0.01 – 1.0 M) in a time interval of 3-5
minutes. On the basis of this result, gas absorption tests were carried
out mainly to achieve sulphur solubility equivalent to 0.018 M. The
idea that huge amounts of precious metals could be recovered
selectively from their dilute solutions by utilizing the waste SO2
streams at low pressure seemed attractive from the economic and
environmental points of view. Therefore, the mass transfer
characteristics of SO2 gas associated with reactive absorption across
the gas-liquid (G-L) interface were evaluated under different
conditions of pressure (0.5 – 2 bar), solution temperature
(20 – 50 °C) and acid strength (1 – 4 M HCl). This paper concludes with information about
selective precipitation of Pt in the presence of cations (Fe2+, Co2+,
and Cr3+) in a CSTR and recommendation to scale up laboratory data
to industrial pilot scale operations.
Abstract: The aeration process via injectors is used to combat
the lack of oxygen in lakes due to eutrophication. A 3D numerical
simulation of the resulting flow using a simplified model is presented.
In order to generate the best dynamics in the fluid with respect to
the aeration purpose, the optimization of the injectors' locations is
considered. We propose to adapt to this problem the topological
sensitivity analysis method which gives the variation of a criterion
with respect to the creation of a small hole in the domain. The main
idea is to derive the topological sensitivity analysis of the physical
model with respect to the insertion of an injector in the fluid flow
domain. We propose in this work a topological optimization algorithm
based on the studied asymptotic expansion. Finally, we present some
numerical results showing the efficiency of our approach.
Abstract: In this paper, a pipelined version of genetic algorithm,
called PLGA, and a corresponding hardware platform are described.
The basic operations of a conventional GA (CGA) are pipelined
using an appropriate selection scheme. The selection operator, used
here, is stochastic in nature and is called SA-selection. This helps
maintain the basic generational nature of the proposed pipelined
GA (PLGA). A number of benchmark problems are used to compare
the performances of conventional roulette-wheel selection and the
SA-selection. These include unimodal and multimodal functions with
dimensionality varying from very small to very large. It is seen that
the SA-selection scheme gives performance comparable to the classical
roulette-wheel selection scheme in all instances, when the quality of
solutions and the rate of convergence are considered.
The speedups obtained by PLGA for different benchmarks
are found to be significant. It is shown that a complete hardware
pipeline can be developed using the proposed scheme, if parallel
evaluation of the fitness expression is possible. In this connection
a low-cost but very fast hardware evaluation unit is described.
Results of simulation experiments show that in a pipelined hardware
environment, PLGA will be much faster than CGA. In terms of
efficiency, PLGA is also found to outperform parallel GA (PGA).
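As a point of reference for the comparison above, classical roulette-wheel (fitness-proportionate) selection can be sketched as follows. The SA-selection scheme itself is specific to this paper and is not reproduced here; the population and fitness values below are illustrative assumptions.

```python
# Sketch of classical roulette-wheel selection, the baseline against which
# SA-selection is compared. Population and fitness values are assumptions.
import random

random.seed(1)

def roulette_select(population, fitness):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return ind
    return population[-1]  # guard against floating-point round-off

pop = ["a", "b", "c", "d"]
fit = [1.0, 2.0, 3.0, 4.0]
picks = [roulette_select(pop, fit) for _ in range(10000)]
print(picks.count("d") > picks.count("a"))  # fitter individuals picked more often
```

Because roulette-wheel selection needs the fitness of the whole generation before it can spin the wheel, it is hard to pipeline; a stochastic per-individual scheme like the paper's SA-selection avoids that global dependency.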
Abstract: In this paper, a neural network-based controller is
designed for the motion control of a mobile robot. This paper treats the
problems of trajectory following and posture stabilization of the
mobile robot with nonholonomic constraints. For this purpose the
recurrent neural network with one hidden layer is used. It learns the
relationship between the linear velocities and the position errors of
the mobile robot. This neural network is trained on-line using the
backpropagation optimization algorithm with an adaptive learning
rate. The optimization algorithm is performed at each sample time to
compute the optimal control inputs. The performance of the proposed
system is investigated using a kinematic model of the mobile robot.
Abstract: As Computed Tomography (CT) normally requires
hundreds of projections to reconstruct an image, patients are exposed
to more X-ray energy, which may cause side effects such as cancer.
Even when the variability of the particles in the object is very low,
Computed Tomography requires many projections for a good quality
reconstruction. In this paper, the low variability of the particles
in an object has been exploited to obtain a good quality reconstruction.
Although the reconstructed image and the original image have the same
projections, in general they need not be the same. In addition
to projections, if a priori information about the image is known,
it is possible to obtain good quality reconstructed image. In this
paper, it is shown by experimental results why conventional
algorithms fail to reconstruct from a few projections, and an efficient
polynomial-time algorithm is given to reconstruct a bi-level
image from its row and column projections, together with a known
sub-image of the unknown image and smoothness constraints, by reducing
the reconstruction problem to an integral max-flow problem. This paper also
discusses the necessary and sufficient conditions for uniqueness, and
the extension of 2D bi-level image reconstruction to 3D bi-level image
reconstruction.
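The core reconstruction task, recovering a binary image consistent with given row and column sums, can be sketched with a simple Ryser-style greedy scheme. The paper's algorithm additionally handles a known sub-image and smoothness constraints via max-flow; this minimal version, with an assumed 3x3 example, handles the projections only.

```python
# Sketch: reconstructing a bi-level (binary) image from its row and column
# sums with a Ryser-style greedy scheme. The 3x3 example is an assumption;
# the paper's full method uses max-flow with extra constraints.

def reconstruct(row_sums, col_sums):
    n_cols = len(col_sums)
    remaining = list(col_sums)
    placed = []
    # process rows in decreasing order of row sum
    for r in sorted(range(len(row_sums)), key=lambda i: -row_sums[i]):
        # place this row's ones in the columns with largest remaining demand
        cols = sorted(range(n_cols), key=lambda j: -remaining[j])[:row_sums[r]]
        placed.append((r, cols))
        for j in cols:
            remaining[j] -= 1
    # reassemble the grid in original row order
    grid = [[0] * n_cols for _ in row_sums]
    for r, cols in placed:
        for j in cols:
            grid[r][j] = 1
    # None signals inconsistent projections (no binary image matches)
    return grid if all(v == 0 for v in remaining) else None

img = reconstruct([2, 1, 2], [2, 2, 1])
print(img)  # a binary image consistent with both projection vectors
```

As the abstract notes, such a solution need not be unique; the extra constraints (a known sub-image, smoothness) are what pin down the intended image.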
Abstract: Wireless Mesh Networking is a promising proposal
for broadband data transmission in a large area with low cost and
acceptable QoS. The trade-offs among these features in WMNs are a hot
research field nowadays. In this paper, a mathematical optimization
framework has been developed to maximize throughput subject to
upper-bound delay constraints. The IEEE 802.11 based infrastructure
backhauling mode of WMNs has been considered to formulate the
MINLP optimization problem. The proposed method gives the full
routing and scheduling procedure in a WMN in order to achieve
these goals.
Abstract: Dhaka, the capital city of Bangladesh, is one of the
most densely populated cities in the world. Due to rapid urbanization,
60% of its population lives in slum and squatter settlements. The
reasons behind this poverty are low economic growth, inequitable
distribution of income, unequal distribution of productive assets,
unemployment and underemployment, a high rate of population growth, a
low level of human resource development, natural disasters, and
limited access to public services. Along with poverty, the pressure on
urban land, shelter, plots and open spaces creates environmental and
ecological degradation. These constraints mostly result from failures
of government policies and measures, and only the government can
solve this problem. It is now high time to establish planning and
environmental management policies and sustainable urban development
for the city and for the urban slum dwellers, free from eviction,
criminals, rent seekers and other miscreants.
Abstract: This paper presents a genetic algorithm based
approach for solving security constrained optimal power flow
problem (SCOPF) including FACTS devices. The optimal locations of the
FACTS devices are identified using an index called the overload index,
and their optimal settings are obtained using an enhanced genetic
algorithm. The optimal allocation by the proposed method optimizes
the investment, taking into account its effect on security in terms of
the alleviation of line overloads. The proposed approach has been
tested on the IEEE 30-bus system to show the effectiveness of the
proposed algorithm for solving the SCOPF problem.
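A common form of a line-overload index, used in the literature to rank candidate locations for compensation devices, is sketched below. The paper's exact "overload index" definition is not given in the abstract, so this formula, and the flow and limit values, should be read as illustrative assumptions.

```python
# Illustrative line-overload index: sum of squared loading ratios over
# overloaded lines only. The paper's exact index definition may differ.

def overload_index(line_flows, line_limits):
    """Return a severity score; higher means worse line overloads."""
    return sum(
        (flow / limit) ** 2
        for flow, limit in zip(line_flows, line_limits)
        if abs(flow) > limit
    )

flows = [80.0, 120.0, 95.0]     # MW on each line (assumed values)
limits = [100.0, 100.0, 100.0]  # thermal limits (assumed values)
print(overload_index(flows, limits))  # only the 120 MW line contributes
```

Lines (or buses) with the largest contributions to such an index would be the natural candidates for FACTS placement, which the genetic algorithm then tunes.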
Abstract: This is a study of the numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube lined with a very thin layer of retentive and absorptive material. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One of the results shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
Abstract: Li1.5Al0.5Ti1.5(PO4)3 (LATP) has received much
attention as a solid electrolyte for lithium batteries. In this study, the
LATP solid electrolyte is prepared by the co-precipitation method
using Li3PO4 as a Li source. The LATP is successfully prepared and
the Li ion conductivities of bulk (inner crystal) and total (inner crystal
and grain boundary) are 1.1 × 10-3 and 1.1 × 10-4 S cm-1, respectively.
These values are comparable to the reported values, in which Li2C2O4
is used as the Li source. It is concluded that the LATP solid
electrolyte can be prepared by the co-precipitation method using
Li3PO4 as the Li source, and this procedure has an advantage in mass
production over the previous procedure using Li2C2O4 because Li3PO4
is a lower-priced reagent than Li2C2O4.
Abstract: Computed tomography and laminography are heavily investigated in a compressive-sensing-based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays researchers are actively working on optimizing compressive-sensing-based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigate the effects of two data properties, i.e., sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We have found that in a compressive-sensing-based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.
Abstract: Most file systems overwrite modified file data and
metadata in their original locations, while the Log-structured File
System (LFS) dynamically relocates them to other locations. We
design and implement the Evergreen file system, which can select
between overwriting and relocation for each block of a file or its metadata.
Therefore, the Evergreen file system can achieve superior write
performance by sequentializing write requests (similar to LFS-style
relocation) when space utilization is low and overwriting when
utilization is high. Another challenging issue is identifying the
performance benefits of LFS-style relocation over overwriting on the
newly introduced SSD (Solid State Drive), which has only
Flash-memory chips and control circuits without mechanical parts.
Our experimental results measured on an SSD show that relocation
outperforms overwriting when space utilization is below 80%, and vice
versa.
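The per-block policy choice implied by these results can be sketched as a simple threshold rule. The function name and threshold constant are illustrative assumptions based on the 80% crossover reported above, not the Evergreen implementation's actual API.

```python
# Sketch of a per-block write-policy choice, based on the 80% space-
# utilization crossover observed in the SSD experiments. Names and the
# threshold constant are illustrative assumptions, not Evergreen's API.

RELOCATION_THRESHOLD = 0.80  # below this utilization, relocate (LFS-style)

def choose_policy(space_utilization):
    """Return the write strategy for a block given current space utilization."""
    if space_utilization < RELOCATION_THRESHOLD:
        return "relocate"   # sequentialize writes into a clean segment
    return "overwrite"      # update in place; avoids costly cleaning when full

print(choose_policy(0.30))  # -> relocate
print(choose_policy(0.95))  # -> overwrite
```

The intuition matches the abstract: when free space is plentiful, log-style relocation turns random writes into sequential ones cheaply, but as the disk fills, the cleaning overhead of relocation outweighs that benefit.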
Abstract: The objective of this research is to optimize the
parameters for cutting a stair-shaped workpiece by CNC wire-cut EDM
(WEDM). The experimental material is SKD-11 steel, cut into
stair-shaped workpieces with variable heights of 10, 20, 30 and 40 mm
and a constant thickness of 10 mm, using Sodick's CNC Wire-Cut EDM
model AD325L.
The experiments are designed as a 3^k full factorial experiment
with 2 factors at 3 levels, giving 9 treatment combinations with 2
replicates. The two selected factors are the servo voltage (SV) and
the servo feed rate (SF), and the response is the cutting thickness
error. The study is divided into two experiments. The first experiment
determines the significant factors at a 95% confidence level; the SV
factor proved significant. The smallest cutting thickness error
obtained was 17 microns, at an SV value of 46 volts. The results also
show that the lower the SV value, the smaller the thickness error.
The second experiment was therefore conducted to reduce the cutting
thickness error as far as possible by lowering SV. It again found the
SV factor significant at a 95% confidence level, and the smallest
cutting thickness error was reduced to 11 microns, at an SV value of
36 volts.
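The 3^k full factorial layout described above (2 factors at 3 levels, replicated twice) can be enumerated directly. The specific level values below are illustrative assumptions; the abstract only fixes the factor names and the design size.

```python
# Sketch of the 3^2 full factorial design: 2 factors (SV, SF) at 3 levels
# each gives 9 treatment combinations, run with 2 replicates.
# The level values are illustrative assumptions, not the paper's settings.
from itertools import product

sv_levels = [36, 41, 46]  # servo voltage (V), assumed levels
sf_levels = [1, 2, 3]     # servo feed rate setting, assumed levels

runs = list(product(sv_levels, sf_levels))  # all 3 x 3 = 9 combinations
replicated = runs * 2                       # 2 replicates -> 18 runs total
print(len(runs), len(replicated))           # 9 18
```

Replication is what allows the significance of each factor to be tested at the 95% confidence level, as done in both experiments above.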