Abstract: Phase equilibrium calculations for AZ91D Mg alloys
intended for nonflammable use, containing Ca and Y, were carried out
using FactSage® and the FTLite database, which revealed that solid
solution treatment could be performed at temperatures from 400 to
450 °C. Solid solution treatment of the AZ91D Mg alloy without Ca
and Y was successfully conducted at 420 °C, and a supersaturated
microstructure with all of the beta phase dissolved into the matrix was
obtained. In the AZ91D Mg alloy containing Ca and Y, however, a
small amount of intermetallic particles remained after solid solution
treatment. After solid solution treatment, each alloy was annealed at
180 and 200 °C for times ranging from 1 min to 48 h, and the hardness
of each condition was measured by the micro-Vickers method. The
peak-aging condition was determined to be 200 °C for 10 h.
Abstract: Photoacoustic imaging (PAI) is a non-invasive and
non-ionizing imaging modality that combines the absorption contrast
of light with ultrasound resolution. A laser is used to deposit optical
energy (i.e., optical fluence) into a target. Consequently, the target
temperature rises and thermal expansion occurs, which generates a
PA signal.
algorithms for PAI assume uniform fluence within an imaging object.
However, it is known that optical fluence distribution within the
object is non-uniform. This could affect the reconstruction of PA
images. In this study, we have investigated the influence of optical
fluence distribution on PA back-propagation imaging using finite
element method. The uniform fluence was simulated as a triangular
waveform within the object of interest. The non-uniform fluence
distribution was estimated by solving light propagation within a
tissue model via the Monte Carlo method. The results show that the PA
signal in the non-uniform fluence case is 23% wider than in the
uniform case. The frequency spectrum of the PA signal under
non-uniform fluence lacks some high-frequency components present in
the uniform case. Consequently, the image reconstructed with
non-uniform fluence exhibits a strong smoothing effect.
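The generation step described above rests on the proportionality of the initial PA pressure to the local fluence. The sketch below contrasts a uniform fluence with a diffusion-like exponentially attenuated one; the Grüneisen parameter, absorption, and attenuation values are illustrative assumptions, not the paper's tissue model or Monte Carlo fluence map.

```python
import numpy as np

# Initial PA pressure: p0(z) = Gamma * mu_a * Phi(z).
# All coefficient values below are assumed for illustration.
gamma = 0.2                              # Grueneisen parameter (assumed)
mu_a = 0.5                               # absorption coefficient, 1/cm (assumed)
mu_eff = 1.8                             # effective attenuation, 1/cm (assumed)

z = np.linspace(0.0, 2.0, 200)           # depth, cm
phi_uniform = np.ones_like(z)            # uniform-fluence assumption
phi_nonuniform = np.exp(-mu_eff * z)     # diffusion-like exponential decay

p0_uniform = gamma * mu_a * phi_uniform
p0_nonuniform = gamma * mu_a * phi_nonuniform

# Under non-uniform fluence the initial pressure decays with depth, so
# reconstructed contrast no longer reflects mu_a alone.
print(p0_uniform[0], p0_nonuniform[-1])
```

The depth dependence of `p0_nonuniform` is what distorts the reconstructed image relative to the uniform assumption.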
Abstract: At present, it is widely known that free radicals are causes of illnesses such as cancer, coronary heart disease, Alzheimer’s disease, and aging. One method of protection from free radicals is the consumption of antioxidant-containing foods or herbs. Several analytical methods have been used for the qualitative and quantitative determination of antioxidants. This project aimed to evaluate the antioxidant activity of ethanolic and aqueous extracts from cabbage (Brassica oleracea L. var. capitata L.) measured by the DPPH and hydroxyl radical scavenging methods. The results show that the average antioxidant activities of the ethanolic extract (µmol ascorbic acid equivalent/g fresh mass) were 7.316 ± 0.715 and 4.66 ± 1.029 as determined by the DPPH and hydroxyl radical scavenging activity assays, respectively. The average antioxidant activities of the aqueous extract (µmol ascorbic acid equivalent/g fresh mass) were 15.141 ± 2.092 and 4.955 ± 1.975 as determined by the DPPH and hydroxyl radical scavenging activity assays, respectively.
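Ascorbic-acid-equivalent values like those reported above are conventionally read off a linear standard curve and normalized by sample mass. The sketch below illustrates that conversion; the curve slope, intercept, absorbance, and volumes are hypothetical, not the study's.

```python
# Hypothetical conversion of a DPPH assay reading to ascorbic acid
# equivalents.  Standard-curve parameters are illustrative only.
def aae_per_gram(absorbance, slope, intercept, extract_volume_ml, fresh_mass_g):
    """Antioxidant activity in umol AAE per g fresh mass.

    The linear standard curve  C = slope * A + intercept  maps the
    measured absorbance change A to an ascorbic acid concentration C
    (umol/mL) in the reaction mixture.
    """
    concentration = slope * absorbance + intercept        # umol/mL
    return concentration * extract_volume_ml / fresh_mass_g

# e.g. 10 mL of extract prepared from 5 g of fresh cabbage
result = aae_per_gram(0.45, 8.0, 0.1, 10.0, 5.0)
print(result)
```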
Abstract: Building code-related literature provides
recommendations on normalizing approaches to the calculation of
the dynamic properties of structures. Most building codes make a
distinction among types of structural systems, construction material,
and configuration through a numerical coefficient in the
expression for the fundamental period. The period is then used in
normalized response spectra to compute base shear. The typical
parameter used in simplified code formulas for the fundamental
period is overall building height raised to a power determined from
analytical and experimental results. However, reinforced concrete
buildings, which constitute the majority of built space in less
developed countries, pose additional challenges compared to buildings
of homogeneous material such as steel, or of concrete under stricter
quality control. In the present paper, the particularities of reinforced
concrete buildings are explored and related to current methods of
equivalent static analysis. A comparative study is presented between
the Uniform Building Code, commonly used for buildings within
and outside the USA, and data from the Middle East used to model
151 reinforced concrete buildings of varying number of bays, number
of floors, overall building height, and individual story height. The
fundamental period was calculated using eigenvalue matrix
computation. The results were also used in a separate regression
analysis where the computed period serves as dependent variable,
while five building properties serve as independent variables. The
statistical analysis shed light on important parameters that simplified
code formulas need to account for, including individual story height,
overall building height, floor plan, number of bays, and concrete
properties. Such inclusions are important for reinforced concrete
buildings of special conditions due to the level of concrete damage,
aging, or materials quality control during construction.
Overall results of the present analysis show that simplified code
formulas for fundamental period and base shear may be applied but
they require revisions to account for multiple parameters. The
conclusion above is confirmed by the analytical model where
fundamental periods were computed using numerical techniques and
eigenvalue solutions. This recommendation is particularly relevant
to code upgrades in less developed countries where it is customary to
adopt, and mildly adapt, international codes.
We also note the necessity of further research using empirical data
from buildings in Lebanon that were subjected to severe damage due
to impulse loading or accelerated aging. However, we excluded this
study from the present paper and left it for future research as it has its
own peculiarities and requires a different type of analysis.
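The eigenvalue matrix computation of the fundamental period mentioned above can be illustrated for an idealized shear building. The sketch below solves the generalized problem K v = ω² M v for a lumped-mass, tridiagonal-stiffness model; the story count, mass, and stiffness are illustrative assumptions, not values from the 151-building data set.

```python
import numpy as np

# Fundamental period of an idealized N-story shear building.
# Story mass and lateral stiffness below are assumed for illustration.
n = 5
m = 2.0e5                      # story mass, kg (assumed)
k = 1.5e8                      # story lateral stiffness, N/m (assumed)

M = m * np.eye(n)              # lumped-mass matrix
K = np.zeros((n, n))           # tridiagonal shear-building stiffness
for i in range(n):
    K[i, i] = 2.0 * k if i < n - 1 else k   # top story has one spring
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

# Solve K v = w^2 M v; the smallest eigenvalue gives the fundamental mode.
w2 = np.linalg.eigvals(np.linalg.solve(M, K))
w1 = np.sqrt(np.min(w2.real))
T1 = 2.0 * np.pi / w1          # fundamental period, s
print(round(T1, 3))
```

In a code-formula comparison, this T1 would be set against the simplified expression based on overall building height.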
Abstract: The conventional routing protocols designed for mobile ad hoc networks (MANETs) fail to handle the dynamic movement and self-starting behavior of nodes effectively. Every node in a MANET acts as both a forwarding and a receiving node, and all of them participate in routing packets from source to destination. Because the interconnection topology is highly dynamic, the performance of most routing protocols is not encouraging. In this paper, a reliable broadcast approach for MANETs is proposed to improve the transmission rate. The MANET is considered with asymmetric characteristics, and the properties of the source and destination nodes differ. The non-forwarding node list is generated from downstream nodes, which do not participate in routing. When the forwarding and non-forwarding lists are constructed in the conventional way, the non-forwarding list contains more nodes, which increases the load. In this work, we construct the forwarding and non-forwarding lists optimally, so that flooding and broadcasting are reduced to a certain extent. Forwarded packets are treated as implicit acknowledgements, while the non-forwarding nodes explicitly send acknowledgements to the source. The performance of the proposed approach is evaluated in the NS2 environment. Since the proposed approach reduces flooding, we have compared its functionality with AODV variants. The effect of network density on overhead and collision rate is considered for performance evaluation. Comparison with the AODV variants shows that the proposed approach outperforms all of them.
Abstract: Various fairness models and criteria proposed by academia and industry for wired networks can be applied to ad hoc wireless networks. End-to-end fairness in an ad hoc wireless network is a challenging task compared to wired networks and has not been addressed effectively. Most of the traffic in an ad hoc network consists of transport-layer flows, so the fairness of transport-layer flows has attracted the interest of researchers. Factors such as the MAC protocol, the routing protocol, route length, buffer size, the active queue management algorithm, and the congestion control algorithm affect the fairness of transport-layer flows. In this paper, we consider the rate of data transmission, queue management, and the packet scheduling technique. An ad hoc network is dynamic in nature; parameters such as the transmission of control packets, the multihop nature of packet forwarding, changes in source and destination nodes, and changes in the routing path influence throughput and fairness among concurrent flows. In addition, the interaction between the data-link and transport-layer protocols also plays a role in determining the rate of data transmission. We maintain a queue for each flow, and the delay information of each flow is maintained accordingly. The pre-processing of flows is done up to the network layer only: the source and destination address information is used to separate flows, and transport-layer information is not used. This minimizes the delay in the network. Each flow is attached to a timer that is updated dynamically. A Finite State Machine (FSM) is proposed for the queue and transmission control mechanism. The performance of the proposed approach is evaluated in the ns-2 simulation environment. Throughput and fairness under mobility for different flows are used as performance metrics.
We have compared the performance of the proposed approach with ATP and MC-MLAS, and the performance of the proposed approach is encouraging.
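The per-flow queue and transmission control FSM mentioned above can be sketched as a small state-transition table. The states and events below are hypothetical, since the abstract does not enumerate them; this only illustrates the mechanism.

```python
from enum import Enum, auto

# Hypothetical per-flow FSM: states/events are illustrative assumptions.
class State(Enum):
    IDLE = auto()        # no packets queued for this flow
    QUEUED = auto()      # packets waiting in the per-flow queue
    SENDING = auto()     # flow scheduled for transmission
    WAIT_TIMER = auto()  # per-flow timer running after a send

class FlowFSM:
    TRANSITIONS = {
        (State.IDLE, "packet_arrival"): State.QUEUED,
        (State.QUEUED, "scheduled"): State.SENDING,
        (State.SENDING, "sent"): State.WAIT_TIMER,
        (State.WAIT_TIMER, "timer_expired"): State.IDLE,
        (State.WAIT_TIMER, "packet_arrival"): State.QUEUED,
    }

    def __init__(self):
        self.state = State.IDLE

    def on_event(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = FlowFSM()
fsm.on_event("packet_arrival")
fsm.on_event("scheduled")
print(fsm.on_event("sent"))
```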
Abstract: In pattern clustering, nearest neighbor computation is a challenging issue for many applications in research areas such as remote sensing, computer vision, pattern recognition, and statistical imaging. Nearest neighbor
computation is an essential step in providing sufficient classification among a volume of pixels (voxels) in order to localize the active region of interest (AROI). Furthermore, it is needed to compute spatial metric relationships of diverse imaging areas in pattern recognition applications. In this paper, we propose a new methodology for finding the nearest neighbor point by constructing a virtual grid of hexagonal cells and then locating every point within them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest neighbors of a query point Φ are fetched by searching the hexagons holistically. Searching is repeated until an AROI for Φ is found. If a point Υ is located, then searching proceeds through the nearest hexagons in a circular fashion. The first hexagon is considered level 0 (L0) and the surrounding hexagons level 1 (L1). If Υ is located in L1, then the search continues in the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in minimizing the time complexity required for searching the neighbors; in turn, classification efficiency is improved substantially.
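The hexagonal-grid search idea can be sketched as follows: points are bucketed into hexagonal cells (axial coordinates with cube rounding), and a query searches its own cell (level 0), then expands ring by ring, checking one extra level after the first hit to confirm the neighbor, as the abstract describes. The cell size and helper names are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

SIZE = 1.0  # hexagon cell size; illustrative value

def cell_of(x, y):
    """Axial hex cell containing a point (pointy-top layout, cube rounding)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / SIZE
    r = (2 / 3) * y / SIZE
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)

def hex_dist(a, b):
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def build_grid(points):
    grid = defaultdict(list)
    for p in points:
        grid[cell_of(*p)].append(p)
    return grid

def nearest(grid, query, max_level=10):
    """Search level 0, then expand ring by ring; after the first hit,
    scan one further level to confirm the nearest neighbor."""
    home = cell_of(*query)
    best, best_d, found_level = None, float("inf"), None
    for level in range(max_level + 1):
        for cell, pts in grid.items():      # small grids: scan occupied cells
            if hex_dist(cell, home) != level:
                continue
            for p in pts:
                d = math.dist(p, query)
                if d < best_d:
                    best, best_d = p, d
        if best is not None:
            if found_level is None:
                found_level = level          # first hit: check one more ring
            elif level > found_level:
                break
    return best

pts = [(0.2, 0.1), (2.5, 2.5), (5.0, 0.0)]
print(nearest(build_grid(pts), (0.0, 0.0)))
```

A production implementation would bound the confirmation step by geometric distance rather than a single extra ring, but the sketch mirrors the level-based procedure described in the abstract.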
Abstract: This paper discusses a flip-chip methodology in which I/O pads, standard cells, macros, and the bump cell array are placed in the floorplan and then routed using the Astro place-and-route tool. Final DRC and LVS checking is done using the Calibre verification tool. The design vehicle used to run this methodology is an OpenRISC design targeted to Silterra 0.18 µm technology with 6 metal layers for routing. Astro has extensive support for flip-chip placement and routing, and its flip-chip commands are as straightforward as those for conventional wire-bond packaging. However, since our Astro tool does not include the flip-chip commands, and there are no LEF files for the bump cell or the flip-chip I/O pad, we created our own methodology to prepare for a future flip-chip tapeout.
Abstract: In this study, three types of multilayer gas-barrier plastic packaging films were compared using life cycle assessment (LCA) as a tool for resource-efficient and environmentally low-impact materials selection. The first type of multilayer packaging film (PET-AlOx/LDPE) consists of polyethylene terephthalate with an AlOx barrier layer (PET-AlOx) and low-density polyethylene (LDPE). The second type of polymer film (PET/PE-EVOH-PE) is made of polyethylene terephthalate (PET) and the co-extruded film PE-EVOH-PE as the barrier layer. The third type of multilayer packaging film (PET-PVOH/LDPE) is formed from polyethylene terephthalate with a PVOH barrier layer (PET-PVOH) and low-density polyethylene (LDPE).
All of the analyzed packaging types have a significant impact on resource depletion because of raw-material extraction and the energy used in producing the different kinds of plastics. Nevertheless, the impact generated during the life cycle of the functional unit of the second type of packaging (PET/PE-EVOH-PE) was about 25% lower than the impact generated by the first (PET-AlOx/LDPE) and third (PET-PVOH/LDPE) types.
The results revealed that the contribution of the gas-barrier type to the overall environmental impact of the packaging is not significant. The impact is mostly generated by the energy and materials used during raw-material extraction and the production of the bulk plastics (PE, LDPE, and PET), rather than the gas-barrier materials (AlOx, PVOH, and EVOH).
The LCA results could be useful in various decision-making processes for selecting resource-efficient and environmentally low-impact materials.
Abstract: Structural interpretation of aeromagnetic data and Landsat imagery over the Middle Benue Trough was carried out to determine the depth to basement, delineate the basement morphology and relief, and identify the structural features within the basin. The aeromagnetic and Landsat data were subjected to various image and data enhancement and transformation routines. Results of the study revealed lineaments trending in the N-S, NE-SW, NW-SE, and E-W directions, with the NE-SW trends being dominant. The depths to basement within the trough were established at 1.8, 0.3, and 0.8 km, as shown by the spectral analysis plot. The Source Parameter Imaging (SPI) plot generated showed the central-south/eastern portion of the study area as being deeper in contrast to the western-south-west portion. The basement morphology of the trough was interpreted as having parallel sets of micro-basins, which could be considered grabens and horsts, in agreement with the general features interpreted by earlier workers.
Abstract: In this paper, we present a comparative study of
genetic algorithms and Hessian's method for the optimal dispatch of
active power in an electric power network. The objective function,
which is the performance index of electrical energy production, is
minimized subject to equality and inequality constraints, first by
Hessian's method and then by genetic algorithms. The results found
by applying the GA to the minimization of electric power production
costs are very encouraging. Genetic algorithms appear to be an
effective technique for solving a great number of problems and are in
constant evolution. Nevertheless, it should be noted that the
traditional binary representation used in genetic algorithms creates
optimization problems in the management of large networks
requiring high numerical precision.
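The binary-coded GA discussed above can be sketched on a toy economic-dispatch problem: two generators with quadratic fuel costs and an equality constraint on total output, handled by a penalty term. All cost coefficients, demand, and GA settings are illustrative assumptions, not the paper's network data.

```python
import random

random.seed(1)

# Toy dispatch: minimize a1*P1^2 + b1*P1 + c1 + a2*P2^2 + b2*P2 + c2
# subject to P1 + P2 = DEMAND (penalty method).  Values are assumed.
DEMAND = 400.0                                      # MW
COST = [(0.004, 5.3, 500.0), (0.006, 5.5, 400.0)]   # (a, b, c) per generator

def cost(p):
    fuel = sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))
    return fuel + 1e4 * abs(sum(p) - DEMAND)        # equality-constraint penalty

def decode(bits, lo=0.0, hi=400.0):
    """Binary chromosome -> two generator outputs in [lo, hi] MW."""
    n = len(bits) // 2
    vals = []
    for i in range(2):
        word = bits[i * n:(i + 1) * n]
        x = int("".join(map(str, word)), 2) / (2 ** n - 1)
        vals.append(lo + x * (hi - lo))
    return vals

def evolve(pop_size=60, n_bits=32, gens=200):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda b: cost(decode(b)))     # elitist selection
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            for j in range(n_bits):                  # bit-flip mutation
                if random.random() < 0.02:
                    child[j] ^= 1
            children.append(child)
        pop = parents + children
    return decode(min(pop, key=lambda b: cost(decode(b))))

p1, p2 = evolve()
print(round(p1 + p2, 1))  # should approach DEMAND as the penalty dominates
```

The finite bit-width of `decode` illustrates the precision limitation of binary representations that the abstract notes for large networks.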
Abstract: The paper presents the multi-element synthetic
transmit aperture (MSTA) method, with a small number of
transmitting elements and all elements receiving, in medical
ultrasound imaging. Compared with other methods, MSTA increases
the system frame rate and provides the best compromise between
penetration depth and lateral resolution.
In the experiments, a 128-element linear transducer array with
0.3 mm pitch, excited by a burst pulse of 125 ns duration, was used.
The comparison of 2D ultrasound images of a tissue-mimicking
phantom obtained using the STA and the MSTA methods is
presented to demonstrate the benefits of the second approach. The
results were obtained using the SA algorithm with transmit and
receive signal correction based on a single-element directivity function.
Abstract: In this work we present an efficient approach for face
recognition in the infrared spectrum. In the proposed approach
physiological features are extracted from thermal images in order to
build a unique thermal faceprint. Then, a distance transform is used
to get an invariant representation for face recognition. The obtained
physiological features are related to the distribution of blood vessels
under the face skin. This blood network is unique to each individual
and can be used in infrared face recognition. The obtained results are
promising and show the effectiveness of the proposed scheme.
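The distance-transform step mentioned above can be illustrated directly: given a binary map of the segmented vascular network, each pixel is replaced by its Euclidean distance to the nearest vessel pixel, yielding a smoother representation that is less sensitive to small segmentation errors. The tiny mask below is illustrative, not a real thermal image.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary "vessel" mask (illustrative, not real thermal data).
vessels = np.zeros((8, 8), dtype=bool)
vessels[3, 1:7] = True            # a horizontal vessel segment
vessels[1:6, 4] = True            # a crossing vessel segment

# distance_transform_edt measures distance to the nearest zero, so pass
# the complement of the vessel mask.
dist = distance_transform_edt(~vessels)
print(dist[3, 3])                 # 0.0: this pixel lies on a vessel
print(dist[0, 0])                 # Euclidean distance to the nearest vessel
```

The resulting `dist` map is the kind of invariant representation the abstract feeds into the recognition stage.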
Abstract: Script identification is one of the challenging steps in the development of optical character recognition systems for bilingual or multilingual documents. In this paper, an attempt is made to identify English numerals at the word level in Punjabi documents using Gabor features. A support vector machine (SVM) classifier with five-fold cross-validation is used to classify the word images. The results obtained are quite encouraging: the average accuracy with RBF, polynomial, and linear kernel functions is greater than 99%.
Abstract: Ant colony optimization (ACO) is an algorithmic framework inspired by the foraging behavior of ant colonies. ACO algorithms use a form of chemical communication, represented by pheromone trails, to build good solutions. However, real ants employ several communication channels to interact. This paper therefore introduces acoustic communication between ants while they are foraging. This process allows fine, local exploration of the search space and permits the best solution found to be improved.
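For context, the pheromone mechanism that the paper's acoustic channel extends can be sketched as a minimal Ant System on a small traveling-salesman instance. The acoustic communication itself is not specified in the abstract and is not modeled here; city coordinates and parameters are illustrative.

```python
import math
import random

random.seed(0)

# Minimal Ant System on a 5-city TSP (illustrative instance).
CITIES = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 1.8)]
N = len(CITIES)
dist = [[math.dist(a, b) for b in CITIES] for a in CITIES]
tau = [[1.0] * N for _ in range(N)]        # pheromone trails
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0   # standard AS parameters

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    tour = [random.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [tau[i][j] ** ALPHA * (1.0 / dist[i][j]) ** BETA
                   for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

best = None
for _ in range(50):                              # iterations
    tours = [build_tour() for _ in range(10)]    # 10 ants per iteration
    for row in tau:                              # evaporation
        for j in range(N):
            row[j] *= 1.0 - RHO
    for t in tours:                              # pheromone deposit
        deposit = Q / tour_length(t)
        for i in range(N):
            a, b = t[i], t[(i + 1) % N]
            tau[a][b] += deposit
            tau[b][a] += deposit
    it_best = min(tours, key=tour_length)
    if best is None or tour_length(it_best) < tour_length(best):
        best = it_best

print(round(tour_length(best), 3))
```

An acoustic channel, as proposed in the paper, would add a second, locally scoped signal on top of the `tau` trails to intensify search around promising tours.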
Abstract: We present an approach for integrating a CMOS biosensor into a polymer-based microfluidic environment suitable for mass production. It consists of a wafer-level package for the silicon die and a laser bonding process, promoted by an intermediate hot-melt foil, to attach the sensor package to the microfluidic chip without the need to dispense glue or underfill. A very good condition of the sensing area was obtained after introducing a protection layer during packaging. A microfluidic flow cell was fabricated and shown to withstand pressures up to Δp = 780 kPa without leakage. The employed biosensors were electrically characterized in a dry environment.
Abstract: Wireless LAN technologies have gained
momentum in recent years due to their ease of deployment, low cost,
and availability. The era of the wireless LAN has also given rise to
unique applications like VOIP, IPTV and unified messaging.
However, these real-time applications are very sensitive to network
and handoff latencies. To successfully support these applications,
seamless roaming during the movement of mobile station has become
crucial. Nowadays, centralized architecture models support roaming
in WLANs. They have the ability to manage, control and
troubleshoot large-scale WLAN deployments. This model is managed
by the Control and Provisioning of Wireless Access Points
(CAPWAP) protocol. This paper covers the CAPWAP architectural solution
along with its proposals that have emerged. Based on the literature
survey conducted in this paper, we found that the proposed
algorithms to reduce roaming latency in CAPWAP architecture do
not support seamless roaming. Additionally, they are not sufficient
during the initial period of the network. This paper also suggests
important design considerations for mobility support in future
centralized IEEE 802.11 networks.
Abstract: Research on brain-computer interfaces (BCIs) has
increased in recent years. Functional near-infrared spectroscopy
(fNIRS) is one of the latest technologies that utilize light in the
near-infrared range to determine brain activity. Because near-infrared
technology allows the design of safe, portable, wearable, non-invasive,
and wireless monitoring systems, fNIRS monitoring of brain
hemodynamics can be valuable in helping to understand brain tasks. In
this paper, we present results of fNIRS signal analysis indicating that
there exist distinct patterns of hemodynamic responses that can be
used to recognize brain tasks, toward developing a BCI. We applied
two different mathematical tools separately: wavelet analysis for
preprocessing, as a signal filter and for feature extraction, and neural
networks as a classification module for recognizing brain tasks. We
also compare with other methods; our proposal performs better, with
an average classification accuracy of 99.9%.
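The wavelet preprocessing step can be illustrated with a single-level Haar transform, the simplest wavelet: the approximation coefficients act as a low-pass-filtered signal and the detail coefficients as candidate features. The synthetic signal below is illustrative; the paper's actual wavelet family and decomposition depth are not stated in the abstract.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass branch
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass branch
    return approx, detail

t = np.linspace(0, 1, 64)
signal = np.sin(2 * np.pi * 2 * t)                # slow, hemodynamic-like trend
noisy = signal + 0.1 * np.random.default_rng(0).normal(size=64)

approx, detail = haar_dwt(noisy)

# Perfect-reconstruction check by inverting the transform.
rec = np.empty(64)
rec[0::2] = (approx + detail) / np.sqrt(2.0)
rec[1::2] = (approx - detail) / np.sqrt(2.0)
print(np.allclose(rec, noisy))
```

In a pipeline like the paper's, `approx` (possibly after several levels) would be fed to the neural-network classifier as features.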
Abstract: Speckled images arise when coherent microwave,
optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar
systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted
by speckle noise is complicated by the nature of the noise and is not
as straightforward as detection and estimation in additive noise. In
this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The
motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this
context involves partially developed speckle model where an underlying Poisson point process modulates a Gram-Charlier series
of Laguerre weighted exponential functions, resulting in a doubly
stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form.
It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an
exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
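For reference, the fully developed limit invoked at the end of the abstract is the classical negative-exponential intensity law, with mean intensity \(\bar I\):

```latex
p(I) \;=\; \frac{1}{\bar I}\,\exp\!\left(-\frac{I}{\bar I}\right),
\qquad I \ge 0 .
```

Equivalently, the amplitude \(A=\sqrt{I}\) is Rayleigh distributed, which follows from the Central Limit Theorem applied to the sum of many independent scatterer contributions per resolution cell.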
Abstract: The work describes the use of a synthetic transmit
aperture (STA) with a single element transmitting and all elements
receiving in medical ultrasound imaging. The STA technique is a
novel alternative to today's commercial systems, where an image is
acquired sequentially, one image line at a time, which puts a strict
limit on the frame rate and the amount of data needed for high image
quality. STA imaging acquires data simultaneously from all directions
over a number of emissions, from which the full image can be
reconstructed.
In the experiments, a 32-element linear transducer array with
0.48 mm inter-element spacing was used. A single-element transmit
aperture was used to generate a spherical wave covering the full
image region. 2D ultrasound images of a wire phantom obtained
using STA and the commercial ultrasound scanner Antares are
presented to demonstrate the benefits of SA imaging.
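The STA reconstruction described above is, at its core, delay-and-sum beamforming: for each emission, every image point is formed by summing the received RF samples at the round-trip (transmit plus receive) delay. The sketch below demonstrates this on a synthetic point scatterer; the array geometry and sampling values are illustrative, not the 32-element / 0.48 mm setup of the paper.

```python
import numpy as np

# Illustrative array and sampling parameters (assumed values).
C = 1540.0                    # speed of sound, m/s
FS = 40e6                     # sampling rate, Hz
PITCH = 0.3e-3                # element pitch, m
N_EL = 16

elem_x = (np.arange(N_EL) - (N_EL - 1) / 2) * PITCH

def das_point(rf, tx, px, pz):
    """Delay-and-sum one image point (px, pz) for transmit element tx.

    rf: (N_EL, n_samples) RF data received for that single emission."""
    d_tx = np.hypot(px - elem_x[tx], pz)              # transmit path length
    d_rx = np.hypot(px - elem_x, pz)                  # receive path lengths
    idx = np.round((d_tx + d_rx) / C * FS).astype(int)
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    return rf[np.arange(N_EL), idx].sum()

# Synthetic check: a point scatterer at (0, 10 mm); place a unit sample
# at each element's true round-trip delay for emission from element 0.
n_samp = 2000
rf = np.zeros((N_EL, n_samp))
sx, sz = 0.0, 10e-3
delays = np.round((np.hypot(sx - elem_x[0], sz)
                   + np.hypot(sx - elem_x, sz)) / C * FS).astype(int)
rf[np.arange(N_EL), delays] = 1.0

on_target = das_point(rf, 0, sx, sz)       # coherent sum at the scatterer
off_target = das_point(rf, 0, 5e-3, 15e-3) # off-target point: near zero
print(on_target, off_target)
```

Repeating this over all image points and summing the low-resolution images from successive single-element emissions yields the full STA image.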