Abstract: The present study was carried out to calculate the coastal vulnerability index (CVI), identify areas of high and low sensitivity, and delineate the area liable to inundation under future sea-level rise (SLR). Both conventional and remotely sensed data were used and analyzed through a modelling technique. Of the total study area, 8.26% falls in the very high vulnerability category, 14.21% high, 9.36% medium, 22.46% low and 7.35% very low, on the basis of the coastal components considered. The inundation analysis indicates that 225.2 km² and 397 km² of land will be submerged by flooding at the 1 m and 10 m inundation levels, respectively. The most severely affected sectors are expected to be residential, industrial and recreational areas. As this coast is planned for future coastal development activities, measures such as regulation of industrialization and building, planning of urban growth and agriculture, development of integrated coastal zone management, strict enforcement of the Coastal Regulation Zone (CRZ) Act, monitoring of impacts and further research in this regard are recommended for the study area.
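The abstract does not spell out which CVI formulation was used; a common Gornitz-style definition takes the square root of the product of the ranked coastal variables divided by their number. The sketch below is an illustration under that assumption, and the variable ranks are hypothetical:

```python
import math

def coastal_vulnerability_index(ranks):
    """Gornitz-style CVI: square root of the product of the variable
    ranks divided by the number of variables. Each rank runs from
    1 (very low vulnerability) to 5 (very high) for one coastal
    variable (e.g. geomorphology, slope, sea-level change,
    shoreline change, tidal range, wave height)."""
    product = 1.0
    for r in ranks:
        product *= r
    return math.sqrt(product / len(ranks))

# Hypothetical ranks for one shoreline segment:
cvi = coastal_vulnerability_index([4, 2, 3, 5, 1, 3])  # ≈ 7.75
```

Segments are then binned into the very-low-to-very-high categories by percentile ranges of the CVI values.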
Abstract: In projects such as waterpower, transportation and mining, characterizing the rock-mass structure and hidden tectonic features in order to assess the activity of a geological body is very important. Integrating seismic results with drilling and trenching data, a CSAMT survey was carried out at a planned dam site in southwest China to evaluate the stability of a deformation body. 2D and quasi-3D inversion resistivity results of the CSAMT method were analyzed. The results indicate that CSAMT is an effective method for outlining a deformation body to depths of several hundred meters, and that the Lung Pan deformation is stable under natural conditions but of uncertain stability once the future reservoir is impounded. This research presents a good case study of fine-scale surveying and investigation of complex geological structure and hidden tectonics in an engineering project.
Abstract: As manufacturing becomes increasingly service-dominated, products and processes are ever more closely tied to sophisticated services. This research therefore begins with a discussion of the integration of product, process, and service in the innovation process. In particular, the paper sets out foundations for a theory of service innovation in manufacturing and proposes dynamic models of service innovation related to product and process. Two dynamic models of service innovation are suggested to investigate the major tendencies and dynamic variations over the innovation cycle: co-innovation and sequential innovation. To structure the dynamic models of product, process, and service innovation, the innovation stages in which the two models are mainly realized are identified. The research should encourage manufacturers to formulate strategy and planning for service development together with product and process.
Abstract: Bone remodeling occurs by the balanced action of
bone resorbing osteoclasts (OC) and bone-building osteoblasts.
Increased bone resorption by excessive OC activity contributes
to malignant and non-malignant diseases including osteoporosis.
To study OC differentiation and function, OC formed in
in vitro cultures are currently counted manually, a tedious
procedure which is prone to inter-observer differences. Aiming
for an automated OC-quantification system, classification of
OC and precursor cells was done on fluorescence microscope
images based on the distinct appearance of fluorescent nuclei.
Following ellipse fitting to nuclei, a combination of eight
features enabled clustering of OC and precursor cell nuclei.
After evaluating different machine-learning techniques, LOGREG achieved 74% correctly classified OC and precursor cell nuclei, outperforming human experts (best expert: 55%). Combined with automated detection of total cell areas, the system makes it possible to measure various cell parameters and, most importantly, to quantify proteins involved in osteoclastogenesis.
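As an illustration of the classification step described above, the sketch below trains a plain logistic-regression classifier (the family behind LOGREG) on two hypothetical nuclei features; the eight ellipse-derived features and the real data are not given in the abstract, so the values here are synthetic:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Batch gradient-descent logistic regression."""
    X = np.hstack([np.ones((len(X), 1)), X])   # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w += lr * X.T @ (y - p) / len(y)       # gradient ascent on log-likelihood
    return w

def predict(w, X):
    X = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)

# Hypothetical features (e.g. ellipse area and eccentricity) for two cell types:
rng = np.random.default_rng(0)
X0 = rng.normal([1.0, 0.2], 0.1, (50, 2))      # precursor-like nuclei
X1 = rng.normal([2.0, 0.8], 0.1, (50, 2))      # OC-like nuclei
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 50)
w = train_logreg(X, y)
acc = (predict(w, X) == y).mean()
```

In the paper's setting, the labels would come from manual annotation and accuracy would be reported on held-out nuclei rather than the training set.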
Abstract: In this paper, we propose a novel improvement to the generalized Lloyd algorithm (GLA). Our algorithm makes use of an M-tree index built on the codebook, which makes it possible to reduce the number of distance computations when the nearest codewords are searched for. Our method does not impose the use of any specific distance function but works with any metric distance, making it more general than many other fast GLA variants. Finally, we present the positive results of our performance experiments.
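The M-tree itself is not reproduced here; as a minimal sketch of the underlying idea, the Lloyd iteration below prunes nearest-codeword searches with the triangle inequality, which, like an M-tree, is valid for any metric. Function names and data are illustrative, not taken from the paper:

```python
import math

def lloyd_metric(points, k, dist, init=None, iters=10):
    """Generalized Lloyd iteration for an arbitrary metric `dist`.
    The nearest-codeword search is pruned via the triangle inequality
    (a simple stand-in for the paper's M-tree index): if
    d(c_best, c_j) >= 2 * d(x, c_best), codeword c_j cannot be
    strictly closer to x, so d(x, c_j) is never computed."""
    codebook = list(init) if init is not None else list(points[:k])
    for _ in range(iters):
        # pairwise codeword distances reused for pruning
        cc = [[dist(a, b) for b in codebook] for a in codebook]
        clusters = [[] for _ in range(k)]
        for x in points:
            best, dbest = 0, dist(x, codebook[0])
            for j in range(1, k):
                if cc[best][j] >= 2 * dbest:   # pruned: distance skipped
                    continue
                d = dist(x, codebook[j])
                if d < dbest:
                    best, dbest = j, d
            clusters[best].append(x)
        # centroid update shown for vector data; a general metric
        # space would use a medoid update here instead
        for j, cl in enumerate(clusters):
            if cl:
                codebook[j] = tuple(sum(p[i] for p in cl) / len(cl)
                                    for i in range(len(cl[0])))
    return codebook

euclid = lambda a, b: math.dist(a, b)

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (9.9, 10.0), (10.1, 9.8), (10.0, 10.2)]
cb = lloyd_metric(pts, 2, euclid, init=[(0.0, 0.0), (10.0, 10.0)])
```

An M-tree generalizes this pruning hierarchically, discarding whole subtrees of codewords per query instead of single candidates.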
Abstract: Artificial Neural Network (ANN) has been
extensively used for classification of heart sounds for its
discriminative training ability and easy implementation. However, it
suffers from overparameterization if the number of nodes is not
chosen properly. In such cases, when the dataset has redundancy
within it, ANN is trained along with this redundant information that
results in poor validation. Moreover, a larger network means more computational expense, resulting in higher hardware- and time-related costs. Therefore, an optimal neural-network design is needed for real-time detection of pathological patterns, if any, from heart sound signals. The aims of this work are to (i) select a set of input
features that are effective for identification of heart sound signals and
(ii) make certain optimum selection of nodes in the hidden layer for a
more effective ANN structure. Here, we present an optimization
technique that involves Singular Value Decomposition (SVD) and
QR factorization with column pivoting (QRcp) methodology to
optimize empirically chosen over-parameterized ANN structure.
The input nodes of the ANN structure are optimized by SVD followed by QRcp, while SVD alone is required to prune undesirable hidden nodes. Results are presented for the classification of 12 common pathological cases and normal heart sounds.
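The abstract does not give the exact procedure, but a minimal sketch of the two ingredients, assuming hidden-node pruning reduces to estimating the numerical rank of an activation (or weight) matrix via SVD, and input selection to QR with column pivoting, looks like this:

```python
import numpy as np

def effective_rank(A, tol=1e-2):
    """Numerical rank from SVD: count singular values above
    tol * largest -- used here as the suggested number of
    hidden nodes to retain."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

def qrcp_select(X, r):
    """Greedy QR with column pivoting: repeatedly pick the column
    with the largest residual norm, then remove its direction from
    the remaining columns -- used to pick the r most independent
    input features (columns of X)."""
    X = np.array(X, dtype=float, copy=True)
    selected = []
    for _ in range(r):
        norms = np.linalg.norm(X, axis=0)
        j = int(np.argmax(norms))
        selected.append(j)
        q = X[:, j] / (norms[j] + 1e-12)
        X -= np.outer(q, q @ X)   # deflate: remove q's component everywhere
    return selected

# Toy feature matrix: column 1 is just 2x column 0 (redundant).
X = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
```

Here `effective_rank(X)` reports two independent directions, and `qrcp_select(X, 2)` keeps one column of the redundant pair plus the independent one, which mirrors how redundant inputs or hidden nodes would be pruned.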
Abstract: A wireless ad hoc network consists of wireless nodes communicating without the need for centralized administration; all nodes can potentially contribute to the routing process. In this paper, we report simulation results for four different scenarios of wireless ad hoc networks with thirty nodes. The performance of the proposed networks is evaluated in terms of number of hops per route, delay and throughput with the help of the OPNET simulator. A channel speed of 1 Mbps and a simulation time of 600 sim-seconds were used for all scenarios, and the DSR routing protocol was employed throughout. The throughputs obtained from this analysis (four scenarios) are compared in Figure 3, and the average media-access delay at node_20 for two routes and for the four scenarios is compared in Figures 4 and 5. It is observed that throughput degrades when different hops are followed for the same source-destination pair (it drops from 1.55 Mbps to 1.43 Mbps, around 9.7%, and then to 0.48 Mbps, around 35%).
Abstract: Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. Present manual diagnosis is expensive, tedious and time-consuming, and a number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This paper discusses the efficiency of, and variability between, ophthalmologist opinion and a digital thresholding technique. The efficiency and variability measures are based on image-quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. Three ophthalmologists graded the images according to image quality; the images were then thresholded using multi-thresholding and graded in the same way. A comparison of the grades from the ophthalmologists and from thresholding shows only a small variability between the two.
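The abstract does not name the thresholding algorithm; a common histogram-based choice for a single channel is Otsu's method, sketched below. Multi-thresholding can be obtained by re-applying it within each resulting intensity range:

```python
import numpy as np

def otsu_threshold(channel):
    """Single Otsu threshold on an 8-bit image channel: pick the t
    that maximizes the between-class variance of the histogram."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                       # intensity probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Applied per channel (gray, red, green, blue), the resulting binary maps are what would then be graded against the ophthalmologists' opinions.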
Abstract: In this paper we present a generic approach to the problem of blind estimation of the parameters of linear and convolutional error-correcting codes. In a non-cooperative context, an adversary only has access to the noisy transmission he has intercepted and has no knowledge of the parameters used by the legitimate users. So, before gaining access to the information, he first has to blindly estimate the parameters of the error-correcting code used in the communication. The main advantage of the presented approach is that the problem of reconstructing such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds both on the complexity of the reconstruction process and on the estimation rate. We show that some classical reconstruction techniques are optimal, and also explain why some of them have theoretical complexities greater than those observed experimentally.
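One classical reconstruction idea in this area can indeed be stated simply: fold the intercepted bitstream into matrices of candidate widths and look for the width with an abnormally large GF(2) rank deficiency, since codeword-aligned rows must satisfy the parity checks. The sketch below illustrates this on a noiseless stream from a hypothetical (4, 2) linear code (the paper's actual method and code are not given in the abstract):

```python
import numpy as np

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination with XOR row ops."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def likely_block_length(bits, n_max=16):
    """Fold the bitstream into matrices of width n; the true code
    length shows a large rank deficiency (normalized by n so that
    multiples of the true length do not dominate)."""
    scores = {}
    for n in range(2, n_max + 1):
        rows = len(bits) // n
        M = np.array(bits[:rows * n], dtype=np.uint8).reshape(rows, n)
        scores[n] = (n - gf2_rank(M)) / n
    return max(scores, key=scores.get)

G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]], dtype=np.uint8)   # hypothetical (4, 2) code
bits = []
for m in ([0, 0], [0, 1], [1, 0], [1, 1]) * 50:
    bits.extend(((np.array(m, dtype=np.uint8) @ G) % 2).tolist())

n_est = likely_block_length(bits, n_max=7)
```

With channel noise, the hard rank test is replaced by statistical tests on near-dependencies, which is where the complexity and estimation-rate bounds discussed in the paper come in.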
Abstract: The effect of an antifungal compound from Bacillus subtilis strain LB5 was tested against conidial germination of Colletotrichum gloeosporioides and Pestalotiopsis eugeniae, the causal agents of anthracnose and fruit rot of wax apple, respectively. Observation under scanning electron and light compound microscopes revealed that conidial germination was completely inhibited when treated with culture broth, culture filtrate, or crude extract from strain LB5. Identification of the purified antifungal compound produced by strain LB5 in cell-free supernatant by nuclear magnetic resonance and fast atom bombardment showed that the active compound was iturin A-2.
Abstract: In this study we attempted to replicate the unconscious thought advantage (UTA), the claim that complex decisions are better handled by unconscious thought. We designed an experiment in E-Prime using material similar to the original study (choosing between four different apartments, each described by 12 attributes). A total of 73 participants (52 women, 71.2%; aged 18 to 62; M = 24.63, SD = 8.7) took part in the experiment. We did not replicate the results predicted by unconscious thought theory (UTT). However, from the present study we cannot conclude whether this was due to flaws in the theory or flaws in our experiment, and we discuss several ways in which the UTA could be examined further.
Abstract: The development of cities and villages, agricultural farms and industrial regions along or within the courses of streams and rivers, or on flood-prone lands, has raised many problems in hydrology and urban planning. Embankment construction is an accepted, scientific method for protecting cities against flood damage. Cities located in arid zones may be damaged periodically by floods. Zavvareh city in Ardestan township (Isfahan province), with 7,704 inhabitants, is located in the Ardestan plain and has in past years been damaged by floods flowing from the surrounding mountainous watersheds, depending on the return period. In this study, considering the floods flowing toward Zavvareh city, suitable hydraulic structures such as canals, bridges and collectors were planned for the collection, conveyance and disposal of the city's surface runoff.
Abstract: The present study was designed to test the influence of intrinsic ICT motivation, perceived usefulness and perceived ease of use on business students' willingness to use a particular software package. A questionnaire was completed by 196 business students in Norway. We found that 34% of the variance in the students' willingness to use the software could be explained by the three proposed antecedents. Intrinsic ICT motivation appears to be the most important predictor of students' willingness to use the software package.
Abstract: In this paper, an experimental design based on the Taguchi method is employed to optimize the processing parameters of the plasma arc surface-hardening process. The parameters evaluated are arc current, scanning velocity and the carbon content of the steel. In addition, other significant effects, such as interactions between the processing parameters, are also investigated. An orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the effects of these parameters. Through this study, not only are the hardened depth increased and the surface roughness improved, but the parameters that significantly affect the hardening performance are also identified. Experimental results are provided to verify the effectiveness of this approach.
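For reference, the two standard Taguchi S/N ratios that fit the responses above (larger-the-better for hardened depth, smaller-the-better for surface roughness) are easy to state; the example values are hypothetical:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better S/N (e.g. hardened depth):
    S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """Taguchi smaller-the-better S/N (e.g. surface roughness):
    S/N = -10 * log10( (1/n) * sum(y_i^2) )."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))

# Hypothetical replicate measurements from one orthogonal-array run:
depth_sn = sn_larger_is_better([1.8, 2.1, 1.9])       # hardened depth (mm)
roughness_sn = sn_smaller_is_better([0.42, 0.39])     # roughness Ra (um)
```

In the Taguchi workflow, the mean S/N per factor level across the orthogonal-array runs indicates the best level of each parameter, and ANOVA then apportions the variance to identify the significant ones.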
Abstract: In this paper, a hybrid FDMA-TDMA access technique in a cooperative, distributed setting is analyzed by introducing and implementing a modified version of the protocol of [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays under a best-channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space-Time), or V-BLAST, with minimum mean-square-error (MMSE) nulling is used at the relays to detect the joint signals from the multiple source terminals. The performance is analyzed using binary phase-shift keying (BPSK) modulation and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better uplink signal quality in a cooperative communication system using the hybrid FDMA-TDMA technique.
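The full PCDGP simulation is beyond the scope of an abstract, but its most basic building block, coherent BPSK detection over flat Rayleigh fading, can be sketched as follows (parameters are illustrative; no relaying or V-BLAST processing is included):

```python
import numpy as np

def bpsk_rayleigh_ber(snr_db, nbits=200_000, seed=1):
    """Monte-Carlo BER of BPSK over flat Rayleigh fading with
    coherent detection (channel known at the receiver)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, nbits)
    s = 2.0 * bits - 1.0                                  # BPSK mapping 0/1 -> -1/+1
    # unit-power Rayleigh channel coefficients
    h = (rng.normal(size=nbits) + 1j * rng.normal(size=nbits)) / np.sqrt(2)
    n0 = 10 ** (-snr_db / 10)                             # noise PSD for Es = 1
    noise = np.sqrt(n0 / 2) * (rng.normal(size=nbits)
                               + 1j * rng.normal(size=nbits))
    r = h * s + noise
    detected = (np.real(np.conj(h) * r) > 0).astype(int)  # matched-filter decision
    return float(np.mean(detected != bits))

# Theory for comparison: Pb = 0.5 * (1 - sqrt(g / (1 + g))), g = linear SNR
ber_10db = bpsk_rayleigh_ber(10.0)
```

Extending this toward the paper's setup would add AF relay gains, the Ricean-K and Nakagami-m channel generators, and MMSE nulling of the superimposed source signals at the relays.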
Abstract: Fast delay estimation methods, as opposed to
simulation techniques, are needed for incremental performance
driven layout synthesis. On-chip inductive effects are becoming
predominant in deep submicron interconnects due to increasing clock
speed and circuit complexity. Inductance causes noise in signal
waveforms, which can adversely affect the performance of the circuit
and signal integrity. Several approaches have been put forward that consider inductance in on-chip interconnect modelling. At even higher frequencies, of the order of a few GHz, the shunt dielectric loss component becomes comparable to the other electrical parameters in high-speed VLSI design. To cope with this effect, the on-chip interconnect has to be modelled as a distributed RLCG line. Elmore-delay-based methods, although efficient, cannot accurately estimate the delay of an RLCG interconnect line. In this paper, an accurate analytical delay model is derived, based on the first and second moments of RLCG interconnection lines. The proposed model considers the effects of both the inductance and conductance matrices. We performed simulations at the 0.18 μm technology node, and an error as low as 5% was achieved with the proposed model when compared to SPICE. The importance of the conductance matrices in interconnect modelling is also discussed, and it is shown that neglecting G in interconnect line modelling results in a delay error as high as 6% when compared to SPICE.
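The paper's closed-form model is not given in the abstract; as an illustration of how a two-moment delay estimate works, the sketch below computes the first two transfer-function moments of a simple lumped RC ladder and applies the well-known D2M metric, ln(2)·m1²/√m2. A full RLCG treatment would also fold in the L and G matrices when computing the moments:

```python
import math

def rc_chain_moments(R, C):
    """First two moments at the far end of an RC ladder
    (segment resistances R[i], node capacitances C[i]).
    m1 is the negated Elmore delay; higher moments come from the
    standard recursion that re-injects C_j * m_{k-1}(j) as currents."""
    n = len(R)
    # upstream resistance shared between the source and node i
    Rup = [sum(R[:i + 1]) for i in range(n)]
    m = [[1.0] * n]                       # zeroth moments are all 1
    for _ in range(2):                    # compute m1 then m2
        prev = m[-1]
        cur = [-sum(min(Rup[i], Rup[j]) * C[j] * prev[j]
                    for j in range(n)) for i in range(n)]
        m.append(cur)
    return m[1][-1], m[2][-1]

def d2m_delay(m1, m2):
    """Two-moment 'D2M' delay metric: ln(2) * m1^2 / sqrt(m2)."""
    return math.log(2) * m1 * m1 / math.sqrt(m2)

# Single RC segment: D2M reproduces the exact 50% delay ln(2)*R*C.
m1, m2 = rc_chain_moments([1.0], [1.0])
delay = d2m_delay(m1, m2)
```

For a single RC section the metric is exact; on longer lines it corrects the first-moment (Elmore) estimate using the second moment, which is the role the abstract assigns to its two-moment RLCG model.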