Abstract: This work describes the use of a synthetic transmit
aperture (STA), with a single element transmitting and all elements
receiving, in medical ultrasound imaging. The STA technique is a
novel alternative to today's commercial systems, in which an image is
acquired sequentially one image line at a time, which puts a strict
limit on the frame rate and on the amount of data available for high
image quality. STA imaging makes it possible to acquire data
simultaneously from all directions over a number of emissions, after
which the full image can be reconstructed.
In the experiments, a 32-element linear transducer array with 0.48 mm
inter-element spacing was used. A single-element transmit aperture
was used to generate a spherical wave covering the full image region.
2D ultrasound images of a wire phantom, obtained using the STA method
and a commercial Antares ultrasound scanner, are presented to
demonstrate the benefits of SA imaging.
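The STA reconstruction described above is, at its core, delay-and-sum beamforming per emission: for each image point, sum the received samples at the round-trip delay from the transmit element to the point and back to each receiver. The sketch below is a minimal illustration under assumed geometry, sampling rate, and array layout, not the authors' implementation:

```python
import numpy as np

def sta_delay_and_sum(rf, fs, c, tx_pos, rx_pos, points):
    """Delay-and-sum reconstruction for one STA emission.

    rf      : (n_rx, n_samples) received RF traces for one transmit element
    fs      : sampling frequency [Hz]
    c       : speed of sound [m/s]
    tx_pos  : (2,) transmitting element position [m]
    rx_pos  : (n_rx, 2) receiving element positions [m]
    points  : (n_pts, 2) image points [m]
    """
    img = np.zeros(len(points))
    for i, p in enumerate(points):
        t_tx = np.linalg.norm(p - tx_pos) / c            # transmit path delay
        t_rx = np.linalg.norm(rx_pos - p, axis=1) / c    # receive path delays
        idx = np.round((t_tx + t_rx) * fs).astype(int)   # sample index per channel
        valid = idx < rf.shape[1]
        img[i] = rf[np.arange(rf.shape[0])[valid], idx[valid]].sum()
    return img
```

In a full STA sequence, the low-resolution images from all single-element emissions are summed coherently to form the final high-resolution image.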
Abstract: Recently, genetic algorithms (GA) and the particle swarm optimization (PSO) technique have attracted considerable attention among modern heuristic optimization techniques. Since the two approaches are both meant to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of the PSO and GA optimization techniques for the design of a Thyristor Controlled Series Compensator (TCSC)-based controller. The design objective is to enhance power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem, and both the PSO and GA techniques are employed to search for optimal controller parameters. The performance of the two optimization techniques is compared in terms of computational time and convergence rate. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances, and their performance is compared with that of a conventional power system stabilizer (CPSS). Eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both techniques in designing a TCSC-based controller to enhance power system stability.
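The abstract does not give the TCSC controller model or parameter bounds, so as a hedged illustration of the PSO half of the comparison, here is a minimal global-best PSO minimizing a stand-in objective (the sphere function) in place of the controller-tuning cost:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm optimization (minimization)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# hypothetical stand-in for a controller-tuning cost: the sphere function
best, fbest = pso(lambda p: np.sum(p ** 2), dim=3)
```

In the actual comparison, the objective would be a stability measure of the closed-loop power system evaluated for each candidate set of TCSC controller parameters.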
Abstract: The absolute density of Cu atoms in the Cu (2S1/2-2P1/2)
ground state has been measured by the Resonance Optical Absorption
(ROA) technique in a DC magnetron sputtering discharge in argon. We
measured these densities under a variety of operating conditions:
pressure from 0.6 μbar to 14 μbar, input power from 10 W to 200 W,
and N2 admixture from 0% to 100%. To measure the gas temperature, we
used a simulation of the N2 rotational spectra with a special
computer code. The absolute number density of Cu atoms decreases with
increasing N2 percentage in the buffer gas under all conditions of
this work. The deposition rate, however, does not decrease in the
same manner: its variation is very small and within the accuracy
limit of the quartz-balance measuring equipment. We therefore
conclude that the decrease in the absolute number density of Cu atoms
in the magnetron plasma has little effect on the deposition rate,
because the diffusion of Cu atoms into the chamber volume and the
deviation of Cu atoms from the direct path (towards the substrate)
decrease with increasing N2 percentage in the buffer gas. This is
because of the lower mass of N2 molecules compared to argon atoms.
Abstract: In this paper, a combined approach of two heuristic-based algorithms, a genetic algorithm and tabu search, is proposed. It has been developed to obtain the least cost based on the split-pipe design of looped water distribution networks. The proposed combined algorithm has been applied to solve three well-known water distribution networks taken from the literature. The combination of these two heuristic-based algorithms is aimed at enhancing their strengths and compensating for their weaknesses. Tabu search is rather systematic and deterministic, using adaptive memory in the search process, while the genetic algorithm is a probabilistic and stochastic optimization technique in which the solution space is explored by generating candidate solutions. Split-pipe design may not be realistic in practice, but for optimization purposes, optimal solutions are always achieved with split-pipe design. The solutions obtained in this study show that the least-cost solutions obtained from the split-pipe design are always better than those obtained from the single-pipe design. The results obtained from the combined approach show its ability and effectiveness in solving combinatorial optimization problems. The solutions obtained are very satisfactory and of high quality; the solutions for two of the networks are found to be the lowest-cost solutions yet presented in the literature. The concept of the combined approach proposed in this study is expected to contribute useful benefits in diverse problems.
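As a hedged illustration of the tabu-search half of the combination (on a toy bit-string objective, not the split-pipe network design), the sketch below shows the two ingredients the abstract highlights: adaptive memory (the tabu list) and systematic, deterministic move selection, with a standard aspiration criterion:

```python
import random

random.seed(0)

def tabu_search(cost, n_bits, iters=200, tenure=7):
    """Basic tabu search over bit strings: best non-tabu single-bit flip each step."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], cost(x)
    tabu = {}                                  # bit index -> iteration it becomes free
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            # aspiration: accept a tabu move if it beats the best known solution
            if tabu.get(i, 0) <= it or c < best_cost:
                candidates.append((c, i, y))
        c, i, x = min(candidates)              # deterministic: best admissible move
        tabu[i] = it + tenure                  # flipping bit i is tabu for `tenure` steps
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# hypothetical toy objective: Hamming distance to a hidden target bit pattern
target = [1, 0, 1, 1, 0, 0, 1, 0]
best, bc = tabu_search(lambda z: sum(a != b for a, b in zip(z, target)), 8)
```

In the combined algorithm described above, such a local-search component would refine candidate network designs produced by the genetic algorithm, with the cost function being the pipe cost of a feasible split-pipe design.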
Abstract: Academics and researchers are interested in the effects of social media on college students, with a specific focus on the most popular social media website, Facebook. Previous studies have found contradictory results on the relationship between Facebook usage and student engagement, reporting positive, negative, and non-significant relationships. However, these studies were limited to western higher education systems. This paper fills a gap in the literature by using a sample of 300 Sri Lankan management undergraduates to examine the relationship between Facebook usage and student engagement. Student engagement was measured with a 35-item scale based on the National Survey of Student Engagement, and Facebook usage with the Facebook intensity scale. Descriptive statistics, path analysis, and structural equation modeling were applied as statistical tools and techniques. Results indicate that student engagement was significantly and negatively related to Facebook usage, with the influence running from student engagement to Facebook usage.
Abstract: Student mobility has been increasing in recent decades.
International students can have various psychological and
sociological problems in their adaptation process, and depression is
one of the most important problems in this process. This research
aimed to reveal the level of foreign students' depression, the kinds
of interpersonal communication networks they use (host/ethnic
interpersonal communication), and their media usage (host/ethnic
media usage). Additionally, the study aimed to display the
relationship between depression and communication (host/ethnic
interpersonal communication and host/ethnic media usage) among
foreign university students. A field study was performed among 283
foreign university students attending 8 different universities in
Turkey. A purposeful sampling technique was used for ease of data
collection. Results indicated that 58.3% of the foreign students'
depression level was "intermediate", while 33.2% was "low". In
addition, host interpersonal communication behaviors and Turkish
website usage were negatively and significantly correlated with
depression.
Abstract: This paper proposes a novel approach to the question of lithofacies classification based on an assessment of the uncertainty in the classification results. The proposed approach uses multiple neural networks (NN) and interval neutrosophic sets (INS) to classify input well-log data into multiple classes of lithofacies. A pair of n-class neural networks is used to predict n degrees of truth membership and n degrees of false membership. Indeterminacy memberships, or uncertainties in the predictions, are estimated using a multidimensional interpolation method. These three memberships form the INS used to support confidence in the results of multiclass classification. Based on the experimental data, our approach improves the classification performance as compared to an existing technique applied only to the truth membership. In addition, our approach has the capability to provide a measure of uncertainty in the problem of multiclass classification.
Abstract: The passive electrical properties of a tissue depend on
its intrinsic constituents and structure; therefore, by measuring the
complex electrical impedance of the tissue it might be possible to
obtain indicators of the tissue state or physiological activity [1].
Complete bio-impedance information on the physiology and pathology of
a human body and the functional states of body tissues or organs can
be extracted using a four-electrode measurement setup. This work
presents an impedance estimation and measurement setup based on the
four-electrode technique. First, the complex impedance is estimated
by three different estimation techniques: Fourier, sine correlation,
and digital de-convolution. Then the estimation errors for the
magnitude, phase, reactance, and resistance are calculated and
analyzed for different levels of disturbance in the observations. The
absolute values of the relative errors are plotted, and the graphical
performance of each technique is compared.
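Of the three estimators, sine correlation is the simplest to sketch: correlate the sampled signals with a complex reference at the excitation frequency over an integer number of periods, then take the ratio of the voltage and current phasors. The excitation level, frequency, and tissue impedance below are assumed values, not the paper's setup:

```python
import numpy as np

def sine_correlate(signal, f, fs):
    """Estimate the complex amplitude of `signal` at frequency f
    (sine correlation, i.e. single-bin lock-in detection over an
    integer number of periods)."""
    n = len(signal)
    t = np.arange(n) / fs
    ref = np.exp(-2j * np.pi * f * t)
    return 2.0 * np.mean(signal * ref)        # complex amplitude A*exp(j*phi)

# hypothetical test signals: current excitation and voltage response across Z
fs, f, n = 100_000.0, 1_000.0, 1_000          # 10 full periods in the record
t = np.arange(n) / fs
i_t = 0.002 * np.cos(2 * np.pi * f * t)                 # 2 mA drive
z_true = 500.0 * np.exp(1j * np.deg2rad(-30.0))         # assumed tissue impedance
v_t = np.real(z_true * 0.002 * np.exp(2j * np.pi * f * t))

# complex impedance = voltage phasor / current phasor
z_est = sine_correlate(v_t, f, fs) / sine_correlate(i_t, f, fs)
```

Magnitude, phase, resistance, and reactance then follow directly from `abs(z_est)`, `np.angle(z_est)`, `z_est.real`, and `z_est.imag`.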
Abstract: A higher-order spline-interpolated contour, obtained by
up-sampling homogeneously distributed coordinates, for segmentation
of the kidney region in different classes of ultrasound kidney images
has been developed and is presented in this paper. The performance of
the proposed method is measured and compared with a modified snake
model contour, a Markov random field contour, and an expert-outlined
contour. The method is validated against the expert-outlined contour
using the maximum coordinate distance, Hausdorff distance, and mean
radial distance metrics. The results obtained reveal that the
proposed scheme provides an optimal contour that agrees well with the
expert-outlined contour. Moreover, this technique helps to preserve
the pixels of interest, which specifically define the functional
characteristics of the kidney. This opens up various possibilities
for implementing a computer-aided diagnosis system exclusively for US
kidney images.
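Two of the validation metrics can be sketched directly with SciPy; this is a generic illustration, not the authors' code, and the circular contours are hypothetical stand-ins for segmented and expert-outlined kidney boundaries:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two contours, shapes (n, 2) and (m, 2)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def mean_radial_distance(a, b):
    """Mean, over points of `a`, of the distance to the nearest point of `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# hypothetical contours: unit circle vs. the same circle shifted by 0.1
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
c1 = np.c_[np.cos(theta), np.sin(theta)]
c2 = np.c_[np.cos(theta) + 0.1, np.sin(theta)]
```

The Hausdorff distance captures the worst-case disagreement between the contours, while the mean radial distance captures the average disagreement, which is why both are reported together.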
Abstract: Taxation, a potent fiscal policy instrument through which the infrastructure and social services that drive the development process of any society are funded, has been ineffective in Nigeria. The adoption of appropriate measures is, however, a requirement for the generation of adequate tax revenue. This study set out to investigate efficiency and effectiveness in the administration of tax in Nigeria, using Cross River State as a case study. The methodology to achieve this objective is a qualitative technique using structured questionnaires to survey the three senatorial districts in the state; the central limit theorem is adopted as the analytical technique. Results showed a significant degree of inefficiency in the administration of taxes. It is recommended that periodic review and updating of tax policy will bring innovation and effectiveness to the administration of taxes. Proper appropriation of tax revenue will also drive the development of needed infrastructure and social services.
Abstract: Predicting protein-protein interactions represents a key step in understanding protein function. This is due to the fact that proteins usually work in the context of other proteins and rarely function alone. Machine learning techniques have been applied to predict protein-protein interactions. However, most of these techniques address this problem as a binary classification problem. Although it is easy to get a dataset of interacting proteins as positive examples, there are no experimentally confirmed non-interacting proteins to be considered as negative examples. Therefore, in this paper we solve this problem as a one-class classification problem using one-class support vector machines (SVM). Using only positive examples (interacting protein pairs) in the training phase, the one-class SVM achieves an accuracy of about 80%. These results imply that protein-protein interactions can be predicted using a one-class classifier with accuracy comparable to that of binary classifiers that use artificially constructed negative examples.
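A minimal sketch of the one-class formulation, using scikit-learn's OneClassSVM trained on positive examples only; the Gaussian features below are hypothetical stand-ins for real protein-pair features (e.g. sequence or domain encodings), not the paper's data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# hypothetical features: positives cluster in one region of feature space
positives = rng.normal(loc=0.0, scale=1.0, size=(200, 10))   # training: positives only
outliers = rng.normal(loc=6.0, scale=1.0, size=(50, 10))     # unseen "non-interacting" pairs

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(positives)                    # one-class training: no negative examples needed

pred_pos = clf.predict(positives)     # +1 = predicted interacting (inlier)
pred_out = clf.predict(outliers)      # -1 = predicted non-interacting (outlier)
acc_pos = (pred_pos == 1).mean()
acc_out = (pred_out == -1).mean()
```

The parameter `nu` upper-bounds the fraction of training examples treated as outliers, which is the knob that trades off coverage of the positive class against tightness of the learned boundary.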
Abstract: Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing, and answer processing. The question processing module plays an important role in QA systems by reformulating questions. Moreover, the answer processing module is an emerging topic in QA systems, where systems are often required to rank and validate candidate answers. These techniques, aimed at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering that improves the two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. There are two important components at the base of question processing. The first is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The objective of an answer validation task is thus to judge the correctness of an answer returned by a QA system, according to the text snippet given to support it. For validating answers, we apply candidate answer filtering and candidate answer ranking, followed by a final validation step based on user voting. The paper also describes the new architecture of the question and answer processing modules, with modeling, implementation, and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it better suited to finding exact answers. In an evaluation of the model on a total of 50 questions, the results show a 92% improvement in the system's decisions.
Abstract: During the process of compaction in Hot-Mix Asphalt
(HMA) mixtures, the distance between aggregate particles decreases
as they come together and air voids are eliminated. By measuring the
inter-particle distances in a cut section of an HMA sample, the
degree of compaction can be estimated. For this, a calibration curve
is generated by a computer simulation technique when the gradation
and asphalt content of the HMA mixture are known. A two-dimensional
cross section of an HMA specimen was simulated using the mixture
design information (gradation, asphalt content, and air-void
content). Nearest-neighbor distance methods such as Delaunay
triangulation were used to study the changes in inter-particle
distance and area distribution during the process of compaction in
HMA. Such computer simulations enable several hundred repetitions in
a short period of time, without the need to compact and analyze
laboratory specimens, in order to obtain good statistics on the
defined parameters. The distributions of the statistical parameters
based on computer simulations showed trends similar to those of
laboratory specimens.
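The Delaunay-based inter-particle distance measurement can be sketched as follows. This is a generic illustration, not the authors' simulation: the particle coordinates are random, and a uniform shrink stands in for the effect of compaction:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

def edge_lengths(points):
    """Inter-particle distances along Delaunay triangulation edges of 2D centers."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                # each triangle contributes 3 edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in edges])

# hypothetical aggregate centers in a 100 x 100 mm cut section
loose = rng.uniform(0, 100, (150, 2))
compacted = loose * 0.8                          # uniform shrink as a stand-in for compaction
```

Comparing the edge-length distribution of a measured cut section against such simulated distributions at known degrees of compaction is what the calibration curve above formalizes.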
Abstract: In this paper, three basic approaches, and different
methods under each of them, for extracting a region of interest
(ROI) from stationary images are explored. The results obtained for
each of the proposed methods are shown, and it is demonstrated where
each method outperforms the others. Two main problems in ROI
extraction, the channel selection problem and the saliency reversal
problem, are discussed, along with how well each method addresses
them. The basic approaches are: 1) a saliency-based approach, 2) a
wavelet-based approach, and 3) a clustering-based approach. The
saliency approach performs well on images containing objects of high
saturation and brightness. The wavelet-based approach performs well
on natural-scene images that contain regions of distinct texture.
The mean shift clustering approach partitions the image into regions
according to the density distribution of pixel intensities. The
experimental results of the various methodologies show that each
technique performs at a different, acceptable level for various
types of images.
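As a hedged illustration of the clustering-based approach, the sketch below applies mean shift to the pixel intensities of a synthetic image and takes the brightest cluster as the ROI. The image, bandwidth, and ROI-selection rule are assumptions for the sketch, not the paper's method:

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)

# hypothetical grayscale "image": a bright object region on a darker background
img = rng.normal(60, 5, (40, 40))
img[10:25, 10:25] = rng.normal(180, 5, (15, 15))   # bright 15x15 ROI

# mean shift partitions pixels by the density distribution of their intensities
intensities = img.reshape(-1, 1)
labels = MeanShift(bandwidth=25).fit_predict(intensities).reshape(img.shape)

# take the cluster with the highest mean intensity as the ROI
roi_label = max(np.unique(labels), key=lambda l: img[labels == l].mean())
mask = labels == roi_label
```

On real color images, the same clustering would typically run on a richer feature vector (intensity plus position or color channels) rather than intensity alone.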
Abstract: Entrepreneurship has become an important and
extensively researched concept in business studies. Research on
foreign direct investment (FDI) has become widespread due to the
growth of FDI and its importance in globalization. Most
entrepreneurship studies have examined the importance and influence
of entrepreneurial orientation in a micro-level context. On the
other hand, studies concerning FDI have used statistical techniques
to analyze the effects, determinants, and motives of FDI on a
macroeconomic level, ignoring empirical studies of other
non-economic determinants. In order to bridge the gap between the
theory and empirical evidence on FDI and the theory and research on
entrepreneurship, this study examines the impact of entrepreneurship
on inward foreign direct investment. The relationship between
entrepreneurship and foreign direct investment is investigated
through regression analysis of pooled time-series and
cross-sectional data. The results suggest that entrepreneurship has
a significant effect on FDI.
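A pooled time-series/cross-section regression of the kind described can be sketched with ordinary least squares. The panel dimensions, variables, and coefficients below are synthetic assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical pooled panel: entrepreneurship index and inward FDI for
# 20 countries observed over 10 years, stacked into one sample
n_countries, n_years = 20, 10
n = n_countries * n_years
entrepreneurship = rng.uniform(0, 10, n)
controls = rng.uniform(0, 1, n)                    # stand-in control variable
fdi = (2.0 + 0.8 * entrepreneurship + 0.5 * controls
       + rng.normal(0, 0.3, n))                    # synthetic outcome

# pooled OLS: regress FDI on an intercept, entrepreneurship, and the control
X = np.column_stack([np.ones(n), entrepreneurship, controls])
beta, *_ = np.linalg.lstsq(X, fdi, rcond=None)
```

`beta[1]` is the pooled estimate of the entrepreneurship effect; a full panel analysis would also consider country fixed effects and robust standard errors.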
Abstract: In this paper, an automatic determination algorithm for nuclear magnetic resonance (NMR) spectra of the metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations, is presented. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main achievement of this paper over the work in the literature [30] is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can process complex signals perfectly without any additional computation steps, which is a valuable advantage since NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)", which takes into account the damping rate of the NMR signal, is developed to be faster than that presented in [30]. Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local-minimum problem in Hopfield neural networks. Moreover, the proposed method is evaluated, and the use of the modified Hopfield neural networks has no adverse effect on the performance of the calculations.
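The frequency-domain speed-up rests on the correlation theorem: circular cross correlation can be computed via the FFT in O(n log n) instead of O(n^2). A minimal sketch of that identity with complex-valued signals (an illustration of the identity only, not the authors' network):

```python
import numpy as np

def cross_correlate_fft(x, w):
    """Circular cross correlation via the frequency domain:
    corr(x, w) = IFFT( FFT(x) * conj(FFT(w)) ), costing O(n log n)."""
    return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w)))

def cross_correlate_direct(x, w):
    """Direct O(n^2) circular cross correlation, for checking the FFT version."""
    n = len(x)
    return np.array([sum(x[(k + m) % n] * np.conj(w[m]) for m in range(n))
                     for k in range(n)])

rng = np.random.default_rng(0)
# complex-valued test signals, mirroring the complex-valued nature of NMR data
x = rng.normal(size=16) + 1j * rng.normal(size=16)
w = rng.normal(size=16) + 1j * rng.normal(size=16)
```

Because both inputs stay complex throughout, no extra steps are needed to handle complex-valued signals, which is the property the abstract emphasizes for NMR data.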
Abstract: This paper describes the design concepts and
implementation of a 5-joint mechanical arm for a rescue robot named
CEO Mission II. The multi-joint arm is a five-degree-of-freedom
mechanical arm with a four-bar linkage, which can be stretched to
125 cm long. It is controlled by a teleoperator via a user-friendly
control and monitoring GUI program. Using the inverse kinematics
principle, we developed a method to control the servo angles of all
arm joints to reach the desired tip position. By clicking the
desired tip position or dragging the tip of the mechanical arm on
the computer screen to the desired target point, the robot computes
and moves its multi-joint arm to the pose seen on the GUI screen.
The angles of all joints are calculated and sent to the joint servos
simultaneously in order to move the mechanical arm to the desired
pose at once. The operator can also use a joystick to control the
movement of this mechanical arm and the locomotion of the robot.
Many sensors are installed at the tip of the mechanical arm for
surveillance from a high level and for obtaining the vital signs of
victims more easily and quickly in urban search and rescue tasks.
The arm works very effectively and is easy to control. This
mechanical arm and its software were developed as part of the CEO
Mission II rescue robot that won the First Runner-Up award and the
Best Technique award at the Thailand Rescue Robot Championship 2006.
It is a low-cost, simple, but functional 5-joint mechanical arm
built from scratch and controlled via wireless LAN 802.11b/g. This
5-joint mechanical arm hardware concept and its software can also be
used as a basic mechatronics platform for many real applications.
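The inverse-kinematics step can be illustrated with the standard closed-form solution for a planar two-link arm. This is a sketch, not the robot's actual 5-DOF solver; the link lengths below are hypothetical, chosen only so the total reach matches 125 cm:

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a planar 2-link arm (illustrative
    only; the CEO Mission II arm has 5 DOF and a four-bar linkage).
    Returns joint angles (theta1, theta2) in radians reaching tip (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)     # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    s2 = math.sqrt(1.0 - c2 * c2) * (1 if elbow_up else -1)
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

In the GUI workflow described above, each clicked or dragged tip position would be fed to such a solver, and the resulting joint angles sent to all servos simultaneously.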
Abstract: Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products in space technologies. A neural network is a parallel system capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that offers reprogrammability and robust flexibility. For neural-network-based instrument prototypes in real-time applications, conventional application-specific VLSI neural chip design suffers limitations in time and cost. With low-precision artificial neural network designs, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips, so many researchers have made great efforts toward realizing neural networks (NN) using the FPGA technique. In this paper, a brief introduction to ANNs and the FPGA technique is given. Hardware description language (VHDL) code is proposed to implement ANNs, and simulation results with floating-point arithmetic are presented. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation provides a flexible, fast method with a high degree of parallelism for implementing ANNs. Implementing a multi-layer NN using lookup tables (LUTs) reduces the resource utilization and execution time of the implementation.
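The LUT-based activation mentioned above can be sketched in software. The following Python model (not VHDL, and not the authors' design; the table size and input range are assumptions) shows the idea of replacing the costly sigmoid with a precomputed table, as is typical in FPGA neuron implementations:

```python
import math

# precomputed activation table: 256 entries covering an assumed input range
LUT_BITS = 8
LUT_SIZE = 1 << LUT_BITS
X_MIN, X_MAX = -8.0, 8.0
STEP = (X_MAX - X_MIN) / (LUT_SIZE - 1)
SIGMOID_LUT = [1.0 / (1.0 + math.exp(-(X_MIN + i * STEP)))
               for i in range(LUT_SIZE)]

def sigmoid_lut(x):
    """Sigmoid approximated by a 256-entry lookup table, saturating at the ends."""
    i = int(round((x - X_MIN) / STEP))
    return SIGMOID_LUT[max(0, min(LUT_SIZE - 1, i))]

def neuron(inputs, weights, bias):
    """One LUT-activated neuron: weighted sum followed by a table lookup."""
    return sigmoid_lut(sum(w * v for w, v in zip(weights, inputs)) + bias)
```

On an FPGA, the table lives in block RAM and the index computation is fixed-point, so the activation costs one memory read instead of an exponential evaluation.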
Abstract: In this work, we present a comparison between
different techniques of image compression. First, the image is
divided into blocks, which are organized according to a certain
scan. Later, several compression techniques are applied, combined or
alone. Such techniques include wavelets (Haar's basis) and the
Karhunen-Loève Transform. Simulations show that the combined
versions are the best, with a lower Mean Squared Error (MSE), a
higher Peak Signal to Noise Ratio (PSNR), and better image quality,
even in the presence of noise.
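The quality metrics used in the comparison, together with one level of the Haar transform, can be sketched as follows. This is a generic illustration, not the authors' code; the peak value of 255 assumes 8-bit images:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (assumes 8-bit images by default)."""
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

def haar_step(block):
    """One level of the 2D Haar transform on an even-sized block."""
    avg = (block[:, ::2] + block[:, 1::2]) / 2.0      # row pass: averages
    dif = (block[:, ::2] - block[:, 1::2]) / 2.0      # row pass: details
    rows = np.hstack([avg, dif])
    avg = (rows[::2, :] + rows[1::2, :]) / 2.0        # column pass: averages
    dif = (rows[::2, :] - rows[1::2, :]) / 2.0        # column pass: details
    return np.vstack([avg, dif])
```

Compression then follows from quantizing or discarding the small detail coefficients; MSE and PSNR between the original and the reconstruction quantify the loss.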
Abstract: Object-oriented database management systems, which
appeared around 1986, had not achieved the expected success five
years after their birth. One of the major difficulties is query
optimization. In this paper, we propose a new approach that enriches
the existing query optimization techniques for object-oriented
databases. Given the success of query optimization in the relational
model, our approach draws on these optimization techniques and
enriches them so that they can support the new concepts introduced
by object databases.