Abstract: Processing data by computers and performing reasoning tasks on it is an important aim in Computer Science, and the Semantic Web is one step towards it. The use of ontologies to enhance information semantically is the current trend. Huge amounts of domain-specific, unstructured on-line data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services. They also want to use their native languages to express what they search for. In this paper, an ontology-based automated question answering system on the software test documents domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Converting the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on identifying the suitable question type and parsing the words of the question sentence.
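The two stages the abstract names, detecting the question type and parsing the remaining words into a query, can be sketched as follows. The type table, stop-word list, and the SPARQL-like triple template are all illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical question-type table and stop words (assumed for illustration).
QUESTION_TYPES = {"what": "definition", "who": "person", "when": "time",
                  "where": "location", "how": "procedure"}
STOP_WORDS = {"is", "are", "the", "a", "an", "of", "in", "do", "does"}

def to_ontology_query(question):
    words = question.lower().rstrip("?").split()
    qtype = QUESTION_TYPES.get(words[0], "general")
    keywords = [w for w in words[1:] if w not in STOP_WORDS]
    # Map keywords onto a SPARQL-like triple pattern over the domain ontology.
    pattern = " . ".join('?x :hasKeyword "%s"' % k for k in keywords)
    return {"type": qtype, "query": "SELECT ?x WHERE { %s }" % pattern}

q = to_ontology_query("What is a regression test?")
print(q["type"], "|", q["query"])
```

A real system would match the keywords against ontology classes and properties rather than a flat keyword predicate; the sketch only shows the question-word-driven control flow.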
Abstract: In this study, a classification-based video super-resolution method using an artificial neural network (ANN) is proposed to enhance low-resolution (LR) frames to high-resolution (HR) frames. The proposed method consists of four main steps: classification, motion-trace volume collection, temporal adjustment, and ANN prediction. A classifier is designed based on the edge properties of a pixel in the LR frame to identify the spatial information. To exploit the spatio-temporal information, a motion-trace volume is collected using motion estimation, which can eliminate untraceable object motion in the LR frames. In addition, a temporal lateral process is employed for volume adjustment to reduce unnecessary temporal features. Finally, an ANN is applied to each class to learn the complicated spatio-temporal relationship between LR and HR frames. Simulation results show that the proposed method successfully improves both peak signal-to-noise ratio and perceptual quality.
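The classification step can be sketched as below: each pixel is assigned a class from its local edge properties. The three-class rule (flat / vertical edge / horizontal edge) and the gradient threshold are assumptions chosen for illustration, not the paper's classifier.

```python
import numpy as np

def classify_pixels(frame, thresh=0.1):
    gy, gx = np.gradient(frame.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    cls = np.zeros(frame.shape, dtype=int)      # class 0: flat region
    edge = mag > thresh
    cls[edge & (np.abs(gx) >= np.abs(gy))] = 1  # class 1: mostly vertical edge
    cls[edge & (np.abs(gx) < np.abs(gy))] = 2   # class 2: mostly horizontal edge
    return cls

frame = np.zeros((8, 8))
frame[:, 4:] = 1.0                              # a vertical step edge
cls = classify_pixels(frame)
print(np.unique(cls))                           # flat and vertical-edge classes
```

Training a separate predictor per class, as the abstract describes, then amounts to routing each pixel's spatio-temporal volume to the network for its class.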
Abstract: The main purpose of this research is to address the role of psychological harassment behaviors (mobbing) to which employees are exposed, and of personality characteristics, in work alienation. The research population was composed of the employees of a Provincial Special Administration. A survey with four sections was created to measure the variables and achieve the basic goals of the research. Correlation and stepwise regression analyses were performed to investigate the separate and overall effects of the sub-dimensions of psychological harassment behaviors and personality characteristics on the work alienation of employees. Correlation analysis revealed significant but weak relationships between work alienation and both psychological harassment and personality characteristics. Stepwise regression analysis also revealed significant relationships between the work alienation variable and assault on personality, direct negative behaviors (sub-dimensions of mobbing), and openness (a sub-dimension of personality characteristics). Each variable was introduced into the model step by step to investigate the effects of the significant variables in explaining the variation in work alienation. While the explanation ratio of the first model was 13%, the last model, including three variables, had an explanation ratio of 24%.
Abstract: Let T and S be subspaces of C^n and C^m, respectively. Then for A ∈ C^{m×n} satisfying AT ⊕ S = C^m, the generalized inverse A^(2)_{T,S} is given by A^(2)_{T,S} = (P_{S⊥} A P_T)†. In this paper, a finite iterative formula is presented to compute the generalized inverse A^(2)_{T,S} under the concept of a restricted inner product, defined as ⟨A, B⟩_{T,S} = ⟨P_{S⊥} A P_T, B⟩ for A, B ∈ C^{m×n}. With this iterative method, taking the initial matrix X_0 = P_T A* P_{S⊥}, the generalized inverse A^(2)_{T,S} can be obtained within at most mn iteration steps in the absence of roundoff errors. Finally, a numerical example shows that the iterative formula is quite efficient.
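The closed form above can be checked numerically. In the sketch below, T is a random 2-dimensional subspace (real-valued for simplicity) and S = (AT)⊥, a choice under which AT ⊕ S = C^m holds generically; numpy's direct pseudoinverse stands in for the paper's finite iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                   # m = 4, n = 3
B = np.linalg.qr(rng.standard_normal((3, 2)))[0]  # orthonormal basis of T
P_T = B @ B.T                                     # orthogonal projector onto T
C = np.linalg.qr(A @ B)[0]                        # orthonormal basis of AT
P_Sperp = C @ C.T                                 # projector onto S⊥ = AT
X = np.linalg.pinv(P_Sperp @ A @ P_T)             # candidate A^(2)_{T,S}
# Any {2}-inverse satisfies X A X = X, and here X has range contained in T.
print(np.allclose(X @ A @ X, X), np.allclose(P_T @ X, X))
```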
Abstract: The World Wide Web, coupled with the ever-increasing sophistication of online technologies and software applications, puts greater emphasis on the need for even more sophisticated and consistent quality-requirements modeling than traditional software applications do. Web sites and Web applications (WebApps) are becoming more information-driven and content-oriented, raising concern about their information quality (InQ). The consistent and consolidated modeling of InQ requirements for WebApps at different stages of the life cycle still poses a challenge. This paper proposes an approach to specifying InQ requirements for WebApps by reusing and extending the ISO 25012:2008(E) data quality model. We also discuss the learnability aspect of information quality for WebApps. The proposed ISO 25012-based InQ framework is a step towards a standardized approach to evaluating WebApp InQ.
Abstract: Fatigue life prediction and evaluation are key technologies for assuring the safety and reliability of automotive rubber components. The objective of this study is to develop a fatigue analysis process for vulcanized rubber components that is applicable to predicting fatigue life at the initial product design step. A fatigue life prediction methodology for vulcanized natural rubber was proposed by incorporating finite element analysis and a fatigue damage parameter, the maximum strain appearing at the critical location determined from the fatigue test. In order to develop an appropriate fatigue damage parameter for the rubber material, a series of displacement-controlled fatigue tests was conducted using three-dimensional dumbbell specimens with different levels of mean displacement. It was shown that the maximum strain was a proper damage parameter, taking the mean displacement effects into account. Nonlinear finite element analyses of the three-dimensional dumbbell specimens were performed based on a hyper-elastic material model determined from uni-axial tension, equi-biaxial tension and planar tests. The fatigue analysis procedure employed in this study could be used for approximate fatigue design.
Abstract: The objective of the present paper is a numerical analysis of the flow forces acting on the spool surfaces of a pressure-regulated valve. The transient, compressible and turbulent flow structures inside the valve are simulated using ANSYS FLUENT coupled with a special UDF. Here, the valve inlet pressure is varied in a stepwise manner. For every value of inlet pressure, the transient analysis leads to a quasi-static flow through the valve. Spool forces are calculated for the different inlet pressures. From this information on spool forces, the pressure characteristic of the passive control circuit has been derived.
Abstract: The reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM), using a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Mihailov stability criterion and the continued fraction expansion (CFE) technique: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example.
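The evolutionary step can be sketched as below: a global-best PSO searches for a first-order model k/(s + a) minimising the ISE against the step response of a known second-order model. The example original system 1/(s^2 + 3s + 2) and all PSO hyper-parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 8.0, 400)
# Closed-form unit step response of 1/((s + 1)(s + 2)).
y_full = 0.5 - np.exp(-t) + 0.5 * np.exp(-2.0 * t)

def ise(params):
    k, a = params
    if a <= 0:                                   # reject unstable candidates
        return np.inf
    y_red = (k / a) * (1.0 - np.exp(-a * t))     # step response of k/(s + a)
    return float(np.sum((y_full - y_red) ** 2) * (t[1] - t[0]))

# Standard global-best PSO: inertia plus cognitive and social attraction.
n_particles, iters = 20, 60
pos = rng.uniform(0.1, 3.0, size=(n_particles, 2))   # (k, a) candidates
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([ise(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([ise(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
print(gbest, ise(gbest))    # a low-ISE (k, a) pair
```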
Abstract: In this paper we propose a segmentation approach based on the Vector Quantization technique. Here we have used Kekre's fast codebook generation algorithm for segmenting low-altitude aerial images. This is used as a preprocessing step to form segmented homogeneous regions. Further, color similarity and volume difference criteria are used to merge adjacent regions. Experiments performed with real aerial images of varied nature demonstrate that this approach results in neither over-segmentation nor under-segmentation. Vector quantization seems to give far better results than the conventional on-the-fly watershed algorithm.
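The preprocessing step can be sketched as below: pixels are quantized to a small color codebook and labeled by their nearest codevector, yielding homogeneous regions. Plain k-means iteration is used here as a stand-in for Kekre's fast codebook generation algorithm named in the abstract.

```python
import numpy as np

def vq_segment(image, k=2, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    codebook = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)                    # nearest codevector
        for j in range(k):
            if np.any(labels == j):                  # recenter non-empty cells
                codebook[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(image.shape[:2]), codebook

# Two flat color regions should come back as two homogeneous segments.
img = np.zeros((4, 8, 3))
img[:, 4:] = [255.0, 0.0, 0.0]
labels, _ = vq_segment(img, k=2)
print(len(np.unique(labels)))
```

The region-merging stage would then compare segment mean colors and sizes, per the color similarity and volume difference criteria.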
Abstract: Over the last two decades, owing to the hostility of the internet environment, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial domain data hiding techniques, the information is embedded directly on the image plane itself. In transform domain data hiding techniques, the image is first changed from the spatial domain to some other domain, and then the secret information is embedded so that it remains more secure from any attack. Information hiding algorithms in the time or spatial domain have high capacity and relatively lower robustness. In contrast, algorithms in the transform domain, such as DCT and DWT, have a certain robustness against some multimedia processing. In this work the authors propose a novel steganographic method for hiding information in the transform domain of a gray scale image. The proposed approach works by converting the gray level image into the transform domain using a discrete integer wavelet technique through the lifting scheme. This approach performs a 2-D lifting wavelet decomposition through the Haar lifted wavelet of the cover image and computes the approximation coefficients matrix CA and the detail coefficients matrices CH, CV, and CD. The next step is to apply the pixel mapping method (PMM) to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
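The decomposition step the abstract describes, one level of 2-D integer Haar lifting producing the CA, CH, CV, and CD sub-bands, can be sketched as below. The predict/update pair is the standard integer Haar (S-transform) lifting, assumed rather than taken from the paper; floor divisions keep every coefficient an integer, which is what makes the transform exactly invertible for steganography.

```python
import numpy as np

def haar_lift_1d(x):
    even, odd = x[..., 0::2], x[..., 1::2]
    d = odd - even                    # predict step: detail coefficients
    a = even + d // 2                 # update step: approximation coefficients
    return a, d

def haar_lift_2d(img):
    a, d = haar_lift_1d(img)          # lift along rows first
    ca, ch = haar_lift_1d(a.T)        # then along columns of each branch
    cv, cd = haar_lift_1d(d.T)
    return ca.T, ch.T, cv.T, cd.T     # CA, CH, CV, CD sub-bands

img = np.arange(16, dtype=np.int64).reshape(4, 4)   # toy integer cover image
ca, ch, cv, cd = haar_lift_2d(img)
print(ca.shape, ca.dtype)                           # quarter-size, still integer
```

PMM embedding would then modify selected detail coefficients before the inverse lifting reconstructs the stego image.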
Abstract: Planning the development of economic activities has various dimensions, one of which is determining the adequate capacity of economic activities in provinces with regard to the government's goals. Aligning the planning goals of economic activities development, including the subjects focused on in the vision statement, is effective for better realizing the statement's goals. The current paper presents a native framework for economic activities development at the provincial level. The three steps within the framework are concordant with achieving the vision statement's goals. In the first step of the proposed framework, economic activities are prioritized in terms of employment indices; secondly, economic activities related to the province's relative advantages are recognized. In the third step, the desirable capacity of economic activities is determined with regard to the government's goals and the effective constraints on economic activities development. Development of economic activities related to the provinces' relative advantages contributes to regional balance and to the equal development of economic activities. Furthermore, the results of the framework enable more confident investment, promote employment development and remove unemployment concerns, the main goals of the vision statement.
Abstract: This paper proposes new enhancement models for nonlinear anisotropic diffusion methods to greatly reduce speckle and preserve image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge-enhancing (EE) and coherence-enhancing (CE) diffusion. The new enhancement methods were tested on various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction and of edge and coherence enhancement. Scatterer-density-weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
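The gradient-weighted diffusion that SDWNAD extends can be sketched with a scalar Perona-Malik step, in which the diffusivity shrinks with gradient magnitude so edges are smoothed less than speckle. Adding the local scatterer-density estimate to this weight is the paper's contribution and is not reproduced here; borders are handled periodically (np.roll) for brevity.

```python
import numpy as np

def diffusion_step(img, kappa=0.1, dt=0.2):
    n = np.roll(img, 1, axis=0) - img           # differences to 4 neighbours
    s = np.roll(img, -1, axis=0) - img
    e = np.roll(img, -1, axis=1) - img
    w = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-((d / kappa) ** 2))   # diffusivity, small at edges
    return img + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)

noisy = np.random.default_rng(0).normal(0.5, 0.05, (16, 16))
out = diffusion_step(noisy)
print(out.std() < noisy.std())                  # one step already reduces variance
```

Tensor-based variants (EE and CE diffusion) replace the scalar g with a diffusion tensor built from the structure tensor, steering smoothing along coherent orientations.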
Abstract: Automatic segmentation of skin lesions is the first step towards the automated analysis of malignant melanoma. Although numerous segmentation methods have been developed, few studies have focused on determining the most effective color space for the melanoma application. This paper proposes an automatic segmentation algorithm based on color space analysis and clustering-based histogram thresholding, a process which is able to determine the optimal color channel for detecting the borders in dermoscopy images. The algorithm is tested on a set of 30 high-resolution dermoscopy images. A comprehensive evaluation of the results is provided, in which borders manually drawn by four dermatologists are compared to the automated borders detected by the proposed algorithm, applying three previously used metrics of accuracy, sensitivity, and specificity and a new metric of similarity. By performing ROC analysis and ranking the metrics, it is demonstrated that the best results are obtained with the X and XoYoR color channels, resulting in an accuracy of approximately 97%. The proposed method is also compared with two state-of-the-art skin lesion segmentation methods.
Abstract: This paper describes futures trading and aims to design a trading strategy for speculators. The problem is formulated as a decision-making task and solved as such. The solution of the task leads to complex mathematical problems, and approximations of the decision making are required. Two kinds of approximation are used in the paper: Monte Carlo for the multi-step prediction and iteration spread in time for the optimization. The solution is applied to real-market data, and the results of the off-line experiments are presented.
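The Monte Carlo approximation to multi-step prediction can be sketched as below: instead of propagating the predictive distribution analytically, many price paths are simulated from a one-step model and averaged at the horizon. The geometric-random-walk model and its parameters are illustrative assumptions, not the paper's market model.

```python
import numpy as np

def mc_multistep(price0, mu, sigma, horizon, n_paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    steps = rng.normal(mu, sigma, size=(n_paths, horizon))  # one-step log-returns
    paths = price0 * np.exp(np.cumsum(steps, axis=1))       # simulated price paths
    return paths[:, -1].mean()      # MC estimate of the expected horizon price

est = mc_multistep(price0=100.0, mu=0.0, sigma=0.01, horizon=5)
print(est)                          # close to 100 * exp(5 * 0.01**2 / 2)
```

In a trading context the same simulated paths would also feed the expected utility or loss of each candidate decision, which is what the decision-making formulation optimizes over.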
Abstract: Cloud computing has become increasingly mature over the last few years, and consequently the demand for better cloud services is increasing rapidly. One of the research topics for improving cloud services is desktop computing in a virtualized environment. This paper aims at the development of an adaptive virtual desktop service on a cloud computing platform, based on our previous research on virtualization technology. We implement a cloud virtual desktop and application software streaming technology that make it possible to provide Virtual Desktop as a Service (VDaaS). Given the development of remote desktop virtualization, the user's desktop can be shifted from the traditional PC environment to the cloud-enabled environment, where it is stored on a remote virtual machine rather than locally. This proposed effort has the potential to provide an efficient, resilient and elastic environment for online cloud services. Users no longer need to bear the burden of platform maintenance, and the overall cost of hardware and software licenses is drastically reduced. Moreover, this flexible remote desktop service represents the next significant step towards the mobile workplace, as it lets users access their desktop environments from virtually anywhere.
Abstract: The present paper deals with the analysis and development of a noise-reduction transformer that has a filter function against conducted noise transmission. Two types of prototype noise-reduction transformers with two different output voltages are proposed. To determine an optimum design for the noise-reduction transformer, noise attenuation characteristics are discussed based on experiments and equivalent circuit analysis. The analysis gives a relation between the circuit parameters and the noise attenuation. A high-performance step-down noise-reduction transformer for direct power supply to electronic equipment is developed. The input voltage of the transformer is 100 V and the output voltage is 5 V. Frequency characteristics of the noise attenuation are discussed, and the prevention of pulse noise transmission is demonstrated. The normal-mode noise attenuation of this transformer is -80 dB, and the common-mode attenuation exceeds -90 dB. The step-down noise-reduction transformer eliminates pulse noise efficiently.
Abstract: The aim of this paper is to rank the impact of Object-Oriented (OO) metrics in fault prediction modeling using Artificial Neural Networks (ANNs). Past studies on the empirical validation of object-oriented metrics as fault predictors using ANNs have focused on the predictive quality of neural networks versus standard statistical techniques. In this empirical study we turn our attention to the capability of ANNs to rank the impact of these explanatory metrics on fault proneness. In the ANN data analysis approach, there is no clear method of ranking the impact of individual metrics. Five ANN-based techniques that rank object-oriented metrics for predicting the fault proneness of classes are studied: i) the overall connection weights method, ii) Garson's method, iii) the partial derivatives method, iv) the input perturbation method, and v) the classical stepwise method. We develop and evaluate different prediction models based on the rankings of the metrics by the individual techniques. The models based on the overall connection weights and partial derivatives methods have been found to be the most accurate.
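Technique i) can be sketched as below: each input metric's importance is the sum over hidden units of the product of its input-to-hidden weight and that unit's hidden-to-output weight. The tiny hand-set network is an illustrative assumption, not a trained fault-prediction model.

```python
import numpy as np

def connection_weight_importance(W_ih, W_ho):
    # W_ih: (n_inputs, n_hidden); W_ho: (n_hidden,) for a single output unit.
    # Importance of input i = sum_j W_ih[i, j] * W_ho[j].
    return W_ih @ W_ho

W_ih = np.array([[2.0, 1.5],     # metric 0: strongly wired to both hidden units
                 [0.2, 0.1],     # metric 1: weakly wired
                 [-0.5, 0.3]])   # metric 2: mixed-sign connections
W_ho = np.array([1.0, 0.5])
scores = connection_weight_importance(W_ih, W_ho)
ranking = np.argsort(-np.abs(scores))   # most influential metric first
print(ranking)                          # metric 0 comes first
```

Garson's method differs in that it normalizes the absolute weight products per hidden unit before summing, which discards the sign information kept here.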
Abstract: We demonstrate a non-faradaic electrochemical impedance spectroscopy measurement of biochemically modified gold-plated electrodes using a two-electrode system. The absence of any redox indicator in the impedance measurements provides a more precise and accurate characterization of the measured bioanalyte at molecular resolution. An equivalent electrical circuit of the electrode-electrolyte interface was deduced from the observed impedance data of saline solution at low and high concentrations. The detection of biomolecular interactions was fundamentally correlated to the electrical double-layer variation at the modified interface. The investigations were done using 20-mer deoxyribonucleic acid (DNA) strands without any label. Surface modification was performed by creating a mixed monolayer of the thiol-modified single-stranded DNA and a spacer thiol (mercaptohexanol) by a two-step self-assembly method. The results clearly distinguish between noncomplementary and complementary hybridization of DNA in the low-frequency region below several hundred Hertz.
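A minimal non-faradaic interface model consistent with the abstract can be sketched as below: with no redox couple there is no charge-transfer branch, so the equivalent circuit reduces to a solution resistance R_s in series with the double-layer capacitance C_dl. Both component values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def interface_impedance(freq, R_s=200.0, C_dl=1e-6):
    omega = 2.0 * np.pi * freq
    return R_s + 1.0 / (1j * omega * C_dl)      # Z(w) = R_s + 1/(j w C_dl)

f = np.logspace(0, 5, 6)                        # 1 Hz .. 100 kHz
Z = interface_impedance(f)
# C_dl dominates |Z| at low frequency, which is why hybridization-induced
# double-layer changes show up most strongly below a few hundred Hertz.
print(np.abs(Z[0]) > np.abs(Z[-1]))
```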
Abstract: This research intends to introduce a new use of Artificial Intelligence (AI) approaches in the Stepping Stone Detection (SSD) field of research. Using the Self-Organizing Map (SOM) approach as the engine, the experiments show that SOM has the capability to detect the number of connection chains involved in a stepping-stone attack. Since counting the number of connection chains is one of the important steps of stepping stone detection and has become a current research focus, this research has chosen SOM as the AI technique because of this capability. Through the experiments, it is shown that SOM can detect the number of involved connection chains in Network-based Stepping Stone Detection (NSSD).
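The chain-counting idea can be sketched with a minimal one-dimensional SOM: feature vectors of two well-separated "connection chains" are mapped onto a line of neurons, and distinct chains end up on distinct best-matching units. The synthetic data and all SOM hyper-parameters are illustrative assumptions, not the paper's NSSD setup.

```python
import numpy as np

def train_som(data, n_units=5, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for step in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))      # best matching unit
        lr = 0.5 * (1.0 - step / iters)                          # decaying rate
        sigma = max(1.0, (n_units / 2) * (1.0 - step / iters))   # neighbourhood width
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2.0 * sigma ** 2))
        w += lr * h[:, None] * (x - w)                           # pull BMU + neighbours
    return w

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.1, 0.02, (30, 3)),   # "chain A" feature vectors
                  rng.normal(0.9, 0.02, (30, 3))])  # "chain B" feature vectors
w = train_som(data)
bmus_a = {int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in data[:30]}
bmus_b = {int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in data[30:]}
print(len(bmus_a | bmus_b))   # occupied units roughly count the chains
```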
Abstract: The vertex connectivity of a graph is the smallest number of vertices whose deletion separates the graph or makes it trivial. This work is devoted to the problem of testing the vertex connectivity of graphs in a distributed environment based on a general and constructive approach. The contribution of this paper is threefold. First, using a pre-constructed spanning tree of the considered graph, we present a protocol to test whether a given graph is 2-connected using only local knowledge. Second, we present an encoding of this protocol using graph relabeling systems. The last contribution is the implementation of this protocol in the message passing model. For a given graph G, where M is the number of its edges, N the number of its nodes and Δ its degree, our algorithms have the following requirements: the first one uses O(Δ×N²) steps and O(Δ×log Δ) bits per node; the second one uses O(Δ×N²) messages, O(N²) time and O(Δ×log Δ) bits per node. Furthermore, the studied network is semi-anonymous: only the root of the pre-constructed spanning tree needs to be identified.
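The property the protocol tests can be stated centrally: a graph is 2-connected iff it is connected, has at least three vertices, and has no articulation point. The classical DFS low-link check below is a centralized stand-in for the paper's spanning-tree-based local protocol, included only to make the tested property concrete.

```python
def is_biconnected(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc, low = {}, {}
    state = {"time": 0, "articulation": False}

    def dfs(u, parent):
        disc[u] = low[u] = state["time"]
        state["time"] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    state["articulation"] = True   # u separates v's subtree
            elif v != parent:
                low[u] = min(low[u], disc[v])      # back edge shortcut
        if parent is None and children > 1:
            state["articulation"] = True           # root with 2+ DFS children

    dfs(0, None)
    connected = len(disc) == n
    return n >= 3 and connected and not state["articulation"]

print(is_biconnected(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # a cycle: True
print(is_biconnected(3, [(0, 1), (1, 2)]))                  # a path: False
```

The distributed protocol reaches the same verdict using only local knowledge along the pre-constructed spanning tree, at the message and bit costs quoted in the abstract.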