Abstract: In this paper, the use of sequential machines for recognizing the actions taken by objects detected by a general tracking algorithm is proposed. The system can deal with the uncertainty inherent in medium-level vision data; for this purpose, the input data are fuzzified. Moreover, this transformation allows the data to be managed independently of the tracking application selected and enables characteristics of the analyzed scenario to be added. The representation of actions by means of an automaton and the generation of the input symbols for the finite automaton, depending on the object and action compared, are described. The output of the comparison process between an object and an action is a numerical value that represents the degree of membership of the object in the action, computed according to how similar the object and the action are. The work concludes with the application of the proposed technique to identifying the behavior of vehicles in road traffic scenes.
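The comparison the abstract describes (scoring an object's observed symbol sequence against an action automaton to obtain a membership value) can be sketched as a fuzzy finite automaton evaluated with min-max (Zadeh) composition. Everything below (states, symbols, transition degrees, the toy "overtaking" action) is illustrative; the paper's actual symbol generation and membership computation are not specified in the abstract.

```python
# Hypothetical sketch: transitions carry membership degrees, a run is scored
# by taking the minimum degree along a path and the maximum over paths.

def action_membership(transitions, start, accepting, symbols):
    """transitions: {(state, symbol): [(next_state, degree), ...]}"""
    current = {start: 1.0}          # state -> best membership reached so far
    for sym in symbols:
        nxt = {}
        for state, deg in current.items():
            for nstate, tdeg in transitions.get((state, sym), []):
                d = min(deg, tdeg)                          # path strength
                nxt[nstate] = max(nxt.get(nstate, 0.0), d)  # best path wins
        current = nxt
    return max((d for s, d in current.items() if s in accepting), default=0.0)

# Toy "overtaking" action: accelerate (a), change lane (c), accelerate again.
T = {
    ("q0", "a"): [("q1", 0.9)],
    ("q1", "c"): [("q2", 0.7)],
    ("q2", "a"): [("q3", 0.8)],
}
print(action_membership(T, "q0", {"q3"}, ["a", "c", "a"]))  # -> 0.7
```

The returned value plays the role of the abstract's numerical membership of the object in the action: the weakest transition along the best-matching run bounds the score.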
Abstract: Reinforced concrete (RC) structures strengthened with fiber reinforced polymer (FRP) lack thermal resistance at the elevated temperatures that occur in the event of fire. This has led to the lining of strengthened concrete with thin high performance cementitious composites (THPCC) to protect the substrate against elevated temperature. The effects of elevated temperature on THPCC based on different cementitious materials have been studied in the past, but high-alumina cement (HAC)-based THPCC have not been well characterized. This study focuses on THPCC based on HAC with 60%, 70%, 80% and 85% of the HAC replaced by ground granulated blast furnace slag (GGBS). Samples were evaluated by measuring their mechanical strength (after 28 and 56 days of curing) following exposure to 400°C and 600°C, with unexposed samples at room temperature (28°C) for comparison, and the results were corroborated by a microstructure study. Results showed that, among all mixtures, the mix containing only HAC had the highest compressive strength after exposure to 600°C. However, the tensile strength of the THPCC made with HAC and 60% GGBS was comparable to that of the HAC-only THPCC after exposure to 600°C. Field emission scanning electron microscopy (FESEM) images of THPCC, together with energy dispersive X-ray (EDX) microanalysis, revealed that the microstructure deteriorated considerably after exposure to elevated temperatures, which led to the decrease in mechanical strength.
Abstract: Alpha-fetoprotein and its fragments may be an important vehicle for targeted delivery of radionuclides to tumors. We investigated the effect of reaction conditions on the labeling of a biologically active synthetic alpha-fetoprotein-based peptide (F-afp) with technetium-99m. The influence of the nature of the buffer solution, pH, reductant concentration, peptide concentration and reaction temperature on the labeling yield was examined. As a result, the following optimal conditions for labeling of (F-afp) were found: pH 8.5 (phosphate and bicarbonate buffers) and pH 1.7 to 7.0 (citrate buffer). The reaction proceeds with sufficient yield at room temperature in 30 min, provided the concentrations of SnCl2 and (F-afp) are below 10 µg/ml and 25 µg/ml, respectively. Accumulation of the test preparation in human breast cancer tumor cells was investigated. The results suggest that an in vivo study of (F-afp) in experimental tumor lesions will show concentrations sufficient for imaging these lesions by SPECT.
Abstract: In this paper, the authors present research on textile electroconductive materials that can be used to construct a sensory textronic shirt for measuring breathing frequency. The full paper will also present the results of measurements carried out on unique measurement stands.
Abstract: As the web continues to grow exponentially, crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which index information on a specific domain, have been proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords indicating the topic of interest) [2]. Better support is needed for expressing one's information need and returning high-quality search results by web search tools. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views), each of which is sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is "an information access system that allows access to all the information on the web that is relevant to a particular domain." The proposed work shows that, with the help of this approach, relevant data can be extracted with a minimum of queries fired by the user. It requires a small number of labeled data and a pool of unlabeled data to which the learning algorithm is applied to extract the required data.
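The semi-supervised, multi-view setting the abstract relies on is commonly illustrated by co-training: classifiers on two disjoint views pseudo-label the unlabeled pool, promoting the most confident examples to training data each round. The sketch below is a minimal, hypothetical illustration with synthetic one-dimensional views and centroid classifiers, not the paper's system.

```python
import random

def fit(view_vals, labels):
    """Per-class means of a 1-D view: a tiny centroid classifier."""
    return {c: sum(v for v, l in zip(view_vals, labels) if l == c)
               / labels.count(c) for c in (0, 1)}

def predict(means, v):
    """Predicted class and a confidence (margin between centroid distances)."""
    d0, d1 = abs(v - means[0]), abs(v - means[1])
    return (0 if d0 < d1 else 1), abs(d0 - d1)

random.seed(0)
# Two redundant views of one concept: class 0 near 0.0, class 1 near 3.0.
def sample(c):
    return (c * 3 + random.gauss(0, 0.5), c * 3 + random.gauss(0, 0.5))

labeled = [(sample(c), c) for c in (0, 1)]              # tiny labeled seed
unlabeled = [sample(random.randint(0, 1)) for _ in range(40)]
test = [(sample(c), c) for c in (0, 1) for _ in range(20)]

for _ in range(5):                                      # co-training rounds
    X, y = [x for x, _ in labeled], [l for _, l in labeled]
    m1 = fit([x[0] for x in X], y)                      # view-1 classifier
    m2 = fit([x[1] for x in X], y)                      # view-2 classifier
    scored = []
    for u in unlabeled:
        c1, conf1 = predict(m1, u[0])
        c2, conf2 = predict(m2, u[1])
        # the more confident view supplies the pseudo-label
        scored.append((max(conf1, conf2), u, c1 if conf1 >= conf2 else c2))
    scored.sort(reverse=True)
    for _, u, c in scored[:4]:                          # promote top 4
        labeled.append((u, c))
        unlabeled.remove(u)

m1 = fit([x[0] for x, _ in labeled], [l for _, l in labeled])
acc = sum(predict(m1, x[0])[0] == l for x, l in test) / len(test)
print(acc)
```

Starting from only two labeled points, the pseudo-labeled pool grows the training set each round, which is the mechanism by which such algorithms reduce the labeled data required.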
Abstract: We report a computational study of the spreading
dynamics of a viral infection in a complex (scale-free) network. The
final epidemic size distribution (FESD) was found to be unimodal or
bimodal depending on the value of the basic reproductive
number R0. The FESDs occurred on time-scales long enough for
intermediate-time epidemic size distributions (IESDs) to be important
for control measures. The usefulness of R0 for deciding on the
timeliness and intensity of control measures was found to be limited
by the multimodal nature of the IESDs and by its inability to inform
on the speed at which the infection spreads through the population. A
reduction of the transmission probability at the hubs of the scale-free
network decreased the occurrence of the larger-sized epidemic events
of the multimodal distributions. For effective epidemic control, an
early reduction in transmission at the index cell and its neighbors was
essential.
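A minimal discrete-time SIR simulation on a preferential-attachment (scale-free) network shows how a final epidemic size distribution of the kind analyzed above can be collected. The network size, transmission probability, one-step infectious period, and choice of index node are illustrative assumptions, not the paper's parameters.

```python
import random

def barabasi_albert(n, m, rng):
    """Preferential-attachment (scale-free) graph as an adjacency list."""
    adj = {i: set() for i in range(n)}
    targets = list(range(m))         # each new node links to m existing nodes
    repeated = []                    # node ids weighted by current degree
    for v in range(m, n):
        for t in targets:
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets); repeated.extend([v] * m)
        chosen = set()
        while len(chosen) < m:       # degree-biased choice of next targets
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj

def sir_final_size(adj, beta, rng, index=0):
    """Discrete-time SIR: each infected node infects each susceptible
    neighbor with probability beta, then recovers. Returns final size."""
    infected, recovered = {index}, set()
    while infected:
        new = set()
        for v in infected:
            for u in adj[v]:
                if u not in infected and u not in recovered and u not in new:
                    if rng.random() < beta:
                        new.add(u)
        recovered |= infected
        infected = new
    return len(recovered)

rng = random.Random(1)
g = barabasi_albert(200, 2, rng)
sizes = [sir_final_size(g, beta=0.3, rng=rng) for _ in range(200)]
small = sum(s <= 5 for s in sizes)   # early die-out mode
large = sum(s > 50 for s in sizes)   # large-outbreak mode
print(small, large)
```

Tallying `sizes` over many runs is exactly the empirical FESD; reducing `beta` on high-degree nodes only would mimic the hub-targeted control discussed above.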
Abstract: In this paper, a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. To accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation, and simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components by using modular neural nets (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions on one bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
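The core frequency-domain idea (locating a group of ones by cross correlation computed with FFTs) can be sketched as follows. The truth-table data, pattern, and zero-padding scheme are illustrative, not the paper's exact formulation.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(a):
    return [x / len(a) for x in fft(a, invert=True)]

def xcorr_fft(data, pattern):
    """Cross correlation via corr = IFFT(FFT(data) * conj(FFT(pattern)))."""
    n = 1
    while n < len(data) + len(pattern):   # pad to avoid circular overlap
        n *= 2
    D = fft(data + [0] * (n - len(data)))
    P = fft(pattern + [0] * (n - len(pattern)))
    c = ifft([d * p.conjugate() for d, p in zip(D, P)])
    return [round(x.real, 6) for x in c[:len(data)]]

# Truth-table output with a group of four adjacent ones starting at index 4.
data = [0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
pattern = [1, 1, 1, 1]
corr = xcorr_fft(data, pattern)
print(corr.index(max(corr)))  # -> 4: where the group of ones matches best
```

The peak of the correlation marks the candidate group; the FFT route costs O(n log n) versus O(n·m) for direct sliding comparison, which is the kind of saving the abstract quantifies.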
Abstract: Along with advances in medicine, providing medical information to individual patients is becoming more important. In Japan, such information is hardly ever provided in Braille to blind and partially sighted people. We are therefore researching and developing a Web-based automatic translation program, "eBraille", to translate Japanese text into Japanese Braille. First, we analyzed the Japanese Braille transcription rules in order to implement them in our program. We then added medical words to the program's dictionary to improve its translation accuracy for medical text. Finally, we examined the efficacy of statistical learning models (SLMs) for further increasing word segmentation accuracy in Braille translation. As a result, eBraille had the highest translation accuracy in a comparison with other translation programs, its accuracy for medical text improved, and it is now used to make hospital brochures in Braille for outpatients and inpatients.
Abstract: Graph coloring is an important problem in computer
science and many algorithms are known for obtaining reasonably
good solutions in polynomial time. One method of comparing
different algorithms is to test them on a set of standard graphs where
the optimal solution is already known. This investigation analyzes a
set of 50 well known graph coloring instances according to a set of
complexity measures. These instances come from a variety of
sources some representing actual applications of graph coloring
(register allocation) and others (mycieleski and leighton graphs) that
are theoretically designed to be difficult to solve. The size of the
graphs ranged from ranged from a low of 11 variables to a high of
864 variables. The method used to solve the coloring problem was
the square of the adjacency (i.e., correlation) matrix. The results
show that the most difficult graphs to solve were the leighton and the
queen graphs. Complexity measures such as density, mobility,
deviation from uniform color class size and number of block
diagonal zeros are calculated for each graph. The results showed that
the most difficult problems have low mobility (in the range of .2-.5)
and relatively little deviation from uniform color class size.
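The solution method above is built on the square of the adjacency matrix. A small illustration of what that object carries: (A²)[i][j] counts walks of length two, so off-diagonal entries give common-neighbor counts and the diagonal gives vertex degrees. The 5-cycle used here is a toy instance, not one of the 50 benchmark graphs, and the coloring procedure itself is not reproduced.

```python
def adjacency(n, edges):
    """Dense 0/1 adjacency matrix of an undirected graph."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])  # the 5-cycle C5
A2 = matmul(A, A)
print([A2[i][i] for i in range(5)])  # diagonal = vertex degrees: all 2
print(A2[0][2])                      # vertices 0 and 2 share one neighbor (1)
```

Common-neighbor counts behave like a correlation between vertices: vertices with many shared neighbors are strongly constrained relative to each other, which is the kind of structural information a coloring heuristic can exploit.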
Abstract: IEEE has designed 802.11i protocol to address the
security issues in wireless local area networks. Formal analysis is
important to ensure that the protocols work properly without having
to resort to tedious testing and debugging which can only show the
presence of errors, never their absence. In this paper, we present
the formal verification of an abstract protocol model of 802.11i.
We translate the 802.11i protocol into the Strand Space Model and
then prove the authentication property of the resulting model using
the Strand Space formalism. The intruder in our model is endowed
with powerful capabilities, and the repercussions of possible attacks
are evaluated. Our analysis proves that the authentication of 802.11i is
not compromised in the presented model. We further demonstrate
how changes in our model will yield a successful man-in-the-middle
attack.
Abstract: The nozzle is the main part of various spinning systems such as air-jet and Murata air vortex systems. Recently, many researchers have worked on the use of nozzles in different spinning systems, such as conventional ring and compact spinning systems. In these applications, the primary purpose is to improve yarn quality. In the present study, yarns were produced with two different nozzle types and the changes in yarn properties were determined. In order to explain the effect of the nozzle, the airflow structure in the nozzle was modelled and the airflow variables were determined. For the numerical simulation, the ANSYS 12.1 package and its Fluid Flow (CFX) analysis module were used. In contrast to the literature, the Shear Stress Transport (SST) turbulence model was preferred. In addition, the air pressure at the nozzle inlet was measured with an electronic mass flow meter, and these values were used in the simulation of the airflow. Finally, the yarn was modelled, and the region through which the yarn passes was included in the numerical analysis.
Abstract: This paper deals with wireless relay communication
systems in which multiple sources transmit information to the
destination node with the help of multiple relays. We consider a
signal forwarding technique based on the minimum mean-square
error (MMSE) approach with multiple antennas for each relay. A
source-relay-destination joint design strategy is proposed with power
constraints at the destination and the source nodes. Simulation results
confirm that the proposed joint design method improves the average
MSE performance compared with that of conventional MMSE relaying
schemes.
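The MMSE criterion driving the joint design above can be illustrated in the simplest scalar case: for y = h·x + n, the linear MMSE weight w = hP/(h²P + N0) minimizes E[(wy − x)²] and always does at least as well as zero-forcing (w = 1/h). All values below are illustrative, and the paper's setting is matrix-valued with multiple antennas and relays.

```python
import random

random.seed(7)
h, P, N0 = 0.8, 1.0, 0.5                   # channel gain, signal/noise power
w_mmse = h * P / (h * h * P + N0)          # linear MMSE combining weight
w_zf = 1.0 / h                             # zero-forcing weight, for contrast

def empirical_mse(w, trials=20000):
    """Monte Carlo estimate of E[(w*y - x)^2] for y = h*x + n."""
    err = 0.0
    for _ in range(trials):
        x = random.gauss(0, P ** 0.5)
        y = h * x + random.gauss(0, N0 ** 0.5)
        err += (w * y - x) ** 2
    return err / trials

mse_mmse, mse_zf = empirical_mse(w_mmse), empirical_mse(w_zf)
print(mse_mmse < mse_zf)  # True: MMSE beats zero-forcing in noisy channels
```

The empirical MMSE figure lands near the closed form P·N0/(h²P + N0) ≈ 0.44, while zero-forcing pays N0/h² ≈ 0.78; a joint source-relay-destination design optimizes the matrix analogue of this trade-off under power constraints.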
Abstract: Various methods based on regression ideas have been created to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for it thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance, RD*_i, and has been developed based on Cook's idea. The renovated Cook's distance RD*_i has advantages (depending on the analyst's demands) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the coefficient estimate when the i-th case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*_i in which information from the p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
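The renovated Cook's distance extends the classical one to censored data. The classical, uncensored form it builds on is D_i = r_i²·h_ii / (p·s²·(1 − h_ii)²), with leverage h_ii = 1/n + (x_i − x̄)²/Sxx for simple linear regression and p = 2 parameters. The sketch below computes it on synthetic data with a deliberate high-leverage outlier; the paper itself works with the Buckley-James estimator and the Stanford Heart Transplant data.

```python
def cooks_distance(x, y):
    """Classical Cook's distance for simple linear regression y = b0 + b1*x."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    b0 = sum(y) / n - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    p = 2                                          # intercept and slope
    s2 = sum(r * r for r in resid) / (n - p)       # residual variance
    h = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]  # leverages
    return [r * r * hi / (p * s2 * (1 - hi) ** 2) for r, hi in zip(resid, h)]

x = [1, 2, 3, 4, 5, 6, 7, 20]     # last point: high-leverage outlier in x
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.0]
D = cooks_distance(x, y)
print(D.index(max(D)))  # -> 7: the outlying case is by far the most influential
```

As the abstract argues for RD*_i, a single per-case measure like D_i summarizes influence on all n fitted values at once, rather than per-coefficient as DBETA-type diagnostics do.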
Abstract: In this study, the effects of machining parameters on
specific energy during surface grinding of 6061Al-SiC35P
composites are investigated. Vol% of SiC, feed and depth of cut were
chosen as process variables. The power needed for the calculation of the specific energy was measured using the two-wattmeter method. Experiments were conducted using a standard response surface methodology (RSM) design called central composite design (CCD). A second-order response surface model was developed for the specific energy. The results identify the significant influencing factors for minimizing the specific energy. The confirmation
results demonstrate the practicability and effectiveness of the
proposed approach.
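A one-factor sketch of second-order response surface modelling: fitting y = b0 + b1·x + b2·x² by least squares via the normal equations. The paper's model spans three factors (vol% SiC, feed, depth of cut) over a CCD; the single variable and synthetic data below are simplifications for illustration only.

```python
def solve3(M, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [b] for row, b in zip(M, v)]     # augmented matrix
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):                            # back substitution
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via X'X b = X'y."""
    rows = [[1.0, x, x * x] for x in xs]           # design matrix rows
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve3(M, v)

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [3.0 + 1.0 * x - 2.0 * x * x for x in xs]     # exact quadratic data
b = fit_quadratic(xs, ys)
print([round(c, 6) for c in b])  # -> [3.0, 1.0, -2.0]
```

The full CCD model adds cross terms (interactions) between factors; the fitting principle (normal equations on a polynomial design matrix) is the same.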
Abstract: In this paper we propose an intelligent agent approach
to control the electric power grid at a smaller granularity in order to
give it self-healing capabilities. We develop a method using the
influence model to transform transmission substations into
information processing, analyzing and decision making (intelligent
behavior) units. We also develop a wireless communication method
to deliver real-time uncorrupted information to an intelligent
controller in a power system environment. A combined networking
and information theoretic approach is adopted in meeting both the
delay and error probability requirements. We use a mobile agent
approach in optimizing the achievable information rate vector and in
the distribution of rates to users (sensors). We develop the concept and the quantitative tools required for the creation of cooperating semi-autonomous subsystems, which puts the electric grid on the path towards an intelligent and self-healing system.
Abstract: This paper shows that some properties of the decision
rules in the literature do not hold by presenting a counterexample. We
give necessary and sufficient conditions under which these properties
are valid. These results will be helpful when one tries to choose the
right decision rules in the research of rough set theory.
Abstract: We measured the major and trace element contents
and Rb-Sr isotopic compositions of 12 tektites from the Maoming
area, Guangdong province (south China). All the samples studied are
splash-form tektites which show pitted or grooved surfaces with
schlieren structures on some surfaces. The trace element ratios Ba/Rb
(avg. 4.33), Th/Sm (avg. 2.31), Sm/Sc (avg. 0.44), Th/Sc (avg. 1.01),
La/Sc (avg. 2.86), Th/U (avg. 7.47), Zr/Hf (avg. 46.01) and the rare
earth elements (REE) contents of tektites of this study are similar to the
average upper continental crust. From the chemical composition, it is
suggested that tektites in this study are derived from similar parental
terrestrial sedimentary deposit which may be related to post-Archean
upper crustal rocks. The tektites from the Maoming area have high positive εSr(0) values, ranging from 176.9 to 190.5, which indicate that the parental material for these tektites has Sr isotopic compositions similar to old terrestrial sedimentary rocks and was not dominantly derived from recent young sediments (such as soil or loess). The Sr isotopic data obtained in the present study support the conclusion proposed by Blum et al. (1992) [1] that the depositional age of the sedimentary target materials is close to 170 Ma (Jurassic). Mixing
calculations based on the model proposed by Ho and Chen (1996) [2] for various amounts and combinations of target rocks indicate that the best fit for tektites from the Maoming area is a mixture of 40% shale, 30% greywacke and 30% quartzite.
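The kind of mixing calculation referred to above treats the model composition as the mass-weighted sum of end-member compositions, compared against the measured tektite composition. The end-member oxide values below are hypothetical placeholders, not the Ho and Chen (1996) values; only the 40/30/30 proportions come from the abstract.

```python
end_members = {          # wt% oxides per end-member: ILLUSTRATIVE values only
    "shale":     {"SiO2": 62.0, "Al2O3": 18.0},
    "greywacke": {"SiO2": 69.0, "Al2O3": 13.0},
    "quartzite": {"SiO2": 95.0, "Al2O3": 1.5},
}
fractions = {"shale": 0.40, "greywacke": 0.30, "quartzite": 0.30}  # best-fit mix

def mix(end_members, fractions):
    """Mass-weighted sum of end-member compositions, oxide by oxide."""
    oxides = next(iter(end_members.values())).keys()
    return {ox: sum(fractions[m] * end_members[m][ox] for m in end_members)
            for ox in oxides}

model = mix(end_members, fractions)
print({k: round(v, 2) for k, v in model.items()})
```

In practice the fractions are varied over a grid and the combination minimizing the misfit to the measured tektite composition is reported as the best fit.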
Abstract: Automated discovery of hierarchical structures in
large data sets has been an active research area in the recent past.
This paper focuses on the issue of mining generalized rules with crisp
hierarchical structure using Genetic Programming (GP) approach to
knowledge discovery. The post-processing scheme presented in this
work uses flat rules as initial individuals of GP and discovers
hierarchical structure. Suitable genetic operators are proposed for the
suggested encoding. Based on the Subsumption Matrix (SM), an
appropriate fitness function is suggested. Finally, Hierarchical
Production Rules (HPRs) are generated from the discovered
hierarchy. Experimental results are presented to demonstrate the
performance of the proposed algorithm.
Abstract: As a tool for human spatial cognition and thinking, the map has long played an important role; maps are perhaps as fundamental to society as language and the written word. Economic and social development requires an extensive and in-depth understanding of our living environment, from the global scale down to urban housing. This has brought unprecedented opportunities and challenges for traditional cartography. This paper first proposes the concept of the scaleless map and its basic characteristics, based on an analysis of existing multi-scale representation techniques. Then some strategies are presented for automated map compilation. Taking into account the demands of automated map compilation, the four technical features that the software (the WJ workstation) must have are set out in detail: generalization operators, symbol primitives, dynamic annotation and mapping process templates. This paper provides a more systematic new idea and solution to improve the intelligence and automation of scaleless cartography.
Abstract: We propose an enhanced key management scheme based on Key Infection, which is a lightweight scheme for tiny sensors. The basic scheme, Key Infection, is perfectly secure against node capture and eavesdropping if the initial communications after node deployment are secure. If, however, an attacker can eavesdrop on the initial communications, they can obtain the session key. We use the common neighbors of each pair of nodes to generate the session key. Each node has its own secret key and shares it with its neighbor nodes. Each pair of nodes can then establish a session key using their common neighbors' secret keys and a random number. Our scheme needs only a few communications, even though it uses information from neighbor nodes. Without losing the lightness of the basic scheme, it improves the resistance against eavesdropping on the initial communications by more than 30%.
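The session-key idea described above (combining common neighbors' secret keys with a random number) can be sketched roughly as follows. The hash-based derivation, key material, and topology are illustrative assumptions, since the abstract does not specify the exact construction; the point shown is only that both endpoints derive the same key from shared neighbor material.

```python
import hashlib

neighbors = {                 # node id -> set of neighbor ids (toy topology)
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D", "E"},
}
secret = {"C": b"k_C", "D": b"k_D", "E": b"k_E"}   # neighbors' secret keys

def session_key(a, b, nonce):
    """Derive a session key from the common neighbors' secrets and a nonce."""
    common = sorted(neighbors[a] & neighbors[b])   # common neighbors of a, b
    material = b"".join(secret[n] for n in common) + nonce
    return hashlib.sha256(material).hexdigest()

nonce = b"\x01\x02\x03\x04"                        # shared random number
k_ab = session_key("A", "B", nonce)
k_ba = session_key("B", "A", nonce)
print(k_ab == k_ba)  # True: both ends derive the same session key
```

An eavesdropper on a single initial link learns at most one neighbor's secret, not the combined material, which is the intuition behind the improved resistance the abstract reports.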