Abstract: In this paper, a two-dimensional (2D) numerical model for the simulation of tidal currents in the Persian Gulf is presented. The model is based on the depth-averaged shallow water equations, which assume a hydrostatic pressure distribution. The continuity equation and the two momentum equations, including the effects of bed friction, the Coriolis force, and wind stress, are solved. The 2D equations are integrated in time using the Alternating Direction Implicit (ADI) technique and discretized in space with the finite volume method on a rectangular mesh. To validate the model, a dam-break test case with a known analytical solution is simulated and the results are compared. The capability of the model to simulate tidal currents in a real field is then demonstrated by modeling the current behavior in the Persian Gulf. The tidal currents in the study area are driven by tidal fluctuations in the Strait of Hormuz; therefore, water surface oscillation data at Hengam Island in the Strait of Hormuz are used as the model input. Measured water surface elevations at Assaluye port serve as the check point. The acceptable agreement between computed and measured results demonstrates the model's ability to simulate marine hydrodynamics.
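The ADI splitting named above can be illustrated on a simpler stand-in problem. The sketch below (an assumption-laden illustration, not the paper's shallow-water solver) applies Peaceman-Rachford ADI to the 2D heat equation: each time step is split into two half-steps, each implicit in one coordinate direction, so every solve reduces to a tridiagonal system handled by the Thomas algorithm.

```python
# Illustrative ADI sketch on the 2D heat equation u_t = u_xx + u_yy,
# a stand-in for the coupled depth-averaged shallow-water system.
# Each half-step is implicit in one direction only -> tridiagonal solves.

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One ADI step on a square grid with zero boundaries; r = dt/(2*dx**2)."""
    n = len(u)
    half = [row[:] for row in u]
    # Sweep 1: implicit in x, explicit in y.
    for j in range(1, n - 1):
        d = [u[i][j] + r * (u[i][j - 1] - 2 * u[i][j] + u[i][j + 1])
             for i in range(1, n - 1)]
        col = thomas([-r] * (n - 2), [1 + 2 * r] * (n - 2), [-r] * (n - 2), d)
        for i in range(1, n - 1):
            half[i][j] = col[i - 1]
    new = [row[:] for row in half]
    # Sweep 2: implicit in y, explicit in x.
    for i in range(1, n - 1):
        d = [half[i][j] + r * (half[i - 1][j] - 2 * half[i][j] + half[i + 1][j])
             for j in range(1, n - 1)]
        row = thomas([-r] * (n - 2), [1 + 2 * r] * (n - 2), [-r] * (n - 2), d)
        for j in range(1, n - 1):
            new[i][j] = row[j - 1]
    return new

# A hot spot in the centre of a 21x21 grid diffuses and decays.
n = 21
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0
for _ in range(10):
    u = adi_step(u, r=0.25)
peak = max(max(row) for row in u)
print(peak)  # the peak decays but stays positive
```

The same structure carries over to the shallow-water case: the implicit sweep then couples water level and the velocity component in the sweep direction, but each sweep remains a banded solve.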
Abstract: As part of its evaluation system for R&D programs, the Korean government has applied feasibility analysis since 2008. Professionals from various fields have put forth great effort to keep up with the high degree of freedom of R&D programs and to contribute to the evolution of the feasibility analysis. We analyze diverse R&D programs from several viewpoints, such as technology, policy, and economics, integrate the separate analyses, and finally arrive at a definite result: whether a program is feasible or not. This paper describes the concept and method of the feasibility analysis as a decision-making tool. The analysis unit and the content of each criterion, which are key elements in a comprehensive decision-making structure, are also examined.
Abstract: Data mining is the extraction of knowledge from the large sets of data generated by various data processing activities, and frequent pattern mining is one of its most important tasks. Previous approaches to generating frequent itemsets generally adopt candidate generation and pruning techniques. This paper reviews how different approaches achieve the objective of frequent pattern mining, along with the complexities involved in performing the job. It also considers a hardware approach, based on cache coherence, to improve the efficiency of the process. Data mining supports decision support systems in management, bioinformatics, biotechnology, medical science, statistics, mathematics, banking, networking, and other computing applications. This paper proposes the use of both upward and downward closure properties for the extraction of frequent itemsets, which reduces the total number of scans required for the generation of candidate sets.
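The classical downward-closure property mentioned above (every subset of a frequent itemset must itself be frequent) is the basis of Apriori-style candidate generation and pruning. A minimal sketch, showing only the downward-closure half of the combined scheme the paper proposes:

```python
# Apriori-style frequent itemset mining: k-item candidates are joined from
# frequent (k-1)-itemsets and pruned via downward closure before any scan.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support_count} for all frequent itemsets."""
    singletons = {frozenset([i]) for t in transactions for i in t}
    freq, level = {}, {}
    for c in singletons:
        count = sum(1 for t in transactions if c <= t)
        if count >= min_support:
            level[c] = count
    k = 1
    while level:
        freq.update(level)
        prev = set(level)
        # Join step: union pairs of frequent k-itemsets into (k+1)-candidates.
        candidates = {a | b for a in prev for b in prev if len(a | b) == k + 1}
        # Prune step (downward closure): drop any candidate with an
        # infrequent k-subset -- no database scan needed for these.
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k))}
        level = {}
        for c in candidates:
            count = sum(1 for t in transactions if c <= t)
            if count >= min_support:
                level[c] = count
        k += 1
    return freq

data = [frozenset(t) for t in
        [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]]
result = frequent_itemsets(data, min_support=3)
print(sorted(tuple(sorted(s)) for s in result))
# -> [('a',), ('a', 'b'), ('a', 'c'), ('b',), ('b', 'c'), ('c',)]
```

Each candidate level still costs one pass over the data; the paper's point is that exploiting closure properties in both directions can cut the number of such scans further.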
Abstract: We propose a method for the discrimination and classification of ovarian tissue as benign, malignant, or normal using independent component analysis and neural networks. The method was tested on a proteomic pattern set from a database, using radial basis function neural networks among others. The best performance was obtained with probabilistic neural networks, resulting in a 99% success rate, with 98% specificity and 100% sensitivity.
Abstract: In this paper, we propose an effective relay communication scheme for layered video transmission as an alternative that makes the most of the limited resources in a lossy wireless communication network. Relaying brings stable multimedia services to end clients compared with multiple description coding (MDC). In addition, having the relay device retransmit only channel-coded parity data for one or more video layers to the end client is paramount to robustness against loss. Using these methods in resource-constrained environments, such as real-time user-created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment, and minimal services remain possible. Mathematical analysis shows that the proposed method reduces the GOP loss rate compared with MDC and with a raptor code without relay: at a 10% frame loss rate, the proposed method's GOP loss rate is about zero, while MDC and the raptor code without relay have GOP loss rates of 36% and 70%, respectively.
Abstract: One of the major problems in programming a cruise circuit is deciding which destinations to include and which to leave out. A decision problem thus emerges that may be solved using a linear and goal programming approach. The problem becomes more complex when several boats in the fleet must be scheduled within a limited time frame, trying to match their capacity to a seasonal demand while also minimizing operating costs. Moreover, the company's scheduler should treat passenger time as a limited asset and seek to maximize its usage. The aim of this work is to design a method in which, using linear and goal programming techniques, a circuit-design model allows the cruise company's decision maker to achieve an optimal solution within the fleet schedule.
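A goal programming formulation of this kind could, under purely illustrative assumptions (binary boat-circuit-season assignment variables and a single seasonal demand goal; none of these symbols are the paper's own notation), take a form such as:

```latex
\begin{aligned}
\min \quad & \sum_{s} \bigl(d_s^{-} + d_s^{+}\bigr) \;+\; c \sum_{b,i,s} t_{i}\, x_{bis} \\
\text{s.t.} \quad & \sum_{b,i} q_b\, x_{bis} \;+\; d_s^{-} \;-\; d_s^{+} \;=\; D_s
  \qquad \forall s, \\
& x_{bis} \in \{0,1\}, \qquad d_s^{-},\, d_s^{+} \ge 0,
\end{aligned}
```

where \(x_{bis}=1\) if boat \(b\) sails circuit \(i\) in season \(s\), \(q_b\) is the boat's capacity, \(t_i\) the circuit's duration (passenger time, weighted by a cost factor \(c\)), \(D_s\) the seasonal demand, and \(d_s^{-}, d_s^{+}\) the under- and over-achievement deviation variables whose minimization expresses the demand goal.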
Abstract: A simple approach is demonstrated for growing large-scale, nearly vertically aligned ZnO nanowire arrays by a thermal oxidation method. To reveal the effect of temperature on the growth and physical properties of the ZnO nanowires, gold-coated zinc substrates were annealed at 300 °C and 400 °C for 4 hours in air. X-ray diffraction patterns of the annealed samples showed a set of well-defined diffraction peaks, indexed to the wurtzite hexagonal phase of ZnO. Scanning electron microscopy studies show the formation of ZnO nanowires several microns long with an average diameter of less than 500 nm. The areal density of the wires is found to be relatively higher when the annealing is carried out at the higher temperature, i.e., 400 °C. From the field emission studies, the turn-on and threshold fields, required to draw emission current densities of 10 μA/cm2 and 100 μA/cm2, are observed to be 1.2 V/μm and 1.7 V/μm for the samples annealed at 300 °C, and 2.9 V/μm and 3.7 V/μm for those annealed at 400 °C, respectively. The field emission current stability, investigated over more than 2 hours at a preset value of 1 μA, is found to be fairly good in both cases. The simplicity of the synthesis route, coupled with the promising field emission properties, offers an unprecedented advantage for the use of ZnO field emitters in high-current-density applications.
Abstract: This paper considers inference under progressive type II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used to estimate the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. Bayes estimators are obtained using conjugate priors for the shape and scale parameters. When both parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so Lindley's approximation is used to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, these estimators are compared with the maximum likelihood estimators through a Monte Carlo simulation study.
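For reference, one common parameterization of the compound Rayleigh distribution in the censoring literature (it may differ from the paper's own notation in the roles of \(\alpha\) and \(\beta\)) gives the density, reliability, and hazard functions as:

```latex
f(t;\alpha,\beta) = 2\alpha\beta^{\alpha}\, t\,(\beta + t^{2})^{-(\alpha+1)},
\qquad
R(t) = \beta^{\alpha}(\beta + t^{2})^{-\alpha},
\qquad
h(t) = \frac{f(t)}{R(t)} = \frac{2\alpha t}{\beta + t^{2}},
\qquad t > 0,
```

so that the reliability and hazard estimates discussed above follow directly from the parameter estimates by substitution.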
Abstract: Medical applications are among the most impactful areas of microrobotics. The ultimate goal of medical microrobots is to reach currently inaccessible areas of the human body and carry out a host of complex operations, such as minimally invasive surgery (MIS), highly localized drug delivery, and screening for diseases at their very early stages. Miniature, safe, and efficient propulsion systems hold the key to maturing this technology, but they pose significant challenges. A recently developed type of propulsion uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms, yet efficient methods for designing such propulsion systems are lacking. To overcome this gap, this paper proposes a numerical strategy for designing multi-flagella propulsion systems. The strategy is based on regularized stokeslet and rotlet theory, resistive force theory (RFT), and a new "local corrected velocity" approach. The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the net forces and moments acting on the robot are considered. A multi-layer perceptron artificial neural network is then designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method is applied successfully to a sample configuration, and useful demonstrative results are obtained.
Abstract: Fault tree analysis is a well-known method for the reliability and safety assessment of engineering systems. In the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees; the main difference between them is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also discussed.
Abstract: Speeded-Up Robust Features (SURF) are commonly used for feature matching in stereo vision because of their robustness to scale and rotational changes. However, SURF features cannot cope with large viewpoint changes or skew distortion. This paper introduces a method that improves wide-baseline matching accuracy by rectifying the images using two vanishing points. A simplified orientation correction is used to remove false matches.
Abstract: Cylindrical concrete reservoirs are an appropriate choice for storing liquids such as water and oil. Using pre-cast concrete reservoirs instead of reservoirs constructed in situ considerably increases the speed and precision of construction. In this construction method, wall and roof panels are manufactured in a factory with high-quality materials and precise quality control, and the pre-cast panels are then transported to the construction site for assembly. The method has a few weaknesses, namely in the connections of the wall panels to one another and to the foundation, which must resist applied loads such as seismic loads. One innovative method, successfully applied for the seismic retrofitting of numerous pre-cast cylindrical water reservoirs in New Zealand, is to wrap high-tensile cables around the reservoir and post-tension them. In this paper, analytical modeling of the wall and roof panels and the post-tensioned cables is carried out with the finite element method, and the effects of the height-to-diameter ratio, the post-tensioning force, the liquid level in the reservoir, and the installation position of the tendons on the seismic response of the reservoirs are investigated.
Abstract: Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymized dataset may have higher identification risks than others; individuals are more concerned about risks above the average and want to know whether they may be exposed to such higher risk. A single overall risk value, as produced by the above measurement method, does not indicate whether some of the involved entities face a higher identity disclosure risk than others. In this paper, we introduce an identity disclosure risk measurement method that not only reports the overall risk but also indicates whether some of the members are at higher risk than others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records whose risk exceeds the average, and how much larger those higher risk values are compared with the average. We analyze the disclosure risks for different disclosure control techniques applied to original microdata and present the results.
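The three ingredients named above (individual risks, the fraction of records above the average, and how much larger those risks are) can be sketched as follows. This is an illustrative stand-in, not the paper's formula: individual risk is taken here as 1/|equivalence class| over a chosen set of quasi-identifiers.

```python
# Illustrative disclosure-risk profile: per-record risk = 1/(size of the
# record's equivalence class on the quasi-identifiers), plus summary
# statistics describing how risk is spread above the average.
from collections import Counter

def risk_profile(records, quasi_identifiers):
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    sizes = Counter(keys)
    risks = [1.0 / sizes[k] for k in keys]
    mean = sum(risks) / len(risks)
    higher = [r for r in risks if r > mean]
    frac_higher = len(higher) / len(risks)          # share of records above average
    excess = (sum(higher) / len(higher)) / mean if higher else 1.0
    return {"mean_risk": mean, "frac_above_mean": frac_higher,
            "excess_ratio": excess, "individual": risks}

data = [{"age": 30, "zip": "123"}, {"age": 30, "zip": "123"},
        {"age": 40, "zip": "456"}, {"age": 50, "zip": "789"}]
profile = risk_profile(data, ["age", "zip"])
print(profile["mean_risk"], profile["frac_above_mean"], profile["excess_ratio"])
```

Two anonymizations with the same mean risk can then still be distinguished by `frac_above_mean` and `excess_ratio`, which is the gap the abstract points at.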
Abstract: In this research, we have developed a new efficient heuristic algorithm for the dynamic facility layout problem with a budget constraint (DFLPB). The heuristic combines two methods, discrete event simulation and integer programming (IP), to obtain a near-optimum solution. In the proposed algorithm, the non-linear model of the DFLP is first transformed into a pure integer programming (PIP) model. The optimal solution of the PIP model is then used in a simulation model, designed analogously to the DFLP, to determine the probability of assigning each facility to a location. After a sufficient number of runs, the simulation model yields near-optimum solutions. Finally, to verify the performance of the algorithm, several test problems have been solved. The results show that the proposed algorithm is more efficient, in terms of both speed and accuracy, than other heuristic algorithms found in the literature.
Abstract: This paper proposes an easy-to-use instruction hiding method to protect software from malicious reverse engineering attacks. Given a source program (the original) to be protected, the proposed method (1) takes a modified version of it (the fake) as input, (2) analyzes the differences between the assembly code instructions of the original and the fake, and (3) introduces self-modification routines so that the fake instructions become correct (i.e., the original instructions) just before they are executed and revert to fake ones afterwards. The method adds a certain amount of security to a program, since the fake instructions in the resulting program confuse attackers, and significant effort is required to discover and remove all the fake instructions and self-modification routines. The method is also easy to use: all a user has to do is prepare a fake source code by modifying the original source code.
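The swap-before-execute / revert-after-execute cycle can be modeled in miniature. This toy (a Python list of opcodes standing in for native machine code, which is what the actual method patches) stores a decoy instruction at rest and restores the real one only for the instant it runs:

```python
# Toy model of instruction hiding: the stored program carries a fake
# instruction; a self-modification step swaps in the original just before
# execution and puts the fake back immediately afterwards.

# "Program" as a list of (opcode, operand); the fake differs at index 1.
original = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]
fake     = [("PUSH", 2), ("PUSH", 9), ("ADD", None)]  # 9 is the decoy
# Patch table: positions where fake and original differ, with the real code.
patches  = {i: orig for i, (orig, fk) in enumerate(zip(original, fake))
            if orig != fk}

def run(program):
    stack = []
    for i in range(len(program)):
        if i in patches:                   # self-modify: restore original
            saved, program[i] = program[i], patches[i]
        op, arg = program[i]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        if i in patches:                   # self-modify: put the fake back
            program[i] = saved
    return stack[-1]

print(run(fake))   # computes with the real instructions: 2 + 3 = 5
print(fake[1])     # at rest, the decoy ('PUSH', 9) is what an attacker sees
```

A static reading of `fake` never shows the real instruction stream, which is the confusion effect the abstract describes; the attacker must instead find and unravel every patch site.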
Abstract: The primary purpose of this paper is to develop a GIS interface for estimating sequences of stream flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface comprises three major steps. The first step uses statistical analysis of precipitation characteristics to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables are the mean daily flow and the drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method, but the user may instead select Kriging, IDW (Inverse Distance Weighted), or spline methods, as well as other traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain the watershed and topographic characteristics, which are the most important factors governing stream flows. In summary, the simulated daily flow time series are compared with the observed time series; the results from the integrated GIS interface fit the observations closely, and the topographic/watershed characteristics are highly correlated with the stream flow time series.
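Of the user-selectable interpolation options mentioned above, inverse distance weighting is the simplest to sketch. The coordinates and flow values below are made up for illustration only:

```python
# Inverse-distance-weighted (IDW) interpolation: the estimate at an
# ungauged point is a weighted mean of gauged values, with weights
# proportional to 1/distance**power.
def idw(target, stations, power=2.0):
    """stations: list of ((x, y), value); returns the weighted estimate."""
    num = den = 0.0
    tx, ty = target
    for (x, y), value in stations:
        d2 = (x - tx) ** 2 + (y - ty) ** 2
        if d2 == 0.0:
            return value                  # exactly at a gauged station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

gauged = [((0.0, 0.0), 10.0), ((4.0, 0.0), 20.0), ((0.0, 4.0), 30.0)]
estimate = idw((1.0, 1.0), gauged)
print(round(estimate, 2))  # pulled toward the nearest station's value of 10
```

Kriging replaces these purely geometric weights with weights derived from a fitted semivariogram, at the cost of solving a small linear system per estimate.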
Abstract: Computer-based geostatistical methods can offer effective data analysis possibilities for agricultural areas by using vector data and their attribute information. These methods help detect spatial changes across different locations of large agricultural lands, which leads to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a sugar beet field on a 20 x 20 m grid, and plant samples were collected from the same plots. Physical and chemical analyses of these samples were made by routine methods. According to the derived coefficients of variation, the topsoil organic matter (OM) distribution varied more than the subsoil OM distribution; the highest C.V. value, 17.79%, was found for topsoil OM. The data were analyzed comparatively with kriging methods widely used in geostatistics: several interpolation methods (ordinary, simple, and universal kriging) and semivariogram models (spherical, exponential, and Gaussian) were tested in order to choose the most suitable combination. The average standard deviations of the values estimated by the simple kriging interpolation method were less than the average standard deviations of the measured values (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18). The most suitable combination was simple kriging with an exponential semivariogram model for the topsoil, and simple kriging with a spherical semivariogram model for the subsoil. The results also showed that these computer-based geostatistical methods and semivariogram models should be tested and calibrated for different experimental conditions.
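The three semivariogram models compared in the study have standard textbook forms (written here with nugget n, sill contribution s, and practical range a; the parameter values below are illustrative, not those fitted to the sugar beet field data):

```python
# Standard semivariogram models gamma(h): variance between samples grows
# with lag distance h from the nugget toward the sill (n + s).
import math

def spherical(h, n, s, a):
    if h >= a:
        return n + s
    return n + s * (1.5 * h / a - 0.5 * (h / a) ** 3)

def exponential(h, n, s, a):
    # Practical-range convention: reaches ~95% of the sill at h = a.
    return n + s * (1.0 - math.exp(-3.0 * h / a))

def gaussian(h, n, s, a):
    return n + s * (1.0 - math.exp(-3.0 * (h / a) ** 2))

# All three rise from the nugget toward the sill as h grows.
for model in (spherical, exponential, gaussian):
    print(model.__name__,
          round(model(0.0, 0.1, 0.9, 60.0), 3),
          round(model(120.0, 0.1, 0.9, 60.0), 3))
```

Model choice matters because kriging weights are derived from the fitted semivariogram: the Gaussian model implies very smooth short-range variation, the exponential rather rough variation, with the spherical in between, which is consistent with different models fitting best for topsoil and subsoil.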
Abstract: XML is an important standard for data exchange and representation, and supporting XML data with a mature relational database system may bring some advantages. However, storing XML in a relational database introduces obvious redundancy, which wastes disk space, bandwidth, and disk I/O when querying the XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. Beyond traditional relational database techniques, additional query processing technologies for compressed relations and for the special structure of XML are presented, including technologies for XQuery processing in a compressed relational database.
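As a toy illustration of the general idea only (not the paper's storage scheme or compression method), XML can be shredded into path/value rows in a relational table with the text values compressed, so that simple path lookups run directly against the compressed store:

```python
# Shred an XML document into (path, compressed value) rows in SQLite,
# then answer a simple XPath-like path lookup against the compressed rows.
import sqlite3
import zlib
import xml.etree.ElementTree as ET

def shred(elem, path, rows):
    """Recursively emit (path, compressed text) rows for an element tree."""
    p = path + "/" + elem.tag
    if elem.text and elem.text.strip():
        rows.append((p, zlib.compress(elem.text.strip().encode())))
    for child in elem:
        shred(child, p, rows)

doc = "<lib><book><title>XML</title></book><book><title>SQL</title></book></lib>"
rows = []
shred(ET.fromstring(doc), "", rows)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (path TEXT, value BLOB)")
db.executemany("INSERT INTO node VALUES (?, ?)", rows)

# Path lookup: all /lib/book/title values, decompressed only on retrieval.
titles = [zlib.decompress(v).decode()
          for (v,) in db.execute(
              "SELECT value FROM node WHERE path = ? ORDER BY rowid",
              ("/lib/book/title",))]
print(titles)
```

The key property, which the paper develops far further, is that the path column stays uncompressed and indexable, so query predicates are evaluated without decompressing the stored values first.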
Abstract: Content-Based Image Retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases containing documents of differing sorts and with varying characteristics, and many questions concerning speed, semantic descriptors, and objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. Several articles have proposed content-based access to medical images to support clinical decision making, which would ease the management of clinical data, and scenarios for integrating content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques and identifies new research directions that may prove useful. Still, very few systems appear to be used in clinical practice, and it should be stated that the goal is not, in general, to replace the text-based retrieval methods that exist at the moment.
Abstract: This paper presents a novel method that allows an agent host to delegate its signing power to an anonymous mobile agent in such a way that the mobile agent does not reveal any information about its host's identity and, at the same time, can be authenticated by the service host, thereby ensuring fairness of service provision. The solution introduces a verification server to verify the signature generated by the mobile agent in such a way that even if the verification server colludes with the service host, neither party gains more information than it already has. The solution incorporates three methods: an Agent Signature Key Generation method, an Agent Signature Generation method, and an Agent Signature Verification method. The most notable feature of the solution is that, in addition to allowing secure and anonymous signature delegation, it enables the tracking of malicious mobile agents when a service host is attacked. The security properties of the proposed solution are analyzed, and the solution is compared with the most closely related work.