Abstract: Software-as-a-Service (SaaS) is a form of cloud
computing that relieves the user of the burden of hardware and
software installation and management. SaaS can be used at the course
level to enhance curricula and student experience. When cloud
computing and SaaS are included in educational literature, the focus
is typically on implementing administrative functions. Yet, SaaS can
make more immediate and substantial contributions to the technical
course content in educational offerings. This paper explores cloud
computing and SaaS, provides examples, reports on experiences
using SaaS to offer specialized software in courses, and analyzes the
advantages and disadvantages of using SaaS at the course level. The
paper contributes to the literature in higher education by analyzing
the major technical concepts, potential, and constraints for using
SaaS to deliver specialized software at the course level. Further, it
may enable more educators and students to benefit from this
emerging technology.
Abstract: Microarrays have become effective, broadly used tools in biological and medical research for addressing a wide range of problems, including the classification of disease subtypes and tumors. Many statistical methods are available for analyzing and organizing these complex data into meaningful information, and one of the main goals in analyzing gene expression data is the detection of samples or genes with similar expression patterns. In this paper, we describe and compare the performance of several clustering methods under different data preprocessing strategies, including normalization and noise removal. We also evaluate each of these clustering methods with validation measures on both simulated data and real gene expression data. The results show that the clustering methods commonly used in microarray data analysis are affected by the normalization applied and by the degree of noise in the datasets.
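As an illustration of the kind of comparison described above, the following minimal sketch (assuming a scikit-learn environment; the synthetic expression matrix, the choice of k-means and hierarchical clustering, and the silhouette validation measure are stand-ins, not the authors' exact pipeline) clusters a small expression matrix with and without normalization and reports an internal validation score:

```python
# Minimal sketch: effect of normalization on clustering of expression data.
# Hypothetical data and method choices; not the authors' exact pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy expression matrix: 60 samples x 200 genes, two simulated groups plus noise.
group_a = rng.normal(0.0, 1.0, size=(30, 200))
group_b = rng.normal(1.5, 1.0, size=(30, 200))
X = np.vstack([group_a, group_b])

for name, data in [("raw", X), ("normalized", StandardScaler().fit_transform(X))]:
    for clusterer in (KMeans(n_clusters=2, n_init=10, random_state=0),
                      AgglomerativeClustering(n_clusters=2, linkage="average")):
        labels = clusterer.fit_predict(data)
        score = silhouette_score(data, labels)  # internal validation measure
        print(f"{name:>10s}  {type(clusterer).__name__:<25s} silhouette = {score:.3f}")
```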
Abstract: In this paper, a two-dimensional (2D) numerical model for simulating tidal currents in the Persian Gulf is presented. The model is based on the depth-averaged shallow water equations, which assume a hydrostatic pressure distribution. The continuity equation and two momentum equations, including the effects of bed friction, the Coriolis force, and wind stress, are solved. The 2D equations are integrated in time with the Alternating Direction Implicit (ADI) technique and discretized in space with the finite volume method on a rectangular mesh. To validate the model, a dam-break test case with an analytical solution is selected and the numerical and analytical results are compared. The capability of the model to simulate tidal currents in a real field is then demonstrated by modeling the current behavior in the Persian Gulf. The tidal currents in the study area are driven by the tidal fluctuations in the Hormuz Strait; therefore, water surface oscillation data at Hengam Island in the Hormuz Strait are used as the model input. Measured water surface elevations at Assaluye port serve as the check point of the model. The acceptable agreement between computed and measured results demonstrates the ability of the model to simulate marine hydrodynamics.
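To illustrate the ADI idea used above, the sketch below applies it to a much simpler 2D diffusion problem rather than the depth-averaged shallow water equations: each half time step is implicit in one coordinate direction only, so only tridiagonal systems have to be solved. The grid, parameters, and boundary treatment are hypothetical and do not reproduce the authors' solver.

```python
# Minimal sketch of the Alternating Direction Implicit (ADI) principle on a 2D
# diffusion problem (Peaceman-Rachford splitting).  Illustrative only.
import numpy as np
from scipy.linalg import solve_banded

nx, ny, dt, dx, D = 64, 64, 0.01, 1.0, 1.0   # square grid, so one banded matrix serves both sweeps
r = D * dt / (2.0 * dx * dx)

u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1.0                     # initial disturbance

# Tridiagonal operator (I - r*delta^2) in banded form for the implicit sweeps.
ab = np.zeros((3, nx))
ab[0, 1:] = -r
ab[1, :] = 1.0 + 2.0 * r
ab[2, :-1] = -r

def explicit(v, axis):
    """(I + r*delta^2) applied along the given axis, zero (Dirichlet) boundaries."""
    vp, vm = np.zeros_like(v), np.zeros_like(v)
    if axis == 0:
        vp[:-1, :], vm[1:, :] = v[1:, :], v[:-1, :]
    else:
        vp[:, :-1], vm[:, 1:] = v[:, 1:], v[:, :-1]
    return (1.0 - 2.0 * r) * v + r * (vp + vm)

for _ in range(100):
    # Sweep 1: implicit in x, explicit in y (one tridiagonal solve per column).
    rhs = explicit(u, axis=1)
    u = solve_banded((1, 1), ab, rhs)
    # Sweep 2: implicit in y, explicit in x (one tridiagonal solve per row).
    rhs = explicit(u, axis=0)
    u = solve_banded((1, 1), ab, rhs.T).T

print("total =", u.sum(), " peak =", u.max())
```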
Abstract: As a part of the evaluation system for R&D programs, the Korean government has applied feasibility analysis since 2008. Various professionals have put forth great effort to keep up with the high degree of freedom of R&D programs and to contribute to the evolution of the feasibility analysis. We analyze diverse R&D programs from various viewpoints, such as technology, policy, and economics, integrate the separate analyses, and finally arrive at a definite result: whether a program is feasible or unfeasible. This paper describes the concept and method of the feasibility analysis as a decision-making tool. The analysis unit and the content of each criterion, which are key elements in a comprehensive decision-making structure, are examined.
Abstract: Data mining is the extraction of knowledge from the large sets of data generated by various data processing activities. Frequent pattern mining is a very important task in data mining. Previous approaches to generating frequent itemsets generally adopt candidate generation and pruning techniques to achieve the desired objective. This paper shows how different approaches achieve the objective of frequent pattern mining, along with the complexities required to perform the job. The paper also examines a hardware approach, based on cache coherence, to improve the efficiency of this process. Data mining is helpful in building support systems for management, bioinformatics, biotechnology, medical science, statistics, mathematics, banking, networking, and other computer-related applications. This paper proposes the use of both the upward and the downward closure property for the extraction of frequent itemsets, which reduces the total number of scans required for the generation of candidate sets.
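For reference, a minimal sketch of candidate generation pruned with the downward closure (Apriori) property is shown below; the transaction data and support threshold are hypothetical, and the upward closure pruning proposed in the paper is not reproduced here.

```python
# Minimal Apriori-style sketch: candidate generation pruned with the downward
# closure property (every subset of a frequent itemset must itself be frequent).
# Hypothetical transactions and support threshold for illustration only.
from itertools import combinations

transactions = [{"bread", "milk"},
                {"bread", "diapers", "beer", "eggs"},
                {"milk", "diapers", "beer", "cola"},
                {"bread", "milk", "diapers", "beer"},
                {"bread", "milk", "diapers", "cola"}]
min_support = 3  # absolute support count

def support(itemset):
    return sum(itemset <= t for t in transactions)

# Level 1: frequent single items.
items = {frozenset([i]) for t in transactions for i in t}
frequent = [{s for s in items if support(s) >= min_support}]

k = 2
while frequent[-1]:
    prev = frequent[-1]
    # Join step: combine (k-1)-itemsets into k-item candidates.
    candidates = {a | b for a in prev for b in prev if len(a | b) == k}
    # Prune step (downward closure): drop candidates with an infrequent subset.
    candidates = {c for c in candidates
                  if all(frozenset(s) in prev for s in combinations(c, k - 1))}
    frequent.append({c for c in candidates if support(c) >= min_support})
    k += 1

for level, sets in enumerate(frequent[:-1], start=1):
    print(f"frequent {level}-itemsets:", [sorted(s) for s in sets])
```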
Abstract: We propose a method for the discrimination and classification of ovarian tissue as benign, malignant, or normal using independent component analysis and neural networks. The method was tested on a proteomic pattern set from a database, using radial basis function neural networks among others. The best performance was obtained with probabilistic neural networks, resulting in a 99% success rate, with 98% specificity and 100% sensitivity.
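A minimal sketch of an ICA-plus-classifier pipeline of this kind is given below; it assumes a scikit-learn environment, uses synthetic data in place of the proteomic pattern set, and substitutes an ordinary feed-forward classifier for the probabilistic and radial basis function networks, which scikit-learn does not provide directly.

```python
# Minimal sketch: ICA feature extraction followed by a neural-network classifier.
# Synthetic stand-in data; MLPClassifier replaces the probabilistic/RBF networks
# used in the paper, which are not available in scikit-learn.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 120
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(n_per_class, n_features))
               for m in (0.0, 0.8, 1.6)])          # normal / benign / malignant stand-ins
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

ica = FastICA(n_components=10, random_state=0)
Z_tr = ica.fit_transform(X_tr)                     # independent components as features
Z_te = ica.transform(X_te)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(Z_tr, y_tr)
print(classification_report(y_te, clf.predict(Z_te)))
```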
Abstract: In this paper, we propose an effective relay communication scheme for layered video transmission as an alternative for making the most of the limited resources in a lossy wireless communication network. Relaying brings more stable multimedia services to end clients than multiple description coding (MDC). In addition, retransmission from the relay device to the end client of only the parity data, produced by a channel coder for one or more video layers, is paramount for robustness against losses. Using these methods in resource-constrained environments, such as real-time user created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment, and minimal services remain possible. The mathematical analysis shows that the proposed method reduces the GOP loss rate compared to MDC and to a raptor code without relay: the GOP loss rate is about zero, while MDC and the raptor code without relay have GOP loss rates of 36% and 70%, respectively, at a 10% frame loss rate.
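As a rough illustration of how a GOP loss rate relates to a frame loss rate, the short sketch below computes the probability that a GOP is unusable under the simplifying assumption of independent frame losses, with and without parity-based recovery; the GOP length and the recovery rules are hypothetical and do not reproduce the paper's analysis.

```python
# Rough illustration: probability that a GOP is unusable given an i.i.d. frame
# loss rate.  The GOP length and the "any loss kills the GOP" / "up to k losses
# are recoverable via parity" rules are simplifying assumptions.
from math import comb

def gop_loss(p_frame, gop_len, recoverable=0):
    """P(more than `recoverable` frames of the GOP are lost)."""
    ok = sum(comb(gop_len, k) * p_frame**k * (1 - p_frame)**(gop_len - k)
             for k in range(recoverable + 1))
    return 1.0 - ok

p, n = 0.10, 12   # 10% frame loss rate, hypothetical 12-frame GOP
print(f"no protection        : {gop_loss(p, n):.2%}")
print(f"1 loss recoverable   : {gop_loss(p, n, recoverable=1):.2%}")
print(f"2 losses recoverable : {gop_loss(p, n, recoverable=2):.2%}")
```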
Abstract: The microarray technique allows the simultaneous measurement of the expression levels of thousands of mRNAs. By mining these data one can identify the dynamics of gene expression time series. Using principal component analysis (PCA), we uncover the circadian rhythmic patterns underlying the gene expression profiles of the cyanobacterium Synechocystis. We applied PCA to reduce the dimensionality of the data set; examination of the components also provides insight into the underlying factors measured in the experiments. Our results suggest that all of the rhythmic content of the data can be reduced to three main components.
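A minimal sketch of this kind of dimensionality reduction is shown below; the expression time series are synthetic stand-ins with an injected 24-hour rhythm, and the scikit-learn PCA call is an assumption about tooling, not the authors' implementation.

```python
# Minimal sketch: PCA on gene expression time series with a synthetic 24 h rhythm.
# Synthetic stand-in data; not the Synechocystis dataset used in the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
hours = np.arange(0, 72, 4)                       # 72 h sampled every 4 h
n_genes = 500
phases = rng.uniform(0, 2 * np.pi, n_genes)
amplitudes = rng.uniform(0.5, 2.0, n_genes)

# genes x time points: circadian signal plus noise.
X = (amplitudes[:, None] * np.cos(2 * np.pi * hours[None, :] / 24.0 + phases[:, None])
     + rng.normal(0, 0.5, (n_genes, hours.size)))

pca = PCA(n_components=5)
pca.fit(X)                                        # components live in "time" space
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
# The leading components should capture most of the rhythmic content.
```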
Abstract: One of the major problems in programming a cruise circuit is deciding which destinations to include and which not to. A decision problem thus emerges that may be solved using a linear and goal programming approach. The problem becomes more complex when several boats in the fleet must be programmed within a limited schedule, trying to match their capacity to a seasonal demand as closely as possible while also attempting to minimize operating costs. Moreover, the company's planner should consider passenger time as a limited asset and would like to maximize its usage. The aim of this work is to design a method in which, using linear and goal programming techniques, the cruise company's decision maker can obtain a circuit design model that achieves an optimal solution within the fleet schedule.
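To make the goal programming idea concrete, the sketch below (assuming the third-party PuLP package; the destinations, schedule, targets, and weights are hypothetical, not the paper's model) chooses destinations for one boat so that served demand comes close to a seasonal target while staying near a cost budget.

```python
# Minimal goal programming sketch with PuLP: binary include/exclude decisions,
# deviation variables for two goals, and a weighted-deviation objective.
# Destination data, targets and weights are hypothetical illustrations.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value, PULP_CBC_CMD

destinations = {  # name: (days required, passengers attracted, operating cost)
    "A": (2, 300, 10_000),
    "B": (3, 500, 18_000),
    "C": (1, 150, 6_000),
    "D": (4, 700, 26_000),
    "E": (2, 250, 9_000),
}
schedule_days, demand_target, cost_budget = 8, 1200, 45_000

prob = LpProblem("cruise_circuit", LpMinimize)
x = {d: LpVariable(f"use_{d}", cat="Binary") for d in destinations}

# Deviation variables for the two goals.
d_dem_minus = LpVariable("demand_shortfall", lowBound=0)
d_dem_plus = LpVariable("demand_excess", lowBound=0)
d_cost_plus = LpVariable("cost_overrun", lowBound=0)

# Hard constraint: the circuit must fit in the schedule.
prob += lpSum(destinations[d][0] * x[d] for d in destinations) <= schedule_days

# Goal 1: served passengers close to the seasonal demand target.
prob += (lpSum(destinations[d][1] * x[d] for d in destinations)
         + d_dem_minus - d_dem_plus == demand_target)
# Goal 2: operating cost not far above the budget.
prob += (lpSum(destinations[d][2] * x[d] for d in destinations)
         - d_cost_plus <= cost_budget)

# Objective: weighted sum of undesirable deviations (weights are illustrative).
prob += 3 * d_dem_minus + 1 * d_dem_plus + 2 * d_cost_plus

prob.solve(PULP_CBC_CMD(msg=False))
print("chosen:", [d for d in destinations if value(x[d]) > 0.5])
print("demand shortfall:", value(d_dem_minus), " cost overrun:", value(d_cost_plus))
```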
Abstract: We study the problem of decision making with a Dempster-Shafer belief structure. We analyze the previous work developed by Yager on using the ordered weighted averaging (OWA) operator in the aggregation of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for cases where the smallest value is the best result. We suggest the introduction of the ordered weighted geometric (OWG) operator into the Dempster-Shafer framework. In this case, we also discuss the possibility of aggregating with an ascending order, and we find that it is strictly necessary because the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example in which we can see the different results obtained by using the OWA, the ascending OWA (AOWA), the OWG and the ascending OWG (AOWG) operators.
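For reference, the sketch below implements the four aggregation operators named above following their standard definitions (reorder the arguments, then take a weighted arithmetic or geometric mean); the weighting vector and payoff values are hypothetical, and the surrounding Dempster-Shafer decision process is not reproduced.

```python
# Minimal sketch of the OWA, AOWA, OWG and AOWG operators.
# Hypothetical weighting vector and payoffs for illustration only.
import numpy as np

def owa(values, weights):          # descending OWA
    return float(np.dot(weights, np.sort(values)[::-1]))

def aowa(values, weights):         # ascending OWA
    return float(np.dot(weights, np.sort(values)))

def owg(values, weights):          # descending OWG (requires positive arguments)
    return float(np.prod(np.sort(values)[::-1] ** np.asarray(weights)))

def aowg(values, weights):         # ascending OWG
    return float(np.prod(np.sort(values) ** np.asarray(weights)))

payoffs = np.array([70.0, 40.0, 90.0])   # payoffs of one alternative under three states
w = np.array([0.5, 0.3, 0.2])            # weighting vector, sums to one

for name, op in [("OWA", owa), ("AOWA", aowa), ("OWG", owg), ("AOWG", aowg)]:
    print(f"{name:5s} = {op(payoffs, w):.2f}")
```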
Abstract: This paper considers inference under progressive type II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When both parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, and we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
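A minimal sketch of the ML step is given below; the compound Rayleigh parameterization f(t) = 2αβ^α t (β + t²)^-(α+1), S(t) = (β/(β + t²))^α is an assumption about the intended model, and the progressively censored sample is simulated stand-in data, not the paper's dataset.

```python
# Minimal sketch: maximum likelihood estimation for an assumed compound Rayleigh
# model under progressive type II censoring.  The likelihood is
#   L(alpha, beta) ~ prod_i f(x_i) * S(x_i)^{R_i},
# where R_i units are removed at the i-th observed failure.  Illustrative data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def neg_log_lik(theta, x, R):
    alpha, beta = np.exp(theta)            # optimize on the log scale for positivity
    log_f = (np.log(2 * alpha) + alpha * np.log(beta)
             + np.log(x) - (alpha + 1) * np.log(beta + x**2))
    log_S = alpha * (np.log(beta) - np.log(beta + x**2))
    return -(log_f + R * log_S).sum()

# Stand-in progressively censored sample: observed failure times x_i and the
# numbers R_i of surviving units withdrawn at each failure (illustrative only).
x = np.sort(rng.weibull(1.5, size=20))
R = np.array([2, 0, 1, 0, 0, 3, 0, 0, 1, 0, 0, 0, 2, 0, 0, 1, 0, 0, 0, 5])

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), args=(x, R), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"alpha_hat = {alpha_hat:.3f}, beta_hat = {beta_hat:.3f}")
```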
Abstract: Medical applications are among the most impactful areas of microrobotics. The ultimate goal of medical microrobots is to reach currently inaccessible areas of the human body and carry out a host of complex operations, such as minimally invasive surgery (MIS), highly localized drug delivery, and screening for diseases at their very early stages. Miniature, safe, and efficient propulsion systems hold the key to maturing this technology, but they pose significant challenges. A new type of propulsion, developed recently, uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms. Efficient methods for designing this type of propulsion system are lacking, and the goal of this paper is to fill that gap: a numerical strategy is proposed for designing multi-flagella propulsion systems. The strategy is based on the implementation of regularized Stokeslet and rotlet theory, resistive force theory (RFT), and a new approach of "local corrected velocity". The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the net forces and moments acting on the robot are considered. A multi-layer perceptron artificial neural network is then designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method is applied successfully to a sample configuration, and useful demonstrative results are obtained.
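The sketch below illustrates the neural-network control step only: a multi-layer perceptron learns an inverse mapping from a desired net force/moment to motor angular velocities. The hydrodynamic model is replaced by a made-up linear surrogate, so the data, network size, and physics are hypothetical and do not reproduce the paper's Stokeslet/RFT computations.

```python
# Minimal sketch: MLP learns the inverse mapping (desired wrench -> angular
# velocities) from samples of a stand-in forward model.  Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_flagella, n_samples = 3, 2000

# Stand-in "forward model": angular velocities -> net force/moment components.
A = rng.normal(size=(4, n_flagella))            # hypothetical coupling matrix
omega = rng.uniform(-50.0, 50.0, size=(n_samples, n_flagella))   # rad/s
wrench = omega @ A.T + 0.05 * rng.normal(size=(n_samples, 4))

# Train the inverse mapping: desired wrench -> angular velocities.
X_tr, X_te, y_tr, y_te = train_test_split(wrench, omega, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(net.score(X_te, y_te), 3))

# Query: angular velocities suggested for a desired net wrench.
print("omega for a sample wrench:", np.round(net.predict(wrench[:1]), 2))
```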
Abstract: Fault tree analysis is a well-known method for the reliability and safety assessment of engineering systems. In the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees. The main difference between these methods is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also mentioned.
Abstract: Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymized dataset may have higher identification risks than others. Individuals are more concerned about risks above the average and are more interested in knowing whether they are possibly exposed to such higher risk. The notion of overall risk in the above measurement method does not indicate whether some of the involved entities have a higher identity disclosure risk than the others. In this paper, we introduce an identity disclosure risk measurement method that not only expresses the overall risk, but also indicates whether some of the members face a higher risk than the others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records that have a risk value higher than the average, and how much larger these higher risk values are compared to the average. We analyze the disclosure risks for different disclosure control techniques applied to original microdata and present the results.
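The sketch below computes the ingredients named above for a toy table: per-record identification risk, its average, the share of records above the average, and how much larger those risks are. The 1/(equivalence-class-size) risk model, the quasi-identifier table, and the final combination into one figure are illustrative assumptions, not the paper's exact measure.

```python
# Illustrative computation of per-record risk and the summary quantities the
# abstract mentions.  The risk model and the final combination are assumptions.
from collections import Counter

# Toy anonymized quasi-identifier tuples (age band, ZIP prefix).
records = [("20-30", "123"), ("20-30", "123"), ("20-30", "124"),
           ("30-40", "123"), ("30-40", "123"), ("30-40", "123"),
           ("40-50", "125")]

class_sizes = Counter(records)
risks = [1.0 / class_sizes[r] for r in records]          # per-record risk

avg = sum(risks) / len(risks)
above = [r for r in risks if r > avg]
share_above = len(above) / len(risks)
excess_ratio = (sum(above) / len(above)) / avg if above else 0.0

print(f"average risk        : {avg:.3f}")
print(f"share above average : {share_above:.2%}")
print(f"excess ratio        : {excess_ratio:.2f}")
# One hypothetical way to fold these into a single overall figure:
print(f"overall (illustrative): {avg * (1 + share_above * (excess_ratio - 1)):.3f}")
```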
Abstract: The performance of small and medium enterprises has stagnated in the last two decades, mainly due to the emergence of HIV/AIDS. The disease has had a detrimental effect on the general economy of the country, leading to morbidity and mortality of the Kenyan workforce in their prime age. The present study sought to establish the economic impact of HIV/AIDS on micro-enterprise development in the Obunga slum of Kisumu, in terms of production loss and increasing labor-related costs, and to identify possible strategies to address the impact of HIV/AIDS on micro-enterprises. The study was necessitated by the observation that most micro-enterprises in the slum are facing a severe economic and social crisis due to the impact of HIV/AIDS: they become depleted and close down within a short time owing to the death of skilled and experienced workers. The study was carried out between June 2008 and June 2009 in the Obunga slum. Data were subjected to computer-aided statistical analysis that included descriptive statistics, chi-squared, and ANOVA techniques. Chi-squared analysis of the micro-enterprise owners' opinions on the impact of HIV/AIDS on micro-enterprise depletion, compared to other diseases, indicated high levels of negative effects of the disease at significance levels of P
Abstract: In this research, the influence of the flow pattern on the performance of a micro PEMFC was investigated experimentally. The investigation focused on the impact of the bend angles and rib/channel dimensions of a serpentine flow channel pattern on performance, and on how they improve it. The fuel cell employed for these experiments was a single micro PEMFC with a 1.44 cm² Nafion NRE-212 membrane. The results show that 60° and 120° bend angles provide better performance at 20 and 40 sccm inlet flow rates compared to the conventional design. Additionally, a wider channel with narrower rib spacing gives better performance. These results may be applied to develop universal heuristics for the design of flow patterns of micro PEMFCs.
Abstract: The axisymmetric vibration of an infinite pyrocomposite circular hollow cylinder, made of inner and outer pyroelectric layers of 6mm class bonded together by a Linear Elastic Material with Voids (LEMV) layer, is studied. The exact frequency equation is obtained for traction-free surfaces with continuity conditions at the interfaces. Numerical results, in the form of data and dispersion curves for the first and second modes of the axisymmetric vibration of the cylinder BaTiO3 / Adhesive / BaTiO3, taking the adhesive layer as an existing Carbon Fibre Reinforced Polymer (CFRP), are compared with those for a hypothetical LEMV layer with and without voids, as well as with a pyroelectric hollow cylinder. The damping is analyzed through the imaginary parts of the complex frequencies.
Abstract: A high-precision, temperature-insensitive current and voltage reference generator is presented. It is developed specifically for a temperature-compensated oscillator. The circuit, designed in MXIC 0.5 µm CMOS technology, has an operating voltage ranging from 2.6 V to 5 V and generates a voltage of 1.21 V and a current of 6.38 µA. It exhibits a variation of ±0.3 nA in the current reference and a stable output for the voltage reference as the temperature is varied from 0°C to 70°C. The power supply rejection ratio obtained without any filtering capacitor is -30 dB at 100 Hz and -12 dB at 10 MHz.
Abstract: The primary purpose of this paper is to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface comprises three major steps. The first step characterizes precipitation using statistical analysis: a multiple linear regression equation is built to obtain the long-term mean daily flow at ungauged stations, with mean daily flow and drainage area as the independent variables. Traditionally, mean flow data are generated using the Thiessen polygon method; however, the user can select other methods for obtaining mean flow data, such as Kriging, IDW (Inverse Distance Weighted), and Spline methods, as well as the traditional ones. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step obtains the watershed/topographic characteristics, which are among the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with the observed time series; the results obtained with the integrated GIS interface are closely similar and fit each other well. Also, the relationship between the topographic/watershed characteristics and the stream-flow time series is highly correlated.
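As a small illustration of one of the spatial interpolation options mentioned above, the sketch below implements Inverse Distance Weighted (IDW) interpolation; the station coordinates, observed mean flows, and power parameter are hypothetical, not the study's data.

```python
# Minimal sketch of Inverse Distance Weighted (IDW) interpolation.
# Hypothetical gauged-station data; illustrative only.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Interpolate values at xy_query from known points with 1/d^power weights."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.where(d == 0.0, 1e-12, d)          # avoid division by zero at exact hits
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

# Gauged stations (x, y in km) and their mean daily flows (m^3/s), illustrative.
stations = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0], [9.0, 9.0]])
flows = np.array([12.0, 20.0, 15.0, 25.0])

# Ungauged locations where a mean daily flow estimate is needed.
targets = np.array([[5.0, 5.0], [1.0, 7.0]])
print(np.round(idw(stations, flows, targets), 2))
```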
Abstract: Computer-based geostatistical methods can offer effective data analysis possibilities for agricultural areas by using vectorial data and their attribute information. These methods help detect spatial changes at different locations of large agricultural lands, which can lead to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a sugar beet field on a 20 x 20 m grid. Plant samples were also collected from the same plots. Some physical and chemical analyses of these samples were made by routine methods. According to the derived coefficients of variation, the topsoil organic matter (OM) distribution was more variable than the subsoil OM distribution; the highest C.V. value, 17.79%, was found for topsoil OM. The data were analyzed comparatively using kriging methods, which are widely used in geostatistics. Several interpolation methods (ordinary, simple, and universal kriging) and semivariogram models (spherical, exponential, and Gaussian) were tested in order to choose the most suitable ones. The average standard deviations of the values estimated by the simple kriging interpolation method were less than the average standard deviations of the measured values (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18). The most suitable combination was the simple kriging method with the exponential semivariogram model for topsoil, whereas the best combination was the simple kriging method with the spherical semivariogram model for subsoil. The results also showed that these computer-based geostatistical methods should be tested and calibrated for different experimental conditions and semivariogram models.
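For reference, the sketch below compares semivariogram models for a kriging interpolation, assuming the third-party PyKrige package; it uses ordinary kriging as a stand-in (PyKrige does not provide the simple kriging variant preferred in the study), and the grid coordinates and organic matter values are hypothetical, not the study's dataset.

```python
# Minimal sketch: ordinary kriging with alternative semivariogram models using
# PyKrige.  Hypothetical sample grid and topsoil organic matter values.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
# Sample locations on a 20 x 20 m grid (in meters) with stand-in topsoil OM values.
x = np.repeat(np.arange(0, 100, 20), 5).astype(float)
y = np.tile(np.arange(0, 100, 20), 5).astype(float)
om = 2.0 + 0.01 * x + 0.005 * y + rng.normal(0, 0.1, x.size)

gridx = np.arange(0.0, 100.0, 10.0)
gridy = np.arange(0.0, 100.0, 10.0)

for model in ("spherical", "exponential", "gaussian"):
    ok = OrdinaryKriging(x, y, om, variogram_model=model,
                         verbose=False, enable_plotting=False)
    z, ss = ok.execute("grid", gridx, gridy)   # estimates and kriging variances
    print(f"{model:>11s}: mean kriging std = {np.sqrt(ss).mean():.3f}")
```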