Abstract: In the proposed method for Web page ranking, a
novel theoretical model is introduced and tested on examples of order
relationships among IP addresses. Ranking is induced using a
convexity feature, which is learned from these examples
using a self-organizing procedure. We consider the problem of self-organizing
learning from IP data to be represented by a semi-random
convex polygon procedure, in which the vertices correspond to IP
addresses. Based on recent developments in our regularization
theory for convex polygons and corresponding Euclidean distance
based methods for classification, we develop an algorithmic
framework for learning ranking functions based on computational
geometric theory. We show that our algorithm is generic, and
present experimental results demonstrating the potential of our approach.
In addition, we explain the generality of our approach by showing its
possible use as a visualization tool for data obtained from diverse
domains, such as Public Administration and Education.
Abstract: The purpose of this paper is to study database models
so that they can be used efficiently in e-commerce websites. We seek
a method for saving and retrieving information in e-commerce
websites in a form that semantic web applications can work with,
and we also survey the different database technologies used in
e-commerce. One of the most important deficits of the
semantic web is the shortage of semantic data, since most of the
information is still stored in relational databases. We present an
approach to map legacy data stored in relational databases into the
Semantic Web using virtually any modern RDF query language, as
long as it is closed within RDF. To achieve this goal, we study XML
structures for the relational databases of legacy websites and then
move one level above XML, looking for a mapping from the relational
model (RDM) to RDF. Since a large number of semantic web
applications can take advantage of the relational model, opening up
the ways in which it can be converted to XML and RDF in modern
(semantic web) systems is important.
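The table-to-class, key-to-subject, column-to-predicate mapping described above can be sketched as follows. This is a minimal illustration only; the base IRI, vocabulary namespace, and table data are hypothetical, not taken from the paper:

```python
BASE = "http://example.org"  # illustrative base IRI, an assumption

def row_to_triples(table, pk_col, row):
    """Map one relational row to a list of N-Triples strings."""
    subject = f"<{BASE}/{table}/{row[pk_col]}>"
    # rdf:type links the row to a class named after its table
    triples = [f"{subject} "
               f"<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> "
               f"<{BASE}/vocab#{table}> ."]
    # every non-key column becomes a predicate with a literal object
    for col, val in row.items():
        if col != pk_col:
            triples.append(f'{subject} <{BASE}/vocab#{col}> "{val}" .')
    return triples

triples = row_to_triples("product", "id",
                         {"id": 42, "name": "lamp", "price": 9.5})
```

Each row thus becomes one typed RDF resource plus one triple per attribute, which is the level of mapping an RDF query language can then operate on.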
Abstract: Microarray data profile gene expression on a whole-genome
scale and therefore provide a good way to study associations
between gene expression and the occurrence or progression of cancer.
More and more researchers have realized that microarray data are
helpful for predicting cancer samples. However, the dimensionality of
the gene expression data is much larger than the sample size, which makes this
task very difficult. Identifying the significant genes that cause
cancer has therefore become an urgent, hot, and hard research
topic. Many feature selection algorithms proposed in
the past focus on improving cancer predictive accuracy at the
expense of ignoring the correlations between features. In this
work, a novel framework (named SGS) is presented for stable gene
selection and efficient cancer prediction. The proposed framework
first applies a clustering algorithm to find groups of genes with
high mutual correlation, then
selects the significant genes in each group with the Bayesian Lasso and
important gene groups with the group Lasso, and finally builds a prediction
model on the shrunken gene space with an efficient classification
algorithm (such as SVM, 1-NN, or regression). Experimental
results on real-world data show that the proposed framework often
outperforms existing feature selection and prediction methods,
such as SAM, IG, and Lasso-type prediction models.
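The three-stage pipeline (group, select within groups, classify on the shrunken space) can be sketched in much simplified form. The correlation-threshold grouping, class-mean-gap scoring, and leave-one-out 1-NN below are illustrative stand-ins for the paper's clustering, Bayesian Lasso/group Lasso, and classifier choices, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))            # 40 samples, 12 "genes"
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels driven by genes 0 and 1

# Stage 1: group correlated genes (greedy grouping on |correlation| > 0.6)
corr = np.abs(np.corrcoef(X.T))
groups, assigned = [], set()
for g in range(X.shape[1]):
    if g in assigned:
        continue
    members = [j for j in range(X.shape[1])
               if j not in assigned and corr[g, j] > 0.6]
    assigned.update(members)
    groups.append(members)

# Stage 2: keep the most discriminative gene per group (class-mean gap),
# then keep only the top-scoring groups -- a crude proxy for the
# Bayesian Lasso / group Lasso selection in the paper
def score(j):
    return abs(X[y == 1, j].mean() - X[y == 0, j].mean())

selected = sorted((max(g, key=score) for g in groups),
                  key=score, reverse=True)[:4]

# Stage 3: leave-one-out 1-NN classification on the shrunken gene space
Xs = X[:, selected]
correct = 0
for i in range(len(y)):
    d = np.linalg.norm(Xs - Xs[i], axis=1)
    d[i] = np.inf                       # exclude the held-out sample
    correct += int(y[d.argmin()] == y[i])
accuracy = correct / len(y)
```

Because selection happens per group, correlated genes compete with each other rather than being selected redundantly, which is the stability idea the framework exploits.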
Abstract: The aim of this paper is to identify the sociodemographic
and operational-financial determinants of the service
quality perceived by users of the national health services. Using
a survey conducted by the Ministry of Health,
comprising 16,936 interviews in 2006, we intend to find out whether
any characteristic determines the 2006 survey results.
Through a review of the literature, we also want to know whether the
operational-financial results have implications for hospital users'
perception of the quality of the services received. To achieve
our main goals, we use regression analysis to identify
the dimensions that may determine those results.
Abstract: Recently, grid computing has attracted wide attention in
the science, industry, and business fields, which require vast
amounts of computing. Grid computing provides an environment
in which many nodes (i.e., many computers) are connected with each
other through a local/global network and made available to many
users. In this environment, to process data among nodes
for any application, each node performs mutual authentication
using certificates issued by a Certificate Authority
(CA for short). However, if a failure or fault occurs in the
CA, no new certificates can be issued by the CA, and as
a result a new node cannot join the grid environment.
In this paper, an off-the-shelf scheme for dependable grid systems
using virtualization techniques is proposed and its implementation is
verified. The proposed approach uses virtualization techniques
to restart an application, e.g., the CA, if it has failed, so the system
can tolerate a failure or fault occurring in the CA. Since
the proposed scheme is easily implemented at the application level,
its implementation cost for the system builder is low
compared with other methods. Simulation results show that the
CA in the system can recover from its failure or fault.
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small sample sizes available. It is therefore important to determine a set of relevant genes that classify the data well. Traditionally, gene selection methods select the top-ranked genes according to their discriminatory power; these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes providing maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those found in the literature in terms of both classification accuracy and the number of genes selected.
Abstract: Web-based cooperative learning focuses on (1) the interaction and collaboration of community members, and (2) the sharing and distribution of knowledge and expertise through network technology to enhance learning performance. Numerous studies of web-based cooperative learning have demonstrated that cooperative scripts have a positive impact on specifying, sequencing, and assigning cooperative learning activities. Moreover, the literature indicates that role-play in web-based cooperative learning environments helps two or more students work together toward the completion of a common goal. Since students generally do not know each other and lack the face-to-face contact necessary to negotiate group role assignments in web-based cooperative learning environments, this paper extends the application of the genetic algorithm (GA) and proposes a GA-based algorithm to tackle the problem of role assignment in web-based cooperative learning environments, which not only saves communication costs but also reduces conflict between group members in negotiating role assignments.
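A GA for role assignment of this kind can be sketched as follows. The chromosome encoding (a permutation mapping roles to students), the preference matrix, and the operators below are illustrative assumptions, not the paper's actual algorithm:

```python
import random

random.seed(1)
N = 6  # number of students == number of roles
# pref[s][r]: how well student s suits role r (illustrative random data)
pref = [[random.random() for _ in range(N)] for _ in range(N)]

def fitness(perm):
    """Total suitability when role r is assigned to student perm[r]."""
    return sum(pref[perm[r]][r] for r in range(N))

def crossover(a, b):
    """Order crossover: keep a slice of `a`, fill the rest in `b`'s order."""
    i, j = sorted(random.sample(range(N), 2))
    kept = a[i:j]
    rest = [s for s in b if s not in kept]
    return rest[:i] + kept + rest[i:]

def mutate(perm):
    """Swap two role assignments in place."""
    i, j = random.sample(range(N), 2)
    perm[i], perm[j] = perm[j], perm[i]

pop = [random.sample(range(N), N) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # elitist truncation selection
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(10)]
    for c in children:
        if random.random() < 0.2:
            mutate(c)
    pop = parents + children

best = max(pop, key=fitness)
```

The permutation encoding guarantees every role is filled exactly once, so no negotiation between group members is needed once the GA converges.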
Abstract: This paper proposes a re-modification of
the minimum moment approach to resource leveling, i.e., a modification of the traditional minimum moment method by
Harris. The method is based on the critical path method. The new approach changes the criterion by which an
activity is selected for shifting when leveling the resource histogram. In the traditional method, the improvement factor
is found first to select the activity for each possible day of shifting. In the
modified method, the maximum value of the product of resource rate
and free float is found first, and the improvement factor is then
calculated for the activity to be shifted. In the proposed
method, the activity selected first for shifting is the one with the largest resource rate. The process is repeated for all the
remaining activities with possible shifts to obtain the updated histogram.
The proposed method significantly reduces the number of iterations
and is easier for manual computation.
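The difference between the two selection criteria can be shown on a toy example. The activity data below are illustrative, and the moment statistic shown is the standard half-sum-of-squares measure from Harris's method:

```python
# Illustrative activity data, not from the paper:
# (name, resource rate [units/day], free float [days])
activities = [
    ("A", 3, 0),   # critical activity: zero free float, cannot be shifted
    ("B", 5, 2),
    ("C", 2, 4),
    ("D", 6, 1),
]

shiftable = [a for a in activities if a[2] > 0]

# Modified minimum moment method: largest (resource rate x free float)
chosen_modified = max(shiftable, key=lambda a: a[1] * a[2])  # B: 5*2 = 10

# Proposed method: largest resource rate among shiftable activities
chosen_proposed = max(shiftable, key=lambda a: a[1])         # D: rate 6

# Harris's moment statistic for a daily resource histogram:
# shifting activities aims to reduce M = 1/2 * sum(y_d^2)
def moment(histogram):
    return 0.5 * sum(y * y for y in histogram)

level = moment([4, 4, 4])    # perfectly level histogram
uneven = moment([2, 4, 6])   # same total demand, larger moment
```

A level histogram minimizes the moment for a given total resource demand, so each shift is accepted only if it lowers `moment` of the updated histogram.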
Abstract: The control design for unmanned underwater vehicles (UUVs) is challenging due to the uncertainties in the complex dynamic modeling of the vehicle as well as its unstructured operational environment. To cope with these difficulties, a practical robust control is therefore desirable. The paper deals with the application of coefficient diagram method (CDM) for a robust control design of an autonomous underwater vehicle. The CDM is an algebraic approach in which the characteristic polynomial and the controller are synthesized simultaneously. Particularly, a coefficient diagram (comparable to Bode diagram) is used effectively to convey pertinent design information and as a measure of trade-off between stability, response speed and robustness. In the polynomial ring, Kharitonov polynomials are employed to analyze the robustness of the controller due to parametric uncertainties.
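The Kharitonov robustness test mentioned above reduces an interval polynomial to four fixed vertex polynomials. A minimal sketch of their construction, with illustrative coefficient bounds (coefficients in ascending powers of s):

```python
def kharitonov(lo, hi):
    """Return the four Kharitonov vertex polynomials of the interval
    polynomial sum(a_i * s^i) with a_i in [lo[i], hi[i]].
    Coefficients are listed in ascending powers of s."""
    patterns = [      # repeating lower/upper choice per coefficient index
        "llhh",       # K1: lo, lo, hi, hi, lo, lo, hi, hi, ...
        "hhll",       # K2
        "lhhl",       # K3
        "hllh",       # K4
    ]
    return [[lo[i] if pat[i % 4] == "l" else hi[i] for i in range(len(lo))]
            for pat in patterns]

# Illustrative interval polynomial: s^2 + a1*s + a0,
# with a0 in [1, 2], a1 in [3, 4], and leading coefficient fixed at 1
K1, K2, K3, K4 = kharitonov([1, 3, 1], [2, 4, 1])
```

Kharitonov's theorem states that the whole interval family is Hurwitz stable if and only if these four polynomials are, which is what makes the test practical for checking a CDM design against parametric uncertainty.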
Abstract: The ARMrayan Multimedia Mobile CMS (Content
Management System) is the first mobile CMS that gives the
opportunity to users for creating multimedia J2ME mobile
applications with their desired content, design and logo; simply,
without any need to write even a line of code. The low-level
programming and compatibility problems of J2ME, along with
UI design difficulties, make it hard for most people, even
programmers, to broadcast their content to the widespread mobile
phones used by nearly everyone. This system provides user-friendly,
PC-based tools for creating a tree index of pages and inserting
multiple multimedia contents (e.g. sound, video and picture) in each
page for creating a J2ME mobile application. The output is a standalone
Java mobile application that has a user interface, shows texts
and pictures and plays music and videos regardless of the type of
devices used as long as the devices support the J2ME platform.
Bitmap fonts have also been used thus Middle Eastern languages can
be easily supported on all mobile phone devices. We omitted
programming concepts for users in order to simplify the design of
multimedia content-oriented mobile applications for use in educational,
cultural, or marketing centers. Ordinary operators can now create a
variety of multimedia mobile applications such as tutorials,
catalogues, books, and guides in minutes rather than months.
Simplicity and power have been the goals of this CMS. In this paper,
we present the software engineering design concepts of the
ARMrayan MCMS along with the implementation challenges faced
and the solutions adopted.
Abstract: The objectives were to identify cyanide-degrading
bacteria and study cyanide removal efficiency. Agrobacterium
tumefaciens SUTS 1 was isolated. This is a new strain of
microorganisms for cyanide degradation. The maximum growth of
SUTS 1 reached 4.7 × 10^8 CFU/ml within 4 days. The cyanide
removal efficiency was studied at 25, 50, and 150 mg/L cyanide. The
residual cyanide, ammonia, nitrate, nitrite, pH, and cell counts were
analyzed. At 25 and 50 mg/L cyanide, SUTS 1 achieved similar
removal efficiencies of approximately 87.50%. At 150 mg/L cyanide,
SUTS 1 increased the cyanide removal efficiency up to 97.90%. Cell
counts of SUTS 1 increased when the cyanide concentration was
lower. Ammonia increased as the removal efficiency
increased. Nitrate increased as ammonia decreased, but
nitrite was not detected in any experiment. pH values also increased
at higher cyanide concentrations.
Abstract: Three dimensional simulations are carried out to estimate the effect of wind direction, wind speed and geometry on the flow and dispersion of vehicular pollutant in a street canyon. The pollutant sources are motor vehicles passing between the two buildings. Suitable emission factors for petrol and diesel vehicles at varying vehicle speed are used for the estimation of the rate of emission from the streets. The dispersion of automobile pollutant released from the street is simulated by introducing vehicular emission source term as a fixed-flux boundary condition at the ground level over the road. The emission source term is suitably calculated by adopting emission factors from literature for varying conditions of street traffic. It is observed that increase in wind angle disturbs the symmetric pattern of pollution distribution along the street length. The concentration increases in the far end of the street as compared to the near end.
Abstract: Wind energy has been shown to be one of the most
viable sources of renewable energy. With current technology, the
cost of wind energy is competitive with more conventional sources of
energy such as coal. Most blades available for commercial-grade
wind turbines incorporate a straight span-wise profile and airfoil
shaped cross sections. These blades are found to be very efficient at
lower wind speeds in comparison to the potential energy that can be
extracted. However, as the oncoming wind speed increases, the
efficiency of the blades decreases as they approach a stall point. This
paper explores the possibility of increasing the efficiency of the
blades at higher wind speeds while maintaining efficiency at the
lower wind speeds. The design intends to maintain efficiency at
lower wind speeds by selecting the appropriate orientation and size
of the airfoil cross sections based on a low oncoming wind speed and
given constant rotation rate. The blades will be made more efficient
at higher wind speeds by implementing a swept blade profile.
Performance was investigated using computational fluid
dynamics (CFD).
Abstract: This research aimed to find out the determining
factors for ISO 14001 EMS implementation among SMEs in
Malaysia from the resource-based view. A cross-sectional
survey approach was used. A research model is proposed
comprising ISO 14001 EMS implementation as the criterion
variable while physical capital resources (i.e. environmental
performance tracking and organizational infrastructures), human
capital resources (i.e. top management commitment and support,
training and education, employee empowerment and teamwork) and
organizational capital resources (i.e. recognition and reward,
organizational culture and organizational communication) as the
explanatory variables. The research findings show that only
environmental performance tracking, top management commitment
and support and organizational culture are found to be positively and
significantly associated with ISO 14001 EMS implementation. It is
expected that this research will provide new knowledge and a
base for future studies on the role played by a firm's internal
resources.
Abstract: The volume of XML data exchange is increasing explosively, and efficient mechanisms for XML data management are vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of the different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and the newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time, and duration. We show the effect of the label path, of extracting values and storing them in a separate table, and of the type of join needed for each method's query answering.
Abstract: The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for casting aluminium alloys. A good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten metal temperature, filling time, injection pressure, and plunger velocity. Moreover, selecting optimum process parameters also minimizes pressure die casting defects such as porosity, insufficient spread of molten material, and flash. Therefore, a pressure die casting component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of parameter levels, have been determined using Taguchi's parameter design approach. The experiments were performed according to the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance were performed on the mean and the signal-to-noise ratio to estimate the percentage contribution of each process parameter. A confidence interval has also been estimated at the 95% confidence level, and three confirmation experiments were performed to validate the optimum parameter levels. Overall, a 2.352% reduction in defects was observed with the suggested optimum process parameters.
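Since defects are to be minimized, the analysis above uses Taguchi's smaller-the-better signal-to-noise ratio to rank parameter levels. A minimal sketch of that statistic follows; the defect readings are illustrative, not the paper's data:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio in dB:
    S/N = -10 * log10( mean(y_i^2) ). Higher is better."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

# Illustrative defect percentages for two parameter combinations
trial_a = [2.0, 2.5, 2.2]
trial_b = [1.1, 1.3, 1.0]

sn_a = sn_smaller_the_better(trial_a)
sn_b = sn_smaller_the_better(trial_b)
# The combination with the higher S/N ratio (trial_b here) is preferred.
```

Averaging this S/N value over all L18 runs at each level of a factor gives the level response table from which the optimum parameter setting is read off.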
Abstract: Some quality control tools use non-metric subjective information coming from experts, who qualify the intensity of relations existing inside processes without quantifying them. In this paper we develop a quality control analytic tool that measures the impact, or strength, of the relationship between process operations and product characteristics. The tool includes two models: a qualitative model, allowing relationships to be described and analyzed, and a formal quantitative model, by means of which the relationships are quantified. In the first, concepts from graph theory are applied to identify the process elements that can be sources of variation, that is, the quality characteristics or operations that have some sort of precedence over the others and that should become control items. The most dependent elements can also be identified, that is, the elements receiving the effects of the elements identified as variation sources. If controls focus on those dependent elements, control efficiency is compromised, because effects, not causes, would be controlled. The second model adapts the multivariate statistical technique of covariance structural analysis, which allowed us to quantify the relationships. The computer package LISREL was used to obtain the statistics and to validate the model.
Abstract: Optical 3D measurement of objects is meaningful in
numerous industrial applications, and in many cases shape acquisition
of weakly textured objects is essential. Examples are mass-produced parts
made of plastic or ceramic, such as housing parts or ceramic bottles, as
well as agricultural products like tubers. These parts are often
conveyed in a wobbling way during automated optical inspection,
so conventional 3D shape acquisition methods like laser scanning
might fail. In this paper, a novel approach for acquiring the 3D shape of
weakly textured and moving objects is presented. To facilitate such
measurements an active stereo vision system with structured light is
proposed. The system consists of multiple camera pairs and auxiliary
laser pattern generators. It performs the shape acquisition within one
shot and is beneficial for rapid inspection tasks. An experimental
setup including hardware and software has been developed and
implemented.
Abstract: This paper proposes the hypothesis that multilateralism and regionalism are complementary, and that regional income convergence is likely with a like-minded and committed regionalism whose members often share geographic and cultural links. The association between international trade, income per capita, and regional income convergence in the founder members of ASEAN and SAARC is explored by applying the Lumsdaine and Papell approach. The causal relationships between these variables are also studied in the respective trade blocs using Granger causality tests. The conclusion is that global reforms have had a greater impact on increasing trade in both trade blocs but induced convergence only in the ASEAN-5 countries. The experience of the ASEAN countries shows a two-way causal relationship between trade and regional income convergence. There is no evidence of income convergence or causality in the SAARC countries.
Abstract: In this paper we propose an NLP-based method for
ontology population from texts and apply it to semi-automatically
instantiate a generic knowledge base (generic domain ontology) in
the risk management domain. The approach is semi-automatic and
uses domain expert intervention for validation. The proposed
approach relies on a set of instance recognition rules based on
syntactic structures, and on the predicative power of verbs in the
instantiation process. It is not domain dependent since it relies
heavily on linguistic knowledge.
A description of an experiment performed on part of the
ontology of the PRIMA project (supported by the European
Community) is given. A first validation of the method is done by
populating this ontology with Chemical Fact Sheets from the
Environmental Protection Agency. The results of this experiment
complete the paper and support the hypothesis that relying on the
predicative power of verbs in the instantiation process improves
performance.