Abstract: Market-based models are frequently used for resource
allocation on the computational grid. However, as the size of
the grid grows, it becomes difficult for a customer to negotiate
directly with all the providers. Middle agents are introduced to
mediate between providers and customers and to facilitate the
resource allocation process. The most frequently deployed middle
agents are matchmakers and brokers. A matchmaker agent
finds candidate providers who can satisfy the requirements
of the consumer, after which the customer negotiates directly with
the candidates. A broker agent mediates the negotiation with
the providers in real time.
In this paper we present a new type of middle agent, the marketmaker.
Its operation is based on two parallel processes: through
the investment process the marketmaker acquires resources and
resource reservations in large quantities, while through the resale process
it sells them to customers. The operation of the marketmaker
rests on the fact that, through its global view of the grid, it can
perform a more efficient resource allocation than is possible in
one-to-one negotiations between customers and providers.
We present the operation and the algorithms governing the
marketmaker agent, contrasting it with the matchmaker and
broker agents. Through a series of simulations in the task-oriented
domain we compare the operation of the three agent types. We find
that the marketmaker agent leads to better performance in the
allocation of large tasks and to a significant reduction of the messaging
overhead.
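The two parallel processes can be illustrated with a toy sketch. Everything below (the class, the unit bookkeeping, the message counting) is a hypothetical illustration of the idea, not the paper's algorithm:

```python
# Toy marketmaker: the investment process acquires resource units in
# bulk from providers, the resale process serves customer demands
# from the accumulated inventory. All names and numbers are invented.

class Marketmaker:
    def __init__(self):
        self.inventory = 0   # resource units currently held
        self.messages = 0    # messages exchanged so far

    def invest(self, providers, units_each):
        """Investment process: buy units_each units from every provider."""
        for _ in providers:
            self.inventory += units_each
            self.messages += 1          # one bulk-purchase message per provider

    def resell(self, demands):
        """Resale process: serve each customer demand from inventory."""
        served = 0
        for d in demands:
            if self.inventory >= d:
                self.inventory -= d
                served += 1
            self.messages += 1          # one request/reply exchange per customer
        return served

mm = Marketmaker()
mm.invest(providers=["p1", "p2", "p3"], units_each=10)
served = mm.resell(demands=[8, 12, 6])
```

In a matchmaker setting, each customer would instead contact several candidate providers and negotiate with each, multiplying the message count; the marketmaker collapses this into one exchange per provider plus one per customer.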
Abstract: Model-based approaches have been applied successfully
to a wide range of tasks such as specification, simulation, testing, and
diagnosis. One bottleneck, however, often prevents the introduction of these
ideas: manual modeling is a non-trivial, time-consuming task.
Automatically deriving models by observing and analyzing running
systems is one possible way to overcome this bottleneck. To
derive a model automatically, some a priori knowledge about the
model structure, i.e. about the system, must exist. Such a model
formalism could be used as follows: (i) by observing the network
traffic, a model of the long-term system behavior could be generated
automatically; (ii) test vectors could be generated from the model;
(iii) while the system is running, the model could be used to diagnose
abnormal system behavior.
The main contribution of this paper is the introduction of a model
formalism called the 'probabilistic regression automaton', which is
suitable for the tasks mentioned above.
Abstract: This paper explores the knowledge and attitudes of
women and men in decision making on Pap smear screening. This
qualitative study recruited 52 respondents, 44 women and 8 men,
using purposive sampling with a snowballing technique, through
in-depth interviews. The study demonstrates several key findings.
Female respondents had better knowledge than male respondents. Most
of the women perceived Pap smear screening as beneficial and
important, but remained doubtful about actually proceeding with the test. Male
respondents were supportive in terms of sending their spouses to
health facilities or giving their wives more freedom to choose and
make decisions on their own health, for the prominent reason that
women know their own health best. It is expected that the results
of this study will provide a useful guideline for healthcare providers
in preparing actions and interventions that deliver extensive education
to improve people's knowledge of and attitudes towards the Pap smear.
Abstract: Support vector machines (SVMs) have shown
superior performance compared to other machine learning techniques,
especially in classification problems. Yet one limitation of SVMs is
the lack of an explanation capability which is crucial in some
applications, e.g. in the medical and security domains. In this paper, a
novel approach for eclectic rule-extraction from support vector
machines is presented. This approach utilizes the knowledge acquired
by the SVM and represented in its support vectors as well as the
parameters associated with them. The approach comprises three stages:
training, propositional rule extraction, and rule quality evaluation.
Results from four different experiments have demonstrated the value
of the approach for extracting comprehensible rules of high accuracy
and fidelity.
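As a rough illustration of the three stages, the sketch below derives interval rules from the support vectors of a hypothetical linear SVM and scores their fidelity against the SVM's own predictions. The support vectors, coefficients, bias, and test points are invented stand-ins; the paper's actual eclectic extraction procedure is not reproduced here.

```python
# Stage 1 (assumed already done): a trained linear SVM summarized by
# its support vectors, dual coefficients (alpha_i * y_i), and bias.
support_vectors = [(1.0, 2.0), (2.0, 3.0), (6.0, 5.0), (7.0, 7.0)]
dual_coefs      = [-1.0, -1.0, 1.0, 1.0]   # negative: class 0, positive: class 1
bias = -68.0

def svm_predict(x):
    """Linear-kernel decision: sign of sum_i coef_i * <sv_i, x> + b."""
    score = bias + sum(c * (sv[0] * x[0] + sv[1] * x[1])
                       for c, sv in zip(dual_coefs, support_vectors))
    return 1 if score > 0 else 0

# Stage 2: propositional rules as per-feature intervals covering the
# support vectors of each class (using the knowledge the SVM holds in
# its support vectors, as the abstract describes).
def extract_rules():
    rules = {}
    for label in (0, 1):
        svs = [sv for sv, c in zip(support_vectors, dual_coefs)
               if (c > 0) == (label == 1)]
        rules[label] = [(min(v[f] for v in svs), max(v[f] for v in svs))
                        for f in range(2)]
    return rules

def rule_predict(rules, x):
    for label, bounds in rules.items():
        if all(lo <= x[f] <= hi for f, (lo, hi) in enumerate(bounds)):
            return label
    return None   # point not covered by any rule

# Stage 3: rule quality measured as fidelity to the SVM on test points.
rules = extract_rules()
test_points = [(1.5, 2.5), (6.5, 6.0)]
fidelity = sum(rule_predict(rules, x) == svm_predict(x)
               for x in test_points) / len(test_points)
```

Fidelity here counts how often the extracted rules agree with the SVM, which is one of the standard quality measures alongside accuracy and comprehensibility.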
Abstract: In this study, the contact problem of a layered composite which consists of two materials with different elastic constants and heights resting on two rigid flat supports with sharp edges is considered. The effect of gravity is neglected. While friction between the layers is taken into account, it is assumed that there is no friction between the supports and the layered composite so that only compressive tractions can be transmitted across the interface. The layered composite is subjected to a uniform clamping pressure over a finite portion of its top surface. The problem is reduced to a singular integral equation in which the contact pressure is the unknown function. The singular integral equation is evaluated numerically and the results for various dimensionless quantities are presented in graphical forms.
Abstract: The article investigates how 14- to 15-year-olds build informal conceptions of inferential statistics as they engage in a modelling process and build their own computer simulations with dynamic statistical software. The study proposes four primary phases of informal inferential reasoning for students in the statistical modelling and simulation process. Findings show shifts in the conceptual structures across the four phases and point to the potential of all of these phases for fostering the development of students' robust knowledge of the logic of inference when using computer-based simulations to model and investigate statistical questions.
Abstract: Using animated videos as teaching materials is an
effective learning method. We argue, however, that an even more
effective method is to have the learners produce the teaching videos
themselves. Learners who act as producers must learn and understand
the material well in order to produce and present teaching videos to
others. The purpose of this study is to propose a project-based
learning (PBL) technique based on co-producing videos of IT (information
technology) teaching materials. We used the T2V player to produce
the videos from TVML, a TV-program description language. Following
the proposed method, we assigned the learners to produce
animated videos for the “National Examination for Information
Processing Technicians (IPA examination)” in Japan, in order to have
them learn a range of knowledge and skills in the IT field. Experimental
results showed that a learning effect occurred during the video
production process, which is useful for developing IT personnel.
Abstract: Fuzzy operators were introduced in order
to model, at a mathematical level, the compensatory behavior in
processes of decision making or subjective evaluation. This
paper introduces such operators by means of a computer vision
application.
A novel method based on a fuzzy logic reasoning
strategy is proposed for edge detection in digital images without
determining a threshold value. The proposed approach begins by
segmenting the images into regions using a floating 3x3 binary matrix.
The edge pixels are mapped to a range of values distinct from each
other. To assess the robustness of the proposed method, its results for
different captured images are compared to those obtained with the
linear Sobel operator. The method consistently improves the smoothness
and straightness of straight lines and gives good roundness for curved
lines. At the same time, corners become sharper and can be defined
easily.
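One way the threshold-free idea can be sketched (under assumptions of ours, not the paper's actual fuzzy rules) is to assign each pixel a fuzzy edge-membership grade computed from its 3x3 neighbourhood, so that edge pixels land in a distinct range of values without any fixed cut-off:

```python
# Hypothetical sketch: slide a 3x3 window over a grayscale image and
# grade each pixel's "edgeness" in [0, 1] from the maximum intensity
# difference to its neighbours. No binary threshold is applied; the
# output is a map of fuzzy membership values.

def fuzzy_edge_map(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i][j]
            diff = max(abs(img[i + di][j + dj] - centre)
                       for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = diff / 255.0   # membership grade: 0 = flat, 1 = strong edge
    return out

# A dark region next to a bright region: the two boundary columns get
# full membership, interior pixels stay at zero.
img = [[0, 0, 0, 255, 255, 255],
       [0, 0, 0, 255, 255, 255],
       [0, 0, 0, 255, 255, 255],
       [0, 0, 0, 255, 255, 255]]
edges = fuzzy_edge_map(img)
```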
Abstract: Tourism is a phenomenon that human communities have valued for a long time. It has been evolving continually in response to a variety of social and economic needs, and, with the increasing development of communication, the considerable growth in tourist numbers and the resulting exchange income have brought many benefits, such as employment, to the communities. For the purpose of tourism development in this zone, suitable times and locations need to be specified for tourist attendance. One of the most important needs of tourists is knowledge of climate conditions and of suitable times for sightseeing. In this survey, the climate trend conditions for tourist attendance in Isfahan province have been identified using the modified tourism climate index (TCI) together with the SPSS, GIS, Excel, and Surfer software packages. This index systematically evaluates climate conditions for tourism affairs and activities using the monthly means of maximum daily temperature, daily mean temperature, minimum relative humidity, daily mean relative humidity, precipitation (mm), total sunny hours, wind speed, and dust. The results obtained using Kendall's correlation test show that all months, from January through December, are significant and have an increasing trend, which indicates the best conditions for tourist attendance. S, P, T_mean, T_max, and dust were estimated for 1976-2005 and Kendall's correlation test was applied again to see which parameters have been effective. Based on the test, we observed among the effective parameters that the rate of dust in February, March, April, May, June, July, August, October, and November is decreasing, that precipitation in September and January is increasing, and that the radiation rate in May and August is increasing, all of which indicate better conditions of convenience. The maximum temperature in June is also decreasing.
Isfahan province has two peaks, in spring and fall, and the best places for tourism are in the northern and western areas.
Abstract: Transferring information developed by other people is an ordinary event that happens during daily conversations; for example, when employees see each other in the organization, have lunch together, or attend a meeting, they talk about their experience, discuss their current projects, and talk about their successes with specific problems. Despite the potential value of leveraging organizational memory and expertise by using OMS and ER, small organizations still have not been able to capitalize on their promised value. Each organization has its internal knowledge management system; in some organizations the system faces a lack of expert people to save their experience in the repository, while in other organizations there are many expert people but the organization does not make maximum use of their knowledge.
Abstract: Malaysia aggressively promotes the use of ICT
among its mass population, supported by government
policies and programs targeting the general population. However,
with the uneven distribution of basic telecommunication
infrastructure between urban and rural areas, the cost of being
“interconnected” considered high among the poorer rural
population, and the lack of local content suited to rural needs
and lifestyles, it remains a challenge for Malaysia to achieve its
Vision 2020 agenda of moving the nation towards an information
society by the year 2020. Among the existing programs
carried out by the government to encourage the use of ICT by
the rural population is “Kedaikom”, a community telecenter whose
general aim is to engage the community, expose it to ICT,
and encourage the diffusion of ICT technology to the rural
population. The research investigated, by means of a questionnaire
survey, how Kedaikom, as a community telecenter, could play a
role in encouraging the rural or underserved community to use
ICT. The survey results show that the community
telecenter can bridge the digital divide between the underserved
rural population and the well-connected urban population in Malaysia.
More of the rural population, especially the younger generation
and those with a higher educational background, are using the
community telecenter to connect to ICT.
Abstract: Green spaces might be very attractive, but
where are the economic benefits? What value do nature and
landscape have for us? What difference will they make to jobs,
health, and the economic strength of areas struggling with
deprivation and social problems? [1] There is a need to consider
green spaces from a different perspective. Green planning is not just
about flora and fauna, but also about planning for economic benefits
[2]. It is worth trying to quantify the value of green spaces, since
nature and landscape are crucially important to our quality of life and
to sustainable development. The reality, however, is that urban
development often takes place at the expense of green spaces.
Urbanization is an ongoing process throughout the world; however,
hyper-urbanization without environmental planning is destructive,
not constructive [3]. Urban spaces are believed to be more valuable
than other land uses, particularly green areas, simply because of the
market value attached to urban spaces. However, attractive
landscapes can help raise the quality and value of the urban market
even further. In order to reach these objectives of integrated planning,
the Green-Value-Gap needs to be bridged. Economists have to
understand the concept of green planning and its spinoffs, and
environmentalists have to understand the importance of urban
economic development and its benefits to green planning. An
interface between environmental management, economic
development, and sustainable spatial planning is needed to bridge
the Green-Value-Gap.
Abstract: Brucellosis is a zoonotic disease; its symptoms and manifestations in humans are not distinctive, and its traditional diagnosis is based on culture, serological methods, and conventional PCR. For more sensitive and specific detection and differentiation of Brucella spp., the real-time PCR method is recommended. This research was performed to determine the presence and prevalence of Brucella spp. and to differentiate Brucella abortus and Brucella melitensis in the house mouse (Mus musculus) in the west of Iran. A TaqMan analysis and single-step PCR were carried out on a total of 326 DNA samples from mouse spleens. Of the 326 samples, 128 (39.27%) gave positive results for Brucella spp. by conventional PCR; furthermore, 65 and 32 of the 128 specimens were positive for B. melitensis and B. abortus, respectively. These results indicate a high presence of this pathogen in the area and show that real-time PCR is considerably faster than the current standard methods for identification and differentiation of Brucella species. To our knowledge, this study is the first prevalence report of direct identification and differentiation of B. abortus and B. melitensis by real-time PCR in mouse tissue samples in Iran.
Abstract: Nowadays, many manufacturing companies try to
reinforce their competitiveness or find a breakthrough by considering
collaboration. In Korea, more than 900 manufacturing companies are
using web-based collaboration systems developed by the
government-led project referred to as i-Manufacturing. The system
supports functions similar to those of Product Data Management (PDM)
as well as a Project Management System (PMS). A web-based
collaboration system provides many useful functions for collaborative
work. This system, however, does not support new linking services
between buyers and suppliers. Therefore, in order to find new
collaborative partners, this paper proposes a framework that creates
new connections between buyers and suppliers to facilitate their
collaboration, referred to as the Excellent Manufacturer Scouting System
(EMSS). EMSS plays the role of a bridge between overseas buyers and
suppliers. As part of the study on EMSS, we also propose a method for
evaluating the manufacturability of potential partners based on six main
factors. Based on the evaluation results, buyers obtain a useful guideline
for choosing new partners before entering negotiation processes
with them.
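A six-factor evaluation of this kind could be sketched as a weighted score. The abstract does not name the six factors, so the factor names and weights below are hypothetical placeholders, not the paper's actual criteria:

```python
# Hypothetical six-factor manufacturability evaluation: each candidate
# supplier is rated per factor on a 0-100 scale, and the overall score
# is a weighted average used to rank candidates for a buyer.

FACTORS = ["quality", "delivery", "cost", "technology", "capacity", "reliability"]
WEIGHTS = {"quality": 0.25, "delivery": 0.20, "cost": 0.20,
           "technology": 0.15, "capacity": 0.10, "reliability": 0.10}

def manufacturability_score(ratings):
    """Weighted average of the six per-factor ratings."""
    return sum(WEIGHTS[f] * ratings[f] for f in FACTORS)

def rank_suppliers(candidates):
    """Return supplier names ordered from best to worst overall score."""
    return sorted(candidates,
                  key=lambda name: manufacturability_score(candidates[name]),
                  reverse=True)

candidates = {
    "supplier_a": {"quality": 90, "delivery": 80, "cost": 70,
                   "technology": 85, "capacity": 60, "reliability": 75},
    "supplier_b": {"quality": 70, "delivery": 90, "cost": 85,
                   "technology": 60, "capacity": 80, "reliability": 80},
}
ranking = rank_suppliers(candidates)
```

The ranked list would then serve as the guideline mentioned above: the buyer shortlists the top candidates before entering negotiations.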
Abstract: Morphological operators transform the original image
into another image through its interaction with another image of
a certain shape and size, known as the structuring element.
Mathematical morphology provides a systematic approach to analyzing
the geometric characteristics of signals or images, and has been
applied widely to many tasks such as edge detection,
object segmentation, noise suppression, and so on. Fuzzy
mathematical morphology aims to extend the binary morphological
operators to grey-level images. In order to define the basic
morphological operations of fuzzy erosion, dilation, opening,
and closing, a general method based upon fuzzy implication and
inclusion-grade operators is introduced. The fuzzy morphological
operations extend the ordinary morphological operations by using
fuzzy sets, where the union operation is replaced by a
maximum operation and the intersection operation is replaced by a
minimum operation.
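The max/min substitution described above can be sketched directly for a flat structuring element. This is a simplification: it shows only how union becomes max and intersection becomes min on grey-level images; the fuzzy implication and inclusion-grade operators mentioned in the text are not modelled here.

```python
# Grey-level morphology with a flat 3x3 structuring element:
# dilation takes a maximum over the neighbourhood (union -> max),
# erosion takes a minimum (intersection -> min). Pixel values are
# fuzzy membership grades in [0, 1].

def _neighbourhood(img, i, j):
    h, w = len(img), len(img[0])
    return [img[i + di][j + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= i + di < h and 0 <= j + dj < w]

def dilate(img):
    return [[max(_neighbourhood(img, i, j)) for j in range(len(img[0]))]
            for i in range(len(img))]

def erode(img):
    return [[min(_neighbourhood(img, i, j)) for j in range(len(img[0]))]
            for i in range(len(img))]

def opening(img):   # erosion followed by dilation: removes small peaks
    return dilate(erode(img))

def closing(img):   # dilation followed by erosion: fills small holes
    return erode(dilate(img))

# An isolated bright peak is spread by dilation and removed by opening.
img = [[0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]
```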
This work consists of two parts. In the first, fuzzy set
theory, fuzzy mathematical morphology (which is based on fuzzy
logic and fuzzy set theory), and fuzzy morphological operations and their
properties are studied in detail. In the second part, the application
of fuzziness in mathematical morphology to practical work such as
image processing is discussed with illustrative problems.
Abstract: Firms have invested heavily in knowledge
management (KM) with the aim to build a knowledge capability and
use it to achieve a competitive advantage. Research has shown,
however, that not all knowledge management projects succeed. Some
studies report that about 84% of knowledge management projects
fail. This paper has integrated studies on the impediments to
knowledge management into a theoretical framework. Based on this
framework, five cases documenting failed KM initiatives were
analysed. The analysis gives a clear picture of why certain KM
projects fail. The high failure rate of KM can be explained by the
gaps that exist between users and management in terms of KM
perceptions and objectives.
Abstract: In this paper, we propose a novel adaptive
spatiotemporal filter that utilizes image sequences in order to remove
noise. The consecutive frames comprise the current, previous, and next
noisy frames. The proposed filter is based upon
weighted averaging of pixel intensities and the noise variance in image
sequences. It utilizes an Appropriate Number of Consecutive Frames
(ANCF), chosen according to the intensities of the noisy pixels across
the frames. The number of consecutive frames is calculated adaptively
for each region in the image, and its value may change from one region
to another depending on the pixel intensities within the region. The
weights are determined by a well-defined mathematical criterion
that is adaptive to the features of the spatiotemporal pixels of the
consecutive frames. It is shown experimentally that the proposed
filter can preserve image structures and edges under motion while
suppressing noise, and thus can be used effectively for filtering image
sequences. In addition, the adaptive weighted averaging (AWA) filter
using ANCF is particularly well suited for filtering sequences that
contain segments with abruptly changing scene content due to, for
example, rapid zooming and changes in the camera view.
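A heavily simplified sketch of such a filter (our assumptions, not the paper's exact AWA/ANCF criterion) averages each pixel over the previous, current, and next frames with weights that decay as a frame's pixel deviates from the current one, so pixels under motion contribute less. The noise-variance parameter `sigma2` is a hypothetical tuning constant:

```python
# Toy spatiotemporal filter over three consecutive frames: the weight
# of each frame's pixel falls off with its squared deviation from the
# current frame's pixel, normalized by an assumed noise variance.

def spatiotemporal_filter(prev, cur, nxt, sigma2=100.0):
    h, w = len(cur), len(cur[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ref = cur[i][j]
            num = den = 0.0
            for frame in (prev, cur, nxt):
                v = frame[i][j]
                wgt = 1.0 / (1.0 + (v - ref) ** 2 / sigma2)  # similarity weight
                num += wgt * v
                den += wgt
            out[i][j] = num / den
    return out
```

On a static pixel the three frames are averaged and the noise is suppressed; on a pixel whose intensity changes abruptly between frames (motion, scene change) the other frames get near-zero weight and the current value is preserved.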
Abstract: In this paper, the mesh-free element-free Galerkin (EFG) method is extended to solve two-dimensional potential flow problems. Two ideal fluid flow problems (flow over a rigid cylinder and flow over a sphere) have been formulated using a variational approach. Penalty and Lagrange multiplier techniques have been utilized for the enforcement of essential boundary conditions. Four-point Gauss quadrature has been used for the integration over the two-dimensional domain (Ω), and a nodal integration scheme has been used to enforce the essential boundary conditions on the edges (Γ). The results obtained by the EFG method are compared with those obtained by the finite element method. The effects of the scaling and penalty parameters on the EFG results are also discussed in detail.
Abstract: Iris localization is a very important step in
biometric identification systems. The identification process is usually
implemented in three stages: iris localization, feature extraction, and
finally pattern matching. The accuracy of iris localization, as the first
step, affects all subsequent stages, which shows the importance of iris
localization in an iris-based biometric system. In this paper, we
take the Daugman iris localization method as a standard,
propose a new method in this field, and then analyze and compare the
results of both on a standard set of iris images. The proposed method
is based on the detection of the circular edge of the iris, improved by
fuzzy circles and surface energy difference contexts. The method is
easy to implement and, compared to other methods, has
rather high accuracy and speed. Test results show that the accuracy of
our proposed method is comparable to that of the Daugman method, while
its computation speed is 10 times faster.
Abstract: The standard approach to image reconstruction is to stabilize the problem by including an edge-preserving roughness penalty in addition to faithfulness to the data. However, this methodology produces noisy object boundaries and creates a staircase effect. The existing attempts to favor the formation of smooth contour lines take the edge field explicitly into account; they either are computationally expensive or produce disappointing results. In this paper, we propose to incorporate the smoothness of the edge field in an implicit way by means of an additional penalty term defined in the wavelet domain. We also derive an efficient half-quadratic algorithm to solve the resulting optimization problem, including the case when the data fidelity term is non-quadratic and the cost function is nonconvex. Numerical experiments show that our technique preserves edge sharpness while smoothing contour lines; it produces visually pleasing reconstructions which are quantitatively better than those obtained without wavelet-domain constraints.