Abstract: Problem solving has traditionally been one of the principal research areas for artificial intelligence. Yet, although artificial intelligence reasoning techniques have been employed in several product support systems, the benefit of integrating product support, knowledge engineering, and problem solving is still unclear. This paper studies the synergy of these areas and proposes a knowledge engineering framework that integrates product support systems and artificial intelligence techniques. The framework comprises four spaces: the data, problem, hypothesis, and solution spaces. The data space incorporates the knowledge needed for structured reasoning to take place, the problem space contains representations of problems, and the hypothesis space employs a multimodal reasoning approach to produce appropriate solutions in the form of virtual documents. The solution space serves as the gateway between the system and the user. The proposed framework enables the development of product support systems in smaller, more manageable steps, while the combination of different reasoning techniques provides a way to overcome the lack of documentation resources.
Abstract: To distinguish small retinal hemorrhages in early
diabetic retinopathy from dust artifacts, we analyzed hue, lightness,
and saturation (HLS) color spaces. The fundus of 5 patients with
diabetic retinopathy was photographed. For the initial experiment, we
placed 4 different colored papers on the ceiling of a darkroom. Using
each color, 10 fragments of house dust particles on a magnifier were
photographed. The colored papers were removed, and 3 different
colored light bulbs were suspended from the ceiling. Ten fragments of
house dust particles on the camera's objective lens were photographed.
We then constructed an experimental device that can photograph
artificial eyes. Five fragments of house dust particles under the ocher
fundus of the artificial eye were photographed. On analyzing the HLS
color space of the dust artifacts, lightness and saturation were found
to be highly sensitive, whereas hue was not.
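The HLS-based discrimination described above can be sketched in code. The following Python fragment (using the standard library's colorsys, whose HLS ordering is hue, lightness, saturation) flags dark, desaturated pixels as candidate dust artifacts; the thresholds are illustrative assumptions, not values from the study.

```python
import colorsys

def classify_pixel(r, g, b, l_thresh=0.35, s_thresh=0.25):
    """Classify an RGB pixel as a possible dust artifact from its HLS values.

    The thresholds are illustrative only; the study's finding is that
    lightness and saturation discriminate well while hue does not.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    is_artifact = l < l_thresh and s < s_thresh  # dark, desaturated spot
    return h, l, s, is_artifact

# A dark grey speck (typical dust artifact) vs. a reddish fundus pixel:
print(classify_pixel(60, 60, 60))
print(classify_pixel(200, 80, 60))
```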
Abstract: Smart Grids employ wireless sensor networks for
their control and monitoring. Sensors are characterized by limitations
in processing power, energy supply, and memory, which require
particular attention in the design of routing and data management
algorithms.
Since most routing algorithms for sensor networks focus on
finding energy-efficient paths to prolong the lifetime of sensor
networks, the power of sensors on efficient paths depletes quickly,
and consequently sensor networks become incapable of monitoring
events in some parts of their target areas. As a result, the
design of routing protocols should consider not only energy-efficient
paths, but also energy-efficient algorithms in general.
In this paper we propose an energy efficient routing protocol for
wireless sensor networks without the support of any location
information system. The reliability and the efficiency of this protocol
have been demonstrated by simulation studies in which we compare
it to legacy protocols. Our simulation results show that the
algorithm scales well with network size and density.
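As a minimal illustration of energy-aware path selection (a sketch under assumed node data, not the paper's actual protocol), the following Python fragment runs Dijkstra's algorithm with an edge weight that penalizes relaying through nodes with low residual energy, so traffic is steered away from depleting the sensors on otherwise efficient paths.

```python
import heapq

def energy_aware_path(graph, energy, src, dst):
    """Shortest path where each hop is penalized by the inverse of the
    next node's residual energy. `graph` maps node -> set of neighbours;
    `energy` maps node -> residual energy. Illustrative sketch only."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + 1.0 + 1.0 / energy[v]  # hop cost + energy penalty
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {"s": {"a", "b"}, "a": {"d"}, "b": {"d"}, "d": set()}
energy = {"s": 10.0, "a": 1.0, "b": 8.0, "d": 10.0}
print(energy_aware_path(graph, energy, "s", "d"))  # routes via the fresher node "b"
```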
Abstract: Symbolic Circuit Analysis (SCA) is a technique used
to generate the symbolic expression of a network. It has become a
well-established technique in circuit analysis and design. The
symbolic expression of a network offers an excellent way to perform
frequency response analysis, sensitivity computation, stability
measurements, performance optimization, and fault diagnosis. Many
approaches have been proposed in the area of SCA offering different
features and capabilities. Numerical Interpolation methods are very
common in this context, especially by using the Fast Fourier
Transform (FFT). The aim of this paper is to present a method for
SCA that relies on the Wavelet Transform (WT) as a mathematical
tool to generate the symbolic expression for large circuits while
minimizing the analysis time by reducing the number of
computations.
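The numerical interpolation idea behind FFT-based SCA can be illustrated on a toy polynomial: sample the network function at the N-th roots of unity, then recover its coefficients with an inverse DFT. The fragment below is a self-contained sketch (a plain O(N^2) inverse DFT rather than a true FFT) applied to an assumed second-order polynomial.

```python
import cmath

def idft(values):
    """Inverse DFT: recover polynomial coefficients from samples taken
    at the N-th roots of unity (the interpolation step that FFT-based
    symbolic analysis relies on)."""
    n = len(values)
    coeffs = []
    for k in range(n):
        s = sum(values[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n))
        coeffs.append(s / n)
    return coeffs

# Toy "network polynomial" p(s) = 2 + 3s + s^2, sampled at 4 roots of unity.
def p(s):
    return 2 + 3 * s + s * s

samples = [p(cmath.exp(2j * cmath.pi * j / 4)) for j in range(4)]
coeffs = [round(c.real, 6) + 0.0 for c in idft(samples)]
print(coeffs)  # [2.0, 3.0, 1.0, 0.0]
```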
Abstract: India's North-Eastern part, comprising seven states, is an underdeveloped, tribal-population-dominated region of India. In spite of the common Mongoloid origin and lifestyle of the majority of the population residing here, sharp differences exist in the status of their socio-economic development. The present paper, through a state-wise analysis, attempts to find out the extent of this disparity, especially on the socio-economic front. It illustrates the situations prevailing in the health, education, economic, and social cohesion sectors. A discussion of the implications of such disparity for social stability finds that the causes of the frequent insurgency activities that have penetrated the region for a long time, creating communal conflicts, can be traced to economic deprivation and disparity. In the last section, the paper makes policy prescriptions and suggests how, by addressing disparity and deprivation, both poverty and the problem of communal conflicts can be controlled.
Abstract: This paper presents an efficient and reliable optimization technique that combines fuel cost economic optimization and emission dispatch using the Sigmoid Decreasing Inertia Weight Particle Swarm Optimization (PSO) algorithm, reducing the cost of fuel and the pollutants resulting from fuel combustion while keeping the output of generators, bus voltages, shunt capacitors, and transformer tap settings within the security boundary. The performance of the proposed algorithm has been demonstrated on the IEEE 30-bus system with six generating units. The results clearly show that the proposed algorithm gives better and faster convergence than the linearly decreasing inertia weight.
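One common form of the sigmoid decreasing inertia weight (a sketch with assumed constants, not necessarily the paper's exact formulation) keeps the weight near its initial value early in the run to favour exploration and lets it fall smoothly toward its final value for exploitation:

```python
import math

def sigmoid_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, sharpness=10.0):
    """Sigmoid decreasing inertia weight: near w_start early in the run,
    dropping smoothly to w_end around the midpoint. The constants here
    are illustrative, not the paper's exact settings."""
    x = sharpness * (t / t_max - 0.5)   # centre the transition mid-run
    return w_end + (w_start - w_end) / (1.0 + math.exp(x))

# Early iterations favour exploration, late ones exploitation:
print(round(sigmoid_inertia_weight(0, 100), 3))
print(round(sigmoid_inertia_weight(100, 100), 3))
```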
Abstract: This work aims to describe the process of developing
services and applications of seamless communication within a
Telecom Italia long-term research project, which takes as central aim
the design of a wearable communication device. In particular, the
objective was to design a wrist phone integrated into people's everyday
lives in full transparency. The methodology used to design the
wristwatch was developed through several subsequent steps, also
involving the Personas Layering Framework. The data collected in
these phases were very useful for designing an improved version
of the first two wrist phone concepts, modifying aspects
related to the four critical points expressed by the users.
Abstract: Texture classification is an important image processing
task with a broad application range. Many different techniques for
texture classification have been explored. Using sparse approximation
as a feature extraction method for texture classification is a relatively
new approach, and Skretting et al. recently presented the Frame
Texture Classification Method (FTCM), showing very good results on
classical texture images. As an extension of that work the FTCM is
here tested on a real-world application: the detection of abnormalities
in mammograms. Some extensions to the original FTCM that are
useful in certain applications are implemented: two different smoothing
techniques and a vector augmentation technique. Both the detection of
microcalcifications (as a primary detection technique and as a last
stage of a detection scheme), and soft tissue lesions in mammograms
are explored. All the results are interesting, and especially the results
using FTCM on regions of interest as the last stage in a detection
scheme for microcalcifications are promising.
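The sparse approximation step underlying FTCM-style feature extraction can be illustrated with a toy matching pursuit: greedily pick the dictionary atom most correlated with the residual and subtract its contribution; the remaining residual energy then serves as a texture feature. The 1-D signal and orthonormal dictionary below are assumptions for illustration, not the method's actual frames.

```python
def matching_pursuit(signal, atoms, iters=2):
    """One-dimensional matching pursuit sketch: at each step pick the
    unit-norm dictionary atom most correlated with the residual and
    subtract its projection. In FTCM-style classification, the residual
    energy left by a class-specific dictionary serves as the feature."""
    residual = list(signal)
    for _ in range(iters):
        # inner products of the residual with each atom
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        c = scores[best]
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return sum(r * r for r in residual)  # residual energy = feature

# Orthonormal toy dictionary; the signal lies in its span, so two
# iterations drive the residual energy to zero.
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(matching_pursuit((3.0, -2.0, 0.0), atoms))
```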
Abstract: Inconel 718, a nickel-based super-alloy, is an
extensively used alloy, accounting for about 50% by weight of the
materials used in an aerospace engine, mainly in the gas turbine
compartment. This is owing to its outstanding strength and
oxidation resistance at elevated temperatures in excess of 550 °C.
Machining is a requisite operation in the aircraft industries for the
manufacture of the components especially for gas turbines. This
paper is concerned with optimization of the surface roughness when
turning Inconel 718 with cermet inserts. Optimization of turning
operation is very useful to reduce cost and time for machining. The
approach is based on Response Surface Method (RSM). In this work,
second-order quadratic models are developed for surface roughness,
considering the cutting speed, feed rate and depth of cut as the cutting
parameters, using central composite design. The developed models
are used to determine the optimum machining parameters. These
optimized machining parameters are validated experimentally, and it
is observed that the response values are in reasonable agreement with
the predicted values.
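To illustrate how a fitted second-order quadratic model can be used to determine optimum machining parameters, the sketch below grid-searches a hypothetical response surface for surface roughness; the coefficients and parameter ranges are invented for illustration, not the paper's fitted model.

```python
def surface_roughness(v, f, d):
    """Hypothetical second-order quadratic response surface for surface
    roughness Ra as a function of cutting speed v (m/min), feed rate f
    (mm/rev) and depth of cut d (mm). Coefficients are made up; a real
    model is fitted from central composite design runs."""
    return (2.0 - 0.01 * v + 8.0 * f + 0.5 * d
            + 0.00005 * v * v + 10.0 * f * f + 0.2 * d * d)

def grid_optimum():
    """Locate the parameter combination minimizing predicted Ra over a
    coarse grid of the machining window."""
    best = None
    for v in range(40, 101, 10):              # cutting speed
        for f in (0.05, 0.10, 0.15, 0.20):    # feed rate
            for d in (0.25, 0.5, 0.75, 1.0):  # depth of cut
                ra = surface_roughness(v, f, d)
                if best is None or ra < best[0]:
                    best = (ra, v, f, d)
    return best

print(grid_optimum())
```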
Abstract: The main objective of this paper is to develop a
graphic technique for the modeling, simulation, and diagnosis of
industrial systems. Its importance is most apparent for a complex
system such as a pressurized water nuclear reactor, which exhibits
various non-linearities and time scales. In this case the analytical
approach is cumbersome and does not give a quick idea of the
evolution of the system. The Bond Graph tool enabled us to transform
the analytical model into a graphic model, and the simulation
software SYMBOLS 2000, specific to Bond Graphs, made it possible to
validate the model and obtain the results given by the technical
specifications. We introduce an analysis of the problems involved in
fault localization and identification in complex industrial
processes, and we propose a fault detection method applied to
diagnosis and to determining the severity of a detected fault. We
show how the new diagnosis approaches can be applied to the control
of complex systems. Industrial systems have become increasingly
complex, and fault diagnosis procedures for physical systems become
very involved as soon as the systems considered are no longer
elementary. Faced with this complexity, we chose to resort to the
Fault Detection and Isolation (FDI) method, analyzing the control
problem in order to design a reliable diagnosis system capable of
handling spatially distributed complex dynamic systems, applied
to a standard pressurized water nuclear reactor.
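The fault detection and severity grading step can be sketched as a simple residual test: compare a measurement against the model prediction (e.g. from a bond-graph simulation) and grade the fault by how far the residual exceeds a threshold. All numbers below are illustrative assumptions, not reactor data.

```python
def detect_fault(measured, predicted, threshold=0.05):
    """Minimal residual-based FDI check: flag a fault when the residual
    between a sensor measurement and the model prediction exceeds a
    threshold, and grade its severity by how far beyond the threshold
    the residual lies. Illustrative only."""
    residual = abs(measured - predicted)
    if residual <= threshold:
        return "no fault", residual
    severity = "minor" if residual <= 3 * threshold else "severe"
    return severity, residual

print(detect_fault(155.2, 155.21))  # healthy reading
print(detect_fault(155.2, 154.9))   # large residual
```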
Abstract: This paper describes an enhanced cookie-based
method for counting the visitors of web sites by using a web log
processing system that aims to cope with the ambitious goal of
creating countrywide statistics about the browsing practices of real
human individuals. The focus is on describing a new, more
efficient way of detecting the human beings behind web users by
placing different identifiers on the client computers. We briefly
introduce our processing system designed to handle the massive
amount of data records continuously gathered from the most important
content providers of Hungary. We conclude by showing statistics over
different time spans comparing the efficiency of multiple visitor
counting methods to the one presented here, along with some
interesting charts about content providers and web usage based on
real data recorded in 2007.
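The core of cookie-based visitor counting can be sketched as follows; the record field names are hypothetical. A persistent cookie identifier is preferred, with the IP address and user agent pair as a fallback when no cookie is available.

```python
def count_visitors(log_records):
    """Count distinct visitors from web log records, preferring a
    persistent cookie identifier and falling back to the IP + user-agent
    pair when no cookie is present. Field names are hypothetical."""
    seen = set()
    for rec in log_records:
        key = rec.get("cookie_id") or (rec["ip"], rec["user_agent"])
        seen.add(key)
    return len(seen)

log = [
    {"cookie_id": "c1", "ip": "1.2.3.4", "user_agent": "A"},
    {"cookie_id": "c1", "ip": "5.6.7.8", "user_agent": "A"},  # same person, new IP
    {"cookie_id": None, "ip": "9.9.9.9", "user_agent": "B"},
]
print(count_visitors(log))  # 2
```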
Abstract: Recent articles have addressed the problem of constructing confidence intervals for the mean of a normal distribution when the parameter space is restricted; see, for example, Wang [Confidence intervals for the mean of a normal distribution with restricted parameter space. Journal of Statistical Computation and Simulation, Vol. 78, No. 9, 2008, 829–841]. In this paper, we derive analytic expressions for the coverage probability and the expected length of the confidence interval for the normal mean when the whole parameter space is bounded. We also construct, for the first time, the confidence interval for the normal variance with a restricted parameter, and its coverage probability and expected length are likewise mathematically derived. As a result, one can use these criteria to assess the confidence interval for the normal mean and variance when the parameter space is restricted, without backup from simulation experiments.
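The restricted-parameter setting can be sketched as follows: take the usual z-interval for a normal mean with known variance and intersect it with the bounded parameter space [a, b]. This is only a minimal illustration of the setting; the paper's contribution is the analytic derivation of the resulting coverage probability and expected length.

```python
import math

def restricted_ci(xbar, sigma, n, lower, upper, z=1.959964):
    """95% confidence interval for a normal mean with known sigma when
    the parameter space is restricted to [lower, upper]: the usual
    z-interval intersected with the bounded parameter space. A minimal
    sketch of the setting, not the paper's derivation."""
    half = z * sigma / math.sqrt(n)
    return max(xbar - half, lower), min(xbar + half, upper)

# Sample mean near the boundary: the interval is truncated at the bound.
print(restricted_ci(xbar=0.9, sigma=1.0, n=25, lower=-1.0, upper=1.0))
```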
Abstract: Recent evidence on the liquidity and valuation of securities in capital markets clearly shows the importance of stock market liquidity and firm valuation. In this paper, the relationship between transparency, liquidity, and valuation is studied using data obtained from 70 companies listed on the Tehran Stock Exchange during 2003-2012. In this study, discretionary earnings management was used as a sign of lack of transparency, and Tobin's Q as the criterion of valuation. The results indicate that there is a significant and inverse relationship between earnings management and liquidity. On the other hand, there is a relationship between liquidity and transparency. The results also indicate a significant relationship between transparency and valuation. Transparency has an indirect effect on firm valuation alone or through the liquidity channel. Although the effect of transparency on the value of a firm was reduced by adding the liquidity variable, the cumulative effect of transparency and liquidity increased.
Abstract: QoS routing aims to find paths between senders and
receivers that satisfy the QoS requirements of the application while
efficiently using the network resources; the underlying routing
algorithm must be able to find low-cost paths that satisfy the given
QoS constraints. The problem of finding least-cost routes subject to
such constraints is known to be NP-hard, and some algorithms have
been proposed to find a near-optimal solution. But these heuristics
either impose relationships among the link metrics to reduce the
complexity of the problem, which may limit their general
applicability, or are too costly in terms of execution time to be
applicable to large networks. In this paper, we analyze two
algorithms, namely Characterized Delay Constrained Routing (CDCR)
and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm
presents an approach to delay-constrained routing that captures the
trade-off between cost minimization and the risk level regarding the
delay constraint. ODCR uses an adaptive path weight function
together with an additional constraint imposed on the path cost to
restrict the search space, and hence finds a near-optimal solution
in much less time.
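On small instances the delay-constrained least-cost routing problem can be solved exactly by enumeration, which is a useful reference point for heuristics such as CDCR and ODCR. The sketch below (toy graph with assumed cost/delay labels, not from the paper) rejects the cheapest path when it violates the delay bound.

```python
def feasible_paths(graph, src, dst, path=None):
    """Enumerate all simple paths; graph maps node -> {neighbour: (cost, delay)}."""
    path = path or [src]
    if src == dst:
        yield path
        return
    for v in graph[src]:
        if v not in path:
            yield from feasible_paths(graph, v, dst, path + [v])

def least_cost_delay_constrained(graph, src, dst, delay_bound):
    """Exhaustive reference solution to delay-constrained least-cost
    routing (NP-hard in general; heuristics like CDCR/ODCR approximate
    this on large networks). Toy instance for illustration."""
    best = None
    for p in feasible_paths(graph, src, dst):
        cost = sum(graph[u][v][0] for u, v in zip(p, p[1:]))
        delay = sum(graph[u][v][1] for u, v in zip(p, p[1:]))
        if delay <= delay_bound and (best is None or cost < best[0]):
            best = (cost, p)
    return best

g = {
    "s": {"a": (1, 5), "b": (4, 1)},
    "a": {"t": (1, 5)},
    "b": {"t": (4, 1)},
    "t": {},
}
print(least_cost_delay_constrained(g, "s", "t", 4))  # cheap path too slow
```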
Abstract: In this paper we introduce a new class of mg-continuous mappings and study some of their basic properties. We obtain some characterizations of such functions. Moreover, we define a sub-minimal structure and further study certain properties of mg-closed sets.
Abstract: Ground-level tropospheric ozone is one of the air
pollutants of most concern. It is mainly produced by photochemical
processes involving nitrogen oxides and volatile organic compounds
in the lower parts of the atmosphere. Ozone levels become
particularly high in regions close to high ozone precursor emissions
and during summer, when stagnant meteorological conditions with
high insolation and high temperatures are common.
In this work, some results of a study about urban ozone
distribution patterns in the city of Badajoz, which is the largest and
most industrialized city in Extremadura region (southwest Spain) are
shown. Fourteen sampling campaigns, at least one per month, were
carried out to measure ambient air ozone concentrations, during
periods that were selected according to favourable conditions to
ozone production, using an automatic portable analyzer.
Later, to evaluate the ozone distribution across the city, the measured
ozone data were analyzed using geostatistical techniques. Thus, first,
during the exploratory analysis of data, it was revealed that they were
distributed normally, which is a desirable property for the subsequent
stages of the geostatistical study. Secondly, during the structural
analysis of data, theoretical spherical models provided the best fit for
all monthly experimental variograms. The parameters of these
variograms (sill, range, and nugget) revealed that the maximum
distance of spatial dependence is between 302 and 790 m and that the
variable, air ozone concentration, is not evenly distributed over
short distances. Finally, predictive ozone maps were derived for all points
of the experimental study area, by use of geostatistical algorithms
(kriging). High prediction accuracy was obtained in all cases as
cross-validation showed. Useful information for hazard assessment
was also provided when probability maps, based on kriging
interpolation and kriging standard deviation, were produced.
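The theoretical spherical model fitted to the experimental variograms has a standard closed form, sketched below; the nugget, partial sill, and range values in the example are illustrative, with the range chosen inside the reported 302-790 m band.

```python
def spherical_variogram(h, nugget, sill, a):
    """Theoretical spherical semivariogram: rises from the nugget to
    nugget + sill (the partial sill) at the range a, constant beyond.
    Parameter values in the example below are illustrative."""
    if h <= 0:
        return 0.0
    if h >= a:
        return nugget + sill
    r = h / a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

# With a range of 500 m (inside the reported 302-790 m band):
print(spherical_variogram(250, 0.1, 1.0, 500))
print(spherical_variogram(500, 0.1, 1.0, 500))  # = nugget + sill
```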
Abstract: In response to global warming, city planners are pursuing actions to reduce carbon emissions. One approach is to promote the usage of public transportation systems as part of transit-oriented development. For example, rapid transit systems have opened in Taipei City and Kaohsiung City. However, as of November 2008 the average daily patronage of the Kaohsiung MRT system counted only 113,774 passengers, much less than was expected. This raises two crucial questions: how does public transport compete with private transport? And, more importantly, what factors would enhance the use of public transport? To answer these questions, our study first applied regression to analyze the factors attracting people to use public transport in cities around the world. Our study shows that the number of MRT stations, city population, cost of living, transit fare, density, gasoline price, and the scooter being a major mode of transport are the major factors. Subsequently, our study identified successful and unsuccessful cities with regard to public transport usage based on a diagnosis of the regression residuals. Finally, by comparing the transportation strategies adopted by the successful cities, we conclude that Kaohsiung City could apply strategies such as increasing parking fees, reducing parking spaces in the downtown area, and reducing transfer time by providing more bus services and public bikes to promote the usage of public transport.
Abstract: In this paper we canvass three case studies of unique
research partnerships between universities and schools in the wider
community. In doing so, we consider those areas of indeterminate
zones of professional practice explored by academics in their
research activities within the wider community. We discuss three
cases: an artist-in-residence program designed to engage primary
school children with new understandings about local Indigenous
Australian issues in their pedagogical and physical landscapes; an
assessment of pedagogical concerns in relation to the use of physical
space in classrooms; and the pedagogical underpinnings of a
costumed museum school program. In doing so, we engage issues of
research as playing an integral part in the development,
implementation and maintenance of academic engagements with
wider community issues.
Abstract: Next-generation wireless/mobile networks will be IP-based cellular networks integrating the Internet with cellular networks. In this paper, we propose a new architecture for a high-speed transport system and a mobility management protocol for mobile Internet users in a transport system. Existing mobility management protocols (MIPv6, HMIPv6) do not consider real-world fast-moving wireless hosts (e.g. passengers on a train). For this reason, we define a virtual organization (VO) and propose a VO architecture for the transport system. We also classify mobility as VO mobility (intra-VO) and macro mobility (inter-VO). Handoffs within a VO are locally managed and transparent to the CH, while macro mobility is managed with Mobile IPv6. Furthermore, from the features of the transport system, such as its fixed route and steady speed, we deduce the movement route and the handoff disruption time of each handoff. To reduce packet loss during the handoff disruption time, we propose a pre-registration scheme. Moreover, the proposed protocol can eliminate unnecessary binding updates resulting from sequential movement at high speed. The performance evaluations demonstrate that our proposed protocol performs well in a transport system environment. It can be applied to wireless Internet usage on trains, subways, and high-speed trains.
Abstract: When trying to enumerate all BIBDs for given parameters,
the natural solution space is huge and grows extremely quickly with the number of points of the design. Therefore,
constructive enumerations are often carried out by assuming additional
constraints on the design's structure, automorphisms being the most commonly used. It remains a hard task to construct designs with a trivial
automorphism group, i.e. those with no additional symmetry, although it is believed that most BIBDs belong to that case. In
this paper, a great many new designs with parameters 2-(13, 5, 5), 2-(16, 6, 5), and 2-(21, 6, 4) are constructed by assuming the action of an
automorphism of order 3. Moreover, it was possible to construct millions of such designs with no non-trivial automorphisms.
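The defining property of a 2-(v, k, λ) design, that every pair of points occurs in exactly λ blocks, is easy to verify programmatically; the sketch below checks it for the Fano plane, the unique 2-(7, 3, 1) design (a smaller example than the parameter sets constructed in the paper).

```python
from itertools import combinations

def is_2_design(points, blocks, lam):
    """Check the defining 2-(v, k, lam) property: all blocks share one
    size k and every pair of points occurs in exactly lam blocks."""
    k = len(blocks[0])
    if any(len(b) != k for b in blocks):
        return False
    for pair in combinations(points, 2):
        count = sum(1 for b in blocks if set(pair) <= set(b))
        if count != lam:
            return False
    return True

# The Fano plane, the unique 2-(7, 3, 1) design:
fano = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
        (2, 5, 7), (3, 4, 7), (3, 5, 6)]
print(is_2_design(range(1, 8), fano, 1))  # True
```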