Abstract: In this paper, a fuzzy algorithm and a fuzzy multicriteria
decision framework are developed and applied to the practical
problem of optimizing biofuels policy making. The methodological
framework shows how to incorporate fuzzy set theory into the
process of selecting a sustainable biofuels policy from several policy
options. Fuzzy set theory is used here as a tool to deal with the
uncertainties of the decision environment, the vagueness and
ambiguity of policy objectives, the subjectivity of human assessments,
and the imprecise and incomplete information available about the
evaluated policy instruments.
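As background for how fuzzy scores can be attached to competing policy options, the toy sketch below aggregates triangular fuzzy criterion scores with a weighted sum and defuzzifies by centroid. The options, criteria, weights and numbers are invented for illustration; they are not the paper's actual framework or data.

```python
# Hypothetical sketch: fuzzy weighted evaluation of policy options.
# Scores are triangular fuzzy numbers (low, mode, high); all values
# below are illustrative assumptions, not from the paper.

def tfn_scale(a, w):
    """Multiply a triangular fuzzy number a = (l, m, u) by a crisp weight w."""
    return tuple(w * x for x in a)

def tfn_add(a, b):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def defuzzify(a):
    """Centroid of a triangular fuzzy number."""
    return sum(a) / 3.0

# Fuzzy scores of two hypothetical policy options on two criteria.
options = {
    "tax_incentive": [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)],
    "blend_mandate": [(0.4, 0.6, 0.8), (0.6, 0.8, 1.0)],
}
weights = [0.6, 0.4]  # criterion weights, summing to 1

scores = {}
for name, tfns in options.items():
    total = (0.0, 0.0, 0.0)
    for tfn, w in zip(tfns, weights):
        total = tfn_add(total, tfn_scale(tfn, w))
    scores[name] = defuzzify(total)

best = max(scores, key=scores.get)  # option with the highest crisp score
```

The centroid defuzzification is one common choice; ranking methods based on alpha-cuts or fuzzy preference relations would fit the same skeleton.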
Abstract: Telemedicine has been brought to life by contemporary changes in our world and encompasses the entire range of services at the crossroads of traditional healthcare and information technology. It is believed that eHealth can help solve the critical issues of rising costs, care for an ageing and housebound population, and staff shortages. It is a feasible tool for providing routine as well as specialized health services, as it has the potential to improve both access to care and its standard. eHealth is no longer an optional choice. It has already come a long way, but it still remains a tremendous challenge for the future, requiring cooperation and coordination at all possible levels. The strategic objectives of this paper are: 1. To begin with an attempt to clarify the mass of terms used nowadays; 2. To answer the question "Who needs eHealth?"; 3. To focus on the necessity of bridging telemedicine and medical (health) informatics, as well as on the dual relationship between them; and 4. To underline the need for networking in understanding, developing and implementing eHealth.
Abstract: The objective of this paper is to develop a forecast
model for HW flows. The research methodology comprised six
modules: historical data, assumptions, choice of indicators, data
processing, data analysis with STATGRAPHICS, and forecast
models. The proposed methodology was validated in a case study
for Latvia. Hypotheses on the changes in HW for the period
2010-2020 have been developed and described mathematically at
confidence levels of 95.0% and 50.0%. A sensitivity analysis of the
analyzed scenarios was performed. The results show that GDP
growth affects the total amount of HW in the country. At the 50.0%
confidence level, the total amount of HW for 2010-2020 is projected
to lie within a corridor from −27.7% in the optimistic scenario up to
+87.8% in the pessimistic scenario. The optimistic scenario proved
to be the least sensitive to changes in GDP growth.
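As a minimal illustration of the kind of trend model behind such corridor projections, the sketch below fits a least-squares line to an invented historical series and computes a prediction interval. The data, years and t-value are assumptions for the example, not the paper's Latvian dataset or its STATGRAPHICS models.

```python
# Illustrative linear-trend forecast with a prediction interval.
# The waste-amount series below is made up for demonstration only.
import math

years = list(range(2004, 2010))
hw = [10.0, 10.8, 11.9, 12.5, 13.6, 14.2]  # hypothetical HW amounts

n = len(years)
mx = sum(years) / n
my = sum(hw) / n
sxx = sum((x - mx) ** 2 for x in years)
b = sum((x - mx) * (y - my) for x, y in zip(years, hw)) / sxx  # slope
a = my - b * mx                                                # intercept

resid = [y - (a + b * x) for x, y in zip(years, hw)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std error

def forecast(x, t=2.776):  # t-value for 95% with n - 2 = 4 d.o.f.
    """Point forecast and prediction interval at year x."""
    yhat = a + b * x
    half = t * s * math.sqrt(1 + 1 / n + (x - mx) ** 2 / sxx)
    return yhat - half, yhat, yhat + half

lo, point, hi = forecast(2015)
```

Lowering the confidence level (e.g., to 50%) simply substitutes a smaller t-value, narrowing the corridor in the same way as in the abstract.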
Abstract: In this study, a Loop Back Algorithm for connected
component labeling for detecting objects in a digital image is
presented. The approach uses a loop-back connected component
labeling algorithm that helps the system distinguish the detected
objects according to their labels. Unlike the whole-window
scanning technique, this technique reduces the search time for
locating an object by focusing on suspected objects based on
certain defined features. In this study, the approach was also
implemented in a face detection system. Face detection has
become an interesting research area, since many devices and
systems require detecting faces for various purposes. The input
can be a still image or video; therefore, the sub-processes of such a
system have to be simple, efficient and accurate to give good results.
Abstract: The aim of this paper is to rank the impact of Object
Oriented (OO) metrics in fault prediction modeling using Artificial
Neural Networks (ANNs). Past studies on the empirical validation of
object oriented metrics as fault predictors using ANNs have focused
on the predictive quality of neural networks versus standard
statistical techniques. In this empirical study we turn our attention to
the capability of ANNs to rank the impact of these explanatory
metrics on fault proneness. In the ANN data analysis approach, there is
no established method of ranking the impact of individual metrics.
Five ANN-based techniques that rank object oriented metrics in
predicting the fault proneness of classes are studied: i) the overall
connection weights method, ii) Garson's method, iii) the partial
derivatives method, iv) the input perturbation method, and v) the
classical stepwise method. We develop and evaluate different
prediction models based on the rankings of the metrics by the
individual techniques. The models based on the overall connection
weights and partial derivatives methods have been found to be the
most accurate.
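Two of these ranking techniques can be sketched for a single-hidden-layer network as follows. The weight matrices are made-up illustrations, not trained values from the study.

```python
# Sketches of Garson's method and the overall connection weights
# method for ranking inputs of a one-hidden-layer network.

def garson(w_in, w_out):
    """Garson's relative importance. w_in[i][j]: input i -> hidden j;
    w_out[j]: hidden j -> output. Returns shares summing to 1."""
    n_in, n_hid = len(w_in), len(w_in[0])
    contrib = [0.0] * n_in
    for j in range(n_hid):
        col_sum = sum(abs(w_in[k][j]) for k in range(n_in))
        for i in range(n_in):
            contrib[i] += (abs(w_in[i][j]) / col_sum) * abs(w_out[j])
    total = sum(contrib)
    return [c / total for c in contrib]

def connection_weights(w_in, w_out):
    """Overall connection weights: signed input-hidden-output products."""
    n_in, n_hid = len(w_in), len(w_in[0])
    return [sum(w_in[i][j] * w_out[j] for j in range(n_hid))
            for i in range(n_in)]

# Invented weights for 3 metrics and 2 hidden units.
w_in = [[0.8, -0.2], [0.1, 0.9], [-0.4, 0.3]]
w_out = [0.7, -0.5]

importance = garson(w_in, w_out)
cw = connection_weights(w_in, w_out)
ranking = sorted(range(3), key=lambda i: -abs(cw[i]))  # most impactful first
```

Note that Garson's method discards weight signs, while the connection weights method keeps them, so the two can disagree on ranking, as the abstract's comparison of techniques implies.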
Abstract: This paper examines the impact of OO design on software
quality characteristics such as defect density and rework by means of
experimental validation. Encapsulation, inheritance, polymorphism,
reusability, data hiding and message passing are the major attributes
of an object oriented system. These attributes can act as indicators
for evaluating the quality of an object oriented system.
Metrics are the well-known quantifiable approach to expressing any
attribute. Hence, in this paper we formulate a framework of
metrics representing the attributes of an object oriented system.
Empirical data is collected from three different projects based on
the object oriented paradigm to calculate the metrics.
Abstract: Computer aided design relies on the support of
parametric software in the design of machine components as well as
of any other parts of interest. The complexity of the element under
study sometimes poses certain difficulties for computer design, or
may even generate mistakes in the final body conception. Reverse
engineering techniques are based on the transformation of images of
an already conceived body into a matrix of points which can be
visualized by the design software. The literature describes several
techniques for obtaining the dimensional fields of machine
components, such as contact instruments (MMC), calipers, and
optical methods such as laser scanning, holograms and moiré
methods. The objective of this research work was to analyze the
moiré technique as an instrument of reverse engineering, applied to
bodies of non-complex geometry such as simple solid figures,
creating matrices of points. These matrices were then imported into
the parametric software SolidWorks to generate the virtual object.
The volume obtained by mechanical means, i.e., by caliper, the
volume obtained through the moiré method and the volume generated
by the SolidWorks software were compared and found to be in close
agreement. This research work suggests the application of
phase-shifting moiré methods as an instrument of reverse
engineering, also serving to support the design of farm machinery
elements.
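The phase-shifting computation that underlies phase-shifting moiré can be illustrated with the standard four-step formula, which recovers the wrapped phase at each pixel from four intensity images shifted by π/2. The intensity model and numbers below are synthetic, and the conversion from phase to height (which depends on the optical setup) is omitted.

```python
# Four-step phase-shifting: recover wrapped phase from four shifted
# intensity samples. Values below are a synthetic single-pixel example.
import math

def phase_from_steps(i1, i2, i3, i4):
    """Wrapped phase from four intensities I_k = A + B*cos(phi + k*pi/2)."""
    return math.atan2(i4 - i2, i1 - i3)

# Simulate one pixel with background A = 2, modulation B = 1, phi = 0.5.
phi_true = 0.5
samples = [2 + math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = phase_from_steps(*samples)  # recovers phi_true
```

Because I4 − I2 = 2B·sin φ and I1 − I3 = 2B·cos φ, the background A and modulation B cancel, which is why the four-step scheme is robust to uneven illumination; unwrapping the phase and scaling it by the fringe sensitivity then yields the point matrix of heights.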
Abstract: The seismic rehabilitation designs of two reinforced
concrete school buildings, representative of a wide stock of similar
edifices designed under earlier editions of the Italian Technical
Standards, are presented in this paper. The common retrofit solution
elaborated for the two buildings consists of the incorporation of a
dissipative bracing system including pressurized fluid viscous
spring-dampers as passive protective devices.
layouts and locations selected for the constituting elements of the
system; the architectural renovation projects developed to properly
incorporate the structural interventions and improve the appearance
of the buildings; highlights of the installation works already
completed in one of the two structures; and a synthesis of the
performance assessment analyses carried out in original and
rehabilitated conditions, are illustrated. The results of the analyses
show a remarkable enhancement of the seismic response capacities of
both structures. This allows reaching the high performance objectives
postulated in the retrofit designs with much lower costs and
architectural intrusion as compared to traditional rehabilitation
interventions designed for the same objectives.
Abstract: The flat double-layer grid belongs to the category of space structures formed from two flat layers connected by diagonal members. Increased stiffness and better seismic resistance relative to other space structures are advantages of flat double-layer space structures. The objective of this study is the assessment and calculation of the behavior factor of flat double-layer space structures. Given that these structures are widely used, yet the behavior factor used to design them against seismic forces has not been determined precisely, the necessity of this study is obvious. This study is theoretical. We used structures with span lengths of 16 m and 20 m. All connections are pinned. ANSYS software is used for the non-linear analysis of the structures.
Abstract: The electromagnetic spectrum is a natural resource,
and hence well-organized usage of this limited resource is a
necessity for better communication. The present static frequency
allocation schemes cannot accommodate the demands of the rapidly
increasing number of higher data rate services. Therefore, dynamic
usage of the spectrum must be distinguished from static usage to
increase the availability of the frequency spectrum. Cognitive radio is
not a single piece of apparatus but a technology that can incorporate
components spread across a network. It offers great promise for
improving system efficiency, spectrum utilization and application
effectiveness, and for reducing interference and the complexity of
usage for users. A cognitive radio is aware of its environment,
internal state and location, and autonomously adjusts its operation
to achieve designed objectives. It first senses its spectral environment
over a wide frequency band, and then adapts its parameters to
maximize spectrum efficiency with high performance. This paper
focuses on the analysis of the Bit Error Rate (BER) in cognitive radio
using the Particle Swarm Optimization algorithm. The BER is
analyzed and interpreted both theoretically and practically, in terms
of its advantages and drawbacks and of how it affects the efficiency
and performance of the communication system.
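A bare-bones Particle Swarm Optimization loop of the kind such an analysis relies on looks like the following. The quadratic objective is a stand-in for the paper's BER expression, and all parameter values (swarm size, inertia, acceleration coefficients) are illustrative.

```python
# Minimal PSO sketch minimizing a stand-in objective function.
import random
random.seed(1)

def objective(x):
    """Placeholder cost; a real run would evaluate a BER model here."""
    return (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2

dim, n, iters = 2, 20, 100
w_inertia, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights

pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]                  # personal bests
gbest = min(pbest, key=objective)[:]         # global best

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w_inertia * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
```

Constraints such as transmit-power limits would typically be handled by clamping positions or penalizing the objective, both of which slot into this loop unchanged.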
Abstract: The objective of this project was to produce computer
assisted instruction (CAI) for welding and brazing in order to
determine the efficiency of the instruction package and the learning
achievement of students studying through it. The package was
examined with a target group of 30 students in the second year of
the 5-year academic program of the Department of Production
Technology Education, Faculty of Industrial Education and
Technology, King Mongkut's University of Technology Thonburi.
The results indicated that, as evaluated by media and subject matter
experts, the quality of the computer assisted instruction for welding
and brazing met the good criterion. The mean scores evaluated
before, during and after the study were 34.58, 83.33 and 83.43,
respectively. The efficiency of the lesson was 83.33/83.43, which
was higher than the expected value of 80/80. The learning
achievement of students who used the computer assisted instruction
for welding and brazing as a medium was significantly higher at the
95% confidence level (35.36 > 1.669, the critical value). It can be
concluded that the computer assisted instruction for welding and
brazing is an efficient medium for studying and teaching.
Abstract: A wireless sensor network (WSN) is an emerging information-procuring and -processing technology that consists of autonomous nodes with versatile devices underpinned by applications. Nodes are equipped with different capabilities, such as sensing, computing, actuation and wireless communication, based on application requirements. WSN applications range from military implementation on the battlefield, environmental monitoring and the health sector to emergency response and surveillance. The nodes are deployed independently to cooperatively monitor physical and environmental conditions. The architecture of a WSN differs based on the application requirements, with a focus on low cost, flexibility, fault tolerance, ease of deployment and energy conservation. In this paper we present the characteristics, architecture design objectives and architecture of WSNs.
Abstract: Cognitive dissonance can be conceived of both as a concept related to the tendency to avoid internal contradictions in certain situations, and as a higher-order theory about information processing in the human mind. In recent decades, the latter sense has been largely overshadowed by the former, as nearly all experiments on the matter discuss cognitive dissonance as an output of motivational contradictions. In that sense, the question remains: is cognitive dissonance a process intrinsically associated with the way the mind processes information, or is it caused by such specific contradictions? Objective: To evaluate the effects of cognitive dissonance in the absence of rewards or any mechanisms to manipulate motivation. Method: To address this question, we introduce a new task, the hypothetical social arrays paradigm, which was applied to 50 undergraduate students. Results: Our findings support the perspective that the human mind shows a tendency to avoid internal dissonance even when no rewards or punishments are involved. Moreover, our findings also suggest that this principle operates outside the conscious level.
Abstract: In this paper we report a study aimed at determining
the effects of animation on usability and appeal of educational
software user interfaces. Specifically, the study compares 3
interfaces developed for the Mathsigner™ program: a static
interface, an interface with highlighting/sound feedback, and an
interface that incorporates five Disney animation principles. The
main objectives of the comparative study were to: (1) determine
which interface is the most effective for the target users of
Mathsigner™ (i.e., children ages 5-11), and (2) identify any Gender
and Age differences in using the three interfaces. To accomplish
these goals we have designed an experiment consisting of a
cognitive walkthrough and a survey with rating questions. Sixteen
children ages 7-11 participated in the study, ten males and six
females. Results showed no significant interface effect on user task
performance (e.g., task completion time and number of errors);
however, interface differences were seen in rating of appeal, with
the animated interface rated more 'likeable' than the other two.
Task performance and rating of appeal were not affected
significantly by Gender or Age of the subjects.
Abstract: We study the problem of reconstructing three-dimensional binary matrices whose interiors are accessible only through a few projections. This question is prominently motivated by the demand in materials science for tools to reconstruct crystalline structures from images obtained by high-resolution transmission electron microscopy. Various approaches have been suggested to reconstruct a 3D object (crystalline structure) by reconstructing slices of the 3D object. To handle the ill-posedness of the problem, a priori information such as convexity, connectivity and periodicity is used to limit the number of possible solutions. Formally, a 3D object (crystalline structure) with a priori information is modeled as a class of 3D binary matrices satisfying that information. We consider 3D binary matrices with periodicity constraints, and we propose a polynomial-time algorithm to reconstruct such matrices from two orthogonal projections.
Abstract: Many systems in the natural world exhibit chaos or non-linear behavior, whose complexity is so great that they appear to be random. Identifying chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals corrupted by random noise. In this work, a method for estimating Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. The objective of the work, the proposed methodology and the validation results are discussed in detail.
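In the noise-free case, the quantity being estimated can be checked against a known map: for the logistic map at r = 4 the largest Lyapunov exponent is ln 2, recovered below by averaging log|f'(x)| along an orbit. This is a sanity-check sketch for the validation idea, not the unscented-transformation method proposed in the paper.

```python
# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x)
# at r = 4, estimated by averaging log|f'(x)| along an orbit.
import math

r, x = 4.0, 0.3
for _ in range(100):          # discard the transient
    x = r * x * (1 - x)

total, n = 0.0, 100000
for _ in range(n):
    total += math.log(abs(r * (1 - 2 * x)))  # log of the local derivative
    x = r * x * (1 - x)

lyap = total / n  # should approach ln 2 ~ 0.693
```

Noisy data break this direct averaging, since the measured derivative no longer reflects the deterministic dynamics; that is the gap the proposed unscented-transformation estimator addresses.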
Abstract: Evaporator is an important and widely used heat
exchanger in air conditioning and refrigeration industries. Different
methods have been used by investigators to increase the heat transfer
rates in evaporators. One of the passive techniques to enhance heat
transfer coefficient is the application of microfin tubes. The
mechanism of heat transfer augmentation in microfin tubes is
dependent on the flow regime of the two-phase flow. Therefore, many
investigations of the flow patterns for in-tube evaporation have been
reported in the literature. The gravitational force, the surface tension
and the vapor-liquid interfacial shear stress are known as the three
dominant factors controlling the vapor and liquid distribution inside
the tube. A review of the existing literature reveals that previous
investigations were concerned with the two-phase flow pattern for
flow boiling in horizontal tubes [12], [9]. Therefore, the objective of
the present investigation is to obtain information about the two-phase
flow patterns for evaporation of R-134a inside horizontal smooth and
microfin tubes. An experimental investigation of heat transfer during
flow boiling of R-134a inside horizontal microfin and smooth tubes
has also been carried out. The heat transfer coefficients for annular
flow in the smooth tube are shown to agree well with Gungor and
Winterton's correlation [4]. All the flow patterns observed in the
tests can be divided into three dominant regimes, i.e., stratified-wavy
flow, wavy-annular flow and annular flow. The experimental data are
plotted in two kinds of flow maps: a vapor Weber number versus
liquid Weber number map, and a mass flux versus vapor quality map.
The transitions from wavy-annular flow to annular or stratified-wavy
flow are identified in the flow maps.
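The Weber-number flow-map coordinates can be computed from the superficial Weber number of each phase. The definition below is the standard superficial form; the R-134a property values and operating conditions are rough, illustrative figures, not the experimental data of this study.

```python
# Superficial Weber numbers for the vapor and liquid phases, as used
# for flow-map coordinates. Property values are rough illustrations.
def weber(G, x, D, rho, sigma):
    """We = (G*x)^2 * D / (rho * sigma) for the phase carrying flux G*x."""
    return (G * x) ** 2 * D / (rho * sigma)

G = 200.0        # total mass flux, kg/(m^2 s)
quality = 0.4    # vapor quality
D = 0.009        # tube inner diameter, m
sigma = 0.008    # surface tension, N/m (approximate)
rho_v, rho_l = 27.0, 1200.0  # vapor/liquid densities, kg/m^3 (approximate)

we_vapor = weber(G, quality, D, rho_v, sigma)
we_liquid = weber(G, 1 - quality, D, rho_l, sigma)
```

Plotting each test condition as the point (we_liquid, we_vapor) and marking the observed regime yields the first kind of flow map described above; regime transition lines are then drawn between the clusters.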
Abstract: Since supply chains highly impact the financial
performance of companies, it is important to optimize and analyze
their Key Performance Indicators (KPIs). The synergistic combination
of Particle Swarm Optimization (PSO) and Monte Carlo simulation is
applied to determine the optimal reorder point of warehouses in
supply chains. The goal of the optimization is the minimization of the
objective function calculated as the linear combination of holding and
order costs. The required values of service levels of the warehouses
represent non-linear constraints in the PSO. The results illustrate that
the developed stochastic simulator and optimization tool is flexible
enough to handle complex situations.
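One evaluation of a candidate reorder point inside such an optimization might look like the following sketch, where the point is scored by simulated holding and ordering costs plus an estimated service level. The demand model, cost coefficients and instantaneous-replenishment rule are all invented simplifications; in the paper this evaluation sits inside the PSO loop with service levels as constraints.

```python
# Monte Carlo scoring of one candidate reorder point for a single
# warehouse. All parameters are illustrative assumptions.
import random
random.seed(0)

def simulate(reorder_point, days=365, runs=200):
    holding, ordering, served, demanded = 0.0, 0, 0, 0
    for _ in range(runs):
        stock, order_q = reorder_point + 50, 100
        for _ in range(days):
            d = random.randint(0, 20)       # stochastic daily demand
            demanded += d
            sold = min(stock, d)
            served += sold
            stock -= sold
            if stock <= reorder_point:
                stock += order_q            # instantaneous replenishment
                ordering += 1               # (simplifying: no lead time)
            holding += stock
    cost = 0.01 * holding / runs + 5.0 * ordering / runs
    service = served / demanded             # fraction of demand met
    return cost, service

cost, service = simulate(reorder_point=30)
```

A PSO wrapper would treat `cost` as the objective and reject (or penalize) particles whose `service` falls below the required level, which is how the non-linear service-level constraints enter the search.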
Abstract: Quality costs are the costs associated with preventing,
finding, and correcting defective work. Since the main language of
corporate management is money, quality-related costs act as means of
communication between the staff of quality engineering departments
and the company managers. The objective of quality engineering is to
minimize the total quality cost across the life of the product. Quality
costs provide a benchmark against which improvement can be
measured over time. It provides a rupee-based report on quality
improvement efforts. It is an effective tool to identify, prioritize and
select quality improvement projects. After reviewing through the
literature it was noticed that a simplified methodology for data
collection of quality cost in a manufacturing industry was required.
The quantified standard methodology is proposed for collecting data
of various elements of quality cost categories for manufacturing
industry. Also, in the light of the research carried out so far, it is felt
necessary to standardise the cost elements in each of the prevention,
appraisal, internal failure and external failure cost categories. Here an
attempt is made to standardise the various cost elements applicable to
the manufacturing industry, and data is collected using the proposed
quantified methodology. This paper discusses a case study carried
out in the luggage manufacturing industry.
Abstract: Electronic Systems are the core of everyday lives.
They form an integral part in financial networks, mass transit,
telephone systems, power plants and personal computers. Electronic
systems are increasingly based on complex VLSI (Very Large Scale
Integration) circuits. Electronic design automation is concerned
with the design and production of VLSI systems. The next important
step in creating a VLSI circuit is physical design. The input to
physical design is a logical representation of the system under
design; the output is the layout of a physical package that optimally
or near-optimally realizes that logical representation. Physical design
problems are combinatorial in nature and of large problem size.
Darwin observed that, as variations are introduced into a population
with each new generation, the less-fit individuals tend to go extinct
in the competition for basic necessities. This survival-of-the-fittest
principle leads to the evolution of species. The objective of a Genetic
Algorithm (GA) is to find an optimal solution to a problem. Since
GAs are heuristic procedures that can function as optimizers, they
are not guaranteed to find the optimum, but they are able to find
acceptable solutions for a wide range of problems. This survey paper
presents a study of efficient algorithms for VLSI physical design and
observes the common traits of the superior contributions.
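To make the GA loop concrete, here is a toy elitist GA for a one-dimensional placement instance, with total wirelength as the fitness. The netlist, operators and parameters are illustrative and far simpler than the physical-design GAs the survey covers.

```python
# Toy elitist GA: place 4 cells in a row to minimize total wirelength.
import random
random.seed(2)

nets = [(0, 1), (1, 2), (2, 3), (0, 3)]  # made-up two-pin netlist

def wirelength(perm):
    """Total wirelength of a placement given as a permutation of cells."""
    slot = {cell: i for i, cell in enumerate(perm)}
    return sum(abs(slot[a] - slot[b]) for a, b in nets)

def crossover(p1, p2):
    """Order crossover: keep a head of p1, fill the rest in p2's order."""
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [c for c in p2 if c not in head]

pop = [random.sample(range(4), 4) for _ in range(10)]
for _ in range(50):
    pop.sort(key=wirelength)
    survivors = pop[:5]                      # selection: keep fittest half
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(5)]
    for child in children:                   # mutation: swap two positions
        if random.random() < 0.3:
            i, j = random.sample(range(4), 2)
            child[i], child[j] = child[j], child[i]
    pop = survivors + children

best = min(pop, key=wirelength)
```

Real placement GAs differ mainly in scale and encoding (2D slicing trees, multi-pin nets, timing terms in the fitness), but the select-crossover-mutate skeleton above is the common trait the surveyed algorithms share.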