Abstract: To comply with international human rights legislation concerning freedom of movement, transport systems are required to be made accessible so that all citizens, regardless of their physical condition, have equal opportunities to use them. In Hungary, there is apparently a considerable shortfall in the provision of accessible public transport. This study aims to give an overview of the current Hungarian situation and to reveal the reasons for this deficiency. The results show that, despite a relatively favourable legal background concerning accessibility needs and the rights of persons with disabilities, there is a serious delay in putting it all into practice in the field of public transport. The main reasons are the lack of financial resources and, related to this, the absence of mandatory regulations. In addition, ownership rights related to public transport are varied, which also limits the possibilities for improvement. Consequently, an accurate and detailed regulatory procedure is needed above all to change the present unfavourable situation and to create the conditions for a rapid implementation, which is already overdue.
Abstract: The convergence of heterogeneous wireless access technologies characterizes 4G wireless networks. In such converged systems, seamless and efficient handoff between different access technologies (vertical handoff) is essential and remains a challenging problem. The co-existence of access technologies with largely different characteristics creates the decision problem of determining the "best" available network at the "best" time, so as to reduce unnecessary handoffs. This paper proposes a dynamic decision model that decides the "best" network at the "best" moment to hand off. The proposed model makes the right vertical handoff decisions by determining the "best" network among those available based on dynamic factors, such as the Received Signal Strength (RSS) of the network and the velocity of the mobile station, together with static factors such as usage expense, link capacity (offered bandwidth) and power consumption. The model not only meets individual user needs but also improves overall system performance by reducing unnecessary handoffs.
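As an illustration only (not the paper's actual model), the decision logic described above can be sketched as a weighted score over the static factors, gated by the dynamic factors; all thresholds, weights and field names below are assumptions:

```python
# Hypothetical sketch of a vertical handoff decision combining dynamic
# factors (RSS, mobile-station velocity) with static factors (usage
# expense, offered bandwidth, power consumption). Constants are
# illustrative assumptions, not values from the paper.
RSS_THRESHOLD_DBM = -85.0   # candidate unusable below this signal strength
MAX_VELOCITY_MPS = 15.0     # too fast for small cells (e.g. WLAN hotspots)

def static_score(net, w):
    # higher is better: reward bandwidth, penalize expense and power draw
    return (w["bandwidth"] * net["bandwidth_mbps"]
            - w["expense"] * net["expense"]
            - w["power"] * net["power_w"])

def best_network(current, candidates, velocity_mps, w):
    """Return the network to use next; keep `current` unless a candidate
    is strictly better, which reduces unnecessary handoffs."""
    usable = [n for n in candidates
              if n["rss_dbm"] >= RSS_THRESHOLD_DBM
              and not (n["small_cell"] and velocity_mps > MAX_VELOCITY_MPS)]
    if not usable:
        return current
    best = max(usable, key=lambda n: static_score(n, w))
    return best if static_score(best, w) > static_score(current, w) else current
```

A fast-moving station thus skips a high-bandwidth WLAN cell it would leave almost immediately, which is one source of unnecessary handoffs.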
Abstract: In this paper, a novel method using the Bees Algorithm is proposed to determine the optimal allocation of FACTS devices for maximizing the Available Transfer Capability (ATC) of power transactions between source and sink areas in a deregulated power system. The algorithm simultaneously searches over FACTS locations, FACTS parameters and FACTS types. Two types of FACTS devices are simulated in this study, namely the Thyristor Controlled Series Compensator (TCSC) and the Static Var Compensator (SVC). A repeated power flow incorporating the FACTS devices is used to evaluate the feasible ATC value within real and reactive power generation limits, line thermal limits, voltage limits and FACTS operating limits. An IEEE 30-bus system is used to demonstrate the effectiveness of the algorithm as an optimization tool for enhancing ATC. A Genetic Algorithm technique is used for validation purposes. The results clearly indicate that the introduction of FACTS devices in the right combination of location and parameters can enhance ATC, and that the Bees Algorithm can be used efficiently for this kind of nonlinear integer optimization problem.
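To make the search scheme concrete, here is an illustrative Bees Algorithm sketch on a toy one-dimensional objective (not the power-system model above): scout bees sample the space, the best sites receive a local neighbourhood search, and the remaining bees scout randomly. Population sizes and the neighbourhood radius are illustrative assumptions.

```python
import random

# Toy Bees Algorithm sketch (maximization of a 1-D objective).
def bees_algorithm(objective, lo, hi, n_scouts=20, n_best=5,
                   n_recruits=10, ngh=0.5, iters=60, seed=None):
    rng = random.Random(seed)
    sites = [rng.uniform(lo, hi) for _ in range(n_scouts)]
    for _ in range(iters):
        sites.sort(key=objective, reverse=True)
        new_sites = []
        for x in sites[:n_best]:
            # neighbourhood search: recruit bees around each elite site,
            # keeping the site itself so quality never degrades
            recruits = [min(hi, max(lo, x + rng.uniform(-ngh, ngh)))
                        for _ in range(n_recruits)]
            new_sites.append(max(recruits + [x], key=objective))
        # remaining bees scout randomly (global exploration)
        new_sites += [rng.uniform(lo, hi) for _ in range(n_scouts - n_best)]
        sites = new_sites
    return max(sites, key=objective)
```

In the paper's setting, a "site" would instead encode FACTS type, location and parameters, with the repeated power flow as the objective.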
Abstract: This paper presents a novel approach to assessing textile porosity through the application of image analysis techniques. Images of different types of sample fabrics, taken through a microscope with the fabric placed over a constant light source, transfer the problem into the image analysis domain. Porosity can thus be expressed in terms of a brightness percentage index calculated on the digital microscope image. Furthermore, it is meaningful to compare the brightness percentage index with the air permeability and tightness indices of each fabric type. We have experimentally shown that there exists an approximately linear relation between the brightness percentage and air permeability indices.
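One plausible reading of the brightness percentage index, sketched here as an assumption: the share of pixels in the backlit microscope image bright enough to count as pore area (the 8-bit threshold of 128 is illustrative).

```python
# Brightness percentage index sketch: fraction of bright pixels in a
# grayscale image of a backlit fabric, expressed as a percentage.
def brightness_percentage(image, threshold=128):
    """image: 2-D list of grayscale pixel values (0-255)."""
    total = sum(len(row) for row in image)
    bright = sum(1 for row in image for px in row if px >= threshold)
    return 100.0 * bright / total
```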
Abstract: The availability of raw materials is important for Indonesia as a furniture-exporting country. Teak logs are supplied as raw material to the furniture industry by Perum Perhutani (PP). PP needs to be involved in carbon trading for nature conservation, and it also has obligations under its Corporate Social Responsibility program. Both PP and the furniture industry must also comply with regulations related to ecological issues and labor rights. The objective of this study is to create a relationship model between supplier and manufacturer for fulfilling teak log demand that involves teak forest carbon sequestration. The model is formulated as a Goal Programming problem to obtain a favourable solution for teak log procurement that supports carbon sequestration while considering the economic, ecological and social aspects of both supplier and manufacturer. The results show that the proposed model can be used to determine the teak log quantity, involving carbon trading, that achieves the seven goals required to satisfy the sustainability considerations.
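A toy goal-programming sketch of the idea: choose the teak log quantity that minimizes the weighted sum of deviations from each goal. The goals, targets and weights below are invented for illustration; the paper's model has seven goals spanning economic, ecological and social aspects.

```python
# Weighted goal programming over a discrete decision (teak log quantity q):
# each goal is a (target, attainment_function) pair, and the chosen q
# minimizes the weighted absolute deviations from all targets at once.
def goal_program(goals, weights, quantities):
    def total_deviation(q):
        return sum(w * abs(f(q) - target)
                   for (target, f), w in zip(goals, weights))
    return min(quantities, key=total_deviation)
```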
Abstract: In this study, we used shape memory alloys (SMAs) as actuators to build a biomorphic robot that imitates the motion of an earthworm. The robot can be used to explore narrow spaces, which is why shape memory alloys were chosen as actuators. Because a straight SMA wire produces only a small deformation, spiral shape memory alloys were selected and installed on both the X axis and the Y axis (two shape memory alloys per axis) to enable the biomorphic robot to perform a reciprocating motion. With the mechanism we designed, the robot advances a certain distance in each duty cycle. In addition, two shape memory alloys were added to the robot head to control right and left turns. Pulses are sent through the I/O card from the controller, and the signals are then amplified by a driver to heat the shape memory alloys, making the SMAs contract and pull the mechanism forward.
Abstract: Since the facilities related to water, this vital element, are underground, it is difficult during natural disasters to obtain quick, accurate and reliable information about water utilities. Therefore, this study was carried out in the city of Boukan, in Western Azarbaijan, Iran, and it presents the operation and capabilities of a Geographical Information System (GIS) in urban water management at the time of natural disasters. First, a comprehensive database of the water utilities was established through data collection, entry, storage and management; then, by modeling the water utilities, their operational aspects related to water utility problems in urban regions were examined in practice.
Abstract: The most influential programming paradigm today is object-oriented (OO) programming, and it is widely used in education and industry. Recognizing the importance of equipping students with OO knowledge and skills, it is not surprising that most Computer Science degree programs offer OO-related courses. How do we assess whether students have acquired the right object-oriented skills after completing their OO courses? What are object-oriented skills? Currently, none of the available assessment techniques can answer these questions. Traditional forms of OO programming assessment provide a way of assigning numerical scores to determine letter grades, but this rarely reveals how students actually understand OO concepts. It therefore appears reasonable that a better way to define and assess OO skills is needed, by developing a criterion-referenced model. This is all the more critical in the context of Malaysia, where there is currently growing concern over the level of competency of Malaysian IT graduates in object-oriented programming. This paper discusses the approach used to develop the criterion-referenced assessment model, which can serve as a guideline when conducting OO programming assessment. The proposed model is derived using the Goal Question Metric methodology, which helps formulate the metrics of interest. The paper concludes with a few suggestions for further study.
Abstract: The solution of some practical problems reduces to the solution of integro-differential equations. For the numerical solution of such equations, quadrature methods, or their combination with multistep or one-step methods, are mostly used. The quadrature methods are mainly applied to the calculation of the integral appearing on the right-hand side of the integro-differential equation. As this integral is of Volterra type, when it is replaced by an integral sum, the upper limit of the sum depends on the current point at which the values of the integral are defined. We thus obtain an integral sum with a variable boundary, which is difficult to work with. Therefore, a multistep method with constant coefficients, which is free of this drawback and provides a way of finding its coefficients, is presented.
Abstract: Increased competition and rising design costs have made it important for firms to identify the right products and the right methods for manufacturing them. Firms should focus on customers and identify customer demands directly in order to design the right products. Several management methods and techniques currently available improve one or more functions or processes in an industry, but do not take the complete product life cycle into consideration. Target costing, on the other hand, is a method and philosophy that takes financial, manufacturing and customer aspects into consideration during the design phase and helps firms make product design decisions that increase the profit and value of the company. It uses various techniques to identify customer demands, to decrease manufacturing costs and, finally, to achieve strategic goals. Target costing thus forms an integral part of total product design and redesign based on strategic plans.
Abstract: Collaborative networked learning (hereafter CNL) was first proposed by Charles Findley in his work "Collaborative networked learning: online facilitation and software support" as part of instructional learning for the future knowledge worker. His premise was that, through electronic dialogue, learners and experts could interactively communicate within a contextual framework to resolve problems and/or to improve product or process knowledge. Collaborative learning has always been at the forefront of educational technology and pedagogical research, but not in the mainstream of operations management. As a result, there is a large disparity in the study of CNL, and little is known about the antecedents of network collaboration and information sharing among diverse employees in the manufacturing environment. This paper presents a model to bridge the gap between theory and practice. The objective is that manufacturing organizations will be able to accelerate organizational learning and sharing of information through various collaborative
Abstract: The number of documents being created increases at an ever-faster pace, with most of them covering already known topics and only few introducing new concepts. This fact has started a new era in the information retrieval discipline, with its own special requirements: digging into topics and concepts and finding out subtopics or relations between topics. Until now, IR research has been interested in retrieving documents about a general topic or in clustering documents under generic subjects. However, these conventional approaches cannot go deep into the content of documents, which makes it difficult for people to reach the right documents they are searching for. So we need new ways of mining document sets, where the critical point is to know much about the contents of the documents. As a solution, we propose to enhance LSI, one of the proven IR techniques, by enriching its vector space with n-gram forms of words. The positive results we have obtained are shown in two different application areas of the IR domain: querying a document database, and clustering the documents in the document database.
Abstract: Well-developed strategic marketing planning is the essential prerequisite for establishing the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in service provider and consumer markets such as telecommunications. The behavior of the market is described by two classes of agents, consumer and service provider agents, whose internal dynamics are fundamentally different. Consumers are rather free multi-state structures, quickly adjusting behavior and preferences in accordance with time and a changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies. Their business momentum is higher and their immediate reaction possibilities limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or whether to consider the need for future structural changes that would ensure retention of existing customers or acquisition of new ones.
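The asymmetry between quickly adapting consumers and slow-reacting providers can be caricatured in a few lines of agent-based simulation; the agents, rules and the 5% price cut below are invented for illustration and are not the paper's model:

```python
# Toy agent-based market sketch: consumer agents re-evaluate providers
# every step (fast internal dynamics); provider agents react only once
# per step and only slightly (higher business momentum).
def simulate(providers, n_consumers, steps=10):
    for _ in range(steps):
        for p in providers:
            p["customers"] = 0
        # consumers adapt quickly: all pick the cheapest offer this step
        min(providers, key=lambda p: p["price"])["customers"] = n_consumers
        # providers react slowly: the losing provider cuts its price by 5%
        loser = min(providers, key=lambda p: p["customers"])
        loser["price"] *= 0.95
    return providers
```

Even this caricature reproduces a qualitative effect: sustained consumer mobility forces both providers' prices down over time.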
Abstract: This paper solves the Nonlinear Schrödinger Equation using the Split-Step Fourier method to model an optical fiber. The model generates a complex wave of optical pulses and, from the results obtained, two graphs, Loss versus Wavelength and Dispersion versus Wavelength, are generated. Taking Chromatic Dispersion and Polarization Mode Dispersion losses into account, the generated graphs are compared with the graphs published by JDS Uniphase Corporation, which uses standard dispersion values for optical fibers. The generated graphs were found to closely match the JDS Uniphase Corporation plots, thus verifying that the proposed model is correct. MATLAB was used for the modeling.
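The split-step idea itself is compact: alternate the dispersion step in the Fourier domain with the nonlinear step in the time domain. A minimal sketch for the scalar NLSE in normalized units, with no loss term (parameters are illustrative, not the paper's fiber data):

```python
import numpy as np

# Split-Step Fourier sketch for the normalized NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dT^2 + i*gamma*|A|^2*A
def split_step(A0, dt, dz, n_steps, beta2, gamma):
    A = np.asarray(A0, dtype=complex)
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)        # angular frequencies
    disp = np.exp(0.5j * beta2 * w**2 * dz)             # linear operator per step
    for _ in range(n_steps):
        A = np.fft.ifft(np.fft.fft(A) * disp)           # dispersion step (Fourier domain)
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinear step (time domain)
    return A
```

With beta2 = -1, gamma = 1 and A0 = sech(T), the fundamental soliton propagates with its envelope unchanged, a standard sanity check for such solvers.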
Abstract: This paper presents a general trainable framework for fast and robust upright human face and non-human object detection and verification in static images. To enhance the performance of the detection process, the technique we develop is based on the combination of a fast neural network (FNN) and a classical neural network (CNN). In the FNN, a useful correlation between the input image and the weights of the hidden neurons is exploited to sustain a high level of detection accuracy. This enables the use of the Fourier transform, which significantly speeds up detection time. The CNN is then responsible for verifying the face region. A bootstrap algorithm is used to collect non-human objects, adding the false detections to the training process for human and non-human objects. Experimental results on test images with both simple and complex backgrounds demonstrate that the proposed method achieves a high detection rate and a low false positive rate in detecting both human faces and non-human objects.
Abstract: In this study, the effect of L-arginine was examined at the neuromuscular junction of the chick biventer cervicis muscle. L-Arginine at 500 μg/ml decreased the twitch response to electrical stimulation and produced a rightward shift of the dose–response curve for acetylcholine or carbachol. L-Arginine at 1000 μg/ml produced a strong rightward shift of the dose–response curve for acetylcholine or carbachol, with a reduction in efficacy. The inhibitory effect of L-arginine on the twitch response was blocked by caffeine (200 μg/ml). NO levels were also measured in chick biventer cervicis muscle homogenates, using a spectrophotometric method for the direct detection of NO, nitrite and nitrate. Total nitrite (nitrite + nitrate) was measured with a spectrophotometer at 540 nm after the conversion of nitrate to nitrite by copperized cadmium granules. NO levels were found to be significantly increased at L-arginine concentrations of 500 and 1000 μg/ml in comparison with the control group (p
Abstract: Evolvable hardware (EHW) is a developing field that applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area, and several different EAs have been introduced to tackle numerous problems such as scalability and evolvability. However, every time a specific EA is chosen for solving a particular task, all its components, such as population size, initialization, selection mechanism, mutation rate and genetic operators, must be selected in order to achieve the best results. Over the last three decades, the selection of the right parameters for the EA's components on various test problems has been investigated. In this paper, the behaviour of the mutation rate for designing logic circuits, which has not been studied before, is analyzed in depth. In an EHW system, mutation modifies the number of inputs of each logic gate, its functionality (for example from AND to NOR) and the connectivity between logic gates. The behaviour of the mutation has been analyzed with respect to the number of generations, genotype redundancy and number of logic gates of the evolved circuits. The experimental results describe the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits, and suggest the best mutation rate to use for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on mutation rates over the last 40 years is also summarized.
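The mutation targets named above (gate functionality and connectivity between gates) can be sketched on a linear gate-list genotype; the encoding, function set and rates here are illustrative assumptions, not the paper's exact representation:

```python
import random

# Gate-level mutation sketch: a genotype is a list of gates, each a
# (function, input_sources) pair; a source index j < n_inputs refers to a
# circuit input, otherwise to the output of an earlier gate (feed-forward).
FUNCS = ("AND", "OR", "NAND", "NOR", "XOR")

def mutate(genotype, rate, n_inputs, rng):
    mutated = []
    for gate_idx, (func, sources) in enumerate(genotype):
        if rng.random() < rate:                  # change gate functionality
            func = rng.choice(FUNCS)
        sources = tuple(                          # rewire connectivity
            rng.randrange(n_inputs + gate_idx) if rng.random() < rate else s
            for s in sources)
        mutated.append((func, sources))
    return mutated
```

Sweeping `rate` and measuring generations-to-solution on target truth tables is the kind of experiment the abstract describes.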
Abstract: Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination, observation angles and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS-III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from another image of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e. the subset of non-changing pixels, using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R² value and the mean square error (MSE) between each pair of analogous bands.
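A common form of such empirical normalization, sketched here under the assumption of a simple gain/offset (linear) model fitted by least squares on the non-changing (pseudo-invariant) pixels:

```python
# Linear relative radiometric normalization sketch: fit
#   DN_ref ≈ gain * DN_subject + offset
# on pseudo-invariant pixels, then apply the fit to the whole subject band.
def normalize_band(subject, reference, invariant_idx):
    xs = [subject[i] for i in invariant_idx]
    ys = [reference[i] for i in invariant_idx]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    gain = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    offset = my - gain * mx
    return [gain * v + offset for v in subject]
```

The R² and MSE mentioned above would then be computed between the normalized band and the reference band over the invariant pixels.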
Abstract: Supplier selection in real situations is affected by several qualitative and quantitative factors and is one of the most important activities of a purchasing department. Since, when evaluating suppliers against the criteria or factors, decision makers (DMs) do not have precise, exact and complete information, supplier selection becomes more difficult. In this case, Grey theory helps us deal with this problem of uncertainty. Here, we apply the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to evaluate and select the best supplier using interval fuzzy numbers. In this article, we compare TOPSIS with some other approaches and then demonstrate that the TOPSIS concept is very useful for ranking and selecting the right supplier.
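For reference, the crisp (non-interval) TOPSIS core that Grey/interval variants extend looks like this; the criteria, weights and benefit/cost split in the example are illustrative:

```python
import math

# Crisp TOPSIS sketch: vector-normalize each criterion column, apply the
# weights, then rank alternatives by relative closeness to the ideal
# solution versus the anti-ideal one. Assumes alternatives are not all
# identical (so d_pos + d_neg > 0).
def topsis(matrix, weights, benefit):
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    v = [[w * x / nz for x, w, nz in zip(row, weights, norms)]
         for row in matrix]
    vcols = list(zip(*v))
    ideal = [max(c) if b else min(c) for c, b in zip(vcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(vcols, benefit)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

The interval/Grey extension replaces each crisp rating with an interval and the Euclidean distances with interval distances, but the ranking logic is the same.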
Abstract: One of the major causes of eye strain and other problems while watching television is the relative illumination between the screen and its surroundings. This can be overcome by adjusting the brightness of the screen with respect to the surrounding light. A controller based on fuzzy logic is proposed in this paper. The fuzzy controller takes the intensity of the light surrounding the screen and the present brightness of the screen as inputs. Its output is the grid voltage corresponding to the required brightness; this voltage is applied to the CRT, and the brightness is controlled dynamically. For the given test-system data, different defuzzification methods have been implemented and the results compared. To validate the effectiveness of the proposed approach, a fuzzy controller was designed using test data obtained from a real-time system. The simulations are performed in MATLAB and are verified against standard system data. The proposed approach can be implemented for real-time applications.
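A minimal Mamdani-style sketch of such a controller; the membership functions, rule base and output voltage scale are all illustrative assumptions, with centroid defuzzification over singleton outputs:

```python
# Fuzzy screen-brightness sketch: inputs are ambient light and current
# screen brightness on a 0-100 scale; output is an (assumed) grid voltage.
def low(x):   # membership in "low": full below 20, zero above 60
    return max(0.0, min(1.0, (60.0 - x) / 40.0))

def high(x):  # membership in "high": zero below 40, full above 80
    return max(0.0, min(1.0, (x - 40.0) / 40.0))

def grid_voltage(ambient, screen):
    # rule base (illustrative): dark room & bright screen -> dim the screen;
    # bright room & dim screen -> brighten it; matched conditions -> hold.
    r_dim = min(low(ambient), high(screen))
    r_brighten = min(high(ambient), low(screen))
    r_hold = max(min(low(ambient), low(screen)),
                 min(high(ambient), high(screen)))
    # centroid defuzzification over singleton output voltages
    outs = {20.0: r_dim, 50.0: r_hold, 80.0: r_brighten}
    total = sum(outs.values())
    return 50.0 if total == 0.0 else sum(v * m for v, m in outs.items()) / total
```

Swapping the centroid step for other defuzzifiers (mean of maxima, bisector) is how the different methods mentioned above would be compared.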