Abstract: A method is presented for the construction of arbitrary
even-input sorting networks exhibiting better properties than the
networks created using a conventional technique of the same type.
The method was discovered by means of a genetic algorithm combined
with an application-specific development scheme. As with human
inventions in the area of theoretical computer science, the evolved
invention was analyzed: its generality was proven, and its area and
time complexities were determined.
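The evolved networks themselves are not reproduced here, but the standard way to prove that a comparator network of this kind sorts every input is the zero-one principle: it suffices to test all binary sequences. A minimal sketch (the 4-input network below is a classic hand-designed example, not one of the evolved networks):

```python
from itertools import product

def apply_network(network, values):
    """Apply a sorting network, given as a list of (i, j) comparator
    pairs, to a sequence of values."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def is_sorting_network(network, n):
    """Zero-one principle: a comparator network sorts all n-element
    inputs iff it sorts every binary sequence of length n."""
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product([0, 1], repeat=n))

# A classic 5-comparator sorting network for 4 inputs.
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
```

Dropping any comparator from `net4` breaks it, which the same check detects.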
Abstract: Enterprise Wide Information Systems (EWIS)
implementation involves the entire business and will require changes
throughout the firm. Because of the scope, complexity and
continuous nature of ERP, the project-based approach to managing
the implementation process resulted in failure rates of between 60%
and 80%. In recent years ERP systems have received much attention.
The organizational relevance and risk of ERP projects make it
important for organizations to focus on ways to make ERP
implementation successful. Once these systems are in place,
however, their performance depends on the identified macro
variables, viz. 'Business Process', 'Decision Making' and
'Individual/Group Working'. A questionnaire was designed and
administered, and responses from 92 organizations were compiled.
The relationship of these variables with EWIS performance is
analyzed using inferential statistical measurements. The study helps
to understand the performance of the model presented and suggests
ways of keeping away from calamities, thereby giving the necessary
competitive edge. Whenever a discrepancy is identified during
performance appraisal, care has to be taken to draft the necessary
preventive measures. If all these measures are taken care of, EWIS
performance will deliver the expected results.
Abstract: The adhesion strength of the exterior or interior coating
of steel pipes is critically important. Increasing coating adhesion on
surfaces can extend the lifetime of the coating, raise the safety factor
of the transmission pipeline, and decrease the rate of corrosion and
the associated costs. The preparation of steel pipe surfaces before the
coating process is done mechanically, by shot and grit blasting. The
effective parameters of that process include the particle size of the
abrasives, distance to the surface, abrasive flow rate, the physical
properties and shapes of the abrasive, selection of abrasive, kind of
machine and its power, the standard of surface cleanness, roughness,
blasting time and ambient humidity. This research aimed to find
conditions that improve the surface preparation, adhesion strength
and corrosion resistance of the coating. Accordingly, this paper
studies the effect of varying the abrasive flow rate, changing the
abrasive particle size, the duration of surface blasting on steel surface
roughness, and over-blasting, using a centrifugal blasting machine.
After a number of steel samples were prepared (according to API 5L
X52) and coated with epoxy powder, the adhesion strength of the
coating was compared by means of the pull-off test. The results show
that increasing the abrasive particle size and flow rate can increase
the steel surface roughness and the coating adhesion strength,
whereas increasing the blasting time leads to over-blasting, which
raises the surface temperature and hardness and in turn decreases the
steel surface roughness and the coating adhesion strength.
Abstract: In this paper, several formulations of the problem of recovering the parameters of a dynamic object described by a non-autonomous system of ordinary differential equations with multipoint unshared boundary conditions are investigated. Depending on the number of additional conditions, the problem is reduced either to a system of algebraic equations or to a quadratic programming problem. For this purpose, the paper offers a new scheme of the boundary-condition transfer method, called condition shift. The method makes it possible to eliminate the differential links and the multipoint unshared initial-boundary conditions. The advantage of the proposed approach lies in reducing the parametric identification problem to the essentially simpler problems of solving an algebraic system or a quadratic program.
Abstract: Nurses in an Armed Force Hospital (AFH) are exposed to stronger stress than those in a civil hospital, especially in an emergency department (ED). Ironically, the stress of these nurses has received little if any attention in past academic research. This study collected 227 samples from the emergency departments of four armed force hospitals in central and southern Taiwan. The research indicates that the top five stressors are a mass-casualty event, delayed physician support, overload of routine work, overload of assignments, and annoying paperwork. Excessive workload was found to be the primary source of stress. Nurses who perceived greater stress levels were more inclined to deploy emotion-oriented approaches and more likely to seek job rotations. Professional stressors and problem-oriented approaches were positively correlated. Unlike other local studies, this study concludes that excessive workload is more stressful in an AFH.
Abstract: Effect of high temperature exposure on properties of cement mortar containing municipal solid waste incineration (MSWI) bottom ash as partial natural aggregate replacement is analyzed in the paper. The measurements of mechanical properties, bulk density, matrix density, total open porosity, sorption and desorption isotherms are done on samples exposed to the temperatures of 20°C to 1000°C. TGA analysis is performed as well. Finally, the studied samples are analyzed by IR spectroscopy in order to evaluate TGA data.
Abstract: This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called "Standard deviation of mean vectors of color distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image content as a feature vector for the retrieval of similar images. Several classes of features are used to specify queries: color, texture, shape and spatial layout. Color features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is also done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values form one feature vector per image. Secondly, we calculate the variance of each row and column of the R, G and B planes of an image, and the six standard deviations of these variance sequences form a second feature vector of dimension six. We applied our approach to a database of 300 BMP images and evaluated the capability of automatic indexing by image content, using color and texture as features and Euclidean distance as the similarity measure.
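The first descriptor, the standard deviation of the row-mean and column-mean vectors per color plane, can be sketched as follows. This is an illustrative reconstruction from the abstract (function names and the exact ordering of the six values are assumptions), not the authors' code:

```python
import numpy as np

def std_of_mean_vectors(image):
    """6-D feature: for each of the R, G, B planes, the standard
    deviation of the row-mean vector and of the column-mean vector."""
    feats = []
    for c in range(3):                      # R, G, B planes
        plane = image[:, :, c].astype(float)
        row_means = plane.mean(axis=1)      # one mean per row
        col_means = plane.mean(axis=0)      # one mean per column
        feats.append(row_means.std())
        feats.append(col_means.std())
    return np.array(feats)

def euclidean(f1, f2):
    """Similarity measure between two feature vectors."""
    return float(np.linalg.norm(f1 - f2))
```

Retrieval then amounts to ranking database images by `euclidean` distance from the query's feature vector.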
Abstract: A method for solving linear and non-linear Goursat
problems is given by using the two-dimensional differential transform
method. The approximate solution of this problem is calculated in
the form of a series with easily computable terms and also the exact
solutions can be achieved by the known forms of the series solutions.
The method can easily be applied to many linear and non-linear
problems and is capable of reducing the size of computational work.
Several examples are given to demonstrate the reliability and the
performance of the presented method.
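For reference, the two-dimensional differential transform on which such series methods are based is usually defined as follows (this is the standard textbook definition, not a formula specific to this paper):

```latex
% 2D differential transform of w(x,y) about (0,0), and its inverse
W(k,h) = \frac{1}{k!\,h!}
\left[\frac{\partial^{\,k+h} w(x,y)}{\partial x^{k}\,\partial y^{h}}\right]_{(0,0)},
\qquad
w(x,y) = \sum_{k=0}^{\infty}\sum_{h=0}^{\infty} W(k,h)\,x^{k} y^{h}.
```

The series solution with easily computable terms referred to in the abstract is a finite truncation of this double sum.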
Abstract: Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization, etc.), and each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that user behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory end, which one for a decisional end, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing a data mining system. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper, an agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data, and the principal results are presented.
Abstract: Client-server systems using mobile communications
networks for data transmission have become very attractive to many
economic agents for the purpose of promoting and offering electronic
services to their clients. E-services are suitable for business
development and for increasing financial benefits. Products or
services can be efficiently delivered to a large number of clients
using mobile Internet access technologies. Clients can access
e-services anywhere and anytime, supported by the channel
bandwidth, data services and protocols of 3G, GPRS, WLAN, etc.
Based on the evolution and development of mobile communications
networks, a convergence of the technological and financial interests
of mobile operators, software developers, mobile terminal producers
and e-content providers is established. This will lead to a high level
of integration of IT&C resources and will facilitate the delivery of
value-added services through mobile communications networks. This
paper presents a client-server system for e-service access, with
mobile software applications for Smartphones and PDAs installed
on the Symbian and Windows Mobile operating systems.
Abstract: As chip density grows, designers are trying to put as
many computational and storage facilities as possible on a single
chip. Along with the complexity of computational and storage
circuits, designing, testing and debugging become more and more
complex and expensive. Hardware design is therefore built using a
very-high-speed hardware description language, which is more
efficient and cost-effective. This paper focuses on the
implementation of a 32-bit ALU design based on the Verilog
hardware description language. The adder and subtractor operate
correctly on both unsigned and positive numbers. In an ALU,
addition takes most of the time if a ripple-carry adder is used. The
general strategy for designing fast adders is to reduce the time
required to form carry signals. Adders that use this principle are
called carry-lookahead adders. The carry-lookahead adder is
designed as a combination of 4-bit adders. The syntax of Verilog
HDL is similar to the C programming language. This paper proposes
a unified approach to ALU design in which both simulation and
formal verification can co-exist.
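The paper's design is in Verilog, but the carry-lookahead idea (generate and propagate terms that let every carry be computed directly from the inputs rather than rippling stage by stage) can be sketched behaviourally. The Python model below is an illustration of the principle, not the authors' HDL code:

```python
def cla_4bit(a, b, carry_in=0):
    """4-bit carry-lookahead adder sketch.
    g[i] = a[i] & b[i] (generate), p[i] = a[i] ^ b[i] (propagate);
    each carry c[i+1] is formed directly from g, p and carry_in
    via the unrolled lookahead equation, not from the previous sum."""
    g = [(a >> i & 1) & (b >> i & 1) for i in range(4)]
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(4)]
    c = [carry_in]
    for i in range(4):
        # c[i+1] = g[i] | p[i]g[i-1] | ... | p[i]...p[0]c[0]
        carry, term = g[i], p[i]
        for j in range(i - 1, -1, -1):
            carry |= term & g[j]
            term &= p[j]
        carry |= term & c[0]
        c.append(carry)
    s = sum((p[i] ^ c[i]) << i for i in range(4))
    return s, c[4]  # 4-bit sum and carry-out
```

A 32-bit ALU would chain eight such 4-bit blocks, exactly as the abstract's "combination of 4-bit adders" suggests.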
Abstract: In an open real-time system environment, the coexistence of different kinds of real-time and non-real-time applications makes the system scheduling mechanism face new requirements and challenges. A two-level scheduling scheme for open real-time systems is introduced, and it is pointed out that when hard and soft real-time applications are scheduled indistinguishably, as the same type of real-time application, the Quality of Service (QoS) cannot be guaranteed. This scheme has two flaws. First, it cannot differentiate the scheduling priorities of hard and soft real-time applications; that is, it neglects the characteristic differences between them, so it does not suit a more complex real-time environment. Second, the worst-case execution time of soft real-time applications cannot be predicted exactly, so it is not worthwhile to spend much effort ensuring that no soft real-time application misses its deadline, and doing so may waste resources. To solve this problem, a novel two-level real-time scheduling mechanism (including a scheduling profile and a scheduling algorithm) that adds a process for dealing with soft real-time applications is proposed. Finally, we verify the real-time scheduling mechanism both in theory and by experiment. The results indicate that our scheduling mechanism can achieve the following objectives. (1) It reflects the difference in priority when scheduling hard and soft real-time applications. (2) It ensures the schedulability of hard real-time applications, that is, their deadline-miss rate is 0. (3) The overall deadline-miss rate of soft real-time applications can be kept below 1. (4) No deadline is set for non-real-time applications, but the scheduling algorithm used by the server can avoid the "starvation" of jobs and increase QoS.
By doing so, our scheduling mechanism is more compatible with different types of applications and can be applied more widely.
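The priority differentiation of objective (1) can be illustrated with a minimal ordering sketch: hard real-time jobs always precede soft ones, and within each class the earliest deadline runs first. This illustrates the ordering only and is not the scheduling algorithm proposed in the paper:

```python
HARD, SOFT = 0, 1  # hard real-time jobs strictly precede soft ones

def pick_next(ready_jobs):
    """ready_jobs: list of (job_class, deadline, name) tuples.
    Python tuple comparison orders by class first, then by deadline
    (EDF within a class). Returns the job to run next, or None."""
    return min(ready_jobs) if ready_jobs else None
```

Here a hard job with a later deadline still preempts any soft job, which is exactly the distinction the abstract says the original scheme lacks.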
Abstract: The group mutual exclusion (GME) problem is an
interesting generalization of the mutual exclusion problem. Several
solutions of the GME problem have been proposed for message
passing distributed systems. However, none of these solutions is
suitable for real-time distributed systems. In this paper, we propose a
token-based distributed algorithm for the GME problem in soft real-
time distributed systems. The algorithm uses the concepts of a
priority queue, a dynamic request set and the process state. It uses a
first-come, first-served approach in selecting the next session type
among requests of the same priority level and satisfies the concurrent
occupancy property. The algorithm allows all n processors to be
inside their critical sections (CS) provided they request the same
session. A performance analysis and a correctness proof of the
algorithm are also included in the paper.
Abstract: Model Predictive Control has been previously applied
to supply chain problems with promising results; however, hitherto
proposed systems possessed no information on future demand. A
forecasting methodology will surely promote the efficiency of
control actions by providing insight on the future. A complete supply
chain management framework that is based on Model Predictive
Control (MPC) and Time Series Forecasting will be presented in this
paper. The proposed framework will be tested on industrial data in
order to assess the efficiency of the method and the impact of
forecast accuracy on overall control performance of the supply chain.
To this end, forecasting methodologies with different characteristics
will be implemented on test data to generate forecasts that will serve
as input to the Model Predictive Control module.
Abstract: In designing river intakes and diversion structures, it is paramount that the sediments entering the intake are minimized or, if possible, completely separated. Due to high water velocity, sediments can significantly damage hydraulic structures, especially when mechanical equipment like pumps and turbines is used. This subsequently results in wasted water, electricity and further costs. Therefore, it is prudent to investigate and analyze the performance of lateral intakes affected by sediment control structures. Laboratory experiments, despite their vast potential and benefits, can face certain limitations and challenges, including limitations in equipment and facilities, space constraints, equipment errors such as a lack of adequate precision or mal-operation, and, finally, human error. Research has shown that in order to achieve the ultimate goal of intake structure design, which is to design long-lasting and proficient structures, the best combination of sediment control structures (such as sills and submerged vanes), along with the parameters that increase their performance (such as diversion angle and location), should be determined. Cost, difficulty of execution and environmental impacts should also be included in evaluating the optimal design. This solution can then be applied to similar problems in the future. Subsequently, the model used to arrive at the optimal design requires a high level of accuracy and precision in order to avoid improper design and execution of projects. The process of creating and executing the design should be as comprehensive and applicable as possible. Therefore, it is important that the influential parameters and vital criteria are fully understood and applied at all stages of choosing the optimal design. In this article, the parameters influencing optimal performance of the intake, the advantages and disadvantages, and the efficiency of a given design are studied.
Then, a multi-criterion decision matrix is utilized to choose the optimal model that can be used to determine the proper parameters in constructing the intake.
Abstract: Today, social marketing has been established as a tool
of significant value with regard to promoting changes in behaviors,
attitudes and practices. With the objective of analyzing the benefits
that social marketing can bring to the organizations that use it, the
research was exploratory and descriptive. In the present study the
comparative method was used, through a qualitative approach, to
analyze the activities developed by three institutions: the Recovery
Center Rosa de Saron, the House of Recovery for Addicts Teen
Challenge, and the Children's Cancer Institute of the Wasteland
(ICIA), with the aim of pointing out the benefits of social marketing
in non-profit organizations.
Abstract: Photonic Crystal (PhC) based devices are being
increasingly used in multifunctional, compact devices in integrated
optical communication systems. They provide excellent
controllability of light while maintaining the small size required for
miniaturization. In this paper, the band gap properties of PhCs and
their typical applications in optical waveguiding are considered.
Novel PhC based applications such as nonlinear switching and
tapers are considered and simulation results are shown using the
accurate time-domain numerical method based on Finite Difference
Time Domain (FDTD) scheme. The suitability of these devices for
novel applications is discussed and evaluated.
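The FDTD scheme referred to above is, in its simplest one-dimensional form, a leapfrog update of the electric and magnetic fields on a staggered grid. Below is a minimal vacuum sketch in normalized units; the PhC simulations in the paper are, of course, multidimensional and include the material structure:

```python
import numpy as np

def fdtd_1d(steps=200, n=200):
    """Minimal 1-D FDTD (Yee) sketch in vacuum, normalized units
    (Courant number = 1): leapfrog update of Ez and Hy with a
    Gaussian soft source injected at the grid centre."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]    # update H from the curl of E
        ez[1:] += hy[1:] - hy[:-1]     # update E from the curl of H
        ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # source pulse
    return ez
```

Real PhC solvers add the spatially varying permittivity of the crystal lattice and absorbing boundary conditions, but the leapfrog core is the same.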
Abstract: This research investigates risk factors for defective products in autoparts factories. Under a Bayesian framework, a generalized linear mixed model (GLMM) in which the dependent variable, the number of defective products, has a Poisson distribution is adopted. Its performance is compared with the Poisson GLM under a Bayesian framework. The factors considered are production process, machines, and workers. The products coded RT50 are observed. The study found that the Poisson GLMM is more appropriate than the Poisson GLM. For the production Process factor, the highest risk of producing defective products is Process 1, for the Machine factor, the highest risk is Machine 5, and for the Worker factor, the highest risk is Worker 6.
Abstract: Today's economy is in permanent change, causing
mergers, acquisitions and cooperation between enterprises. As a
consequence, process adaptations and realignments result in systems
integration and software development projects. Processes and
procedures to execute such projects still rely on the craftsmanship
of highly skilled workers. A generally accepted, industrialized
production, characterized by high efficiency and quality, seems
inevitable.
In spite of this, current concepts of software industrialization are
aimed at traditional software engineering and do not consider the
characteristics of systems integration. The present work points out
these particularities and discusses the applicability of existing
industrial concepts in the systems integration domain. Consequently
it defines further areas of research necessary to bring the field of
systems integration closer to an industrialized production, allowing a
higher efficiency, quality and return on investment.
Abstract: Global competitiveness has recently become the
biggest concern of both manufacturing and service companies.
Electronic commerce, as a key technology enables the firms to reach
all the potential consumers from all over the world. In this study, we
have presented commonly used electronic payment systems and then
evaluated these systems with respect to different criteria. The
payment systems included in this research are the credit card, the
virtual credit card, electronic money, mobile payment, the credit
transfer and debit instruments. We have carried out a systematic
comparison of these systems with respect to three main criteria:
technical, economic and social. We have conducted a fuzzy
multi-criteria decision-making procedure to deal
with the multi-attribute nature of the problem. The subjectiveness
and imprecision of the evaluation process are modeled using
triangular fuzzy numbers.
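Triangular fuzzy numbers (l, m, u) as used in such procedures support simple component-wise arithmetic, and an aggregated fuzzy score can be defuzzified by its centroid. The sketch below is illustrative only (the aggregation rule and crisp weights are assumptions; the paper applies a full fuzzy multi-criteria method):

```python
def tfn_add(x, y):
    """Add two triangular fuzzy numbers (l, m, u) component-wise."""
    return tuple(a + b for a, b in zip(x, y))

def tfn_scale(x, k):
    """Scale a triangular fuzzy number by a non-negative crisp weight."""
    return tuple(k * a for a in x)

def centroid(x):
    """Defuzzify a triangular fuzzy number by its centroid."""
    l, m, u = x
    return (l + m + u) / 3.0

def weighted_score(ratings, weights):
    """Aggregate fuzzy ratings (l, m, u) under crisp criterion
    weights, then defuzzify to a crisp score for ranking."""
    total = (0.0, 0.0, 0.0)
    for r, w in zip(ratings, weights):
        total = tfn_add(total, tfn_scale(r, w))
    return centroid(total)
```

Ranking the payment systems then amounts to comparing their crisp `weighted_score` values across the technical, economic and social criteria.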