Abstract: CONWIP (constant work-in-process), a pull
production system, has been widely studied by researchers to date.
The CONWIP pull production system is an alternative to pure push
and pure pull production systems. It lowers and controls inventory
levels, which improves throughput, reduces production lead
time, and improves delivery reliability and work utilization. In this article a
CONWIP pull production system was simulated. Both push and pull
planning systems were simulated. To compare these systems via a
production planning system (PPS) game, the parameters of
each production planning system were adjusted. The main target was to reduce the
total WIP to a minimum while achieving the required throughput and delivery
reliability. Data was recorded and evaluated. A future state
was designed for a real production line of plastic components, with the setup of
the two indicators under the CONWIP pull production system, which can
greatly help the company to be more competitive in the market.
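The CONWIP release rule described above (release a new job only while total work-in-process is below a fixed cap) can be illustrated with a minimal discrete-time sketch. This is an illustrative toy model, not the paper's PPS game; the cap, horizon, and completion probability are assumptions.

```python
import random

def simulate_conwip(wip_cap, n_periods, seed=0):
    """Minimal discrete-time sketch of a single-station CONWIP line:
    a new job is released only when total WIP is below the cap, and
    the station finishes a job with fixed probability each period."""
    random.seed(seed)
    wip, finished = 0, 0
    for _ in range(n_periods):
        if wip < wip_cap:                      # CONWIP release rule
            wip += 1
        if wip > 0 and random.random() < 0.5:  # station completes a job
            wip -= 1
            finished += 1
    return wip, finished

wip, throughput = simulate_conwip(wip_cap=5, n_periods=1000)
```

The WIP cap is the single control parameter that distinguishes CONWIP from a pure push system, where releases would be unconditional.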
Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMI). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on an Elman recurrent neural network and a functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional back propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance it is observed that the BP algorithm has a higher average classification rate of 93.5%, while the PSO algorithm has better training time and maximum classification. The proposed method promises to provide a useful alternative general procedure for motor imagery classification.
Abstract: The purpose of this paper is to detect humans in images.
This paper proposes a method for extracting human body feature descriptors consisting of projected edge component series. The feature descriptor can express the appearance and shape of humans via the local
and global distribution of edges. Our method is evaluated with a linear SVM classifier on the Daimler-Chrysler pedestrian dataset and tested with
various sub-region sizes. The results show that the accuracy of the
proposed method is similar to the Histogram of Oriented Gradients (HOG)
feature descriptor, while the feature extraction process is simpler and faster than existing methods.
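The idea of projected edge component series can be sketched as row and column projections of a binary edge map. This is an illustrative toy version, not the authors' exact descriptor (their sub-region handling and normalization are not specified in the abstract).

```python
def edge_projections(edges):
    """Project a binary edge map onto its rows and columns; the
    concatenated projection series is a simple descriptor of the
    local and global distribution of edges in the region."""
    rows = [sum(r) for r in edges]            # edge count per row
    cols = [sum(c) for c in zip(*edges)]      # edge count per column
    return rows + cols

# Toy 3x3 edge map: 1 marks an edge pixel.
edge_map = [[1, 0, 0],
            [1, 1, 0],
            [0, 0, 1]]
descriptor = edge_projections(edge_map)
```

Such projection features are cheap to compute, which is consistent with the abstract's claim of a simpler, faster extraction process than gradient-histogram methods.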
Abstract: The main aim of Supply Chain Management (SCM) is
to produce, distribute, and deliver goods and equipment to the
right location, at the right time, and in the right amount to satisfy customers, with
minimum waste of time and cost. Implementing techniques that
reduce project time and cost and improve productivity and
performance is therefore very important. Emerging technologies such as
Radio Frequency Identification (RFID) are now making it possible to
automate supply chains in real time and to make them more
efficient than the simple supply chains of the past at tracing and
monitoring goods and products and capturing data on the movements of
goods and other events. This paper considers the concepts, components
and characteristics of RFID technology, concentrating on warehouse
and inventory management. Additionally, the use of RFID to
improve information management in the supply chain is
discussed. Finally, the facts of installing this technology and its
results for warehouse and inventory management and
business development are presented.
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a laureate or a country. This specific case of data mining has been named citation mining, and it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms to perform the text mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of the similarity between them. Finally, we have included the recently proposed h-index to characterize scientific production. We have used this web implementation to identify users, applications and the impact of the Mexican scientific institutions located in the State of Morelos.
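Entropy-like distances based on text compression are commonly realized as the normalized compression distance (NCD). The abstract does not give the exact formula used, so the sketch below shows the standard NCD with zlib as an illustrative assumption.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: an entropy-like measure of
    similarity between two texts based on how much better they
    compress together than separately (smaller = more similar)."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

t1 = b"the quick brown fox jumps over the lazy dog " * 20
t2 = b"the quick brown fox jumps over the lazy dog " * 20
t3 = b"lorem ipsum dolor sit amet consectetur adipiscing " * 20
```

Identical or near-identical texts compress together almost as well as alone, driving the distance toward zero, while unrelated texts do not.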
Abstract: This paper presents the design and implementation of
the WebGD, a CORBA-based document classification and retrieval
system on the Internet. The WebGD makes use of techniques such as the Web,
CORBA, Java, NLP, fuzzy techniques, knowledge-based processing
and database technology. Unified classification and retrieval model,
classifying and retrieving with one reasoning engine and flexible
working mode configuration are some of its main features. The
architecture of WebGD, the unified classification and retrieval model,
the components of the WebGD server and the fuzzy inference engine
are discussed in this paper in detail.
Abstract: The aim of this research is to design a collaborative
framework that integrates risk analysis activities into the geospatial
database design (GDD) process. Risk analysis is rarely undertaken
iteratively as part of the present GDD methods in conformance to
requirement engineering (RE) guidelines and risk standards.
Accordingly, when risk analysis is performed during the GDD, some
foreseeable risks may be overlooked and not reach the output
specifications especially when user intentions are not systematically
collected. This may lead to ill-defined requirements and ultimately to
higher risks of geospatial data misuse. The adopted approach consists
of 1) reviewing risk analysis process within the scope of RE and
GDD, 2) analyzing the challenges of risk analysis within the context
of GDD, and 3) presenting the components of a risk-based
collaborative framework that improves the collection of the
intended/forbidden usages of the data and helps geo-IT experts to
discover implicit requirements and risks.
Abstract: Construction projects generally take place in
uncontrolled and dynamic environments where construction waste is
a serious environmental problem in many large cities. The total
amount of waste and carbon dioxide emissions from transportation
vehicles are still out of control due to increasing construction
projects, massive urban development projects and the lack of
effective tools for minimizing adverse environmental impacts in
construction. This research is about utilization of the integrated
applications of automated advanced tracking and data storage
technologies in the area of environmental management to monitor
and control adverse environmental impacts such as construction
waste and carbon dioxide emissions. Radio Frequency Identification
(RFID) integrated with the Global Positioning System (GPS) provides
an opportunity to uniquely identify materials, components, and
equipment and to locate and track them with minimal or no worker
input. The transmission of data to the central database will be carried
out with the help of Global System for Mobile Communications
(GSM).
Abstract: Face recognition is a technique to automatically
identify or verify individuals. It receives great attention in
identification, authentication, security and many more applications.
Diverse methods have been proposed for this purpose, and many
comparative studies have been performed. However, researchers have not
reached a unified conclusion. In this paper, we report an extensive
quantitative accuracy analysis of four of the most widely used face
recognition algorithms: Principal Component Analysis (PCA),
Independent Component Analysis (ICA), Linear Discriminant
Analysis (LDA) and Support Vector Machine (SVM) using AT&T,
Sheffield and Bangladeshi people face databases under diverse
situations such as illumination, alignment and pose variations.
Abstract: In this work a new method for low-complexity
image coding is presented that permits different settings and great
scalability in the generation of the final bit stream. The coder is
a continuous-tone still-image compression system that
combines lossy and lossless compression by making use of finite-arithmetic
reversible transforms: both the color-space transform and the
wavelet transform are reversible. The transformed coefficients
are coded by a scheme based on a
subdivision into smaller components (CFDS), similar to
bit-significance coding. The subcomponents so obtained are
reordered by a highly configurable alignment system,
depending on the application, which makes it possible to reconfigure
the elements of the image and to obtain different importance levels
from which the bit stream is generated. The subcomponents of
each importance level are coded using a variable-length entropy
coding system (VBLm) that permits the generation of an embedded
bit stream. This bit stream by itself encodes a
compressed still image. Moreover, applying a packing system to the
bit stream after the VBLm stage yields a final, highly
scalable bit stream consisting of a basic image level and one or several
improvement levels.
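The abstract does not specify which reversible transforms are used; a well-known instance of a finite-arithmetic reversible color-space transform is the RCT from JPEG 2000 lossless mode, sketched here purely for illustration.

```python
def rct_forward(r, g, b):
    """Reversible color transform (integer-only, exactly invertible),
    as defined for JPEG 2000 lossless coding: one example of the
    finite-arithmetic reversible transforms the abstract describes."""
    y = (r + 2 * g + b) >> 2   # luma-like component (floor division by 4)
    u = b - g                  # chroma difference
    v = r - g                  # chroma difference
    return y, u, v

def rct_inverse(y, u, v):
    """Exact integer inverse of rct_forward (no rounding loss)."""
    g = y - ((u + v) >> 2)
    b = u + g
    r = v + g
    return r, g, b
```

Because every step uses only integer adds, subtracts, and floor shifts, the round trip reconstructs the original pixel exactly, which is what makes lossless operation possible.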
Abstract: The purpose of this study is to analyze the Green IT industry in major developed countries and to suggest overall directions for the IT-Energy convergence industry. Recently, the IT industry has been identified as a source of problems such as environmental pollution, energy exhaustion, and high energy consumption. Green IT has therefore attracted attention as a solution to these problems. However, since this convergence area is at an early stage, there are only a few studies of the IT-Energy convergence industry. Accordingly, this study examines the major developed countries in terms of institutional arrangements, resources, markets and companies, based on Van de Ven's (1999) social system framework, which shows the relationships among the key components of an industrial infrastructure. Subsequently, the direction of future studies of convergence of the IT and Energy industries is proposed.
Abstract: Bridges are one of the main components of
transportation networks. They should remain functional before and after
an earthquake for emergency services. Therefore, we need to assess
the seismic performance of bridges under different seismic loadings.
The fragility curve is one of the popular tools in seismic evaluation.
Fragility curves are conditional probability statements, which give the
probability of a bridge reaching or exceeding a particular damage
level at a given intensity level. In this study, the seismic
performance of a two-span simply supported concrete bridge is
assessed. Due to the usual lack of empirical data, the analytical fragility
curve was developed from the results of dynamic analyses of the bridge
subjected to different time histories in a near-fault area.
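Analytical fragility curves of the kind described above are very commonly expressed as a lognormal cumulative distribution in the intensity measure. The sketch below shows that standard form for illustration; the median capacity and dispersion values are placeholders, not the paper's fitted parameters.

```python
import math

def fragility(im, median, beta):
    """Lognormal fragility curve: probability that the bridge reaches
    or exceeds a damage state at intensity measure `im` (e.g., PGA),
    given median capacity `median` and lognormal dispersion `beta`.
    The standard normal CDF is evaluated via math.erf."""
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example with placeholder parameters: median capacity 0.4 g, beta 0.5.
p_low = fragility(0.2, 0.4, 0.5)
p_med = fragility(0.4, 0.4, 0.5)
p_high = fragility(0.8, 0.4, 0.5)
```

By construction the curve passes through probability 0.5 at the median intensity and increases monotonically with the intensity measure.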
Abstract: Biomimicry has many potential benefits as many
technologies found in nature are superior to their man-made
counterparts. As technological device components approach the micro
and nanoscale, surface properties such as surface adhesion and friction
may need to be taken into account. Lowering surface adhesion by
manipulating chemistry alone might no longer be sufficient for such
components and thus physical manipulation may be required.
Adhesion reduction is only one of the many surface functions
displayed by micro/nano-structured cuticles of insects. Here, we
present a mini review of our understanding of insect cuticle structures
and the relationship between the structure dimensions and the
corresponding functional mechanisms. It may be possible to introduce
additional properties to material surfaces (indeed multi-functional
properties) based on the design of natural surfaces.
Abstract: This paper proposes a framework for product
development including hardware and software components. It
provides separation of hardware-dependent software, modifications of
current product development process, and integration of software
modules with existing product configuration models and assembly
product structures. In order to identify the dependent software, the
framework considers product configuration modules and engineering
changes of associated software and hardware components. In order to
support efficient integration of the two different development streams
for hardware and software, a modified product development process is
proposed. The process integrates the dependent software development
into product development through the interchanges of specific product
information. By using existing product data models in Product Data
Management (PDM), the framework represents software as modules
for product configurations and software parts for product structure.
The framework is applied to development of a robot system in order to
show its effectiveness.
Abstract: This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show the proposed method to be effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.
Abstract: Human activity is a major concern in a wide variety of
applications, such as video surveillance, human computer interface
and face image database management. Detecting and recognizing
faces is a crucial step in these applications. Furthermore, major
advancements and initiatives in security applications in the past years
have propelled face recognition technology into the spotlight. The
performance of existing face recognition systems declines significantly
if the resolution of the face image falls below a certain level.
This is especially critical in surveillance imagery where often, due to
many reasons, only low-resolution video of faces is available. If these
low-resolution images are passed to a face recognition system, the
performance is usually unacceptable. Hence, resolution plays a key
role in face recognition systems. In this paper we introduce a new
low-resolution face recognition system based on a mixture of expert
neural networks. In order to produce the low-resolution input images,
we down-sampled the 48 × 48 ORL images to 12 × 12 using
the nearest-neighbor interpolation method; applying
the bicubic interpolation method afterwards yields enhanced images, which are
given to the Principal Component Analysis feature extractor.
Comparison with some of the most closely related methods indicates that
the proposed model yields an excellent recognition rate in low-resolution
face recognition: 100% for
the training set and 96.5% for the test set.
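The nearest-neighbor down-sampling step above (48 × 48 to 12 × 12, an integer factor of 4) amounts to strided subsampling. The sketch below shows this in pure Python for illustration; the subsequent bicubic enhancement would use an interpolation library and is not reproduced here.

```python
def nn_downsample(img, factor):
    """Nearest-neighbor down-sampling by an integer factor via
    strided slicing: keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in img[::factor]]

# Toy 48x48 "image" as a list of lists of pixel values.
face = [[(r * 48 + c) % 256 for c in range(48)] for r in range(48)]
low = nn_downsample(face, 4)   # 12x12 low-resolution version
```

Nearest-neighbor subsampling discards detail by construction, which is exactly why the abstract follows it with a bicubic enhancement step before feature extraction.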
Abstract: Efficient preprocessing is essential for the automatic
recognition of handwritten documents. In this paper, techniques for
segmenting words in handwritten Arabic text are presented. Firstly,
connected components (ccs) are extracted, and the distances among
different components are analyzed. The statistical distribution of these
distances is then obtained to determine an optimal threshold for word
segmentation. Meanwhile, an improved projection-based method is
also employed for baseline detection. The proposed method has been
successfully tested on the IFN/ENIT database, consisting of 26459
Arabic words handwritten by 411 different writers, and the results
were promising and very encouraging, with more accurate detection of
the baseline and segmentation of words for further recognition.
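The abstract derives its optimal threshold from the statistical distribution of inter-component distances. One common way to pick such a threshold from a bimodal 1-D distribution (shown here as an illustration, not necessarily the authors' exact method) is Otsu's between-class-variance criterion applied directly to the gap values.

```python
def otsu_threshold(values):
    """Pick a threshold splitting 1-D values into two clusters
    (e.g., intra-word vs. inter-word gaps) by maximizing the
    between-class variance, i.e., Otsu's method on raw distances."""
    vs = sorted(values)
    n = len(vs)
    total = sum(vs)
    best_t, best_var = vs[0], -1.0
    csum, count = 0.0, 0
    for i in range(n - 1):
        csum += vs[i]
        count += 1
        if vs[i] == vs[i + 1]:
            continue  # only consider splits between distinct values
        w0, w1 = count / n, (n - count) / n
        m0 = csum / count
        m1 = (total - csum) / (n - count)
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var = var
            best_t = (vs[i] + vs[i + 1]) / 2
    return best_t

# Toy gap data: small intra-word gaps mixed with large inter-word gaps.
gaps = [1, 2, 2, 3, 2, 1, 9, 10, 11, 2, 1, 12]
t = otsu_threshold(gaps)
```

Gaps larger than the returned threshold would then be treated as word boundaries, and smaller ones as spacing within a word.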
Abstract: Wastes such as grated coconut meat, spent tea and used sugarcane have had negative impacts on the environment. The vermicomposting method is used to manage these wastes in a more sustainable way. The worms used in the vermicomposting are Eisenia foetida and Eudrillus euginae. This research shows that the vermicompost of these wastes produces an electrical voltage and is able to light up a Light-Emitting Diode (LED) device. Based on the experiment, replicating and doubling the compartments of the component doubles the voltage. Hence, in conclusion, this harmless and low-cost vermicompost technology can act as a dry cell, reducing the usage of hazardous chemicals that can contaminate the environment.
Abstract: Light is one of the most important qualitative and
symbolic factors and has a special position in architecture and urban
development in regard to practical function. The main function of
light, either natural or artificial, is lighting up the environment and
the constructional forms, which is called lighting. However, light is
also used by architectural genius to redefine urban spaces with regard
to three factors: the aesthetic, the conceptual and the symbolic. In architecture
and urban development, light has a function beyond lighting up the
environment, and the designers consider it as one of the basic
components. The present research aims at studying the function of
light and color in architectural view and their effects in buildings.
Abstract: The computer, among the most important inventions of the twentieth century, has become an increasingly important component in our everyday lives. Computer games also have become increasingly popular among people day by day, owing to their features based on realistic virtual environments, audio and visual features, and the roles they offer players. The present study investigates the metaphors students have for computer games, in an effort to fill a gap in the literature. Students were asked to complete the sentence—'Computer game is like/similar to….because….'—to determine middle school students' metaphorical images of the concept 'computer game'. The metaphors created by the students were grouped into six categories, based on the source of the metaphor. Ordered by the number of metaphors they included, these categories were 'computer game as a means of entertainment', 'computer game as a beneficial means', 'computer game as a basic need', 'computer game as a source of evil', 'computer game as a means of withdrawal', and 'computer game as a source of addiction'.